Software Development Blogs: Programming, Software Testing, Agile, Project Management

Architecture

Automated deployment of Docker Universal Control Plane with Terraform and Ansible

Xebia Blog - Tue, 01/19/2016 - 20:06

You got into the Docker Universal Control Plane beta and you are ready to get going, and then you see a list of manual commands to set it up. If you would rather not do anything manually, this guide will help you set up DUCP in a few minutes using just a couple of variables. If you don't know what DUCP is, you can read the post I wrote earlier. The setup is based on one controller and a configurable number of replicas, which automatically join the controller to form a cluster. There are a few requirements we need to address to make this work, such as setting the external (public) IP while running the installer and passing the controller's certificate fingerprint to the replicas during setup. We will use Terraform to spin up the instances, and Ansible to provision them and let them connect to each other.

Prerequisites

Updated 2016-01-25: v0.7 has been released, and no Docker Hub account is needed anymore because the images have been moved to a public namespace. This is updated in the 'master' branch of the GitHub repository. If you still want to try v0.6, you can check out tag 'v0.6'!

Before you start cloning a repository and executing commands, let's go over the prerequisites. You will need:

  • Access to the DUCP beta (during installation you will need to log in with a Docker Hub account that has been added to the 'dockerorca' organization; tested with v0.5, v0.6 and v0.7. Read the update notice above for more information.)
  • An active Amazon Web Services and/or Google Cloud Platform account to create resources
  • Terraform (tested with v0.6.8 and v0.6.9)
  • Ansible (tested with 1.9.4 and 2.0.0.1)

Step 1: Clone the repository

CiscoCloud's terraform.py is used as an Ansible dynamic inventory script to find our Terraform-provisioned instances, so --recursive is needed to also fetch the Git submodule.

git clone --recursive https://github.com/nautsio/ducp-terraform-ansible
cd ducp-terraform-ansible

Step 2.1: AWS-specific instructions

These are the AWS-specific instructions; if you want to use Google Cloud Platform, skip to step 2.2.

For the AWS-based setup, you will create an aws_security_group with rules in place for HTTPS (443) and SSH (22). With Terraform you can easily specify what is needed by adding ingress and egress blocks to the security group. By specifying 'self = true', the rule is applied to all resources inside the security group that is about to be created. In the single aws_instance for the ducp_controller we use the lookup function to pick the right AMI from the list specified in vars.tf. Inside each aws_instance we can reference the created security group with "${aws_security_group.ducp.name}", which is easy and keeps the file generic. To configure the number of instances for ducp-replica we use the count parameter. To identify each instance to our Ansible setup, we assign a name through the tags parameter. Because we use the count parameter, we can generate a name from a predefined string (ducp-replica) plus the index of the count. You can achieve this with the concat function, like so: "${concat("ducp-replica",count.index)}". The sshUser parameter in the tags block is used by Ansible to connect to the instances. The AMIs are configured in vars.tf, and by specifying a region the correct AMI is selected from the list.

variable "amis" {
    default = {
        ap-northeast-1 = "ami-46c4f128"
        ap-southeast-1 = "ami-e378bb80"
        ap-southeast-2 = "ami-67b8e304"
        eu-central-1   = "ami-46afb32a"
        eu-west-1      = "ami-2213b151"
        sa-east-1      = "ami-e0be398c"
        us-east-1      = "ami-4a085f20"
        us-west-1      = "ami-fdf09b9d"
        us-west-2      = "ami-244d5345"
    }
}

The list of AMIs

    ami = "${lookup(var.amis, var.region)}"

The lookup function
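
To give an idea of how these pieces fit together, here is a trimmed-down sketch of the security group and a replica instance. The instance_type, the sshUser value and the exact rule set are illustrative assumptions; the authoritative definitions live in the aws directory of the repository.

resource "aws_security_group" "ducp" {
    name = "ducp"

    # HTTPS and SSH from the outside world
    ingress {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }
    ingress {
        from_port   = 22
        to_port     = 22
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    # self = true: allow all traffic between members of this security group
    ingress {
        from_port = 0
        to_port   = 0
        protocol  = "-1"
        self      = true
    }
    egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

resource "aws_instance" "ducp_replica" {
    count           = "${var.replica_count}"
    ami             = "${lookup(var.amis, var.region)}"
    instance_type   = "t2.small"                           # illustrative
    key_name        = "${var.key_name}"
    security_groups = ["${aws_security_group.ducp.name}"]

    tags {
        # concat() generates ducp-replica0, ducp-replica1, ...
        Name    = "${concat("ducp-replica", count.index)}"
        sshUser = "ubuntu"                                 # user Ansible connects as (illustrative)
    }
}

A sketch of the security group and a replica instance (illustrative); the controller looks the same, minus the count and with a fixed Name tag.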

Let's configure the variables so you can use them to create the instances. Inside the repository you will find a terraform.tfvars.example file. You can copy or move this file to terraform.tfvars so that Terraform picks it up during a plan or apply.

cd aws
cp terraform.tfvars.example terraform.tfvars

Open terraform.tfvars with your favorite text editor so you can set up all variables to get the instances up and running.

  • region can be selected from available regions
  • access_key and secret_key can be obtained through the console
  • key_name is the name of the key pair to use for the instances
  • replica_count defines the number of replicas you want

The file could look like the following:

region = "eu-central-1"
access_key = "string_obtained_from_console"
secret_key = "string_obtained_from_console"
key_name = "my_secret_key"
replica_count = "2"

By executing terraform apply you can create the instances, so let's do that now. Your command should finish with:

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Step 2.2: Google Cloud Platform-specific instructions

In GCP it's a bit easier to set everything up. Because there are no images/AMIs per region, we can use a disk image with a static name. And because the google_compute_instance has a name variable, you can use the same count trick as on AWS, but this time without the metadata. By classifying the nodes with the tag https-server, port 443 is automatically opened in the firewall. Because you can specify the user that should be created for your chosen key, setting ssh_user is needed so Ansible can connect later on.
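
For reference, a sketch of what such a google_compute_instance could look like. The machine type, disk image name and SSH key path are illustrative assumptions; check the gce directory of the repository for the real definitions.

resource "google_compute_instance" "ducp_replica" {
    count        = "${var.replica_count}"
    name         = "${concat("ducp-replica", count.index)}"
    machine_type = "n1-standard-1"                # illustrative
    zone         = "${var.zone}"

    # the https-server tag opens port 443 in the firewall
    tags = ["https-server"]

    disk {
        image = "ubuntu-1404-trusty"              # illustrative image name
    }

    network_interface {
        network = "default"
        access_config {
            # empty block requests an ephemeral external IP
        }
    }

    metadata {
        # creates the user Ansible will connect as, using your public key (path is illustrative)
        sshKeys = "${var.ssh_user}:${file("~/.ssh/id_rsa.pub")}"
    }
}

A sketch of a GCP replica instance (illustrative)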

Let's set up our Google Cloud Platform variables.

cd gce
cp terraform.tfvars.example terraform.tfvars

Open terraform.tfvars with your favorite text editor so you can set up all variables to get the instances up and running.

The file could look like the following:

project = "my_gce_project"
credentials = "/path/to/file.json"
region = "europe-west1"
zone = "europe-west1-b"
ssh_user = "myawesomeusername"
replica_count = "2"

By executing terraform apply you can create the instances, so let's do that now. Your command should finish with:

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Step 3: Running Ansible

The instances should all be there; now let's install the controller and add the replicas. This setup uses terraform.py to retrieve the created instances (and their IP addresses) from the terraform.tfstate file. To make this work you need to specify the location of the tfstate file by setting TERRAFORM_STATE_ROOT to the current directory. Then you specify the script to look up the inventory (-i) and the site.yml in which the roles are assigned to the hosts.

There are two roles that will be applied to all hosts, called common and extip. Inside common everything is set up to get Docker running on the hosts: it configures the apt repo, installs the docker-engine package and finally installs the docker-py package, because Ansible needs this to use Docker. Inside extip there are two shell commands to look up the external IP addresses. If you want to access DUCP on the external IP, that IP must be present inside the certificate that DUCP generates. Because the external IP addresses cannot be found on the GCP instances themselves, and I wanted a generic approach where you would only need one command to provision both AWS and GCP, I chose to look them up and register the variable extip with whichever lookup succeeded on the instance. The second reason to use the external IP is that all the replicas need the external IP of the controller to register themselves. By passing the --url parameter to the join command, you specify which controller the replica should register with.

--url https://"{{ hostvars['ducp-controller']['extip'] }}"

The extip variable used by replica

The same goes for the certificate fingerprint: the replica must provide the fingerprint of the controller's certificate to register successfully. We can access that variable the same way: "{{ hostvars['ducp-controller']['ducp_controller_fingerprint'].stdout }}". It specifies .stdout to use only the stdout part of the registered command result, because the register also contains the exit code and other fields.
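
Putting the two together, the join task in the replica role could look roughly like the sketch below. The image name (docker/ucp) and the --fingerprint flag are assumptions based on later DUCP releases; the actual task in the repository's replica role is the authoritative version.

# Illustrative sketch only: image name and --fingerprint flag are assumptions
- name: Join replica to the DUCP controller
  shell: >
    docker run --rm
    -v /var/run/docker.sock:/var/run/docker.sock
    docker/ucp join
    --url https://"{{ hostvars['ducp-controller']['extip'] }}"
    --fingerprint "{{ hostvars['ducp-controller']['ducp_controller_fingerprint'].stdout }}"

A sketch of the replica join task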

To supply external variables, you can inject vars.yml through --extra-vars. Let's set up vars.yml by copying the example file to vars.yml and configuring it.

cd ../ansible
cp vars.yml.example vars.yml

As stated before, the installer will log in to Docker Hub and download images that live under the dockerorca organization. Your account needs to be added to this organization for this to succeed. Fill in your Docker Hub account details in vars.yml, and choose an admin user and admin password for your installation. If you use ssh-agent to store your SSH private keys, you can proceed with the ansible-playbook command; otherwise you can specify your private-key file by adding --private-key <priv_key_location> to the list of arguments.
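
For reference, a filled-in vars.yml could look something like the sketch below; the key names shown here are illustrative, so use the ones from vars.yml.example in the repository.

# Illustrative only: take the real key names from vars.yml.example
docker_hub_username: my-hub-user            # account that is a member of the 'dockerorca' organization
docker_hub_password: my-hub-password
ducp_admin_user: admin                      # admin account for the DUCP web UI
ducp_admin_password: choose-a-strong-password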

Let's run Ansible to set up DUCP. You need to change to the directory where the terraform.tfstate file resides, or change TERRAFORM_STATE_ROOT accordingly.

cd ../{gce,aws}
TERRAFORM_STATE_ROOT=. ansible-playbook -i ../terraform.py/terraform.py \
                       ../ansible/site.yml \
                       --extra-vars "@../ansible/vars.yml"

If all went well, you should see something like:

PLAY RECAP *********************************************************************
ducp-controller            : ok=13   changed=9    unreachable=0    failed=0   
ducp-replica0              : ok=12   changed=8    unreachable=0    failed=0   
ducp-replica1              : ok=12   changed=8    unreachable=0    failed=0

To check out our brand new DUCP installation, run the following command to extract the IP addresses from the created instances:

TERRAFORM_STATE_ROOT=. ../terraform.py/terraform.py --hostfile

Copy the IP address listed in front of ducp-controller, open it in a web browser as https://<ip>, and log in with your chosen username and password.

Figure: DUCP login screen

Let me emphasise that this is not a production-ready setup, but it can definitely help if you want to try out DUCP and maybe build a production-ready version from this setup. If you want support for other platforms, please file an issue on GitHub or submit a pull request. I'll be more than happy to look into it for you.

Sponsored Post: Netflix, Macmillan, Aerospike, TrueSight Pulse, LaunchDarkly, Robinhood, StatusPage.io, Redis Labs, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Manager - Site Reliability Engineering: Lead and grow the front door SRE team in charge of keeping Netflix up and running. You are an expert in operational best practices and can work with stakeholders to positively move the needle on availability. Find details on the position here: https://jobs.netflix.com/jobs/398

  • Macmillan Learning, a premier e-learning institute, is looking for VP of DevOps to manage the DevOps teams based in New York and Austin. This is a very exciting team as the company is committed to fully transitioning to the Cloud, using a DevOps approach, with focus on CI/CD, and using technologies like Chef/Puppet/Docker, etc. Please apply here.

  • DevOps Engineer at Robinhood. We are looking for an Operations Engineer to take responsibility for our development and production environments deployed across multiple AWS regions. Top candidates will have several years experience as a Systems Administrator, Ops Engineer, or SRE at a massive scale. Please apply here.

  • Senior Service Reliability Engineer (SRE): Drive improvements to help reduce both time-to-detect and time-to-resolve while concurrently improving availability through service team engagement.  Ability to analyze and triage production issues on a web-scale system a plus. Find details on the position here: https://jobs.netflix.com/jobs/434

  • Manager - Performance Engineering: Lead the world-class performance team in charge of both optimizing the Netflix cloud stack and developing the performance observability capabilities which 3rd party vendors fail to provide.  Expert on both systems and web-scale application stack performance optimization. Find details on the position here https://jobs.netflix.com/jobs/860482

  • Senior Devops Engineer - StatusPage.io is looking for a senior devops engineer to help us in making the internet more transparent around downtime. Your mission: help us create a fast, scalable infrastructure that can be deployed to quickly and reliably.

  • Software Engineer (DevOps). You are one of those rare engineers who loves to tinker with distributed systems at high scale. You know how to build these from scratch, and how to take a system that has reached a scalability limit and break through that barrier to new heights. You are a hands on doer, a code doctor, who loves to get something done the right way. You love designing clean APIs, data models, code structures and system architectures, but retain the humility to learn from others who see things differently. Apply to AppDynamics

  • Software Engineer (C++). You will be responsible for building everything from proof-of-concepts and usability prototypes to deployment- quality code. You should have at least 1+ years of experience developing C++ libraries and APIs, and be comfortable with daily code submissions, delivering projects in short time frames, multi-tasking, handling interrupts, and collaborating with team members. Apply to AppDynamics
Fun and Informative Events
  • Aerospike, the high-performance NoSQL database, hosts a 1-hour live webinar on January 28 at 1PM PST / 4 PM EST on the topic of "From Development to Deployment" with Docker and Aerospike. This session will cover what Docker is and why it's important to Developers, Admins and DevOps when using Aerospike; it features an interactive demo showcasing the core Docker components and explaining how Aerospike makes developing & deploying multi-container applications simpler. Please click here to register.

  • Your event could be here. How cool is that?
Cool Products and Services
  • Dev teams are using LaunchDarkly’s Feature Flags as a Service to get unprecedented control over feature launches. LaunchDarkly allows you to cleanly separate code deployment from rollout. We make it super easy to enable functionality for whoever you want, whenever you want. See how it works.

  • TrueSight Pulse is SaaS IT performance monitoring with one-second resolution, visualization and alerting. Monitor on-prem, cloud, VMs and containers with custom dashboards and alert on any metric. Start your free trial with no code or credit card.

  • Turn chaotic logs and metrics into actionable data. Scalyr is a tool your entire team will love. Get visibility into your production issues without juggling multiple tools and tabs. Loved and used by teams at Codecademy, ReturnPath, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

Agile Results for 2016

Agile Results is the personal productivity system for high performance.

Agile Results is a “whole person” approach to personal productivity. It combines proven practices for mind, body, and emotions. It helps you realize your potential the agile way.  Best of all, it helps you make the most of what you’ve got to achieve higher levels of performance with less time, less effort, and more impact.

Agile Results helps you achieve rapid results by focusing on outcomes over activities, spending more time in your strengths, focusing on high-value activities, and using your best energy for your best results.

If you want to use Agile Results, it’s simple. I’ll show you how to get started, right, here, right now. If you already know Agile Results, then this will simply be a refresher.

Write Three Things Down

The way to get started with Agile Results is simple. Write three things down that you want to achieve today. Just ask yourself, “What are your Three Wins that you want to achieve today?”

For me, today, I want to achieve the following:

  1. I want to get agreement on a shared model across a few of our teams.
  2. I want to create a prototype for business model innovation.
  3. I want to create a distilled view of CEO concerns for a Mobile-First, Cloud-First world.

In my mind, I might just remember: shared model, business model innovation, and CEO. I’ll be focused on the outcomes, which are effectively agreement on a model, innovation in business models for a Mobile-First, Cloud-First world, and a clear representation of top CEO pains, needs, and desired outcomes.

Even if I throw away what I write down, or lose it, the value is in the brief moment I spent to prioritize and visualize the results that I want to achieve. 

This little vision will stick with me as a guide throughout my day.

Think in Three Wins

Writing these three items down helps me focus. It helps me prioritize based on value. It also helps me create a simple vision for my day.

Plus, thinking in Three Wins adds the fun factor.

And, better yet, if somebody asks me tomorrow what my Three Wins were for yesterday, I should be able to tell a story that goes like this: I created rapport and a shared view with our partner teams, I created a working information model for business model innovation for a mobile-first cloud-first world, and I created a simplified view of the key priorities for CEOs in a Mobile-First, Cloud-First world.

When you can articulate the value you create, to yourself and others, it helps provide a sense of progress, and a story of impact.  Progress is actually one of the keys to workplace happiness, and even happiness in life.

In a very pragmatic way, by practicing your Three Wins, you are practicing how to identify and create value.  You are learning what is actually valued, by yourself and others, by the system that you are in.

And value is the ultimate short-cut.  Once you know what value is, you can shave off a lot of waste.

The big idea here is that it’s not your laundry list of To-Dos, activities, and reminders -- it’s your Three Wins or Three Outcomes or Three Results.

Use Your Best Energy for Your Best Results

Some people wonder why only Three Wins?  There is a lot of science behind the Rule of 3, but I find it better to look at how the Rule of 3 has stood the test of time.  The military uses it.  Marketing uses it.  You probably find yourself using it when you chunk things up into threes.

But don’t I have a bazillion things to do?

Yes. But can I do a bazillion things today? No. But what I can do is spend my best energy, on the best things, my best way.

That’s the best I can do.

But that’s actually a lot. When you focus on high-value outcomes and you really focus your time, attention, and energy on those high-value outcomes, you achieve a lot. And you learn a lot.

Will I get distracted? Sure. But I’ll use my Three Wins to get back on track.

Will I get randomized and will new things land on my plate? Of course, it’s the real-world. But I have Three Wins top of mind that I can prioritize against. I can see if I’m trading up for higher-value, higher-priorities, or if I’m simply getting randomized and focusing on lower-value distractions.

Will I still have a laundry list of To-Do items? I will. But, at the top of that list, I’ll have Three Wins that are my “tests for success” for the day, that I can keep going back to, and that will help me prioritize my list of actions, reminders, and To-Dos.

20-Minute Sprints

I’ll use 20-Minute Sprints to achieve most of my results. They will help me make meaningful progress on things, keep a fast pace, stay engaged with what I’m working on, and use my best energy.

Whether it’s an ultradian rhythm or just a natural breaking point, 20-Minute Sprints help with focus.

We aren’t very good at focusing if we need to focus “until we are done.” But we are a lot better at focusing if we have a finish line in sight. Plus, with what I’m learning about vision, I wonder if spending more than 20 minutes is where we start to fatigue our eye muscles, and don’t even know it.

Note that I primarily talk about 20-Minute Sprints as timeboxing, after all, that’s what it is, but I think it’s more helpful to use a specific number. I remember that 40-Hour Work Week was a good practice from Extreme Programming before it became Sustainable Pace. Once it became Sustainable Pace, then teams started doing the 70 or 80 hour work week, which is not only ineffective, it does more harm than good.

Net net – start with 20-Minute Sprints. If you find another timebox works better for you, then by all means use it, but there does seem to be something special about 20-Minute Sprints for pacing your way through work.

If you’re wondering, what if you can’t complete your task in a 20-Minute Sprint? You do another sprint.

All the 20-Minute Sprint does is give you a simple timebox to focus and prioritize your time, attention, and energy, as well as remind you to take brain breaks. And the 20-minute deadline also helps you sustain a faster pace (more like a “sprint” vs. a “jog” or “walk”).

Just Start

I could say so much more, but I’d rather you just start doing Agile Results.

Go ahead and take a moment to think about your Three Wins for today, and go ahead and write them down.

Teach a friend, family member, or colleague Agile Results.  Spread the word.

Help more people bring out their best, even in their toughest situations.

A little clarity creates a lot of courage, and that goes a long way when it comes to making a big impact.

You Might Also Like

10 Big Ideas from Getting Results the Agile Way

10 Personal Productivity Tools from Agile Results

What Life is Like with Agile Results

Categories: Architecture, Programming

Productivity Power Magazine


"Productivity is being able to do things that you were never able to do before." -- Franz Kafka

One of my experiments over the weekend was to do a fast roundup of my productivity articles.

Here it is -- Productivity Power Magazine:

Productivity Power Magazine

I wanted to create a profound knowledge base of principles, patterns, and practices for productivity.  I also wanted to make it fast, really fast, to be able to go through all of my productivity articles that I’ve created for Sources of Insight and on MSDN. 

I also wanted it to be more visual. I wanted thumbnails of each article, so that I could flip through very quickly.

After looking at a few options, I tried Flipboard.  It’s a simple way to create personal magazines, and world class publications like The New York Times, PEOPLE Magazine, Fast Company and Vanity Fair use Flipboard.

Productivity Power Magazine (A Flipboard Experiment)

Here is my first Flipboard experiment to create Productivity Power Magazine:

Productivity Power Magazine

I think you’ll find Productivity Power Magazine a very fast way to go through all of my productivity articles.  You get to see everything at a glance, scroll through a visual list, and then dive into the ones you want to read.  If you care about productivity, this might be your productivity paradise.

Note that I take a “whole person” approach to productivity, with a focus on well-being.  I draw from positive psychology, sports psychology, project management practices, and a wide variety of sources to help you achieve high-performance.  Ultimately, it’s a patterns and practices approach to productivity to help you think, feel, and do your best, while enjoying the journey.

Some Challenges with Productivity Power Magazine

Flipboard is a fast way to roundup and share articles for a theme.

I do like Flipboard.  I did run into some issues, though, while creating my Productivity Power Magazine: 1) I wasn’t able to figure out how to create a simpler URL for the landing page, 2) I wasn’t able to swap out images if I didn’t like what was in the original article, 3) I couldn’t add an image if the article was missing one, 4) I couldn’t easily re-sequence the flow of articles in the magazine, and 5) I can’t get my editorial comments to appear.  It seems like all of my write-ups are in the tool, but they don’t show on the page.

That said, I don’t know a faster, simpler, better way to create a catalog of all of my productivity articles at a glance.  What’s nice is that I can go across multiple sources, so it’s a powerful way to round up articles and package them for a specific theme, such as productivity in this case.

I can also see how I can use Flipboard for doing research on the Web, alone or with a team of people, since you can invite people to contribute to your Flipboard.   You can also make Flipboards private, so you can choose which ones you share.

Take Productivity Power Magazine for a spin and let me know how it goes.

Categories: Architecture, Programming

Use Google For Throughput, Amazon And Azure For Low Latency

Which cloud should you use? It may depend on what you need to do with it. What Zach Bjornson needs to do is process large amounts of scientific data as fast as possible, which means reading data into memory as fast as possible. So he ran benchmarks using Google's new multi-cloud PerfKitBenchmarker to figure out which cloud was best for the job.

The results are in a very detailed article: AWS S3 vs Google Cloud vs Azure: Cloud Storage Performance. Feel free to datamine the results for more insights, but overall his conclusions are:

Categories: Architecture

DevNexus 2016 in Atlanta, GA

Coding the Architecture - Simon Brown - Mon, 01/18/2016 - 13:33

I'm pleased to say I'll be in the United States next month for the DevNexus 2016 conference that is taking place in Atlanta, GA. In addition to a number of talks about software architecture, I'll also be running my popular "The Art of Visualising Software Architecture" workshop. Related to the (free) book with the same name, this hands-on workshop is about improving communication and specifically software architecture diagrams. We'll talk about UML and some anti-patterns of "boxes and lines" diagrams, but the real focus is on my "C4 model" and some lightweight techniques for communicating software architecture. The agenda and slides for this 1-day workshop are available online. I hope you'll be able to join me.

Categories: Architecture

Stuff The Internet Says On Scalability For January 15th, 2016

Hey, it's HighScalability time:


Space walk from 2001: A Space Odyssey? Nope. A base jump from the CN Tower in Toronto.

 

If you like this Stuff then please consider supporting me on Patreon.
  • 13.5TB: open data from Yahoo for machine learning; 1+ exabytes: data stored in the cloud; 13: reasons autonomous cars should have steering wheels; 3,000: kilowatt-hours of energy generated by the solar bike path; 10TB: helium-filled hard disk; $224 Billion: 2016 gadget spending in US; 85: free ebooks; 17%: Azure price drop on some VMs; 20.5: tons of explosives detonated on Mythbusters; 20 Billion: Apple’s App Store Sales; 70%: Global Internet traffic goes through Northern Virginia; 12: photos showing the beauty of symmetry; 

  • Quotable Quote:
    • @WhatTheFFacts: Scaling Earth's 'life' to 46 years, the industrial revolution began 1 minute ago -- In that time we've destroyed half the world's forests.
    • David Brin: The apotheosis of Darth Vader was truly disgusting. Saving one demigod—a good demigod, his son—wiped away all his guilt from slaughtering billions of normal people.
    • Brian Brazil: In today’s world, having a 1:1 coupling between machines and services is becoming less common. We no longer have the webserver machine, we have one machine which hosts one part of the webserver service. 
    • @iamxavier: "Snapchat is said to have 7 billion mobile video views vs Facebook’s 8 bil.The kicker: Fb has 15x Snapchat’s users."
    • Charlie Stross: Do you want to know the real reason George R. R. Martin's next book is late? it's because keeping track of that much complexity and so many characters and situations is hard work, and he's not getting any younger. 
    • @raju: Unicorn-Size Losses: @Uber lost $671.4 million in 2014 & $987.2 million in the first half of 2015
    • @ValaAfshar: 3.8 trillion photos were taken in all of human history until mid-2011. 1 trillion photos were taken in 2015 alone
    • @ascendantlogic: 2010: Rewrite all the ruby apps with javascript 2012: Rewrite all the javascript apps with Go 2014: Rewrite all the Go apps with Rust
    • @kylebrussell: “Virtual reality was tried in the 90s!” Yeah, with screens that had 7.9% of the Oculus Rift CV1 resolution
    • @kevinmarks: #socosy2016 @BobMankoff: people don't like novelty - they like a little novelty in a cocoon of familiarity, that they could have thought of
    • @toddhoffious: The problem nature has solved is efficient variable length headers. Silicon doesn't like them for networks, or messaging protocols. DNA FTW.
    • @jaykreps: I'm loving the price war between cloud providers, cheap compute enables pretty much everything else in technology. 
    • The Confidence Game: Transition is the confidence game’s great ally, because transition breeds uncertainty. There’s nothing a con artist likes better than exploiting the sense of unease we feel when it appears that the world as we know it is about to change.
    • @somic: will 2016 be the year of customer-defined allocation strategies for aws spot fleet? (for example, through a call to aws lambda)
    • beachstartup: i run an infrastructure startup. the rule of thumb is once you hit $20-99k/month, you can cut your AWS bill in half somewhere else. sites in this phase generally only use about 20% of the features of aws.
    • @fart: the most important part of DevOps to me is “kissing the data elf”
    • @destroytoday: In comparison, @ProductHunt drove 1/4 the traffic of Hacker News, but brought in 700+ new users compared to only 20 from HN.
    • @aphyr: Man, if people knew even a *tenth* of the f*cked up shit tech company execs have tried to pull... Folks are *awfully* polite on twitter.
    • @eric_analytics: It took Uber five years to get to a billion rides, and its Chinese rival just did it in one
    • lowpro: Being a 19 year old college student with many friends in high school, I can say snapchat is the most popular social network, followed by Instagram then Twitter, and lastly Facebook. If something is happening, people will snap and tweet about it, Instagram and Facebook are reserved for bigger events that are worth mentioning, snapchat and Twitter are for more day to day activities and therefore get used much more often.
    • Thaddeus Metz: The good, the true, and the beautiful give meaning to life when we transcend our animal nature by using our rational nature to realize states of affairs that would be appreciated from a universal perspective.
    • Reed Hastings: We realized we learned best by getting in the market and then learning, even if we’re less than perfect. Brazil is the best example. We started [there] four years ago. At first it was very slow growth, but because we were in the market talking to our members who had issues with the service, we could get those things fixed, and we learned faster.

  • Why has Bitcoin failed? From Mike Hearn: it has failed because the community has failed. What was meant to be a new, decentralised form of money that lacked “systemically important institutions” and “too big to fail” has become something even worse: a system completely controlled by just a handful of people. Worse still, the network is on the brink of technical collapse. The mechanisms that should have prevented this outcome have broken down, and as a result there’s no longer much reason to think Bitcoin can actually be better than the existing financial system.

  • Lessons learned on the path to production. From Docker CEO: 1) IaaS is too low; 2) PaaS is too high: Devs do not adopt locked down platforms; 3) End to end matters: Devs care about deployment, ops cares about app lifecycle and origin; 4) Build management, orchestration, & more in a way that enables portability; 5) Build for resilience, not zero defects; 6) If you do 5 right, agility + control

  • Is this the Tesla of database systems? No Compromises: Distributed Transactions with Consistency, Availability, and Performance: FaRMville transactions are processed by FaRM – the Fast Remote Memory system that we first looked at last year. A 90 machine FaRM cluster achieved 4.5 million TPC-C ‘new order’ transactions per second with a 99th percentile latency of 1.9ms. If you’re prepared to run at ‘only’ 4M tps, you can cut that latency in half. Oh, and it can recover from failure in about 60ms. 

  • Uber tells the story behind the design and implementation of their scalable datastore using MySQL. Uber took the path of many others in writing an entire layer on top of MySQL to create the database that best fits their use case. Uber wanted: to be able to linearly add capacity by adding more servers; write availability; a way of notifying downstream dependencies; secondary indexes; operational trust in the system, as it contains mission-critical trip data. They looked at Cassandra, Riak, MongoDB, etc. Features alone did not decide their choice. What did?: "the decision ultimately came down to operational trust in the system we’d use."  If you are Uber this is a good reason that may not seem as important to those without accountability. Uber's design is inspired by Friendfeed, and the focus on the operational side was inspired by Pinterest.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Judo Strategy

Xebia Blog - Thu, 01/14/2016 - 23:00

In the age of disruption, incumbents with decades of history get swept away by startups at an alarming rate. What allows these fledgling companies to succeed? What is their strategy, and how can you defend against such an opponent?

I found that the similarities between Judo and business strategy allow us to apply the Judo principles to become a Product Samurai.

There are people who learn by reading (carry on doing so), people who learn by listening and observing (see the video registration) and people who learn by experiencing (sign up here). With that out of our way, let’s get started.

Enter the dojo

The dojo is a place where Judokas come to practice. One of the key things that make Judo great is that it can be learned, and it can be taught. To illustrate just how powerful it is and what the three core principles are, I asked my 9-year-old daughter to give a demonstration.

The three principles of Judo Strategy

Jigorō Kanō founded Judo after having studied many different styles. Upon reaching a deeper level of understanding of what works and what doesn’t he created his philosophy based on the rule of “maximum effect with minimal effort”. What contributed to this rule became part of Judo, what did not contribute was discarded.

The way disruptors operate is not much different. With no past to hang on to, startups are free to pivot and take technology, processes and tools from the best offerings and combine them into a new technique. If they do not operate on the principle of “maximum effect with minimal effort” they will die. Incumbents typically have the capital, equipment, skills, market knowledge, channels and people to do the same, yet usually choose not to leverage them.

Movement

The first principle in Judo is movement. Before we get in close with our opponent we seek to grab him in a favorable position. Maybe we can grip a sleeve or an arm and catch the opponent off guard. As a disruptor I also seek uncontested ground. A head-on attack on the core product of an established player is usually met with great resistance; as a disruptor I cannot hope to win that battle.

Figure: Variables in the Lean Canvas

So seeking uncontested ground means launching under the radar. This will go against the advice of your marketing director, who will tell you to make “as much noise as possible”. That will indeed attract more people, but it will also tell your opponent exactly what you are doing. So align your marketing with your ability to execute. Why go multi-lingual when you can only serve local customers? Why run a nationwide campaign when you are still running a pilot in a single city? Why pinpoint the shortcomings in your competitor’s product when you cannot outpace them in development (yet)?

At some point contact is inevitable, but a good disruptor will have chosen a vantage point. By the time Dell got noticed, it had a well-oiled distribution network in place and was able to scale quickly, whereas the competition still relied on brick-and-mortar partners. Digital media like nu.nl came out of nowhere, and by the time traditional newspapers had noticed them it was not only hard to catch up, but their opponent had also defined where the battle would take place: in the land of apps, away from the strengths of traditional media. There is still a chance for incumbents, and that is to out-innovate your opponent; to do so you have to absorb some disruptor DNA yourself.

Balance

The second principle is balance. Once we have gripped our opponent we are trying to destroy their balance whilst strengthening our own. Ironically this usually means keeping your enemies close.

Figure: Puppy ploy

Perhaps the best possible example of this is the “puppy ploy” that Apple pulled on the music industry in the early days of iTunes. By emphasizing that downloadable music was not really their business (it only represented 1 or 2%) and was highly unprofitable (due to illegal sources like Napster and Kazaa), Apple obtained the position it still holds today and became the dominant music platform. As history repeats itself, a small company from Finland did the same thing with streaming, rather than owning the music. If you close your eyes you can hear them pitch: “it’s only a few percent of your business, and it’s like radio; does radio affect your business?”.

A little bit closer to home, I’ve seen firsthand how a major eCommerce platform brokered deals with numerous brands. Sure, there is a kickback involved, but it prevents your competitors from opening their own store. By now the platform has become the dominant player, and the brands have come to rely on their digital partner to run the shop for them. It’s a classic example of keeping your enemies so close that they cannot leverage their strength.

Leverage 

My favourite category of throwing techniques (Nage Waza) are the Sutemi, or in English “sacrifice throws”. In these techniques you typically sacrifice your position so you can leverage the power of your opponent.

Basically it means: go after sunk cost. Observe your opponent and learn where he has invested. Virgin Air does not fly out of major airports, thereby circumventing the enormous investment that other airlines have made. Has your opponent invested in a warehouse to support a one-day delivery service? Make delivery free! Is it a platform battle? Open source it and make money from running services on top of it.

Does it hurt? Of course it does! This is why the first thing you learn in Judo is fall breaking (ukemi waza). The question is not if you will fall, but if you can get back up quickly enough. Now, this is not a plea for polka-style pivoting startup behavior. You still need a strategy and need to stick to your product vision, but be prepared to sacrifice in order to reach it.

I once ran a Customer Journey mapping workshop at Al Jazeera. Though we focused on apps, the real question was: “what is the heart of the brand?” How can we be a better news agency than ABC, BBC, CNN etc.? By creating better articles? By providing more in-depth news? It turned out we could send photographers where they could not. They had invested in different areas, and by creating a photo-driven news experience those competitors would be hindered by their sunk cost.

If you manage to take the battle to uncontested ground and have destroyed your opponent’s balance, his strength will work against him. It took Coca-Cola 15 years to respond to the larger Pepsi bottle due to its investment in its iconic bottle. By the time they could and wanted to produce larger bottles, Pepsi had become the second largest brand. WhatsApp reached 10 billion messages in 2 years; you don’t have Coca-Cola’s luxury anymore.

Figure: Awesome machines that you need to let go of

Why did Internet-only news agencies like nu.nl score a dominant position in the mobile space? Because the incumbents were too reluctant to cannibalize their investments in dead-tree technology.

Key takeaway

Judo can be learned and so can these innovation practices. We have labeled the collection of these practices Continuous Innovation. Adopting these practices means adopting the DNA of a disruptor.

It’s a relentless search to find unmet market needs, operating under the radar until you find market-fit. You can apply typical Lean startup techniques like Wizard of Oz, landing pages or product bootcamp.

Following through fast means scalable architecture principles and an organization that can respond to change. As an incumbent, watch out for disruptors that destroy your balance, typically by running a niche of your business for you that will become strategic in the future.

Finally: if you are not prepared to sacrifice your current product for one that addresses the customer’s needs better, someone else will do it for you.

 

This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.


AzureCon Keynote Announcements: India Regions, GPU Support, IoT Suite, Container Service, and Security Center

ScottGu's Blog - Scott Guthrie - Thu, 10/01/2015 - 06:43

Yesterday we held our AzureCon event and were fortunate to have tens of thousands of developers around the world participate.  During the event we announced several great new enhancements to Microsoft Azure including:

  • General Availability of 3 new Azure regions in India
  • Announcing new N-series of Virtual Machines with GPU capabilities
  • Announcing Azure IoT Suite available to purchase
  • Announcing Azure Container Service
  • Announcing Azure Security Center

We were also fortunate to be joined on stage by several great Azure customers who talked about their experiences using Azure including: Jet.com, Nascar, Alaska Airlines, Walmart, and ThyssenKrupp.

Watching the Videos

All of the talks presented at AzureCon (including the 60 breakout talks) are now available to watch online.  You can browse and watch all of the sessions here.


My keynote to kick off the event was an hour long and provided an end-to-end look at Azure and some of the big new announcements of the day.  You can watch it here.

Below are some more details of some of the highlights:

Announcing General Availability of 3 new Azure regions in India

Yesterday we announced the general availability of our new India regions: Mumbai (West), Chennai (South) and Pune (Central).  They are now available for you to deploy solutions into.

This brings our worldwide presence of Azure regions up to 24 regions, more than AWS and Google combined. Over 125 customers and partners have been participating in the private preview of our new India regions.  We are seeing tremendous interest from industry sectors like Public Sector, Banking Financial Services, Insurance and Healthcare whose cloud adoption has been restricted by data residency requirements.  You can all now deploy your solutions too.

Announcing N-series of Virtual Machines with GPU Support

This week we announced our new N-series family of Azure Virtual Machines that enable GPU capabilities.  Featuring NVidia’s best of breed Tesla GPUs, these Virtual Machines will help you run a variety of workloads ranging from remote visualization to machine learning to analytics.

The N-series VMs feature NVidia’s flagship GPU, the K80 which is well supported by NVidia’s CUDA development community. N-series will also have VM configurations featuring the latest M60 which was recently announced by NVidia. With support for M60, Azure becomes the first hyperscale cloud provider to bring the capabilities of NVidia’s Quadro High End Graphics Support to the cloud. In addition, N-series combines GPU capabilities with the superfast RDMA interconnect so you can run multi-machine, multi-GPU workloads such as Deep Learning and Skype Translator Training.

Announcing Azure Security Center

This week we announced the new Azure Security Center—a new Azure service that gives you visibility and control of the security of your Azure resources, and helps you stay ahead of threats and attacks.  Azure is the first cloud platform to provide unified security management with capabilities that help you prevent, detect, and respond to threats.


The Azure Security Center provides a unified view of your security state, so your team and/or your organization’s security specialists can get the information they need to evaluate risk across the workloads they run in the cloud.  Based on customizable policy, the service can provide recommendations. For example, the policy might be that all web applications should be protected by a web application firewall. If so, the Azure Security Center will automatically detect when web apps you host in Azure don’t have a web application firewall configured, and provide a quick and direct workflow to get a firewall from one of our partners deployed and configured:


Of course, even with the best possible protection in place, attackers will still try to compromise systems. To address this problem and adopt an “assume breach” mindset, the Azure Security Center uses advanced analytics, including machine learning, along with Microsoft’s global threat intelligence network to look for and alert on attacks. Signals are automatically collected from your Azure resources, the network, and integrated security partner solutions and analyzed to identify cyber-attacks that might otherwise go undetected. Should an incident occur, security alerts offer insights into the attack and suggest ways to remediate and recover quickly. Security data and alerts can also be piped to existing Security Information and Events Management (SIEM) systems your organization has already purchased and is using on-premises.


No other cloud vendor provides the depth and breadth of these capabilities, and they are going to enable you to build even more secure applications in the cloud.

Announcing Azure IoT Suite Available to Purchase

The Internet of Things (IoT) provides tremendous new opportunities for organizations to improve operations, become more efficient at what they do, and create new revenue streams.  We have had a huge interest in our Azure IoT Suite which until this week has been in public preview.  Our customers like Rockwell Automation and ThyssenKrupp Elevators are already connecting data and devices to solve business problems and improve their operations. Many more businesses are poised to benefit from IoT by connecting their devices to collect and analyze untapped data with remote monitoring or predictive maintenance solutions.

In working with customers, we have seen that getting started on IoT projects can be a daunting task starting with connecting existing devices, determining the right technology partner to work with and scaling an IoT project from proof of concept to broad deployment. Capturing and analyzing untapped data is complex, particularly when a business tries to integrate this new data with existing data and systems they already have. 

The Microsoft Azure IoT Suite helps address many of these challenges.  The Microsoft Azure IoT Suite helps you connect and integrate with devices more easily, and to capture and analyze untapped device data by using our preconfigured solutions, which are engineered to help you move quickly from proof of concept and testing to broader deployment. Today we support remote monitoring, and soon we will be delivering support for predictive maintenance and asset management solutions.

These solutions reliably capture data in the cloud and analyze the data both in real time and in batch. Once your devices are connected, Azure IoT Suite provides real-time information in an intuitive format that helps you take action on insights. Our advanced analytics then enable you to easily process data, even when it comes from a variety of sources including devices, line-of-business assets, sensors and other systems, and provide rich built-in dashboards and analytics tools for access to the data and insights you need. User permissions can be set to control reporting and share information with the right people in your organization.

Below is an example of the types of built-in dashboard views that you can leverage without having to write any code:


To support adoption of the Azure IoT Suite, we are also announcing the new Microsoft Azure Certified for IoT program, an ecosystem of partners whose offerings have been tested and certified to help businesses with their IoT device and platform needs. The first set of partners includes Beaglebone, Freescale, Intel, Raspberry Pi, Resin.io, Seeed and Texas Instruments. These partners, along with experienced global solution providers, are helping businesses harness the power of the Internet of Things today.

You can learn more about our approach and the Azure IoT Suite at www.InternetofYourThings.com and partners can learn more at www.azure.com/iotdev.

Announcing Azure IoT Hub

This week we also announced the public preview of our new Azure IoT Hub service which is a fully managed service that enables reliable and secure bi-directional communications between millions of IoT devices and an application back end. Azure IoT Hub offers reliable device-to-cloud and cloud-to-device hyper-scale messaging, enables secure communications using per-device security credentials and access control, and includes device libraries for the most popular languages and platforms.

Providing secure, scalable bi-directional communication from the heterogeneous devices to the cloud is a cornerstone of any IoT solution which Azure IoT hub addresses in the following way:

  • Per-device authentication and secure connectivity: Each device uses its own security key to connect to IoT Hub. The application back end is then able to individually whitelist and blacklist each device, enabling complete control over device access.
  • Extensive set of device libraries: Azure IoT device SDKs are available and supported for a variety of languages and platforms such as C, C#, Java, and JavaScript.
  • IoT protocols and extensibility: Azure IoT Hub provides native support of the HTTP 1.1 and AMQP 1.0 protocols for device connectivity. Azure IoT Hub can also be extended via the Azure IoT protocol gateway open source framework to provide support for MQTT v3.1.1.
  • Scale: Azure IoT Hub scales to millions of simultaneously connected devices, and millions of events per second.

Getting started with Azure IoT Hub is easy. Simply navigate to the Azure Preview portal and select Internet of Things -> Azure IoT Hub. Choose the name, pricing tier, number of units and location, and select Create to provision and deploy your IoT Hub:


Once the IoT hub is created, you can navigate to Settings and create new shared access policies and modify other messaging settings for granular control.

The bi-directional communication enabled with an IoT Hub provides powerful capabilities in a real world IoT solution such as the control of individual device security credentials and access through the use of a device identity registry.  Once a device identity is in the registry, the device can connect, send device-to-cloud messages to the hub, and receive cloud-to-device messages from backend applications with just a few lines of code in a secure way.
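To make the “few lines of code” concrete, here is a minimal sketch of a device-to-cloud send using the Python azure-iot-device SDK. Note this is an assumption on my part: the post lists C, C#, Java and JavaScript device SDKs, and the Python SDK shown here arrived later. The hub name, device id and key are hypothetical placeholders; in practice the connection string comes from the device’s entry in the IoT Hub identity registry.

```python
# Hedged sketch: send a device-to-cloud message with the azure-iot-device Python SDK.
# The connection string below is a hypothetical placeholder; a real one comes from the
# per-device entry in the IoT Hub identity registry.
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = (
    "HostName=myhub.azure-devices.net;"   # hypothetical hub name
    "DeviceId=device-001;"                # device registered in the identity registry
    "SharedAccessKey=<device-key>"        # per-device security key
)

# Each device connects with its own credentials, so the back end can whitelist or
# blacklist it individually.
client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# Device-to-cloud telemetry message.
client.send_message(Message('{"temperature": 21.5, "humidity": 60}'))

client.disconnect()
```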

Learn more about Azure IoT Hub and get started with your own real world IoT solutions.

Announcing the new Azure Container Service

We’ve been working with Docker to integrate Docker containers with both Azure and Windows Server for some time. This week we announced the new Azure Container Service, which leverages the popular Apache Mesos project to deliver a customer-proven orchestration solution for applications delivered as Docker containers.


The Azure Container Service enables users to easily create and manage a Docker enabled Apache Mesos cluster. The container management software running on these clusters is open source, and in addition to the application portability offered by tooling such as Docker and Docker Compose, you will be able to leverage portable container orchestration and management tooling such as Marathon, Chronos and Docker Swarm.

When utilizing the Azure Container Service, you will be able to take advantage of the tight integration with Azure infrastructure management features such as tagging of resources, Role Based Access Control (RBAC), Virtual Machine Scale Sets (VMSS) and the fully integrated user experience in the Azure portal. By coupling the enterprise-class Azure cloud with key open source build, deploy and orchestration software, we maximize customer choice when it comes to containerizing workloads.

The service will be available for preview by the end of the year.

Learn More

Watch the AzureCon sessions online to learn more about all of the above announcements – plus a lot more that was covered during the day.  We are looking forward to seeing what you build with what you learn!

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing General Availability of HDInsight on Linux + new Data Lake Services and Language

ScottGu's Blog - Scott Guthrie - Mon, 09/28/2015 - 21:54

Today, I’m happy to announce several key additions to our big data services in Azure, including the General Availability of HDInsight on Linux, as well as the introduction of our new Azure Data Lake and Language services.

General Availability of HDInsight on Linux

Today we are announcing general availability of our HDInsight service on Ubuntu Linux.  HDInsight enables you to easily run managed Hadoop clusters in the cloud.  With today’s release you can now configure these clusters to run using either a Windows Server operating system or an Ubuntu-based Linux operating system.

HDInsight on Linux enables even broader support for Hadoop ecosystem partners to run in HDInsight, providing you even greater choice of preferred tools and applications for running Hadoop workloads. Both Linux and Windows clusters in HDInsight are built on the same standard Hadoop distribution and offer the same set of rich capabilities.

Today’s new release also enables additional capabilities, such as cluster scaling, virtual network integration and script action support. Furthermore, in addition to the Hadoop cluster type, you can now create HBase and Storm clusters on Linux for your NoSQL and real-time processing needs, such as building an IoT application.

Create a cluster

HDInsight clusters running using Linux can now be easily created from the Azure Management portal under the Data + Analytics section.  Simply select Ubuntu from the cluster operating system drop-down, as well as optionally choose the cluster type you wish to create (we support base Hadoop as well as clusters pre-configured for workloads like Storm, Spark, HBase, etc).


All HDInsight Linux clusters can be managed by Apache Ambari. Ambari provides the ability to customize configuration settings of your Hadoop cluster while giving you a unified view of the performance and state of your cluster and providing monitoring and alerting within the HDInsight cluster.


Installing additional applications and Hadoop components

Similar to HDInsight Windows clusters, you can now customize your Linux cluster by installing additional applications or Hadoop components that are not part of the default HDInsight deployment. This can be accomplished using Bash scripts with the script action capability.  As an example, you can now install Hue on an HDInsight Linux cluster and easily use it with your workloads:


Develop using Familiar Tools

All HDInsight Linux clusters come with SSH connectivity enabled by default. You can connect to the cluster via an SSH client of your choice. Moreover, SSH tunneling can be leveraged to remotely access all of the Hadoop web applications from the browser.
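As a small illustration of the SSH tunneling mentioned above, the sketch below opens a SOCKS tunnel with the standard OpenSSH client from Python. The cluster name and SSH user are hypothetical placeholders, and the `<clustername>-ssh.azurehdinsight.net` endpoint pattern is an assumption about how the cluster's SSH endpoint is named.

```python
# Hedged sketch: open a SOCKS tunnel to an HDInsight Linux cluster so the Hadoop/Ambari
# web UIs can be browsed through the tunnel. Assumes the standard OpenSSH client is on
# PATH and that the SSH endpoint follows <clustername>-ssh.azurehdinsight.net.
import subprocess

CLUSTER = "mycluster"     # hypothetical cluster name
SSH_USER = "sshuser"      # SSH user chosen at cluster creation
SOCKS_PORT = 9876         # local SOCKS proxy port to configure in your browser

tunnel = subprocess.Popen([
    "ssh",
    "-D", str(SOCKS_PORT),  # dynamic (SOCKS) port forwarding
    "-C",                   # compress traffic
    "-N",                   # no remote command, tunnel only
    f"{SSH_USER}@{CLUSTER}-ssh.azurehdinsight.net",
])

# Point your browser's SOCKS proxy at localhost:9876, then open the cluster web UIs.
# Call tunnel.terminate() to close the tunnel when you are done.
```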

New Azure Data Lake Services and Language

We continue to see customers enabling amazing scenarios with big data in Azure including analyzing social graphs to increase charitable giving, analyzing radiation exposure and using the signals from thousands of devices to simulate ways for utility customers to optimize their monthly bills. These and other use cases are resulting in even more data being collected in Azure. In order to be able to dive deep into all of this data, and process it in different ways, you can now use our Azure Data Lake capabilities – which are 3 services that make big data easy.

The first service in the family is available today: Azure HDInsight, our managed Hadoop service that lets you focus on finding insights, and not spend your time having to manage clusters. HDInsight lets you deploy Hadoop, Spark, Storm and HBase clusters, running on Linux or Windows, managed, monitored and supported by Microsoft with a 99.9% SLA.

The other two services, Azure Data Lake Store and Azure Data Lake Analytics, introduced below, are available in private preview today and will be available broadly for public usage shortly.

Azure Data Lake Store

Azure Data Lake Store is a hyper-scale HDFS repository designed specifically for big data analytics workloads in the cloud. Azure Data Lake Store solves the big data challenges of volume, variety, and velocity by enabling you to store data of any type, at any size, and process it at any scale. Azure Data Lake Store can support near real-time scenarios such as the Internet of Things (IoT) as well as throughput-intensive analytics on huge data volumes. The Azure Data Lake Store also supports a variety of computation workloads by removing many of the restrictions constraining traditional analytics infrastructure like the pre-definition of schema and the creation of multiple data silos. Once located in the Azure Data Lake Store, Hadoop-based engines such as Azure HDInsight can easily mine the data to discover new insights.

Some of the key capabilities of Azure Data Lake Store include:

  • Any Data: A distributed file store that allows you to store data in its native format, Azure Data Lake Store eliminates the need to transform or pre-define schema in order to store data.
  • Any Size: With no fixed limits to file or account sizes, Azure Data Lake Store enables you to store kilobytes to exabytes with immediate read/write access.
  • At Any Scale: You can scale throughput to meet the demands of your analytic systems including the high throughput needed to analyze exabytes of data. In addition, it is built to handle high volumes of small writes at low latency making it optimal for near real-time scenarios like website analytics, and Internet of Things (IoT).
  • HDFS Compatible: It works out-of-the-box with the Hadoop ecosystem including other Azure Data Lake services such as HDInsight.
  • Fully Integrated with Azure Active Directory: Azure Data Lake Store is integrated with Azure Active Directory for identity and access management over all of your data.
Azure Data Lake Analytics with U-SQL

The new Azure Data Lake Analytics service makes it much easier to create and manage big data jobs. Built on YARN and years of experience running analytics pipelines for Office 365, XBox Live, Windows and Bing, the Azure Data Lake Analytics service is the most productive way to get insights from big data. You can get started in the Azure management portal, querying across data in blobs, Azure Data Lake Store, and Azure SQL DB. By simply moving a slider, you can scale up as much computing power as you’d like to run your data transformation jobs.


Today we are introducing a new U-SQL offering in the analytics service, an evolution of the familiar syntax of SQL.  U-SQL allows you to write declarative big data jobs, as well as easily include your own user code as part of those jobs. Inside Microsoft, developers have been using this combination in order to be productive operating on massive data sets of many exabytes of scale, processing mission critical data pipelines. In addition to providing an easy to use experience in the Azure management portal, we are delivering a rich set of tools in Visual Studio for debugging and optimizing your U-SQL jobs. This lets you play back and analyze your big data jobs, understanding bottlenecks and opportunities to improve both performance and efficiency, so that you can pay only for the resources you need and continually tune your operations.

Learn More

For more information and to get started, check out the following links:

Hope this helps,

Scott

Categories: Architecture, Programming

Online AzureCon Conference this Tuesday

ScottGu's Blog - Scott Guthrie - Mon, 09/28/2015 - 04:35

This Tuesday, Sept 29th, we are hosting our online AzureCon event – which is a free online event with 60 technical sessions on Azure presented by both the Azure engineering team as well as MVPs and customers who use Azure today and will share their best practices.

I’ll be kicking off the event with a keynote at 9am PDT.  Watch it to learn the latest on Azure, and hear about a lot of exciting new announcements.  We’ll then have some fantastic sessions that you can watch throughout the day to learn even more.


Hope to see you there!

Scott

Categories: Architecture, Programming

Better Density and Lower Prices for Azure’s SQL Elastic Database Pools

ScottGu's Blog - Scott Guthrie - Wed, 09/23/2015 - 21:41

A few weeks ago, we announced the preview availability of the new Basic and Premium Elastic Database Pool tiers with our Azure SQL Database service.  Elastic Database Pools enable you to run multiple, isolated and independent databases that can be scaled automatically across a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.

Today, we are announcing some nice changes to the pricing structure of Elastic Database Pools as well as changes to the density of elastic databases within a pool.  These changes make it even more attractive to use Elastic Database Pools to build your applications.

Specifically, we are making the following changes:

  • Finalizing the eDTU price – With Elastic Database Pools you purchase units of capacity called eDTUs, which you can then use to run multiple databases within a pool.  We have decided not to increase the price of eDTUs as we go from preview to GA.  This means that you’ll be able to pay a much lower price (about 50% less) for eDTUs than many developers expected.
  • Eliminating the per-database fee – In addition to lower eDTU prices, we are also eliminating the per-database fee that we had during the preview. This means you no longer need to pay a per-database charge to use an Elastic Database Pool, which makes the pricing much more attractive for scenarios where you want to have lots of small databases.
  • Pool density – We are announcing increased density limits that enable you to run many more databases per Elastic Database Pool. See the chart below under “Maximum databases per pool” for specifics. This change will take effect at the time of general availability, but you can design your apps around these numbers.  The increased pool density limits will make Elastic Database Pools even more attractive.


 

Below are the updated parameters for each of the Elastic Database Pool options with these new changes:


For more information about Azure SQL Database Elastic Database Pools and management tools, go to the technical overview here.

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing the Biggest VM Sizes Available in the Cloud: New Azure GS-VM Series

ScottGu's Blog - Scott Guthrie - Wed, 09/02/2015 - 18:51

Today, we’re announcing the release of the new Azure GS-series of Virtual Machine sizes, which enable Azure Premium Storage to be used with Azure G-series VM sizes. These VM sizes are now available to use in both our US and Europe regions.

Earlier this year we released the G-series of Azure Virtual Machines, which provide the largest VM sizes offered by any public cloud provider.  They provide up to 32 cores of CPU, 448 GB of memory and 6.59 TB of local SSD-based storage.  Today’s release of the GS-series of Azure Virtual Machines enables you to use these large VMs with Azure Premium Storage, and enables you to perform up to 2,000 MB/sec of storage throughput, more than double any other public cloud provider.  The G5/GS5 VM size now also offers more than 20 Gbps of network bandwidth, again more than double the network throughput provided by any other public cloud provider.

These new VM offerings provide an ideal solution for your most demanding cloud-based workloads, and are great for relational databases like SQL Server, MySQL and Postgres as well as large data warehouse solutions. You can also use the GS-series to significantly scale up the performance of enterprise applications like Dynamics AX.

The G and GS-series VM sizes are available to use now in our West US, East US 2, and West Europe Azure regions.  You’ll see us continue to expand availability around the world in more regions in the coming months.

GS Series Size Details

The below table provides more details on the exact capabilities of the new GS-series of VM sizes:

Size           Cores   Memory (GB)   Max Disk IOPS   Max Disk Bandwidth (MB per second)
Standard_GS1   2       28            5,000           125
Standard_GS2   4       56            10,000          250
Standard_GS3   8       112           20,000          500
Standard_GS4   16      224           40,000          1,000
Standard_GS5   32      448           80,000          2,000
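If you are choosing between these sizes programmatically, a small helper like the one below can pick the smallest GS size that satisfies a workload's memory and IOPS requirements. This is just a lookup over the numbers copied from the table above, not part of any Azure SDK.

```python
# Minimal sketch: pick the smallest GS-series size that satisfies a workload's memory and
# disk IOPS requirements, using the published limits from the table above.
GS_SIZES = [
    # (name, cores, memory_gb, max_disk_iops, max_disk_bandwidth_mb_s)
    ("Standard_GS1", 2, 28, 5_000, 125),
    ("Standard_GS2", 4, 56, 10_000, 250),
    ("Standard_GS3", 8, 112, 20_000, 500),
    ("Standard_GS4", 16, 224, 40_000, 1_000),
    ("Standard_GS5", 32, 448, 80_000, 2_000),
]

def smallest_gs_size(min_memory_gb: int, min_iops: int) -> str:
    """Return the first (smallest) GS size meeting both requirements."""
    for name, _cores, mem, iops, _bw in GS_SIZES:
        if mem >= min_memory_gb and iops >= min_iops:
            return name
    raise ValueError("No GS size satisfies these requirements")

# Example: a database server needing 200 GB of RAM and 30,000 IOPS -> Standard_GS4
print(smallest_gs_size(200, 30_000))
```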

Creating a GS-Series Virtual Machine

Creating a new GS series VM is very easy.  Simply navigate to the Azure Preview Portal, select New(+) and choose your favorite OS or VM image type:


Click the Create button, and then click the pricing tier option and select “View All” to see the full list of VM sizes. Make sure your region is West US, East US 2, or West Europe to select the G-series or the GS-Series:


When choosing a GS-series VM size, the portal will create a storage account using Premium Azure Storage. You can select an existing Premium Storage account, as well, to use for the OS disk of the VM:


Hitting Create will launch and provision the VM.

Learn More

If you would like more information on the GS-Series VM sizes as well as other Azure VM Sizes then please visit the following page for additional details: Virtual Machine Sizes for Azure.

For more information on Premium Storage, please see: Premium Storage overview. Also, refer to Using Linux VMs with Premium Storage for more details on Linux deployments on Premium Storage.

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing Great New SQL Database Capabilities in Azure

ScottGu's Blog - Scott Guthrie - Thu, 08/27/2015 - 17:13

Today we are making available several new SQL Database capabilities in Azure that enable you to build even better cloud applications.  In particular:

  • We are introducing two new pricing tiers for our  Elastic Database Pool capability.  Elastic Database Pools enable you to run multiple, isolated and independent databases on a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.
  • We are also introducing new higher-end scale options for SQL Databases that enable you to run even larger databases with significantly more compute + storage + networking resources.

Both of these additions are available to start using immediately.

Elastic Database Pools

If you are a SaaS developer with tens, hundreds, or even thousands of databases, an elastic database pool dramatically simplifies the process of creating, maintaining, and managing performance across these databases within a budget that you control. 


A common SaaS application pattern (especially for B2B SaaS apps) is for the SaaS app to use a different database to store data for each customer.  This has the benefit of isolating the data for each customer separately (and enables each customer’s data to be encrypted separately, backed up separately, etc).  While this pattern is great from an isolation and security perspective, each database can end up having varying and unpredictable resource consumption (CPU/IO/memory patterns), and because the peaks and valleys for each customer can be difficult to predict, it is hard to know how many resources to provision.  Developers were previously faced with two options: either over-provision database resources based on peak usage and overpay, or under-provision to save cost at the expense of performance and customer satisfaction during peaks.

Microsoft created elastic database pools specifically to help developers solve this problem.  With Elastic Database Pools you can allocate a shared pool of database resources (CPU/IO/memory), and then create and run multiple isolated databases on top of this pool.  You can set minimum and maximum performance SLA limits of your choosing for each database you add into the pool (ensuring that none of the databases unfairly impacts other databases in your pool).  Our management APIs also make it much easier to script and manage these multiple databases together, as well as optionally execute queries that span across them (useful for a variety of operations).  And best of all, when you add multiple databases to an Elastic Database Pool, you are able to average out the typical utilization load (because each of your customers tends to have different peaks and valleys) and end up requiring far fewer database resources (and spend less money as a result) than you would if you ran each database separately.

The below chart shows a typical example of what we see when SaaS developers take advantage of the Elastic Pool capability.  Each individual database has different peaks and valleys in terms of utilization.  As you combine multiple of these databases into an Elastic Pool, the peaks and valleys tend to normalize out (since they often happen at different times), requiring far fewer overall resources than you would need if each database were resourced separately:

Figure: databases sharing eDTUs
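The effect is easy to see with a toy calculation: provisioning for the peak of the combined load needs far fewer DTUs than provisioning every database for its own peak. The utilization traces below are made-up illustrative numbers, not measurements.

```python
# Toy illustration of the pooling effect described above: when each customer database
# peaks at a different time, sizing for the peak of the combined load needs far fewer
# DTUs than sizing every database for its own peak. The traces are made-up numbers.
db_utilization_dtu = {
    "customer_a": [5, 5, 80, 10, 5, 5],    # peaks in hour 2
    "customer_b": [10, 70, 10, 5, 5, 10],  # peaks in hour 1
    "customer_c": [5, 5, 10, 10, 75, 5],   # peaks in hour 4
}

sum_of_peaks = sum(max(trace) for trace in db_utilization_dtu.values())
combined_trace = [sum(hour) for hour in zip(*db_utilization_dtu.values())]
peak_of_sum = max(combined_trace)

print(f"Provisioned separately (sum of peaks): {sum_of_peaks} DTUs")  # 225
print(f"Provisioned as a pool (peak of sum):   {peak_of_sum} DTUs")   # 100
```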

Because Elastic Database Pools are built using our SQL Database service, you also get to take advantage of all of the underlying database as a service capabilities that are built into it: 99.99% SLA, multiple-high availability replica support built-in with no extra charges, no down-time during patching, geo-replication, point-in-time recovery, TDE encryption of data, row-level security, full-text search, and much more.  The end result is a really nice database platform that provides a lot of flexibility, as well as the ability to save money.

New Basic and Premium Tiers for Elastic Database Pools

Earlier this year at the //Build conference we announced our new Elastic Database Pool support in Azure and entered public preview with the Standard Tier edition of it.  The Standard Tier allows individual databases within the elastic pool to burst up to 100 eDTUs (a DTU represents a combination of Compute + IO + Storage performance) for performance. 

Today we are adding additional Basic and Premium Elastic Database Pools to the preview to enable a wider range of performance and cost options.

  • Basic Elastic Database Pools are great for light-usage SaaS scenarios.  Basic Elastic Database Pools allow individual databases to burst up to 5 eDTUs.
  • Premium Elastic Database Pools are designed for databases that require the highest performance per database. Premium Elastic Database Pools allow individual databases to burst up to 1,000 eDTUs.

Collectively we think these three Elastic Database Pool pricing tier options provide a tremendous amount of flexibility and optionality for SaaS developers to take advantage of, and are designed to enable a wide variety of different scenarios.

Easily Migrate Databases Between Pricing Tiers

One of the cool capabilities we support is the ability to easily migrate an individual database between different Elastic Database Pools (including ones with different pricing tiers).  For example, if you were a SaaS developer you could start a customer out with a trial edition of your application and run the database that backs it within a Basic Elastic Database Pool, super cost-effectively.  As the customer’s usage grows you could then auto-migrate them to a Standard database pool without customer downtime.  If the customer grows to require a tremendous amount of resources you could then migrate them to a Premium Database Pool or run their database as a standalone SQL Database with a huge amount of resource capacity.

This provides a tremendous amount of flexibility and capability, and enables you to build even better applications.

Managing Elastic Database Pools

One of the other nice things about Elastic Database Pools is that the service provides the management capabilities to easily manage large collections of databases without you having to worry about the infrastructure that runs them.

You can create and manage Elastic Database Pools using our Azure Management Portal or via our command-line tools or REST management APIs.  With today’s update we are also adding support so that you can use T-SQL to add/remove databases to/from an elastic pool.  Today’s update also adds T-SQL support for measuring resource utilization of databases within an elastic pool, making it even easier to monitor and track utilization by database.
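As a hedged sketch of what the T-SQL route can look like, the snippet below moves an existing database into a pool via pyodbc. The post does not show the preview-era syntax; the ALTER DATABASE ... SERVICE_OBJECTIVE = ELASTIC_POOL form shown is the syntax documented for the service later, and the server, database and pool names are hypothetical placeholders.

```python
# Hedged sketch: move an existing database into an elastic pool with T-SQL, executed via
# pyodbc. The statement uses the ELASTIC_POOL service objective syntax documented after
# the preview; server, database and pool names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"  # hypothetical logical server
    "Database=master;UID=admin_user;PWD=<password>"
)
conn.autocommit = True  # ALTER DATABASE cannot run inside a transaction

# Assign the database to the pool; the change is applied online by the service.
conn.execute(
    "ALTER DATABASE [customer42] "
    "MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [standard-pool-01]))"
)
conn.close()
```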

Elastic Database Pool Tier Capabilities

During the preview, we have been and will continue to tune a number of parameters that control the density of Elastic Database Pools as we progress through the preview.

In particular, the current limits for the number of databases per pool and the number of pool eDTUs is something we plan to steadily increase as we march towards the general availability release.  Our plan is to provide the highest possible density per pool, largest pool sizes, and the best Elastic Database Pool economics while at the same time keeping our 99.99 availability SLA.

Below are the current performance parameters for each of the Elastic Database Pool Tier options in preview today:

 

                                              Basic Elastic             Standard Elastic          Premium Elastic

Elastic Database Pool
  eDTU range per pool (preview limits)        100-1200 eDTUs            100-1200 eDTUs            125-1500 eDTUs
  Storage range per pool                      10-120 GB                 100-1200 GB               63-750 GB
  Maximum databases per pool (preview limits) 200                       200                       50
  Estimated monthly pool cost (preview)       from $0.2/hr (~$149/mo)   from $0.3/hr (~$223/mo)   from $0.937/hr (~$697/mo)
  Each additional eDTU                        $0.002/hr (~$1.49/mo)     $0.003/hr (~$2.23/mo)     $0.0075/hr (~$5.58/mo)
  Storage per eDTU                            0.1 GB per eDTU           1 GB per eDTU             0.5 GB per eDTU

Elastic Databases
  eDTU max per database (preview limits)      0-5                       0-100                     0-1000
  Storage max per database                    2 GB                      250 GB                    500 GB
  Per-database cost (preview prices)          $0.0003/hr (~$0.22/mo)    $0.0017/hr (~$1.26/mo)    $0.0084/hr (~$6.25/mo)
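To put the preview prices in context, here is a small estimator for a Standard pool's hourly cost built only from the numbers in the table above. It assumes (my assumption, not stated in the post) that the "starting at" price covers the pool's 100 eDTU minimum, and it includes the preview per-database fee, which a later announcement covered earlier on this page removed.

```python
# Hedged sketch: estimate the hourly preview price of a Standard elastic pool from the
# table above. Assumes the "starting at" price covers the first 100 eDTUs and that the
# preview per-database fee still applies (it was later eliminated).
STANDARD_BASE_EDTUS = 100
STANDARD_BASE_PER_HOUR = 0.30         # $/hr, assumed to cover the first 100 eDTUs
STANDARD_EXTRA_EDTU_PER_HOUR = 0.003  # $/hr for each additional eDTU
STANDARD_PER_DB_PER_HOUR = 0.0017     # $/hr per database (preview pricing)

def standard_pool_hourly_cost(pool_edtus: int, database_count: int) -> float:
    """Estimated hourly cost of a Standard pool under the preview price list."""
    if not 100 <= pool_edtus <= 1200:
        raise ValueError("Standard pools support 100-1200 eDTUs in preview")
    extra_edtus = max(0, pool_edtus - STANDARD_BASE_EDTUS)
    return (STANDARD_BASE_PER_HOUR
            + extra_edtus * STANDARD_EXTRA_EDTU_PER_HOUR
            + database_count * STANDARD_PER_DB_PER_HOUR)

# Example: a 400 eDTU Standard pool hosting 150 customer databases.
hourly = standard_pool_hourly_cost(400, 150)
print(f"~${hourly:.2f}/hr, ~${hourly * 744:.0f}/month")  # 744 hours is roughly one month
```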

We’ll continue to iterate on the above parameters and increase the maximum number of databases per pool as we progress through the preview, and would love your feedback as we do so.

New Higher-Scale SQL Database Performance Tiers

In addition to the enhancements for Elastic Database Pools, we are also today releasing new SQL Database Premium performance tier options for standalone databases. 

Today we are adding a new P4 (500 DTU) and a P11 (1750 DTU) level which provide even higher performance database options for SQL Databases that want to scale-up. The new P11 edition also now supports databases up to 1TB in size.

Developers can now choose from 10 different SQL Database performance levels.  You can easily scale up or scale down as needed at any point without database downtime or interruption.  Each database performance tier supports a 99.99% SLA, multiple high-availability replica support built in with no extra charges (meaning you don’t need to buy multiple instances to get an SLA; this is built into each database), no downtime during patching, point-in-time recovery options (restore without needing a backup), TDE encryption of data, row-level security, and full-text search.


Learn More

You can learn more about SQL Databases by visiting the http://azure.microsoft.com web-site.  Check out the SQL Database product page to learn more about the capabilities SQL Databases provide, as well as read the technical documentation to learn more how to build great applications using it.

Summary

Today’s database updates enable developers to build even better cloud applications, and to use data to make them even richer and more intelligent.  We are really looking forward to seeing the solutions you build.

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing Windows Server 2016 Containers Preview

ScottGu's Blog - Scott Guthrie - Wed, 08/19/2015 - 17:01

At DockerCon this year, Mark Russinovich, CTO of Microsoft Azure, demonstrated the first ever application built using code running in both a Windows Server Container and a Linux container connected together. This demo helped demonstrate Microsoft's vision that in partnership with Docker, we can help bring the Windows and Linux ecosystems together by enabling developers to build container-based distributed applications using the tools and platforms of their choice.

Today we are excited to release the first preview of Windows Server Containers as part of our Windows Server 2016 Technical Preview 3 release. We’re also announcing great updates from our close collaboration with Docker, including enabling support for the Windows platform in the Docker Engine and a preview of the Docker Engine for Windows. Our Visual Studio Tools for Docker, which we previewed earlier this year, have also been updated to support Windows Server Containers, providing you a seamless end-to-end experience straight from Visual Studio to develop and deploy code to both Windows Server and Linux containers. Last but not least, we’ve made it easy to get started with Windows Server Containers in Azure via a dedicated virtual machine image.

Windows Server Containers

Windows Server Containers create a highly agile Windows Server environment, enabling you to accelerate the DevOps process to efficiently build and deploy modern applications. With today’s preview release, millions of Windows developers will be able to experience the benefits of containers for the first time using the languages of their choice – whether .NET, ASP.NET, PowerShell or Python, Ruby on Rails, Java and many others.

Today’s announcement delivers on the promise we made in partnership with Docker, the fast-growing open platform for distributed applications, to offer container and DevOps benefits to Linux and Windows Server users alike. Windows Server Containers are now part of the Docker open source project, and Microsoft is a founding member of the Open Container Initiative. Windows Server Containers can be deployed and managed either using the Docker client or PowerShell.

Getting Started using Visual Studio

The preview of our Visual Studio Tools for Docker, which enables developers to build and publish ASP.NET 5 Web Apps or console applications directly to a Docker container, has been updated to include support for today’s preview of Windows Server Containers. The extension automates creating and configuring your container host in Azure, building a container image which includes your application, and publishing it directly to your container host. You can download and install this extension, and read more about it, at the Visual Studio Gallery here: http://aka.ms/vslovesdocker.

Once installed, developers can right-click on their projects within Visual Studio and select “Publish”:


Doing so will display a Publish dialog which will now include the ability to deploy to a Docker Container (on either a Windows Server or Linux machine):


You can choose to deploy to any existing Docker host you already have running:


Or use the dialog to create a new Virtual Machine running either Windows Server or Linux with containers enabled.  The below screenshot shows how easy it is to create a new VM hosted on Azure that runs today’s Windows Server 2016 TP3 preview that supports Containers – you can do all of this (and deploy your apps to it) easily without ever having to leave the Visual Studio IDE:

Getting Started Using Azure

In June of last year, at the first DockerCon, we enabled a streamlined Azure experience for creating and managing Docker hosts in the cloud. Up until now these hosts have only run on Linux. With the new preview of Windows Server 2016 supporting Windows Server Containers, we have enabled a parallel experience for Windows users.

Directly from the Azure Marketplace, users can now deploy a Windows Server 2016 virtual machine pre-configured with the container feature enabled and Docker Engine installed. Our quick start guide has all of the details including screen shots and a walkthrough video so take a look here https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/azure_setup.


Once your container host is up and running, the quick start guide includes step by step guides for creating and managing containers using both Docker and PowerShell.

Getting Started Locally Using Hyper-V

Creating a virtual machine on your local machine using Hyper-V to act as your container host is now really easy. We’ve published some PowerShell scripts to GitHub that automate nearly the whole process so that you can get started experimenting with Windows Server Containers as quickly as possible. The quick start guide has all of the details at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/container_setup.

Once your container host is up and running, the quick start guide includes step by step guides for creating and managing containers using both Docker and PowerShell.

Additional Information and Resources

A great list of resources including links to past presentations on containers, blogs and samples can be found in the community section of our documentation. We have also set up a dedicated Windows containers forum where you can provide feedback, ask questions and report bugs. If you want to learn more about the technology behind containers I would highly recommend reading Mark Russinovich’s blog on “Containers: Docker, Windows and Trends” that was published earlier this week.

Summary

At the //Build conference earlier this year we talked about our plan to make containers a fundamental part of our application platform, and today’s releases are a set of significant steps in making this a reality. The decision we made to embrace Docker and the Docker ecosystem to enable this in both Azure and Windows Server has generated a lot of positive feedback and we are just getting started.

While there is still more work to be done, users in the Windows Server ecosystem can now begin experiencing the world of containers. I highly recommend you download the Visual Studio Tools for Docker, create a Windows Container host in Azure or locally, and try out our PowerShell and Docker support. Most importantly, we look forward to hearing feedback on your experience.

Hope this helps,

Scott

Categories: Architecture, Programming

Another elasticsearch blog post, now about Shield

Gridshore - Thu, 01/29/2015 - 21:13

I just wrote another piece on my other blog, this time about the recently released elasticsearch plugin called Shield. If you want to learn more about securing your elasticsearch cluster, please head over to my other blog and start reading:

http://amsterdam.luminis.eu/2015/01/29/elasticsearch-shield-first-steps-using-java/

The post Another elasticsearch blog post, now about Shield appeared first on Gridshore.

Categories: Architecture, Programming

New blog posts about bower, grunt and elasticsearch

Gridshore - Mon, 12/15/2014 - 08:45

Two new blog posts I want to point out to you all. I wrote these posts on my employer’s blog:

The first post is about creating backups of your elasticsearch cluster. Some time ago elasticsearch introduced the snapshot/restore functionality. Of course you can use the REST endpoint directly, but it is a lot easier if a plugin handles the snapshots for you, or maybe even better, if the functionality is integrated into your own Java application. That is what this blog post is about: integrating snapshot/restore functionality into your Java application. As a bonus there are screenshots of my elasticsearch gui project showing the snapshot/restore functionality.

Creating elasticsearch backups with snapshot/restore

The second blog post I want to bring to your attention is front-end oriented. I already mentioned my elasticsearch gui project, which is an AngularJS application. I have been working on the plugin for a long time and the amount of JavaScript code keeps increasing, so I wanted to introduce Grunt and Bower to the project. That is what this blog post is about.

Improve my AngularJS project with grunt

The post New blog posts about bower, grunt and elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

New blogpost on kibana 4 beta

Gridshore - Tue, 12/02/2014 - 12:55

If you are, like me, interested in elasticsearch and kibana, then you might be interested in a blog post I wrote on my employer’s blog about the new Kibana 4 beta. If so, head over to my employer’s blog:

http://amsterdam.luminis.eu/2014/12/01/experiment-with-the-kibana-4-beta/

The post New blogpost on kibana 4 beta appeared first on Gridshore.

Categories: Architecture, Programming