
ScottGu's Blog - Scott Guthrie
Scott Guthrie lives in Seattle and builds a few products for Microsoft

AzureCon Keynote Announcements: India Regions, GPU Support, IoT Suite, Container Service, and Security Center

Thu, 10/01/2015 - 06:43

Yesterday we held our AzureCon event and were fortunate to have tens of thousands of developers around the world participate.  During the event we announced several great new enhancements to Microsoft Azure including:

  • General Availability of 3 new Azure regions in India
  • Announcing new N-series of Virtual Machines with GPU capabilities
  • Announcing Azure IoT Suite available to purchase
  • Announcing Azure Container Service
  • Announcing Azure Security Center

We were also fortunate to be joined on stage by several great Azure customers who talked about their experiences using Azure, including Jet.com, Nascar, Alaska Airlines, Walmart, and ThyssenKrupp.

Watching the Videos

All of the talks presented at AzureCon (including the 60 breakout talks) are now available to watch online.  You can browse and watch all of the sessions here.


My keynote to kick off the event was an hour long and provided an end-to-end look at Azure and some of the big new announcements of the day.  You can watch it here.

Below are some more details of some of the highlights:

Announcing General Availability of 3 new Azure regions in India

Yesterday we announced the general availability of our new India regions: Mumbai (West), Chennai (South) and Pune (Central).  They are now available for you to deploy solutions into.

This brings our worldwide presence of Azure regions up to 24 regions, more than AWS and Google combined. Over 125 customers and partners have been participating in the private preview of our new India regions.  We are seeing tremendous interest from industry sectors like Public Sector, Banking Financial Services, Insurance and Healthcare, whose cloud adoption has been restricted by data residency requirements.  You can all now deploy your solutions there too.

Announcing N-series of Virtual Machines with GPU Support

This week we announced our new N-series family of Azure Virtual Machines that enable GPU capabilities.  Featuring NVidia’s best of breed Tesla GPUs, these Virtual Machines will help you run a variety of workloads ranging from remote visualization to machine learning to analytics.

The N-series VMs feature NVidia’s flagship GPU, the K80, which is well supported by NVidia’s CUDA development community. The N-series will also have VM configurations featuring the latest M60, which was recently announced by NVidia. With support for the M60, Azure becomes the first hyperscale cloud provider to bring the capabilities of NVidia’s Quadro high-end graphics support to the cloud. In addition, the N-series combines GPU capabilities with a superfast RDMA interconnect so you can run multi-machine, multi-GPU workloads such as Deep Learning and Skype Translator training.

Announcing Azure Security Center

This week we announced the new Azure Security Center—a new Azure service that gives you visibility and control of the security of your Azure resources, and helps you stay ahead of threats and attacks.  Azure is the first cloud platform to provide unified security management with capabilities that help you prevent, detect, and respond to threats.


The Azure Security Center provides a unified view of your security state, so your team and/or your organization’s security specialists can get the information they need to evaluate risk across the workloads they run in the cloud.  Based on customizable policy, the service can provide recommendations. For example, the policy might be that all web applications should be protected by a web application firewall. If so, the Azure Security Center will automatically detect when web apps you host in Azure don’t have a web application firewall configured, and provide a quick and direct workflow to get a firewall from one of our partners deployed and configured:


Of course, even with the best possible protection in place, attackers will still try to compromise systems. To address this problem and adopt an “assume breach” mindset, the Azure Security Center uses advanced analytics, including machine learning, along with Microsoft’s global threat intelligence network to look for and alert on attacks. Signals are automatically collected from your Azure resources, the network, and integrated security partner solutions and analyzed to identify cyber-attacks that might otherwise go undetected. Should an incident occur, security alerts offer insights into the attack and suggest ways to remediate and recover quickly. Security data and alerts can also be piped to existing Security Information and Event Management (SIEM) systems your organization has already purchased and is using on-premises.


No other cloud vendor provides the depth and breadth of these capabilities, and they are going to enable you to build even more secure applications in the cloud.

Announcing Azure IoT Suite Available to Purchase

The Internet of Things (IoT) provides tremendous new opportunities for organizations to improve operations, become more efficient at what they do, and create new revenue streams.  We have had a huge interest in our Azure IoT Suite which until this week has been in public preview.  Our customers like Rockwell Automation and ThyssenKrupp Elevators are already connecting data and devices to solve business problems and improve their operations. Many more businesses are poised to benefit from IoT by connecting their devices to collect and analyze untapped data with remote monitoring or predictive maintenance solutions.

In working with customers, we have seen that getting started on IoT projects can be a daunting task: connecting existing devices, determining the right technology partner to work with, and scaling an IoT project from proof of concept to broad deployment. Capturing and analyzing untapped data is complex, particularly when a business tries to integrate this new data with the existing data and systems they already have.

The Microsoft Azure IoT Suite helps address many of these challenges. It helps you connect and integrate with devices more easily, and capture and analyze untapped device data, by using our preconfigured solutions, which are engineered to help you move quickly from proof of concept and testing to broader deployment. Today we support remote monitoring, and soon we will be delivering support for predictive maintenance and asset management solutions.

These solutions reliably capture data in the cloud and analyze it both in real time and in batch. Once your devices are connected, Azure IoT Suite provides real-time information in an intuitive format that helps you take action on insights. Our advanced analytics then enable you to easily process data, even when it comes from a variety of sources including devices, line-of-business assets, sensors, and other systems, and provide rich built-in dashboards and analytics tools for access to the data and insights you need. User permissions can be set to control reporting and share information with the right people in your organization.

Below is an example of the types of built-in dashboard views that you can leverage without having to write any code:


To support adoption of the Azure IoT Suite, we are also announcing the new Microsoft Azure Certified for IoT program, an ecosystem of partners whose offerings have been tested and certified to help businesses with their IoT device and platform needs. The first set of partners includes Beaglebone, Freescale, Intel, Raspberry Pi, Resin.io, Seeed and Texas Instruments. These partners, along with experienced global solution providers, are helping businesses harness the power of the Internet of Things today.

You can learn more about our approach and the Azure IoT Suite at www.InternetofYourThings.com and partners can learn more at www.azure.com/iotdev.

Announcing Azure IoT Hub

This week we also announced the public preview of our new Azure IoT Hub service: a fully managed service that enables reliable and secure bi-directional communications between millions of IoT devices and an application back end. Azure IoT Hub offers reliable device-to-cloud and cloud-to-device hyper-scale messaging, enables secure communications using per-device security credentials and access control, and includes device libraries for the most popular languages and platforms.

Providing secure, scalable bi-directional communication from heterogeneous devices to the cloud is a cornerstone of any IoT solution. Azure IoT Hub addresses it in the following ways:

  • Per-device authentication and secure connectivity: Each device uses its own security key to connect to IoT Hub. The application back end is then able to individually whitelist and blacklist each device, enabling complete control over device access.
  • Extensive set of device libraries: Azure IoT device SDKs are available and supported for a variety of languages and platforms such as C, C#, Java, and JavaScript.
  • IoT protocols and extensibility: Azure IoT Hub provides native support of the HTTP 1.1 and AMQP 1.0 protocols for device connectivity. Azure IoT Hub can also be extended via the Azure IoT protocol gateway open source framework to provide support for MQTT v3.1.1.
  • Scale: Azure IoT Hub scales to millions of simultaneously connected devices, and millions of events per second.
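
To make the identity registry model concrete, below is a minimal C# sketch (not from the original post) of how an application back end might register a device and later disable it. It assumes the Microsoft.Azure.Devices service SDK NuGet package; the hub connection string and device id are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;   // service-side SDK (NuGet: Microsoft.Azure.Devices)

class RegisterDevice
{
    // Placeholder connection string for a shared access policy with registry write rights.
    const string HubConnectionString =
        "HostName=myhub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=...";

    static async Task MainAsync()
    {
        var registry = RegistryManager.CreateFromConnectionString(HubConnectionString);

        // Add a device to the identity registry; IoT Hub generates its symmetric keys.
        Device device = await registry.AddDeviceAsync(new Device("mydevice-001"));
        Console.WriteLine("Device key: {0}",
            device.Authentication.SymmetricKey.PrimaryKey);

        // "Blacklisting" a device is just flipping its status in the registry.
        device.Status = DeviceStatus.Disabled;
        await registry.UpdateDeviceAsync(device);
    }

    static void Main() { MainAsync().Wait(); }
}
```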

Getting started with Azure IoT Hub is easy. Simply navigate to the Azure Preview portal and select Internet of Things -> Azure IoT Hub. Choose the name, pricing tier, number of units, and location, and select Create to provision and deploy your IoT Hub:


Once the IoT hub is created, you can navigate to Settings and create new shared access policies and modify other messaging settings for granular control.

The bi-directional communication enabled with an IoT Hub provides powerful capabilities in a real-world IoT solution, such as control of individual device security credentials and access through the use of a device identity registry.  Once a device identity is in the registry, the device can connect, send device-to-cloud messages to the hub, and receive cloud-to-device messages from back-end applications, all in a secure way with just a few lines of code.
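
As a rough sketch of what those few lines look like on the device side (again an illustration, assuming the Microsoft.Azure.Devices.Client device SDK, with placeholder host name, device id, and key):

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;   // device-side SDK (NuGet: Microsoft.Azure.Devices.Client)

class DeviceSample
{
    static async Task MainAsync()
    {
        // Each device authenticates with its own key from the identity registry.
        var deviceClient = DeviceClient.Create(
            "myhub.azure-devices.net",
            new DeviceAuthenticationWithRegistrySymmetricKey("mydevice-001", "<device key>"));

        // Device-to-cloud: send a telemetry event to the hub.
        var telemetry = new Message(Encoding.UTF8.GetBytes("{\"temp\": 21.5}"));
        await deviceClient.SendEventAsync(telemetry);

        // Cloud-to-device: poll for a command sent by the application back end.
        Message command = await deviceClient.ReceiveAsync();
        if (command != null)
        {
            Console.WriteLine(Encoding.UTF8.GetString(command.GetBytes()));
            await deviceClient.CompleteAsync(command);   // acknowledge so the hub removes it
        }
    }

    static void Main() { MainAsync().Wait(); }
}
```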

Learn more about Azure IoT Hub and get started with your own real world IoT solutions.

Announcing the new Azure Container Service

We’ve been working with Docker to integrate Docker containers with both Azure and Windows Server for some time. This week we announced the new Azure Container Service, which leverages the popular Apache Mesos project to deliver a customer-proven orchestration solution for applications delivered as Docker containers.


The Azure Container Service enables users to easily create and manage a Docker-enabled Apache Mesos cluster. The container management software running on these clusters is open source, and in addition to the application portability offered by tooling such as Docker and Docker Compose, you will be able to leverage portable container orchestration and management tooling such as Marathon, Chronos and Docker Swarm.

When utilizing the Azure Container Service, you will be able to take advantage of tight integration with Azure infrastructure management features such as tagging of resources, Role Based Access Control (RBAC), Virtual Machine Scale Sets (VMSS), and the fully integrated user experience in the Azure portal. By coupling the enterprise-class Azure cloud with key open source build, deploy and orchestration software, we maximize customer choice when it comes to containerized workloads.

The service will be available for preview by the end of the year.

Learn More

Watch the AzureCon sessions online to learn more about all of the above announcements – plus a lot more that was covered during the day.  We are looking forward to seeing what you build with what you learn!

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing General Availability of HDInsight on Linux + new Data Lake Services and Language

Mon, 09/28/2015 - 21:54

Today, I’m happy to announce several key additions to our big data services in Azure, including the General Availability of HDInsight on Linux, as well as the introduction of our new Azure Data Lake and Language services.

General Availability of HDInsight on Linux

Today we are announcing general availability of our HDInsight service on Ubuntu Linux.  HDInsight enables you to easily run managed Hadoop clusters in the cloud.  With today’s release we now allow you to configure these clusters to run using either a Windows Server operating system or an Ubuntu-based Linux operating system.

HDInsight on Linux enables even broader support for Hadoop ecosystem partners to run in HDInsight, providing you even greater choice of preferred tools and applications for running Hadoop workloads. Both Linux and Windows clusters in HDInsight are built on the same standard Hadoop distribution and offer the same set of rich capabilities.

Today’s new release also enables additional capabilities such as cluster scaling, virtual network integration, and script action support. Furthermore, in addition to the Hadoop cluster type, you can now create HBase and Storm clusters on Linux for your NoSQL and real-time processing needs, such as building an IoT application.

Create a cluster

HDInsight clusters running Linux can now be easily created from the Azure Management portal under the Data + Analytics section.  Simply select Ubuntu from the cluster operating system drop-down, and optionally choose the cluster type you wish to create (we support base Hadoop as well as clusters pre-configured for workloads like Storm, Spark, HBase, etc.).


All HDInsight Linux clusters can be managed by Apache Ambari. Ambari provides the ability to customize configuration settings of your Hadoop cluster while giving you a unified view of the performance and state of your cluster and providing monitoring and alerting within the HDInsight cluster.


Installing additional applications and Hadoop components

Similar to HDInsight Windows clusters, you can now customize your Linux cluster by installing additional applications or Hadoop components that are not part of the default HDInsight deployment. This can be accomplished using Bash scripts with the script action capability.  As an example, you can now install Hue on an HDInsight Linux cluster and easily use it with your workloads:


Develop using Familiar Tools

All HDInsight Linux clusters come with SSH connectivity enabled by default. You can connect to the cluster via an SSH client of your choice. Moreover, SSH tunneling can be leveraged to remotely access all of the Hadoop web applications from the browser.

New Azure Data Lake Services and Language

We continue to see customers enabling amazing scenarios with big data in Azure including analyzing social graphs to increase charitable giving, analyzing radiation exposure and using the signals from thousands of devices to simulate ways for utility customers to optimize their monthly bills. These and other use cases are resulting in even more data being collected in Azure. In order to be able to dive deep into all of this data, and process it in different ways, you can now use our Azure Data Lake capabilities – which are 3 services that make big data easy.

The first service in the family is available today: Azure HDInsight, our managed Hadoop service that lets you focus on finding insights, and not spend your time having to manage clusters. HDInsight lets you deploy Hadoop, Spark, Storm and HBase clusters, running on Linux or Windows, managed, monitored and supported by Microsoft with a 99.9% SLA.

The other two services, Azure Data Lake Store and Azure Data Lake Analytics introduced below, are available in private preview today and will be available broadly for public usage shortly.

Azure Data Lake Store

Azure Data Lake Store is a hyper-scale HDFS repository designed specifically for big data analytics workloads in the cloud. Azure Data Lake Store solves the big data challenges of volume, variety, and velocity by enabling you to store data of any type, at any size, and process it at any scale. Azure Data Lake Store can support near real-time scenarios such as the Internet of Things (IoT) as well as throughput-intensive analytics on huge data volumes. The Azure Data Lake Store also supports a variety of computation workloads by removing many of the restrictions constraining traditional analytics infrastructure like the pre-definition of schema and the creation of multiple data silos. Once located in the Azure Data Lake Store, Hadoop-based engines such as Azure HDInsight can easily mine the data to discover new insights.

Some of the key capabilities of Azure Data Lake Store include:

  • Any Data: A distributed file store that allows you to store data in its native format, Azure Data Lake Store eliminates the need to transform or pre-define schema in order to store data.
  • Any Size: With no fixed limits to file or account sizes, Azure Data Lake Store enables you to store kilobytes to exabytes with immediate read/write access.
  • At Any Scale: You can scale throughput to meet the demands of your analytic systems, including the high throughput needed to analyze exabytes of data. In addition, it is built to handle high volumes of small writes at low latency, making it optimal for near real-time scenarios like website analytics and the Internet of Things (IoT).
  • HDFS Compatible: It works out-of-the-box with the Hadoop ecosystem including other Azure Data Lake services such as HDInsight.
  • Fully Integrated with Azure Active Directory: Azure Data Lake Store is integrated with Azure Active Directory for identity and access management over all of your data.
Azure Data Lake Analytics with U-SQL

The new Azure Data Lake Analytics service makes it much easier to create and manage big data jobs. Built on YARN and years of experience running analytics pipelines for Office 365, XBox Live, Windows and Bing, the Azure Data Lake Analytics service is the most productive way to get insights from big data. You can get started in the Azure management portal, querying across data in blobs, Azure Data Lake Store, and Azure SQL DB. By simply moving a slider, you can scale up as much computing power as you’d like to run your data transformation jobs.


Today we are introducing a new U-SQL offering in the analytics service, an evolution of the familiar syntax of SQL.  U-SQL allows you to write declarative big data jobs, as well as easily include your own user code as part of those jobs. Inside Microsoft, developers have been using this combination to operate productively on massive, exabyte-scale data sets, processing mission critical data pipelines. In addition to providing an easy-to-use experience in the Azure management portal, we are delivering a rich set of tools in Visual Studio for debugging and optimizing your U-SQL jobs. This lets you play back and analyze your big data jobs, understanding bottlenecks and opportunities to improve both performance and efficiency, so that you can pay only for the resources you need and continually tune your operations.

Learn More

For more information and to get started, check out the following links:

Hope this helps,


Categories: Architecture, Programming

Online AzureCon Conference this Tuesday

Mon, 09/28/2015 - 04:35

This Tuesday, Sept 29th, we are hosting AzureCon – a free online event with 60 technical sessions on Azure presented by both the Azure engineering team and MVPs and customers who use Azure today and will share their best practices.

I’ll be kicking off the event with a keynote at 9am PDT.  Watch it to learn the latest on Azure, and hear about a lot of exciting new announcements.  We’ll then have some fantastic sessions that you can watch throughout the day to learn even more.


Hope to see you there!


Categories: Architecture, Programming

Better Density and Lower Prices for Azure’s SQL Elastic Database Pools

Wed, 09/23/2015 - 21:41

A few weeks ago, we announced the preview availability of the new Basic and Premium Elastic Database Pool tiers with our Azure SQL Database service.  Elastic Database Pools enable you to run multiple, isolated and independent databases that can be auto-scaled across a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.

Today, we are announcing some nice changes to the pricing structure of Elastic Database Pools as well as changes to the density of elastic databases within a pool.  These changes make it even more attractive to use Elastic Database Pools to build your applications.

Specifically, we are making the following changes:

  • Finalizing the eDTU price – With Elastic Database Pools you purchase units of capacity called eDTUs – which you can then use to run multiple databases within a pool.  We have decided not to increase the price of eDTUs as we go from preview->GA.  This means that you’ll be able to pay a much lower price (about 50% less) for eDTUs than many developers expected.
  • Eliminating the per-database fee – In addition to lower eDTU prices, we are also eliminating the fee per database that we have had during the preview. This means you no longer need to pay a per-database charge to use an Elastic Database Pool, which makes the pricing much more attractive for scenarios where you want to have lots of small databases.
  • Pool density – We are announcing increased density limits that enable you to run many more databases per Elastic Database Pool. See the chart below under “Maximum databases per pool” for specifics. This change will take effect at the time of general availability, but you can design your apps around these numbers today.  The increased pool density limits will make Elastic Database Pools even more attractive.



Below are the updated parameters for each of the Elastic Database Pool options with these new changes:

[Table: updated Elastic Database Pool parameters – not preserved in this copy]

For more information about Azure SQL Database Elastic Database Pools and management tools, go to the technical overview here.

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing the Biggest VM Sizes Available in the Cloud: New Azure GS-VM Series

Wed, 09/02/2015 - 18:51

Today, we’re announcing the release of the new Azure GS-series of Virtual Machine sizes, which enable Azure Premium Storage to be used with Azure G-series VM sizes. These VM sizes are now available to use in both our US and Europe regions.

Earlier this year we released the G-series of Azure Virtual Machines – which provide the largest VM sizes offered by any public cloud provider.  They provide up to 32 cores of CPU, 448 GB of memory, and 6.59 TB of local SSD-based storage.  Today’s release of the GS-series of Azure Virtual Machines enables you to use these large VMs with Azure Premium Storage – and enables you to perform up to 2,000 MB/sec of storage throughput, more than double any other public cloud provider.  The G5/GS5 VM size also offers more than 20 Gbps of network bandwidth, again more than double the network throughput provided by any other public cloud provider.

These new VM offerings provide an ideal solution for your most demanding cloud-based workloads, and are great for relational databases like SQL Server, MySQL, PostGres, and other large data warehouse solutions. You can also use the GS-series to significantly scale up the performance of enterprise applications like Dynamics AX.

The G and GS-series of VM sizes are available to use now in our West US, East US-2, and West Europe Azure regions.  You’ll see us continue to expand availability around the world in more regions in the coming months.

GS Series Size Details

The below table provides more details on the exact capabilities of the new GS-series of VM sizes:




[Table: GS-series VM size details, including Max Disk IOPS and Max Disk Bandwidth (MB per second) per size – values not preserved in this copy]

Creating a GS-Series Virtual Machine

Creating a new GS series VM is very easy.  Simply navigate to the Azure Preview Portal, select New(+) and choose your favorite OS or VM image type:


Click the Create button, and then click the pricing tier option and select “View All” to see the full list of VM sizes. Make sure your region is West US, East US 2, or West Europe to select the G-series or the GS-Series:


When choosing a GS-series VM size, the portal will create a storage account using Premium Azure Storage. You can select an existing Premium Storage account, as well, to use for the OS disk of the VM:


Hitting Create will launch and provision the VM.

Learn More

If you would like more information on the GS-Series VM sizes as well as other Azure VM Sizes then please visit the following page for additional details: Virtual Machine Sizes for Azure.

For more information on Premium Storage, please see: Premium Storage overview. Also, refer to Using Linux VMs with Premium Storage for more details on Linux deployments on Premium Storage.

Hope this helps,


Categories: Architecture, Programming

Announcing Great New SQL Database Capabilities in Azure

Thu, 08/27/2015 - 17:13

Today we are making available several new SQL Database capabilities in Azure that enable you to build even better cloud applications.  In particular:

  • We are introducing two new pricing tiers for our  Elastic Database Pool capability.  Elastic Database Pools enable you to run multiple, isolated and independent databases on a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.
  • We are also introducing new higher-end scale options for SQL Databases that enable you to run even larger databases with significantly more compute + storage + networking resources.

Both of these additions are available to start using immediately.

Elastic Database Pools

If you are a SaaS developer with tens, hundreds, or even thousands of databases, an elastic database pool dramatically simplifies the process of creating, maintaining, and managing performance across these databases within a budget that you control. 


A common SaaS application pattern (especially for B2B SaaS apps) is for the SaaS app to use a different database to store data for each customer.  This has the benefit of isolating the data for each customer separately (and enables each customer’s data to be encrypted separately, backed up separately, etc.).  While this pattern is great from an isolation and security perspective, each database can end up having varying and unpredictable resource consumption (CPU/IO/memory patterns), and because the peaks and valleys for each customer might be difficult to predict, it is hard to know how much capacity to provision.  Developers were previously faced with two options: either over-provision database resources based on peak usage and overpay, or under-provision to save cost at the expense of performance and customer satisfaction during peaks.

Microsoft created elastic database pools specifically to help developers solve this problem.  With Elastic Database Pools you can allocate a shared pool of database resources (CPU/IO/memory), and then create and run multiple isolated databases on top of this pool.  You can set minimum and maximum performance SLA limits of your choosing for each database you add into the pool (ensuring that none of the databases unfairly impacts other databases in your pool).  Our management APIs also make it much easier to script and manage these multiple databases together, as well as optionally execute queries that span across them (useful for a variety of operations).  And best of all, when you add multiple databases to an Elastic Database Pool, you are able to average out the typical utilization load (because each of your customers tends to have different peaks and valleys) and end up requiring far fewer database resources (and spending less money as a result) than you would if you ran each database separately.

The below chart shows a typical example of what we see when SaaS developers take advantage of the Elastic Pool capability.  Each individual database has different peaks and valleys in terms of utilization.  As you combine multiple of these databases into an Elastic Pool, the peaks and valleys tend to normalize out (since they often happen at different times), requiring far fewer overall resources than you would need if each database was resourced separately.  For example, if 20 customer databases each peak at 100 DTUs but average only 10, provisioning them separately means paying for 2,000 DTUs of peak capacity, while a shared pool can be sized well below that because the peaks rarely coincide:

[Chart: databases sharing eDTUs]

Because Elastic Database Pools are built using our SQL Database service, you also get to take advantage of all of the underlying database-as-a-service capabilities that are built into it: 99.99% SLA, multiple high-availability replica support built in with no extra charges, no downtime during patching, geo-replication, point-in-time recovery, TDE encryption of data, row-level security, full-text search, and much more.  The end result is a really nice database platform that provides a lot of flexibility, as well as the ability to save money.

New Basic and Premium Tiers for Elastic Database Pools

Earlier this year at the //Build conference we announced our new Elastic Database Pool support in Azure and entered public preview with the Standard Tier edition of it.  The Standard Tier allows individual databases within the elastic pool to burst up to 100 eDTUs (a DTU represents a combination of compute + IO + storage performance).

Today we are adding additional Basic and Premium Elastic Database Pools to the preview to enable a wider range of performance and cost options.

  • Basic Elastic Database Pools are great for light-usage SaaS scenarios.  Basic Elastic Database Pools allow individual database performance bursts of up to 5 eDTUs.
  • Premium Elastic Database Pools are designed for databases that require the highest performance per database. Premium Elastic Database Pools allow individual database performance bursts of up to 1,000 eDTUs.

Collectively we think these three Elastic Database Pool pricing tier options provide a tremendous amount of flexibility and optionality for SaaS developers to take advantage of, and are designed to enable a wide variety of different scenarios.

Easily Migrate Databases Between Pricing Tiers

One of the cool capabilities we support is the ability to easily migrate an individual database between different Elastic Database Pools (including ones with different pricing tiers).  For example, if you were a SaaS developer you could start a customer out with a trial edition of your application – and choose to run the database that backs it within a Basic Elastic Database Pool to run it super cost effectively.  As the customer’s usage grows you could then auto-migrate them to a Standard database pool without customer downtime.  If the customer grows up to require a tremendous amount of resources you could then migrate them to a Premium Database Pool or run their database as a standalone SQL Database with a huge amount of resource capacity.

This provides a tremendous amount of flexibility and capability, and enables you to build even better applications.

Managing Elastic Database Pools

One of the other nice things about Elastic Database Pools is that the service provides the management capabilities to easily manage large collections of databases without you having to worry about the infrastructure that runs them.

You can create and manage Elastic Database Pools using our Azure Management Portal or via our command-line tools or REST management APIs.  With today’s update we are also adding support so that you can use T-SQL to add/remove databases to/from an elastic pool.  Today’s update also adds T-SQL support for measuring the resource utilization of databases within an elastic pool – making it even easier to monitor and track utilization by database.
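
As a hedged illustration of that T-SQL support, the C# sketch below connects to the logical server’s master database, moves an existing database into a pool, and samples the pool utilization DMV. The server, pool, and database names are placeholders, and the exact T-SQL surface may evolve during the preview:

```csharp
using System;
using System.Data.SqlClient;   // classic ADO.NET client

class ElasticPoolAdmin
{
    static void Main()
    {
        // Placeholder server and credentials; connect to the logical server's master database.
        var connStr = "Server=tcp:myserver.database.windows.net;Database=master;" +
                      "User ID=admin@myserver;Password=...;Encrypt=True;";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Move an existing database into an elastic pool via T-SQL.
            using (var cmd = new SqlCommand(
                "ALTER DATABASE [customer42] MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = [mypool]));",
                conn))
            {
                cmd.ExecuteNonQuery();
            }

            // Read recent resource utilization for the pool from the master DMV.
            using (var cmd = new SqlCommand(
                "SELECT TOP 5 end_time, avg_cpu_percent " +
                "FROM sys.elastic_pool_resource_stats " +
                "WHERE elastic_pool_name = 'mypool' ORDER BY end_time DESC;", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}% CPU", reader[0], reader[1]);
            }
        }
    }
}
```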

Elastic Database Pool Tier Capabilities

During the preview, we have been tuning, and will continue to tune, a number of parameters that control the density of Elastic Database Pools.

In particular, the current limits on the number of databases per pool and the number of pool eDTUs are things we plan to steadily increase as we march towards the general availability release.  Our plan is to provide the highest possible density per pool, the largest pool sizes, and the best Elastic Database Pool economics, while at the same time keeping our 99.99% availability SLA.

Below are the current performance parameters for each of the Elastic Database Pool Tier options in preview today:


Basic Elastic Database Pool
  • eDTU range per pool (preview limits): 100-1200 eDTUs
  • Storage range per pool: 10-120 GB
  • Storage per eDTU: 0.1 GB per eDTU
  • Estimated monthly pool and add-on eDTU costs (preview prices): starting at $0.2/hr (~$149/pool/mo); each additional eDTU $0.002/hr (~$1.49/mo)
  • eDTU max per database (preview limits): 5 eDTUs
  • Storage max per database: 2 GB
  • Per-database cost (preview prices): $0.0003/hr (~$0.22/mo)

Standard Elastic Database Pool
  • eDTU range per pool (preview limits): 100-1200 eDTUs
  • Storage range per pool: 100-1200 GB
  • Storage per eDTU: 1 GB per eDTU
  • Estimated monthly pool and add-on eDTU costs (preview prices): starting at $0.3/hr (~$223/pool/mo); each additional eDTU $0.003/hr (~$2.23/mo)
  • eDTU max per database (preview limits): 100 eDTUs
  • Storage max per database: 250 GB
  • Per-database cost (preview prices): $0.0017/hr (~$1.26/mo)

Premium Elastic Database Pool
  • eDTU range per pool (preview limits): 125-1500 eDTUs
  • Storage range per pool: 63-750 GB
  • Storage per eDTU: 0.5 GB per eDTU
  • Estimated monthly pool and add-on eDTU costs (preview prices): starting at $0.937/hr (~$697/pool/mo); each additional eDTU $0.0075/hr (~$5.58/mo)
  • eDTU max per database (preview limits): 1,000 eDTUs
  • Storage max per database: 500 GB
  • Per-database cost (preview prices): $0.0084/hr (~$6.25/mo)

(The “Maximum databases per pool” values from the original table were not preserved in this copy.)

We’ll continue to iterate on the above parameters and increase the maximum number of databases per pool as we progress through the preview, and would love your feedback as we do so.

New Higher-Scale SQL Database Performance Tiers

In addition to the enhancements for Elastic Database Pools, we are also today releasing new SQL Database Premium performance tier options for standalone databases. 

Today we are adding new P4 (500 DTU) and P11 (1750 DTU) levels, which provide even higher-performance database options for SQL Databases that need to scale up. The new P11 edition also supports databases up to 1 TB in size.

Developers can now choose from 10 different SQL Database performance levels.  You can easily scale up or down as needed at any point without database downtime or interruption.  Each database performance tier supports a 99.99% SLA, multiple high-availability replica support built in with no extra charges (meaning you don’t need to buy multiple instances to get an SLA – this is built into each database), no downtime during patching, point-in-time recovery options (restore without needing a backup), TDE encryption of data, row-level security, and full-text search.
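
For illustration, here is a small C# sketch (with placeholder server and database names, not from the original post) of scaling a standalone database to the new P11 level with a single T-SQL statement; the resize proceeds in the background while the database stays online:

```csharp
using System.Data.SqlClient;

class ScaleDatabase
{
    static void Main()
    {
        // Placeholder connection to the logical server's master database.
        var connStr = "Server=tcp:myserver.database.windows.net;Database=master;" +
                      "User ID=admin@myserver;Password=...;Encrypt=True;";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Scale up to the P11 tier (1750 DTUs, up to 1 TB) without downtime;
            // the statement returns quickly and the change is applied online.
            using (var cmd = new SqlCommand(
                "ALTER DATABASE [mydb] MODIFY (EDITION = 'Premium', " +
                "SERVICE_OBJECTIVE = 'P11', MAXSIZE = 1024 GB);", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```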


Learn More

You can learn more about SQL Databases by visiting the http://azure.microsoft.com web-site.  Check out the SQL Database product page to learn more about the capabilities SQL Databases provide, and read the technical documentation to learn more about how to build great applications using it.


Today’s database updates enable developers to build even better cloud applications, and to use data to make them even richer and more intelligent.  We are really looking forward to seeing the solutions you build.

Hope this helps,


Categories: Architecture, Programming

Announcing Windows Server 2016 Containers Preview

Wed, 08/19/2015 - 17:01

At DockerCon this year, Mark Russinovich, CTO of Microsoft Azure, demonstrated the first ever application built using code running in both a Windows Server Container and a Linux container connected together. This demo helped demonstrate Microsoft's vision that in partnership with Docker, we can help bring the Windows and Linux ecosystems together by enabling developers to build container-based distributed applications using the tools and platforms of their choice.

Today we are excited to release the first preview of Windows Server Containers as part of our Windows Server 2016 Technical Preview 3 release. We’re also announcing great updates from our close collaboration with Docker, including enabling support for the Windows platform in the Docker Engine and a preview of the Docker Engine for Windows. Our Visual Studio Tools for Docker, which we previewed earlier this year, have also been updated to support Windows Server Containers, providing you a seamless end-to-end experience straight from Visual Studio to develop and deploy code to both Windows Server and Linux containers. Last but not least, we’ve made it easy to get started with Windows Server Containers in Azure via a dedicated virtual machine image.

Windows Server Containers

Windows Server Containers create a highly agile Windows Server environment, enabling you to accelerate the DevOps process to efficiently build and deploy modern applications. With today’s preview release, millions of Windows developers will be able to experience the benefits of containers for the first time using the languages of their choice – whether .NET, ASP.NET, PowerShell, Python, Ruby on Rails, Java, or many others.

Today’s announcement delivers on the promise we made in partnership with Docker, the fast-growing open platform for distributed applications, to offer container and DevOps benefits to Linux and Windows Server users alike. Windows Server Containers are now part of the Docker open source project, and Microsoft is a founding member of the Open Container Initiative. Windows Server Containers can be deployed and managed either using the Docker client or PowerShell.

Getting Started using Visual Studio

The preview of our Visual Studio Tools for Docker, which enables developers to build and publish ASP.NET 5 Web Apps or console applications directly to a Docker container, has been updated to include support for today’s preview of Windows Server Containers. The extension automates creating and configuring your container host in Azure, building a container image which includes your application, and publishing it directly to your container host. You can download and install this extension, and read more about it, at the Visual Studio Gallery here: http://aka.ms/vslovesdocker.

Once installed, developers can right-click on their projects within Visual Studio and select “Publish”:


Doing so will display a Publish dialog which will now include the ability to deploy to a Docker Container (on either a Windows Server or Linux machine):


You can choose to deploy to any existing Docker host you already have running:


Or use the dialog to create a new Virtual Machine running either Windows Server or Linux with containers enabled.  The below screenshot shows how easy it is to create a new VM hosted on Azure that runs today’s Windows Server 2016 TP3 preview that supports containers – you can do all of this (and deploy your apps to it) easily without ever having to leave the Visual Studio IDE:

Getting Started Using Azure

In June of last year, at the first DockerCon, we enabled a streamlined Azure experience for creating and managing Docker hosts in the cloud. Up until now these hosts have only run on Linux. With the new preview of Windows Server 2016 supporting Windows Server Containers, we have enabled a parallel experience for Windows users.

Directly from the Azure Marketplace, users can now deploy a Windows Server 2016 virtual machine pre-configured with the container feature enabled and Docker Engine installed. Our quick start guide has all of the details, including screenshots and a walkthrough video, so take a look here: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/azure_setup.


Once your container host is up and running, the quick start guide includes step-by-step guides for creating and managing containers using both Docker and PowerShell.

Getting Started Locally Using Hyper-V

Creating a virtual machine on your local machine using Hyper-V to act as your container host is now really easy. We’ve published some PowerShell scripts to GitHub that automate nearly the whole process so that you can get started experimenting with Windows Server Containers as quickly as possible. The quick start guide has all of the details at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/container_setup.

Once your container host is up and running, the quick start guide includes step-by-step guides for creating and managing containers using both Docker and PowerShell.

Additional Information and Resources

A great list of resources including links to past presentations on containers, blogs and samples can be found in the community section of our documentation. We have also set up a dedicated Windows containers forum where you can provide feedback, ask questions and report bugs. If you want to learn more about the technology behind containers I would highly recommend reading Mark Russinovich’s blog on “Containers: Docker, Windows and Trends” that was published earlier this week.

Summary

At the //Build conference earlier this year we talked about our plan to make containers a fundamental part of our application platform, and today’s releases are a set of significant steps in making this a reality. The decision we made to embrace Docker and the Docker ecosystem to enable this in both Azure and Windows Server has generated a lot of positive feedback and we are just getting started.

While there is still more work to be done, users in the Windows Server ecosystem can now begin experiencing the world of containers. I highly recommend you download the Visual Studio Tools for Docker, create a Windows container host in Azure or locally, and try out our PowerShell and Docker support. Most importantly, we look forward to hearing feedback on your experience.

Hope this helps,

Scott

Categories: Architecture, Programming