
Architecture

Technologies to look into as a sysadmin

Agile Testing - Grig Gheorghiu - Tue, 05/20/2014 - 18:58
These are some of the technologies that I think are either established or new and promising, but all useful for sysadmins, no matter what their level of expertise is. Some of them I am already familiar with, some are on my TODO list, and some I am currently exploring. They all reflect my own taste, so YMMV.

Operating systems
  • Ubuntu

Programming/scripting languages
  • Go
  • Python/Ruby

Configuration management
  • Chef
  • Ansible

Monitoring/graphing/logging/searching
  • Sensu
  • Graphite
  • Logstash
  • ElasticSearch

Load balancer/Web server
  • HAProxy
  • Nginx

Relational databases
  • MySQL
  • PostgreSQL

Non-relational distributed databases
  • Riak
  • Cassandra

Service discovery
  • etcd
  • consul

Virtualization
  • KVM
  • Vagrant
  • Docker

Software defined networking (SDN)
  • Open vSwitch

IaaS
  • OpenStack

PaaS
  • CloudFoundry


This should keep most people in the industry busy for a while ;-)

It's Networking. In Space! Or How E.T. Will Phone Home.

What will the version of the Internet that follows us to the stars look like? Yes, people are really thinking seriously about this sort of thing. Specifically the InterPlanetary Networking Special Interest Group (IPNSIG).

Ansible-like faster-than-light communication it isn't. There's no magical warp drive. Nor is it a network of telepaths acting as a 'verse-spanning telegraph system.

It's more mundane than that. And in many ways more interesting, as it's sort of like the old Internet on steroids, the one that was based on UUCP and dial-up connections, but over vastly longer distances and with much longer delays:

Categories: Architecture

A Short On How the Wayback Machine Stores More Pages than Stars in the Milky Way

How does the Wayback Machine work? Now with over 400 billion webpages indexed, allowing the Internet to be browsed all the way back to 1996, it's an even more compelling question. I've looked several times but I've never found a really good answer.

Here's some information from a thread on Hacker News. It starts with mmagin, a former Archive employee:

Categories: Architecture

Wrestling with the Browser Event Queue

Xebia Blog - Sat, 05/17/2014 - 16:03

In this post I want to show some things I’ve learned recently about the browser event queue.
It all started with the following innocent looking problem:

I was building a single page web application, and on one form the user should have two options to enter a stock order.
The options were represented as radio buttons, each of them having its own input field. The radio buttons gave the user a choice: he could either enter an order amount or an order quantity.
Both fields should have their own specific input validation, which should be triggered when the user leaves the input field.

Here's the HTML snippet:

  <form>
    <input type="radio" name="options" id="option_amount"> Amount
    <input id="option_amount_input"/>
    <input type="radio" name="options" id="option_quantity"> Quantity
    <input id="option_quantity_input"/>
  </form>

And the JavaScript:

function validate_amount() {
    // use val() to read the input's value; text() would be empty for an <input>
    if(!$.isNumeric($("#option_amount_input").val())) {
      alert("Please enter a valid amount");
    }
}

$("#option_amount_input").on("blur", validate_amount);

All was working fine, except for one scenario: what if the user entered an amount in a wrong way, but then decided he wanted to enter a quantity instead, so he clicked the quantity option?
In that case I wouldn't want the validation for the amount to go off.

Here is my first attempt to fix this scenario:

var shouldValidateAmount = true;

function validate_amount() {
    setTimeout(delayedAmountValidation, 0);
}

function click_quantity() {
    shouldValidateAmount = false;
}

function click_amount() {
    shouldValidateAmount = true;
}

function delayedAmountValidation() {
    if(shouldValidateAmount) {
         if(!$.isNumeric($("#option_amount_input").text())) {
            alert("Please enter a valid amount");
         }
    }
}

$("#option_amount_input").on("blur", validate_amount);
$("#option_quantity").on("click", click_quantity);
$("#option_amount").on("click", click_amount);

The idea is that when the user leaves the amount field, the amount validation will be triggered, but only after a timeout. The timeout is zero, so the validation is simply put at the end of the event queue.
So before the validation is executed, the click handler for the quantity radio option should run first and cancel any postponed validation (or so I thought).

But this didn't work. For some reason the postponed validation was still being executed before the click handler.
Only if I increased the timeout delay until it was high enough, in my case 70ms, would it be executed after the click handler. But that would of course be a very brittle solution.

Here's a diagram showing how the event queue handled my solution:

[Diagram: event queue handling of the first attempt]

So what to do about this? Time to look at the browser event queue in some more depth.

It turns out that the browser treats certain groups of events as batches: groups of events that it considers to logically belong together.
An example is a blur event for one input combined with a focus-gained event for another input.
Timeouts created from within a batch have to wait until all the events of that batch are handled, and only then get a chance to execute.

A blur event and a click event on another element apparently do not belong to the same batch. Therefore my timeout was executed immediately after the blur event was handled, and before the click event handler.

Listening to the focus event on the radio option was no solution either, because getting focus didn't necessarily mean it was selected; the user could have tabbed there as well.
But it turns out that there are more events than just focus that belong to the same batch as blur: mousedown and mouseenter events!

Here is the final solution, which listens for a mouseDown event on the radio button:

[Diagram: event queue batch]
And the code:

var shouldValidateAmount = true;

function validate_amount() {
    setTimeout(delayedAmountValidation, 0);
}

function click_quantity() {
    shouldValidateAmount = false;
}

function click_amount() {
    shouldValidateAmount = true;
}

function delayedAmountValidation() {
    if(shouldValidateAmount) {
         if(!$.isNumeric($("#option_amount_input").text())) {
            alert("Please enter a valid amount");
         }
    }
}

$("#option_amount_input").on("blur", validate_amount);
$("#option_quantity").on("mousedown", click_quantity);
$("#option_amount").on("mousedown", click_amount);

References:
Link to Mozilla doc on events and timing

Beyond software craftsmanship

Coding the Architecture - Simon Brown - Sat, 05/17/2014 - 08:49

I had the pleasure of attending the Island Innovators unconference that took place in Jersey last month ... an event co-hosted by Yossi Vardi (the "godfather of Israel's tech industry") and Daniel Seal in conjunction with Locate Jersey and Digital Jersey. This was very different to the usual software development conferences that I attend, with the background of the attendees being very broad. To give you a flavour, the session highlights for me were discussions about SaaS businesses (including pricing models, etc) and one which presented some fascinating insights into the Israeli culture.

Software development - what's next?

Since this was an unconference, I signed up to lead a session to discuss where software development is heading.

First we talked about the current state of software development and I jotted down some notes on flip chart paper (red writing is bad stuff that happens, green represents some solutions). Nothing new here, but it set the scene for the rest of the discussion.

There was a huge number of ideas to solve the problems outlined above, and rather than writing them all down, I attempted to summarise them based upon a list of principles that a "good software developer" should adopt. This is what the next sheet shows.

Finally, we wrapped up the last few minutes of the session by discussing how all of this could possibly be achieved in the real world.

It's an interesting time for the software development industry. Although there's a ton of debate out there about whether software development is art, craft or engineering, I do know that we need to get better at it. Protected/regulated titles and apprenticeships have all come up in conversations before, but it's a complex issue given the sheer quantity and variety of software development out there. Jersey is a small place and perhaps we can use this to our advantage. Perhaps we should do something on a small scale regarding the professionalism of software developers and see what happens...

Categories: Architecture

Stuff The Internet Says On Scalability For May 16th, 2014

Hey, it's HighScalability time:


Cross Section of an Undersea Cable. It's industrial art. The parts. The story.
  • 400,000,000,000: Wayback Machine pages indexed; 100 billion: Google searches per month; 10 million: Snapchat monthly user growth.
  • Quotable Quotes:
    • @duncanjw: The Great Rewrite - many apps will be rewritten not just replatformed over next 10 years says @cote #openstacksummit
    • @RFFlores: The Openstack conundrum. If you don't adopt it, you will regret it in the future. If you do adopt it, you will regret it now
    • elementai: I love Redis so much, it became like a superglue where "just enough" performance is needed to resolve a bottleneck problem, but you don't have resources to rewrite a whole thing in something fast.
    • @antirez: "when software engineering is reduced to plumbing together generic systems, software engineers lose their sense of ownership"
    • Tom Akehurst: Microservices vs. monolith is a false dichotomy.
    • @joestump: “Keep in mind that any piece of butt-based infrastructure can fail at any time. Plan your infrastructure accordingly.” Ain’t that the truth?
    • @SalesforceEng: Check out the scale of Kafka @LinkedInEng. @bonkoif says these numbers are about a month old. 3.25 million msgs/sec. 
    • Don Neufeld: The first is to look deeply into the stack of implicit assumptions I’m working with. It’s often the unspoken assumptions that are the most important ones. The second flows from the first and it’s to focus less on building the right thing and more how we’re going to meet our immediate needs.
    • Dan Gillmor: We’re in danger of losing what’s made the Internet the most important medium in history – a decentralized platform where the people at the edges of the networks – that would be you and me – don’t need permission to communicate, create and innovate.

  • If you think of a Hotel as an app, hotels have been doing in-app purchases for a long time. They lead with a teaser rate and then charge for anything that might cross a desire-money threshold. Wifi, that's extra. Gym, that's extra. The bar, a cover charge. Drinks, so so expensive. The pool, extra. A lounge by the pool is double extra extra. To go all the way hotels just need to let you stay for free and then fully monetize all the gamification points.

  • Apple: We handle hundreds of millions of active users using some of the most desirable devices on the planet and several billion iMessages/day, 40 billion push notifications/day, 16+ trillion push notifications sent to date.

  • It's a data prison for everyone! Comcast plans data caps for all customers in 5 years, could be 500GB. Or just a few 4K movies.

  • From the future of everything to the verge of extinction. The Slow Decline of Peer-to-Peer File Sharing: People have shifted their activities to streaming over file sharing. Subscribers get quality content at a reasonable price and it's dead simple to use, whereas torrenting or file sharing is a little more complicated.

  • I don't think people understand how hard this is to do in practice. European Court Lets Users Erase Records on Web. Once data is stored on tape, deleting it means rewriting all the non-deleted data to another tape. So it's far more efficient to forget the indexes to the data than to delete the data itself. Which goes against the point, I'd imagine.

  • How is a strategy tax hands off? @parislemon: Instagram's decision to use Facebook's much worse place database over Foursquare's has made the product worse. Stupid.

  • Excellent detailed example of the SEDA architecture in action. Guide to Cassandra Thread Pools. Follow the regal message as it flows from thread pool to thread pool, transforming as it makes its way to its final resting place.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so keep on going)...

Categories: Architecture

Using Dropwizard in combination with Elasticsearch

Gridshore - Thu, 05/15/2014 - 21:09


How often do you start creating a new application? How often have you thought about configuring an application: where to locate a config file, how to load the file, what format to use? Another thing you regularly do is add timers to track execution time, management tools to do thread analysis, etc. From a more functional perspective you want a rich client-side application using AngularJS, so you need a REST backend to deliver JSON documents. Does this sound like something you need regularly? Then this blog post is for you. If you never need this, please keep on reading, you might like it.

In this blog post I will create an application that shows you all the available indexes in your elasticsearch cluster. Not very sexy, but I am going to use AngularJS, Dropwizard and elasticsearch. That should be enough to get a lot of you interested.


What is Dropwizard

Dropwizard is a framework that combines a lot of other frameworks that have become the de facto standard in their own domain: Jersey for the REST interface, Jetty as a lightweight container, Jackson for JSON parsing, Freemarker for front-end templates, Metrics for metrics, and SLF4J for logging. Dropwizard adds some utilities to combine these frameworks and enables you as a developer to be very productive in constructing your application. It provides building blocks like lifecycle management, Resources, Views, loading of bundles, configuration and initialization.

Time to jump in and start creating an application.

Structure of the application

The application is set up as a Maven project. To start off we only need one dependency:

<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>${dropwizard.version}</version>
</dependency>

If you want to follow along, you can check my github repository:


https://github.com/jettro/dropwizard-elastic

Configure your application

Every application needs configuration. In our case we need to configure how to connect to elasticsearch. In Dropwizard you extend the Configuration class and create a pojo. Using Jackson and Hibernate Validator annotations we configure validation and serialization. In our case the configuration object looks like this:

public class DWESConfiguration extends Configuration {
    @NotEmpty
    private String elasticsearchHost = "localhost:9200";

    @NotEmpty
    private String clusterName = "elasticsearch";

    @JsonProperty
    public String getElasticsearchHost() {
        return elasticsearchHost;
    }

    @JsonProperty
    public void setElasticsearchHost(String elasticsearchHost) {
        this.elasticsearchHost = elasticsearchHost;
    }

    @JsonProperty
    public String getClusterName() {
        return clusterName;
    }

    @JsonProperty
    public void setClusterName(String clusterName) {
        this.clusterName = clusterName;
    }
}

Then you need to create a yml file containing the properties from the configuration class, together with their values. In my case it looks like this:

elasticsearchHost: localhost:9300
clusterName: jc-play

How often have you had to create this configuration mechanism yourself at the start of a project? Usually I start with Maven and quickly move to Tomcat. Not this time: we did Maven, and now we have done configuration. Next up is the runner for the application.

Add the runner

This is the class we run to start the application; internally Jetty is started. We extend the Application class, using the configuration class as a generic type parameter. This is the class that initializes the complete application: bundles are initialized, classes are created and passed to other classes.

public class DWESApplication extends Application<DWESConfiguration> {
    private static final Logger logger = LoggerFactory.getLogger(DWESApplication.class);

    public static void main(String[] args) throws Exception {
        new DWESApplication().run(args);
    }

    @Override
    public String getName() {
        return "dropwizard-elastic";
    }

    @Override
    public void initialize(Bootstrap<DWESConfiguration> dwesConfigurationBootstrap) {
    }

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
    }
}

When starting this application, we have no success: a big error, because we did not register any resources.

ERROR [2014-05-14 16:58:34,174] com.sun.jersey.server.impl.application.RootResourceUriRules: 
	The ResourceConfig instance does not contain any root resource classes.

Nothing happens; we just need a resource.

Before we can return something, we need to have something to return. We create a pojo called Index that contains one property called name. For now we just return this object as a json object. The following code shows the IndexResource that handles requests related to indexes.

@Path("/indexes")
@Produces(MediaType.APPLICATION_JSON)
public class IndexResource {

    @GET
    @Timed
    public Index showIndexes() {
        Index index = new Index();
        index.setName("A Dummy Index");

        return index;
    }
}
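
The Index pojo itself is not listed in the post. A minimal version could look like this (just a sketch; the actual class in the repository may differ):

public class Index {
    private String name;

    @JsonProperty
    public String getName() {
        return name;
    }

    @JsonProperty
    public void setName(String name) {
        this.name = name;
    }
}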

In the IndexResource, the @GET, @Path and @Produces annotations are from the Jersey REST library, and @Timed is from the Metrics library. Before starting the application we need to register our index resource with Jersey.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
        final IndexResource indexResource = new IndexResource();
        environment.jersey().register(indexResource);
    }

Now we can start the application using the following runner from IntelliJ. Later on we will create the executable jar.

[Screenshot: running the app from IntelliJ]

Run the application again; this time it works. You can browse to http://localhost:8080/indexes and see our dummy index as a nice json document. There is something in the logs though. I love this message; this is what you get when running the application without health checks.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!    THIS APPLICATION HAS NO HEALTHCHECKS. THIS MEANS YOU WILL NEVER KNOW      !
!     IF IT DIES IN PRODUCTION, WHICH MEANS YOU WILL NEVER KNOW IF YOU'RE      !
!    LETTING YOUR USERS DOWN. YOU SHOULD ADD A HEALTHCHECK FOR EACH OF YOUR    !
!         APPLICATION'S DEPENDENCIES WHICH FULLY (BUT LIGHTLY) TESTS IT.       !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Creating a health check

We add a health check. Since we are creating an application that interacts with elasticsearch, we create a health check for elasticsearch. Don't think too much about how we connect to elasticsearch yet; we get there later on.

public class ESHealthCheck extends HealthCheck {

    private ESClientManager clientManager;

    public ESHealthCheck(ESClientManager clientManager) {
        this.clientManager = clientManager;
    }

    @Override
    protected Result check() throws Exception {
        ClusterHealthResponse clusterIndexHealths = clientManager.obtainClient().admin().cluster().health(new ClusterHealthRequest())
                .actionGet();
        switch (clusterIndexHealths.getStatus()) {
            case GREEN:
                return HealthCheck.Result.healthy();
            case YELLOW:
                return HealthCheck.Result.unhealthy("Cluster state is yellow, maybe replication not done? New Nodes?");
            case RED:
            default:
                return HealthCheck.Result.unhealthy("Something is very wrong with the cluster", clusterIndexHealths);

        }
    }
}

Just like with the resource handler, we need to register the health check. Besides the standard HTTP port for normal users, another port is exposed for administration. There you can find reports like Metrics, Ping, Threads and Healthcheck.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        Client client = ESClientFactorybean.obtainClient(config.getElasticsearchHost(), config.getClusterName());

        logger.info("Running the application");
        final IndexResource indexResource = new IndexResource(client);
        environment.jersey().register(indexResource);

        final ESHealthCheck esHealthCheck = new ESHealthCheck(client);
        environment.healthChecks().register("elasticsearch", esHealthCheck);
    }

You as a reader now have an assignment: start the application and check the admin pages yourself at http://localhost:8081. In the meantime, we are going to connect to elasticsearch.

Connecting to elasticsearch

We connect to elasticsearch using the transport client. This is taken care of by the ESClientManager, which makes use of Dropwizard's managed classes; the lifecycle of these classes is managed by Dropwizard. From the configuration object we take the host(s) and the cluster name. Now we can create a client in the start method and pass this client to the classes that need it. The first class that needs it is the health check, but we already had a look at that one. Through the ESClientManager other classes have access to the client. The Managed interface mandates a start as well as a stop method.

    @Override
    public void start() throws Exception {
        Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", clusterName).build();

        logger.debug("Settings used for connection to elasticsearch : {}", settings.toDelimitedString('#'));

        TransportAddress[] addresses = getTransportAddresses(host);

        logger.debug("Hosts used for transport client : {}", (Object) addresses);

        this.client = new TransportClient(settings).addTransportAddresses(addresses);
    }

    @Override
    public void stop() throws Exception {
        this.client.close();
    }
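
For context, here is roughly what the rest of the ESClientManager could look like. This is my own reconstruction, not code copied from the repository; the constructor, obtainClient() and the getTransportAddresses() helper (assuming a comma-separated list of host:port pairs and the elasticsearch 1.x transport client API used above) may differ from the real class:

public class ESClientManager implements Managed {
    private static final Logger logger = LoggerFactory.getLogger(ESClientManager.class);

    private final String host;        // e.g. "localhost:9300", possibly a comma-separated list
    private final String clusterName;
    private Client client;

    public ESClientManager(String host, String clusterName) {
        this.host = host;
        this.clusterName = clusterName;
    }

    // Gives the health check and resources access to the client created in start().
    public Client obtainClient() {
        return client;
    }

    // Parses "host1:port1,host2:port2" into transport addresses for the TransportClient.
    private TransportAddress[] getTransportAddresses(String hosts) {
        String[] parts = hosts.split(",");
        TransportAddress[] addresses = new TransportAddress[parts.length];
        for (int i = 0; i < parts.length; i++) {
            String[] hostAndPort = parts[i].trim().split(":");
            addresses[i] = new InetSocketTransportAddress(hostAndPort[0], Integer.parseInt(hostAndPort[1]));
        }
        return addresses;
    }

    // start() and stop() are the methods shown above.
}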

We need to register our managed class with the lifecycle of the environment in the runner class.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        ESClientManager esClientManager = new ESClientManager(config.getElasticsearchHost(), config.getClusterName());
        environment.lifecycle().manage(esClientManager);
    }	

Next we want to change the IndexResource to use the elasticsearch client to list all indexes.

    public List<Index> showIndexes() {
        IndicesStatusResponse indices = clientManager.obtainClient().admin().indices().prepareStatus().get();

        List<Index> result = new ArrayList<>();
        for (String key : indices.getIndices().keySet()) {
            Index index = new Index();
            index.setName(key);
            result.add(index);
        }
        return result;
    }
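
For this to work the IndexResource itself needs access to the client. The snippets in this post pass a Client into the constructor in one place and call clientManager.obtainClient() in another; one possible shape, assuming the ESClientManager is injected, is sketched below (not the literal code from the repository):

@Path("/indexes")
@Produces(MediaType.APPLICATION_JSON)
public class IndexResource {
    private final ESClientManager clientManager;

    public IndexResource(ESClientManager clientManager) {
        this.clientManager = clientManager;
    }

    // the showIndexes() method shown above goes here
}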

Now we can browse to http://localhost:8080/indexes and we get back a nice json object. In my case I got this:

[
	{"name":"logstash-tomcat-2014.05.02"},
	{"name":"mymusicnested"},
	{"name":"kibana-int"},
	{"name":"playwithip"},
	{"name":"logstash-tomcat-2014.05.08"},
	{"name":"mymusic"}
]

Creating a better view

Having this REST based interface with json documents is nice, but not if you are a human like me (well, kind of). So let's add some AngularJS magic to create a slightly better view. The following page could of course also be created with simpler view technologies, but I want to demonstrate what you can do with Dropwizard.

First we make it possible to use Freemarker as a template engine. The first step is a view class that knows which Freemarker template to load and provides the fields that your template can read; in our case we want to expose the cluster name. To make this work we need two additional dependencies: dropwizard-views and dropwizard-views-freemarker.
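
Mirroring the dropwizard-core dependency shown earlier, the declarations could look like this (the group id and the version property are assumptions on my part):

<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-views</artifactId>
    <version>${dropwizard.version}</version>
</dependency>
<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-views-freemarker</artifactId>
    <version>${dropwizard.version}</version>
</dependency>

With these dependencies in place, the view class looks like this: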

public class HomeView extends View {
    private final String clusterName;

    protected HomeView(String clusterName) {
        super("home.ftl");
        this.clusterName = clusterName;
    }

    public String getClusterName() {
        return clusterName;
    }
}

Then we have to create the Freemarker template, which looks like the following code block:

<#-- @ftlvariable name="" type="nl.gridshore.dwes.HomeView" -->
<html ng-app="myApp">
<head>
    <title>DWAS</title>
</head>
<body ng-controller="IndexCtrl">
<p>Underneath a list of indexes in the cluster <strong>${clusterName?html}</strong></p>

<div ng-init="initIndexes()">
    <ul>
        <li ng-repeat="index in indexes">{{index.name}}</li>
    </ul>
</div>

<script src="/assets/js/angular-1.2.16.min.js"></script>
<script src="/assets/js/app.js"></script>
</body>
</html>

By default you put these templates in the resources folder, using the same subfolders as the package of your view class. If you look closely you'll see some AngularJS code; more on this later on. First we need to map a URL to the view. This is done with a resource class. The following code block shows the HomeResource class that maps "/" to the HomeView.

@Path("/")
@Produces(MediaType.TEXT_HTML)
public class HomeResource {
    private String clusterName;

    public HomeResource(String clusterName) {
        this.clusterName = clusterName;
    }

    @GET
    public HomeView goHome() {
        return new HomeView(clusterName);
    }
}

Notice that we now configure it to return text/html. The goHome method is annotated with @GET, so each GET request to the path "/" is mapped to the HomeView. Now we need to tell Jersey about this mapping. That is done in the runner class:

final HomeResource homeResource = new HomeResource(config.getClusterName());
environment.jersey().register(homeResource);

Using assets

The final part I want to show is how to use the assets bundle from Dropwizard to map an "/assets" folder to a part of the URL. To use this bundle you have to add the following dependency in Maven: dropwizard-assets. Then we can easily map the assets folder in our resources folder to the web /assets folder:

    @Override
    public void initialize(Bootstrap<DWESConfiguration> dwesConfigurationBootstrap) {
        dwesConfigurationBootstrap.addBundle(new ViewBundle());
        dwesConfigurationBootstrap.addBundle(new AssetsBundle("/assets/", "/assets/"));
    }

That is it; now you can load the Angular JavaScript file. My very basic sample has one Angular controller. This controller uses the $http service to call our /indexes URL. The result is used to show the indexes in a list view.

// app.js: define the module referenced by ng-app="myApp" in the template
var myApp = angular.module('myApp', []);

myApp.controller('IndexCtrl', function ($scope, $http) {
    $scope.indexes = [];

    $scope.initIndexes = function () {
        $http.get('/indexes').success(function (data) {
            $scope.indexes = data;
        });
    };
});

And the result

[Screenshot: the very basic screen showing the indexes]

Concluding

This was my first go at using Dropwizard, and I must admit I like what I have seen so far. I am not sure if I would create a big application with it; on the other hand, it is really well structured. Before moving on I would need to read a bit more about the library and check all of its options. There is a lot more possible than what I have shown you here.


The post Using Dropwizard in combination with Elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

Paper: SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine

So how do you knit multiple datacenters and many thousands of phones and other clients into a single cooperating system?

Usually you don't. It's too hard. We see nascent attempts in services like Firebase and Parse. 

SwiftCloud, as described in SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine, goes two steps further by leveraging Conflict-free Replicated Data Types (CRDTs), which means "data can be replicated at multiple sites and be updated independently with the guarantee that all replicas converge to the same value. In a cloud environment, this allows a user to access the data center closer to the user, thus optimizing the latency for all users."
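
The paper's data types are richer than this, but a toy grow-only counter illustrates the basic CRDT idea of independent updates that always converge under merge. This sketch is only an illustration of the concept, not code from SwiftCloud:

import java.util.HashMap;
import java.util.Map;

// A grow-only counter (G-Counter), one of the simplest CRDTs.
public class GCounter {
    private final String replicaId;
    private final Map<String, Long> counts = new HashMap<>();

    public GCounter(String replicaId) {
        this.replicaId = replicaId;
    }

    // Each replica only increments its own slot, so concurrent updates never conflict.
    public void increment() {
        counts.merge(replicaId, 1L, Long::sum);
    }

    // The observed value is the sum over all replica slots.
    public long value() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    // Merging takes the per-replica maximum; merges commute and are idempotent,
    // so replicas that have exchanged their state end up with the same value.
    public void merge(GCounter other) {
        other.counts.forEach((id, c) -> counts.merge(id, c, Long::max));
    }
}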

While we don't see these kinds of systems just yet, they are a strong candidate for how things will work in the future, efficiently using resources at every level while supporting huge numbers of cooperating users.

Abstract:

Categories: Architecture

Google Says Cloud Prices Will Follow Moore’s Law: Are We All Renters Now?

After Google cut prices on their Google Cloud Platform, Amazon quickly followed with their own price cuts. Even more interesting is what the future holds for pricing. The near future looks great. After that? We'll see.

Adrian Cockcroft highlights that Google thinks prices should follow Moore’s law, which means we should expect prices to halve every 18-24 months.

That's good news. Greater cost certainty means you can make much more aggressive build out plans. With the savings you can hire more people, handle more customers, and add those media rich features you thought you couldn't afford. Design is directly related to costs.

Without Google competing with Amazon there's little doubt the price reduction curve would be much less favorable.

As a late cloud entrant Google is now in a customer acquisition phase, so they are willing to pay for customers, which means lower prices are an acceptable cost of doing business. Profit and high margins are not the objective. Getting market share is what is important.

Amazon on the other hand has been reaping the higher margins earned from recurring customers. So Google's entrance into the early product life cycle phase is making Amazon eat into their margins and is forcing down prices overall.

But there's a floor to how low prices can go. Alen Peacock, co-founder of Space Monkey, has an interesting graphic telling the story. This is Amazon's historical pricing for 1TB of storage in S3, graphed as a multiple of the historical pricing for 1TB of local hard disk:

Alen explains it this way:

Cloud prices do decrease over time, and have dropped significantly over the timespan shown in the graph, but this graph shows cloud storage prices as a multiple of hard disk prices. In other words, hard disk prices are dropping much faster than datacenter prices. This is because, right, datacenters have costs other than hard disks (power, cooling, bandwidth, building infrastructure, diesel backup generators, battery backup systems, fire suppression, staffing, etc). Most of those costs do not follow Moore's Law -- in fact energy costs are on a long trend upwards. So over time, the gap shown by the graph should continue to widen.

 

The economic advantages of building your own (but housed in datacenters) is there, but it isn't huge. There is also some long term strategic advantage to building your own, e.g., GDrive dropped their price dramatically at will because Google owns their datacenters, but Dropbox couldn't do that without convincing Amazon to drop the price they pay for S3.

Costs other than hardware began dominating in datacenters several years ago, so Moore's Law-like effects are dampened. Energy and cooling costs do not follow Moore's Law, and those costs make up a huge component of the overall picture in datacenters. This is only going to get worse, barring some radical new energy production technology arriving on the scene.

What we're [Space Monkey] interested in, long term, is dropping the floor out from underneath all of these, and I think that only happens if you get out of the datacenter entirely.

As the size of cloud market is still growing there will still be a fight for market share. When growth slows and the market is divided between major players a collusionary pricing phase will take over. Cloud customers are sticky customers. It's hard to move off a cloud. The need for higher margins to justify the cash flow drain during the customer acquisition phase will reverse the favorable trends we are seeing now.

Until then it seems the economics indicate we are in a rent, not a buy world.

Related Articles 
  • IaaS Series: Cloud Storage Pricing – How Low Can They Go? - "For now it seems we can assume we’ve not seen the last of the big price reductions."
  • The Cloud Is Not Green
  • Brad Stone: “Bill Miller, the chief investment officer at Legg Mason Capital Management and a major Amazon shareholder, asked Bezos at the time about the profitability prospects for AWS. Bezos predicted they would be good over the long term but said that he didn’t want to repeat “Steve Jobs’s mistake” of pricing the iPhone in a way that was so fantastically profitable that the smartphone market became a magnet for competition.” 
Categories: Architecture

Sponsored Post: Apple, Cloudant, CopperEgg, Logentries, Wargaming.net, PagerDuty, HelloSign, CrowdStrike, Gengo, ScaleOut Software, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, ManageEngine, Site24x7

Who's Hiring?

  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here.
    • Enterprise Software Engineer. Apple's Emerging Technology Services group provides a Java based SOA platform for various applications to interact with each other. The platform is designed to handle millions of messages a day with very low latency. We have an immediate opening for a talented Software Engineer in a highly visible team who is passionate about exploring emerging technologies to create elegant scalable solutions. Please apply here
    • Mobile Services Software Engineer. The Emerging Technologies/Mobile Services team is looking for a proactive and hardworking software engineer to join our team. The team is responsible for a variety of high quality and high performing mobile services and applications for internal use. Please apply here
    • Sr. Software Engineer-iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of performance tuning Java applications make your heart leap? If so, iOS Systems is looking for a highly motivated, detail-oriented, energetic individual with excellent written and oral skills who is not afraid to think outside the box and question assumptions. Please apply here
    • Senior Software Engineering Manager. As a Senior Software Engineering Manager on our team, you will be managing a team of very dedicated and talented engineers. You will be responsible for managing the development of a mobile point of sale system on iPod touch hardware. Please apply here.
    • Sr Software Engineer - Messaging Services. An exciting opportunity for a Software Engineer to join Apple's Messaging Services team. We build the cloud systems that power some of the busiest applications in the world, including iMessage, FaceTime and Apple Push Notifications. We handle hundreds of millions of active users using some of the most desirable devices on the planet and several billion iMessages/day, 40 billion push notifications/day, 16+ trillion push notifications sent to date. Please apply here.

  • Engine Programmer - C/C++. Wargaming|BigWorld is seeking Engine Programmers to join our team in Sydney, Australia. We offer a relocation package, Australian working visa & great salary + bonus. Your primary responsibility will be to work on our PC engine. Please apply here

  • Senior Engineer wanted for large scale, security oriented distributed systems application that offers career growth and independent work environment. Use your talents for good instead of getting people to click ads at CrowdStrike. Please apply here.

  • Ops Engineer - Are you passionate about scaling and automating cloud-based websites? Love Puppet and deployment scripts? Want to take advantage of both your sys-admin and DevOps skills? Join HelloSign as our second Ops Engineer and help us scale as we grow! Apply at http://www.hellosign.com/info/jobs

  • Human Translation Platform Gengo Seeks Sr. DevOps Engineer. Build an infrastructure capable of handling billions of translation jobs, worked on by tens of thousands of qualified translators. If you love playing with Amazon’s AWS, understand the challenges behind release-engineering, and get a kick out of analyzing log data for performance bottlenecks, please apply here.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events

  • The Biggest MongoDB Event Ever Is On. Will You Be There? Join us in New York City June 23-25 for MongoDB World! The conference lineup includes Amazon CTO Werner Vogels and Cloudera Co-Founder Mike Olson for keynote addresses.  You’ll walk away with everything you need to know to build and manage modern applications. Register before April 4 to take advantage of super early bird pricing.

  • Upcoming Webinar: Practical Guide to SQL - NoSQL Migration. Avoid common pitfalls of NoSQL deployment with the best practices in this May 8 webinar with Anton Yazovskiy of Thumbtack Technology. He will review key questions to ask before migration, and differences in data modeling and architectural approaches. Finally, he will walk you through a typical application based on RDBMS and will migrate it to NoSQL step by step. Register for the webinar.
Cool Products and Services
  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • PagerDuty helps operations and DevOps engineers resolve problems as quickly as possible. By aggregating errors from all your IT monitoring tools, and allowing easy on-call scheduling that ensures the right alerts reach the right people, PagerDuty increases uptime and reduces on-call burnout—so that you only wake up when you have to. Thousands of companies rely on PagerDuty, including Netflix, Etsy, Heroku, and Github.

  • Aerospike Releases Client SDK for Node.js 0.10.x. This client makes it easy to build applications in Node.js that need to store and retrieve data from a high-performance Aerospike cluster. This version exposes Key-Value Store functionality - which is the core of Aerospike's In-Memory NoSQL Database. Platforms supported: CentOS 6, RHEL 6, Debian 6, Debian7, Mac OS X, Ubuntu 12.04. Write your first app: https://github.com/aerospike/aerospike-client-nodejs.

  • consistent: to be, or not to be. That’s the question. Is data in MongoDB consistent? It depends. It’s a trade-off between consistency and performance. However, does performance have to be sacrificed to maintain consistency? more.

  • Do Continuous MapReduce on Live Data? ScaleOut Software's hServer was built to let you hold your daily business data in-memory, update it as it changes, and concurrently run continuous MapReduce tasks on it to analyze it in real-time. We call this "stateful" analysis. To learn more check out hServer.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Azure: VM Security Extensions, ExpressRoute GA, Reserved IPs, Internal Load Balancing, Multi Site-to-Site VPNs, Storage Import/Export GA, New SMB File Service, API Management, Hybrid Connection Service, Redis Cache, Remote Apps and more…

ScottGu's Blog - Scott Guthrie - Mon, 05/12/2014 - 19:08

This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.

Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent:

  • Microsoft Antimalware
  • Symantec Endpoint Protection
  • TrendMicro’s Deep Security Agent

These extensions enable you to add richer security protection to your Virtual Machines using respected security products whose installation and management we automate. These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line. To enable them using the Azure Management Portal, simply check them when you create a new Virtual Machine:


Once checked we’ll automate installing and running them within your VM.

Custom Powershell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a Powershell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them. 

With today's release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks. One cool aspect of the "Capture" command is that it can now be run on both a stopped VM and a running VM (there is no need to restart it, and the capture command completes in under a minute).

To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create: 


Once the image is captured it will show up in the "Images" section of the VM gallery – allowing you to easily create any number of new VM instances from it:


This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.

Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service.

ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, and you get guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers.

We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to setup private fiber connectivity directly to Azure using ExpressRoute.

You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site to Site VPNs

Virtual Networks in Azure now support more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost; you incur charges only for the VNET gateway uptime.


VNET to VNET Connectivity

With today's release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. Using this feature, you can connect VNETs that are running in the same or different Azure regions and, in the case of different Azure regions, have the traffic securely routed via the Microsoft network backbone.

This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic.


You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager 

With today's release we are also making available three highly requested IP address features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost.

This feature is now generally available as of today.  You can enable it via the command-line using new powershell cmdlets that we now support:

#Reserve an IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

#Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs. 

We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost.

We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.

Traffic Manager support for external endpoints

Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints).

Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span both Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using powershell.

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service.

The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth.

This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  Simply provide an appropriate Fedex/DHL account number and we will also automatically ship the drives back to you:


More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to waimportexport@microsoft.com.

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios.

The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.

Common Scenarios

  • Lift and Shift applications: Azure Files makes it easier to “lift and shift” existing applications to the cloud that use on-premise file shares to share data between parts of the application.
  • Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
  • Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
  • Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service, visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows Client applications from the Azure cloud.

Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and Phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity.

With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and can then access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud.

[Screenshot: Azure RemoteApp]

With this service, you can bring scale, agility and global access to your business applications.

Azure RemoteApp is free during the preview period. Learn more about Azure RemoteApp and try the service for free during the preview.

Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free.

With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets.

[Diagram: Hybrid Connections linking Azure Websites and Mobile Services to on-premises resources]

Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET).

The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still gives enterprise administrators control over, and visibility into, the resources accessed by their hybrid applications.
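
One nice property for application code is that nothing special is required: once a Hybrid Connection is configured for an on-premises endpoint, your website or mobile service simply addresses that resource by its normal host name and the connection relays the traffic. For example, a website talking to an on-premises SQL Server might look like the following minimal sketch (the server, database and credentials are hypothetical placeholders):

using System.Data.SqlClient;

// The connection string uses the on-premises server name configured in the Hybrid Connection.
var connectionString =
    "Data Source=onprem-sql01;Initial Catalog=Orders;User ID=appuser;Password=...;";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
{
    connection.Open();
    int orderCount = (int)command.ExecuteScalar();
}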

You can learn more about Hybrid Connections using the following links:

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. Today, I’m excited to announce the public preview of the new Azure API Management service that helps you achieve this.

The new Azure API Management service allows you to create an easy-to-use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems). It also enables you to deliver a friendly API developer portal to your customers with documentation and samples, add per-developer metering that protects your APIs from abuse and overuse, and monitor and track API usage analytics:

[Screenshot: Azure API Management]

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

[Screenshot: API Management publisher portal dashboard]

Publishing an API

A typical API publishing workflow involves first creating a façade over an existing backend service, then configuring policies on it, and finally packaging and publishing the API to the Developer portal for developers to consume.

To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, location of the backend service and suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service.

Once you’ve defined the settings, click Save to create the API endpoint:

[Screenshot: the Add API dialog]

Once you’ve created your API endpoint, you can customize it.  You can also set policies such as caching rules, usage quotas and rate limits that apply to developers calling the API. These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all this is all automatically generated for you:

[Screenshot: API Management developer portal]

You can test out any of the APIs you’ve published without writing a line of code by using the interactive console.

[Screenshot: the interactive API console]
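
Behind the scenes, calls to a published API go through the API Management gateway and carry the calling developer’s subscription key, which is what drives the metering, quota and rate-limit policies described above; the key is typically sent in an Ocp-Apim-Subscription-Key header. Here is a minimal sketch of such a call from C# (the gateway URL, API path and key below are hypothetical placeholders):

using System;
using System.Net.Http;

var client = new HttpClient();

// Hypothetical gateway host name assigned to the API Management service instance.
client.BaseAddress = new Uri("https://contoso5.azure-api.net/");

// The developer's subscription key, available from the developer portal.
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "...");

// Call a published API operation (hypothetical path) and read the response body.
HttpResponseMessage response = client.GetAsync("orders/v1/orders/42").Result;
string body = response.Content.ReadAsStringAsync().Result;
Console.WriteLine(body);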

Analytics and reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time series metrics as well as maps that show which geographies drive them.

[Screenshot: API Management analytics reports]

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below:

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.

This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft.

We are offering the Azure Redis Cache Preview in two tiers:

  • Basic – A single Cache node (ideal for dev/test and non-critical workloads)
  • Standard – A replicated Cache (Two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in 250 MB and 1 GB sizes. For a limited time, the cache will be offered free, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview):

[Screenshot: creating a new Redis Cache in the Azure Preview Portal]

Once the new cache options are configured, click Create. Provisioning can take a few minutes; when it completes, the cache shows a Running status and is ready for use with its default settings:

[Screenshot: a Redis Cache instance in the Running state]

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager.

The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal:

[Screenshot: the cache Properties and Keys blades in the Azure Preview Portal]

Once you’ve retrieved these you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect(
    "contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.
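
As a small usage note, StackExchange.Redis also lets you pass an expiration when setting a value, which is the typical pattern for cache entries; for example:

// Cache the value for five minutes; Redis evicts it automatically once the TTL expires.
cache.StringSet("Key1", "HelloWorld", TimeSpan.FromMinutes(5));

The ConnectionMultiplexer itself is designed to be created once and shared across the application, rather than created per request.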

Learn More

For more information, visit the following links:

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA.

[Screenshot: the Azure Store]

You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to the Azure Store can be managed by your EA Azure enrollment administrators: go to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where you can disable or re-enable access to 3rd party purchases via the Store.  Please visit the Azure Store to learn more.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming


Azure: VM Security Extensions, ExpressRoute GA, Reserved IPs, Internal Load Balancing, Multi Site-to-Site VPNs, Storage Import/Export GA, New SMB File Service, API Management, Hybrid Connection Service, Redis Cache, Remote Apps and more…

ScottGu's Blog - Scott Guthrie - Mon, 05/12/2014 - 19:08

This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.

Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent:

  • Microsoft Antimalware
  • Symantec Endpoint Protection
  • TrendMicro’s Deep Security Agent

These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.  These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create new a new Virtual Machine:

image

Once checked we’ll automate installing and running them within your VM.

Custom Powershell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a Powershell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them. 

With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks as well.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM, as well as on a running VM as well (there is no need to restart it and the capture command completes in under a minute). 

To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create: 

image

Once the image is captured it will show up in the “Images” section of the VM gallery – allowing to you easily create any number of new VM instances from it:

image

This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.

Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service.

ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, and enable–guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers.

We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to setup private fiber connectivity directly to Azure using ExpressRoute.

You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site to Site VPNs

Virtual Networks in Azure now supports more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost. You incur charges only for the VNET gateway uptime.

clip_image032

VNET to VNET Connectivity

With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. Using this feature, you can connect VNETs that are running in the same or different Azure regions and in case of different Azure regions have the traffic securely route via the Microsoft network backbone.

This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic.

clip_image034

You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager 

With today’s release we are also making available three highly request IP address features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost.

This feature is now generally available as of today.  You can enable it via the command-line using new powershell cmdlets that we now support:

#Reserve a IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

#Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs. 

We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost.

We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.

Traffic Manager support for external endpoints

Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints).

Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span both Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using powershell.

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service.

The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth.

This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  Simply provide an appropriate Fedex/DHL account number and we will also automatically ship the drives back to you:

clip_image028

More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to the waimportexport@microsoft.com.

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios.

The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.

Common Scenarios

  • Lift and Shift applications: Azure Files makes it easier to “lift and shift” existing applications to the cloud that use on-premise file shares to share data between parts of the application.
  • Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
  • Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
  • Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows Client applications from the Azure cloud.

Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and Phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity.

With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud.

clip_image048

With this service, you can bring scale, agility and global access to your business applications.

Azure RemoteApp is free during preview period. Learn more about Azure RemoteApp and try the service free during preview.

Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free.

With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets.

image

Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET).

The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still enables enterprise administrators control and visibility into the resources accessed by their hybrid applications.

You can learn more about Hybrid Connections using the following links:

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. Today, I’m excited announce the public preview of the new Azure API Management service that helps you better achieve this.

The new Azure API management service allows you to create an easy to use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems), and enables you to deliver a friendly API developer portal to your customers with documentation and samples, enable per-developer metering support that protects your APIs from abuse and overuse, and enable to you monitor and track API usage analytics:

clip_image014

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

clip_image015

Publishing an API

A typical API publishing workflow involves creating an API: first creating a façade over an existing backend service, and then configuring policies on it and packaging/publishing the API to the Developer portal for developers to be able to consume.

To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, location of the backend service and suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service.

Once you’ve defined the settings, click Save to create the API endpoint:

clip_image017

 

Once you’ve defined have created your API endpoint, you can customize it.  You can also set policies such as caching rules, and usage quotas and rate limits that you can apply for developers calling the API. These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all this is all automatically generated for you:

clip_image023

You can test out any of the APIs you’ve published without writing a line code by using the interactive console.

clip_image025

Analytics and reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time series metrics as well maps to show what geographies drive them.

clip_image027

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below:

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.

This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft.

We are offering the Azure Redis Cache Preview in two tiers:

  • Basic – A single Cache node (ideal for dev/test and non-critical workloads)
  • Standard – A replicated Cache (Two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in a 250 MB and 1 GB size. For a limited time, the cache will be offered free, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview):

image

Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. After the cache has been created, your new cache has a Running status and is ready for use with default settings:

clip_image042

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it via the NuGet package manager.

The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal:

clip_image046

Once you’ve retrieved these, you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.
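
Beyond simple gets and sets, StackExchange.Redis lets you attach an expiration to a value and check whether a key exists, which covers the most common cache patterns. The short follow-on sketch below reuses the cache object created above; the key names are just examples:

// Store a value that expires automatically after five minutes.
cache.StringSet("session:42", "some cached state", TimeSpan.FromMinutes(5));

// Read it back only if it is still present.
if (cache.KeyExists("session:42"))
{
    string session = cache.StringGet("session:42");
    Console.WriteLine(session);
}

// Remove the key explicitly once it is no longer needed.
cache.KeyDelete("session:42");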

Learn More

For more information, visit the following links:

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA.

image

You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to Azure Store can be managed by your EA Azure enrollment administrators, by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where you can disable or re-enable access to 3rd party purchases via Store.  Please visit Azure Store to learn more.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Azure: VM Security Extensions, ExpressRoute GA, Reserved IPs, Internal Load Balancing, Multi Site-to-Site VPNs, Storage Import/Export GA, New SMB File Service, API Management, Hybrid Connection Service, Redis Cache, Remote Apps and more…

ScottGu's Blog - Scott Guthrie - Mon, 05/12/2014 - 19:08

This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.

Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent:

  • Microsoft Antimalware
  • Symantec Endpoint Protection
  • TrendMicro’s Deep Security Agent

These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.  These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create new a new Virtual Machine:

image

Once checked we’ll automate installing and running them within your VM.

Custom Powershell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a Powershell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them. 

With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks as well.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM, as well as on a running VM as well (there is no need to restart it and the capture command completes in under a minute). 

To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create: 

image

Once the image is captured it will show up in the “Images” section of the VM gallery – allowing to you easily create any number of new VM instances from it:

image

This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.

Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service.

ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, and enable–guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers.

We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to setup private fiber connectivity directly to Azure using ExpressRoute.

You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site to Site VPNs

Virtual Networks in Azure now supports more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost. You incur charges only for the VNET gateway uptime.

clip_image032

VNET to VNET Connectivity

With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. Using this feature, you can connect VNETs that are running in the same or different Azure regions and in case of different Azure regions have the traffic securely route via the Microsoft network backbone.

This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic.

clip_image034

You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager 

With today’s release we are also making available three highly request IP address features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost.

This feature is now generally available as of today.  You can enable it via the command-line using new powershell cmdlets that we now support:

#Reserve a IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

#Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs. 

We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost.

We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.

Traffic Manager support for external endpoints

Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints).

Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span both Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using powershell.

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service.

The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth.

This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  Simply provide an appropriate Fedex/DHL account number and we will also automatically ship the drives back to you:

clip_image028

More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to the waimportexport@microsoft.com.

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios.

The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.

Common Scenarios

  • Lift and Shift applications: Azure Files makes it easier to “lift and shift” existing applications to the cloud that use on-premise file shares to share data between parts of the application.
  • Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
  • Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
  • Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows Client applications from the Azure cloud.

Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and Phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity.

With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud.

clip_image048

With this service, you can bring scale, agility and global access to your business applications.

Azure RemoteApp is free during preview period. Learn more about Azure RemoteApp and try the service free during preview.

Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free.

With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets.

image

Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET).

The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still enables enterprise administrators control and visibility into the resources accessed by their hybrid applications.

You can learn more about Hybrid Connections using the following links:

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. Today, I’m excited announce the public preview of the new Azure API Management service that helps you better achieve this.

The new Azure API management service allows you to create an easy to use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems), and enables you to deliver a friendly API developer portal to your customers with documentation and samples, enable per-developer metering support that protects your APIs from abuse and overuse, and enable to you monitor and track API usage analytics:

clip_image014

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

clip_image015

Publishing an API

A typical API publishing workflow involves creating an API: first creating a façade over an existing backend service, and then configuring policies on it and packaging/publishing the API to the Developer portal for developers to be able to consume.

To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, location of the backend service and suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service.

Once you’ve defined the settings, click Save to create the API endpoint:

clip_image017

 

Once you’ve defined have created your API endpoint, you can customize it.  You can also set policies such as caching rules, and usage quotas and rate limits that you can apply for developers calling the API. These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all this is all automatically generated for you:

clip_image023

You can test out any of the APIs you’ve published without writing a line code by using the interactive console.

clip_image025

Analytics and reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time series metrics as well maps to show what geographies drive them.

clip_image027

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below:

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.

This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft.

We are offering the Azure Redis Cache Preview in two tiers:

  • Basic – A single Cache node (ideal for dev/test and non-critical workloads)
  • Standard – A replicated Cache (Two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in a 250 MB and 1 GB size. For a limited time, the cache will be offered free, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview):

image

Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. After the cache has been created, your new cache has a Running status and is ready for use with default settings:

clip_image042

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager.

The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal:

clip_image046

Once you’ve retrieved these you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis

                                                   .ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.

Learn More

For more information, visit the following links:

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA.

image

You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to Azure Store can be managed by your EA Azure enrollment administrators, by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where you can disable or re-enable access to 3rd party purchases via Store.  Please visit Azure Store to learn more.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microosft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Azure: VM Security Extensions, ExpressRoute GA, Reserved IPs, Internal Load Balancing, Multi Site-to-Site VPNs, Storage Import/Export GA, New SMB File Service, API Management, Hybrid Connection Service, Redis Cache, Remote Apps and more…

ScottGu's Blog - Scott Guthrie - Mon, 05/12/2014 - 19:08

This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.

Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent:

  • Microsoft Antimalware
  • Symantec Endpoint Protection
  • TrendMicro’s Deep Security Agent

These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.  These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create new a new Virtual Machine:

image

Once checked we’ll automate installing and running them within your VM.

Custom Powershell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a Powershell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them. 

With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks as well.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM, as well as on a running VM as well (there is no need to restart it and the capture command completes in under a minute). 

To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create: 

image

Once the image is captured it will show up in the “Images” section of the VM gallery – allowing to you easily create any number of new VM instances from it:

image

This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.

Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service.

ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, and enable–guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers.

We have previously announced several provider partnerships with ExpressRoute, including with AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to set up private fiber connectivity directly to Azure using ExpressRoute.

You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site to Site VPNs

Virtual Networks in Azure now support more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost; you incur charges only for the VNET gateway uptime.

[Diagram: multiple on-premises sites connected over site-to-site VPNs to a single Azure VNET]

VNET to VNET Connectivity

With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. Using this feature, you can connect VNETs that are running in the same or different Azure regions; when the regions differ, the traffic is securely routed via the Microsoft network backbone.

This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region to form a much larger network. This feature also enables you to connect VNETs across multiple different Azure subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic.

[Diagram: two Azure VNETs connected directly to one another]

You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager 

With today’s release we are also making available several highly requested IP addressing and traffic management features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost.

This feature is now generally available as of today.  You can enable it via the command-line using new powershell cmdlets that we now support:

#Reserve an IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

#Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs. 

We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.
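
As a sketch of the PowerShell experience, assigning an instance-level public IP to an existing VM in a virtual network looks something like this (the cmdlet and parameter names reflect my understanding of the current module, and the service/VM/IP names are placeholders):

# Sketch: give an existing VM its own public IP address.
Get-AzureVM -ServiceName "MyFtpService" -Name "ftp1" |
    Set-AzurePublicIP -PublicIPName "ftpip" |
    Update-AzureVM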

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost.

We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.
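
As a rough sketch (the cmdlet and parameter names here are my best understanding of the preview and may differ slightly in your module version), adding an internal load balancer to an existing deployment and then load-balancing a SQL endpoint behind it looks something like this:

# Sketch: add an internal load balancer to an existing deployment, then add a
# load-balanced endpoint on a VM behind it. All names are placeholders.
Add-AzureInternalLoadBalancer -ServiceName "MyApp" -InternalLoadBalancerName "db-ilb" -SubnetName "Subnet-1"

Get-AzureVM -ServiceName "MyApp" -Name "db1" |
    Add-AzureEndpoint -Name "sql" -Protocol tcp -LocalPort 1433 -PublicPort 1433 `
                      -LBSetName "sql-lbset" -DefaultProbe -InternalLoadBalancerName "db-ilb" |
    Update-AzureVM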

Traffic Manager support for external endpoints

Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints).

Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview, and you can manage it via the command-line using PowerShell.
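
A sketch of adding an external endpoint to an existing profile from PowerShell is below; the profile and domain names are placeholders, and treating external endpoints as the "Any" endpoint type is my assumption:

# Sketch: route part of a Traffic Manager profile's traffic to an endpoint hosted outside Azure.
Get-AzureTrafficManagerProfile -Name "myapp" |
    Add-AzureTrafficManagerEndpoint -DomainName "app.on-premises.contoso.com" -Status Enabled -Type Any |
    Set-AzureTrafficManagerProfile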

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service.

The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth.

This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded the service beyond the US and are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  Simply provide an appropriate FedEx/DHL account number and we will automatically ship the drives back to you:

[Screenshot: Import/Export shipping details with a FedEx/DHL return account number]

More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to waimportexport@microsoft.com.

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios.

The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.
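
As a quick sketch of the end-to-end flow, you can create a share and then mount it from a Windows VM running in the same region. The storage account, key and share names below are placeholders, and using New-AzureStorageShare for share creation is my assumption about the preview tooling (the share can equally be created via the REST API):

# Sketch: create a file share in an existing storage account...
$ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<account key>"
New-AzureStorageShare -Name "tools" -Context $ctx

# ...then, from inside a VM in the same region, map it to a drive letter over SMB.
net use z: \\myaccount.file.core.windows.net\tools /u:myaccount <account key>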

Common Scenarios

  • Lift and Shift applications: Azure Files makes it easier to “lift and shift” to the cloud existing applications that use on-premises file shares to share data between parts of the application.
  • Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
  • Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
  • Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows Client applications from the Azure cloud.

Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and phones).  Applications can be scaled up or down quickly without expensive infrastructure or management complexity.

With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud.

[Screenshot: Azure RemoteApp]

With this service, you can bring scale, agility and global access to your business applications.

Azure RemoteApp is free during the preview period. Learn more about Azure RemoteApp and try the service for free during the preview.

Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free.

With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets.

[Diagram: Hybrid Connections linking Azure Websites and Mobile Services to on-premises resources]

Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET).

The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still gives enterprise administrators control over, and visibility into, the resources accessed by their hybrid applications.

You can learn more about Hybrid Connections in the Azure documentation.

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. Today, I’m excited to announce the public preview of the new Azure API Management service that helps you achieve this.

The new Azure API Management service allows you to create an easy-to-use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems). It enables you to deliver a friendly API developer portal to your customers with documentation and samples, add per-developer metering that protects your APIs from abuse and overuse, and monitor and track API usage analytics:

[Diagram: Azure API Management sitting between client developers and your backend services]

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

[Screenshot: API Management publisher portal dashboard]

Publishing an API

A typical API publishing workflow involves first creating a façade over an existing backend service, then configuring policies on it, and finally packaging and publishing the API to the Developer portal for developers to consume.

To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, location of the backend service and suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service.

Once you’ve defined the settings, click Save to create the API endpoint:

[Screenshot: Add API dialog with the API name, backend service address and API suffix]

Once you’ve created your API endpoint, you can customize it.  You can also set policies such as caching rules, usage quotas and rate limits that apply to developers calling the API. These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all this is all automatically generated for you:

[Screenshot: API Management developer portal with generated documentation and code samples]

You can test out any of the APIs you’ve published without writing a line of code by using the interactive console.

[Screenshot: the developer portal’s interactive console]

Analytics and reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal, you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time series metrics as well as maps showing which geographies drive them.

[Screenshot: API Management analytics reports]

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, check out the tutorials on the Azure website.

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.

This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft.

We are offering the Azure Redis Cache Preview in two tiers:

  • Basic – A single Cache node (ideal for dev/test and non-critical workloads)
  • Standard – A replicated Cache (Two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in 250 MB and 1 GB sizes. For a limited time, the cache will be offered free, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview):

[Screenshot: creating a new Redis Cache in the Azure Preview Portal]

Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. After the cache has been created, your new cache has a Running status and is ready for use with default settings:

[Screenshot: Redis Cache instance showing a Running status]

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager.

The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal:

[Screenshot: the cache endpoint on the Properties blade and access keys on the Keys blade]

Once you’ve retrieved these you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer
                     .Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.

Learn More

For more information, see the Redis Cache documentation on the Azure website.

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA.

[Screenshot: Azure Store]

You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to the Azure Store can be managed by your EA Azure enrollment administrators by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where you can disable or re-enable access to 3rd party purchases via the Store.  Please visit the Azure Store to learn more.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Azure: VM Security Extensions, ExpressRoute GA, Reserved IPs, Internal Load Balancing, Multi Site-to-Site VPNs, Storage Import/Export GA, New SMB File Service, API Management, Hybrid Connection Service, Redis Cache, Remote Apps and more…

ScottGu's Blog - Scott Guthrie - Mon, 05/12/2014 - 19:08

This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.

Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent:

  • Microsoft Antimalware
  • Symantec Endpoint Protection
  • TrendMicro’s Deep Security Agent

These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.  These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create new a new Virtual Machine:

image

Once checked we’ll automate installing and running them within your VM.

Custom Powershell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a Powershell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them. 

With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks as well.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM, as well as on a running VM as well (there is no need to restart it and the capture command completes in under a minute). 

To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create: 

image

Once the image is captured it will show up in the “Images” section of the VM gallery – allowing to you easily create any number of new VM instances from it:

image

This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.

Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service.

ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, and enable–guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers.

We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to setup private fiber connectivity directly to Azure using ExpressRoute.

You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site to Site VPNs

Virtual Networks in Azure now supports more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost. You incur charges only for the VNET gateway uptime.

clip_image032

VNET to VNET Connectivity

With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. Using this feature, you can connect VNETs that are running in the same or different Azure regions and in case of different Azure regions have the traffic securely route via the Microsoft network backbone.

This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic.

clip_image034

You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager 

With today’s release we are also making available three highly request IP address features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost.

This feature is now generally available as of today.  You can enable it via the command-line using new powershell cmdlets that we now support:

#Reserve a IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

#Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs. 

We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost.

We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.

Traffic Manager support for external endpoints

Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints).

Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span both Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using powershell.

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service.

The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth.

This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  Simply provide an appropriate Fedex/DHL account number and we will also automatically ship the drives back to you:

clip_image028

More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to the waimportexport@microsoft.com.

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios.

The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.

Common Scenarios

  • Lift and Shift applications: Azure Files makes it easier to “lift and shift” existing applications to the cloud that use on-premise file shares to share data between parts of the application.
  • Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
  • Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
  • Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows Client applications from the Azure cloud.

Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and Phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity.

With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud.

clip_image048

With this service, you can bring scale, agility and global access to your business applications.

Azure RemoteApp is free during preview period. Learn more about Azure RemoteApp and try the service free during preview.

Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free.

With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets.

image

Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET).

The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still enables enterprise administrators control and visibility into the resources accessed by their hybrid applications.

You can learn more about Hybrid Connections using the following links:

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. Today, I’m excited announce the public preview of the new Azure API Management service that helps you better achieve this.

The new Azure API management service allows you to create an easy to use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems), and enables you to deliver a friendly API developer portal to your customers with documentation and samples, enable per-developer metering support that protects your APIs from abuse and overuse, and enable to you monitor and track API usage analytics:

clip_image014

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

clip_image015

Publishing an API

A typical API publishing workflow involves creating an API: first creating a façade over an existing backend service, and then configuring policies on it and packaging/publishing the API to the Developer portal for developers to be able to consume.

To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, location of the backend service and suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service.

Once you’ve defined the settings, click Save to create the API endpoint:

clip_image017

 

Once you’ve defined have created your API endpoint, you can customize it.  You can also set policies such as caching rules, and usage quotas and rate limits that you can apply for developers calling the API. These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all this is all automatically generated for you:

clip_image023

You can test out any of the APIs you’ve published without writing a line code by using the interactive console.

clip_image025

Analytics and reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time series metrics as well maps to show what geographies drive them.

clip_image027

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below:

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.

This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft.

We are offering the Azure Redis Cache Preview in two tiers:

  • Basic – A single Cache node (ideal for dev/test and non-critical workloads)
  • Standard – A replicated Cache (Two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in a 250 MB and 1 GB size. For a limited time, the cache will be offered free, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview):

image

Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. After the cache has been created, your new cache has a Running status and is ready for use with default settings:

clip_image042

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager.

The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal:

clip_image046

Once you’ve retrieved these you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis

                                                   .ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.

Learn More

For more information, visit the following links:

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA.

image

You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to Azure Store can be managed by your EA Azure enrollment administrators, by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where you can disable or re-enable access to 3rd party purchases via Store.  Please visit Azure Store to learn more.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microosft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Azure: VM Security Extensions, ExpressRoute GA, Reserved IPs, Internal Load Balancing, Multi Site-to-Site VPNs, Storage Import/Export GA, New SMB File Service, API Management, Hybrid Connection Service, Redis Cache, Remote Apps and more…

ScottGu's Blog - Scott Guthrie - Mon, 05/12/2014 - 19:08

This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.

Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent:

  • Microsoft Antimalware
  • Symantec Endpoint Protection
  • TrendMicro’s Deep Security Agent

These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.  These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create new a new Virtual Machine:

image

Once checked we’ll automate installing and running them within your VM.

Custom Powershell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a Powershell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them. 

With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks as well.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM, as well as on a running VM as well (there is no need to restart it and the capture command completes in under a minute). 

To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create: 

image

Once the image is captured it will show up in the “Images” section of the VM gallery – allowing to you easily create any number of new VM instances from it:

image

This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.

Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service.

ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, and enable–guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers.

We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to setup private fiber connectivity directly to Azure using ExpressRoute.

You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site to Site VPNs

Virtual Networks in Azure now supports more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost. You incur charges only for the VNET gateway uptime.

clip_image032

VNET to VNET Connectivity

With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. Using this feature, you can connect VNETs that are running in the same or different Azure regions and in case of different Azure regions have the traffic securely route via the Microsoft network backbone.

This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic.

clip_image034

You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager 

With today’s release we are also making available three highly request IP address features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost.

This feature is now generally available as of today.  You can enable it via the command-line using new powershell cmdlets that we now support:

#Reserve a IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

#Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs. 

We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost.

We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.

Traffic Manager support for external endpoints

Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints).

Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span both Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using powershell.

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service.

The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth.

This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  Simply provide an appropriate Fedex/DHL account number and we will also automatically ship the drives back to you:

clip_image028

More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to the waimportexport@microsoft.com.

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios.

The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.

Common Scenarios

  • Lift and Shift applications: Azure Files makes it easier to “lift and shift” to the cloud existing applications that use on-premises file shares to share data between parts of the application.
  • Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
  • Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
  • Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows Client applications from the Azure cloud.

Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity.

With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud.

clip_image048

With this service, you can bring scale, agility and global access to your business applications.

Azure RemoteApp is free during the preview period. Learn more about Azure RemoteApp and try the service for free during the preview.

Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free.

With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets.

image

Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET).

The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still gives enterprise administrators control of, and visibility into, the resources accessed by their hybrid applications.
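To make the “same private network” point concrete, here is a small hypothetical sketch (shown in PowerShell for brevity): once a Hybrid Connection to an on-premises SQL Server named onprem-sql01 has been configured in the portal, application code keeps using an ordinary connection string with that host name. The server, database and credentials below are placeholders.

# Sketch only: connect to an on-premises SQL Server through a configured Hybrid Connection
$connectionString = "Server=onprem-sql01;Database=Orders;User Id=appuser;Password=<placeholder>"
$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = "SELECT COUNT(*) FROM dbo.Orders"
$orderCount = $command.ExecuteScalar()   # runs against the on-premises database
$connection.Close()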

You can learn more about Hybrid Connections using the following links:

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. Today, I’m excited to announce the public preview of the new Azure API Management service that helps you better achieve this.

The new Azure API Management service allows you to create an easy-to-use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems). It enables you to deliver a friendly API developer portal to your customers with documentation and samples, add per-developer metering that protects your APIs from abuse and overuse, and monitor and track API usage analytics:

clip_image014

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

clip_image015

Publishing an API

A typical API publishing workflow involves first creating a façade over an existing backend service, then configuring policies on it, and finally packaging and publishing the API to the Developer portal for developers to consume.

To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, location of the backend service and suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service.

Once you’ve defined the settings, click Save to create the API endpoint:

clip_image017

 

Once you’ve created your API endpoint, you can customize it.  You can also set policies such as caching rules, usage quotas and rate limits that apply to developers calling the API. These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.
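For example, once a developer has signed up and obtained a subscription key from the Developer portal (described below), calling the API from a client is an ordinary HTTP request with that key attached. The gateway URL, API path and key in this sketch are hypothetical placeholders.

# Sketch only: call a published API, passing the per-developer subscription key
$headers = @{ "Ocp-Apim-Subscription-Key" = "<developer-subscription-key>" }
Invoke-RestMethod -Uri "https://contoso.azure-api.net/orders/v1/orders?top=10" `
    -Method Get -Headers $headers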

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all this is all automatically generated for you:

clip_image023

You can test out any of the APIs you’ve published without writing a line of code by using the interactive console.

clip_image025

Analytics and reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time-series metrics as well as maps showing which geographies drive them.

clip_image027

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below:

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.

This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft.

We are offering the Azure Redis Cache Preview in two tiers:

  • Basic – A single Cache node (ideal for dev/test and non-critical workloads)
  • Standard – A replicated Cache (Two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in 250 MB and 1 GB sizes. For a limited time, the cache will be offered free, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview):

image

Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. Once created, the new cache shows a Running status and is ready for use with default settings:

clip_image042

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager.

The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal:

clip_image046

Once you’ve retrieved these you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.

Learn More

For more information, visit the following links:

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA.

image

You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to the Azure Store can be managed by your EA Azure enrollment administrators by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where they can disable or re-enable access to 3rd party purchases via the Store.  Please visit the Azure Store to learn more.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Azure: VM Security Extensions, ExpressRoute GA, Reserved IPs, Internal Load Balancing, Multi Site-to-Site VPNs, Storage Import/Export GA, New SMB File Service, API Management, Hybrid Connection Service, Redis Cache, Remote Apps and more…

ScottGu's Blog - Scott Guthrie - Mon, 05/12/2014 - 19:08

This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.

Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent:

  • Microsoft Antimalware
  • Symantec Endpoint Protection
  • TrendMicro’s Deep Security Agent

These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.  These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create new a new Virtual Machine:

image

Once checked we’ll automate installing and running them within your VM.

Custom Powershell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a Powershell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them. 

With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks as well.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM, as well as on a running VM as well (there is no need to restart it and the capture command completes in under a minute). 

To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create: 

image

Once the image is captured it will show up in the “Images” section of the VM gallery – allowing to you easily create any number of new VM instances from it:

image

This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.

Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service.

ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, and enable–guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers.

We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to setup private fiber connectivity directly to Azure using ExpressRoute.

You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site to Site VPNs

Virtual Networks in Azure now supports more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost. You incur charges only for the VNET gateway uptime.

clip_image032

VNET to VNET Connectivity

With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. Using this feature, you can connect VNETs that are running in the same or different Azure regions and in case of different Azure regions have the traffic securely route via the Microsoft network backbone.

This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic.

clip_image034

You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager 

With today’s release we are also making available three highly request IP address features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost.

This feature is now generally available as of today.  You can enable it via the command-line using new powershell cmdlets that we now support:

#Reserve a IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

#Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs. 

We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost.

We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.

Traffic Manager support for external endpoints

Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints).

Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span both Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using powershell.

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service.

The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth.

This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  Simply provide an appropriate Fedex/DHL account number and we will also automatically ship the drives back to you:

clip_image028

More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to the waimportexport@microsoft.com.

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios.

The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.

Common Scenarios

  • Lift and Shift applications: Azure Files makes it easier to “lift and shift” existing applications to the cloud that use on-premise file shares to share data between parts of the application.
  • Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
  • Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
  • Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows Client applications from the Azure cloud.

Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and Phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity.

With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud.

clip_image048

With this service, you can bring scale, agility and global access to your business applications.

Azure RemoteApp is free during preview period. Learn more about Azure RemoteApp and try the service free during preview.

Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free.

With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets.

image

Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET).

The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still enables enterprise administrators control and visibility into the resources accessed by their hybrid applications.

You can learn more about Hybrid Connections using the following links:

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. Today, I’m excited announce the public preview of the new Azure API Management service that helps you better achieve this.

The new Azure API management service allows you to create an easy to use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems), and enables you to deliver a friendly API developer portal to your customers with documentation and samples, enable per-developer metering support that protects your APIs from abuse and overuse, and enable to you monitor and track API usage analytics:

clip_image014

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

clip_image015

Publishing an API

A typical API publishing workflow involves creating an API: first creating a façade over an existing backend service, and then configuring policies on it and packaging/publishing the API to the Developer portal for developers to be able to consume.

To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, location of the backend service and suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service.

Once you’ve defined the settings, click Save to create the API endpoint:

clip_image017

 

Once you’ve defined have created your API endpoint, you can customize it.  You can also set policies such as caching rules, and usage quotas and rate limits that you can apply for developers calling the API. These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all this is all automatically generated for you:

clip_image023

You can test out any of the APIs you’ve published without writing a line code by using the interactive console.

clip_image025

Analytics and reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time series metrics as well maps to show what geographies drive them.

clip_image027

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below:

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.

This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft.

We are offering the Azure Redis Cache Preview in two tiers:

  • Basic – A single Cache node (ideal for dev/test and non-critical workloads)
  • Standard – A replicated Cache (Two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in a 250 MB and 1 GB size. For a limited time, the cache will be offered free, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview):

image

Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. After the cache has been created, your new cache has a Running status and is ready for use with default settings:

clip_image042

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager.

The cache endpoint and key can be obtained respectively from the Properties blade and the Keys blade for your cache instance within the Azure Preview Portal:

clip_image046

Once you’ve retrieved these you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis

                                                   .ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.

Learn More

For more information, visit the following links:

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA.

image

You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to Azure Store can be managed by your EA Azure enrollment administrators, by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where you can disable or re-enable access to 3rd party purchases via Store.  Please visit Azure Store to learn more.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have a Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microosft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Azure: VM Security Extensions, ExpressRoute GA, Reserved IPs, Internal Load Balancing, Multi Site-to-Site VPNs, Storage Import/Export GA, New SMB File Service, API Management, Hybrid Connection Service, Redis Cache, Remote Apps and more…

ScottGu's Blog - Scott Guthrie - Mon, 05/12/2014 - 19:08

This morning we released a massive amount of enhancements to Microsoft Azure.  Today’s new capabilities and announcements include:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support

In a previous blog post I talked about the new VM Agent we introduced as an optional extension to Virtual Machines hosted on Azure.  The VM Agent is a lightweight and unobtrusive process that you can optionally enable to run inside your Windows and Linux VMs. The VM Agent can then be used to install and manage extensions, which are software modules that extend the functionality of a VM and help make common management scenarios easier.

Today I’m pleased to announce three new security extensions that we are enabling via the VM Agent:

  • Microsoft Antimalware
  • Symantec Endpoint Protection
  • TrendMicro’s Deep Security Agent

These extensions enable you to add richer security protection to your Virtual Machines using respected security products that we automate installing/managing.  These extensions are easy to enable within your Virtual Machines through either the Azure Management Portal or via the command-line.  To enable them using the Azure Management Portal simply check them when you create new a new Virtual Machine:

image

Once checked we’ll automate installing and running them within your VM.

Custom Powershell Script

This week we’ve also enabled a new “Custom Script” extension that enables you to specify a Powershell script file (.ps1 extension) to run in the VM immediately after it’s created.  This provides another way to customize your VM on creation without having to RDP in.  Alternatively you can also take advantage of the Chef and Puppet extensions we shipped last month.

Virtual Machines: Support for Capturing Images with both OS + Data Drives attached

Last month at the //Build conference we released command-line support for capturing VM images that contain both an OS disk as well as multiple data disks attached.  This new VM image support made it much easier to capture and automate VMs with richer configurations, as well as to snapshot VMs without having to run sysprep on them. 

With today’s release we have updated the Azure Management Portal to add support for capturing VM images that contain both an OS disk and multiple data disks as well.  One cool aspect of the “Capture” command is that it can now be run on both a stopped VM, as well as on a running VM as well (there is no need to restart it and the capture command completes in under a minute). 

To try this new support out, simply click the “Capture” button on a VM, and it will present a dialog that enables you to name the image you want to create: 

image

Once the image is captured it will show up in the “Images” section of the VM gallery – allowing to you easily create any number of new VM instances from it:

image

This new support is ideal for dev/test scenarios as well as for creating re-usable images for use with any other VM creation scenario.

Networking: General Availability of Azure ExpressRoute

I’m excited to announce the general availability release today of the Azure ExpressRoute service.

ExpressRoute enables dedicated, private, high-throughput network connectivity between Azure datacenters and your on-premises IT environments. Using ExpressRoute, you can connect your existing datacenters to Azure without having to flow any traffic over the public Internet, and enable–guaranteed network quality-of-service and the ability to use Azure as a natural extension of an existing private network or datacenter.  As part of our GA release we now offer an enterprise SLA for the service, as well as a variety of bandwidth tiers.

We have previously announced several provider partnerships with ExpressRoute including with: AT&T, Equinix, Verizon, BT, and Level3.  This week we are excited to announce new partnerships with TelecityGroup, SingTel and Zadara as well.  You can use any of these providers to setup private fiber connectivity directly to Azure using ExpressRoute.

You can get more information on the ExpressRoute website.

Networking: Multiple Site-to-Site VPNs and VNET-to-VNET Connectivity

I’m excited to announce the general availability release of two highly requested virtual networking features: multiple site-to-site VPN support and VNET-to-VNET connectivity.

Multiple Site to Site VPNs

Virtual Networks in Azure now supports more than one site-to-site connection, which enables you to securely connect multiple on-premises locations with a Virtual Network (VNET) in Azure. Using more than one site-to-site connection comes at no additional cost. You incur charges only for the VNET gateway uptime.

clip_image032

VNET to VNET Connectivity

With today’s release, we are also enabling VNET-to-VNET connectivity. That means that multiple virtual networks can now be directly and securely connected with one another. Using this feature, you can connect VNETs that are running in the same or different Azure regions and in case of different Azure regions have the traffic securely route via the Microsoft network backbone.

This feature enables scenarios that require presence in multiple regions (e.g. Europe and US, or East US and West US), applications that are highly available, or the integration of VNETs within a single region for a much larger network. This feature also enables you to connect VNETs across multiple different Azure account subscriptions, so you can now connect workloads across different divisions of your organization, or even different companies. The data traffic flowing between VNETs is charged at the same rate as egress traffic.

clip_image034

You can get more information on the Virtual Network website.

Networking: IP Reservation, Instance-level public IPs, Internal Load Balancing Support, Traffic Manager 

With today’s release we are also making available three highly request IP address features:

IP Reservations

With IP reservation, you can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. This enables scenarios where applications need to have static public IP addresses, and you want to be able to have the IP address survive the application being deleted and redeployed.  You can now reserve up to 5 addresses per subscription free of charge and assign them to VM or Cloud Service instances of your choice. If additional VIP reservations are needed, you can also reserve more addresses at additional cost.

This feature is now generally available as of today.  You can enable it via the command-line using new powershell cmdlets that we now support:

#Reserve a IP
New-AzureReservedIP -ReservedIPName EastUSVIP -Label "Reserved VIP in EastUS" -Location "East US"

#Use the Reserved IP during deployment
New-AzureVM -ServiceName "MyApp" -VMs $web1 -Location "East US" -VNetName VNetUSEast -ReservedIPName EastUSVIP

We will enable portal management support in a future management portal update.

Public IP Address per Virtual Machine

With Instance-level Public IPs for VMs, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint through a VIP. This feature will enable scenarios like easily running FTP servers in Azure and monitoring virtual machines directly using their IPs. 

We are making this new capability available in preview form today.  This feature is available only with new deployments and new virtual networks and can be enabled via PowerShell.

Internal Load Balancing (ILB) Support

Today’s new Internal Load Balancing support enables you to load-balance Azure virtual machines with a private IP address. The internally load balanced IP address will be accessible only within a virtual network (if the VM is within a virtual network) or within a cloud service (if the VM isn’t within a virtual network) – and means that no one outside of your application can access it. Internal Load Balancing is useful when you’re creating applications in which some of the tiers (for example: the database layer) aren’t public facing but require load balancing functionality. Internal Load Balancing is available in the standard tier of VMs at no additional cost.

We are making this new capability available in preview form today. ILB is available only with new deployments and new virtual networks and can be accessed via PowerShell.

Traffic Manager support for external endpoints

Starting today, Traffic Manager now supports routing traffic to both Azure endpoints and external endpoints (previously it only supported Azure endpoints).

Traffic Manager enables you to control the distribution of user traffic to your specified endpoints. With support for endpoints that reside outside of Azure, you can now build highly available applications that span both Azure, on-premises environments, and even other cloud providers. You can apply intelligent traffic management policies across all managed endpoints. This functionality is available now in preview and you can manage it via the command-line using powershell.

Learning More

You can learn more about Reserved IP addresses and the above networking features here.

Storage: General Availability Release of Azure Import/Export Service

Last November, we launched the preview of our Microsoft Azure Import/Export Service. Today, I am excited to announce the general availability release of the service.

The Microsoft Azure Import/Export Service enables you to move large amounts of data into and out of your Microsoft Azure Storage accounts by transferring them on hard disks. You can ship encrypted hard drives directly to our Microsoft Azure data centers, and we will automatically transfer the data to or from your Microsoft Azure Blobs for your storage account.  This enables you to import or export massive amounts of data quickly, cost effectively, and without being constrained by your network bandwidth.

This release of the Import/Export service has several new features as well as improvements to the preview functionality. We have expanded our service to new regions in addition to the US. We are now available in the US, Europe and the Asia Pacific regions. You can also now use either FedEx or DHL to ship the drives.  Simply provide an appropriate Fedex/DHL account number and we will also automatically ship the drives back to you:

clip_image028

More details about the improvements and new features of the Import/Export service can be found on the Microsoft Azure Storage Team Blog. Check out the Getting Started Guide to learn about how to use the Import/Export service. Feel free to send questions and comments to the waimportexport@microsoft.com.

Storage: New SMB File Sharing Service

I’m excited to announce the preview of the new Microsoft Azure File Service. The Azure File Service is a new capability of our existing Azure storage system and supports exposing network file shares using the standard SMB protocol.  Applications running in Azure can now easily share files across Windows and Linux VMs using this new SMB file-sharing service, with all VMs having both read and write access to the files.  The files stored within the service can also be accessed via a REST interface, which opens a variety of additional non-SMB sharing scenarios.

The Azure File Service is built on the same technology as the Blob, Table, and Queue Services, which means Azure Files is able to leverage the existing availability, durability, scalability, and geo redundancy that is built into our Storage platform. It is provided as a high-availability managed service run by us, meaning you don’t have to manage any VMs to coordinate it and we take care of all backups and maintenance for you.

Common Scenarios

  • Lift and Shift applications: Azure Files makes it easier to “lift and shift” existing applications to the cloud that use on-premise file shares to share data between parts of the application.
  • Shared Application Settings: A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many different virtual machines. Such configuration files can now be stored in an Azure File share, and read by all application instances.
  • Diagnostic Share: An Azure File share can also be used to save diagnostic files like logs, metrics, and crash dumps. Having these available through both the SMB and REST interface allows applications to build or leverage a variety of analysis tools for processing and analyzing the diagnostic data.
  • Dev/Test/Debug: When developers or administrators are working on virtual machines in the cloud, they often need a set of tools or utilities. Installing and distributing these utilities on each virtual machine where they are needed can be a time consuming exercise. With Azure Files, a developer or administrator can store their favorite tools on a file share, which can be easily connected to from any virtual machine.

To learn more about how to use the new Azure File Service visit here.

RemoteApp: Preview of new Remote App Service

I’m happy to announce the public preview today of Azure RemoteApp, a new service delivering Windows Client applications from the Azure cloud.

Azure RemoteApp can be used by IT to enable employees to securely access their corporate applications from a variety of devices (including mobile devices like iPads and Phones).  Applications can be scaled up or down quickly without expensive infrastructure costs and management complexity.

With Azure RemoteApp, your client applications run in the Azure cloud. Employees simply install the Microsoft Remote Desktop client on their devices and then can access applications via Microsoft’s Remote Desktop Protocol (RDP).  IT can optionally connect the applications back to on-premises networks (enabling hybrid connectivity) or alternatively run them entirely in the cloud.

clip_image048

With this service, you can bring scale, agility and global access to your business applications.

Azure RemoteApp is free during preview period. Learn more about Azure RemoteApp and try the service free during preview.

Hybrid Connections: Easily integrate Azure Websites and Mobile Services with on-premises resources

I’m excited to announce Hybrid Connections, a new and easy way to build hybrid applications on Azure. Hybrid Connections enable your Azure Website or Mobile Service to connect to on-premises data & services with just a few clicks within the Azure Management portal.  Today, we're also introducing a Free tier of Azure BizTalk Services that enables everyone to use this new hybrid connections feature for free.

With Hybrid Connections, Azure websites and mobile services can easily access on-premises resources as if they were located on the same private network. This makes it much easier to move applications to the cloud, while still connecting securely with existing enterprise assets.

image

Hybrid Connections support all languages and frameworks supported by Azure Websites (.NET, PHP, Java, Python, node.js) and Mobile Services (node.js, .NET).

The Hybrid Connections service does not require you to enable a VPN or open up firewall rules in order to use it. This makes it easy to deploy within enterprise environments.  Built-in monitoring and management support still enables enterprise administrators control and visibility into the resources accessed by their hybrid applications.

You can learn more about Hybrid Connections using the following links:

API Management: Announcing Preview of new Azure API Management Service

With the proliferation of mobile devices, it is important for organizations to be able to expose their existing backend systems via mobile-friendly APIs that enable internal app developers as well as external developer programs. Today, I’m excited announce the public preview of the new Azure API Management service that helps you better achieve this.

The new Azure API management service allows you to create an easy to use API façade over a diverse set of mobile backend services (including Mobile Services, Web Sites, VMs, Cloud Services and on-premises systems), and enables you to deliver a friendly API developer portal to your customers with documentation and samples, enable per-developer metering support that protects your APIs from abuse and overuse, and enable to you monitor and track API usage analytics:

clip_image014

Creating an API Management service

You can easily create a new instance of the Azure API Management service from the Azure Management Portal by clicking New->App Services->API Management->Create. Once the service has been created, you can get started on your API by clicking on the Manage button and transitioning to the Dashboard page on the Publisher portal.

clip_image015

Publishing an API

A typical API publishing workflow involves creating an API: first creating a façade over an existing backend service, and then configuring policies on it and packaging/publishing the API to the Developer portal for developers to be able to consume.

To create an API, select the Add API button within the publisher portal, and in the dialog that appears enter the API name, location of the backend service and suffix of the API root under the service domain name.  Note that you can implement the back-end of the API anywhere (including non-Azure cloud providers or locations).  You can also obviously host the API using Azure – including within a VM, Cloud Service, Web Site or Mobile Service.

Once you’ve defined the settings, click Save to create the API endpoint:

clip_image017

 

Once you’ve defined have created your API endpoint, you can customize it.  You can also set policies such as caching rules, and usage quotas and rate limits that you can apply for developers calling the API. These features end up being extremely useful when publishing an API for external developers (or mobile apps) to consume, and help ensure that your APIs cannot be abused.

Developer Portal

Once your API has been published, click on the Developer Portal link.  This will launch a developer portal page that can be used by developers to learn how to consume and use the API that you have published.  It provides a bunch of built-in support to help you create documentation pages for your APIs, as well as built-in testing tools.  You’ll also get an impressive list of copy-and-paste-ready code samples that help teach developers how to invoke your APIs from the most popular programming languages.  Best of all this is all automatically generated for you:

clip_image023

You can test out any of the APIs you’ve published without writing a line code by using the interactive console.

clip_image025

Analytics and reports

Once your API is published, you’ll want to be able to track how it is being used.  Back in the publisher portal you can click on the Analytics page to find reports on various aspects of the API, such as usage, health, latency, cache efficiency and more. With a single click, you can find out your most active developers and your most popular APIs and products. You can get time series metrics as well maps to show what geographies drive them.

clip_image027

Learn More

We are really excited about the new API Management service, and it is going to make securely publishing and tracking external APIs much simpler.  To learn more about API Management, follow the tutorials below:

Cache: New Azure Redis Cache Service

I’m excited to announce the preview of a new Azure Redis Cache Service.

This new cache service gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offering, you get the rich feature set and ecosystem provided by Redis, together with reliable hosting and monitoring from Microsoft.

We are offering the Azure Redis Cache Preview in two tiers:

  • Basic – A single Cache node (ideal for dev/test and non-critical workloads)
  • Standard – A replicated Cache (Two nodes, a Master and a Slave)

During the preview period, the Azure Redis Cache will be available in 250 MB and 1 GB sizes. For a limited time, the cache will be offered free of charge, with a limit of two caches per subscription.

Creating a New Cache Instance

Getting started with the new Azure Redis Cache is easy.  To create a new cache, sign in to the Azure Preview Portal, and click New -> Redis Cache (Preview):

image

Once the new cache options are configured, click Create. It can take a few minutes for the cache to be created. Once created, the new cache shows a Running status and is ready for use with default settings:

clip_image042

Connect to the Cache

Application developers can use a variety of languages and corresponding client packages to connect to the Azure Redis Cache. Below we’ll use a .NET Redis client called StackExchange.Redis to connect to the cache endpoint. You can open any Visual Studio project and add the StackExchange.Redis NuGet package to it, via the NuGet package manager.

The cache endpoint and key can be obtained from the Properties blade and the Keys blade, respectively, for your cache instance within the Azure Preview Portal:

clip_image046

Once you’ve retrieved these you can create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, you can retrieve a reference to the Redis cache database by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods.

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure.
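Putting these pieces together, here is a minimal end-to-end sketch. The endpoint, password and key names are placeholders; note that StackExchange.Redis also lets you pass an expiry when setting a value, and that a single ConnectionMultiplexer instance is intended to be created once and shared across the application.

using System;
using StackExchange.Redis;

class RedisCacheSample
{
    // The multiplexer is expensive to create, so share one instance per application.
    private static readonly Lazy<ConnectionMultiplexer> Connection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                "contoso5.redis.cache.windows.net,ssl=true,password=<your-key>")); // placeholder endpoint and key

    static void Main()
    {
        IDatabase cache = Connection.Value.GetDatabase();

        // Store a value with a five-minute time-to-live, then read it back.
        cache.StringSet("greeting", "Hello World", TimeSpan.FromMinutes(5));
        string value = cache.StringGet("greeting");

        Console.WriteLine(value); // prints "Hello World"
    }
}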

Learn More

For more information, visit the following links:

Store: Support for EA customers and channel partners in the Azure Store

With today’s update we are expanding the Azure Store to customers and channel partners subscribed to Azure via a direct Enterprise Agreement (EA). Azure EA customers in North America and Europe can now purchase a range of application and data services from 3rd party providers through the Store and have these subscriptions automatically billed against their EA.

image

You will be billed against your EA each quarter for all of your Store purchases on a separate, consolidated invoice.  Access to the Azure Store can be managed by your EA Azure enrollment administrators by going to Manage Accounts and Subscriptions under the Accounts section in the Enterprise Portal, where they can disable or re-enable access to 3rd party purchases via the Store.  Please visit the Azure Store to learn more.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming