
Software Development Blogs: Programming, Software Testing, Agile Project Management


Architecture

Sending Windows logs to Papertrail with nxlog

Agile Testing - Grig Gheorghiu - Thu, 02/26/2015 - 01:04
I am revisiting Papertrail as a log aggregation tool. It's really easy to send Linux logs to Papertrail via syslog or rsyslog or syslog-ng (see this article on how to configure syslog with TLS) but to send Windows logs you need to jump through some hoops.

Papertrail recommends nxlog as their Windows log management tool of choice, so that's what I used. This Papertrail article explains how to install and configure nxlog on Windows (I recommend enabling TLS).  The nxlog.conf template file provided by Papertrail will send Windows Event logs over. I also wanted to send application-specific logs, so here's what I did:

1) Add an Input section to nxlog.conf for each directory containing the files you want to send to Papertrail. For example, if one of your applications logs to C:\MyApp1\logs and your log files end with .log, you could have this input section:

# Monitor MyApp1 log files 
<Input MyApp1>
 Module im_file
 File 'C:\\MyApp1\\logs\\*.log'
 Exec $Message = $raw_event;
 Exec if $Message =~ /GET \/ping/ drop();
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
 SavePos TRUE
 Recursive TRUE
</Input>

Some observations:

  • The name MyApp1 is the name of this Input section
  • The File statement points to the location and name of the log files
  • The first Exec statement saves the log line under consideration as the variable $Message
  • The second Exec statement drops messages that match a specific regular expression -- in my case 'GET /ping', which happens to be health checks from the load balancer that pollute the logs. You can replace this with any regular expression that filters out log lines you don't want sent to Papertrail
  • The next few statements were in the sample Input stanza from the template nxlog.conf file so I just left them there
2) Add more Input sections, one for each log location (i.e. multiple log files under a given directory) that you want to send to Papertrail. You need to give each Input section a unique name (e.g. MyApp1 above).
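For example, a second Input section for a hypothetical application that logs to C:\MyApp2\logs might look like the following sketch (the directory, file pattern and filters are placeholders to adapt to your own application):

# Monitor MyApp2 log files
<Input MyApp2>
 Module im_file
 File 'C:\\MyApp2\\logs\\*.log'
 Exec $Message = $raw_event;
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
 SavePos TRUE
 Recursive TRUE
</Input>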
3) Add a Route section for the Input sections defined previously. If you defined 2 Input sections MyApp1 and MyApp2, your Route section would look something like:

<Route 2>
 Path MyApp1, MyApp2 => filewatcher_transformer => syslogout
</Route>
The filewatcher_transformer section was already included in the sample nxlog.conf file from Papertrail. The Route section above says that the files processed by the 2 Input paths MyApp1 and MyApp2 will be processed through the statements defined in the filewatcher_transformer section, then will be sent to Papertrail by virtue of being processed through the statements defined in the syslogout section.
At this point, if you restart the nxlog service on your Windows box, you should start seeing log entries from your application(s) flowing into the Papertrail console.

Deep Learning without Deep Pockets

Now that you’ve transformed your system through successive evolutions of architecture goodness...you've made it cloud native, you now treat a fistful of datacenters as a single computer, you’ve microservicized it, you’ve containerized it, you’re continuously releasing and improving it, you’ve made it reactive, you’ve socialized it, you’ve mobilized it, you’ve Hadoop’ed it, you’ve made it DevOps friendly, and you have real-time dashboards that would make NORAD jealous...what’s next?

Deep learning is what’s next. Making machines that learn. The problem is how?

All the other transformations have been changes good programmers can learn to do. Deep learning is still deep magic. We are waiting for the Hadoop of deep learning to be built.

Until then, if you aren’t Google with Google sized clusters and cloisters of PhDs, what can you do? Greg Corrado, Senior Research Scientist at Google, gave a great presentation at the RE.WORK Deep Learning Summit 2015 (videos) that has some useful suggestions:

Categories: Architecture

The Microsoft Story for the Cloud

How has the Cloud changed your world?

One of the ways we challenge people is to ask, do you want to move to the Cloud, use the Cloud, or be the Cloud?

But to answer that well, you need to be really grounded in your vision for the future, and the role you want to play.

The Cloud creates a brave new world.  It enables and powers the Digital Economy.

Businesses need to cross the Cloud chasm (and some don’t make it) in an effort to stay relevant and to be what’s next.

Businesses need to re-imagine themselves and explore the art of the possible.

Business leaders and IT leaders need to help others forge their way forward in the Digital Frontier.

And it all starts with a story.

A story that inspires the hearts and minds so people can wrap their head around the challenge and the change.

I think Satya tells the Microsoft story for the Cloud in a very simple and compelling way:

"We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more." -- Satya Nadella, Microsoft CEO

That’s a simple yet powerful and compelling story of why we do what we do.

It’s a great way to re-imagine and inspire our transformation to a productivity and platform company in a Mobile-first, Cloud-first world.   And, it’s a very simple story around productivity and empowerment that inspires and drives people in various roles and responsibilities to co-create the future in a profound way.

What is your simple story for how you re-imagine you or your business in a Mobile-First, Cloud-First world?

You Might Also Like

Business Scenarios for the Cloud

If You Want to Thrive at Microsoft

Microsoft Explained: Making Sense of the Microsoft Platform Story

Satya Nadella is All About Customer Focus, Employee Engagement, and Changing the World

Satya Nadella on The Future is Software

Satya Nadella on Everyone Has to Be a Leader

The Microsoft Story

Categories: Architecture, Programming

Why We Are Moving to the Cloud: Agility, Economics, and Innovation

I was reading the IT Showcase’s page on the Cloud platform.

I really liked the simple little story around why we are moving to the Cloud:

“Three words: Agility, economics and innovation. Cloud technology satisfies the CEO's desire for greater business agility, the CFO's desire to streamline operations, and the CMO's desire for a more innovative way to engage customers.”

Some people move to the Cloud because they see an ROI play.  Others move because they see the opportunity cost of staying put.  Others move simply because they don’t want to be left behind.

The most common reasons I see are business agility and staying relevant in today’s world.

People are using the Cloud to re-imagine the customer experience, transform the workforce and employee productivity, and to transform operations and back-office activities.

In all cases, these transformations lead to business-model innovation and new opportunities to create and capture value.

Value is a moving target and the Cloud can help you stay in the game.

Are you in the game?

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

The Future of Jobs

Management Innovation is at the Top of the Innovation Stack

McKinsey on Unleashing the Value of Big Data

Reenvision Your Customer Experience

Reenvision Your Operations

Categories: Architecture, Programming

Introducing Structurizr

Coding the Architecture - Simon Brown - Tue, 02/24/2015 - 16:36

I've mentioned Structurizr in passing, but I've never actually written a post that explains what it is and why I've built it. First, some background.

"What tool do you use to draw software architecture diagrams?"

I get asked this question almost every time I run one of my workshops, usually just after the section where I introduce the C4 model and show some example diagrams. My answer to date has been "just OmniGraffle or Visio", and recommending that people use a drawing tool to create software architecture diagrams has always bugged me. My Simple Sketches for Diagramming Your Software Architecture article provides an introduction to the C4 model and my thoughts on UML.

Once you have a simple way to think about and describe the architecture of a software system (and this is what the C4 model provides), you realise that the options for communicating it are relatively limited. And this is where the idea for a simple diagramming tool was born. In essence, I wanted to build a tool where the data is sourced from an underlying model and all I need to do is move the boxes around on the diagram canvas.

Part 1: Software architecture as code

Structurizr initially started out as a web application where you would build up the underlying model (the software systems, people, containers and components) by entering information about them through a number of HTML forms. Diagrams were then created by selecting which type of diagram you wanted (system context, container or component) and then by specifying which elements you wanted to see on the diagram. This did work but the user experience, particularly related to data entry, was awful, even for small systems.

Behind the scenes of the web application was a simple collection of domain classes that I used to represent software systems, containers and components. Creating a software architecture model using these classes was really succinct, and it struck me that perhaps this was a better option. The trade-off here is that you need to write code in order to create a software architecture model but, since software architects should code, this isn't a problem. ;-)

These classes have become what is now Structurizr for Java, an open source library for creating software architecture models as code. Having the software architecture model as code opens a number of opportunities for creating the model (e.g. extracting components automatically from a codebase) and communicating it (e.g. you can slice and dice the model to produce a number of different views as necessary). Since the models are code, they are also versionable alongside your codebase and can be integrated with your build system to keep your models up to date. The models themselves can then be output to another tool for visualisation.

Part 2: Web-based software architecture diagrams

structurizr.com is the other half of the story. It's a web application that takes a software architecture model (via an API) and provides a way to visualise it. Aside from changing the colour, size and position of the boxes, the graphical representation is relatively fixed. This in turn frees you up from messing around with creating static diagrams in drawing tools such as Visio.

A screenshot of Structurizr.

As far as features go, the list currently includes an API for getting/putting models, making models public/private, embedding diagrams into web pages, creating diagrams based upon different page sizes (paper and presentation slide sizes), exporting diagrams to a 300dpi PNG file (for printing or inclusion in a slide deck), automatic generation of a key/legend and a fullscreen presentation mode for showing diagrams directly from the tool. The recent webinar I did with JetBrains includes more information and a demo. Pricing is still to be confirmed, but there will be a free tier for individual use and probably some paid tiers for teams and organisations (e.g. for sharing private models).


An embedded software architecture diagram from structurizr.com (you can move the boxes).

It's worth pointing out that structurizr.com is my vision of what I want from a simple software architecture diagramming tool, but you're free to take the output from the open source library and create your own tooling to visualise the model. Examples include an export to DOT format (for importing into something like Graphviz), XMI format (for importing into UML tools), a desktop app, etc.

That's a quick introduction to Structurizr and, although it's still a work in progress, I'm slowly adding more users via a closed beta, with the goal of opening up registration next month. It definitely scratches an itch that I have, and I hope other people will find it useful too.

Categories: Architecture

Introducing ASP.NET 5

ScottGu's Blog - Scott Guthrie - Mon, 02/23/2015 - 21:41

The first preview release of ASP.NET 1.0 came out almost 15 years ago.  Since then millions of developers have used it to build and run great web applications, and over the years we have added and evolved many, many capabilities to it. 

I'm excited today to post about a new release of ASP.NET that we are working on, called ASP.NET 5.  This new release is one of the most significant architectural updates we've done to ASP.NET.  As part of this release we are making ASP.NET leaner, more modular, cross-platform, and cloud optimized.  ASP.NET 5 is now available as a preview release, and you can start using it today by downloading the latest CTP of Visual Studio 2015, which we just made available.

ASP.NET 5 is an open source web framework for building modern web applications that can be developed and run on Windows, Linux and the Mac. It includes the MVC 6 framework, which now combines the features of MVC and Web API into a single web programming framework.  ASP.NET 5 will also be the basis for SignalR 3 - enabling you to add real time functionality to cloud connected applications. ASP.NET 5 is built on the .NET Core runtime, but it can also be run on the full .NET Framework for maximum compatibility.

With ASP.NET 5 we are making a number of architectural changes that makes the core web framework much leaner (it no longer requires System.Web.dll) and more modular (almost all features are now implemented as NuGet modules - allowing you to optimize your app to have just what you need).  With ASP.NET 5 you gain the following foundational improvements:

  • Build and run cross-platform ASP.NET apps on Windows, Mac and Linux
  • Built on .NET Core, which supports true side-by-side app versioning
  • New tooling that simplifies modern Web development
  • Single aligned web stack for Web UI and Web APIs
  • Cloud-ready environment-based configuration
  • Integrated support for creating and using NuGet packages
  • Built-in support for dependency injection
  • Ability to host on IIS or self-host in your own process

The end result is an ASP.NET that you'll feel very familiar with, and which is also now even more tuned for modern web development.

Flexible, Cross-Platform Runtime

ASP.NET 5 works with two runtime environments to give you greater flexibility when hosting your app. The two runtime choices are:

.NET Core – a new, modular, cross-platform runtime with a smaller footprint.  When you target the .NET Core, you’ll be able to take advantage of some exciting new benefits:

1) You can deploy the .NET Core runtime with your app which means your app will run with this deployed version of the runtime rather than the version of the runtime that is installed on the host operating system. Your version of the runtime runs side-by-side with versions for other apps. You can update that runtime, if needed, without affecting other apps, or you can continue running on the same version even though other apps on the system have been updated.  This makes app deployment and framework updates much easier and less impactful to other apps running on a system.

2) Your app is only dependent on features it really needs. Therefore, you are never prompted to update/service the runtime for features that are not relevant to your app. You will spend less time testing and deploying updates that are perhaps unrelated to the functionality of your app.

3) Your app can now be run cross-platform. We will provide a cross-platform version of .NET Core for Windows, Linux and Mac OS X systems.  Regardless of which operating system you use for development or which operating system you target for deployment, you will be able to use .NET. The cross-platform version of the runtime has not been released yet, but we are working on it on GitHub and plan to have an official preview of it out soon.

.NET Framework – The API for .NET Core is currently more limited than the full .NET Framework, so you may need to modify existing apps to target .NET Core. If you don't want to have to update your app, you can instead run ASP.NET 5 applications on the full .NET Framework (version 4.5.2 and above).  When doing this you have access to the complete set of .NET Framework APIs. Your existing applications and libraries will work without modification on this runtime.

MVC 6 - a unified programming model

MVC, Web API and Web Pages provide complementary functionality and are frequently used together when developing a solution. However, in past ASP.NET releases, these programming frameworks were implemented separately and therefore contained some duplication and inconsistencies. With MVC 6, we are merging those models into a single programming model. Now, you can create a single web application that handles the Web UI and data services without needing to reconcile differences in these programming frameworks. You will also be able to seamlessly transition a simple site first developed with Web Pages into a more robust MVC application.

You can now return Razor views and content-negotiated data from the same controller and using the same MVC filter pipeline.
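As a rough, hypothetical sketch (not from the original announcement), a single MVC 6 controller can expose both a Razor view action and a content-negotiated data action; the Product class and the route template below are illustrative assumptions:

using System.Collections.Generic;
using Microsoft.AspNet.Mvc;

public class Product
{
    public string Name { get; set; }
}

public class ProductsController : Controller
{
    // Web UI: returns a Razor view rendered as HTML
    public IActionResult Index()
    {
        return View();
    }

    // Web API style: the returned objects are content-negotiated (e.g. serialized as JSON)
    [HttpGet("api/products")]
    public IEnumerable<Product> List()
    {
        return new[] { new Product { Name = "Widget" } };
    }
}

Both actions flow through the same MVC filter pipeline, which is the point of the unification.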

In addition to unifying the existing frameworks we are also adding new features to make server-side Web development easier, like the new tag helpers feature. Tag helpers let you use HTML helpers in your views by simply extending the semantics of tags in your markup.

So instead of writing this:

@Html.ValidationSummary(true, "", new { @class = "text-danger" })
<div class="form-group">
    @Html.LabelFor(m => m.UserName, new { @class = "col-md-2 control-label" })
    <div class="col-md-10">
        @Html.TextBoxFor(m => m.UserName, new { @class = "form-control" })
        @Html.ValidationMessageFor(m => m.UserName, "", new { @class = "text-danger" })
    </div>
</div>

You can instead write this:

<div asp-validation-summary="ModelOnly" class="text-danger"></div>
<div class="form-group">
    <label asp-for="UserName" class="col-md-2 control-label"></label>
    <div class="col-md-10">
        <input asp-for="UserName" class="form-control" />
        <span asp-validation-for="UserName" class="text-danger"></span>
    </div>
</div>

Tag helpers make authoring your views more natural and readable. They also simplify customizing the output of HTML helpers with additional markup while letting you take full advantage of the HTML editor.

For more examples of creating MVC 6 apps, see these tutorials.

Modern web development

This week's ASP.NET 5 preview also includes a number of other great development features that enable you to build even better web applications:

Dynamic Development

In Visual Studio 2015, we take advantage of dynamic compilation to provide a streamlined developer experience. You no longer have to compile your application every time you want to see a change. Instead, just (1) edit the code, (2) save your changes, (3) refresh the browser, and then (4) see your change automatically appear.


You enjoy a development experience that is similar to working with an interpreted language without sacrificing the benefits of a compiled language.

You can also optionally use other code editors to work on your ASP.NET 5 projects. Every function within the Visual Studio user interface is matched with cross-platform command-line operations.

Integration with Popular Web Development Tools (Bower, Grunt and Gulp)

Another exciting feature in Visual Studio 2015 is built-in support for Bower, Grunt, and Gulp - popular open source tools that we think should be in every Web developer’s toolkit.

  • Bower is a package manager for client-side libraries, including both JavaScript and CSS libraries.
  • Grunt and Gulp are task runners, which help you to automate your web development workflow. You can use Grunt or Gulp for tasks like compiling LESS, CoffeeScript, or TypeScript files, running JSLint, or minifying JavaScript files.

Bower: To add a JavaScript library to your ASP.NET project, add it directly to the bower.json config file:

[Screenshot: editing bower.json with IntelliSense in Visual Studio]

Notice that Visual Studio gives you IntelliSense with a list of available packages. The next time you open the solution, Visual Studio automatically restores any missing packages, so you don’t need to check the packages into source control.

For server-side packages, you’ll still use NuGet Package Manager.

Grunt: In modern web development, you can find yourself managing a lot of tasks, just to build your app: Compiling LESS, TypeScript, or CoffeeScript files, linting, JavaScript minification, running JS unit tests, and so on. Every team will have its own set of requirements, depending on the particular tools that you use. Task runners make it easier to manage and coordinate these tasks. Visual Studio 2015 will support two popular task runners, Grunt and Gulp.

For example, let’s say you want to use Grunt to compile LESS files. Just go into package.json and add the grunt-contrib-less package, which is a third-party Grunt plugin.

[Screenshot: adding the grunt-contrib-less package to package.json]

Use the new Task Runner Explorer in Visual Studio 2015 to bind the task to a build step (pre-build, post-build, clean, or when the solution is opened).


This makes it incredibly easy to automate common tasks within your projects - and have them work both for you and across a team-wide project.

Simplified dependency management

In ASP.NET 5 you manage dependencies by adding NuGet packages. You can use the NuGet Package Manager or simply edit the JSON file (project.json) that lists the NuGet packages and versions used in your project. The project.json file is easy to work with and you can edit it with any text editor, which enables you to update dependencies even when the app has been deployed to the cloud.

The project.json file looks like:

[Screenshot: a typical project.json file]

In Visual Studio 2015, IntelliSense assists you with finding the available NuGet packages that you can add as dependencies.


And IntelliSense can even help you with the available versions:

[Screenshot: IntelliSense for available package versions]

Cloud-ready configuration

In ASP.NET 5, we eliminated the need to use a Web.config file for configuration values. We wanted to make it easier for you to deploy your app to the cloud and have the app automatically read the correct configuration values for that environment. The new system enables you to request named values from a variety of sources (such as JSON, XML, or environment variables). You can decide which formats work best in your situation.

In the Startup.cs file, you can now add or remove the sources for configuration values.

[Code screenshot: adding configuration sources in Startup.cs]

The above code snippet shows a project that is set up to retrieve configuration values from a JSON file and environmental variables. You can change this code if you need to specify other sources. In the specified config.json file, you could provide the values.

[Screenshot: example values in config.json]

In your host environment, such as Azure, you can set the environmental variables and those values are automatically used instead of local configuration values after the application is deployed. You can deploy your application without worrying about publishing test values.
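To make this concrete, here is a minimal sketch of what the preview-era startup code looked like; the namespace and chained method names reflect the beta tooling and may differ in later releases, and config.json is simply the file name used above:

using Microsoft.Framework.ConfigurationModel;

public class Startup
{
    public Startup()
    {
        // Values from config.json are read first; environment variables
        // (for example, ones set in the Azure portal) override them when present.
        // In a real project the result would typically be stored on a property
        // for use in ConfigureServices.
        var configuration = new Configuration()
            .AddJsonFile("config.json")
            .AddEnvironmentVariables();
    }
}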

Dependency injection (DI)

Dependency Injection (DI) is supported in existing ASP.NET frameworks, like MVC, Web API and SignalR, but not in a consistent and holistic way. ASP.NET 5 provides a built-in DI abstraction that is available in a consistent way throughout the entire web stack. You can access services at startup, in middleware, in filters, in controllers, in model binding and virtually any part of the pipeline where you want to use your services. ASP.NET 5 includes a minimalistic DI container to bootstrap the system, but you can easily replace the default container with your container of choice (Autofac, Ninject, etc). Services can be singleton, scoped to the request or transient.

For example, to see how to use constructor injection with ASP.NET MVC 6, create a new ASP.NET 5 Starter Web project and add a simple time service:

using System;

namespace WebApplication1
{
    public class TimeService
    {
        public TimeService()
        {
            Ticks = DateTime.Now.Ticks.ToString();
        }

        public String Ticks { get; set; }
    }
}

The simple service class sets the current Ticks when the constructor is called.

Next, register the time service as a transient service in the ConfigureServices method of the Startup class:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddTransient<TimeService>();
}

Then, update the HomeController to use constructor injection and to write out the Ticks value that was captured when the TimeService object was created.

public class HomeController : Controller
{
    public TimeService TimeService { get; set; }

    public HomeController(TimeService timeService)
    {
        TimeService = timeService;
    }

    public IActionResult About()
    {
        ViewBag.Message = TimeService.Ticks + " From Controller";
        System.Threading.Thread.Sleep(1);
        return View();
    }

    // Code removed for brevity
}

Notice the controller doesn't create a TimeService. It's injected when the controller is instantiated.

In MVC 6 you can use the [Activate] attribute to inject services via properties. You can use [Activate] not just on controllers but also on filters and view components. This means you can simplify your controller code like this:

public class HomeController : Controller
{
    [Activate]
    public TimeService TimeService { get; set; }

    // Code removed for brevity
}

MVC 6 also supports DI into Razor views via the @inject keyword. In the code below, I’ve injected the time service into the about view directly and defined a TimeSvc property by which it can be accessed:

@using WebApplication1
@inject TimeService TimeSvc

<h3>@ViewBag.Message</h3>

<h3>
    @TimeSvc.Ticks From Razor
</h3>

When you run the app, you can see different ticks values from the controller and the view.


Fast HTTP performance

ASP.NET 5 introduces a new HTTP request pipeline that is modular so you can add only the components that you need. The pipeline is also no longer dependent on System.Web. By reducing the overhead in the pipeline, your app can experience better throughput and a more tuned HTTP stack. The new pipeline is based on many of the learnings from the Katana project and also supports OWIN.

To customize which components are used in the pipeline, use the Configure method in your Startup class. The Configure method is used to specify which middleware you want to “use” in your request pipeline. ASP.NET 5 already includes ported versions of many of the middleware from the Katana project, like middleware for static files, authentication and diagnostics. The following code shows some of the middleware you can add to or remove from the pipeline for your project.

public void Configure(IApplicationBuilder app)
{
    // Add static files to the request pipeline.
    app.UseStaticFiles();

    // Add cookie-based authentication to the request pipeline.
    app.UseIdentity();

    // Add MVC and routing to the request pipeline.
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller}/{action}/{id?}",
            defaults: new { controller = "Home", action = "Index" });
    });
}

You can also write your own middleware components and add them to the pipeline.

Open source

We are developing ASP.NET 5 as an open source project on GitHub. You can view the code, see when changes were made, download the code, and submit changes. We believe making ASP.NET 5 open source will make it easier for you to understand the code, understand our intended direction, and contribute to the project.


Docs and tutorials

To get started with ASP.NET 5 you can find docs and tutorials on the ASP.NET site at http://asp.net/vnext. The following tutorials will guide you through the steps of creating your first ASP.NET 5 project.

Also read this article for even more ASP.NET and Web Development improvements coming this week.

Hope this helps,

Scott

Categories: Architecture, Programming

HappyPancake: a Retrospective on Building a Simple and Scalable Foundation

This is a guest repost by Rinat Abdullin, who worked on HappyPancake, the largest free dating site in Sweden. Initially written in ASP.NET with a Microsoft SQL Server database, it eventually became overly complex and expensive to scale. This is the last post in a nearly two-year-long series of engaging articles on the evolution of the project. For the complete list please see the end of this article.

Our project at HappyPancake completed this week. We delivered a simple and scalable foundation for the next version of the largest free dating web site in Sweden (with presence in Norway and Finland).

Journey

Below is the short map of that journey. It lists technologies and approaches that we evaluated for this project. Yellow regions highlight items which made their way into the final design.

Project Deliverables
Categories: Architecture

The Great Motivational Quotes Revamped

When you need to make things happen, motivational quotes can help you dig deep and get going.

I put together a very comprehensive collection of the world’s best motivational quotes a while back.

It was time for a refresh.  Here it is:

Motivational Quotes – The Great Motivational Quotes Collection

Imagine motivational wisdom of the ages and modern sages right at your fingertips all on one page.  I included motivational quotes from Bruce Lee, Tony Robbins, Winston Churchill, Ralph Waldo Emerson, Jim Rohn, and more.

See if you can find at least three motivational quotes that you can take with you on the road of life, to help you deal with setbacks and challenges, and to unleash your inner-awesome.

Getting Started with Motivational Quotes

I’ll start you off.   If you don’t already have these in your personal motivational quotes collection, here are a few that I draw from often:

“If you’re going through hell, keep going.” — Winston Churchill

“When it’s time to die, let us not discover that we have never lived.” -Henry David Thoreau

“Don’t ask yourself what the world needs, ask yourself what makes you come alive. And then go do that. Because what the world needs is people who have come alive.”— Howard Thurman

How’s that for a starter set?

Build Better Motivational Thought Habits

You can train your brain with motivational mantras.     Our thoughts are habits.   If you want to build better thought habits, then feed on some of the best motivational quotes of all time.

“An ounce of action is worth a ton of theory.” – Ralph Waldo Emerson

“Positive thinking won’t let you do anything but it will let you do everything better than negative thinking will.” -– Zig Ziglar

“The only person you are destined to become is the person you decide to be.” – Ralph Waldo Emerson

If you train yourself well, you won’t entirely eliminate motivational setbacks, but you’ll be able to defeat procrastination, and you’ll be able to bounce back faster when you find yourself in a slump.   Motivation is a skill you can build, and it will serve you well, in work and life.

You Create Your Future

The most important motivational concept to hold on to is the idea that you create your future.  Or, as Wayne Dyer puts it:

“Go for it now. The future is promised to no one.”

So go for the bold, and get your game face on.

If you need some help kick-starting your fire, stroll through the motivational quotes a few times until something really sinks in or clicks for you.  Life’s better with the right words, and there are just the right words already out there, just waiting to be found.

Enjoy and take your time sifting through the Motivational Quotes – The Great Motivational Quotes Collection.

Also, if you have a favorite motivational quote that I don’t have listed, let me know.

You Might Also Like

The Great Inspirational Quotes Revamped

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Personal Development Quotes Collection Revamped

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming

The Great Inspirational Quotes Collection Revamped

I think of inspiration simply as “breathe life into.”

Whether you're shipping code, designing the next big thing, or simply making things happen, inspirational quotes can help keep you going.

In the spirit of helping people find their Eye of the Tiger or get their mojo on, I’ve put together a hand-crafted collection of the ultimate inspirational quotes:

The Inspirational Quotes Collection

If you’ve seen my collection of inspirational quotes before, it’s completely revamped.  It should be much easier to browse all of the inspirational quotes now, so you can see some old familiar quotes that you may have heard long ago, as well as many inspirational quotes you have never heard of before.

Dive in, explore the collection of inspirational quotes, and see if you can find at least three inspirational quotes that breathe new life into your moment, your day, your work, or anything you do.

The Power of Inspirational Quotes

Inspirational quotes can help us move mountains.   The right inspirational words and ideas can help us boldly go where we have not gone before, as well as conquer our fears and soar to new heights.

Or, the right inspirational quote can simply help us roar a little louder inside, when we need it most.

Life isn’t always a bowl of cherries.  And work can be an incredible challenge.  And sometimes even our best-laid plans go up in flames.

So having a repertoire of inspirational quotes and inspiring mantras at your mental fingertips can help you roll with the punches and keep going.

One of the most important inspirational ideas I learned early on goes like this:

Whatever doesn’t kill you makes you stronger.

It helped me turn trials into triumphs, and eventually learn to take on big challenges as a way to grow.

Another inspirational idea that really helped me find my way forward is by Ralph Waldo Emerson, and, it goes like this:

“Do not follow where the path may lead. Go, instead, where there is no path and leave a trail.”

Whenever I went on a new journey, down an unfamiliar path, it helped remind me that I don’t always need a trail, and that many times, it’s about blazing my own trail.

The power of inspirational quotes is their ability to light a fire inside and fan the flames until we go and blaze a trail that leaves ourselves, and others, in awe.

What Lies Within Us

Perhaps, the greatest inspirational quote of all time is another amazing quote by Emerson:

“What lies behind us and what lies before us are tiny matters compared to what lies within us.”

It’s an awe-inspiring reminder to not only do what makes us come alive, but to realize our potential and unleash what we are capable of.

It’s Better to Burn Out than to Fade Away

So many inspirational quotes remind us that life is short and that we have to go for it.   But maybe George Bernard Shaw said it best:

“I want to be all used up when I die.”

One quote that I think about often is by Seth Godin:

“Life is like skiing.  Just like skiing, the goal is not to get to the bottom of the hill. It’s to have a bunch of good runs before the sun sets.”

It’s all about making the journey worth it.

When It’s Over

What do you do when it’s over?  It all depends.  Dr. Seuss has an interesting twist:

“Don’t cry because it’s over. Smile because it happened.”

But the one that I find has true wisdom is from Dave Weinbaum:

“The secret to a rich life is to have more beginnings than endings.”

Here’s to many more new beginnings in your life.

Enjoy and be sure to explore The Inspirational Quotes Collection to soar or roar in your own personal way.

You Might Also Like

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Personal Development Quotes Collection Revamped

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming

Stuff The Internet Says On Scalability For February 20th, 2015

Hey, it's HighScalability time:


Networks are everywhere; they can even help reveal disease connections.
  • trillions: number of photons constantly hitting your eyes; $19 billion: Snapchat valuation;  8.5K: average number of questions asked on Stack Overflow per day
  • Quotable Quotes:
    • @BenedictEvans: End of 2014: 3.75-4bn mobiles ~1.5bn PCs  7-800m consumer PCs 1.2-1.3bn closed Android 4-500m open Android 650-675m iOS 80m Macs, ~75m Linux
    • @JeremiahLee: “Humans only use 10% of their internet.” —@nvcexploder #NodeSummit
    • beguiledfoil: Javavu - The feeling experienced when you see new languages make the same mistakes Java made 20 years ago and momentarily mistake said language for Java.
    • @ewolff: If Conway's Law is so important - are #Microservices more an organizational approach than an architecture?
    • @KentLangley: "Apache Spark Continues to Spread Beyond Hadoop." I would say supplant. 
    • Database Soup: An in-memory database is one which lacks the capability of spilling to disk.
    • Matthew Dillon: 1-2 year SSD wear on build boxes has been minimal.
    • @gwenshap: Except there is one writer and many readers - so schema and validation must be done on ingest. Anywhere else is just shifting responsibility
    • @jaykreps: Startup style of engineering (fail fast & iterate) doesn't work for every domain, esp. databases & financial systems
    • Taulant Ramabaja: Decentralization is not a goal in and of itself, it is a strategy
    • Eli Reisman: Etsy runs more Hadoop jobs by 7am than most companies do all day.
    • Dormando: We're [memcached] not sponsored like redis is. I've only ever lost money on this venture.
    • The Trust Engineers: There are more Facebook users than Catholics.

  • Exponent...The new integration is hardware + software + services. Not services like disk storage, but services like HomeKit, HealthKit, Siri, CarPlay, Apple Pay. Services that touch every part of our lives. Apple doesn't build cars, stores, or information services, it wraps them with an Apple layer that provides the customer with an integrated experience while taking full advantage of modularity. Modularity wrapped with integration. Owning the hardware is a better profit model than services in the cloud.

  • Quite a response to You Don't Like Google's Go Because You Are Small on reddit. A vigorous 500+ comments were written. Golang isn't perfect. How disappointing, so many things are.

  • After making Linux work well on multiple cores, the next bump in performance comes from Improving Linux networking performance. It's a hard job. For a 100Gb adapter on a 3GHz CPU there are only about 200 CPU cycles to process each packet. Good breakdown of time budgets for various instructions. The approach is improved batching at multiple layers of the stack and better memory management, which leads directly into Toward a more efficient slab allocator.

  • The process behind creating a Google Doodle for Alessandro Volta’s 270th Birthday reminds me a lot of the process of making old style illustrations as described in Cartographies of Time: A History of the Timeline. The idea is to encode symbolically as much of the important information as possible in a single diagram. The coded icon of a tiny skull could mean, for example, a king died while on the throne. A single flame could stand for the fall of man. This art is not completely lost with today's need to convey a lot of information on small screens. This sort of compression has advantages: Strass believed that a graphic representation of history held manifold advantages over a textual one: it revealed order, scale, and synchronism simply and without the trouble of memorization and calculation.
Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...
Categories: Architecture

Exploring container platforms: StackEngine

Xebia Blog - Fri, 02/20/2015 - 15:51

Docker has been around for more than a year already, and there are a lot of container platforms popping up. In this series of blog posts I will explore these platforms and share some insights. This blog post is about StackEngine.

TL;DR: StackEngine is (for now) just a nice frontend to the Docker binary. Nothing...

Free Book: Is Parallel Programming Hard, And, If So, What Can You Do About It?

"The trouble ain’t that people are ignorant: it’s that they know so much that ain’t so." -- Josh Billings

 

Is Parallel Programming Hard? Yes. What Can You Do About It? To answer that, Paul McKenney, Distinguished Engineer at IBM Linux Technology Center and veteran of parallel powerhouses SRI and Sequent, has written an epic 400+ page book: Is Parallel Programming Hard, And, If So, What Can You Do About It?

The goal of the book? "To help you understand how to design shared-memory parallel programs to perform and scale well with minimal risk to your sanity."

So it's not a book about parallelism in the sense of getting the most out of a distributed system, it's a book in the mechanical-sympathy sense of getting the most out of a single machine.

Some example section titles: Introduction, Alternatives to Parallel Programming, What Makes Parallel Programming Hard, Hardware and its Habits, Tools of the Trade, Counting, Partitioning and Synchronization Design, Locking, Data Ownership, Deferred Processing, Data Structures, Validation, Formal Verification, Putting it All Together, Advanced Synchronization, Parallel Real-Time Computing, Ease of Use, Conflicting Visions of the Future.

To get a feel for the kind of things you'll learn in the book, here's an interview where Paul talks about what in parallel programming is the hardest to master:

Categories: Architecture

Azure: Machine Learning Service, Hadoop Storm, Cluster Scaling, Linux Support, Site Recovery and More

ScottGu's Blog - Scott Guthrie - Wed, 02/18/2015 - 17:06

Today we released a number of great enhancements to Microsoft Azure. These include:

  • Machine Learning: General Availability of the Azure Machine Learning Service
  • Hadoop: General Availability of Apache Storm Support, Hadoop 2.6 support, Cluster Scaling, Node Size Selection and preview of next Linux OS support
  • Site Recovery: General Availability of DR capabilities with SAN arrays

I've also included details in this blog post of other great Azure features that went live earlier this month:

  • SQL Database: General Availability of SQL Database (V12)
  • Web Sites: Support for Slot Settings
  • API Management: New Premium Tier
  • DocumentDB: New Asia and US Regions, SQL Parameterization and Increased Account Limits
  • Search: Portal Enhancements, Suggestions & Scoring, New Regions
  • Media: General Availability of Content Protection Service for Azure Media Services
  • Management: General Availability of the Azure Resource Manager

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Machine Learning: General Availability of Azure ML Service

Today, I’m excited to announce the General Availability of our Azure Machine Learning service.  The Azure Machine Learning Service is a powerful cloud-based predictive analytics service that makes it possible to quickly create analytics solutions.  It is a fully managed service - which means you do not need to buy any hardware nor manage VMs manually.

Data Scientists and Developers can use our innovative browser-based machine learning IDE to quickly create and automate machine learning workflows.  You can literally drag/drop hundreds of existing ML libraries to jump-start your predictive analytics solutions, and then optionally add your own custom R and Python scripts to extend them.  Our Machine Learning IDE works in any browser and enables you to rapidly develop and iterate on solutions:

[Screenshot: the browser-based Azure Machine Learning IDE]

With today's General Availability release you can easily discover and create web services, train/retrain your models through APIs, manage endpoints and scale web services on a per customer basis, and configure diagnostics for service monitoring and debugging. Additional new capabilities with today's release include:

  • The ability to create a configurable custom R module, incorporate your own train/predict R-scripts, and add python scripts using a large ecosystem of libraries such as numpy, scipy, pandas, scikit-learn etc. You can now train on terabytes of data using “Learning with Counts”, use PCA or one-class SVM for anomaly detection, and easily modify, filter, and clean data using familiar SQLite.
  • Azure ML Community Gallery that allows you to discover and learn from experiments, and share them through Twitter and LinkedIn. You can purchase marketplace apps through an Azure subscription and consume finished web services for Recommendation, Text Analytics, and Anomaly Detection directly from the Azure Marketplace.
  • A step-by-step guide for the Data Science journey from raw data to a consumable web service to ease the path for cloud-based data science. We have added the ability to use popular tools such as iPython Notebook and Python Tools for Visual Studio along with Azure ML.

Get Started

You can learn the basics of predictive analytics and machine learning using our step-by-step data science guide and tutorials.  No sign-up or credit card is required to get started using Azure Machine Learning (you can use the machine learning IDE and try experiments for free):

[Screenshot: getting started with Azure Machine Learning for free]

Also browse our machine learning gallery to run existing machine learning experiments others have already built - and optionally publish your own experiments for others to learn from:

[Screenshot: the Azure Machine Learning gallery]

Machine Learning and predictive analytics will fundamentally change the way all applications are built in the future.  The new Azure Machine Learning service provides an incredibly powerful and easy way to achieve this.  Start using it for production apps today!

HDInsight: General Availability of Apache Storm, Cluster Scaling, Hadoop 2.6, Node Sizes, and Preview of HDInsight on Linux

Today I’m happy to also announce several major enhancements to HDInsight, our managed Hadoop service for powering Big Data workloads in Azure.

General Availability of Apache Storm support

With today's release, we are making it easy for you to do real-time streaming analytics using Hadoop by providing Apache Storm as a fully managed Service and making it generally available on HDInsight. This makes it incredibly easy to stand up and manage Storm clusters. As part of the Storm service on HDInsight we have improved productivity by enabling some key features:

  • Integration with our Azure Event Hubs service - which allows you to easily process any data that is collected via Event Hubs
  • First class .NET experience on top of Apache Storm giving you the option to use both Java and .NET with it
  • A library of spouts and bolts lets you easily integrate other Azure services like SQL, HBase and DocumentDB
  • Visual Studio integration that makes it easy for developers to do full project management from within the Visual Studio environment

Creating a Storm cluster and running a sample topology

You can easily spin up a new Storm cluster from the Azure management portal. The Storm Dashboard allows you to either upload an existing Storm topology or pick one of the sample topologies from the dropdown.  Topologies can be authored in code, or higher level programming models like Trident can be used. You can also monitor and manage all the topologies that are currently on your cluster via the Storm Dashboard.


.NET Topologies and a Visual Studio Experience

One of the big improvements we have done on top of Storm is to enable developers to write Storm topologies in .NET. One of the things I am particularly excited about with the Storm release is the Visual Studio experience that we have enabled for Storm on HDInsight. With the latest version of the Azure SDK, you will get Storm project templates under HDInsight. This will quickly get you started with writing Storm topologies without having to worry or setup the right references or write the skeleton code that is needed for every Storm topology.

Since Storm is available as part of the HDInsight service, all HDInsight features also apply to Storm clusters. For example, you can easily scale up or scale down a Storm cluster with no impact to the existing running topologies. This will enable you to easily grow or shrink Storm clusters depending on the speed of data ingest and latency requirements, with no impact on the data which is being processed.  At the time of cluster creation you have the choice to pick from a long list of available VMs to use for your Storm cluster on HDInsight.

HDInsight 3.2 Support

I’m pleased to announce the availability of the next major version of Hadoop in HDInsight clusters for Windows and Linux. This includes Hadoop 2.6, Hive 0.14, and substantial updates to all of the components in the stack.  Hive 0.14 contains work to improve performance and scalability through Tez, adds a powerful cost based optimizer, and introduces capabilities for handling UPDATE, INSERT and DELETE SQL statements, temporary tables which live for the duration of a development session and more. You can find more details on the Hive 0.14 release here.   Pig 0.14 adds support for ORC, allowing a single high performance format to be leveraged across Pig and Hive.  Additionally Pig can now target Tez instead of Map/Reduce, resulting in substantial performance improvements by changing the execution engine. Details on the Pig 0.14 release are here.  These bring the latest improvements in the open source ecosystem to HDInsight. 

To get started with a 3.2 cluster, use the Azure Management portal or the command-line. In addition to the VS tools for Storm, we've also updated the VS tools to include Hive query authoring.  We've also added improved statement completion, local validation, access in Visual Studio to the YARN task logs, and support for HDInsight clusters on Linux. In order to get these, you just need to install the Azure SDK for Visual Studio which contains the latest HDInsight tooling.

Cluster Scaling

Many of our customers have asked for the ability to change HDInsight cluster sizes on the fly.  This capability is now accessible in both the Azure portal, as well as through the command line and SDK's.  You can grow or shrink a Hadoop cluster to fit your workload by simply dragging the sizing slider.  We'll add more nodes to your cluster while it is processing and when your larger jobs are done, you can reduce the size of the cluster.  If you need more cores available in your subscription, you can open a Billing support ticket to request a larger quota. 

Node Size Selection

Finally, you can also now specify the VM sizes for the nodes within your HDInsight cluster.  This lets you optimize your cluster's resources to fit your workload.  We've made the entire A and D series of VM sizes available.  For each of the different types of roles within a cluster, we'll let you specify the machine type.  This allows you to tune the amount of CPU, RAM and SSD available to your jobs. 

HDInsight on Linux

Today we are also releasing a preview version of our HDInsight service that allows you to deploy HDInsight clusters using Ubuntu Linux containers.  This expands the operating system options you can use when running managed Hadoop workloads on Azure (previously HDInsight only supported Windows Server containers).

The new Linux support enables you to easily use familiar tools like SSH and Ambari to build Big Data workloads in Azure.  HDInsight on Linux clusters are built on the same Hadoop distribution as the Windows clusters, are fully integrated with Azure storage, and make it easy for customers leveraging Hadoop to take advantage of the SLA, management and support that HDInsight offers.  To get started, sign up for the preview here.  You can then easily create Linux clusters using the Azure Management Portal or via our command-line interfaces.

SSH connectivity to your HDInsight clusters is enabled by default for all HDInsight on Linux clusters. You can use an SSH client of your choice to connect to the cluster.  Additionally, SSH tunneling can be leveraged for forwarding traffic from your browser to all of the Hadoop web applications.

Learn More

For more information about Azure HDInsight, check out the following resources:

Site Recovery: General Availability of Enterprise DR with SANs

With today’s Azure release, we are also adding another significant capability to Azure Site Recovery’s disaster recovery and replication portfolio. Enterprises that seek to leverage their Storage Area Network (SAN) Arrays to enable high performance synchronous and asynchronous replication across their on-premises Hyper-V private clouds can now orchestrate end-to-end storage array-based replication and disaster recovery with Azure Site Recovery and System Center Virtual Machine Manager (SCVMM).

The addition of SAN as a replication channel enables key scenarios such as Synchronous Replication, Multi-VM Consistency, and support for Guest Clusters with Azure Site Recovery. With support for Shared VHDX and iSCSI Target LUNs, ASR will now be able to better meet the needs of enterprise-class applications such as SQL Server, SharePoint, and SAP.

To enable SAN Replication, in the Azure Management Portal select SAN when configuring SCVMM clouds in ASR. ASR in turn validates that the cloud being configured has host clusters that have been correctly zoned to a Storage Array, either via Fibre Channel or iSCSI. Once the cloud configuration is complete and the storage pools have been mapped, Replication Groups (group of storage LUNs that replicate together and thereby enable multi-VM replication consistency) can be enabled for replication. ASR automates the creation of target LUNs, target Replication Groups, and starts the array-based replication. 

Here’s an example of a Recovery Plan that can fail over a SQL Guest Cluster deployed on a Replication Group:

[Screenshot: a Recovery Plan for failing over a SQL guest cluster]

Learn More

Visit the Azure Site Recovery forum on MSDN for additional information.

Getting started with Azure Site Recovery is easy - all you need to do is sign up for a free Microsoft Azure trial.

SQL Database: General Availability of SQL Database (V12)

Earlier this month we released the general availability version of our SQL Database (V12) service.  We introduced a preview of this new release last December, and it includes a ton of new capabilities. These include:

  • Better management of large databases. We now support heavier database workload management with parallel queries, table partitioning, online indexing, worry-free large index rebuilds with the previous 2GB size limit removed, and more alter database commands.

  • Support for more programmability capabilities: You can now build even more robust applications with CLR, T-SQL Windows functions, XML index, and change tracking support.

  • Up to 100x performance improvements with support for In-memory columnstore queries for data mart and analytic workloads.

  • Improved monitoring and troubleshooting: Extended Events (XEvents) and visibility into over 100 new table views via an expanded set of Database Management Views (DMVs).

  • New S3 performance level: This release also introduces a new pricing option for SQL Database. The new "S3" performance tier delivers 100 DTU of performance (twice the DTU level of the existing S2 tier) and all of the features available in the Standard tier. It enables an even more cost effective way to run applications with higher performance needs.

You can now take advantage of all of these features in general availability - with all databases backed by an enterprise grade SLA.

Upcoming Security Features

I'm also excited to announce a number of new security features that will start rolling out this month and this spring.  These features will help customers better protect their cloud data and help further meet corporate and industry compliance policies. These security enhancements include:

  • Row-Level Security
  • Dynamic Data Masking
  • Transparent Data Encryption

Available in preview today, Row-Level Security lets customers implement fine-grained access control over the rows in a database table, giving greater control over which users can access which data.

Coming soon, SQL Database will introduce Dynamic Data Masking, a policy-based security feature that helps limit the exposure of data by returning masked values to non-privileged users who query designated database fields, such as credit card numbers, without changing the data stored in the database. Finally, Transparent Data Encryption is coming soon to SQL Database V12 for encryption at rest on all databases.

Stay tuned over the coming months for details as we continue to roll out the V12 service general availability and the upcoming security features.

Web Sites: Support for Slot Settings

The Azure Web Sites service has always provided the ability to store application settings and connection strings as a part of your Web Site’s metadata.  Those settings become available at runtime via environment variables and, if you use .NET, the standard configuration manager API.  This feature has now been updated to work better with another Web Sites feature: deployment slots. 
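
As a quick illustration of that first point, here is a minimal sketch of reading those values from .NET code at runtime. The setting and connection string names are hypothetical, and the APPSETTING_ / SQLAZURECONNSTR_ prefixes mentioned in the comments are the environment variable prefixes Web Sites uses for app settings and SQL Database connection strings.

using System;
using System.Configuration;

class SettingsDemo
{
    static void Main()
    {
        // "StorageAccountName" and "MyDb" are hypothetical names that you would
        // configure on the Web Site's Settings page in the portal.

        // Via the standard configuration manager API:
        string storageAccount = ConfigurationManager.AppSettings["StorageAccountName"];
        string myDb = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

        // Or via environment variables (app settings are exposed with an APPSETTING_
        // prefix, SQL Database connection strings with a SQLAZURECONNSTR_ prefix):
        string storageAccountEnv = Environment.GetEnvironmentVariable("APPSETTING_StorageAccountName");
        string myDbEnv = Environment.GetEnvironmentVariable("SQLAZURECONNSTR_MyDb");

        Console.WriteLine(storageAccount);
        Console.WriteLine(myDb);
        Console.WriteLine(storageAccountEnv);
        Console.WriteLine(myDbEnv);
    }
}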

Deployment slots provide an easy way for you to safely deploy and test new releases of your web applications prior to swapping them live into production.  Let’s say you have a website called mysite.azurewebsites.net with a deployment slot at mysite-staging.azurewebsites.net.  You can swap these slots at any given time, and with no downtime. This provides a nice infrastructure for upgrading your website. Until now, when you swapped the staging slot with the production site, all settings and connection strings would swap as well. Sometimes that’s exactly what you want and it works great. 

But what if, for testing purposes, your site uses a database and you explicitly want each slot to have its own database (e.g. a production database and a testing database)?  Prior to this month's release that would have been difficult to automate since the swap operation would move the staging connection string to the production site and vice versa. You would have to do something unnatural like going to the staging slot and manually updating the settings to the production values before performing the swap operation. Then, you would execute the swap, and finally manually update the staging settings to point to the staging database. That workflow is very complicated and error prone.  

New Slot Settings Support

Slot specific settings are the solution to this problem.  Simply go to the Azure Preview Portal, navigate to your Web Site’s Settings page, and you’ll see a new checkbox next to each app setting and connection string.  Check the boxes next to each app setting and/or connection string that should not participate in swap operations.  Each deployment slot has its own version of this settings page where you can go and enter the slot specific setting values.  You now have a lot more flexibility when it comes to managing deployment slots and flowing configuration between them during swaps:

image 

API Management: New Premium Tier

Earlier this month we released a preview of our new Premium Tier for our API Management Service.  The Azure API Management Service provides a great offering that helps customers expose web-based APIs to their customers - and provides support for API protection via rate-limiting, quotas and keys, detailed analytics, easy developer on-boarding and much more.

As the strategic value of APIs increases, customers are demanding even more performance, higher availability and more enterprise-grade features. In response, we're delighted to introduce a new Premium tier of API Management which will offer a 99.95% SLA after preview and includes a number of key new features:

Multiple Geography Deployment

Until now, each API Management service resided in a single region selected when the service was created. I’m pleased to announce the introduction of a new multi-region deployment feature that allows API publishers to easily distribute a single API Management service across any number of Azure regions. Customers who want to reduce latency for distributed API consumers and require extremely high availability can now enable multi-geo with minimal configuration.

image

Premium tier customers will now see an updated capacity section on the scale tab of the Azure Management portal. Additional units and regions can be added with a few clicks of the relevant dropdown controls and API Management will provision additional proxies beyond the primary region in a matter of minutes.

Multi-geo is particularly effective when combined with the API Management caching policy, which can provide a CDN-like capability for your mission critical and performance sensitive APIs. For more information on multiple-geography deployment, check out the documentation.

Azure Virtual Network / VPN integration

Many customers are already managing their on-premises APIs using API Management's mutual certificate authentication to secure their backend. The new Premium offering introduces a great new capability for organizations that prefer to use a VPN solution or want to leverage their Azure ExpressRoute connection. Available in the Premium Tier, VPN connectivity settings are available on the configure tab of the Azure Management Portal and can even be combined with multi-geo, with a separate VPN for each region. More information is available in the documentation.

image

Active Directory Integration

Prior to today’s release, API Management's developer portal allowed developers to sign up on a self-service basis using a custom account created with their e-mail address, or using popular social identity providers like Facebook, Twitter, Google and Microsoft account. Sometimes businesses and enterprises want more control and would like to restrict sign-in options, often preferring Azure Active Directory.

With our latest release, we now allow you to configure Azure Active Directory as an identity provider for Azure API Management. Administrators can disable all other identity providers and restrict access to APIs and documentation based on AD group membership. What's more, access can be extended to allow multiple AAD tenants to access your developer portal, making it even easier to share your APIs with business partners.

image

Learning More

Check out the Azure Active Directory documentation for more information on the integration, and the pricing page for more information on the new premium tier.

DocumentDB: New Asia and US Regions, SQL Parameterization and Increased Account Limits

Earlier this month we released the following new features and capabilities in our Azure DocumentDB service - which provides a fully managed NoSQL JSON database service:

  • New regional availability
  • Larger accounts and documents: Increased the number of capacity units per account and doubled the supported document size
  • SQL parameterization: Support for handling and escaping user input, preventing accidental exposure of data

New Regions

We have added new support for provisioning DocumentDB accounts in the East Asia, Southeast Asia, and US East Azure regions (in addition to our existing US West, North Europe and West Europe regions). We’ll continue to invest in regional expansion in order to give you the flexibility and choice you need when deciding where to locate your DocumentDB data.

Larger Accounts and Documents

Throughout the preview process we’ve steadily increased the maximum document and database sizes.  With this month's release we've increased the maximum size of an individual document from 256KB to 512KB. The Capacity Unit (CU) limit per DocumentDB account has also been raised from 5 to 50, which means you can now scale a single DocumentDB account to 500GB of storage and 100,000 Request Units of provisioned throughput. As always, our preview quotas can be adjusted on a per-account basis - contact us if you have a need for increased capacity.

SQL Parameterization

Instead of inventing a new query language, DocumentDB supports querying documents using SQL (Structured Query Language) over hierarchical JSON documents. We are pleased to announce that we have extended these SQL query capabilities by adding support for parameterized SQL queries in the Azure DocumentDB REST API and SDKs. Parameterized SQL provides robust handling and escaping of user input, preventing accidental exposure of data through “SQL injection”.

Let’s take a look at a sample using the .NET SDK. In addition to plain SQL strings and LINQ expressions, we’ve added a new SqlQuerySpec class that can be used to build parameterized queries.  Here’s a sample that queries a “Books” collection with a single user supplied parameter for author name:

IQueryable<Book> queryable = client.CreateDocumentQuery<Book>(
    collectionSelfLink,
    new SqlQuerySpec {
        QueryText = "SELECT * FROM books b WHERE (b.Author.Name = @name)",
        Parameters = new SqlParameterCollection() {
            new SqlParameter("@name", "Herman Melville")
        }
    });

Note:

  • SQL parameters in DocumentDB use the familiar @ notation borrowed from T-SQL
  • Parameter values can be any valid JSON (strings, numbers, Booleans, null, even arrays or nested JSON)
  • Since DocumentDB is schema-less, parameters are not validated against any type
  • You could just as easily supply additional parameters by adding additional SqlParameters to the SqlParameterCollection

The DocumentDB REST API also natively supports parameterization. The .NET sample shown above translates to the following REST API call. To use parameterized queries, you need to specify the Content-Type Header as application/query+json and the query as JSON in the body, as shown below.

POST https://contosomarketing.documents.azure.com/dbs/XP0mAA==/colls/XP0mAJ3H-AA=/docs HTTP/1.1
x-ms-documentdb-isquery: True
x-ms-date: Mon, 18 Aug 2014 13:05:49 GMT
authorization: type%3dmaster%26ver%3d1.0%26sig%3dkOU%2bBn2vkvIlHypfE8AA5fulpn8zKjLwdrxBqyg0YGQ%3d
x-ms-version: 2014-08-21
Accept: application/json
Content-Type: application/query+json
Host: contosomarketing.documents.azure.com
Content-Length: 50

{
    "query": "SELECT * FROM books b WHERE (b.Author.Name = @name)",
    "parameters": [
        {"name": "@name", "value": "Herman Melville"}
    ]
}

Queries can be issued against document collections, as well as system metadata collections like Databases, DocumentCollections, and Attachments using the approach shown above. To try this out, download the latest build of the DocumentDB SDK on any of the supported platforms (.NET, Java, Node.js, JavaScript, or Python).
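
The same SqlQuerySpec approach carries over to those system feeds. Here is a minimal sketch, assuming the SDK's CreateDatabaseQuery overload that accepts a SqlQuerySpec, and a hypothetical database id:

IQueryable<Database> databases = client.CreateDatabaseQuery(
    new SqlQuerySpec
    {
        // "marketingdb" is a hypothetical database id
        QueryText = "SELECT * FROM root r WHERE (r.id = @id)",
        Parameters = new SqlParameterCollection()
        {
            new SqlParameter("@id", "marketingdb")
        }
    });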

As always, we’d love to hear from you about the DocumentDB features and experiences you would find most valuable. Submit your suggestions on the Microsoft Azure DocumentDB feedback forum.

Search: Portal Enhancements, Suggestions & Scoring, New Regions

Earlier this month we released a bunch of great enhancements to our Azure Search service.  Azure Search provides developers with all of the features needed to build out search experiences for web and mobile applications without having to deal with the typical complexities that come with managing, tuning and scaling a large search service.

Azure Portal Enhancements

Last month we added the ability to create and manage your search indexes from the Azure Preview Portal. Since then, you have told us that this has really helped to speed up development as it greatly reduced the amount of code required, but we also heard that you needed more. As a result, we extended the portal by adding the ability to add Scoring Profiles as well as configure Cross Origin Resource Sharing from the portal.

Portal Support of Scoring Profiles

Scoring Profiles boost items up in the search results based on different factors that you control. For example, below I have a hotels index and, all other things being equal, I want highly rated hotels close to the user’s current location to appear at the top of that user’s search results. To do this, in the Azure Preview Portal, choose Add Scoring Profile and provide a name for it. In this case I am going to call it “closeToUser”. You can create one or more scoring profiles and name them as needed in the search request, allowing you to provide different search results based on different use cases.

image

Once closeToUser has been created, I can start adding weights and functions. For example, in this scoring profile, I chose to add:

  • Weighting: Use hotelName as a weighted field, such that if the search term is found in the hotelName, it gets a weighted boost
  • Distance: Leverage the spatial capabilities of Azure Search to boost a hotel if it is found to be closer to the user’s specified location
  • Magnitude: Provide a boost to the hotels that have higher ratings

All of these functions and weights are then combined into a final score that is used to rank documents.

image

Scoring can often be tricky, and it tends to get mixed in with the rest of the query. With Azure Search, the scoring profiles experience has been simplified: profiles are separated from search queries, so the scoring model stays outside of application code and can be updated independently. In addition, these scoring profiles are modeled as a set of high-level scoring functions combined with a simple way to set the typical field weights, making editing and maintenance of scoring much simpler.

As demonstrated above, this user experience requires no coding; you simply choose the fields that are important and apply the function or weight that makes the most sense. It is important to note that a scoring profile is a method of boosting the relevance of a document and should not be confused with sorting. There are a number of other functions available, which you can learn more about in the MSDN documentation.
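
To give a feel for how a profile is applied at query time, here is a rough sketch of issuing a search request against the REST API from C# with the closeToUser profile selected. The service name, index name, key and api-version below are placeholders, and distance functions additionally take a scoring parameter (the user's location) whose exact request syntax you should check in the documentation.

using System;
using System.Net.Http;

class SearchWithScoringProfile
{
    static void Main()
    {
        // Hypothetical service name, index name, query key and api-version --
        // substitute your own values.
        var serviceName = "myservice";
        var indexName = "hotels";
        var queryKey = "<your-query-key>";
        var apiVersion = "2014-10-20-Preview";

        using (var client = new HttpClient())
        {
            // Azure Search REST calls are authenticated with an api-key header.
            client.DefaultRequestHeaders.Add("api-key", queryKey);

            // Rank results for "beach" using the closeToUser scoring profile
            // defined in the portal.
            var url = string.Format(
                "https://{0}.search.windows.net/indexes/{1}/docs?search=beach&scoringProfile=closeToUser&api-version={2}",
                serviceName, indexName, apiVersion);

            var json = client.GetStringAsync(url).Result;
            Console.WriteLine(json);
        }
    }
}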

Cross Origin Resource Sharing (CORS)

Web browsers commonly apply a same-origin policy to network requests, preventing client-side web applications from issuing requests to another domain for security reasons. For example, JavaScript code served from http://www.contoso.com could not issue a request to another domain such as http://www.northwindtraders.com. For Azure Search developers, this is important in cases where all the data is already publicly accessible and they want to save on latency by going straight to the search index from mobile devices or a browser.

CORS is a method that allows you to relax this restriction in a controlled way so you don’t compromise security. Azure Search uses CORS to allow JavaScript code inside browsers to make search requests directly to the Azure Search service and eliminate the need to proxy all requests through the originating server. We now offer the ability to configure CORS from the Azure Preview Portal, allowing you to easily enable cross-domain access and limit it to specific origins. This can be done from the index management portion of your search service as shown below.

image

Tag Boosting

As discussed with Scoring Profiles, there are many examples of where you may want to boost certain relevant items. To this end, we have also introduced a new and highly requested function to our set of scoring profile functions called Tag Boosting. This feature is currently part of our experimental API version, made available to you so you can test and provide feedback on these potential new features.

Tag Boosting allows you to boost documents that have tags in common with the search query. The tags for the search query are provided as a scoring parameter in each search request, and any document that contains these terms gets a boost. This capability not only helps enable search result customization, but can also be used for cases where you have specific items you want to promote. As an example, during a sporting event, a retailer might want to promote items that are related to the teams participating in that event.

Improved Suggestions

Suggestions (auto-complete) is a feature that allows you to provide type-ahead suggestions as the user types. Just like scoring profiles, this is a great way to help your users find the content they are looking for quickly. When we first implemented search suggestions in Azure Search, we heard a number of requests to extend the capabilities of this feature to better suit your requirements. As a result, we have an entirely new implementation of suggestions that addresses these items. In particular, it does infix matching for suggestions and, if fuzzy matching is enabled, it is more tolerant of spelling mistakes. It also allows up to 100 suggestions per result, removes the 3-character minimum query length, and has no length limit other than the field limits.

This enhancement is still under the experimental API version as we are continuing to gather feedback. For more information, and to see a more detailed example of suggestions, please see the Suggestions post on the Azure Blog.

New Regions

As a final note, I wanted to point out that we are continuing to expand the global footprint of Azure Search. With the addition of East Asia and West Europe you can now provision Azure Search services in 8 regions across the globe.

Media: General Availability of Content Protection Service

Earlier this month we released the general availability of our new Content Protection service for Azure Media Services. This is backed by an enterprise grade SLA for all customers.

We understand the importance of protecting your premium media content, and our robust new DRM offering features both static and dynamic encryption with first party PlayReady license delivery and an AES 128-bit key delivery service. You can either dynamically encrypt during delivery of your media or statically encrypt during the content processing workflow, and our content protection options are available for both live and on-demand workflows.

For more information on functionality and pricing, visit the Media Services Content Protection blog post, the Media Services Pricing webpage, or this Securing Media article.

Management: General Availability of the Azure Resource Manager

Earlier this month we reached general availability of the new Azure Resource Manager, and we now provide a worldwide SLA for the service. The Azure Resource Manager provides a core set of management capabilities that are fundamental to the Microsoft Azure Platform and form the basis of our new deployment and management model for all Azure services.  You can use the Azure Resource Manager to deploy and manage your Azure solutions at no cost.

The Azure Resource Manager provides a simple and customizable experience to manage your applications running in Azure, along with enterprise grade authentication and authorization capabilities. Benefits include:

Application Lifecycle Boundaries: Azure Resource Manager provides a deployment container called a Resource Group that serves as the lifecycle boundary of the resources/services deployed in it - making it easy for you to deploy, manage and visualize the services contained within it. You no longer have to deploy parts of your application à la carte and then stitch them together manually. A Resource Group container supports one-click deployment and tear-down of the entire application in a single operation.

Enterprise Grade Access Control: OAuth and Role-Based Access Control (RBAC) are now natively integrated into Azure Management and consistently apply to all services supported by the Resource Manager. Access and operations performed on these services are also logged automatically to enable you to audit them later. You can now use a rich set of platform and resource specific roles that can be applied at the subscription, resource group, or resource level - giving you granular control over who has access to what operation within your organization.

Rich Tagging and Categorization: The Azure Resource Manager supports metadata tagging of resource groups and contained resources, and you can use this tagging support to group objects in ways suitable to your own needs such as management, billing or monitoring. For example, you could mark certain resources or resource groups as being "Dev/Test" and use that to help filter your resources or charge back their bills differently to internal groups in your organization.  This provides the power needed to manage and monitor departmental applications, subscriptions, and billing data in a more streamlined fashion, especially for larger organizations.

Declarative Deployment Templates: The new Azure Resource Manager supports both an imperative API and a declarative template model that you can use to deploy rich multi-tier applications on Azure.  These applications can be composed from multiple Azure services (including both IaaS and PaaS based services) and support the ability for you to pass parameters and connection-strings across them.  For example, you could declaratively create a SQL DB, Web Site and VM using a single template and automatically wire up the connection-string details between them.

image

Learn More

Check out the Azure Resource Manager documentation to learn more, and start using it today.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Try, Option or Either?

Xebia Blog - Wed, 02/18/2015 - 09:45

Scala has a lot of different options for handling and reporting errors, which can make it hard to decide which one is best suited for your situation. In Scala and other functional programming languages it is common to make the errors that can occur explicit in the function's signature (i.e. its return type), in contrast with the common practice in other programming languages where either special values are used (-1 for a failed lookup anyone?) or an exception is thrown.

Let's go through the main options you have as a Scala developer and see when to use what!

Option
A special type of error that can occur is the absence of some value. For example when looking up a value in a database or a List you can use the find method. When implementing this in Java the common solution (at least until Java 7) would be to return null when a value cannot be found or to throw some version of the NotFound exception. In Scala you will typically use the Option[T] type, returning Some(value) when the value is found and None when the value is absent.

So instead of having to look at the Javadoc or Scaladoc you only need to look at the type of the function to know how a missing value is represented. Moreover you don't need to litter your code with null checks or try/catch blocks.

Another use case is in parsing input data: user input, JSON, XML etc. Instead of throwing an exception for invalid input you simply return a None to indicate parsing failed. The disadvantage of using Option in this situation is that you hide the type of error from the user of your function, which, depending on the use case, may or may not be a problem. If that information is important, keep on reading the next sections.

An example that ensures that a name is non-empty:

def validateName(name: String): Option[String] = {
  if (name.isEmpty) None
  else Some(name)
}

You can use the validateName method in several ways in your code:

// Use a default value
validateName(inputName).getOrElse("Default name")

// Apply some other function to the result
validateName(inputName).map(_.toUpperCase)

// Combine with other validations, short-circuiting on the first error
// returning a new Option[Person]
for {
  name <- validateName(inputName)
  age <- validateAge(inputAge)
} yield Person(name, age)

Either
Option is nice to indicate failure, but if you need to provide some more information about the failure Option is not powerful enough. In that case Either[L,R] can be used. It has 2 implementations, Left and Right. Both can wrap a custom type, respectively type L and type R. By convention Right is right, so it contains the successful result and Left contains the error. Rewriting the validateName method to return an error message would give:

def validateName(name: String): Either[String, String] = {
  if (name.isEmpty) Left("Name cannot be empty")
  else Right(name)
}

Similar to Option, Either can be used in several ways. It differs from Option because you always have to specify the so-called projection you want to work with via the left or right method:

// Apply some function to the successful result
validateName(inputName).right.map(_.toUpperCase)

// Combine with other validations, short-circuiting on the first error
// returning a new Either[Person]
for {
 name <- validateName(inputName).right
 age <- validateAge(inputAge).right
} yield Person(name, age)

// Handle both the Left and Right case
validateName(inputName).fold(
  error => s"Validation failed: $error",
  result => s"Validation succeeded: $result"
)

// And of course pattern matching also works
validateName(inputName) match {
  case Left(error) => s"Validation failed: $error"
  case Right(result) => s"Validation succeeded: $result"
}

// Convert to an option:
validateName(inputName).right.toOption

This projection is kind of clumsy and can lead to several convoluted compiler error messages in for expressions. See for example the excellent and detailed discussion of the Either type in The Neophyte's Guide to Scala Part 7. Due to these issues several alternative implementations of a kind of Either have been created; the most well known are the \/ type in Scalaz and the Or type in Scalactic. Both avoid the projection issues of the Scala Either and, at the same time, add additional functionality for aggregating multiple validation errors into a single result type.

Try

Try[T] is similar to Either. It also has 2 cases, Success[T] for the successful case and Failure[Throwable] for the failure case. The main difference thus is that the failure can only be of type Throwable. You can use it instead of a try/catch block to postpone exception handling. Another way to look at it is to consider it as Scala's version of checked exceptions. Success[T] wraps the result value of type T, while the Failure case can only contain an exception.

Compare these 2 methods that parse an integer:

// Throws a NumberFormatException when the integer cannot be parsed
def parseIntException(value: String): Int = value.toInt

// Catches the NumberFormatException and returns a Failure containing that exception
// OR returns a Success with the parsed integer value
def parseInt(value: String): Try[Int] = Try(value.toInt)

The first function needs documentation describing that an exception can be thrown. The second function describes in its signature what can be expected and requires the user of the function to take the failure case into account. Try is typically used when exceptions need to be propagated, if the exception is not needed prefer any of the other options discussed.

Try offers similar combinators as Option[T] and Either[L,R]:

// Apply some function to the successful result
parseInt(input).map(_ * 2)

// Combine with other validations, short-circuiting on the first Failure
// returning a new Try[Stats]
for {
  age <- parseInt(inputAge)
  height <- parseDouble(inputHeight)
} yield Stats(age, height)

// Use a default value
parseAge(inputAge).getOrElse(0)

// Convert to an option
parseAge(inputAge).toOption

// And of course pattern matching also works
parseAge(inputAge) match {
  case Failure(exception) => s"Validation failed: ${exception.getMessage}"
  case Success(result) => s"Validation succeeded: $result"
}

Note that Try is not needed when working with Futures! Futures combine asynchronous processing with the Exception handling capabilities of Try! See also Try is free in the Future.

Exceptions
Since Scala runs on the JVM all low-level error handling is still based on exceptions. In Scala you rarely see usage of exceptions and they are typically only used as a last resort. More common is to convert them to any of the types mentioned above. Also note that, contrary to Java, all exceptions in Scala are unchecked. Throwing an exception will break your functional composition and probably result in unexpected behaviour for the caller of your function. So it should be reserved as a method of last resort, for when the other options don’t make sense.
If you are on the receiving end of the exceptions you need to catch them. In Scala syntax:

try {
  dangerousCode()
} catch {
  case e: Exception => println("Oops")
} finally {
  cleanup
}

What is often done wrong in Scala is that all Throwables are caught, including the Java system errors. You should never catch Errors because they indicate a critical system error like the OutOfMemoryError. So never do this:

try {
  dangerousCode()
} catch {
  case _ => println("Oops. Also caught OutOfMemoryError here!")
}

But instead do this:

import scala.util.control.NonFatal

try {
  dangerousCode()
} catch {
  case NonFatal(_) => println("Ooops. Much better, only the non fatal exceptions end up here.")
}

To convert exceptions to Option or Either types you can use the methods provided in scala.util.control.Exception (scaladoc):

import scala.util.control.Exception._

val i = 0
val resultOption: Option[Int] = catching(classOf[ArithmeticException]) opt { 1 / i }
val resultEither: Either[Throwable, Int] = catching(classOf[ArithmeticException]) either { 1 / i }

Finally remember you can always convert an exception into a Try as discussed in the previous section.

TL;DR

  • Option[T], use it when a value can be absent or some validation can fail and you don't care about the exact cause. Typically in data retrieval and validation logic.
  • Either[L,R], similar use case as Option but when you do need to provide some information about the error.
  • Try[T], use when something Exceptional can happen that you cannot handle in the function. This, in general, excludes validation logic and data retrieval failures but can be used to report unexpected failures.
  • Exceptions, use only as a last resort. When catching exceptions use the facility methods Scala provides and never use catch { case _ => }; instead use catch { case NonFatal(_) => }

One final advice is to read through the Scaladoc for all the types discussed here. There are plenty of useful combinators available that are worth using.

Hadoop and the OpenDataPlatform

hadoop-logo-square

Pivotal, IBM and Hortonworks announced today the “Open Data Platform” (ODP) – an attempt to standardize Hadoop. The move also seems to be backed by Teradata and others that appear as sponsors on the initiative site.

This move has a lot of potential and a few possible downsides.

ODP promises standardization – Cloudera’s Mike Olson downplays the importance of this “Every vendor shipping a Hadoop distribution builds off the Hadoop trunk. The APIs, data formats and semantics of trunk are stable. The project is a decade old, now, and the global Hadoop community exercises its governance obligations responsibly. There’s simply no fundamental incompatibility among the core Hadoop components shipped by the various vendors.”

I disagree. While it is true that there are no “fundamental incompatibilities”, there are a lot of non-fundamental ones. Each release by each vendor includes backports of features that are somewhere on the main trunk but far from the stable release. This means that, as a vendor, we have to both test our solutions on multiple distributions and work around the subtle incompatibilities. We also have to limit ourselves to the lowest common denominator of the different platforms (or not support a distro) – for instance, until today, IBM did not support YARN or Spark on their distribution.

Hopefully standardization around a common core will also mean that the involved vendors will deliver their value-add on that core, unlike today where the offerings are based on proprietary extensions (this is true for Pivotal, IBM etc., not so much for Hortonworks). Today we can’t take Impala and run it on Pivotal, nor can we take HAWQ and run it on HDP. With ODP we would, hopefully, be able to mix and match and have installations where we can, say, use IBM’s BigSQL with GemFire HD running on HDP, and other such mixes. This can be good news for these vendors by enlarging their addressable market, and for us as users by increasing our choice and reducing lock-in.

So what are the downsides/possible problems?

Well, for one we need to see that the scenarios I described above will actually happen and that this isn’t just a marketing ploy. Another problem, the elephant in the room if you will, is that the move is not complete – Cloudera, a major Hadoop player, is not part of it and, as can be seen in the post referenced above, is against it. This is also true for MapR. With these two vendors out we still have multiple vendors to deal with, and the problems ODP sets out to solve will not disappear. I guess if ODP were led by the ASF or some other more “impartial” party it would have been easier to digest, but as it is now all I can do is hope that ODP will live up to its expectations and that in the long run Cloudera and MapR will also join the initiative.

Categories: Architecture

JetBrains webinar recording: Software architecture as code

Coding the Architecture - Simon Brown - Tue, 02/17/2015 - 18:32

The lovely people at JetBrains have published the recording of the live webinar I did with them last week about software architecture as code. I've embedded the YouTube video below, but you should also go and take a look at their website because there are answers to a bunch of questions that I didn't get time to answer during the webinar itself.

If you've already seen one of my Software architecture vs code presentations, you should probably jump straight to the demo section where I show how to create a software architecture model with code and Structurizr. You can also get the slides and the code that I used.

Thanks again to JetBrains (especially Hadi Hariri, Trisha Gee and Robert Demmer) and to everybody who listened in.

Categories: Architecture

Sponsored Post: Apple, Sentient, Couchbase, Farmerswife, VividCortex, Internap, SocialRadar, Campanja, Transversal, MemSQL, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple is hiring an Application Security Engineer. Apple’s Gift Card Engineering group is looking for a software engineer passionate about application security for web applications and REST services. Be part of a team working on challenging and fast paced projects supporting Apple's business by delivering high volume, high performance, and high availability distributed transaction processing systems. Please apply here.

  • Apple is hiring a Software Engineer for Maps Services. The Maps Team is looking for a developer to support and grow some of the core backend services that support Apple Map's Front End Services. The ideal candidate would have experience with system architecture as well as the design, implementation, and testing of individual components, and would also be comfortable with multiple scripting languages. Please apply here.

  • Sentient Technologies is hiring several Senior Distributed Systems Engineers and a Senior Distributed Systems QA Engineer. Sentient Technologies is a privately held company seeking to solve the world’s most complex problems through massively scaled artificial intelligence running on one of the largest distributed compute resources in the world. Help us expand our existing million+ distributed cores to many, many more. Please apply here.

  • Want to be the leader and manager of a cutting-edge cloud deployment? Take charge of an innovative 24x7 web service infrastructure on the AWS Cloud? Join farmerswife on the beautiful island of Mallorca and help create the next generation of project management tools. Please apply here.

  • Senior DevOps Engineer - SocialRadar. We are a VC funded startup based in Washington, D.C. operated like our West Coast brethren. We specialize in location-based technology. Since we are rapidly consuming large amounts of location data and monitoring all social networks for location events, we have systems that consume vast amounts of data that need to scale. As our Senior DevOps Engineer you’ll take ownership over that infrastructure and, with your expertise, help us grow and scale both our systems and our team as our adoption continues its rapid growth. Full description and application here.

  • Linux Web Server Systems Engineer - Transversal. We are seeking an experienced and motivated Linux System Engineer to join our Engineering team. This new role is to design, test, install, and provide ongoing daily support of our information technology systems infrastructure. As an experienced Engineer you will have comprehensive capabilities for understanding hardware/software configurations that comprise system, security, and library management, backup/recovery, operating computer systems in different operating environments, sizing, performance tuning, hardware/software troubleshooting and resource allocation. Apply here.

  • Campanja is an Internet advertising optimization company born in the cloud, and today we are one of the Nordics' bigger AWS consumers; the time has come for us to embrace the next generation of cloud infrastructure. We believe in immutable infrastructure, container technology and microservices; we hope to use PaaS when we can get away with it but consume at the IaaS layer when we have to. Please apply here.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Rise of the Multi-Model Database. FoundationDB Webinar: March 10th at 1pm EST. Do you want a SQL, JSON, Graph, Time Series, or Key Value database? Or maybe it’s all of them? Not all NoSQL databases are created equal. The latest development in this space is the Multi-Model Database. Please join FoundationDB for an interactive webinar as we discuss the Rise of the Multi-Model Database and what to consider when choosing the right tool for the job.

  • Sign Up for New Aerospike Training Courses.  Aerospike now offers two certified training courses; Aerospike for Developers and Aerospike for Administrators & Operators, to help you get the most out of your deployment.  Find a training course near you. http://www.aerospike.com/aerospike-training/
Cool Products and Services
  • See how LinkedIn uses Couchbase to help power its “Follow” service for 300M+ global users, 24x7. http://info.couchbase.com/14Q4-MKTG-Website-LinkedIn-Scale-LP.html

  • VividCortex Developer edition delivers a groundbreaking performance management solution to startups, open-source projects, nonprofits, and other organizations free of charge. It integrates high-resolution metrics on queries, metrics, processes, databases, and the OS and hardware to deliver an unprecedented level of visibility into production database activity.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Aerospike demonstrates RAM-like performance with Google Compute Engine Local SSDs. After scaling to 1 M Writes/Second with 6x fewer servers than Cassandra on Google Compute Engine, we certified Google’s new Local SSDs using the Aerospike Certification Tool for SSDs (ACT) and found RAM-like performance and 15x storage cost savings. Read more.

  • Diagnose server issues from a single tab. The Scalyr log management tool replaces all your monitoring and analysis services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. It's a universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs,” but enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – try it free! (See how Scalyr is different if you're looking for a Splunk alternative.)

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Cancelling $http requests for fun and profit

Xebia Blog - Tue, 02/17/2015 - 09:11

At my current client, we have a large AngularJS application that is configured to show a full-page error whenever one of the $http requests ends up in error. This is implemented with an error interceptor, as you would expect. However, we’re also using some calculation-intense resources that happen to time out once in a while. This combination is tricky: a user triggers a resource request when navigating to a certain page, navigates to a second page and suddenly ends up with an error message, as the request from the first page triggered a timeout error. This is a particularly unpleasant side effect that I’m going to address in a generic way in this post.

There are of course multiple solutions to this problem. We could create a more resilient implementation in the backend that will not time out, but accepts retries. We could change the full-page error into something less ‘in your face’ (but you would still get some out-of-place error notification). For this post I’m going to fix it using a different approach: cancel any running requests when a user switches to a different location (the route part of the URL). This makes sense; your browser does the same when navigating from one page to another, so why not mimic this behaviour in your Angular app?

I’ve created a pretty verbose implementation to explain how to do this. At the end of this post, you’ll find a link to the code as a packaged bower component that can be dropped in any Angular 1.2+ app.

To cancel a running request, Angular does not offer that many options. Under the hood, there are some places where you can hook into, but that won’t be necessary. If we look at the $http usage documentation, the timeout property is mentioned and it accepts a promise to abort the underlying call. Perfect! If we set a promise on all created requests, and abort these at once when the user navigates to another page, we’re (probably) all set.

Let’s write an interceptor to plug in the promise in each request:

angular.module('angularCancelOnNavigateModule')
  .factory('HttpRequestTimeoutInterceptor', function ($q, HttpPendingRequestsService) {
    return {
      request: function (config) {
        config = config || {};
        if (config.timeout === undefined && !config.noCancelOnRouteChange) {
          config.timeout = HttpPendingRequestsService.newTimeout();
        }
        return config;
      }
    };
  });

The interceptor will not overwrite the timeout property when it is explicitly set. Also, if the noCancelOnRouteChange option is set to true, the request won’t be cancelled. For better separation of concerns, I’ve created a new service (the HttpPendingRequestsService) that hands out new timeout promises and stores references to them.

Let’s have a look at that pending requests service:

angular.module('angularCancelOnNavigateModule')
  .service('HttpPendingRequestsService', function ($q) {
    var cancelPromises = [];

    function newTimeout() {
      var cancelPromise = $q.defer();
      cancelPromises.push(cancelPromise);
      return cancelPromise.promise;
    }

    function cancelAll() {
      angular.forEach(cancelPromises, function (cancelPromise) {
        cancelPromise.promise.isGloballyCancelled = true;
        cancelPromise.resolve();
      });
      cancelPromises.length = 0;
    }

    return {
      newTimeout: newTimeout,
      cancelAll: cancelAll
    };
  });

So, this service creates new timeout promises that are stored in an array. When the cancelAll function is called, all timeout promises are resolved (thus aborting all requests that were configured with the promise) and the array is cleared. By setting the isGloballyCancelled property on the promise object, a response promise method can check whether it was cancelled or another exception has occurred. I’ll come back to that one in a minute.

Now we hook up the interceptor and call the cancelAll function at a sensible moment. There are several events triggered on the root scope that are good hook candidates. Eventually I settled for $locationChangeSuccess. It is only fired when the location change is a success (hence the name) and not cancelled by any other event listener.

angular
  .module('angularCancelOnNavigateModule', [])
  .config(function($httpProvider) {
    $httpProvider.interceptors.push('HttpRequestTimeoutInterceptor');
  })
  .run(function ($rootScope, HttpPendingRequestsService) {
    $rootScope.$on('$locationChangeSuccess', function (event, newUrl, oldUrl) {
      if (newUrl !== oldUrl) {
        HttpPendingRequestsService.cancelAll();
      }
    })
  });

When writing tests for this setup, I found that the $locationChangeSuccess event is triggered at the start of each test, even though the location did not change yet. To circumvent this situation, the function does a simple difference check.

Another problem popped up during testing. When the request is cancelled, Angular creates an empty error response, which in our case still triggers the full-page error. We need to catch and handle those error responses. We can simply add a responseError function in our existing interceptor. And remember the special isGloballyCancelled property we set on the promise? That’s the way to distinguish between cancelled and other responses.

We add the following function to the interceptor:

      responseError: function (response) {
        if (response.config.timeout.isGloballyCancelled) {
          return $q.defer().promise;
        }
        return $q.reject(response);
      }

The responseError function must return a promise that normally re-throws the response as rejected. However, that’s not what we want: neither a success nor a failure callback should be called. We simply return a never-resolving promise for all cancelled requests to get the behaviour we want.

That’s all there is to it! To make it easy to reuse this functionality in your Angular application, I’ve packaged this module as a bower component that is fully tested. You can check the module out on this GitHub repo.

When development resembles the ageing of wine

Xebia Blog - Mon, 02/16/2015 - 20:29

Once upon a time I was asked to help out a software product company.  The management briefing went something like this: "We need you to increase productivity, the guys in development seem to be unable to ship anything! and if they do ship something it's only a fraction of what we expected".

And so the story begins. Now there are many ways we can improve the team's outcome and its output (the first matters more), but it always starts with observing what they do today and trying to figure out why.

It turns out that requests from the business were treated like a good wine, and were allowed to "age" in the oak barrel called Jira. Not so much to add flavour in the form of details, requirements, designs, non-functional requirements or acceptance criteria, but mainly to see if the priority of the request would remain stable over a period of time.

In the days that followed I participated in the "Change Control Board" and saw the problem first-hand. Management would change priorities on the fly and make swift decisions on requirements that would take weeks to implement. To stay with the wine metaphor, wine was poured in and out of the barrels at such a rate that the process bore more resemblance to a blender than to the art of wine making.

Though management was happy to learn I had unearthed the root cause of their problem, they were less pleased to learn that they themselves were responsible. The Agile world created the Product Owner role for this, and it turned out that this is a hat that can only be worn by a single person.

Once we funnelled all the requests through a single person, responsible both for the success of the product and for its development, we saw a big change. Not only did the business get a reliable sparring partner, but the development team had a single voice when it came to setting priorities. Once the team started finishing what they started, we began shipping at regular intervals, with features that we all had committed to.

Of course it did not take away the dynamics of the business, but it allowed us to deliver, and become reliable in how and when we responded to change. Perhaps not the most aged wine, but enough to delight our customers and learn what we should put in our barrel for the next round.

 

ScottGu Azure event in London on March 2nd

ScottGu's Blog - Scott Guthrie - Mon, 02/16/2015 - 19:16

On March 2nd I'm doing an Azure event in London that you can attend for free.  I'll be speaking for about 2.5 hours and will do an end-to-end walkthrough of Microsoft Azure, show off a bunch of demos of great new features/capabilities, and talk about some of the improvements coming out over the next few months.

image

You can sign up and attend the event for free (while tickets last - they are going fast).  If you are interested, sign up now.  The event is being held at the Mermaid Conference & Events Centre in Blackfriars, London:

image

Hope to see some of you there!

Scott

Categories: Architecture, Programming