
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



R: ggplot – Plotting a single variable line chart (geom_line requires the following missing aesthetics: y)

Mark Needham - Sat, 09/13/2014 - 12:41

I’ve been learning how to calculate moving averages in R and, having done that calculation, I wanted to plot the rolling average on a line chart using ggplot.

The vector of rolling averages looked like this:

> rollmean(byWeek$n, 4)
  [1]  3.75  2.00  1.25  1.00  1.25  1.25  1.75  1.75  1.75  2.50  2.25  2.75  3.50  2.75  2.75
 [16]  2.25  1.50  1.50  2.00  2.00  2.00  2.00  1.25  1.50  2.25  2.50  3.00  3.25  2.75  4.00
 [31]  4.25  5.25  7.50  6.50  5.75  5.00  3.50  4.00  5.75  6.25  6.25  6.00  5.25  6.25  7.25
 [46]  7.75  7.00  4.75  2.75  1.75  2.00  4.00  5.25  5.50 11.50 11.50 12.75 14.50 12.50 11.75
 [61] 11.00  9.25  5.25  4.50  3.25  4.75  7.50  8.50  9.25 10.50  9.75 15.25 16.00 15.25 15.00
 [76] 10.00  8.50  6.50  4.25  3.00  4.25  4.75  7.50 11.25 11.00 11.50 10.00  6.75 11.25 12.50
 [91] 12.00 11.50  6.50  8.75  8.50  8.25  9.50  8.50  8.75  9.50  8.00  4.25  4.50  7.50  9.00
[106] 12.00 19.00 19.00 22.25 23.50 22.25 21.75 19.50 20.75 22.75 22.75 24.25 28.00 23.00 26.00
[121] 24.25 21.50 26.00 24.00 28.25 25.50 24.25 31.50 31.50 35.75 35.75 29.00 28.50 27.25 25.50
[136] 27.50 26.00 23.75

I initially tried to plot a line chart like this:

library(ggplot2)
library(zoo)
rollingMean = rollmean(byWeek$n, 4)
qplot(rollingMean) + geom_line()

which resulted in this error:

stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
Error: geom_line requires the following missing aesthetics: y

It turns out we need to provide an x and y value if we want to draw a line chart. In this case we’ll generate the ‘x’ value – we only care that the y values get plotted in order from left to right:

qplot(1:length(rollingMean), rollingMean, xlab ="Week Number") + geom_line()
[chart: rolling average plotted against week number with qplot]

If we want to use the ‘ggplot’ function then we need to put everything into a data frame first and then plot it:

ggplot(data.frame(week = 1:length(rollingMean), rolling = rollingMean),
       aes(x = week, y = rolling)) +
  geom_line()

[chart: rolling average plotted against week number with ggplot]

Categories: Programming

R: Calculating rolling or moving averages

Mark Needham - Sat, 09/13/2014 - 09:15

I’ve been playing around with some time series data in R and since there’s a bit of variation between consecutive points I wanted to smooth the data out by calculating the moving average.

I struggled to find a built-in function to do this, but came across Didier Ruedin’s blog post which described the following function to do the job:

mav <- function(x,n=5){filter(x,rep(1/n,n), sides=2)}

I tried plugging in some numbers to understand how it works:

> mav(c(4,5,4,6), 3)
Time Series:
Start = 1 
End = 4 
Frequency = 1 
[1]       NA 4.333333 5.000000       NA

Here I was trying to do a rolling average which took into account the last 3 numbers, so I expected to get just two numbers back – 4.333333 and 5 – and if there were going to be NA values I thought they’d be at the beginning of the sequence.

In fact it turns out this is what the ‘sides’ parameter controls:

sides: for convolution filters only. If sides = 1 the filter coefficients are for past values only; if sides = 2 they are centred around lag 0. In this case the length of the filter should be odd, but if it is even, more of the filter is forward in time than backward.

So in our ‘mav’ function the rolling average looks both sides of the current value rather than just at past values. We can tweak that to get the behaviour we want:

mav <- function(x,n=5){filter(x,rep(1/n,n), sides=1)}
> mav(c(4,5,4,6), 3)
Time Series:
Start = 1 
End = 4 
Frequency = 1 
[1]       NA       NA 4.333333 5.000000

The NA values are annoying for any plotting we want to do so let’s get rid of them:

> na.omit(mav(c(4,5,4,6), 3))
Time Series:
Start = 3 
End = 4 
Frequency = 1 
[1] 4.333333 5.000000

Having got to this point I noticed that Didier had referenced the zoo package in the comments and it has a built-in function to take care of all this:

> library(zoo)
> rollmean(c(4,5,4,6), 3)
[1] 4.333333 5.000000

I also realised I can list all the functions in a package with the ‘ls’ function so I’ll be scanning zoo’s list of functions next time I need to do something time series related – there’ll probably already be a function for it!

> ls("package:zoo")
  [1] "as.Date"              "as.Date.numeric"      "as.Date.ts"          
  [4] "as.Date.yearmon"      "as.Date.yearqtr"      "as.yearmon"          
  [7] "as.yearmon.default"   "as.yearqtr"           "as.yearqtr.default"  
 [10] "as.zoo"               "as.zoo.default"       "as.zooreg"           
 [13] "as.zooreg.default"    "autoplot.zoo"         "cbind.zoo"           
 [16] "coredata"             "coredata.default"     "coredata<-"          
 [19] "facet_free"           "format.yearqtr"       "fortify.zoo"         
 [22] "frequency<-"          "ifelse.zoo"           "index"               
 [25] "index<-"              "index2char"           "is.regular"          
 [28] "is.zoo"               "make.par.list"        "MATCH"               
 [31] "MATCH.default"        "MATCH.times"          "median.zoo"          
 [34] "merge.zoo"            "na.aggregate"         "na.aggregate.default"
 [37] "na.approx"            "na.approx.default"    "na.fill"             
 [40] "na.fill.default"      "na.locf"              "na.locf.default"     
 [43] "na.spline"            "na.spline.default"    "na.StructTS"         
 [46] "na.trim"              "na.trim.default"      "na.trim.ts"          
 [49] "ORDER"                "ORDER.default"        "panel.lines.its"     
 [52] "panel.lines.tis"      "panel.lines.ts"       "panel.lines.zoo"     
 [55] "panel.plot.custom"    "panel.plot.default"   "panel.points.its"    
 [58] "panel.points.tis"     "panel.points.ts"      "panel.points.zoo"    
 [61] "panel.polygon.its"    "panel.polygon.tis"    "panel.polygon.ts"    
 [64] "panel.polygon.zoo"    "panel.rect.its"       "panel.rect.tis"      
 [67] "panel.rect.ts"        "panel.rect.zoo"       "panel.segments.its"  
 [70] "panel.segments.tis"   "panel.segments.ts"    "panel.segments.zoo"  
 [73] "panel.text.its"       "panel.text.tis"       "panel.text.ts"       
 [76] "panel.text.zoo"       "plot.zoo"             "quantile.zoo"        
 [79] "rbind.zoo"            "read.zoo"             "rev.zoo"             
 [82] "rollapply"            "rollapplyr"           "rollmax"             
 [85] "rollmax.default"      "rollmaxr"             "rollmean"            
 [88] "rollmean.default"     "rollmeanr"            "rollmedian"          
 [91] "rollmedian.default"   "rollmedianr"          "rollsum"             
 [94] "rollsum.default"      "rollsumr"             "scale_x_yearmon"     
 [97] "scale_x_yearqtr"      "scale_y_yearmon"      "scale_y_yearqtr"     
[100] "Sys.yearmon"          "Sys.yearqtr"          "time<-"              
[103] "write.zoo"            "xblocks"              "xblocks.default"     
[106] "xtfrm.zoo"            "yearmon"              "yearmon_trans"       
[109] "yearqtr"              "yearqtr_trans"        "zoo"                 
[112] "zooreg"
Categories: Programming

React In Modern Web Applications: Part 2

Xebia Blog - Fri, 09/12/2014 - 22:32

As mentioned in part 1 of this series, React is an excellent Javascript library for writing highly dynamic client-side components. However, React is not meant to be a complete MVC framework. Its strength is the View part of the Model-View-Controller pattern. AngularJS is one of the best known and most powerful Javascript MVC frameworks. One of the goals of the course was to show how to integrate React with AngularJS.

Integrating React with AngularJS

Developers can create reusable components in AngularJS by using directives. Directives are very powerful but complex, and the code quickly tends to become messy and hard to maintain. Therefore, replacing the directive content in your AngularJS application with React components is an excellent use case for React. In the course, we created a Timesheet component in React. To use it with AngularJS, create a directive that loads the WeekEntryComponent:

angular.module('timesheet')
    .directive('weekEntry', function () {
        return {
            restrict: 'A',
            link: function (scope, element) {
                React.renderComponent(new WeekEntryComponent({
                                             companies: scope.companies,
                                             model: scope.model}), element.find('#weekEntry')[0]);
            }
        };
    });

As you can see, it is a simple bit of code. Since this is AngularJS code, I prefer not to use JSX, and write the React-specific code in plain Javascript. However, if you prefer you could use JSX syntax (do not forget to add the directive at the top of the file; see the sketch below). We are basically creating a component and rendering it on the element with id weekEntry. We pass two values from the directive scope to the WeekEntryComponent as properties. The component will render based on the values passed in.
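
If you do go the JSX route, the link function might look something like the sketch below. This is an illustrative sketch rather than code from the course; it assumes the pre-0.12 JSX transformer, whose /** @jsx React.DOM */ pragma is the directive referred to above:

/** @jsx React.DOM */
angular.module('timesheet')
    .directive('weekEntry', function () {
        return {
            restrict: 'A',
            link: function (scope, element) {
                // The JSX compiles to WeekEntryComponent({companies: ..., model: ...})
                React.renderComponent(
                    <WeekEntryComponent companies={scope.companies} model={scope.model} />,
                    element.find('#weekEntry')[0]);
            }
        };
    });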

Reacting to changes in AngularJS

However, the code as shown above has one fatal flaw: the component will not re-render when the companies or model values change in the scope of the directive. The reason is simple: React does not know these values are dynamic, and the component gets rendered once when the directive is initialised. React re-renders a component in only two situations:

  • If the value of a property changes: this allows parent components to force re-rendering of child components
  • If the state object of a component changes: a component can create a state object and modify it

State should be kept to a minimum, and preferably be located in only one component. Most components should be stateless. This makes for simpler, easier to maintain code.
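
To make that concrete, here is a minimal sketch of a stateless component (the component and property names are made up for illustration): it defines no state object and renders purely from its props, so it only re-renders when a parent passes in new property values:

var ClientNameComponent = React.createClass({
    // No getInitialState and no this.state: this component is stateless.
    // It re-renders only when a parent passes in a different 'client' property.
    render: function () {
        return React.DOM.span(null, this.props.client.name);
    }
});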

To make the interaction between AngularJS and React dynamic, the WeekEntryComponent must be re-rendered explicitly by AngularJS whenever a value changes in the scope. AngularJS provides the watch function for this:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(new WeekEntryComponent({
                rows: newData[1].companies,
                clients: newData[0]
              }
            ), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

In this code, AngularJS watches two values in the scope, clients and model, and when one of the values is changed by a user action, the function gets called, re-rendering the React component only when both values are defined. If most of the scope is needed from AngularJS, it is better to pass in the complete scope as a property, and put a watch on the scope itself. Remember, React is very smart about re-rendering to the browser. Only if a change in the scope properties leads to a real change in the DOM will the browser be updated!

Accessing AngularJS code from React

When your component needs to use functionality defined in AngularJS you need to use a function callback. For example, in our code, saving the timesheet is handled by AngularJS, while the save button is managed by React. We solved this by creating a saveTimesheet function and adding it to the scope of the directive. In the WeekEntryComponent we added a new property: saveMethod:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(WeekEntryComponent({
              rows: newData[1].companies,
              clients: newData[0],
              saveMethod: scope.saveTimesheet
            }), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

Since this function is not going to be changed it does not need to be watched. Whenever the save button in the TimeSheetComponent is clicked, the saveTimesheet function is called and the new state of the timesheet is saved.
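
As an illustration of that wiring (a sketch with hypothetical component markup, not the course code), the React component simply hands the callback it received as a property to the button's onClick handler:

var TimeSheetComponent = React.createClass({
    render: function () {
        // this.props.saveMethod is the saveTimesheet function that the
        // AngularJS directive put on its scope and passed in as a property.
        return React.DOM.button({ onClick: this.props.saveMethod }, 'Save');
    }
});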

What's next

In the next part we will look at how to deal with state in your components and how to avoid passing callbacks through your entire component hierarchy when changing state.

Announcing $100,000 for Startups on Google Cloud Platform

Google Code Blog - Fri, 09/12/2014 - 16:12
This post originally appeared on the Google Cloud Platform blog 
by Julie Pearl, Director, Developer Relations

Today at the Google for Entrepreneurs Global Partner Summit, Urs Hölzle, Senior Vice President, Technical Infrastructure & Google Fellow announced Google Cloud Platform for Startups. This new program will help eligible early-stage startups take advantage of the cloud and get resources to quickly launch and scale their idea by receiving $100,000 in Cloud Platform credit, 24/7 support, and access to our technical solutions team.

This offer is available to startups around the world through top incubators, accelerators and investors. We are currently working with over 50 global partners to provide this offer to startups that have less than $5 million in funding and less than $500,000 in annual revenue. In addition, we will continue to add more partners over time.

This offer supports our core Google Cloud Platform philosophy: we want developers to focus on code, not worry about managing infrastructure. Starting today, startups can take advantage of this offer and begin using the same infrastructure platform we use at Google.

Thousands of startups have built successful applications on Google Cloud Platform and those applications have grown to serve tens of millions of users. It has been amazing to watch Snapchat send over 700 million photos and videos a day and Khan Academy teach millions of students. Another example, Headspace, is helping millions of people keep their minds healthier and happier using Cloud Platform for Startups. We look forward to helping the next generation of startups launch great products.

For more information on Google Cloud Platform for Startups, visit http://cloud.google.com/startups.

Posted by Katie Miller, Google Developer Platform Team
Categories: Programming

Azure: SQL Databases, API Management, Media Services, Websites, Role Based Access Control and More

ScottGu's Blog - Scott Guthrie - Fri, 09/12/2014 - 07:14

This week we released a major set of updates to Microsoft Azure. This week’s updates include:

  • SQL Databases: General Availability of Azure SQL Database Service Tiers
  • API Management: General Availability of our API Management Service
  • Media Services: Live Streaming, Content Protection, Faster and Cost Effective Encoding, and Media Indexer
  • Web Sites: Virtual Network integration, new scalable CMS with WordPress and updates to Web Site Backup in the Preview Portal
  • Role-based Access Control: Preview release of role-based access control for Azure Management operations
  • Alerting: General Availability of Azure Alerting and new alerts on events

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

SQL Databases: General Availability of Azure SQL Database Service Tiers

I’m happy to announce the General Availability of our new Azure SQL Database service tiers - Basic, Standard, and Premium.  The SQL Database service within Azure provides a compelling database-as-a-service offering that enables you to quickly innovate & stand up and run SQL databases without having to manage or operate VMs or infrastructure.

Today’s SQL Database Service Tiers all come with a 99.99% SLA, and databases can now grow up to 500GB in size.

Each SQL Database tier now guarantees a consistent performance level that you can depend on within your applications – avoiding the need to worry about “noisy neighbors” who might impact your performance from time to time.

Built-in point-in-time restore support now provides you with the ability to automatically re-create databases at a certain point of time (giving you much more backup flexibility and allowing you to restore to exactly the point before you accidentally did something bad to your data).

Built-in auditing support enables you to gain insight into events and changes that occur with the databases you host.

Built-in active geo-replication support, available with the premium tier, enables you to create up to 4 readable secondary databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well:

[diagram: transactions replicated from the primary database to secondary regions]

One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster, etc).  The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions:

[diagram: shifting the primary database to a secondary region during failover]

This provides a robust business continuity offering, and enables you to run mission critical solutions in the cloud with confidence.

More Flexible Pricing

SQL Databases are now billed on a per-hour basis – allowing you to quickly create and tear down databases, and dynamically scale up or down databases even more cost effectively.

Basic Tier databases support databases up to 2GB in size and cost $4.99 for a full month of use.  Standard Tier databases support 250GB databases and now start at $15/month (there are also higher performance standard tiers at $30/month and $75/month). Premium Tier databases support 500GB databases as well as the active geo-replication feature and now start at $465/month.

The below table provides a quick look at the different tiers and functionality:

[table: SQL Database service tiers and functionality]

This page provides more details on how to think about DTU performance with each of the above tiers, and provides benchmark details on the number of transactions supported by each of the above service tiers and performance levels.

During the preview, we’ve heard from some ISVs, which have a large number of databases with variable performance demands, that they need the flexibility to share DTU performance resources across multiple databases as opposed to managing tiers for databases individually.  For example, some SaaS ISVs may have a separate SQL database for each customer and as the activity of each database varies, they want to manage a pool of resources with a defined budget across these customer databases.  We are working to enable this scenario within the new service tiers in a future service update. If you are an ISV with a similar scenario, please click here to sign up to learn more.

Learn more about SQL Databases on Azure here.

API Management Service: General Availability Release

I’m excited to announce the General Availability of the Azure API Management Service.

In my last post I discussed how API Management enables customers to securely publish APIs to developers and accelerate partner adoption.  These APIs can be used from mobile and client applications (on any device) as well as other cloud and service based applications.

The API management service supports the ability to take any APIs you already have (either in the cloud or on-premises) and publish them for others to use.  The API Management service enables you to:

  • Throttle, rate limit and quota your APIs
  • Gain analytic insights on how your APIs are being used and by whom
  • Secure your APIs using OAuth or key-based access
  • Track the health of your APIs and quickly identify errors
  • Easily expose a developer portal for your APIs that provides documentation and test experiences to developers who want to use your APIs

Today’s General Availability provides a formal SLA for Standard tier services.  We also have a developer tier of the service that you can use, starting at just $49 per month.

OAuth support in the Developer Portal

The API Management service provides a developer console that enables a great on-boarding and interactive learning experience for developers who want to use your APIs.  The developer console enables you to easily expose documentation as well enable developers to try/test your APIs.

With this week’s GA release we are also adding support that enables API publishers to register their OAuth Authorization Servers for use in the console, which in turn allows developers to sign in with their own login credentials when interacting with your API - a critical feature for any API that supports OAuth. All normative authorization grant types are supported plus scopes and default scopes.

[screenshot: OAuth authorization server configuration in the developer console]

For more details on how to enable OAuth 2 support with API Management and integration in the new developer portal, check out this tutorial.

Click here to learn more about the API Management service and try it out for free.

Media Services: Live Streaming, DRM, Faster Cost Effective Encoding, and Media Indexer

This week we are excited to announce the public preview of Live Streaming and Content Protection support with Azure Media Services.

The same Internet scale streaming solution that leading international broadcasters used to live stream the 2014 Winter Olympic Games and 2014 FIFA World Cup to tens of millions of customers globally is now available in public preview to all Azure customers. This means you can now stream live events of any size with the same level of scalability, uptime, and reliability that was available to the Olympics and World Cup.

DRM Content Protection

This week Azure Media Services is also introducing a new Content Protection offering which features both static and dynamic encryption with first party PlayReady license delivery and an AES 128-bit key delivery service.  This makes it easy to DRM protect both your live and pre-recorded video assets – and have them be available for users to easily watch them on any device or platform (Windows, Mac, iOS, Android and more).

Faster and More Cost Effective Media Encoding

This week, we are also introducing faster media encoding speeds and more cost-effective billing. Our enhanced Azure Media Encoder is designed for premium media encoding and is billed based on output GBs. Our previous encoder was billed on both input + output GBs, so the shift to output only billing will result in a substantial price reduction for all of our customers.

To help you further optimize your encoding workflows, we’re introducing Basic, Standard, and Premium Encoding Reserved units, which give you more flexibility and allow you to tailor the encoding capability you pay for to the needs of your specific workflows.

Media Indexer

Additionally, I’m happy to announce the General Availability of Azure Media Indexer, a powerful, market differentiated content extraction service which can be used to enhance the searchability of audio and video files.  With Media Indexer you can automatically analyze your media files and index the audio and video content in them. You can learn more about it here.

More Media Partners

I’m also pleased to announce the addition this week of several media workflow partners and client players to our existing large set of media partners:

  • Azure Media Services and Telestream’s Wirecast are now fully integrated, including a built-in destination that makes it quick and easy to send content from Wirecast’s live streaming production software to Azure.
  • Similarly, Newtek’s Tricaster has also been integrated into the Azure platform, enabling customers to combine the high production value of Tricaster with the scalability and reliability of Azure Media Services.
  • Cires21 and Azure Media have paired up to help make monitoring the health of your live channels simple and easy, and the widely-used JW player is now fully integrated with Azure to enable you to quickly build video playback experiences across virtually all platforms.
Learn More

Visit the Azure Media Services site for more information and to get started for free.

Websites: Virtual Network Integration, new Scalable CMS with WordPress

This week we’ve also released a number of great updates to our Azure Websites service.

Virtual Network Integration

Starting this week you can now integrate your Azure Websites with Azure Virtual Networks. This support enables your Websites to access resources attached to your virtual networks.  For example: this means you can now have a Website directly connect to a database hosted in a non-public VM on a virtual network.  If your Virtual Network is connected to your on-premises network (using a Site-to-Site software VPN or ExpressRoute dedicated fiber VPN) you can also now have your Website connect to resources in your on-premises network as well.

The new Virtual Network support enables both TCP and UDP protocols and will work with your VNET DNS. Hybrid Connections and Virtual Network are compatible such that you can also mix both in the same Website.  The new virtual network support for Web Sites is being released this week in preview.  Standard web hosting plans can have up to 5 virtual networks enabled. A website can only be connected to one virtual network at a time but there is no restriction on the number of websites that can be connected to a virtual network.

You can configure a Website to use a Virtual Network using the new Preview Azure Portal (http://portal.azure.com).  Click the “Virtual Network” tile in your web-site to bring up a virtual network blade that you can use to either create a new virtual network or attach to an existing one you already have:

[screenshot: the Virtual Network blade for a Website in the preview portal]

Note that an Azure Website requires that your Virtual Network has a configured gateway and Point-to-Site enabled. It will remain grayed out in the UI above until you have enabled this.

Scalable CMS with WordPress

This week we also released support for a Scalable CMS solution with WordPress running on Azure Websites.  Scalable CMS with WordPress provides the fastest way to build an optimized and hassle free WordPress Website. It is architected so that your WordPress site loads fast and can support millions of page views a month, and you can easily scale up or scale out as your traffic increases.

It is pre-configured to use Azure Storage, which can be used to store your site’s media library content, and can be easily configured to use the Azure CDN.  Every Scalable CMS site comes with auto-scale, staged publishing, SSL, custom domains, Webjobs, and backup and restore features of Azure Websites enabled. Scalable WordPress also allows you to use Jetpack to supercharge your WordPress site with powerful features available to WordPress.com users.

You can now easily deploy Scalable CMS with WordPress solutions on Azure via the Azure Gallery integrated within the new Azure Preview Portal (http://portal.azure.com).  When you select it within the portal it will walk you through automatically setting up and deploying a complete solution on Azure:

[screenshot: Scalable CMS with WordPress setup in the Azure preview portal]

Scalable WordPress is ideal for Web developers, creative agencies, businesses and enterprises wanting a turn-key solution that maximizes performance of running WordPress on Azure Websites.  It’s fast, simple and secure WordPress hosting on Azure Websites.

Updates to Website Backup

This week we also updated our built-in Backup feature within Azure Websites with a number of nice enhancements.  Starting today, you can now:

  • Choose the exact destination of your backups, including the specific Storage account and blob container you wish to store your backups within.
  • Choose to backup SQL databases or MySQL databases that are declared in the connection strings of the website.
  • On the restore side, you can now restore to both a new site, and to a deployment slot on a site. This makes it possible to verify your backup before you make it live.

These new capabilities make it easier than ever to have a full history of your website and its associated data.

Security: Role Based Access Control for Management of Azure

As organizations move more and more of their workloads to Azure, one of the most requested features has been the ability to control which cloud resources different employees can access and what actions they can perform on those resources.

Today, I’m excited to announce the preview release of Role Based Access Control (RBAC) support in the Azure platform. RBAC is now available in the Azure preview portal and can be used to control access in the portal or access to the Azure Resource Manager APIs. You can use this support to limit the access of users and groups by assigning them roles on Azure resources. Highlights include:

  • A subscription is no longer the access management boundary in Azure. In April, we introduced Resource Groups, a container to group resources that share lifecycle. Now, you can grant users access on a resource group as well as on individual resources like specific Websites or VMs.
  • You can now grant access to both users and groups. RBAC is based on Azure Active Directory, so if your organization already uses groups in Azure Active Directory or Windows Server Active Directory for access management, you will be able to manage access to Azure the same way.

Below are some more details on how this works and can be enabled.

Azure Active Directory

Azure Active Directory is our directory service in the cloud.  You can create organizational tenants within Azure Active Directory and define users and groups within it – without having to have any existing Active Directory setup on-premises.

Alternatively, you can also sync (or federate) users and groups from your existing on-premises Active Directory to Azure Active Directory, and have your existing users and groups automatically be available for use in the cloud with Azure, Office 365, as well as over 2000 other SaaS based applications:

[diagram: syncing or federating on-premises Active Directory users and groups to Azure Active Directory]

All users that access your Azure subscriptions are now present in the Azure Active Directory to which the subscription is associated. This enables you to manage what they can do as well as revoke their access to all Azure subscriptions by disabling their account in the directory.

Role Permissions

In this first preview we are pre-defining three built-in Azure roles that give you a choice of granting restricted access:

  • An Owner can perform all management operations for a resource and its child resources including access management.
  • A Contributor can perform all management operations for a resource including create and delete resources. A contributor cannot grant access to others.
  • A Reader has read-only access to a resource and its child resources. A Reader cannot read secrets.

In the RBAC model, users who have been configured to be the service administrator and co-administrators of an Azure subscription are mapped as belonging to the Owners role of the subscription. Their access to both the current and preview management portals remains unchanged.

Additional users and groups that you then assign to the new RBAC roles will only have those permissions, and also will only be able to manage Azure resources using the new Azure preview portal and Azure Resource Manager APIs.  RBAC is not supported in the current Azure management portal or via older management APIs (since neither of these were built with the concept of role based security built-in).

Restricting Access based on Role Based Permissions

Let’s assume that your team is using Azure for development, as well as to host the production instance of your application. When doing this you might want to separate the resources employed in development and testing from the production resources using Resource Groups.

You might want to allow everyone in your team to have a read-only view of all resources in your Azure subscription – including the ability to read and review production analytics data. You might then want to only allow certain users to have write/contributor access to the production resources.  Let’s look at how to set this up:

Step 1: Setting up Roles at the Subscription Level

We’ll begin by mapping some users to roles at the subscription level.  These will then by default be inherited by all resources and resource groups within our Azure subscription.

To set this up, open the Billing blade within the Preview Azure Portal (http://portal.azure.com), and within the Billing blade select the Azure subscription that you wish to setup roles for: 

[screenshot: selecting a subscription in the Billing blade]

Then scroll down within the blade of the subscription you opened, and locate the Roles tile within it:

[screenshot: the Roles tile in the subscription blade]

Clicking the Roles tile will bring up a blade that lists the pre-defined roles we provide by default (Owner, Contributor, Reader).  You can click any of the roles to bring up a list of the users assigned to the role.  Clicking the Add button will then allow you to search your Azure Active Directory and add either a user or group to that role.

Below I’ve opened up the default Reader role and added David and Fred to it:

[screenshot: the default Reader role with David and Fred added]

Once we do this, David and Fred will be able to log into the Preview Azure Portal and will have read-only access to the resources contained within our subscription.  They will not be able to make any changes, though, nor be able to see secrets (passwords, etc).

Note that in addition to adding users and groups from within your directory, you can also use the Invite button above to invite users who are not currently part of your directory, but who have a Microsoft Account (e.g. scott@outlook.com), to also be mapped into a role.

Step 2: Setting up Roles at the Resource Level

Once you’ve defined the default role mappings at the subscription level, they will by default apply to all resources and resource groups contained within it. 

If you wish to scope permissions even further at just an individual resource (e.g. a VM or Website or Database) or at a resource group level (e.g. an entire application and all resources within it), you can also open up the individual resource/resource-group blade and use the Roles tile within it to further specify permissions.

For example, earlier we granted David reader role access to all resources within our Azure subscription.  Let’s now grant him contributor role access to just an individual VM within the subscription.  Once we do this he’ll be able to stop/start the VM as well as make changes to it.

To enable this, I’ve opened up the blade for the VM below.  I’ve then scrolled down the blade and found the Roles tile within the VM.  Clicking the contributor role within the Roles tile will then bring up a blade that allows me to configure which users will be contributors (meaning have read and modify permissions) for this particular VM.  Notice below how I’ve added David to this:

[screenshot: adding David as a contributor on the VM]

Using this resource/resource-group level approach enables you to have really fine-grained access control permissions on your resources.

Command Line and API Access for Azure Role Based Access Control

The enforcement of the access policies that you configure using RBAC is done by the Azure Resource Manager APIs.  Both the Azure preview portal as well as the command line tools we ship use the Resource Manager APIs to execute management operations. This ensures that access is consistently enforced regardless of what tools are used to manage Azure resources.

With this week’s release we’ve included a number of new Powershell APIs that enable you to automate setting up as well as controlling role based access.

Learn More about Role Based Access

Today’s Role Based Access Control Preview provides a lot more flexibility in how you manage the security of your Azure resources.  It is easy to setup and configure.  And because it integrates with Azure Active Directory, you can easily sync/federate it to also integrate with the existing Active Directory configuration you might already have in your on-premises environment.

Getting started with the new Azure Role Based Access Control support is as simple as assigning the appropriate users and groups to roles on your Azure subscription or individual resources. You can read more detailed information on the concepts and capabilities of RBAC here. Your feedback on the preview features is critical for all improvements and new capabilities coming in this space, so please try out the new features and provide us your feedback.

Alerts: General Availability of Azure Alerting and new Alerts on Events support

I’m excited to announce the release of Azure Alerting to General Availability. Azure Alerting supports the ability to create alert thresholds on metrics that you are interested in, and then have Azure automatically send an email notification when that threshold is crossed. As part of the general availability release, we are removing the 10 alert rule cap per subscription.

Alerts are available in the full Azure portal by clicking Management Services in the left navigation bar:

[screenshot: alerts under Management Services in the full Azure portal]

Also, alerting is available on most of the resources in the Azure preview portal:

[screenshot: alert configuration on a resource in the Azure preview portal]

You can create alerts on metrics from 8 different services today (and we are adding more all the time):

  • Cloud Services
  • Virtual Machines
  • Websites
  • Web hosting plans
  • Storage accounts
  • SQL databases
  • Redis Cache
  • DocumentDB accounts

In addition to general availability for alerts on metrics, we are also previewing the ability to create alerts on operational events. This means you can get an email if someone stops your website, if your virtual machines are deleted, or if your Azure Resource Manager template deployment failed. Like alerts on metrics, you can route these alerts to the service and co-administrators, or to a custom email address you provide.  You can configure these events on a resource in the Azure Preview Portal.  We have enabled this within the Portal for Websites – we’ll be extending it to all resources in the future.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Ten at Ten Meetings

Ten at Ten Meetings are a very simple tool for helping teams stay focused, stay connected, and collaborate more effectively, the Agile way.

I’ve been leading distributed teams and v-teams for years.   I needed a simple way to keep everybody on the same page, expose issues, and help everybody on the team increase their awareness of results and progress, as well as unblock and breakthrough blocking issues.

Why Ten at Ten Meetings?

When people are remote, it’s easy to feel disconnected, and it’s easy to start to feel like different people are just a “black box” or “go dark.”

Ten at Ten Meetings have been my friend and have helped me help everybody on the team stay in sync and appreciate each other’s work, while finding better ways to team up on things, and drive to results, in a collaborative way.  I believe I started Ten at Ten Meetings back in 2003 (before that, I wasn’t as consistent … I think 2003 is where I realized a quick sync each day, keeps the “black box” away.)

Overview of Ten at Ten Meetings

I’ve written about Ten at Ten Meetings before in my posts on How To Lead High-Performance Distributed Teams, How I Use Agile Results, Interview on Timeboxing for HBR (Harvard Business Review), Agile Results Works for Teams and Leaders Too,  and 10 Free Leadership Tools for Work and Life, but I thought it would be helpful to summarize some of the key information at a glance.

Here is an overview of Ten at Ten Meetings:

This is one of my favorite tools for reducing email and administration overhead and getting everybody on the same page fast.  It's simply a stand-up meeting.  I tend to have them at 10:00, and I set a limit of 10 minutes.  This way people look forward to the meeting as a way to very quickly catch up with each other, and to stay on top of what's going on, and what's important.  The way it works is I go around the (virtual) room, and each person identifies what they got done yesterday, what they're getting done today, and any help they need.  It's a fast process, although it can take practice in the beginning.  When I first started, I had to get in the habit of hanging up on people if it went past 10 minutes.  People very quickly realized that the ten minute meeting was serious.  Also, as issues came up, if they weren't fast to solve on the fly and felt like a distraction, then we had to learn to take them offline.  Eventually, this helped build a case for a recurring team meeting where we could drill deeper into recurring issues or patterns, and focus on improving overall team effectiveness.

3 Steps for Ten at Ten Meetings

Here is more of a step-by-step approach:

  1. I schedule ten minutes for Monday through Thursday, at whatever time the team can agree to, but in the AM. (no meetings on Friday)
  2. During the meeting, we go around and ask three simple questions:  1)  What did you get done?  2) What are you getting done today? (focused on Three Wins), and 3) Where do you need help?
  3. We focus on the process (the 3 questions) and the timebox (10 minutes) so it’s a swift meeting with great results.   We put issues that need more drill-down or exploration into a “parking lot” for follow up.  We focus the meeting on status and clarity of the work, the progress, and the impediments.

You’d be surprised at how quickly people start to pay attention to what they’re working on and on what’s worth working on.  It also helps team members very quickly see each other’s impact and results.  It also helps people raise their bar, especially when they get to hear  and experience what good looks like from their peers.

Most importantly, it shines the light on little, incremental progress, and, if you didn’t already know, progress is the key to happiness in work and life.

You Might Also Like

10 Free Leadership Tools for Work and Life

How I Use Agile Results

How To Lead High-Performance Distributed Teams

Categories: Architecture, Programming

Soft Skills is the Manning Deal of the Day!

Making the Complex Simple - John Sonmez - Thu, 09/11/2014 - 15:00

Great news! Early access to my new book Soft Skills: The Software Developer’s Life Manual, is on sale today (9/11/2014) only as Manning’s deal of the day! If you’ve been thinking about getting the book, now is probably the last chance to get it at a discounted rate. Just use the code: dotd091114au to get the […]

The post Soft Skills is the Manning Deal of the Day! appeared first on Simple Programmer.

Categories: Programming

Open as Many Doors as Possible

Making the Complex Simple - John Sonmez - Thu, 09/11/2014 - 15:00

Even though specialization is important, it doesn’t mean you shouldn’t strive to open up as many doors as possible in your life.

The post Open as Many Doors as Possible appeared first on Simple Programmer.

Categories: Programming

Xcode 6 GM & Learning Swift (with the help of Xebia)

Xebia Blog - Thu, 09/11/2014 - 07:32

I guess re-iterating the announcements Apple did on Tuesday is not needed.

What is most interesting to me about everything that happened on Tuesday is the fact that iOS 8 has now reached GM status and Apple sent the call to bring in your iOS 8 uploads to iTunes Connect. iOS 8 is around the corner, about a week from now, bringing some great new features to the platform and ... Swift.

Swift

I was thinking about putting together a list of excellent links about Swift. But obviously somebody has done that already
https://github.com/Wolg/awesome-swift
(And best of all, if you find/notice an epic Swift resource out there, submit a pull request to that REPO, or leave a comment on this blog post.)

If you are getting started, check out:
https://github.com/nettlep/learn-swift
It's a GitHub repo filled with extra Playgrounds to learn Swift in a hands-on manner. It elaborates a bit further on the later chapters of the Swift language book.

But the best way to learn Swift I can come up with is to join Xebia for a day (or two) and attend one of our special purpose update training offers hosted by Daniel Steinberg on 6 and 7 November. More info on that:

Cloud Changes the Game from Deployment to Adoption

Before the Cloud, there was a lot of focus on deployment, as if deployment was success. 

Once you shipped the project, it was time to move on to the next project.  And project success was measured in terms of “on time” and “on budget.”   If you could deploy things quickly, you were a super shipper.

Of course, what we learned was that if you simply throw things over the wall and hope they stick, it’s not very successful.

"If you build it" ... users don't always come.

It was easy to confuse shipping projects on time and on budget with business impact.  

But let's compound the problem. 

The Development Hump

The big hump of software development was the hump in the middle: a big development hump.  And that hump was followed by a big deployment hump (installing software, fixing issues, dealing with deployment hassles, etc.)

So not only were development cycles long, but deployment was tough, too.

Because development cycles were long, and deployment was so tough, it was easy to confuse effort for value.

Cloud Changes the Hump

Now, let's turn it around.

With the Cloud, deployment is simplified.  You can reach more users, and it's easier to scale.  And it's easier to be available 24x7.

Add Agile to the mix, and people ship smaller, more frequent releases.

So with smaller, more-frequent releases, and simpler deployment, some software teams have turned into shipping machines.

The Cloud shrinks the development and deployment humps.

So now the game is a lot more obvious.

Deployment doesn't mark the finish.  It starts the game.

The real game of software success is adoption.

The Adoption Hump is Where the Benefits Are

If you picture the old IT project hump, where there is a long development cycle in the middle, now it's shorter humps in the middle.

The big hump is now user adoption.

It’s not new.  It was always there.   But the adoption hump was hidden beyond the development and deployment humps, and simply written off as “Value Leakage.”

And if you made it over the first two humps, since most projects did not plan or design for adoption, or allocate any resources or time, adoption was mostly an afterthought.  

And so the value leaked.

But the adoption hump is where the business benefits are.   The ROI is sitting there, gathering dust, in our "pay-for-play" world.   The value is simply waiting to be released and unleashed. 

Software solutions are sitting idle waiting for somebody to realize the value.

Accelerate Value by Accelerating Adoption

All of the benefits to the business are locked up in that adoption hump.   All of the benefits around how users will work better, faster, or cheaper, or how you will change the customer interaction experience, or how back-office systems will be better, faster, cheaper ... they are all locked up in that adoption hump.

As I said before, the key to Value Realization is adoption.  

So if you want to realize more value, drive more user adoption. 

And if you want to accelerate value, then accelerate user adoption.

In Essence …

In a Cloud world, the original humps of design, development, and deployment shrink.   But it’s not just time and effort that shrink.  Costs shrink, too.   With online platforms to build on (Infrastructure as a Service, Platforms as a Service, and Software as a Service), you don’t have to start from scratch or roll your own.   And if you adopt a configure before customize mindset, you can further reduce your costs of design and development.

Architecture moves up the stack from basic building blocks to composition.

And adoption is where the action is.  

What was the afterthought in the last generation of solutions is now front and center. 

In the new world, adoption is a planned spend, and it’s core to the success of the planned value delivery.

If you want to win the game, think “Adoption-First.”

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

Running unit tests on iOS devices

Xebia Blog - Tue, 09/09/2014 - 09:48

When running a unit test target that needs an entitlement (keychain access), it does not work out of the box from Xcode. You get some descriptive error in the console about a "missing entitlement". Everything works fine on the Simulator, though.

Often this is a case of the executable bundle's code signature not being valid anymore because a testing bundle was added/linked to the executable before deployment to the device. The easiest fix is to add a new "Run Script Build Phase" with the content:

codesign --verify --force --sign "$CODE_SIGN_IDENTITY" "$CODESIGNING_FOLDER_PATH"


Now try (cleaning and) running your unit tests again. Good chance it now works.

Update on Android Wear Paid Apps

Android Developers Blog - Mon, 09/08/2014 - 22:02

Update (8 September 2014): All of the issues in the post below have now been resolved in Android Studio 0.8.3 onwards, released on 21 July 2014. The Gradle wearApp rule, and the packaging documentation, were updated to use res/raw. The instructions on this blog post remain correct and you can continue to use manual packaging if you want. You can upload Android Wear paid apps to Google Play using the standard wearApp release mechanism.

We have a workaround to enable paid apps (and other apps that use Google Play's forward-lock mechanism) on Android Wear. The assets/ directory of those apps, which contains the wearable APK, cannot be extracted or read by the wearable installer. The workaround is to place the wearable APK in the res/raw directory instead.

As per the documentation, there are two ways to package your wearable app: use the “wearApp” Gradle rule to package your wearable app or manually package the wearable app. For paid apps, the workaround is to manually package your apps with the following two changes, and you cannot use the “wearApp” Gradle rule. To manually package the wearable APK into res/raw, do the following:

  1. Copy the signed wearable app into your handheld project's res/raw directory and rename it to wearable_app.apk; it will be referred to as wearable_app.
  2. Create a res/xml/wearable_app_desc.xml file that contains the version and path information of the wearable app:
    <wearableApp package="wearable app package name">
        <versionCode>1</versionCode>
        <versionName>1.0</versionName>
        <rawPathResId>wearable_app</rawPathResId>
    </wearableApp>

    The package, versionCode, and versionName are the same as values specified in the wearable app's AndroidManifest.xml file. The rawPathResId is the static variable name of the resource. If the filename of your resource is wearable_app.apk, the static variable name would be wearable_app.

  3. Add a <meta-data> tag to your handheld app's <application> tag to reference the wearable_app_desc.xml file.
    <meta-data android:name="com.google.android.wearable.beta.app"
               android:resource="@xml/wearable_app_desc"/>
  4. Build and sign the handheld app.

We will be updating the “wearApp” Gradle rule in a future update to the Android SDK build tools to support APK embedding into res/raw. In the meantime, for paid apps you will need to follow the manual steps outlined above. We will also be updating the documentation to reflect the above workaround. We're working to make this easier for you in the future, and we apologize for the inconvenience.




Categories: Programming

An F# newbie using SQLite

Eric.Weblog() - Eric Sink - Mon, 09/08/2014 - 19:00

Like I said in a tweet on Friday, I'm guessing everybody's first 10,000 lines of F# are crap. That's a lot of bad code I need to write, so I figure maybe I better get started.

This blog entry is a journal of my first attempts at using F# to do some SQLite stuff. I'm using SQLitePCL.raw, which is a Portable Class Library wrapper (written in C#) allowing .NET developers to call the native SQLite library.

My program has five "stanzas":

  • ONE: Open a SQLite database and create a table with two integer columns called a and b.

  • TWO: Insert 16 rows with a going from 1 through 16.

  • THREE: Set column b equal to a squared, and lookup the value of b for a=7.

  • FOUR: Loop over all the rows where b<120 and sum the a values.

  • FIVE: Close the database and print the two results.

I've got three implementations of this program to show you -- one in C# and two in F#.

C#

Here's the C# version I started with:

using System;
using System.IO;

using SQLitePCL;

public class foo
{
    // helper functions to check SQLite result codes and throw

    private static bool is_error(int rc)
    {
        return (
                (rc != raw.SQLITE_OK)
                && (rc != raw.SQLITE_ROW)
                && (rc != raw.SQLITE_DONE)
           );
    }

    private static void check(int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(string.Format("{0}", rc));
        }
    }

    private static void check(sqlite3 conn, int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(raw.sqlite3_errmsg(conn));
        }
    }

    private static int checkthru(sqlite3 conn, int rc)
    {
        if (is_error(rc))
        {
            throw new Exception(raw.sqlite3_errmsg(conn));
        }
        else
        {
            return rc;
        }
    }

    // MAIN program

    public static void Main()
    {
        sqlite3 conn = null;

        // ONE: open the db and create the table

        check(raw.sqlite3_open(":memory:", out conn));

        check(conn, raw.sqlite3_exec(conn, "CREATE TABLE foo (a int, b int)"));

        // TWO: insert 16 rows

        for (int i=1; i<=16; i++)
        {
            string sql = string.Format("INSERT INTO foo (a) VALUES ({0})", i);
            check(conn, raw.sqlite3_exec(conn, sql));
        }

        // THREE: set b = a squared and find b for a=7

        check(conn, raw.sqlite3_exec(conn, "UPDATE foo SET b = a * a"));

        sqlite3_stmt stmt = null;
        check(conn, raw.sqlite3_prepare_v2(conn, "SELECT b FROM foo WHERE a=?", out stmt));

        check(conn, raw.sqlite3_bind_int(stmt, 1, 7));
        check(conn, raw.sqlite3_step(stmt));
        int vsq = raw.sqlite3_column_int(stmt, 0);
        check(conn, raw.sqlite3_finalize(stmt));
        stmt = null;

        // FOUR: fetch sum(a) for all rows where b < 120

        check(conn, raw.sqlite3_prepare_v2(conn, "SELECT a FROM foo WHERE b<120", out stmt));

        int vsum = 0;

        while (raw.SQLITE_ROW == (checkthru(conn, raw.sqlite3_step(stmt))))
        {
            vsum += raw.sqlite3_column_int(stmt, 0);
        }
        
        check(conn, raw.sqlite3_finalize(stmt));
        stmt = null;

        // FIVE: close and print the results

        check(raw.sqlite3_close(conn));
        conn = null;

        Console.WriteLine("val: {0}", vsq);
        Console.WriteLine("sum: {0}", vsum);
    }
}

Notes:

  • I'm coding against the 'raw' SQLite API, which returns integer error codes rather than throwing exceptions. So I've written some little check functions which throw on any result code that signifies an error condition.

  • In the first stanza, I'm opening ":memory:" rather than an actual file on disk so that I can be sure the db starts clean.

  • In the second stanza, I'm constructing the SQL string rather than using parameter substitution. This is a bad idea for two reasons. First, parameter substitution eliminates SQL injection attacks. Second, forcing SQLite to compile a SQL statement inside a loop is going to cause poor performance.

  • In the third stanza, I'm going out of my way to do this more properly, using prepare/bind/step/finalize. Ironically, this is the case where it doesn't matter as much, since I'm not looping.

  • In the fourth stanza, I specifically want to loop over the rows in C# even though I could easily just do the sum in SQL.

F#, first attempt

OK, now here's a painfully direct translation of this code to F#:

open SQLitePCL

// helper functions to check SQLite result codes and throw

let is_error rc = 
    (rc <> raw.SQLITE_OK) 
    && (rc <> raw.SQLITE_ROW) 
    && (rc <> raw.SQLITE_DONE)

let check1 rc = 
    if (is_error rc) 
    then failwith (sprintf "%d" rc) 
    else ()

let check2 conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else ()

let checkthru conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else rc

// MAIN program

// ONE: open the db and create the table

let (rc,conn) = raw.sqlite3_open(":memory:") 
check1 rc

check2 conn (raw.sqlite3_exec (conn, "CREATE TABLE foo (a int, b int)"))

// TWO: insert 16 rows

for i = 1 to 16 do 
    let sql = (sprintf "INSERT INTO foo (a) VALUES (%d)" i)
    check2 conn (raw.sqlite3_exec (conn, sql ))

// THREE: set b = a squared and find b for a=7

check2 conn (raw.sqlite3_exec (conn, "UPDATE foo SET b = a * a"))

let rc2,stmt = raw.sqlite3_prepare_v2(conn, "SELECT b FROM foo WHERE a=?")
check2 conn rc2

check2 conn (raw.sqlite3_bind_int(stmt, 1, 7))
check2 conn (raw.sqlite3_step(stmt))
let vsq = raw.sqlite3_column_int(stmt, 0)
check2 conn (raw.sqlite3_finalize(stmt))

// FOUR: fetch sum(a) for all rows where b < 120

let rc3,stmt2 = raw.sqlite3_prepare_v2(conn, "SELECT a FROM foo WHERE b<120")
check2 conn rc3

let mutable vsum = 0

while raw.SQLITE_ROW = ( checkthru conn (raw.sqlite3_step(stmt2)) ) do 
    vsum <- vsum + (raw.sqlite3_column_int(stmt2, 0))

check2 conn (raw.sqlite3_finalize(stmt2))

// FIVE: close and print the results

check1 (raw.sqlite3_close(conn))

printfn "val: %d" vsq
printfn "sum: %d" vsum

Notes:

  • The is_error function actually looks kind of elegant to me in this form. Note that != is spelled <>. Also there is no return keyword, as the value of the last expression just becomes the return value of the function.

  • The F# way is to use type inference. For example, in the is_error function, the rc parameter is strongly typed as an integer even though I haven't declared it that way. The F# compiler looks at the function and sees that I am comparing the parameter against raw.SQLITE_OK, which is an integer, therefore rc must be an integer as well. F# does have a syntax for declaring the type explicitly, but this is considered bad practice.

  • The check2 and checkthru functions are identical except that one returns the unit type (which is kind of like void) and the other passes the integer argument through. In C# this wouldn't matter and I could have just had the check functions return their argument when they don't throw. But F# gives warnings ("This expression should have type 'unit', but has type...") for any expression whose value is not used.

  • In C#, I overloaded check() so that I could sometimes call it with the sqlite3 connection handle and sometimes without. F# doesn't allow overloading of let-bound functions, so I wrote two versions, called check1 and check2.

  • Since raw.sqlite3_open() has an out parameter, F# automatically converts this to return a tuple with two items (the actual return value is first, and the value in the out parameter is second). It took me a little while to figure out the right syntax to get the two parts into separate variables.

  • It took me even longer to figure out that calling a .NET method in F# uses a different syntax than calling a regular F# function. I was just getting used to the idea that F# wants functions to be called without parens and with the parameters separated by spaces instead of commas. But .NET methods are not F# functions. The syntax for calling a .NET method is, well, just like in C#. Parens and commas. (A small side-by-side sketch follows these notes.)

  • Here's another way that method calls are different in F#: When a method call is a parameter to a regular F# function, you have to enclose it in parens. That's why the call to sqlite3_exec() in the first stanza is parenthesized when I pass it to check2.

  • BTW, one of the first things I did was try to call raw.sqlite3_Open(), just to verify that F# is case-sensitive. It is.

  • F# programmers seem to pride themselves on how much they can do in a single line of code, regardless of how long it is. I originally wrote the second stanza in a single line. I only split it up so it would look better here in my blog article.

  • In the third stanza, F# wouldn't let me reuse rc ("Duplicate definition of value 'rc'") so I had to introduce rc2.

  • In the fourth stanza, I have tried to exactly mimic the behavior of the C# code, and I think I've succeeded so thoroughly that any real F# programmer will be tempted to gouge their eyes out when they see it. I've used mutable and while/do, both of which are considered a very un-functional way of doing things.

  • Bottom line: This code works and it does exactly what the original C# does. But I named the file fsharp_dreadful.fs because I think that in terms of what is considered best practices in the F# world, it's probably about as bad as it can be while still being correct.
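
To make the earlier points about explicit type annotations and method-call syntax a bit more concrete, here is a tiny self-contained F# sketch. None of these names come from the SQLite program; they are purely illustrative:

// purely illustrative F#, unrelated to the SQLite code in this article

// an explicit type annotation, for cases where you want one despite inference
let is_zero (rc : int) : bool = (rc = 0)

// an F# function call: no parens, arguments separated by spaces
let smallest = min 3 7

// a .NET method call: parens and commas, just like C#
let upper = "hello".ToUpper()

// a method call (or any compound expression) passed to an F# function
// needs its own parentheses
printfn "%b %d %s" (is_zero smallest) smallest (upper.ToLower())

The last line combines both rules: the F# function printfn gets its arguments separated by spaces, and the method call upper.ToLower() has to be parenthesized to be passed as one of them.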

F#, somewhat less csharpy

Here's an F# version I called fsharp_less_bad.fs. It's still not very good, but I've made an attempt to do some things in a more F#-ish way.

open SQLitePCL

// helper functions to check SQLite result codes and throw

let is_error rc = 
    match rc with
    | raw.SQLITE_OK -> false
    | raw.SQLITE_ROW -> false
    | raw.SQLITE_DONE -> false
    | _ -> true

let check1 rc = 
    if (is_error rc) 
    then failwith (sprintf "%d" rc) 
    else ()

let check2 conn rc = 
    if (is_error rc) 
    then failwith (raw.sqlite3_errmsg(conn)) 
    else rc

let checkpair1 pair =
    let rc,result = pair
    check1 rc |> ignore
    result

let checkpair2 conn pair =
    let rc,result = pair
    check2 conn rc |> ignore
    result

// helper functions to wrap method calls in F# functions

let sqlite3_open name = checkpair1 (raw.sqlite3_open(name))
let sqlite3_exec conn sql = check2 conn (raw.sqlite3_exec (conn, sql)) |> ignore
let sqlite3_prepare_v2 conn sql = checkpair2 conn (raw.sqlite3_prepare_v2(conn, sql))
let sqlite3_bind_int conn stmt ndx v = check2 conn (raw.sqlite3_bind_int(stmt, ndx, v)) |> ignore
let sqlite3_step conn stmt = check2 conn (raw.sqlite3_step(stmt)) |> ignore
let sqlite3_finalize conn stmt = check2 conn (raw.sqlite3_finalize(stmt)) |> ignore
let sqlite3_close conn = check1 (raw.sqlite3_close(conn))
let sqlite3_column_int stmt ndx = raw.sqlite3_column_int(stmt, ndx)

// MAIN program

// ONE: open the db and create the table

let conn = sqlite3_open(":memory:")

// use partial application to create an exec function that already 
// has the conn parameter baked in

let exec = sqlite3_exec conn

exec "CREATE TABLE foo (a int, b int)"

// TWO: insert 16 rows

let ins x = 
    exec (sprintf "INSERT INTO foo (a) VALUES (%d)" x)

[1 .. 16] |> List.iter ins

// THREE: set b = a squared and find b for a=7

exec "UPDATE foo SET b = a * a"

let stmt = sqlite3_prepare_v2 conn "SELECT b FROM foo WHERE a=?"
sqlite3_bind_int conn stmt 1 7
sqlite3_step conn stmt
let vsq = sqlite3_column_int stmt 0
sqlite3_finalize conn stmt

// FOUR: fetch sum(a) for all rows where b < 120

let stmt2 = sqlite3_prepare_v2 conn "SELECT a FROM foo WHERE b<120"

let vsum = List.sum [ while 
    raw.SQLITE_ROW = ( check2 conn (raw.sqlite3_step(stmt2)) ) do 
        yield sqlite3_column_int stmt2 0 
    ]

sqlite3_finalize conn stmt2

// FIVE: close and print the results

sqlite3_close conn

printfn "val: %d" vsq
printfn "sum: %d" vsum

Notes:

  • I changed is_error to use pattern matching. For this very simple situation, I'm not sure this is an improvement over the simple boolean expression I had before.

  • I get the impression that typical doctrine in functional programming church is to not use exceptions, but I'm not tackling that problem here.

  • I got rid of checkthru and just made check2 return its rc parameter when it doesn't throw. This means that most of the time I call check2 I have to ignore the result or else I get a warning.

  • I added a couple of checkpair functions. These are designed to take a tuple, such as the one that comes from a .NET method with an out parameter, like sqlite3_open() or sqlite3_prepare_v2(). The checkpair function applies the appropriate check function to the first part of the tuple (the integer return code) and then returns the second part. The sort-of clever thing here is that checkpair does not care what type the second part of the tuple is. I get the impression that this sort of "things are generic by default" philosophy is a pillar of functional programming.

  • I added several functions which wrap raw.sqlite3_whatever() as a more F#-like function that looks less cluttered.

  • In the first stanza, after I get the connection open, I define an exec function using the F# partial application feature. The exec function is basically just the sqlite3_exec function except that the conn parameter has already been baked in. This allows me to use very readable syntax like exec "whatever". I considered doing this for all the functions, but I'm not really sure this design is a good idea. I just found this hammer called "partial application" so I was looking for a nail.

  • In the second stanza, I've eliminated the for loop in favor of a list operation. I defined a function called ins which inserts one row. The [1 .. 16] syntax produces a range, which is piped into List.iter. (The insert still builds its SQL with sprintf; a parameterized variant is sketched just after these notes.)

  • The third stanza looks a lot cleaner with all the .NET method calls hidden away.

  • In the fourth stanza, I still have a while loop, but I was able to get rid of mutable. The syntax I'm using here is (I think) something called a computation expression. Basically, the stuff inside the square brackets is constructing a list with a while loop. Then List.sum is called on that list, resulting in the number I want.
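
One thing this version still shares with the C# original is that the INSERT statement is built with string formatting. As noted back in the C# section, binding a parameter to a single prepared statement avoids both the injection risk and the repeated compilation. Here is a minimal sketch of what that could look like using the wrapper functions above. It assumes conn, check2 and the sqlite3_* helpers from this listing, and it assumes the binding exposes raw.sqlite3_reset the same way it exposes the other native calls:

// sketch: prepare the INSERT once, then bind/step/reset for each row
// (assumes conn, check2 and the wrapper functions defined above;
//  raw.sqlite3_reset is assumed to be exposed like the other raw calls)
let insStmt = sqlite3_prepare_v2 conn "INSERT INTO foo (a) VALUES (?)"

for i = 1 to 16 do
    sqlite3_bind_int conn insStmt 1 i
    sqlite3_step conn insStmt
    // reset readies the prepared statement for the next bind/step cycle
    check2 conn (raw.sqlite3_reset(insStmt)) |> ignore

sqlite3_finalize conn insStmt

The table ends up with the same sixteen rows the sprintf-based ins function produces; the difference is that SQLite compiles the statement once and the values never pass through a SQL string.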

Other notes

I did all this using the command-line F# tools and Mono on a Mac. I've got a command called fsharpc on my system. I'm not sure how it got there, but it probably happened when I installed Xamarin.

Since I'm not using msbuild or NuGet, I just harvested SQLitePCL.raw.dll from the SQLitePCL.raw NuGet package. The net45 version is compatible with Mono, and on a Mac, it will simply P/Invoke into the SQLite library that comes preinstalled with Mac OS X.

So the bash commands to setup my environment for this blog entry looked something like this:

mkdir fs_sqlite
cd fs_sqlite
mkdir unzipped
cd unzipped
unzip ~/Downloads/sqlitepcl.raw_needy.0.5.0.nupkg 
cd ..
cp unzipped/lib/net45/SQLitePCL.raw.dll .

Here are the build commands I used:

fs_sqlite eric$ gmcs -r:SQLitePCL.raw.dll -sdk:4.5 csharp.cs

fs_sqlite eric$ fsharpc -r:SQLitePCL.raw.dll fsharp_dreadful.fs
F# Compiler for F# 3.1 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

fs_sqlite eric$ fsharpc -r:SQLitePCL.raw.dll fsharp_less_bad.fs
F# Compiler for F# 3.1 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License

fs_sqlite eric$ ls -l *.exe
-rwxr-xr-x  1 eric  staff   4608 Sep  8 15:30 csharp.exe
-rwxr-xr-x  1 eric  staff   8192 Sep  8 15:30 fsharp_dreadful.exe
-rwxr-xr-x  1 eric  staff  11776 Sep  8 15:31 fsharp_less_bad.exe

fs_sqlite eric$ mono csharp.exe
val: 49
sum: 55

fs_sqlite eric$ mono fsharp_dreadful.exe 
val: 49
sum: 55

fs_sqlite eric$ mono fsharp_less_bad.exe 
val: 49
sum: 55

BTW, I noticed that compiling F# (fsharpc) is a LOT slower than compiling C# (gmcs).

Note that the command-line flag to reference (-r:) an assembly is the same for F# as it is for C#.

Note that fsharp_dreadful.exe is bigger than csharp.exe, and the "less_bad" exe is even bigger. I suspect that generalizing these observations would be extrapolating from too little data.

C# fans may notice that I [attempted to] put more effort into the F# code. This was intentional. Making the C# version beautiful was not the point of this blog entry.

So far, my favorite site for learning F# has been http://fsharpforfunandprofit.com/

 

How To Rapidly Brainstorm and Share Ideas with Method 635

So, if you have a bunch of smart people, a bunch of bright ideas, and everybody wants to talk at the same time ... what do you do?

Or, you have a bunch of smart people, but they are quiet and nobody is sharing their bright ideas, and the squeaky wheel gets the oil ... what do you do?

Whenever you get a bunch of smart people together to change the world it helps to have some proven practices for better results.

One of the techniques a colleague shared with me recently is Method 635.  It stands for six participants, three ideas, and five rounds of supplements. 

He's used Method 635 successfully to get a large room of smart people to brainstorm ideas and put their top three ideas forward.

Here's how he uses Method 635 in practice.

  1. Split the group into 6 people per table (6 people per team or table).
  2. Explain the issue or challenge to the group, so that everybody understands it. Each group of 6 writes down 3 solutions to the problem (5 minutes).
  3. Go five rounds (5 minutes per round).  During each round, each participant passes their sheet of ideas to a neighbor, who adds three additional ideas or modifies three of the existing ones.
  4. At the end of the five rounds, each team votes on their top three ideas (5 minutes.)  For example, you can use “impact” and “ability to execute” as criteria for voting (after all, who cares about good ideas that can't be done, and who cares about low-value ideas that can easily be executed.)
  5. Each team presents their top three ideas to the group.  You could then vote again, by a show of hands, on the top three ideas across the teams of six.

The outcome is that each person will see the original three solutions and contribute to the overall set of ideas.

By using this method, if each of the 5 rounds takes 5 minutes, you spend 10 minutes explaining the issue, teams get 5 minutes to write down their initial set of 3 ideas, another 5 minutes at the end to vote, and another 5 minutes to present, you’ve accomplished a lot within an hour (about 50 minutes in total).   Voices were heard.  Smart people contributed their ideas and got their fingerprints on the solutions.  And you’ve driven to consensus by first elaborating on ideas, while at the same time driving to convergence and allowing refinement along the way.

Not bad.

All in a good day’s work, and another great example of how structuring an activity, even loosely structuring an activity, can help people bring out their best.

You Might Also Like

How To Use Six Thinking Hats

Idea to Done: How to Use a Personal Kanban for Getting Results

Workshop Planning Framework

Categories: Architecture, Programming

Xebia KnowledgeCast Episode 3

Xebia Blog - Mon, 09/08/2014 - 15:41

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this third episode, we get a bit more technical, with me interviewing some of the most excellent programmers in the known universe: Age Mooy and Barend Garvelink. Then, I talk education and Smurfs with Femke Bender. Also, I toot my own horn a bit by providing you with a summary of my PMI Netherlands Summit session on Building Your Parachute On The Way Down. And of course, Serge will have Fun With Stickies!

It's been a while, and for those that are still listening to this feed: The Xebia Knowledgecast has been on hold due to personal circumstances. Last year, my wife lost her job, my father and mother in-law died, and we had to take our son out of the daycare center he was in due to the way they were treating him there. So, that had a little bit of an effect on my level of podfever. That said, I'm back! And my podfever is up!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the show notes. Better yet, use the Auphonic recording app to send in a voice message as an AIFF, WAV, or FLAC file so we can put you ON the show!

Credits

Your Worst Enemy Is Yourself

Making the Complex Simple - John Sonmez - Mon, 09/08/2014 - 15:00

Here’s the thing… You could have been exactly where you want to be right now. You could have gotten the perfect job. You could have started that business you always wanted to start. You could have gotten those 6-pack abs. You could have even met the love of your life. There has only been one […]

The post Your Worst Enemy Is Yourself appeared first on Simple Programmer.

Categories: Programming

Getting started with Salt

Xebia Blog - Mon, 09/08/2014 - 09:07

A couple of days ago I had the chance to spend a full day working with Salt(stack). On my current project we are using a different configuration management tool, and my colleagues there claimed that Salt was simpler and more productive. The challenge was easily set: they claimed that a couple of people with no Salt experience, albeit with a little configuration management knowledge, would be productive in a single day.

My preparation was easy: I had to know nothing about Salt... done!

During the day I was working side by side with another colleague who knew little to nothing about Salt. When the day started, the original plan was to do a quick one-hour introduction to Salt. But as we like to dive in head first, this intro was skipped in favor of just getting started. We used an existing Vagrant box that spun up a Salt master & minion we could work on. The target was to get Salt to provision a machine for XL Deploy, complete with the customizations we were doing at our client. Think of custom plugins, logging configuration and custom libraries.

So we got cracking, running down the steps we needed to get XL Deploy installed. The steps were relatively simple, create a user & group, get the installation files from somewhere, install the server, initialize the repository and run it as a service.

The first thing I noticed is that we could simply get started. For the tasks we needed to do (downloading, unzipping etc.) we didn't need any additional states. Actually, during the whole exercise we never downloaded any additional states. Everything we needed was provided by Salt from the get-go. Granted, we weren't doing anything special, but it's not a given that everything is available.

During the day we approached the development of our Salt state like we would a shell script. We started from the top and added the steps needed. When we ran into issues with the order of the steps we'd simply move things around to get it to work. Things like creating a user before running a service as that user were easily resolved this way.

Salt uses yaml to define a state, and that was fairly straightforward to use. Sometimes the naming used was strange. For example, salt.state.archive uses the parameter "source" for its source location but "name" for its destination. It's clearly stated in the docs what each parameter is used for, but it's a strange convention nonetheless.

We also found that the feedback provided by Salt can be scarce. On more than one occasion we'd enter a command and nothing would happen for a good while. Sometimes there would eventually be a lot of output, but sometimes there wasn't. This would be my biggest gripe with Salt: you don't always get the feedback you'd like.

Things like using templates and hierarchical data (the so-called pillars) proved easy to use. Salt uses jinja2 as its templating engine; since we only needed simple variable replacement, it's hard to comment on how useful jinja is. For our purposes it was fine. Using pillars proved equally straightforward. The only issue we encountered here was that we needed to add our pillar to our machine role in the top.sls. Once we did that we could use the pillar data where needed.

The biggest (and only real) problem we encountered was getting XL Deploy to run as a service. We tried two approaches, one using the default service mechanism on Linux and the second using upstart. Upstart made it very easy to get the service started, but it wouldn't stop properly. Using the default mechanism we couldn't get the service to start during a Salt run. When we sent it specific commands it would start (and stop) properly, but not during a run. We eventually added a post-stop script to the upstart config to make sure all the (child) processes stopped properly.

At the end of the day we had a state running that provisioned a machine with XL Deploy, including all the customizations we wanted. Salt basically did what we wanted. Apart from the service, everything went smoothly. Granted, we didn't do anything exotic and stuck to rudimentary tasks like downloading, unzipping and copying, but implementing these simple tasks remained simple and straightforward. Overall Salt did what one might expect.

From my perspective the goal of being productive in a single day was easily achieved. Because of how straightforward it was to implement I feel confident about using Salt for more complex stuff. So, all in all Salt left a positive impression and I would like to do more with it.

How to create a knowledge worker Gemba

Software Development Today - Vasco Duarte - Mon, 09/08/2014 - 04:00

I am a big fan of the work by Jim Benson and Tonianne Barry ever since I read their book: Personal Kanban.

In this article Jim describes an idea that I would like to highlight and expand. He says: we need a knowledge worker Gemba. He goes on to describe how to create that Gemba:

  • Create a workcell for knowledge work: Where you can actually observe the team work and interact
  • Make work explicit: Without being able to visualize the work in progress, you will not be able to understand the impact of certain dynamics between the team members. Also, you will miss the necessary information that will allow you to understand the obstacles to flow in the team - what prevents value from being delivered.

These are just some steps you can take right now to understand deeply how work gets done in your team, your organization, or by yourself if you are an independent knowledge worker. This understanding, in turn, will help you define concrete changes to the way work gets done, changes that can be measured and understood.

I've tried the same idea for my own work and described it here. How about you? What have you tried to implement to create visibility and understanding in your work?

Are Testers still relevant?

Xebia Blog - Sun, 09/07/2014 - 22:07

Last week I talked to one of my colleagues about a tester in his team. He told me that the tester was bored, because he had nothing to do. All the developers wrote and executed their tests themselves, which makes sense, because the tester 2.0 tries to make the team test infected.

So what happens if every developer in the team has the Testivus? Are you still relevant on the Continuous Delivery train?
Come and join the discussion at the Open Kitchen Test Automation: Are you still relevant?

Testers 1.0

Remember the days when software testers were consulted after everything was built and released for testing. Testing was a big fat QA phase, which was a project by itself. The QA department consisted of test managers analyzing the requirements first. Logical test cases were created and handed over to test executors, who created physical test cases and executed them manually. Testers discovered conflicting requirements and serious issues in technical implementations, which is good, obviously. You don't want to deliver low quality software. Right?

So product releases were being delayed and the QA department documented everything in a big fat test report. And we all knew it: The QA department had to do it all over again after the next release.

I remember being a tester during those days. I always asked myself: Why am I always the last one thinking about ways to break the system? Does the developer know how easily this functionality can be broken? Does the product manager know that this requirement does not make sense at all?
Everyone hated our QA department. We were portrayed as slow, always delivering bad news and holding back the delivery cycle. But the problem was not delivering the bad news. The timing was.

The way of working needed to be changed.

Testers 2.0

We started training testers to help Agile teams deliver high quality software during development: The birth of the Tester 2.0 - The Agile Tester.
These testers master the Agile Manifesto and the processes and methods that come with it. Collaboration about quality is the key here. Agile Testing is a mindset, and everyone is responsible for the quality of the product. Testers 2.0 helped teams get (more) test infected. They thought like researchers instead of quality gatekeepers. They became part of the software development and delivery teams and they looked into possibilities to speed up testing efforts. They practiced several exploratory testing techniques and focused on the reasonable and required tests, given the constraints of a sprint.

Looking back at several Agile teams that had a shared understanding of Agile Testing, we saw many multidisciplinary teams become two separate teams: one for developers and one for QA / testers.
I personally never felt comfortable in those teams. Putting testers with a group of developers is not Agile Testing. Developers still left testing to testers, and testers passively waited for developers to deploy something to be tested. At some point testers became a bottleneck again, so they invested in test automation. Testers became test automators and built their own test code in a separate code base from the development code. Test automators also picked tools that did not foster team responsibility. Therefore Test Automation got completely separated from product development. We found ways to help testers, test automators and developers collaborate by improving the process. But that was treating the symptom of the problem. Developers were not taking responsibility for automated tests. And testers did not help developers design testable software.

We want test automators and developers to become the same people.

Testers 3.0

If you want to accelerate your business you'll need to become iterative, delivering value to production as soon as possible by doing Continuous Delivery properly. So we need multidisciplinary teams to shorten feedback loops coming from different points of view.

Testers 3.0 are therefore required to accelerate the business by working in the following areas:

Requirement inspection: Building the right thing

The tester 3.0 tries to understand the business problem behind requirements by describing examples. It's important to reach a common understanding between the business and technical contexts, so the tester verifies requirements as soon as possible, uses them as input for Test Automation, and applies BDD as a technique that fosters a Human Readable Language. These testers work on automated acceptance tests as soon as possible.

Test Automation: Boosting the software delivery process

When common understanding is reached and the delivery team is ready to implement the requested feature, the tester needs programming skills to keep the acceptance tests in a clean code state. The tester 3.0 uses appropriate Acceptance Test Driven Development tools (like Cucumber), which the whole team supports. But the tester keeps an eye out for better, faster and easier automated testing frameworks to support the team.

At the Xebia Craftsmanship Seminar (a couple of months ago) someone asked me if testers should learn how to write code.
I told him that no one is good at everything. But the tester 3.0 has good testing skills and enough technical background to write automated acceptance tests. Continuous Delivery teams have a shared responsibility, and they automate all boring steps like manual test scripts, performance tests and security tests. It's very important to know how to automate; otherwise you'll slow down the team. You'll be treated the same as anyone else in the delivery team.
Testers 3.0 try to get developers to think about clean code and ensuring high quality code. They look into (and keep up with) popular development frameworks and address their testability. Even the test code is evaluated for quality attributes continuously. It needs the same love and care as code that goes into production.

Living documentation: Treating tests as specifications

At some point you'll end up with a huge set of automated tests telling you everything is fine. The tester 3.0 treats these tests as specifications and tries to create a living document, which is used for long-term requirements gathering. No one will complain about these tests when they are all green and passing. The problem starts when tests start failing and no one can understand why. Testers 3.0 think about their colleagues when they write a specification or test. They need to clearly specify what is being tested in a Human Readable Language.
They are used to changing requirements and specifications. And they don't make a big deal out of it. They understand that stakeholders can change their mind once a product comes alive. So the tester makes sure that important decisions made during new requirement inspections and development are stored and understood.

Relevant test results: Building quality into the process

Testers 3.0 focus on getting extremely fast feedback to determine the quality of software products every day. Every night. Every second.
Testers want to deploy new working software features into production more often. So they do whatever it takes to build a high quality pipeline that decreases the quality feedback time during development.
Everyone in your company deserves to have confidence in the software delivery pipeline at any moment. Testers 3.0 think about how they communicate these types of feedback to the business. They provide ways to automatically report these test results about quality attributes. Testers 3.0 ask the business to define quality. Knowing everything was built right, how can they measure that they've built the right thing? What do we need to measure when the product is in production?

How to stay relevant as a Tester

So what happens when all of your teammates are completely focused on high quality software using automation?

Testing does not require you to manually click, copy and paste boring scripted test steps you didn't want to do in the first place. You were hired to be skeptical about everything and to make sure that all risks are addressed. It's still important to keep being a researcher for your team and to apply your curiosity to testing accordingly.

Besides being curious, analytical and having great communication skills, you need to learn how to code. Don't work harder. Work smarter by analyzing how you can automate all the boring checks so you'll have more time discovering other things by using your curiosity.

Since testing drives software development, and should no longer be treated as a separate phase in the development process, it's important that teams put test automation at the center of all design decisions. We need Test Automation to boost the software delivery by building quality sensors into every step of the process. Every day. Every night. Every second!

Do you want to discuss this topic with other Testers 3.0?  Come and join the Open Kitchen: Test Automation and get on board the Continuous Delivery Train!

 

10 High-Value Activities in the Enterprise

I was flipping back over the past year and reflecting on the high-value activities that I’ve seen across various Enterprise customers.  I think the high-value activities tend to be some variation of the following:

  1. Customer interaction (virtual, remote, connect with experts).
  2. Product innovation and ideation.
  3. Work from anywhere on any device.
  4. Comprehensive and cross-boundary collaboration (employees, customers, and partners).
  5. Connecting with experts.
  6. Operational intelligence (predictive analytics, predictive maintenance)
  7. Cross-sell / up-sell and real-time marketing.
  8. Development and ALM in the Cloud.
  9. Protecting information and assets.
  10. Onboarding and enablement.

At first I was thinking of Porter’s Value Chain (Inbound Logistics, Operations, Outbound Logistics, Marketing & Sales, Services), which does help identify where the action is.   Next, I reviewed how, when we drive big changes with a customer, the change tends to be around the customer experience, the employee experience, or the back-office and systems experience.

You can probably recognize how the mega-trends (Cloud, Mobile, Social, and Big Data) influence the activities above, as well as some popular trends like Consumerization of IT.

High-Value Activities in the Enterprise from the Microsoft “Transforming Our Company” Memo

I also found it helpful to review the original memo from July 11, 2013 titled Transforming Our Company.  Below are some of my favorite sections from the memo:

Via Transforming Our Company:

We will engage enterprise on all sides — investing in more high-value activities for enterprise users to do their jobs; empowering people to be productive independent of their enterprise; and building new and innovative solutions for IT professionals and developers. We will also invest in ways to provide value to businesses for their interactions with their customers, building on our strong Dynamics foundation.

Specifically, we will aim to do the following:

  • Facilitate adoption of our devices and end-user services in enterprise settings. This means embracing consumerization of IT with the vigor we pursued in the initial adoption of PCs by end users and business in the ’90s. Our family of devices must allow people to be more productive, and for them to easily use our devices for work.

  • Extend our family of devices and services for enterprise high-value activities. We have unique expertise and capacity in this space.

  • Information assurance. Going forward this will be an area of critical importance to enterprises. We are their trusted partners in this space, and we must continue to innovate for them against a changing security and compliance landscape.

  • IT management. With more IT delivered as services from the cloud, the function of IT itself will be reimagined. We are best positioned to build the tools and training for that new breed of IT professional.

  • Big data insight. Businesses have new and expanded needs and opportunities to generate, store and use their own data and the data of the Web to better serve customers, make better decisions and design better products. As our customers’ online interactions with their customers accelerate, they generate massive amounts of data, with the cloud now offering the processing power to make sense of it. We are well-positioned to reimagine data platforms for the cloud, and help unlock insight from the data.

  • Customer interaction. Organizations today value most those activities that help them fully understand their customers’ needs and help them interact and communicate with them in more responsive and personalized ways. We are well-positioned to deliver services that will enable our customers to interact as never before — to help them match their prospects to the right products and services, derive the insights so they can successfully engage with them, and even help them find and create brand evangelists.

  • Software development. Finally, developers will continue to write the apps and sites that power the world, and integrate to solve individual problems and challenges. We will support them with the simplest turnkey way to build apps, sites and cloud services, easy integration with our products, and innovation for projects of every size.”

A Story of High-Value Activities in Action

If you can’t imagine what high-value activities look like, or what business transformation would look like, then have a look at this video:

Nedbank:  Video Banking with Lync

Nedbank was a brick-and-mortar bank that wanted to go digital and not just catch up to the Cloud world, but leapfrog into the future.  According to the video description, “Nedbank initiated a program called the Integrated Channel Strategy, focusing on client centered banking experiences using Microsoft Lync. The client experience is integrated and aligned across all channels and seeks to bring about efficiencies for the bank. Video banking with Microsoft Lync gives Nedbank a competitive advantage.”

The most interesting thing about the video is not just what’s possible, but that it’s real and happening.

They set a new bar for the future of digital banking.

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming