
Software Development Blogs: Programming, Software Testing, Agile Project Management


Architecture

Getting Things Right: A Look at Centralized vs Decentralized Systems Through the Eyes of Instant Replay

Three baseball umpires were sitting around a bar, talking about how they make calls on each pitch:

First umpire: Some are balls and some are strikes, and I call them as they are.

Second umpire: Some are balls and some are strikes, and I call them as I see 'em.

Third umpire: Some are balls and some are strikes, but they ain’t nothin' until I call 'em.


AT&T's Global Network Operations Center

 


MLB's Instant Replay Bunker
NHL's Situation Room

It’s fun to look at how concepts we think of as belonging primarily to the domain of computer science play out in other fields. One intriguing example is how instant replay reflects, and even helps shape, the culture of a sport depending on whether replay is implemented in a decentralized or centralized way.

Lucrative TV deals have pumped huge sums of money into professional sports. With so much money in play, sports have shifted from being pure entertainment to wanting to get things right. The price of making a bad call is just too high to let the human element decide the fate of titans.

Getting things right is also a much talked about subject in computer science. In CS the language of getting things right uses terms like transaction, rollback, quorum, optimistic replication, linearizability, synchronization, lock, eventually consistent, compensating transaction, and so on.

In sports to get things right referees use terms like flag, penalty, by rule, ruling stands, reset the clock, down and distance, line to gain, the whistle blew, ruling confirmed, and ruling overturned.

Though the vocabulary is different, the intent is much the same. Correctness.

Intent is not all tech and sports have in common. As technology evolves we are seeing sports change to take advantage of the new capabilities technology offers. And those changes should be familiar to anyone in software. Sports have gone from a completely decentralized system of officiating to where we now see the NBA, NFL, MLB, and NHL, all converging on some form of a centralized system.

The NHL was the innovator, starting its centralized instant replay system in 2011. It works something like this: officials sit in a war room in Toronto that looks a lot like every network operations center ever built. Video feeds from all games flow into the room. When there is a controversy or an obvious review-worthy play, Toronto is contacted for a quick review and judgment on the correct call. Every sport will implement its own centralized replay system in its own way, but that's the gist of it.

We’ve seen the exact same transformation as federated services like email have been replaced with centralized services like Twitter and Facebook. It turns out sports and computer science have some deeper similarities. What might those be?

Categories: Architecture

Apache Spark, ETL and Parquet


One of the projects we’re currently running in my group (Amdocs’ Technology Research) is an evaluation of the current state of different options for reporting on top of and near Hadoop (I hope I’ll be able to publish the results once we have them). Anyway, part of the preparation for the benchmark includes ingesting a lot of events (CDRs) into the system and creating different aggregations on top of them. For instance, for voice call billing events we create yearly, monthly, weekly, daily and hourly aggregations at the subscriber level, which include measures like: count of calls, average duration, sum of pricing, median balance, hourly distribution of calls, popular destinations, etc.

We are using Spark to do the ingestion, and I thought there are two interesting aspects I can share which I haven’t seen too many examples of on the internet, namely:

  • doing multiple aggregations, i.e. aggregations that go beyond the word-count level
  • writing to a Hadoop output format (Parquet in this example)

I created a minimal example, which uses a simple, synthesized input and demonstrates these two issues – you can get the complete code for it on GitHub. The rest of this post will highlight some of the points from the example.

Let’s start with the main core Spark code, which is simple enough (for those who can’t see the embedded gist, a rough sketch of it also appears after the bullet points below):

View the code on Gist.

  • line 1 reads a CSV as a text file
  • line 3 does a simple parse of each line and replaces it with a class. The parsed RDD is cached, since we iterate over it multiple times (once per aggregation)
  • lines 5 and 6 group by multiple keys. The nice way to do that is with SparkSQL, but the version I used (1.0.2) had very limited and undocumented SQL support (I had to go to the code). This should get better in future versions. If you still use “regular” Spark, I haven’t found a better way to group on multiple fields than creating the composite key by hand, which is what the getHourly and getWeekly methods do (build a string from the year, month, etc. dimensions)
  • lines 11 and 12 do the aggregations themselves (which we’ll look into next). The output is again a pair RDD. This seems pointless at first, but it turns out you need pair RDDs if you want to save to Hadoop formats (more on that later)
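For those who can’t see the embedded gist, here is a minimal sketch of what that core looks like, assuming a simplified Call record; the field layout, parse logic and week calculation are illustrative stand-ins for the real code on GitHub.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._   // pair-RDD implicits on older Spark versions

// Simplified stand-in for the real CDR record
case class Call(subscriber: String, year: Int, month: Int, day: Int, hour: Int,
                durationSec: Long, price: Double, destination: String)

object IngestionSketch {
  // Build the composite grouping keys by hand, as described above
  def getHourly(c: Call): String = Seq(c.subscriber, c.year, c.month, c.day, c.hour).mkString("|")
  def getWeekly(c: Call): String = {
    val week = ((c.month - 1) * 31 + c.day) / 7   // crude placeholder for a real week-of-year calculation
    Seq(c.subscriber, c.year, week).mkString("|")
  }

  def parse(line: String): Call = {
    val f = line.split(",")
    Call(f(0), f(1).toInt, f(2).toInt, f(3).toInt, f(4).toInt, f(5).toLong, f(6).toDouble, f(7))
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("cdr-aggregations"))

    // line 1 - read the CSV as a plain text file
    val lines = sc.textFile("hdfs:///data/cdrs/*.csv")

    // line 3 - parse each line into a Call and cache it, since every aggregation iterates it again
    val calls = lines.map(parse).cache()

    // lines 5, 6 - group by the hand-built composite keys
    val hourlyGroups = calls.groupBy(getHourly _)
    val weeklyGroups = calls.groupBy(getWeekly _)

    // lines 11, 12 - compute the aggregations per key; the result stays a pair RDD,
    // which the Hadoop save functions shown later require
    // val hourlyAggregated = hourlyGroups.mapValues(aggregate)
  }
}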

Aggregations

Once we do the group by key (lines 5 and 6 above) we have a collection of Call records (Iterable[Call]). Now, Scala has a lot of nifty functions for transforming Iterables (map, flatMap, foldLeft, etc.), but we have a long list of aggregations to compute, the Calls collection can get rather big (e.g. for yearly aggregates), and we have lots and lots of collections to iterate (terabytes and terabytes). So instead of using all these features I opted for a more traditional, Java-like aggregation, which, while pretty ugly, minimizes the number of passes I make over the data.

You can see the result below (if you have a nicer-looking yet still efficient idea, I’d be happy to hear about it).

View the code on Gist.
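The gist is not reproduced here, but its shape is a single manual pass over the grouped Iterable[Call] (using the Call class from the sketch above), updating every measure at once. The Aggregate fields below are illustrative; the real code tracks many more measures such as pricing sums, hourly distributions and popular destinations.

// Illustrative aggregate record; the real one carries many more measures
case class Aggregate(count: Long, totalDurationSec: Long, totalPrice: Double) {
  def avgDurationSec: Double = if (count == 0) 0.0 else totalDurationSec.toDouble / count
}

// One pass over the grouped calls, updating all measures together,
// instead of a separate map/fold traversal per measure
def aggregate(calls: Iterable[Call]): Aggregate = {
  var count = 0L
  var totalDuration = 0L
  var totalPrice = 0.0
  val it = calls.iterator
  while (it.hasNext) {
    val c = it.next()
    count += 1
    totalDuration += c.durationSec
    totalPrice += c.price
  }
  Aggregate(count, totalDuration, totalPrice)
}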

Save to Parquet

The end result of the aggregations is a hierarchical structure – a list of simple measures (averages, sums, counts, etc.), but also a phone book, which in turn has an array of pricings, and an hourly breakdown which is also an array. I decided to store that in Parquet/ORC formats, which are efficient for queries in Hadoop (by Hive/Impala, depending on the Hadoop distribution you are using). For the purpose of the example I included the code to persist to Parquet.

If you can use SparkSQL, then support for Parquet is built in and you can do something as simple as:

View the code on Gist.
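The gist itself is not shown here; with SparkSQL (the 1.0.x-era API) this boils down to converting the RDD of case-class records into a SchemaRDD and calling saveAsParquetFile, roughly as follows (sc and weeklyAggregated are assumed from the earlier sketches, and the output path is made up):

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
// implicit conversion from an RDD of case classes to a SchemaRDD
import sqlContext.createSchemaRDD

// weeklyAggregated is a plain RDD of Aggregate records - no pair RDD needed on this path
weeklyAggregated.saveAsParquetFile("hdfs:///output/weekly-aggregates.parquet")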

You might have noticed that weeklyAggregated is a simple RDD and not a pair RDD, since a pair RDD isn’t needed here. Unfortunately for me, I had to use Spark 0.9 (our lab is currently on Cloudera 5.0.1) – and in the more general case of writing to other Hadoop file formats you can’t use this trick anyway.

One point specific to Parquet is that you can’t write to it directly – you have to go through a “writer” class, and Parquet has Avro, Thrift and ProtoBuf writers available. I thought that was going to be nice and easy, so I looked for a Scala library that serializes to one of these formats and chose Scalavro (which I had used in the past). It turns out that, while its serialization is Avro compatible, it doesn’t produce the standard Avro classes the Avro writer for Parquet expects. No problem, I thought, I’ll just take another library that works with Thrift – Twitter has one called Scrooge, which works nicely and all – but again it isn’t completely standard (it doesn’t inherit from TBase). Oh well, maybe protobuf would do the job, so I tried ScalaBuff; alas, that didn’t work well either (it inherits from LightMessage and not Message, as expected by the writer). I ended up using the plain Java generator for protobuf. This further uglified the aggregation logic, but at least it got the job done. The output from the aggregation is a pair RDD of protobuf-serializable aggregations.

So, why the pair RDD? It turns out all the interesting save functions that can use a Hadoop file format only exist on pair RDDs and not on regular ones. If you know this little trick, the code is not very complicated:

View the code on Gist.

The first two lines in the snippet above configure the writer and are specific to Parquet. The last line does the actual save to file – it specifies the output directory, the key class (Void, since we don’t need a key with the Parquet format), the class for the records, the Hadoop output format class (Parquet in our case) and, lastly, a job configuration.
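For reference, a sketch of what such a save can look like with the new Hadoop API and the protobuf writer. CallAggregate is a hypothetical stand-in for the protobuf-generated class, hourlyAggregated stands in for the pair RDD of (Void, CallAggregate) produced by the aggregation step, and the parquet package names may differ between parquet-mr versions:

import org.apache.hadoop.mapreduce.Job
import org.apache.spark.SparkContext._           // pair-RDD save functions on older Spark versions
import parquet.hadoop.ParquetOutputFormat
import parquet.proto.{ProtoParquetOutputFormat, ProtoWriteSupport}

val job = Job.getInstance()

// the two writer-configuration lines: use the protobuf write support and tell it
// which generated class to serialize (CallAggregate is a hypothetical stand-in)
ParquetOutputFormat.setWriteSupportClass(job, classOf[ProtoWriteSupport[CallAggregate]])
ProtoParquetOutputFormat.setProtobufClass(job, classOf[CallAggregate])

// the actual save: output directory, key class (Void), record class,
// the Parquet output format class and the job configuration
hourlyAggregated.saveAsNewAPIHadoopFile(
  "hdfs:///output/hourly-aggregates",
  classOf[Void],
  classOf[CallAggregate],
  classOf[ParquetOutputFormat[CallAggregate]],
  job.getConfiguration)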

To summarize: when you want to do more than a simple toy example, the code may end up more complicated than the concise examples you see online. When working with lots and lots of data, the number of passes you make over the data also begins to matter, and you need to pay attention to that.

Spark is a very powerful library for working on big data, with a lot of components and capabilities. However, its biggest weakness (in my opinion, anyway) is its documentation. It makes it easy to start working with the platform, but when you want to do something a little more interesting you are left to dig around without proper directions. I hope this post will help some others when they do their own digging around :)

Lastly, again, a reminder that you can get the complete code for this on GitHub.

Categories: Architecture

iOS Today Widget in Swift - Tutorial

Xebia Blog - Sat, 09/13/2014 - 18:51

Since both the new app extensions of iOS 8 and Swift itself are fairly new, I created a sample app that demonstrates how a simple Today extension can be made for iOS 8 in Swift. The widget shows the latest blog posts of the Xebia Blog within the Today view of the notification center.

The source code of the app is available on GitHub. The app is not available on the App Store because it's an example app, though it might be quite useful for anyone following this Blog.

In this tutorial, I'll go through each of the steps below.

New Xcode project

Even though an extension for iOS 8 is a separate binary from an app, it's not possible to create an extension without an app. That unfortunately makes it impossible to create standalone widgets, which this sample would otherwise be, since its only purpose is to show the latest posts in the Today view. So we create a new project in Xcode and implement a very simple view. The only thing the app will do for now is tell the user to swipe down from the top of the screen to add the widget.


Time to add our extension target. From the File menu we choose New > Target... and from the Application Extension section we choose Today Extension.


We'll name our target XebiaBlogRSSWidget and of course use Swift as Language.


The created target will have the following files:

      • TodayViewController.swift
      • MainInterface.storyboard
      • Info.plist

Since we'll be using a storyboard approach, we're fine with this setup. If however we wanted to create the view of the widget programmatically, we would delete the storyboard and replace the NSExtensionMainStoryboard key in the Info.plist with NSExtensionPrincipalClass, with TodayViewController as its value. Since (at the time of this writing) Xcode cannot find Swift classes as extension principal classes, we would also have to add the following line to our TodayViewController:

@objc (TodayViewController)

Update: Make sure to set the "Embedded Content Contains Swift Code" build setting of the main app target to YES. Otherwise your widget written in Swift will crash.

Add dependencies with cocoapods

The widget will get the latest blog posts from the RSS feed of the blog: http://blog.xebia.com/feed/. That means we need something that can read and parse this feed. A search for RSS on CocoaPods gives us BlockRSSParser as the first result. It seems to do exactly what we want, so we don't look any further and create our Podfile with the following contents:

platform :ios, "8.0"

target "XebiaBlog" do

end

target "XebiaBlogRSSWidget" do
pod 'BlockRSSParser', '~> 2.1'
end

It's important to add the dependency only to the XebiaBlogRSSWidget target, since Xcode builds two binaries: one for the app itself and a separate one for the widget. If we added the dependency to all targets, it would be included in both binaries, increasing the total download size of our app. Always add only the dependencies that each app target and widget target actually needs.

Note: Cocoapods or Xcode might give you problems when you have a target without any Pod dependencies. In that case you may add a dependency to your main target and run pod install, after which you might be able to delete it again.

BlockRSSParser is written in Objective-C, which means we need to add an Objective-C bridging header in order to use it from Swift. We add the file XebiaBlogRSSWidget-Bridging-Header.h to our target and add the import:

#import "RSSParser.h"

We also have to tell the Swift compiler about it in our build settings:


Load RSS feed

Finally, time to do some coding. The generated TodayViewController has a function called widgetPerformUpdateWithCompletionHandler. This function gets called every once in a while to ask for new data. It also gets called right after viewDidLoad, when the widget is displayed. The function takes a completion handler as a parameter, which we need to call when we're done loading data. A completion handler is used instead of a return value so we can load our feed asynchronously.

In objective-c we would write the following code to load our feed:

[RSSParser parseRSSFeedForRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://blog.xebia.com/feed/"]] success:^(NSArray *feedItems) {
  // success
} failure:^(NSError *error) {
  // failure
}];

In Swift this looks slightly different. Here is the complete implementation of widgetPerformUpdateWithCompletionHandler:

func widgetPerformUpdateWithCompletionHandler(completionHandler: ((NCUpdateResult) -> Void)!) {
  let url = NSURL(string: "http://blog.xebia.com/feed/")
  let req = NSURLRequest(URL: url)

  RSSParser.parseRSSFeedForRequest(req,
    success: { feedItems in
      self.items = feedItems as? [RSSItem]
      completionHandler(.NewData)
    },
    failure: { error in
      println(error)
      completionHandler(.Failed)
  })
}

We assign the result to a new optional variable of type RSSItem array:

var items : [RSSItem]?

The completion handler gets called with either NCUpdateResult.NewData if the call was successful or NCUpdateResult.Failed when the call failed. A third option is NCUpdateResult.NoData which is used to indicate that there is no new data. We'll get to that later in this post when we cache our data.

Show items in a table view

Now that we have fetched our items from the RSS feed, we can display them in a table view. We replace the normal View Controller with a Table View Controller in our storyboard, change the superclass of TodayViewController accordingly, and add three labels to the prototype cell. This is no different than in iOS 7, so I won't go into too much detail here (the complete project is on GitHub).

We also create a new Swift class for our custom Table View Cell subclass and create outlets for our 3 labels.

import UIKit

class RSSItemTableViewCell: UITableViewCell {

    @IBOutlet weak var titleLabel: UILabel!
    @IBOutlet weak var authorLabel: UILabel!
    @IBOutlet weak var dateLabel: UILabel!

}

Now we can implement our Table View Data Source functions.

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return items.count
  }
  return 0
}

Since items is an optional, we use Optional Binding to check that it's not nil and then assign it to a temporary non optional variable: let items. It's fine to give the temporary variable the same name as the class variable.

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  let cell = tableView.dequeueReusableCellWithIdentifier("RSSItem", forIndexPath: indexPath) as RSSItemTableViewCell

  if let item = items?[indexPath.row] {
    cell.titleLabel.text = item.title
    cell.authorLabel.text = item.author
    cell.dateLabel.text = dateFormatter.stringFromDate(item.pubDate)
  }

  return cell
}

In our storyboard we've set the type of the prototype cell to our custom class RSSItemTableViewCell and used RSSItem as the identifier, so here we can dequeue a cell as an RSSItemTableViewCell without being afraid it will be nil. We then use Optional Binding to get the item at our row index. We could also use forced unwrapping, since we know for sure that items is not nil here:

let item = items![indexPath.row]

But the optional binding makes our code safer and prevents future crashes in case our code changes.

We also need to create the date formatter that we use above to format the publication dates in the cells:

let dateFormatter : NSDateFormatter = {
        let formatter = NSDateFormatter()
        formatter.dateStyle = .ShortStyle
        return formatter
    }()

Here we use a closure to create the date formatter and to initialise it with our preferred date style. The return value of the closure will then be assigned to the property.

Preferred content size

To make sure that we can actually see the table view, we need to set the preferred content size of the widget. We'll add a new function to our class that does this.

func updatePreferredContentSize() {
  preferredContentSize = CGSizeMake(CGFloat(0), CGFloat(tableView(tableView, numberOfRowsInSection: 0)) * CGFloat(tableView.rowHeight) + tableView.sectionFooterHeight)
}

Since widgets all have a fixed width, we can simply specify 0 for the width. The height is calculated by multiplying the number of rows by the row height. Since this sets the preferred height greater than the maximum allowed height of a Today widget, it will automatically shrink. We also add the sectionFooterHeight to our calculation, which is 0 for now, but we'll add a footer later on.

When the preferred content size of a widget changes it will animate the resizing of the widget. To have the table view nicely animating along this transition, we add the following function to our class which gets called automatically:

override func viewWillTransitionToSize(size: CGSize, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
  coordinator.animateAlongsideTransition({ context in
    self.tableView.frame = CGRectMake(0, 0, size.width, size.height)
  }, completion: nil)
}

Here we simply set the size of the table view to the size of the widget, which is the first parameter.

Of course we still need to call our update method as well as reloadData on our tableView. So we add these two calls to our success closure when we load the items from the feed:

success: { feedItems in
  self.items = feedItems as? [RSSItem]
  
  self.tableView.reloadData()
  self.updatePreferredContentSize()

  completionHandler(.NewData)
},

Let's run our widget:


It works, but we can make it look better. Table views by default have a white background and black text, and that's no different within a Today widget. We'd like to match the style of the standard iOS Today widgets, so we give the table view a clear background and make the text of the labels white. Unfortunately, that makes our labels practically invisible in the storyboard editor, since Xcode still shows a white background for views that have a clear background color.

If we run again, we get a much better looking result:


Open post in Safari

To open a Blog post in Safari when tapping on an item we need to implement the tableView:didSelectRowAtIndexPath: function. In a normal app we would then use the openURL: method of UIApplication. But that's not available within a Today extension. Instead we need to use the openURL:completionHandler: method of NSExtensionContext. We can retrieve this context through the extensionContext property of our View Controller.

override func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
  if let item = items?[indexPath.row] {
    if let context = extensionContext {
      context.openURL(item.link, completionHandler: nil)
    }
  }
}

More and Less buttons

Right now our widget takes up a bit too much space in the notification center. So let's change this by showing only 3 items by default and 6 items at most. Toggling between the default and expanded state is done with a button that we'll add to the footer of the table view. When the user closes and opens the notification center, we want to show it in the same state it was in before, so we need to remember the expand state. We can use NSUserDefaults for this. Using a computed property that reads and writes the user defaults is a nice way to express this:

let userDefaults = NSUserDefaults.standardUserDefaults()

var expanded : Bool {
  get {
    return userDefaults.boolForKey("expanded")
  }
  set (newExpanded) {
    userDefaults.setBool(newExpanded, forKey: "expanded")
    userDefaults.synchronize()
  }
}

This allows us to use it just like any other property without noticing it gets stored in the user defaults. We'll also add variables for our button and number of default and maximum rows:

let expandButton = UIButton()

let defaultNumRows = 3
let maxNumberOfRows = 6

Based on the current value of the expanded property, we'll determine the number of rows that our table view should have. Of course it should never display more than the actual number of items we have, so we also take that into account and change our function into the following:

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return min(items.count, expanded ? maxNumberOfRows : defaultNumRows)
  }
  return 0
}

Then the code to make our button work:

override func viewDidLoad() {
  super.viewDidLoad()

  updateExpandButtonTitle()
  expandButton.addTarget(self, action: "toggleExpand", forControlEvents: .TouchUpInside)
  tableView.sectionFooterHeight = 44
}

override func tableView(tableView: UITableView, viewForFooterInSection section: Int) -> UIView? {
  return expandButton
}

Depending on the current value of our expanded property, we will show either "Show less" or "Show more" as the button title.

func updateExpandButtonTitle() {
  expandButton.setTitle(expanded ? "Show less" : "Show more", forState: .Normal)
}

When we tap the button, we'll invert the expanded property, update the button title and preferred content size and reload the table view data.

func toggleExpand() {
  expanded = !expanded
  updateExpandButtonTitle()
  updatePreferredContentSize()
  tableView.reloadData()
}

And as a result we can now toggle the number of rows we want to see.


Caching

At this moment, each time we open the widget, we first get an empty list, and then once the feed is loaded, the items are displayed. To improve this, we can cache the retrieved items and display them as soon as the widget is opened, before we load the items from the feed. The TMCache library makes this possible with little effort. We can add it to our Podfile and bridging header the same way we did for the BlockRSSParser library.

Here too, a computed property works nicely for caching the items and hiding the actual implementation:

var cachedItems : [RSSItem]? {
  get {
    return TMCache.sharedCache().objectForKey("feed") as? [RSSItem]
  }
  set (newItems) {
    TMCache.sharedCache().setObject(newItems, forKey: "feed")
  }
}

Since the RSSItem class of the BlockRSSParser library conforms to the NSCoding protocol, we can use it directly with TMCache. When we retrieve the items from the cache for the first time, we'll get nil since the cache is empty. Therefore cachedItems needs to be an optional, as does the downcast, which is why we use the as? operator.

We can now update the cache once the items are loaded simply by assigning a value to the property. So in our success closure we add the following:

self.cachedItems = self.items

And then to load the cached items, we add two more lines to the end of viewDidLoad:

items = cachedItems
updatePreferredContentSize()

And we're done. Now each time the widget is opened it will first display the cached items.

There is one last thing we can do to improve our widget. As mentioned earlier, the completionHandler of widgetPerformUpdateWithCompletionHandler can also be called with NCUpdateResult.NoData. Now that we keep the items we loaded previously, we can compare the newly loaded items with the old ones and use NoData in case they haven't changed. Here is our final implementation of the success closure:

success: { feedItems in
  if self.items == nil || self.items! != feedItems {
    self.items = feedItems as? [RSSItem]
    self.tableView.reloadData()
    self.updatePreferredContentSize()
    self.cachedItems = self.items
    completionHandler(.NewData)
  } else {
    completionHandler(.NoData)
  }
},

And since it's Swift, we can simply use the != operator to see if the arrays have unequal content.

Source code on GitHub

As mentioned in the beginning of this post, the source code of the project is available on GitHub with some minor changes that are not essential to this blog post. Of course pull requests are always welcome. Also let me know in the comments below if you'd wish to see this widget released on the App Store.

React In Modern Web Applications: Part 2

Xebia Blog - Fri, 09/12/2014 - 22:32

As mentioned in part 1 of this series, React is an excellent Javascript library for writing highly dynamic client-side components. However, React is not meant to be a complete MVC framework. Its strength is the View part of the Model-View-Controller pattern. AngularJS is one of the best known and most powerful Javascript MVC frameworks. One of the goals of the course was to show how to integrate React with AngularJS.

Integrating React with AngularJS

Developers can create reusable components in AngularJS by using directives. Directives are very powerful but complex, and the code quickly tends to become messy and hard to maintain. Therefore, replacing the directive content in your AngularJS application with React components is an excellent use case for React. In the course, we created a Timesheet component in React. To use it with AngularJS, create a directive that loads the WeekEntryComponent:

angular.module('timesheet')
    .directive('weekEntry', function () {
        return {
            restrict: 'A',
            link: function (scope, element) {
                React.renderComponent(new WeekEntryComponent({
                                             companies: scope.companies,
                                             model: scope.model}), element.find('#weekEntry')[0]);
            }
        };
    });

As you can see, it's a simple bit of code. Since this is AngularJS code, I prefer not to use JSX and write the React-specific code in plain Javascript. However, if you prefer, you could use the JSX syntax (do not forget to add the JSX directive at the top of the file!). We are basically creating a component and rendering it on the element with id weekEntry. We pass two values from the directive scope to the WeekEntryComponent as properties. The component will render based on the values passed in.

Reacting to changes in AngularJS

However, the code as shown above has one fatal flaw: the component will not re-render when the companies or model values change in the scope of the directive. The reason is simple: React does not know these values are dynamic, and the component gets rendered once when the directive is initialised. React re-renders a component in only two situations:

  • If the value of a property changes: this allows parent components to force re-rendering of child components
  • If the state object of a component changes: a component can create a state object and modify it

State should be kept to a minimum, and preferably be located in only one component. Most components should be stateless. This makes for simpler, easier to maintain code.

To make the interaction between AngularJS and React dynamic, the WeekEntryComponent must be re-rendered explicitly by AngularJS whenever a value changes in the scope. AngularJS provides the watch function for this:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(new WeekEntryComponent({
                rows: newData[1].companies,
                clients: newData[0]
              }
            ), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

In this code, AngularJS watches two values in the scope, clients and model, and when one of the values is changed by a user action the function gets called, re-rendering the React component only when both values are defined. If most of the scope is needed from AngularJS, it is better to pass in the complete scope as a property and put a watch on the scope itself. Remember, React is very smart about re-rendering to the browser: only if a change in the scope properties leads to a real change in the DOM will the browser be updated!

Accessing AngularJS code from React

When your component needs to use functionality defined in AngularJS, you need to use a function callback. For example, in our code, saving the timesheet is handled by AngularJS, while the save button is managed by React. We solved this by creating a saveTimesheet function and adding it to the scope of the directive. In the WeekEntryComponent we added a new property: saveMethod.

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(WeekEntryComponent({
              rows: newData[1].companies,
              clients: newData[0],
              saveMethod: scope.saveTimesheet
            }), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

Since this function is not going to change, it does not need to be watched. Whenever the save button in the TimeSheetComponent is clicked, the saveTimesheet function is called and the new state of the timesheet is saved.

What's next

In the next part we will look at how to deal with state in your components and how to avoid passing callbacks through your entire component hierarchy when changing state.

Stuff The Internet Says On Scalability For September 12th, 2014

Hey, it's HighScalability time:


Each dot in this image is an entire galaxy containing billions of stars. What's in there?
  • Quotable Quotes:
    • mseepgood: Or "another language that's becoming popular, Node.js"
    • Joe Moreno: What good are billions of cycles of CPU power that make me wait. I shouldn't have to wait longer and longer due to launching, buffering, syncing, I/O and latency.
    • @stevecheney: Apple Pay is the magic that integrated hardware / software produces. No one else in the world can do this.
    • @etherealmind: Next gen Intel Xeon E5 V3 CPU includes packet processor for 40GBE, 30x increase in OpenSSL crypto, 25% increase in DPDK perf. #IDF14
    • @pbailis: There's actually an interesting question in understanding when to break "sharing" -- at core, NUMA domain, server, or cluster level?
    • @fmueller_bln: Just wait some minutes for vagrant to provision a vm with puppet and you’ll know why docker may be better option for dev machines...

  • Encryption will make fighting the spam war much costlier, reveals Mike Hearn in an awesome post: A brief history of the spam war, where he gives insightful color commentary on the punch-counterpunch between World Heavyweight Champion Google and the challenger, Clever Spammer. Mike worked in the Gmail trenches for over four years and recommends: make sending email cost money; use bitcoin to create deposits.

  • jeswin: No other browser can practically implement or support Dart. If they do their implementation will be slower than Google's, and will get classified as inferior. < Ignoring the merits of Dart, this is an interesting ecosystem effect. By rating sites for reasons other than content quality, Google can in effect select for characteristics over which it has a comparative advantage. It's not an arms-length transaction.

  • Dateline Seattle. Social media users execute a coordinated denial of service attack on cell networks, preventing those in need from accessing emergency services. Who are these terrorists? Football fans. City of Seattle asks people to stop streaming videos, posting photos because of football. Tweets, Instagram, YouTube, and Snapchat are overloading the cell networks so calls can't get through. Should the cell network expand capacity? Should there be an app tax to constrain demand? Should users pay per packet? As a 49ers fan I have another suggestion...move games to a different venue, perhaps the moon. That will help.

  • Are you a militant cable cutter who thinks the future of  TV is the Internet? Not so fast says Dan Rayburn in Internet Traffic Records Could Be Broken This Week Thanks To Apple, NFL, Sony, Xbox, EA and Others: Delivering video over the Internet at the same scale and quality that you do over a cable network isn’t possible. The Internet is not a cable network and if you think otherwise, you will be proven wrong this week. We’re going to see long download times, more buffering of streams, more QoS issues and ISPs that will take steps to deal with the traffic. 

  • Ted Nelson takes on the impossible in on How Bitcoin Actually Works (Computers for Cynics #7). And he does an excellent job, sharing his usual insight with a twist. The title is misleading however. There's hardly any cynicism. How disappointing! Ted is clearly impressed with the design and implementation of bitcoin. For good reason. No matter what you think of bitcoin and its potential role in society, it is a very well thought out and impressive piece of technology. On par with Newton, Mr. Nelson suggests. If you watch this you'll probably realize that you don't actually understand bitcoin, even if you think you do, and that's a good thing.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Azure: SQL Databases, API Management, Media Services, Websites, Role Based Access Control and More

ScottGu's Blog - Scott Guthrie - Fri, 09/12/2014 - 07:14

This week we released a major set of updates to Microsoft Azure. This week’s updates include:

  • SQL Databases: General Availability of Azure SQL Database Service Tiers
  • API Management: General Availability of our API Management Service
  • Media Services: Live Streaming, Content Protection, Faster and Cost Effective Encoding, and Media Indexer
  • Web Sites: Virtual Network integration, new scalable CMS with WordPress and updates to Web Site Backup in the Preview Portal
  • Role-based Access Control: Preview release of role-based access control for Azure Management operations
  • Alerting: General Availability of Azure Alerting and new alerts on events

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them:

SQL Databases: General Availability of Azure SQL Database Service Tiers

I’m happy to announce the General Availability of our new Azure SQL Database service tiers - Basic, Standard, and Premium.  The SQL Database service within Azure provides a compelling database-as-a-service offering that enables you to quickly innovate & stand up and run SQL databases without having to manage or operate VMs or infrastructure.

Today’s SQL Database Service Tiers all come with a 99.99% SLA, and databases can now grow up to 500GB in size.

Each SQL Database tier now guarantees a consistent performance level that you can depend on within your applications – avoiding the need to worry about “noisy neighbors” who might impact your performance from time to time.

Built-in point-in-time restore support now provides you with the ability to automatically re-create databases at a certain point of time (giving you much more backup flexibility and allowing you to restore to exactly the point before you accidentally did something bad to your data).

Built-in auditing support enables you to gain insight into events and changes that occur with the databases you host.

Built-in active geo-replication support, available with the premium tier, enables you to create up to 4 readable, secondary, databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well:

image

One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster, etc).  The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions:

image

This provides a robust business continuity offering, and enables you to run mission critical solutions in the cloud with confidence.

More Flexible Pricing

SQL Databases are now billed on a per-hour basis – allowing you to quickly create and tear down databases, and dynamically scale up or down databases even more cost effectively.

Basic Tier databases support databases up to 2GB in size and cost $4.99 for a full month of use.  Standard Tier databases support 250GB databases and now start at $15/month (there are also higher performance standard tiers at $30/month and $75/month). Premium Tier databases support 500GB databases as well as the active geo-replication feature and now start at $465/month.

The below table provides a quick look at the different tiers and functionality:

image

This page provides more details on how to think about DTU performance with each of the above tiers, and provides benchmark details on the number of transactions supported by each of the above service tiers and performance levels.

During the preview, we’ve heard from some ISVs, who have a large number of databases with variable performance demands, that they need the flexibility to share DTU performance resources across multiple databases as opposed to managing tiers for databases individually.  For example, some SaaS ISVs may have a separate SQL database for each customer, and as the activity of each database varies, they want to manage a pool of resources with a defined budget across these customer databases.  We are working to enable this scenario within the new service tiers in a future service update. If you are an ISV with a similar scenario, please click here to sign up to learn more.

Learn more about SQL Databases on Azure here.

API Management Service: General Availability Release

I’m excited to announce the General Availability of the Azure API Management Service.

In my last post I discussed how API Management enables customers to securely publish APIs to developers and accelerate partner adoption.  These APIs can be used from mobile and client applications (on any device) as well as other cloud and service based applications.

The API management service supports the ability to take any APIs you already have (either in the cloud or on-premises) and publish them for others to use.  The API Management service enables you to:

  • Throttle, rate limit and quota your APIs
  • Gain analytic insights on how your APIs are being used and by whom
  • Secure your APIs using OAuth or key-based access
  • Track the health of your APIs and quickly identify errors
  • Easily expose a developer portal for your APIs that provides documentation and test experiences to developers who want to use your APIs

Today’s General Availability provides a formal SLA for Standard tier services.  We also have a developer tier of the service that you can use, starting at just $49 per month.

OAuth support in the Developer Portal

The API Management service provides a developer console that enables a great on-boarding and interactive learning experience for developers who want to use your APIs.  The developer console enables you to easily expose documentation as well enable developers to try/test your APIs.

With this week’s GA release we are also adding support that enables API publishers to register their OAuth Authorization Servers for use in the console, which in turn allows developers to sign in with their own login credentials when interacting with your API - a critical feature for any API that supports OAuth. All normative authorization grant types are supported plus scopes and default scopes.

image

For more details on how to enable OAuth 2 support with API Management and integration in the new developer portal, check out this tutorial.

Click here to learn more about the API Management service and try it out for free.

Media Services: Live Streaming, DRM, Faster Cost Effective Encoding, and Media Indexer

This week we are excited to announce the public preview of Live Streaming and Content Protection support with Azure Media Services.

The same Internet scale streaming solution that leading international broadcasters used to live stream the 2014 Winter Olympic Games and 2014 FIFA World Cup to tens of millions of customers globally is now available in public preview to all Azure customers. This means you can now stream live events of any size with the same level of scalability, uptime, and reliability that was available to the Olympics and World Cup.

DRM Content Protection

This week Azure Media Services is also introducing a new Content Protection offering which features both static and dynamic encryption with first party PlayReady license delivery and an AES 128-bit key delivery service.  This makes it easy to DRM protect both your live and pre-recorded video assets – and have them be available for users to easily watch on any device or platform (Windows, Mac, iOS, Android and more).

Faster and More Cost Effective Media Encoding

This week, we are also introducing faster media encoding speeds and more cost-effective billing. Our enhanced Azure Media Encoder is designed for premium media encoding and is billed based on output GBs. Our previous encoder was billed on both input + output GBs, so the shift to output only billing will result in a substantial price reduction for all of our customers.

To help you further optimize your encoding workflows, we’re introducing Basic, Standard, and Premium Encoding Reserved units, which give you more flexibility and allow you to tailor the encoding capability you pay for to the needs of your specific workflows.

Media Indexer

Additionally, I’m happy to announce the General Availability of Azure Media Indexer, a powerful, market differentiated content extraction service which can be used to enhance the searchability of audio and video files.  With Media Indexer you can automatically analyze your media files and index the audio and video content in them. You can learn more about it here.

More Media Partners

I’m also pleased to announce the addition this week of several media workflow partners and client players to our existing large set of media partners:

  • Azure Media Services and Telestream’s Wirecast are now fully integrated, including a built-in destination that makes its quick and easy to send content from Wirecast’s live streaming production software to Azure.
  • Similarly, Newtek’s Tricaster has also been integrated into the Azure platform, enabling customers to combine the high production value of Tricaster with the scalability and reliability of Azure Media Services.
  • Cires21 and Azure Media have paired up to help make monitoring the health of your live channels simple and easy, and the widely-used JW player is now fully integrated with Azure to enable you to quickly build video playback experiences across virtually all platforms.
Learn More

Visit the Azure Media Services site for more information and to get started for free.

Websites: Virtual Network Integration, new Scalable CMS with WordPress

This week we’ve also released a number of great updates to our Azure Websites service.

Virtual Network Integration

Starting this week you can now integrate your Azure Websites with Azure Virtual Networks. This support enables your Websites to access resources attached to your virtual networks.  For example: this means you can now have a Website directly connect to a database hosted in a non-public VM on a virtual network.  If your Virtual Network is connected to your on-premises network (using a Site-to-Site software VPN or ExpressRoute dedicated fiber VPN) you can also now have your Website connect to resources in your on-premises network as well.

The new Virtual Network support enables both TCP and UDP protocols and will work with your VNET DNS. Hybrid Connections and Virtual Network are compatible such that you can also mix both in the same Website.  The new virtual network support for Web Sites is being released this week in preview.  Standard web hosting plans can have up to 5 virtual networks enabled. A website can only be connected to one virtual network at a time but there is no restriction on the number of websites that can be connected to a virtual network.

You can configure a Website to use a Virtual Network using the new Preview Azure Portal (http://portal.azure.com).  Click the “Virtual Network” tile in your web-site to bring up a virtual network blade that you can use to either create a new virtual network or attach to an existing one you already have:

image

Note that an Azure Website requires that your Virtual Network has a configured gateway and Point-to-Site enabled. The option will remain grayed out in the UI above until you have enabled this.

Scalable CMS with WordPress

This week we also released support for a Scalable CMS solution with WordPress running on Azure Websites.  Scalable CMS with WordPress provides the fastest way to build an optimized and hassle free WordPress Website. It is architected so that your WordPress site loads fast and can support millions of page views a month, and you can easily scale up or scale out as your traffic increases.

It is pre-configured to use Azure Storage, which can be used to store your site’s media library content, and can be easily configured to use the Azure CDN.  Every Scalable CMS site comes with auto-scale, staged publishing, SSL, custom domains, Webjobs, and backup and restore features of Azure Websites enabled. Scalable WordPress also allows you to use Jetpack to supercharge your WordPress site with powerful features available to WordPress.com users.

You can now easily deploy Scalable CMS with WordPress solutions on Azure via the Azure Gallery integrated within the new Azure Preview Portal (http://portal.azure.com).  When you select it within the portal it will walk you through automatically setting up and deploying a complete solution on Azure:

image

Scalable WordPress is ideal for Web developers, creative agencies, businesses and enterprises wanting a turn-key solution that maximizes performance of running WordPress on Azure Websites.  It’s fast, simple and secure WordPress hosting on Azure Websites.

Updates to Website Backup

This week we also updated our built-in Backup feature within Azure Websites with a number of nice enhancements.  Starting today, you can now:

  • Choose the exact destination of your backups, including the specific Storage account and blob container you wish to store your backups within.
  • Choose to backup SQL databases or MySQL databases that are declared in the connection strings of the website.
  • On the restore side, you can now restore to both a new site, and to a deployment slot on a site. This makes it possible to verify your backup before you make it live.

These new capabilities make it easier than ever to have a full history of your website and its associated data.

Security: Role Based Access Control for Management of Azure

As organizations move more and more of their workloads to Azure, one of the most requested features has been the ability to control which cloud resources different employees can access and what actions they can perform on those resources.

Today, I’m excited to announce the preview release of Role Based Access Control (RBAC) support in the Azure platform. RBAC is now available in the Azure preview portal and can be used to control access in the portal or access to the Azure Resource Manager APIs. You can use this support to limit the access of users and groups by assigning them roles on Azure resources. Highlights include:

  • A subscription is no longer the access management boundary in Azure. In April, we introduced Resource Groups, a container to group resources that share lifecycle. Now, you can grant users access on a resource group as well as on individual resources like specific Websites or VMs.
  • You can now grant access to both users and groups. RBAC is based on Azure Active Directory, so if your organization already uses groups in Azure Active Directory or Windows Server Active Directory for access management, you will be able to manage access to Azure the same way.

Below are some more details on how this works and can be enabled.

Azure Active Directory

Azure Active Directory is our directory service in the cloud.  You can create organizational tenants within Azure Active Directory and define users and groups within it – without having to have any existing Active Directory setup on-premises.

Alternatively, you can also sync (or federate) users and groups from your existing on-premises Active Directory to Azure Active Directory, and have your existing users and groups automatically be available for use in the cloud with Azure, Office 365, as well as over 2000 other SaaS based applications:

image

All users that access your Azure subscriptions are now present in the Azure Active Directory to which the subscription is associated. This enables you to manage what they can do, as well as revoke their access to all Azure subscriptions by disabling their account in the directory.

Role Permissions

In this first preview we are pre-defining three built-in Azure roles that give you a choice of granting restricted access:

  • An Owner can perform all management operations for a resource and its child resources, including access management.
  • A Contributor can perform all management operations for a resource including create and delete resources. A contributor cannot grant access to others.
  • A Reader has read-only access to a resource and its child resources. A Reader cannot read secrets.

In the RBAC model, users who have been configured to be the service administrator and co-administrators of an Azure subscription are mapped as belonging to the Owners role of the subscription. Their access to both the current and preview management portals remains unchanged.

Additional users and groups that you then assign to the new RBAC roles will only have those permissions, and also will only be able to manage Azure resources using the new Azure preview portal and Azure Resource Manager APIs.  RBAC is not supported in the current Azure management portal or via older management APIs (since neither of these were built with the concept of role based security built-in).

Restricting Access based on Role Based Permissions

Let’s assume that your team is using Azure for development, as well as to host the production instance of your application. When doing this you might want to separate the resources employed in development and testing from the production resources using Resource Groups.

You might want to allow everyone in your team to have a read-only view of all resources in your Azure subscription – including the ability to read and review production analytics data. You might then want to only allow certain users to have write/contributor access to the production resources.  Let’s look at how to set this up:

Step 1: Setting up Roles at the Subscription Level

We’ll begin by mapping some users to roles at the subscription level.  These will then by default be inherited by all resources and resource groups within our Azure subscription.

To set this up, open the Billing blade within the Preview Azure Portal (http://portal.azure.com), and within the Billing blade select the Azure subscription that you wish to setup roles for: 

image

Then scroll down within the blade of subscription you opened, and locate the Roles tile within it:

image

Clicking the Roles tile will bring up a blade that lists the pre-defined roles we provide by default (Owner, Contributor, Reader).  You can click any of the roles to bring up a list of the users assigned to that role.  Clicking the Add button will then allow you to search your Azure Active Directory and add either a user or a group to that role.

Below I’ve opened up the default Reader role and added David and Fred to it:

image

Once we do this, David and Fred will be able to log into the Preview Azure Portal and will have read-only access to the resources contained within our subscription.  They will not be able to make any changes, though, nor be able to see secrets (passwords, etc).

Note that in addition to adding users and groups from within your directory, you can also use the Invite button above to invite users who are not currently part of your directory, but who have a Microsoft Account (e.g. scott@outlook.com), to also be mapped into a role.

Step 2: Setting up Roles at the Resource Level

Once you’ve defined the default role mappings at the subscription level, they will by default apply to all resources and resource groups contained within it. 

If you wish to scope permissions even further at just an individual resource (e.g. a VM or Website or Database) or at a resource group level (e.g. an entire application and all resources within it), you can also open up the individual resource/resource-group blade and use the Roles tile within it to further specify permissions.

For example, earlier we granted David reader role access to all resources within our Azure subscription.  Let’s now grant him contributor role access to just an individual VM within the subscription.  Once we do this he’ll be able to stop/start the VM as well as make changes to it.

To enable this, I’ve opened up the blade for the VM below.  I’ve then scrolled down the blade and found the Roles tile within the VM.  Clicking the contributor role within the Roles tile will then bring up a blade that allows me to configure which users will be contributors (meaning have read and modify permissions) for this particular VM.  Notice below how I’ve added David to this:

image

Using this resource/resource-group level approach enables you to have really fine-grained access control permissions on your resources.

Command Line and API Access for Azure Role Based Access Control

The enforcement of the access policies that you configure using RBAC is done by the Azure Resource Manager APIs.  Both the Azure preview portal as well as the command line tools we ship use the Resource Manager APIs to execute management operations. This ensures that access is consistently enforced regardless of what tools are used to manage Azure resources.

With this week’s release we’ve included a number of new PowerShell APIs that enable you to automate setting up as well as controlling role based access.

Learn More about Role Based Access

Today’s Role Based Access Control Preview provides a lot more flexibility in how you manage the security of your Azure resources.  It is easy to setup and configure.  And because it integrates with Azure Active Directory, you can easily sync/federate it to also integrate with the existing Active Directory configuration you might already have in your on-premises environment.

Getting started with the new Azure Role Based Access Control support is as simple as assigning the appropriate users and groups to roles on your Azure subscription or individual resources. You can read more detailed information on the concepts and capabilities of RBAC here. Your feedback on the preview features is critical for all improvements and new capabilities coming in this space, so please try out the new features and provide us with your feedback.

Alerts: General Availability of Azure Alerting and new Alerts on Events support

I’m excited to announce the release of Azure Alerting to General Availability. Azure Alerting supports creating alert thresholds on the metrics you are interested in, and having Azure automatically send an email notification when a threshold is crossed. As part of the general availability release, we are removing the 10 alert rule cap per subscription.
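The mental model is simple: a metric, a time window, a threshold, and a notification when the threshold is crossed. The toy Python sketch below only illustrates that evaluation; it is not the Azure Alerting implementation or API, and the CPU samples, window, and threshold are made up:

from statistics import mean

def evaluate_alert(samples, threshold, window=5):
    """Return True when the average of the last `window` samples crosses the threshold."""
    recent = samples[-window:]
    return len(recent) == window and mean(recent) > threshold

# Hypothetical CPU percentage samples collected once a minute.
cpu_percent = [42, 55, 61, 78, 83, 91, 88]

if evaluate_alert(cpu_percent, threshold=80):
    # This is where Azure would send the notification email to admins / the configured address.
    print("ALERT: average CPU over the last 5 minutes exceeded 80%")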

Alerts are available in the full Azure portal by clicking Management Services in the left navigation bar:

[Screenshot: Management Services in the full Azure portal]

Also, alerting is available on most of the resources in the Azure preview portal:

[Screenshot: alerting on a resource in the Azure preview portal]

You can create alerts on metrics from 8 different services today (and we are adding more all the time):

  • Cloud Services
  • Virtual Machines
  • Websites
  • Web hosting plans
  • Storage accounts
  • SQL databases
  • Redis Cache
  • DocumentDB accounts

In addition to general availability for alerts on metrics, we are also previewing the ability to create alerts on operational events. This means you can get an email if someone stops your website, if your virtual machines are deleted, or if your Azure Resource Manager template deployment fails. Like alerts on metrics, you can route these alerts to the service and co-administrators, or to a custom email address you provide.  You can configure these events on a resource in the Azure Preview Portal.  We have enabled this within the Portal for Websites – we’ll be extending it to all resources in the future.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Ten at Ten Meetings

Ten at Ten Meetings are a very simple tool for helping teams stay focused, stay connected, and collaborate more effectively, the Agile way.

I’ve been leading distributed teams and v-teams for years.  I needed a simple way to keep everybody on the same page, expose issues, and help everybody on the team increase their awareness of results and progress, as well as unblock and break through blocking issues.

Why Ten at Ten Meetings?

When people are remote, it’s easy to feel disconnected, and it’s easy for individual people to become a “black box” or to “go dark.”

Ten at Ten Meetings have been my friend and have helped me help everybody on the team stay in sync and appreciate each other’s work, while finding better ways to team up on things and drive to results in a collaborative way.  I believe I started Ten at Ten Meetings back in 2003 (before that, I wasn’t as consistent … I think 2003 is when I realized that a quick sync each day keeps the “black box” away.)

Overview of Ten at Ten Meetings

I’ve written about Ten at Ten Meetings before in my posts on How To Lead High-Performance Distributed Teams, How I Use Agile Results, Interview on Timeboxing for HBR (Harvard Business Review), Agile Results Works for Teams and Leaders Too,  and 10 Free Leadership Tools for Work and Life, but I thought it would be helpful to summarize some of the key information at a glance.

Here is an overview of Ten at Ten Meetings:

This is one of my favorite tools for reducing email and administration overhead and getting everybody on the same page fast.  It's simply a stand-up meeting.  I tend to have them at 10:00, and I set a limit of 10 minutes.  This way people look forward to the meeting as a way to very quickly catch up with each other, and to stay on top of what's going on, and what's important.

The way it works is I go around the (virtual) room, and each person identifies what they got done yesterday, what they're getting done today, and any help they need.  It's a fast process, although it can take practice in the beginning.  When I first started, I had to get in the habit of hanging up on people if it went past 10 minutes.  People very quickly realized that the ten minute meeting was serious.

Also, as issues came up, if they weren't fast to solve on the fly and felt like a distraction, then we had to learn to take them offline.  Eventually, this helped build a case for a recurring team meeting where we could drill deeper into recurring issues or patterns, and focus on improving overall team effectiveness.

3 Steps for Ten at Ten Meetings

Here is more of a step-by-step approach:

  1. I schedule ten minutes, Monday through Thursday, at whatever time the team can agree to, but in the AM (no meetings on Friday).
  2. During the meeting, we go around and ask three simple questions:  1)  What did you get done?  2) What are you getting done today? (focused on Three Wins), and 3) Where do you need help?
  3. We focus on the process (the 3 questions) and the timebox (10 minutes) so it’s a swift meeting with great results.   We put issues that need more drill-down or exploration into a “parking lot” for follow up.  We focus the meeting on status and clarity of the work, the progress, and the impediments.

You’d be surprised at how quickly people start to pay attention to what they’re working on and to what’s worth working on.  It also helps team members very quickly see each other’s impact and results, and it helps people raise their bar, especially when they get to hear and experience what good looks like from their peers.

Most importantly, it shines the light on little, incremental progress, and, if you didn’t already know, progress is the key to happiness in work and life.

You Might Also Like

10 Free Leadership Tools for Work and Life

How I Use Agile Results

How To Lead High-Performance Distributed Teams

Categories: Architecture, Programming

Xcode 6 GM & Learning Swift (with the help of Xebia)

Xebia Blog - Thu, 09/11/2014 - 07:32

I guess re-iterating the announcements Apple did on Tuesday is not needed.

What is most interesting to me about everything that happened on Tuesday is that iOS 8 has now reached GM status and Apple has put out the call to submit your iOS 8 uploads to iTunes Connect. iOS 8 is around the corner, about a week from now, bringing some great new features to the platform and ... Swift.

Swift

I was thinking about putting together a list of excellent links about Swift. But obviously somebody has done that already:
https://github.com/Wolg/awesome-swift
(And best of all, if you find/notice an epic Swift resource out there, submit a pull request to that repo, or leave a comment on this blog post.)

If you are getting started, check out:
https://github.com/nettlep/learn-swift
It's a GitHub repo filled with extra Playgrounds to learn Swift in a hands-on manner. It elaborates a bit further on the later chapters of the Swift language book.

But the best way to learn Swift I can come up with is to join Xebia for a day (or two) and attend one of our special-purpose update trainings hosted by Daniel Steinberg on 6 and 7 November. More info on that:

Booting a Raspberry Pi B+ with the Raspbian Debian Wheezy image

Agile Testing - Grig Gheorghiu - Thu, 09/11/2014 - 01:32
It took me a while to boot my brand new Raspberry Pi B+ with a usable Linux image. I chose the Raspbian Debian Wheezy image available on the downloads page of the official raspberrypi.org site. Here are the steps I needed:

1) Bought a micro SD card. Note: DO NOT get a regular SD card for the B+ because it will not fit in the card slot; you need a micro SD card.

2) Inserted the SD card via an SD USB adaptor in my MacBook Pro.

3) Went to the command line and ran df to see which volume the SD card was mounted as. In my case, it was /dev/disk1s1.

4) Unmounted the SD card. I initially tried 'sudo umount /dev/disk1s1' but the system told me to use 'diskutil unmount', so the command that worked for me was:

diskutil unmount /dev/disk1s1

5) Used dd to copy the Raspbian Debian Wheezy image (which I previously downloaded) per these instructions. Important note: the target of the dd command is /dev/disk1 and NOT /dev/disk1s1. I tried initially with the latter, and the Raspberry Pi wouldn't boot (one of the symptoms that something was wrong, other than the fact that nothing appeared on the monitor, was that the green light was solid instead of flashing; a Google search revealed that one possible cause for that is a problem with the SD card). The dd command I used was:

dd if=2014-06-20-wheezy-raspbian.img of=/dev/disk1 bs=1m
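A sanity check worth doing before flashing (and before blaming the SD card) is to verify the downloaded image against the checksum published on the downloads page, if one is listed for your image. A small Python sketch; the file name matches the image above, and the expected checksum is a placeholder you would copy from the site:

import hashlib

image_path = "2014-06-20-wheezy-raspbian.img"       # the image referenced above
expected_sha1 = "<checksum copied from the downloads page>"

sha1 = hashlib.sha1()
with open(image_path, "rb") as image:
    for chunk in iter(lambda: image.read(1024 * 1024), b""):  # hash in 1 MB chunks
        sha1.update(chunk)

print("OK" if sha1.hexdigest() == expected_sha1 else "Checksum mismatch -- re-download the image")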

6) At this point, I inserted the micro SD card into the SD slot on the Raspberry Pi, then connected the Pi to a USB power cable, a monitor via an HDMI cable, a USB keyboard and a USB mouse. I was able to boot and change the password for the pi user. The sky is the limit next ;-)




10 Common Server Setups For Your Web Application

If you need a good overview of different ways to set up your web service, then Mitchell Anicas has written a good article for you: 5 Common Server Setups For Your Web Application.

We've even included a few additional possibilities at no extra cost.

  1. Everything on One Server. Simple. Potential for poor performance because of resource contention. Not horizontally scalable. 
  2. Separate Database Server. There's an application server and a database server. Application and database don't share resources. Can independently vertically scale each component. Increases latency because the database is a network hop away.
  3. Load Balancer (Reverse Proxy). Distributes workload across multiple servers. Native horizontal scaling. Protection against DDoS attacks using rules. Adds complexity. Can be a performance bottleneck. Complicates issues like SSL termination and sticky sessions (a minimal round-robin sketch follows this list).
  4. HTTP Accelerator (Caching Reverse Proxy). Caches web responses in memory so they can be served faster. Reduces CPU load on web server. Compression reduces bandwidth requirements. Requires tuning. A low cache-hit rate could reduce performance. 
  5. Master-Slave Database Replication. Can improve read and write performance. Adds a lot of complexity and failure modes.
  6. Load Balancer + Cache + Replication. Combines load balancing of the caching servers and the application servers with database replication. Nice explanation in the article.
  7. Database-as-a-Service (DBaaS). Let someone else run the database for you.  RDS is one example from Amazon and there are hosted versions of many popular databases.
  8. Backend as a Service (BaaS). If you are writing a mobile application and you don't want to deal with the backend component then let someone else do it for you. Just concentrate on the mobile platform. That's hard enough. Parse and Firebase are popular examples, but there are many more.
  9. Platform as a Service (PaaS). Let someone else run most of your backend, but you get more flexibility than you have with BaaS to build your own application. Google App Engine, Heroku, and Salesforce are popular examples, but there are many more.
  10. Let Someone Else Do It. Do you really need servers at all? If you have a store then a service like Etsy saves a lot of work for very little cost. Does someone already do what you need done? Can you leverage it?
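As mentioned in option 3, the core of a load balancer's job is spreading requests across backends. Here is a deliberately tiny Python sketch of plain round-robin selection; the backend addresses are made up, and real load balancers such as HAProxy or Nginx layer health checks, sticky sessions, and SSL termination on top of this idea:

import itertools

# Hypothetical application servers sitting behind the load balancer.
backends = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
rotation = itertools.cycle(backends)

def route(request_id):
    """Pick the next backend in round-robin order for an incoming request."""
    backend = next(rotation)
    print(f"request {request_id} -> {backend}")
    return backend

for request_id in range(1, 7):   # six requests land on each of the three backends twice
    route(request_id)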
Categories: Architecture

Cloud Changes the Game from Deployment to Adoption

Before the Cloud, there was a lot of focus on deployment, as if deployment was success. 

Once you shipped the project, it was time to move on to the next project.  And project success was measured in terms of “on time” and “on budget.”   If you could deploy things quickly, you were a super shipper.

Of course, what we learned was that if you simply throw things over the wall and hope they stick, it’s not very successful.

"If you build it" ... users don't always come.

It was easy to confuse shipping projects on time and on budget with business impact.  

But let's compound the problem. 

The Development Hump

The big hump of software development was the hump in the middle: a big development hump. And that hump was followed by a big deployment hump (installing software, fixing issues, dealing with deployment hassles, etc.).

So not only were development cycles long, but deployment was tough, too.

Because development cycles were long, and deployment was so tough, it was easy to confuse effort for value.

Cloud Changes the Hump

Now, let's turn it around.

With the Cloud, deployment is simplified.  You can reach more users, and it's easier to scale.  And it's easier to be available 24x7.

Add Agile to the mix, and people ship smaller, more frequent releases.

So with smaller, more-frequent releases, and simpler deployment, some software teams have turned into shipping machines.

The Cloud shrinks the development and deployment humps.

So now the game is a lot more obvious.

Deployment doesn't mark the finish.  It starts the game.

The real game of software success is adoption.

The Adoption Hump is Where the Benefits Are

If you picture the old IT project hump, with its long development cycle in the middle, there are now shorter humps in the middle.

The big hump is now user adoption.

It’s not new.  It was always there.   But the adoption hump was hidden behind the development and deployment humps, and simply written off as “Value Leakage.”

And even if you made it over the first two humps, most projects did not plan or design for adoption, or allocate any resources or time for it, so adoption was mostly an afterthought.

And so the value leaked.

But the adoption hump is where the business benefits are.   The ROI is sitting there, gathering dust, in our "pay-for-play" world.   The value is simply waiting to be released and unleashed. 

Software solutions are sitting idle waiting for somebody to realize the value.

Accelerate Value by Accelerating Adoption

All of the benefits to the business are locked up in that adoption hump.   All of the benefits around how users will work better, faster, or cheaper, or how you will change the customer interaction experience, or how back-office systems will be better, faster, cheaper ... they are all locked up in that adoption hump.

As I said before, the key to Value Realization is adoption.  

So if you want to realize more value, drive more user adoption. 

And if you want to accelerate value, then accelerate user adoption.

In Essence …

In a Cloud world, the original humps of design, development, and deployment shrink.   But it’s not just time and effort that shrink.  Costs shrink, too.   With online platforms to build on (Infrastructure as a Service, Platforms as a Service, and Software as a Service), you don’t have to start from scratch or roll your own.   And if you adopt a configure before customize mindset, you can further reduce your costs of design and development.

Architecture moves up the stack from basic building blocks to composition.

And adoption is where the action is.  

What was the afterthought in the last generation of solutions is now front and center. 

In the new world, adoption is a planned spend, and it’s core to the success of the planned value delivery.

If you want to win the game, think “Adoption-First.”

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

Firm foundations

Coding the Architecture - Simon Brown - Tue, 09/09/2014 - 12:32

I truly believe that a lightweight, pragmatic approach to software architecture is pivotal to successfully delivering software, and that it can complement agile approaches rather than compete against them. After all, a good architecture enables agility and this doesn't happen by magic. But the people in our industry often tend to have a very different view. Historically, software architecture has been a discipline steeped in academia and, I think, subsequently feels inaccessible to many software developers. It's also not a particularly trendy topic when compared to [microservices|Node.js|Docker|insert other new thing here].

I've been distilling the essence of software architecture over the past few years, helping software teams to understand and introduce it into the way that they work. And, for me, the absolute essence of software architecture is about building firm foundations; both in terms of the team that is building the software and for the software itself. It's about technical leadership, creating a shared vision and stacking the chances of success in your favour.

I'm delighted to have been invited back to ASAS 2014 and my opening keynote is about what a software team can do in order to create those firm foundations. I'm also going to talk about some concrete examples of what I've done in the past, illustrating how I apply a minimal set of software architecture practices in the real world to take an idea through to working software. I'll also share some examples of where this hasn't exactly gone to plan! I look forward to seeing you in a few weeks.

Read more...

Categories: Architecture

Running unit tests on iOS devices

Xebia Blog - Tue, 09/09/2014 - 09:48

When running a unit test target that needs an entitlement (keychain access), it does not work out of the box in Xcode. You get a descriptive error in the console about a "missing entitlement". Everything works fine on the Simulator, though.

Often this is a case of the executable bundle's code signature not being valid anymore because a testing bundle was added/linked to the executable before deployment to the device. The easiest fix is to add a new "Run Script Build Phase" with the content:

codesign --verify --force --sign "$CODE_SIGN_IDENTITY" "$CODESIGNING_FOLDER_PATH"
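# (The command above re-signs the app bundle after the test bundle has been linked in, so the code signature is valid again when deployed to the device.)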


Now try (cleaning and) running your unit tests again. Good chance it now works.

How To Rapidly Brainstorm and Share Ideas with Method 635

So, if you have a bunch of smart people, a bunch of bright ideas, and everybody wants to talk at the same time ... what do you do?

Or, you have a bunch of smart people, but they are quiet and nobody is sharing their bright ideas, and the squeaky wheel gets the oil ... what do you do?

Whenever you get a bunch of smart people together to change the world it helps to have some proven practices for better results.

One of the techniques a colleague shared with me recently is Method 635.  It stands for six participants, three ideas, and five rounds of supplements. 

He's used Method 635 successfully to get a large room of smart people to brainstorm ideas and put their top three ideas forward.

Here's how he uses Method 635 in practice.

  1. Split the group into 6 people per table (6 people per team or table).
  2. Explain the issue or challenge to the group, so that everybody understands it. Each group of 6 writes down 3 solutions to the problem (5 minutes).
  3. Go five rounds (5 minutes per round).  During each round, pass the ideas to the participant's neighbor (one of the other participants).  The participant's neighbor will add three additional ideas or modify three of the existing ones.
  4. At the end of the five rounds, each team votes on their top three ideas (5 minutes.)  For example, you can use “impact” and “ability to execute” as criteria for voting (after all, who cares about good ideas that can't be done, and who cares about low-value ideas that can easily be executed.)
  5. Each team presents their top three ideas to the group.  You could then vote again, by a show of hands, on the top three ideas across the teams of six.

The outcome is that each person will see the original three solutions and contribute to the overall set of ideas.
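To make the bookkeeping concrete, here is a small Python sketch of the rotation, assuming the classic per-participant setup in which each of the six people starts their own sheet with three ideas; names and idea text are placeholders. With a shift of one seat per round, nobody adds to their own sheet during the five rounds, and you end up with at most 6 x (3 + 5 x 3) = 108 ideas.

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]   # six people around one (virtual) table

# Each sheet starts with its owner's three ideas (placeholder text);
# sheets[i] is the sheet currently in front of participants[i].
sheets = [[f"{owner} idea {n}" for n in range(1, 4)] for owner in participants]

for round_number in range(1, 6):                      # five rounds of supplements
    sheets = [sheets[-1]] + sheets[:-1]               # everyone passes their sheet to their neighbour
    for holder, ideas in zip(participants, sheets):
        # The current holder adds three new ideas (or refinements) to the sheet in front of them.
        ideas.extend(f"{holder} addition {round_number}.{n}" for n in range(1, 4))

total = sum(len(ideas) for ideas in sheets)
print(f"{total} ideas after five rounds")             # 6 sheets x (3 + 5 x 3) = 108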

By using this method, if you take 10 minutes to explain the issue, give teams 5 minutes to write down their initial set of 3 ideas, run five 5-minute rounds, and then spend another 5 minutes to vote and 5 minutes to present, you’ve accomplished a lot within an hour.   Voices were heard.  Smart people contributed their ideas and got their fingerprints on the solutions.  And you’ve driven to consensus by first elaborating on ideas, while at the same time driving to convergence and allowing refinement along the way.

Not bad.

All in a good day’s work, and another great example of how structuring an activity, even loosely structuring an activity, can help people bring out their best.

You Might Also Like

How To Use Six Thinking Hats

Idea to Done: How to Use a Personal Kanban for Getting Results

Workshop Planning Framework

Categories: Architecture, Programming

How Twitter Uses Redis to Scale - 105TB RAM, 39MM QPS, 10,000+ Instances

Yao Yue has worked on Twitter’s Cache team since 2010. She recently gave a really great talk: Scaling Redis at Twitter. It’s about Redis of course, but it's not just about Redis.

Yao has worked at Twitter for a few years. She's seen some things. She’s watched the cache service at Twitter explode from being used by just one project to being used by nearly a hundred projects. That's many thousands of machines, many clusters, and many terabytes of RAM.

It's clear from her talk that she's coming from a place of real personal experience, and that shines through in the practical way she explores issues. It's a talk well worth watching.

As you might expect, Twitter has a lot of cache.

Timeline Service for one datacenter using Hybrid List:
  • ~40TB allocated heap
  • ~30MM qps
  • > 6,000 instances
Use of BTree in one datacenter:
  • ~65TB allocated heap
  • ~9MM qps
  • >4,000 instances

You'll learn more about BTree and Hybrid List later in the post.

A couple of points stood out:

  • Redis is a brilliant idea because it takes underutilized resources on servers and turns them into a valuable service.
  • Twitter specialized Redis with two new data types that fit their use cases perfectly. So they got the performance they needed, but it locked them into an older code base and made it hard to merge in new features. I have to wonder, why use Redis for this sort of thing? Just create a timeline service using your own datastructures. Does Redis really add anything to the party?
  • Summarize large chunks of log data on the node, using your local CPU power, before saturating the network (a small aggregation sketch follows this list).
  • If you want something that’s high performance, separate the fast path (the data path) from the slow path (the command and control path). 
  • Twitter is moving towards a container environment with Mesos as the job scheduler. This is still a new approach so it's interesting to hear about how it works. One issue is the Mesos wastage problem that stems from the requirement to specify hard resource usage limits in a complicated runtime world.
  • A central cluster manager is really important to keep a cluster in a state that’s easy to understand.
  • The JVM is slow and C is fast. Their cache proxy layer is moving back to C/C++.
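To illustrate the point above about summarizing on the node: rather than shipping every raw log line across the network, aggregate locally and send only the rollup. A hedged Python sketch, in which the log format and the ship() transport are invented for illustration:

from collections import Counter

def summarize(log_lines):
    """Count events per (endpoint, status) locally, before anything leaves the node."""
    counts = Counter()
    for line in log_lines:
        # Assumed log format: "<timestamp> <endpoint> <status>"
        _, endpoint, status = line.split()
        counts[(endpoint, status)] += 1
    return counts

def ship(summary):
    # Placeholder for whatever transport you use; only the small rollup crosses the network.
    print(f"shipping {len(summary)} aggregate rows instead of {sum(summary.values())} raw lines")

raw_lines = [
    "1409875200 /timeline 200",
    "1409875201 /timeline 200",
    "1409875202 /search 500",
]
ship(summarize(raw_lines))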
With that in mind, let's learn more about how Redis is used at Twitter:

Why Redis?
Categories: Architecture

Xebia KnowledgeCast Episode 3

Xebia Blog - Mon, 09/08/2014 - 15:41

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this third episode, we get a bit more technical with me interviewing some of the most excellent programmers in the known universe: Age Mooy and Barend Garvelink. Then, I talk education and Smurfs with Femke Bender. Also, I toot my own horn a bit by providing you with a summary of my PMI Netherlands Summit session on Building Your Parachute On The Way Down. And of course, Serge will have Fun With Stickies!

It's been a while, and for those that are still listening to this feed: the Xebia KnowledgeCast has been on hold due to personal circumstances. Last year, my wife lost her job, my father and mother-in-law died, and we had to take our son out of the daycare center he was in due to the way they were treating him there. So, that had a little bit of an effect on my level of podfever. That said, I'm back! And my podfever is up!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, use the Auphonic recording app to send in a voice message as an AIFF, WAV, or FLAC file so we can put you ON the show!

Credits

Getting started with Salt

Xebia Blog - Mon, 09/08/2014 - 09:07

A couple of days ago I had the chance to spend a full day working with Salt(stack). On my current project we are using a different configuration management tool, and my colleagues there claimed that Salt was simpler and more productive. The challenge was easily set: they claimed that a couple of people with no Salt experience, albeit with a little configuration management knowledge, would be productive in a single day.

My preparation was easy: I had to know nothing about Salt... done!

During the day I was working side by side with another colleague who knew little to nothing about Salt. When the day started, the original plan was to do a quick one hour introduction into Salt. But as we like to dive in head first this intro was skipped in favor of just getting started. We used an existing Vagrant box that spun up a Salt master & minion we could work on. The target was to get Salt to provision a machine for XL Deploy, complete with the customizations we were doing at our client. Think of custom plugins, logging configuration and custom libraries.

So we got cracking, running down the steps we needed to get XL Deploy installed. The steps were relatively simple, create a user & group, get the installation files from somewhere, install the server, initialize the repository and run it as a service.

The first thing I noticed is that we could simply get started. For the tasks we needed to do (downloading, unzipping etc.) we didn't need any additional states. Actually, during the whole exercise we never downloaded any additional states. Everything we needed was provided by Salt from the get-go. Granted, we weren't doing anything special, but it's not a given that everything is available.

During the day we approached the development of our Salt state like we would a shell script. We started from the top and added the steps needed. When we ran into issues with the order of the steps we'd simply move things around to get it to work. Things like creating a user before running a service as that user were easily resolved this way.

Salt uses YAML to define a state, and that was fairly straightforward to use. Sometimes the naming used was strange. For example, the salt.states.archive module uses the parameter "source" for its source location but "name" for its destination. It's clearly stated in the docs what each parameter is used for, but it's a strange convention nonetheless.

We also found that the feedback provided by Salt can be scarce. On more than one occasion we'd enter a command and nothing would happen for a good while. Sometimes there would eventually be a lot of output, but sometimes there wasn't.  This would be my biggest gripe with Salt: you don't always get the feedback you'd like.

Things like using templates and hierarchical data (the so-called pillars) proved easy to use. Salt uses Jinja2 as its templating engine; since we only needed simple variable replacement, it's hard to comment on how useful Jinja is. For our purposes it was fine.  Using pillars proved equally straightforward. The only issue we encountered here was that we needed to add our pillar to our machine role in the top.sls. Once we did that we could use the pillar data where needed.

The biggest (and only real) problem we encountered was getting XL Deploy to run as a service. We tried two approaches: one using the default service mechanism on Linux and the second using upstart. Upstart made it very easy to get the service started, but it wouldn't stop properly. Using the default mechanism we couldn't get the service to start during a Salt run. When we sent it specific commands it would start (and stop) properly, but not during a run. We eventually added a post-stop script to the upstart config to make sure all the (child) processes stopped properly.

At the end of the day we had a state running that provisioned a machine with XL Deploy, including all the customizations we wanted. Salt basically did what we wanted. Apart from the service, everything went smoothly. Granted, we didn't do anything exotic and stuck to rudimentary tasks like downloading, unzipping and copying, but implementing these simple tasks remained simple and straightforward. Overall Salt did what one might expect.

From my perspective the goal of being productive in a single day was easily achieved. Because of how straightforward it was to implement I feel confident about using Salt for more complex stuff. So, all in all Salt left a positive impression and I would like to do more with it.

Are Testers still relevant?

Xebia Blog - Sun, 09/07/2014 - 22:07

Last week I talked to one of my colleagues about a tester in his team. He told me that the tester was bored because he had nothing to do: all the developers wrote and executed their tests themselves. Which makes sense, because the Tester 2.0 tries to make the team test infected.

So what happens if every developer in the team has the Testivus? Are you still relevant on the Continuous Delivery train?
Come and join the discussion at the Open Kitchen Test Automation: Are you still relevant?

Testers 1.0

Remember the days when software testers were consulted after everything was built and released for testing. Testing was a big fat QA phase, which was a project by itself. The QA department consisted of test managers analyzing the requirements first. Logical test cases were created and handed off to test executors, who created physical test cases and executed them manually. Testers discovered conflicting requirements and serious issues in technical implementations. Which is good, obviously. You don't want to deliver low quality software. Right?

So product releases were being delayed and the QA department documented everything in a big fat test report. And we all knew it: The QA department had to do it all over again after the next release.

I remember being a tester during those days. I always asked myself: Why am I always the last one thinking about ways to break the system? Does the developer know how easily this functionality can be broken? Does the product manager know that this requirement does not make sense at all?
Everyone hated our QA department. We were portrayed as slow, always delivering bad news and holding back the delivery cycle. But the problem was not delivering the bad news. The timing was.

The way of working needed to be changed.

Testers 2.0

We started training testers to help Agile teams deliver high quality software during development: The birth of the Tester 2.0 - The Agile Tester.
These testers master the Agile Manifesto and the processes and methods that come with it. Collaboration about quality is the key here. Agile Testing is a mindset. And everyone is responsible for the quality of the product. Testers 2.0 helped teams get (more) test infected. They thought like researchers instead of quality gatekeepers. They became part of the software development and delivery teams and looked for ways to speed up testing efforts. So they practiced several exploratory testing techniques, focused on reasonable and required tests, given the constraints of a sprint.

When we look back at several Agile teams having a shared understanding of Agile Testing, we saw many multidisciplinary teams become two separate teams: one for developers and the other for QA / testers.
I personally never felt comfortable in those teams. Putting testers with a group of developers is not Agile Testing. Developers still left testing to testers, and testers passively waited for developers to deploy something to be tested. At some point testers became a bottleneck again, so they invested in test automation. So testers became test automators and built their own test code in a separate code base from the development code. Test automators also picked tools that did not foster team responsibility. Therefore Test Automation got completely separated from product development. We found ways to help testers, test automators and developers collaborate by improving the process. But that was treating the symptom of the problem. Developers were not taking responsibility for automated tests. And testers did not help developers design testable software.

We want test automators and developers to become the same people.

Testers 3.0

If you want to accelerate your business you'll need to become iterative, delivering value to production as soon as possible by doing Continuous Delivery properly. So we need multidisciplinary teams to shorten feedback loops coming from different points of view.

Testers 3.0 are therefore required to accelerate the business by working in the following areas:

Requirement inspection: Building the right thing

The Tester 3.0 tries to understand the business problem behind requirements by describing examples. It's important to get a common understanding between the business and the technical context. So the tester verifies requirements as soon as possible, uses them as input for Test Automation, and uses BDD as a technique that fosters a Human Readable Language. These testers work on automated acceptance tests as soon as possible.

Test Automation: Boosting the software delivery process

When common understanding is reached and the delivery team is ready to implement the requested feature, the tester needs programming skills to keep the acceptance tests in a clean-code state. The Tester 3.0 uses appropriate Acceptance Test Driven Development tools (like Cucumber) that the whole team supports, but keeps an eye out for better, faster and easier automated testing frameworks to support the team.
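To give a feel for what acceptance tests in a clean-code state can read like, here is a small sketch in plain pytest rather than Cucumber; the Basket class stands in for the real system under test, the names are made up, and the point is the human-readable given/when/then structure rather than the tool:

import pytest

# Stand-in for the real application code; in a real project this would be the system under test.
class Basket:
    def __init__(self):
        self.lines = {}

    def add(self, product, quantity=1):
        self.lines[product] = self.lines.get(product, 0) + quantity

    def total_items(self):
        return sum(self.lines.values())

@pytest.fixture
def empty_basket():
    """Given an empty basket"""
    return Basket()

def test_adding_two_products_shows_two_items_in_the_basket(empty_basket):
    # When the customer adds two different products
    empty_basket.add("book")
    empty_basket.add("dvd")
    # Then the basket reports two items
    assert empty_basket.total_items() == 2

The test name doubles as the specification, which is what makes a green suite readable as living documentation later on.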

At the Xebia Craftsmanship Seminar (a couple of months ago) someone asked me if testers should learn how to write code.
I told him that no one is good at everything. But the Tester 3.0 has good testing skills and enough technical background to write automated acceptance tests. Continuous Delivery teams have a shared responsibility, and they automate all boring steps like manual test scripts, performance and security tests. It's very important to know how to automate; otherwise you'll slow down the team. You'll be treated the same as anyone else in the delivery team.
Testers 3.0 try to get developers to think about clean code and about ensuring high quality code. They look into (and keep up with) popular development frameworks and address their testability. Even the test code is evaluated continuously for quality attributes; it needs the same love and care as code that goes into production.

Living documentation: Treating tests as specifications

At some point you'll end up with a huge set of automated tests telling you everything is fine. The Tester 3.0 treats these tests as specifications and tries to create a living document, which is used for long-term requirements gathering. No one will complain about these tests when they are all green and passing. The problem starts when tests start failing and no one can understand why. Testers 3.0 think about their colleagues when they write a specification or test. They need to clearly specify what is being tested in a Human Readable Language.
They are used to changing requirements and specifications. And they don't make a big deal out of it. They understand that stakeholders can change their minds once a product comes to life. So the tester makes sure that important decisions made during new requirement inspections and development are stored and understood.

Relevant test results: Building quality into the process

Testers 3.0 focus on getting extremely fast feedback to determine the quality of software products every day. Every night. Every second.
Testers want to deploy new working software features into production more often. So they do whatever it takes to build a high-quality pipeline that decreases the quality feedback time during development.
Everyone in your company deserves to have confidence in the software delivery pipeline at any moment. Testers 3.0 think about how they communicate this feedback to the business. They provide ways to automatically report test results about quality attributes. Testers 3.0 ask the business to define quality. Knowing everything was built right, how can they measure that they've built the right thing? What do we need to measure when the product is in production?

How to stay relevant as a Tester

So what happens when all of your teammates are completely focused on high quality software using automation?

Testing does not require you to manually click, copy and paste boring scripted test steps you didn't want to do in the first place. You were hired to be skeptical about everything and to make sure that all risks are addressed. It's still important to keep being a researcher for your team and to stay curious accordingly.

Besides being curious and analytical and having great communication skills, you need to learn how to code. Don't work harder; work smarter by analyzing how you can automate all the boring checks so you'll have more time to discover other things using your curiosity.

Since testing drives software development, and should no longer be treated as a separate phase in the development process, it's important that teams put test automation at the center of all design decisions, because we need Test Automation to boost software delivery by building quality sensors into every step of the process. Every day. Every night. Every second!

Do you want to discuss this topic with other Testers 3.0?  Come and join the Open Kitchen: Test Automation and get on board the Continuous Delivery Train!

 

10 High-Value Activities in the Enterprise

I was flipping back over the past year and reflecting on the high-value activities that I’ve seen across various Enterprise customers.  I think the high-value activities tend to be some variation of the following:

  1. Customer interaction (virtual, remote, connect with experts).
  2. Product innovation and ideation.
  3. Work from anywhere on any device.
  4. Comprehensive and cross-boundary collaboration (employees, customers, and partners).
  5. Connecting with experts.
  6. Operational intelligence (predictive analytics, predictive maintenance).
  7. Cross-sell / up-sell and real-time marketing.
  8. Development and ALM in the Cloud.
  9. Protecting information and assets.
  10. Onboarding and enablement.

At first I was thinking of Porter’s Value Chain (Inbound Logistics, Operations, Outbound Logistics, Marketing & Sales, Services), which does help identify where the action is.   Next, I was reviewing how, when we drive big changes with a customer, it tends to be around changing the customer experience, the employee experience, or the back-office and systems experience.

You can probably recognize how the mega-trends (Cloud, Mobile, Social, and Big Data) influence the activities above, as well as some popular trends like Consumerization of IT.

High-Value Activities in the Enterprise from the Microsoft “Transforming Our Company” Memo

I also found it helpful to review the original memo from July 11, 2013 titled Transforming Our Company.  Below are some of my favorite sections from the memo:

Via Transforming Our Company:

We will engage enterprise on all sides — investing in more high-value activities for enterprise users to do their jobs; empowering people to be productive independent of their enterprise; and building new and innovative solutions for IT professionals and developers. We will also invest in ways to provide value to businesses for their interactions with their customers, building on our strong Dynamics foundation.

Specifically, we will aim to do the following:

  • Facilitate adoption of our devices and end-user services in enterprise settings. This means embracing consumerization of IT with the vigor we pursued in the initial adoption of PCs by end users and business in the ’90s. Our family of devices must allow people to be more productive, and for them to easily use our devices for work.

  • Extend our family of devices and services for enterprise high-value activities. We have unique expertise and capacity in this space.

  • Information assurance. Going forward this will be an area of critical importance to enterprises. We are their trusted partners in this space, and we must continue to innovate for them against a changing security and compliance landscape.

  • IT management. With more IT delivered as services from the cloud, the function of IT itself will be reimagined. We are best positioned to build the tools and training for that new breed of IT professional.

  • Big data insight. Businesses have new and expanded needs and opportunities to generate, store and use their own data and the data of the Web to better serve customers, make better decisions and design better products. As our customers’ online interactions with their customers accelerate, they generate massive amounts of data, with the cloud now offering the processing power to make sense of it. We are well-positioned to reimagine data platforms for the cloud, and help unlock insight from the data.

  • Customer interaction. Organizations today value most those activities that help them fully understand their customers’ needs and help them interact and communicate with them in more responsive and personalized ways. We are well-positioned to deliver services that will enable our customers to interact as never before — to help them match their prospects to the right products and services, derive the insights so they can successfully engage with them, and even help them find and create brand evangelists.

  • Software development. Finally, developers will continue to write the apps and sites that power the world, and integrate to solve individual problems and challenges. We will support them with the simplest turnkey way to build apps, sites and cloud services, easy integration with our products, and innovation for projects of every size.”

A Story of High-Value Activities in Action

If you can’t imagine what high-value activities look like, or what business transformation would look like, then have a look at this video:

Nedbank:  Video Banking with Lync

Nedbank was a brick-and-mortar bank that wanted to go digital and not just catch up to the Cloud world, but leapfrog into the future.  According to the video description, “Nedbank initiated a program called the Integrated Channel Strategy, focusing on client centered banking experiences using Microsoft Lync. The client experience is integrated and aligned across all channels and seeks to bring about efficiencies for the bank. Video banking with Microsoft Lync gives Nedbank a competitive advantage.”

The most interesting thing about the video is not just what’s possible, but that it’s real and happening.

They set a new bar for the future of digital banking.

You Might Also Like

Continuous Value Delivery the Agile Way

How Can Enterprise Architects Drive Business Value the Agile Way?

How To Use Personas and Scenarios to Drive Adoption and Realize Value

Categories: Architecture, Programming

Software architecture vs code

Coding the Architecture - Simon Brown - Sat, 09/06/2014 - 09:11

My Software architecture vs code talk has evolved considerably over the past year and I've presented a number of iterations at events in Europe and the US. If you're interested, thanks to the nice people behind the GOTO conferences, you can now watch the video of the version that I presented at GOTO Chicago 2014.

During September I'll be presenting a new version of this session at Foo Café in Malmö, Software Architecture Summit in Berlin and DevDay in Krakow. See you there!

Categories: Architecture