
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

DDD East Anglia 2014

Phil Trelford's Array - Sun, 09/14/2014 - 00:50

This Saturday saw the Developer Developer Developer! (DDD) East Anglia conference in Cambridge. DDD events are organized by the community for the community with the agenda for the day set through voting.

T-Shirts

The event marked a bit of a personal milestone for me, finally completing a set of DDD regional speaker T-Shirts, with a nice distinctive green for my local region. Way back in 2010 I chanced a first appearance at a DDD event with a short grok talk on BDD in the lunch break at DDD Reading. Since then I’ve had the pleasure of visiting and speaking in Glasgow, Belfast, Sunderland, Dundee and Bristol.

Talks

There were five F# related talks on the day, enough to fill an entire track:

Tomas kicked off the day, knocking up a simple e-mail validation library with tests using FsUnit and FsCheck. With the help of Project Scaffold, by the end of the presentation he’d generated a Nuget package, continuous build with Travis and Fake and HTML documentation using FSharp.Formatting.

Anthony’s SkyNet slides are already available on SlideShare:

Building Skynet: Machine Learning for Software Developers from bruinbrown

ASP.NET was also a popular topic, with a variety of talks on the day.

All your types are belong to us!

The title for this talk was borrowed from a slide in a talk given by Ross McKinlay which references the internet meme All your base are belong to us.

You can see a video of an earlier incarnation of the talk, which I presented at NorDevCon over on InfoQ, where they managed to capture me teapotting:

[Photo: teapotting]

The talk demonstrates accessing a wide variety of data sources using F#’s powerful Type Provider mechanism.

The World at your fingertips

The FSharp.Data library, maintained by Tomas Petricek and Gustavo Guerra, provides a wide range of type providers giving typed access to standard formats like CSV, JSON and XML, through to large data sources such as Freebase and the World Bank.

With a little help from FSharp.Charting and a simple custom operator based DSL it’s possible to view interesting statistics from the World Bank data with just a few key strokes:

[uk;fr;de] => fun country -> country.``CO2 emissions (metric tons per capita)`` // FSharp.Data + FSharp.Charting pic.twitter.com/D65m2KmZGp

— dirty coder (@ptrelford) September 11, 2014

The JSON and XML providers give easy typed access to most internet data, and there’s even a branch of FSharp.Data with an HTML type provider providing access to embedded tables.

Enterprise

The SQLProvider project provides typed access, with LINQ support, to a wide variety of databases including MS SQL Server, PostgreSQL, Oracle, MySQL, ODBC and MS Access.

FSharp.Management gives typed access to the file system, registry, WMI and Powershell.

Orchestration

The R Type Provider lets you access and orchestrate R packages inside F#.

With FCell you can easily access F# functions from Excel and Excel ranges from F#, either from Visual Studio or embedded in Excel itself.

The Hadoop provider allows typed access to data available on Hive instances.

There are also type providers for MATLAB, Java and TypeScript.

Fun

Type providers can also be fun: I’ve particularly enjoyed Ross’s Choose Your Own Adventure provider and, more recently, 2048:

[Screenshot: the 2048 type provider]

Write your own Type Provider

With Project Scaffold it’s easier than ever to write and publish your own F# type provider. I’d recommend starting with Michael Newton’s Type Providers from the Ground Up article and the video of his session at Skills Matter.

You can learn more from Michael and others at the Progressive F# Tutorials in London this November:

DDD North

The next DDD event is in Leeds on Saturday October 18th, where I’ll be talking about how to Write your own Compiler, hope to see you there :)

Categories: Programming

iOS Today Widget in Swift - Tutorial

Xebia Blog - Sat, 09/13/2014 - 18:51

Since the new app extensions of iOS 8 and Swift are both fairly new, I created a sample app that demonstrates how a simple Today extension can be made for iOS 8 in Swift. This widget will show the latest blog posts of the Xebia Blog within the today view of the notification center.

The source code of the app is available on GitHub. The app is not available on the App Store because it's an example app, though it might be quite useful for anyone following this Blog.

In this tutorial, I'll go through the following steps:

      • New Xcode project
      • Add dependencies with CocoaPods
      • Load RSS feed
      • Show items in a table view
      • Preferred content size
      • Open post in Safari
      • More and Less buttons
      • Caching

New Xcode project

Even though an iOS 8 extension is a separate binary from its app, it's not possible to create an extension without an app. That unfortunately makes it impossible to create stand-alone widgets, which this sample would otherwise be, since its only purpose is to show the latest posts in the Today view. So we create a new project in Xcode and implement a very simple view. The only thing the app will do for now is tell the user to swipe down from the top of the screen to add the widget.
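A minimal sketch of that placeholder screen could look like the following (the label outlet and the exact wording are illustrative assumptions, not taken from the sample project):

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var instructionLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Tell the user how to add the widget to the Today view
        instructionLabel.text = "Swipe down from the top of the screen and tap Edit to add the widget."
    }
}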

[Screenshot: the app's instruction screen in the iOS Simulator]

Time to add our extension target. From the File menu we choose New > Target... and from the Application Extension choose Today Extension.

[Screenshot: choosing the Today Extension template]

We'll name our target XebiaBlogRSSWidget and of course use Swift as Language.

[Screenshot: naming the XebiaBlogRSSWidget target]

The created target will have the following files:

      • TodayViewController.swift
      • MainInterface.storyboard
      • Info.plist

Since we'll be using a storyboard approach, we're fine with this setup. If however we wanted to create the view of the widget programmatically, we would delete the storyboard and replace the NSExtensionMainStoryboard key in the Info.plist with NSExtensionPrincipalClass, with TodayViewController as its value. Since (at the time of this writing) Xcode cannot find Swift classes as extension principal classes, we would also have to add the following line to our TodayViewController:

@objc (TodayViewController)

Add dependencies with CocoaPods

The widget will get the latest blog posts from the RSS feed of the blog: http://blog.xebia.com/feed/. That means we need something that can read and parse this feed. A search for RSS on CocoaPods gives us BlockRSSParser as the first result. It seems to do exactly what we want, so we don't look any further and create our Podfile with the following contents:

platform :ios, "8.0"

target "XebiaBlog" do

end

target "XebiaBlogRSSWidget" do
pod 'BlockRSSParser', '~> 2.1'
end

It's important to add the dependency only to the XebiaBlogRSSWidget target, since Xcode will build two binaries, one for the app itself and a separate one for the widget. If we added the dependency to all targets, it would be included in both binaries, increasing the total download size of our app. Always add only the dependencies that each of your app and widget target(s) actually needs.

Note: Cocoapods or Xcode might give you problems when you have a target without any Pod dependencies. In that case you may add a dependency to your main target and run pod install, after which you might be able to delete it again.

BlockRSSParser is written in Objective-C, which means we need to add an Objective-C bridging header in order to use it from Swift. We add the file XebiaBlogRSSWidget-Bridging-Header.h to our target and add the import:

#import "RSSParser.h"

We also have to tell the Swift compiler about it by setting the Objective-C Bridging Header build setting to the path of this file:

[Screenshot: the Objective-C Bridging Header build setting]

Load RSS feed

Finally time to do some coding. The generated TodayViewController has a function called widgetPerformUpdateWithCompletionHandler. This function gets called every once in a while to ask for new data. It also gets called right after viewDidLoad when the widget is displayed. The function has a completion handler as a parameter, which we need to call when we're done loading data. A completion handler is used instead of a return value so we can load our feed asynchronously.

In Objective-C we would write the following code to load our feed:

[RSSParser parseRSSFeedForRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://blog.xebia.com/feed/"]] success:^(NSArray *feedItems) {
  // success
} failure:^(NSError *error) {
  // failure
}];

In Swift this looks slightly different. Here is the complete implementation of widgetPerformUpdateWithCompletionHandler:

func widgetPerformUpdateWithCompletionHandler(completionHandler: ((NCUpdateResult) -> Void)!) {
  let url = NSURL(string: "http://blog.xebia.com/feed/")
  let req = NSURLRequest(URL: url)

  RSSParser.parseRSSFeedForRequest(req,
    success: { feedItems in
      self.items = feedItems as? [RSSItem]
      completionHandler(.NewData)
    },
    failure: { error in
      println(error)
      completionHandler(.Failed)
  })
}

We assign the result to a new optional variable of type RSSItem array:

var items : [RSSItem]?

The completion handler gets called with either NCUpdateResult.NewData if the call was successful or NCUpdateResult.Failed when the call failed. A third option is NCUpdateResult.NoData which is used to indicate that there is no new data. We'll get to that later in this post when we cache our data.

Show items in a table view

Now that we have fetched our items from the RSS feed, we can display them in a table view. We replace our normal View Controller with a Table View Controller in our storyboard, change the superclass of TodayViewController accordingly, and add three labels to the prototype cell. This is no different than in iOS 7, so I won't go into too much detail here (the complete project is on GitHub).
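As a minimal sketch of that change (assuming we keep the NCWidgetProviding conformance generated by the template), the class declaration could now look like this:

import UIKit
import NotificationCenter

class TodayViewController: UITableViewController, NCWidgetProviding {

    // The fetched feed items, assigned in widgetPerformUpdateWithCompletionHandler
    var items : [RSSItem]?
}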

We also create a new Swift class for our custom Table View Cell subclass and create outlets for our 3 labels.

import UIKit

class RSSItemTableViewCell: UITableViewCell {

    @IBOutlet weak var titleLabel: UILabel!
    @IBOutlet weak var authorLabel: UILabel!
    @IBOutlet weak var dateLabel: UILabel!

}

Now we can implement our Table View Data Source functions.

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return items.count
  }
  return 0
}

Since items is an optional, we use Optional Binding to check that it's not nil and then assign it to a temporary non-optional variable: let items. It's fine to give the temporary variable the same name as the class variable.

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  let cell = tableView.dequeueReusableCellWithIdentifier("RSSItem", forIndexPath: indexPath) as RSSItemTableViewCell

  if let item = items?[indexPath.row] {
    cell.titleLabel.text = item.title
    cell.authorLabel.text = item.author
    cell.dateLabel.text = dateFormatter.stringFromDate(item.pubDate)
  }

  return cell
}

In our storyboard we've set the type of the prototype cell to our custom class RSSItemTableViewCell and used RSSItem as its identifier, so here we can dequeue a cell as an RSSItemTableViewCell without being afraid it will be nil. We then use Optional Binding to get the item at our row index. We could also use forced unwrapping since we know for sure that items is not nil here:

let item = items![indexPath.row]

But the optional binding makes our code safer and prevents a future crash in case our code changes.

We also need to create the date formatter that we use above to format the publication dates in the cells:

let dateFormatter : NSDateFormatter = {
        let formatter = NSDateFormatter()
        formatter.dateStyle = .ShortStyle
        return formatter
    }()

Here we use a closure to create the date formatter and to initialise it with our preferred date style. The return value of the closure will then be assigned to the property.

Preferred content size

To make sure that we can actually see the table view, we need to set the preferred content size of the widget. We'll add a new function to our class that does this.

func updatePreferredContentSize() {
  preferredContentSize = CGSizeMake(CGFloat(0), CGFloat(tableView(tableView, numberOfRowsInSection: 0)) * CGFloat(tableView.rowHeight) + tableView.sectionFooterHeight)
}

Since widgets all have a fixed width, we can simply specify 0 for the width. The height is calculated by multiplying the number of rows by the row height. Since this sets the preferred height greater than the maximum allowed height of a Today widget, the widget will automatically shrink to that maximum. We also add the sectionFooterHeight to our calculation, which is 0 for now, but we'll add a footer later on.

When the preferred content size of a widget changes it will animate the resizing of the widget. To have the table view nicely animating along this transition, we add the following function to our class which gets called automatically:

override func viewWillTransitionToSize(size: CGSize, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
  coordinator.animateAlongsideTransition({ context in
    self.tableView.frame = CGRectMake(0, 0, size.width, size.height)
  }, completion: nil)
}

Here we simply set the size of the table view to the size of the widget, which is the first parameter.

Of course we still need to call our update method as well as reloadData on our tableView. So we add these two calls to our success closure when we load the items from the feed:

success: { feedItems in
  self.items = feedItems as? [RSSItem]
  
  self.tableView.reloadData()
  self.updatePreferredContentSize()

  completionHandler(.NewData)
},

Let's run our widget:

[Screenshot: the widget with default table view styling]

It works, but we can make it look better. Table views by default have a white background color and black text color, and that's no different within a Today widget. We'd like to match the style of the standard iOS Today widgets, so we give the table view a clear background and make the text of the labels white. Unfortunately that makes our labels practically invisible in the storyboard editor, since Xcode still shows a white background for views that have a clear background color.
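If you'd rather make these changes in code than in the storyboard, a minimal sketch (using the label outlets from the cell class above) could look like this:

// In viewDidLoad of TodayViewController:
tableView.backgroundColor = UIColor.clearColor()

// In RSSItemTableViewCell, for example in awakeFromNib:
titleLabel.textColor = UIColor.whiteColor()
authorLabel.textColor = UIColor.whiteColor()
dateLabel.textColor = UIColor.whiteColor()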

If we run again, we get a much better looking result:

[Screenshot: the widget with a clear background and white labels]

Open post in Safari

To open a Blog post in Safari when tapping on an item we need to implement the tableView:didSelectRowAtIndexPath: function. In a normal app we would then use the openURL: method of UIApplication. But that's not available within a Today extension. Instead we need to use the openURL:completionHandler: method of NSExtensionContext. We can retrieve this context through the extensionContext property of our View Controller.

override func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
  if let item = items?[indexPath.row] {
    if let context = extensionContext {
      context.openURL(item.link, completionHandler: nil)
    }
  }
}
More and Less buttons

Right now our widget takes up a bit too much space within the notification center. So let's change this by showing only 3 items by default and 6 items maximum. Toggling between the default and expanded state can be done with a button that we'll add to the footer of the table view. When the user closes and opens the notification center, we want to show it in the same state as it was before so we need to remember the expand state. We can use the NSUserDefaults for this. Using a computed property to read and write from the user defaults is a nice way to write this:

let userDefaults = NSUserDefaults.standardUserDefaults()

var expanded : Bool {
  get {
    return userDefaults.boolForKey("expanded")
  }
  set (newExpanded) {
    userDefaults.setBool(newExpanded, forKey: "expanded")
    userDefaults.synchronize()
  }
}

This allows us to use it just like any other property without noticing it gets stored in the user defaults. We'll also add variables for our button and number of default and maximum rows:

let expandButton = UIButton()

let defaultNumRows = 3
let maxNumberOfRows = 6

Based on the current value of the expanded property we'll determine the number of rows that our table view should have. Of course it should never display more than the actual items we have so we also take that into account and change our function into the following:

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return min(items.count, expanded ? maxNumberOfRows : defaultNumRows)
  }
  return 0
}

Then the code to make our button work:

override func viewDidLoad() {
  super.viewDidLoad()

  updateExpandButtonTitle()
  expandButton.addTarget(self, action: "toggleExpand", forControlEvents: .TouchUpInside)
  tableView.sectionFooterHeight = 44
}

override func tableView(tableView: UITableView, viewForFooterInSection section: Int) -> UIView? {
  return expandButton
}

Depending on the current value of our expanded property we will show either "Show less" or "Show more" as the button title.

func updateExpandButtonTitle() {
  expandButton.setTitle(expanded ? "Show less" : "Show more", forState: .Normal)
}

When we tap the button, we'll invert the expanded property, update the button title and preferred content size and reload the table view data.

func toggleExpand() {
  expanded = !expanded
  updateExpandButtonTitle()
  updatePreferredContentSize()
  tableView.reloadData()
}

And as a result we can now toggle the number of rows we want to see.

[Screenshots: the widget collapsed to 3 items and expanded to 6 items]

Caching

At this moment, each time we open the widget, we first get an empty list, and only once the feed is loaded are the items displayed. To improve this, we can cache the retrieved items and display those when the widget is opened, before we load the items from the feed. The TMCache library makes this possible with little effort. We can add it to our Podfile and bridging header the same way we did for the BlockRSSParser library.
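Concretely, that mirrors the Podfile and bridging header shown earlier; something like the following (assuming the pod is published as 'TMCache'; check CocoaPods for the exact name and version):

target "XebiaBlogRSSWidget" do
pod 'BlockRSSParser', '~> 2.1'
pod 'TMCache'
end

And in XebiaBlogRSSWidget-Bridging-Header.h:

#import "RSSParser.h"
#import "TMCache.h"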

Here too, a computed property works nicely for caching the items and hides the actual implementation:

var cachedItems : [RSSItem]? {
  get {
    return TMCache.sharedCache().objectForKey("feed") as? [RSSItem]
  }
  set (newItems) {
    TMCache.sharedCache().setObject(newItems, forKey: "feed")
  }
}

Since the RSSItem class of the BlockRSSParser library conforms to the NSCoding protocol, we can use instances of it directly with TMCache. When we retrieve the items from the cache for the first time, we'll get nil since the cache is empty. Therefore cachedItems needs to be an optional, and so does the downcast, which is why we use the as? operator.

We can now update the cache once the items are loaded simply by assigning a value to the property. So in our success closure we add the following:

self.cachedItems = self.items

And then to load the cached items, we add two more lines to the end of viewDidLoad:

items = cachedItems
updatePreferredContentSize()

And we're done. Now each time the widget is opened it will first display the cached items.

There is one last thing we can do to improve our widget. As mentioned earlier, the completionHandler of widgetPerformUpdateWithCompletionHandler can also be called with NCUpdateResult.NoData. Now that we have the items that we loaded previously we can compare newly loaded items with the old and use NoData in case they haven't changed. Here is our final implementation of the success closure:

success: { feedItems in
  if self.items == nil || self.items! != feedItems {
    self.items = feedItems as? [RSSItem]
    self.tableView.reloadData()
    self.updatePreferredContentSize()
    self.cachedItems = self.items
    completionHandler(.NewData)
  } else {
    completionHandler(.NoData)
  }
},

And since it's Swift, we can simply use the != operator to see if the arrays have unequal content.

Source code on GitHub

As mentioned in the beginning of this post, the source code of the project is available on GitHub with some minor changes that are not essential to this blog post. Of course pull requests are always welcome. Also let me know in the comments below if you'd like to see this widget released on the App Store.

R: ggplot – Plotting a single variable line chart (geom_line requires the following missing aesthetics: y)

Mark Needham - Sat, 09/13/2014 - 12:41

I’ve been learning how to do moving averages in R and having done that calculation I wanted to plot these variables on a line chart using ggplot.

The vector of rolling averages looked like this:

> rollmean(byWeek$n, 4)
  [1]  3.75  2.00  1.25  1.00  1.25  1.25  1.75  1.75  1.75  2.50  2.25  2.75  3.50  2.75  2.75
 [16]  2.25  1.50  1.50  2.00  2.00  2.00  2.00  1.25  1.50  2.25  2.50  3.00  3.25  2.75  4.00
 [31]  4.25  5.25  7.50  6.50  5.75  5.00  3.50  4.00  5.75  6.25  6.25  6.00  5.25  6.25  7.25
 [46]  7.75  7.00  4.75  2.75  1.75  2.00  4.00  5.25  5.50 11.50 11.50 12.75 14.50 12.50 11.75
 [61] 11.00  9.25  5.25  4.50  3.25  4.75  7.50  8.50  9.25 10.50  9.75 15.25 16.00 15.25 15.00
 [76] 10.00  8.50  6.50  4.25  3.00  4.25  4.75  7.50 11.25 11.00 11.50 10.00  6.75 11.25 12.50
 [91] 12.00 11.50  6.50  8.75  8.50  8.25  9.50  8.50  8.75  9.50  8.00  4.25  4.50  7.50  9.00
[106] 12.00 19.00 19.00 22.25 23.50 22.25 21.75 19.50 20.75 22.75 22.75 24.25 28.00 23.00 26.00
[121] 24.25 21.50 26.00 24.00 28.25 25.50 24.25 31.50 31.50 35.75 35.75 29.00 28.50 27.25 25.50
[136] 27.50 26.00 23.75

I initially tried to plot a line chart like this:

library(ggplot2)
library(zoo)
rollingMean = rollmean(byWeek$n, 4)
qplot(rollingMean) + geom_line()

which resulted in this error:

stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
Error: geom_line requires the following missing aesthetics: y

It turns out we need to provide an x and y value if we want to draw a line chart. In this case we’ll generate the ‘x’ value – we only care that the y values get plotted in order from left to right:

qplot(1:length(rollingMean), rollingMean, xlab ="Week Number") + geom_line()
[Chart: the rolling mean plotted with qplot]

If we want to use the ‘ggplot’ function then we need to put everything into a data frame first and then plot it:

ggplot(data.frame(week = 1:length(rollingMean), rolling = rollingMean),
       aes(x = week, y = rolling)) +
  geom_line()

[Chart: the rolling mean plotted with ggplot]

Categories: Programming

R: Calculating rolling or moving averages

Mark Needham - Sat, 09/13/2014 - 09:15

I’ve been playing around with some time series data in R and since there’s a bit of variation between consecutive points I wanted to smooth the data out by calculating the moving average.

I struggled to find a built-in function to do this, but came across Didier Ruedin’s blog post which described the following function to do the job:

mav <- function(x,n=5){filter(x,rep(1/n,n), sides=2)}

I tried plugging in some numbers to understand how it works:

> mav(c(4,5,4,6), 3)
Time Series:
Start = 1 
End = 4 
Frequency = 1 
[1]       NA 4.333333 5.000000       NA

Here I was trying to do a rolling average which took into account the last 3 numbers so I expected to get just two numbers back – 4.333333 and 5 – and if there were going to be NA values I thought they’d be at the beginning of the sequence.

In fact it turns out this is what the ‘sides’ parameter controls:

sides: for convolution filters only. If sides = 1 the filter coefficients are for past values only; if sides = 2 they are centred around lag 0. In this case the length of the filter should be odd, but if it is even, more of the filter is forward in time than backward.

So in our ‘mav’ function the rolling average looks at both sides of the current value rather than just at past values. We can tweak that to get the behaviour we want:

mav <- function(x,n=5){filter(x,rep(1/n,n), sides=1)}
> mav(c(4,5,4,6), 3)
Time Series:
Start = 1 
End = 4 
Frequency = 1 
[1]       NA       NA 4.333333 5.000000

The NA values are annoying for any plotting we want to do so let’s get rid of them:

> na.omit(mav(c(4,5,4,6), 3))
Time Series:
Start = 3 
End = 4 
Frequency = 1 
[1] 4.333333 5.000000

Having got to this point I noticed that Didier had referenced the zoo package in the comments, and it has a built-in function to take care of all this:

> library(zoo)
> rollmean(c(4,5,4,6), 3)
[1] 4.333333 5.000000

I also realised I can list all the functions in a package with the ‘ls’ function so I’ll be scanning zoo’s list of functions next time I need to do something time series related – there’ll probably already be a function for it!

> ls("package:zoo")
  [1] "as.Date"              "as.Date.numeric"      "as.Date.ts"          
  [4] "as.Date.yearmon"      "as.Date.yearqtr"      "as.yearmon"          
  [7] "as.yearmon.default"   "as.yearqtr"           "as.yearqtr.default"  
 [10] "as.zoo"               "as.zoo.default"       "as.zooreg"           
 [13] "as.zooreg.default"    "autoplot.zoo"         "cbind.zoo"           
 [16] "coredata"             "coredata.default"     "coredata<-"          
 [19] "facet_free"           "format.yearqtr"       "fortify.zoo"         
 [22] "frequency<-"          "ifelse.zoo"           "index"               
 [25] "index<-"              "index2char"           "is.regular"          
 [28] "is.zoo"               "make.par.list"        "MATCH"               
 [31] "MATCH.default"        "MATCH.times"          "median.zoo"          
 [34] "merge.zoo"            "na.aggregate"         "na.aggregate.default"
 [37] "na.approx"            "na.approx.default"    "na.fill"             
 [40] "na.fill.default"      "na.locf"              "na.locf.default"     
 [43] "na.spline"            "na.spline.default"    "na.StructTS"         
 [46] "na.trim"              "na.trim.default"      "na.trim.ts"          
 [49] "ORDER"                "ORDER.default"        "panel.lines.its"     
 [52] "panel.lines.tis"      "panel.lines.ts"       "panel.lines.zoo"     
 [55] "panel.plot.custom"    "panel.plot.default"   "panel.points.its"    
 [58] "panel.points.tis"     "panel.points.ts"      "panel.points.zoo"    
 [61] "panel.polygon.its"    "panel.polygon.tis"    "panel.polygon.ts"    
 [64] "panel.polygon.zoo"    "panel.rect.its"       "panel.rect.tis"      
 [67] "panel.rect.ts"        "panel.rect.zoo"       "panel.segments.its"  
 [70] "panel.segments.tis"   "panel.segments.ts"    "panel.segments.zoo"  
 [73] "panel.text.its"       "panel.text.tis"       "panel.text.ts"       
 [76] "panel.text.zoo"       "plot.zoo"             "quantile.zoo"        
 [79] "rbind.zoo"            "read.zoo"             "rev.zoo"             
 [82] "rollapply"            "rollapplyr"           "rollmax"             
 [85] "rollmax.default"      "rollmaxr"             "rollmean"            
 [88] "rollmean.default"     "rollmeanr"            "rollmedian"          
 [91] "rollmedian.default"   "rollmedianr"          "rollsum"             
 [94] "rollsum.default"      "rollsumr"             "scale_x_yearmon"     
 [97] "scale_x_yearqtr"      "scale_y_yearmon"      "scale_y_yearqtr"     
[100] "Sys.yearmon"          "Sys.yearqtr"          "time<-"              
[103] "write.zoo"            "xblocks"              "xblocks.default"     
[106] "xtfrm.zoo"            "yearmon"              "yearmon_trans"       
[109] "yearqtr"              "yearqtr_trans"        "zoo"                 
[112] "zooreg"
Categories: Programming

Making Tangible The Intangible

Sometimes the intangible obscures the tangible.

In Software Process and Measurement Cast 37, Kenji Hiranabe suggested that both software and the processes used to create it are intangible and opaque.  In SPaMCAST 36, Phil Armour put forth the thought that software is a container for knowledge.  Knowledge is only tangible when demonstrated or, in software terms, executed.  Combining both ideas means that a software product is a harness for knowledge to deliver business value, delivered using what is perceived to be an intangible process. The output of the process is only fully recognizable and testable for the brief period that the code executes. On top of all that, there is every expectation that the delivery of the product will be on time, on budget, of high quality, and managed in an orderly way.  No wonder most development managers have blood pressure issues!

Intangibility creates the need for managers and customers to apply controls to understand what is happening in a project and why it is happening.  The level of control required for managers and customers to feel comfortable will cost a project time, effort and money that could be better spent actually delivering functionality (or dare I say it, reducing the cost of the project).  Therefore, finding tools and techniques to make software, and the process used to create it, more tangible and at the same time more open to scrutiny is generally a good goal.  I use the term “generally” on purpose. The steps taken to increase tangibility and transparency need to be less invasive than those typically seen in command and control organizations. Otherwise, why would you risk the process change?

Agile projects have leveraged tools like wikis, standup meetings, big-picture measurements and customer involvement to increase visibility into the process, and functional code to make their knowledge visible.  I will attest that when well-defined agile processes are coupled with the proper corporate culture, an environment is created that is highly effective for breaking down walls.  But (and as you and I know, there had to be a “but”) the processes aren’t always well defined or applied with discipline, and not all organizational cultures can embrace agile methods. There needs to be another way to solve the tangibility and transparency problems without resorting to draconian command and control procedures that cost more than they are normally worth.

In his two SPaMCAST interviews, Mr. Hiranabe laid out two processes that are applicable across the perceived divide between waterfall and agile projects.  In 2007 on SPaMCAST 7, Kenji talked about mind mapping.  Mind mapping is a tool used to visualize and organize data and ideas.  It provides a method for capturing concepts, visualizing the relationships between them and, in doing so, making ideas and knowledge tangible.  In SPaMCAST 37, Kenji proposes a way to integrate kanban into the software development process.  According to Wikipedia,  “Kanban is a signaling system to trigger action which in Toyota Production System leverages physical cards as the signal”.  In other words, the signal is used to indicate when new tasks should start and, by inference, the status of current work.  Kenji does a great job of explaining how kanban can be used in system development.  The bottom line is that the signal, whether physical or electronic, provides a low-impact means of indicating how the development process is functioning and how functionality is flowing through the process. This increases the visibility of the process and makes it more tangible to those viewing from outside the trenches of IT.

Code that, when executed, does what was expected is the ultimate evidence that we have successfully captured knowledge and harnessed it to provide the functionality our customer requested.  The sprint demos in Scrum are a means of providing a glimpse into that knowledge and of building credibility with customers.  However, if your project is not leveraging Scrum, then daily or weekly builds with testing can be leveraged to provide some assurance that knowledge is being captured and assembled into a framework that functions the way you would expect.  You should note that demos and daily builds are not an either/or situation.  Do both!

The lack of tangibility and transparency in the process of capturing knowledge and building the knowledgeware we call software has been a sore point between developers and managers since the first line of code was written.  We are now finally getting to the point where we have recognized that we have to address these issues, not just from a command and control perspective, but also from a social engineering perspective.  Even if agile as a movement were to disappear tomorrow, there is no retreat from integrating tools and techniques like mind mapping and kanban while embracing software engineering within the social construct of the organization, and perhaps the wider world outside it.  Our goal is to make tangible that which is intangible, and visible that which is opaque.


Categories: Process Management

React In Modern Web Applications: Part 2

Xebia Blog - Fri, 09/12/2014 - 22:32

As mentioned in part 1 of this series, React is an excellent Javascript library for writing highly dynamic client-side components. However, React is not meant to be a complete MVC framework. Its strength is the View part of the Model-View-Controller pattern. AngularJS is one of the best known and most powerful Javascript MVC frameworks. One of the goals of the course was to show how to integrate React with AngularJS.

Integrating React with AngularJS

Developers can create reusable components in AngularJS by using directives. Directives are very powerful but complex, and the code quickly tends to become messy and hard to maintain. Therefore, replacing the directive content in your AngularJS application with React components is an excellent use-case for React. In the course, we created a Timesheet component in React. To use it with AngularJS, create a directive that loads the WeekEntryComponent:

angular.module('timesheet')
    .directive('weekEntry', function () {
        return {
            restrict: 'A',
            link: function (scope, element) {
                React.renderComponent(new WeekEntryComponent({
                                             companies: scope.companies,
                                             model: scope.model}), element.find('#weekEntry')[0]);
            }
        };
    });

As you can see, it's a simple bit of code. Since this is AngularJS code, I prefer not to use JSX and write the React-specific code in plain Javascript. However, if you prefer you could use JSX syntax (do not forget to add the JSX directive at the top of the file!). We are basically creating a component and rendering it on the element with id weekEntry. We pass two values from the directive scope to the WeekEntryComponent as properties. The component will render based on the values passed in.

Reacting to changes in AngularJS

However, the code as shown above has one fatal flaw: the component will not re-render when the companies or model values change in the scope of the directive. The reason is simple: React does not know these values are dynamic, and the component gets rendered once when the directive is initialised. React re-renders a component in only two situations:

  • If the value of a property changes: this allows parent components to force re-rendering of child components
  • If the state object of a component changes: a component can create a state object and modify it

State should be kept to a minimum, and preferably be located in only one component. Most components should be stateless. This makes for simpler, easier to maintain code.

To make the interaction between AngularJS and React dynamic, the WeekEntryComponent must be re-rendered explicitly by AngularJS whenever a value changes in the scope. AngularJS provides the watch function for this:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(new WeekEntryComponent({
                rows: newData[1].companies,
                clients: newData[0]
              }
            ), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

In this code, AngularJS watches two values in the scope, clients and model, and when one of the values is changed by a user action the function gets called, re-rendering the React component only when both values are defined. If most of the scope is needed from AngularJS, it is better to pass in the complete scope as a property and put a watch on the scope itself. Remember, React is very smart about re-rendering to the browser. Only if a change in the scope properties leads to a real change in the DOM will the browser be updated!

Accessing AngularJS code from React

When your component needs to use functionality defined in AngularJS you need to use a function callback. For example, in our code, saving the timesheet is handled by AngularJS, while the save button is managed by React. We solved this by creating a saveTimesheet function and adding it to the scope of the directive. In the WeekEntryComponent we added a new property: saveMethod:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(WeekEntryComponent({
              rows: newData[1].companies,
              clients: newData[0],
              saveMethod: scope.saveTimesheet
            }), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

Since this function is not going to change, it does not need to be watched. Whenever the save button in the TimeSheetComponent is clicked, the saveTimesheet function is called and the new state of the timesheet is saved.

What's next

In the next part we will look at how to deal with state in your components and how to avoid passing callbacks through your entire component hierarchy when changing state.

GAO Reports on ACA Site

Herding Cats - Glen Alleman - Fri, 09/12/2014 - 19:08

With all the speculation on what went wrong with the ACA site, and all the agile pundits making statements about how agile could have saved it, here are some actual facts beyond all the opinions - as Daniel Patrick Moynihan would remind us...

Every man is entitled to his own opinion, but not his own facts

 

[Excerpt from the GAO report]

and

[Excerpt from the GAO report]

The key findings are:

  • Oversight weakness and lack of adherence to planning requirements.
  • Acquisition planning carried high levels of risk.
  • Requirements for developing the FFM were not well defined.
  • Contract type carried risk for the government.
  • New IT development approach was supposed to save time, but carried unmitigated risk.
  • Contractor did not fully adhere to HHS procurement guidelines, missing opportunities to capture and consider risk important to program success.
  • Changing requirements and oversight contributed to significant cost growth, schedule delays, and reduced capabilities.
  • Unclear contract oversight responsibilities exacerbated cost growth.
  • Significant contractor performance issues without corrective actions.

So when we hear 

[Image: workshop statements]

Think about the domain, the value at risk, the complexity of the project, and the business process in which these could possibly be applicable. In fact this goes back to the core of the agile manifesto. And when we hear "pure agile," "Scrum Masters produce Scrum Slaves," "Mob Programming," "we all want a seat at the table with equal voices," and many other "opinions," remember Moynihan and ask for facts, domain, past performance, experience, and examples of success.

As agile starts to scale to larger domains and the government seeks better ways to develop software beyond the failed processes described above - what parts of this manifesto are applicable outside of a small group of people in the same room with the customer directing the work of the developers?

As my colleague (a former NASA Cost Director) reminds our team, if you see something that doesn't make sense - follow the money. That applies in the case of the ACA and in the case of the workshop outcomes above.

Related articles:

  • Can Agile Be Integrated with Governance Based Development Processes?
  • Agile Means ...
  • Can Enterprise Agile Be Bottom Up?
  • Agile as a Systems Engineering Paradigm
  • How To Create Confusion About Root Causes
  • Agile and the Federal Government
  • Quote of the Day - Just a Little Process Check
  • Sailing and Agile
Categories: Project Management

Stuff The Internet Says On Scalability For September 12th, 2014

Hey, it's HighScalability time:


Each dot in this image is an entire galaxy containing billions of stars. What's in there?
  • Quotable Quotes:
    • mseepgood: Or "another language that's becoming popular, Node.js"
    • Joe Moreno: What good are billions of cycles of CPU power that make me wait. I shouldn't have to wait longer and longer due to launching, buffering, syncing, I/O and latency.
    • @stevecheney: Apple Pay is the magic that integrated hardware / software produces. No one else in the world can do this.
    • @etherealmind: Next gen Intel Xeon E5 V3 CPU includes packet processor for 40GBE, 30x increase in OpenSSL crypto, 25% increase in DPDK perf. #IDF14
    • @pbailis: There's actually an interesting question in understanding when to break "sharing" -- at core, NUMA domain, server, or cluster level?
    • @fmueller_bln: Just wait some minutes for vagrant to provision a vm with puppet and you’ll know why docker may be better option for dev machines...

  • Encryption will make fighting the spam war much costlier reveals Mike Hearn in an awesome post: A brief history of the spam war, where he gives insightful color commentary of the punch counter punch between World Heavyweight Champion Google and the challenger, Clever Spammer.  Mike worked in the Gmail trenches for over four years and recommends: make sending email cost money; use money to create deposits using bitcoin. 

  • jeswin: No other browser can practically implement or support Dart. If they do their implementation will be slower than Google's, and will get classified as inferior. < Ignoring the merits of Dart, this is an interesting ecosystem effect. By rating sites for non quality of content reasons Google can in effect select for characteristics over which they have a comparative advantage. It's not an arms length transaction. 

  • Dateline Seattle. Social media users execute a coordinated denial of service attack on cell networks, preventing those in need from accessing emergency services. Who are these terrorists? Football fans. City of Seattle asks people to stop streaming videos, posting photos because of football. Tweets, Instagram, YouTube, and Snapchat are overloading the cell networks so calls can't get through. Should the cell network expand capacity? Should there be an app tax to constrain demand? Should users pay per packet? As a 49ers fan I have another suggestion...move games to a different venue, perhaps the moon. That will help.

  • Are you a militant cable cutter who thinks the future of  TV is the Internet? Not so fast says Dan Rayburn in Internet Traffic Records Could Be Broken This Week Thanks To Apple, NFL, Sony, Xbox, EA and Others: Delivering video over the Internet at the same scale and quality that you do over a cable network isn’t possible. The Internet is not a cable network and if you think otherwise, you will be proven wrong this week. We’re going to see long download times, more buffering of streams, more QoS issues and ISPs that will take steps to deal with the traffic. 

  • Ted Nelson takes on the impossible in on How Bitcoin Actually Works (Computers for Cynics #7). And he does an excellent job, sharing his usual insight with a twist. The title is misleading however. There's hardly any cynicism. How disappointing! Ted is clearly impressed with the design and implementation of bitcoin. For good reason. No matter what you think of bitcoin and its potential role in society, it is a very well thought out and impressive piece of technology. On par with Newton, Mr. Nelson suggests. If you watch this you'll probably realize that you don't actually understand bitcoin, even if you think you do, and that's a good thing.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Announcing $100,000 for Startups on Google Cloud Platform

Google Code Blog - Fri, 09/12/2014 - 16:12
This post originally appeared on the Google Cloud Platform blog 
by Julie Pearl, Director, Developer Relations

Today at the Google for Entrepreneurs Global Partner Summit, Urs Hölzle, Senior Vice President of Technical Infrastructure and Google Fellow, announced Google Cloud Platform for Startups. This new program will help eligible early-stage startups take advantage of the cloud and get resources to quickly launch and scale their idea by receiving $100,000 in Cloud Platform credit, 24/7 support, and access to our technical solutions team.

This offer is available to startups around the world through top incubators, accelerators and investors. We are currently working with over 50 global partners to provide this offer to startups that have less than $5 million in funding and less than $500,000 in annual revenue. In addition, we will continue to add more partners over time.

This offer supports our core Google Cloud Platform philosophy: we want developers to focus on code; not worry about managing infrastructure. Starting today, startups can take advantage of this offer and begin using the same infrastructure platform we use at Google.

Thousands of startups have built successful applications on Google Cloud Platform and those applications have grown to serve tens of millions of users. It has been amazing to watch Snapchat send over 700 million photos and videos a day and Khan Academy teach millions of students. Another example, Headspace, is helping millions of people keep their minds healthier and happier using Cloud Platform for Startups. We look forward to helping the next generation of startups launch great products.

For more information on Google Cloud Platform for Startups, visit http://cloud.google.com/startups.

Posted by Katie Miller, Google Developer Platform Team
Categories: Programming

Azure: SQL Databases, API Management, Media Services, Websites, Role Based Access Control and More

ScottGu's Blog - Scott Guthrie - Fri, 09/12/2014 - 07:14

This week we released a major set of updates to Microsoft Azure. This week’s updates include:

  • SQL Databases: General Availability of Azure SQL Database Service Tiers
  • API Management: General Availability of our API Management Service
  • Media Services: Live Streaming, Content Protection, Faster and Cost Effective Encoding, and Media Indexer
  • Web Sites: Virtual Network integration, new scalable CMS with WordPress and updates to Web Site Backup in the Preview Portal
  • Role-based Access Control: Preview release of role-based access control for Azure Management operations
  • Alerting: General Availability of Azure Alerting and new alerts on events

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

SQL Databases: General Availability of Azure SQL Database Service Tiers

I’m happy to announce the General Availability of our new Azure SQL Database service tiers - Basic, Standard, and Premium.  The SQL Database service within Azure provides a compelling database-as-a-service offering that enables you to quickly innovate & stand up and run SQL databases without having to manage or operate VMs or infrastructure.

Today’s SQL Database Service Tiers all come with a 99.99% SLA, and databases can now grow up to 500GB in size.

Each SQL Database tier now guarantees a consistent performance level that you can depend on within your applications – avoiding the need to worry about “noisy neighbors” who might impact your performance from time to time.

Built-in point-in-time restore support now provides you with the ability to automatically re-create databases at a certain point of time (giving you much more backup flexibility and allowing you to restore to exactly the point before you accidentally did something bad to your data).

Built-in auditing support enables you to gain insight into events and changes that occur with the databases you host.

Built-in active geo-replication support, available with the premium tier, enables you to create up to 4 readable, secondary, databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well:

[Diagram: active geo-replication to secondary regions]

One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster, etc).  The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions:

[Diagram: shifting the primary database to a secondary region]

This provides a robust business continuity offering, and enables you to run mission critical solutions in the cloud with confidence.

More Flexible Pricing

SQL Databases are now billed on a per-hour basis – allowing you to quickly create and tear down databases, and dynamically scale up or down databases even more cost effectively.

Basic Tier databases support databases up to 2GB in size and cost $4.99 for a full month of use.  Standard Tier databases support 250GB databases and now start at $15/month (there are also higher performance standard tiers at $30/month and $75/month). Premium Tier databases support 500GB databases as well as the active geo-replication feature and now start at $465/month.

The below table provides a quick look at the different tiers and functionality:

[Table: SQL Database service tiers and features]

This page provides more details on how to think about DTU performance with each of the above tiers, and provides benchmark details on the number of transactions supported by each of the above service tiers and performance levels.

During the preview, we’ve heard from some ISVs, which have a large number of databases with variable performance demands, that they need the flexibility to share DTU performance resources across multiple databases as opposed to managing tiers for databases individually.  For example, some SaaS ISVs may have a separate SQL database for each customer and as the activity of each database varies, they want to manage a pool of resources with a defined budget across these customer databases.  We are working to enable this scenario within the new service tiers in a future service update. If you are an ISV with a similar scenario, please click here to sign up to learn more.

Learn more about SQL Databases on Azure here.

API Management Service: General Availability Release

I’m excited to announce the General Availability of the Azure API Management Service.

In my last post I discussed how API Management enables customers to securely publish APIs to developers and accelerate partner adoption.  These APIs can be used from mobile and client applications (on any device) as well as other cloud and service based applications.

The API management service supports the ability to take any APIs you already have (either in the cloud or on-premises) and publish them for others to use.  The API Management service enables you to:

  • Throttle, rate limit and quota your APIs
  • Gain analytic insights on how your APIs are being used and by whom
  • Secure your APIs using OAuth or key-based access
  • Track the health of your APIs and quickly identify errors
  • Easily expose a developer portal for your APIs that provides documentation and test experiences to developers who want to use your APIs

Today’s General Availability provides a formal SLA for Standard tier services.  We also have a developer tier of the service that you can use, starting at just $49 per month.

OAuth support in the Developer Portal

The API Management service provides a developer console that enables a great on-boarding and interactive learning experience for developers who want to use your APIs.  The developer console enables you to easily expose documentation as well enable developers to try/test your APIs.

With this week’s GA release we are also adding support that enables API publishers to register their OAuth Authorization Servers for use in the console, which in turn allows developers to sign in with their own login credentials when interacting with your API - a critical feature for any API that supports OAuth. All normative authorization grant types are supported plus scopes and default scopes.

[Screenshot: OAuth configuration in the developer portal]

For more details on how to enable OAuth 2 support with API Management and integration in the new developer portal, check out this tutorial.

Click here to learn more about the API Management service and try it out for free.

Media Services: Live Streaming, DRM, Faster Cost Effective Encoding, and Media Indexer

This week we are excited to announce the public preview of Live Streaming and Content Protection support with Azure Media Services.

The same Internet scale streaming solution that leading international broadcasters used to live stream the 2014 Winter Olympic Games and 2014 FIFA World Cup to tens of millions of customers globally is now available in public preview to all Azure customers. This means you can now stream live events of any size with the same level of scalability, uptime, and reliability that was available to the Olympics and World Cup.

DRM Content Protection

This week Azure Media Services is also introducing a new Content Protection offering which features both static and dynamic encryption with first party PlayReady license delivery and an AES 128-bit key delivery service.  This makes it easy to DRM protect both your live and pre-recorded video assets – and have them be available for users to easily watch them on any device or platform (Windows, Mac, iOS, Android and more).

Faster and More Cost Effective Media Encoding

This week, we are also introducing faster media encoding speeds and more cost-effective billing. Our enhanced Azure Media Encoder is designed for premium media encoding and is billed based on output GBs. Our previous encoder was billed on both input + output GBs, so the shift to output only billing will result in a substantial price reduction for all of our customers.

To help you further optimize your encoding workflows, we’re introducing Basic, Standard, and Premium Encoding Reserved units, which give you more flexibility and allow you to tailor the encoding capability you pay for to the needs of your specific workflows.

Media Indexer

Additionally, I’m happy to announce the General Availability of Azure Media Indexer, a powerful, market differentiated content extraction service which can be used to enhance the searchability of audio and video files.  With Media Indexer you can automatically analyze your media files and index the audio and video content in them. You can learn more about it here.

More Media Partners

I’m also pleased to announce the addition this week of several media workflow partners and client players to our existing large set of media partners:

  • Azure Media Services and Telestream’s Wirecast are now fully integrated, including a built-in destination that makes it quick and easy to send content from Wirecast’s live streaming production software to Azure.
  • Similarly, Newtek’s Tricaster has also been integrated into the Azure platform, enabling customers to combine the high production value of Tricaster with the scalability and reliability of Azure Media Services.
  • Cires21 and Azure Media have paired up to help make monitoring the health of your live channels simple and easy, and the widely-used JW player is now fully integrated with Azure to enable you to quickly build video playback experiences across virtually all platforms.
Learn More

Visit the Azure Media Services site for more information and to get started for free.

Websites: Virtual Network Integration, new Scalable CMS with WordPress

This week we’ve also released a number of great updates to our Azure Websites service.

Virtual Network Integration

Starting this week you can now integrate your Azure Websites with Azure Virtual Networks. This support enables your Websites to access resources attached to your virtual networks.  For example: this means you can now have a Website directly connect to a database hosted in a non-public VM on a virtual network.  If your Virtual Network is connected to your on-premises network (using a Site-to-Site software VPN or ExpressRoute dedicated fiber VPN) you can also now have your Website connect to resources in your on-premises network as well.

The new Virtual Network support enables both TCP and UDP protocols and will work with your VNET DNS. Hybrid Connections and Virtual Network integration are compatible, so you can also mix both in the same Website. The new Virtual Network support for Websites is being released this week in preview. Standard web hosting plans can have up to 5 virtual networks enabled. A website can only be connected to one virtual network at a time, but there is no restriction on the number of websites that can be connected to a virtual network.
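
As a rough illustration of what this unlocks, the Python sketch below simply checks TCP reachability of a database VM on a private VNET address from application code. The IP address and port are hypothetical placeholders, not values taken from this release.

    # Minimal connectivity check from a Website to a VM that is only reachable
    # inside the virtual network. Substitute the private address of your own VM.
    import socket

    PRIVATE_DB_HOST = "10.0.1.4"   # hypothetical non-public address on the VNET
    PRIVATE_DB_PORT = 1433         # e.g. SQL Server's default port

    try:
        with socket.create_connection((PRIVATE_DB_HOST, PRIVATE_DB_PORT), timeout=5):
            print("Reached the private endpoint through the virtual network")
    except OSError as exc:
        print(f"Could not reach {PRIVATE_DB_HOST}:{PRIVATE_DB_PORT}: {exc}")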

You can configure a Website to use a Virtual Network using the new Preview Azure Portal (http://portal.azure.com). Click the “Virtual Network” tile in your website to bring up a Virtual Network blade that you can use to either create a new virtual network or attach to an existing one you already have:

image

Note that an Azure Website requires that your Virtual Network has a configured gateway and Point-to-Site enabled. It will remain grayed out in the UI above until you have enabled this.

Scalable CMS with WordPress

This week we also released support for a Scalable CMS solution with WordPress running on Azure Websites. Scalable CMS with WordPress provides the fastest way to build an optimized and hassle-free WordPress Website. It is architected so that your WordPress site loads fast and can support millions of page views a month, and you can easily scale up or scale out as your traffic increases.

It is pre-configured to use Azure Storage, which can be used to store your site’s media library content, and can be easily configured to use the Azure CDN.  Every Scalable CMS site comes with auto-scale, staged publishing, SSL, custom domains, Webjobs, and backup and restore features of Azure Websites enabled. Scalable WordPress also allows you to use Jetpack to supercharge your WordPress site with powerful features available to WordPress.com users.

You can now easily deploy Scalable CMS with WordPress solutions on Azure via the Azure Gallery integrated within the new Azure Preview Portal (http://portal.azure.com).  When you select it within the portal it will walk you through automatically setting up and deploying a complete solution on Azure:

image

Scalable WordPress is ideal for Web developers, creative agencies, businesses and enterprises wanting a turn-key solution that maximizes performance of running WordPress on Azure Websites. It’s fast, simple and secure WordPress hosting on Azure Websites.

Updates to Website Backup

This week we also updated our built-in Backup feature within Azure Websites with a number of nice enhancements.  Starting today, you can now:

  • Choose the exact destination of your backups, including the specific Storage account and blob container you wish to store your backups within.
  • Choose to back up SQL databases or MySQL databases that are declared in the connection strings of the website.
  • On the restore side, you can now restore to both a new site and to a deployment slot on a site. This makes it possible to verify your backup before you make it live.

These new capabilities make it easier than ever to have a full history of your website and its associated data.

Security: Role Based Access Control for Management of Azure

As organizations move more and more of their workloads to Azure, one of the most requested features has been the ability to control which cloud resources different employees can access and what actions they can perform on those resources.

Today, I’m excited to announce the preview release of Role Based Access Control (RBAC) support in the Azure platform. RBAC is now available in the Azure preview portal and can be used to control access in the portal or access to the Azure Resource Manager APIs. You can use this support to limit the access of users and groups by assigning them roles on Azure resources. Highlights include:

  • A subscription is no longer the access management boundary in Azure. In April, we introduced Resource Groups, a container to group resources that share a lifecycle. Now, you can grant users access on a resource group as well as on individual resources like specific Websites or VMs.
  • You can now grant access to both users and groups. RBAC is based on Azure Active Directory, so if your organization already uses groups in Azure Active Directory or Windows Server Active Directory for access management, you will be able to manage access to Azure the same way.

Below are some more details on how this works and can be enabled.

Azure Active Directory

Azure Active Directory is our directory service in the cloud.  You can create organizational tenants within Azure Active Directory and define users and groups within it – without having to have any existing Active Directory setup on-premises.

Alternatively, you can also sync (or federate) users and groups from your existing on-premises Active Directory to Azure Active Directory, and have your existing users and groups automatically be available for use in the cloud with Azure, Office 365, as well as over 2000 other SaaS based applications:

image

All users that access your Azure subscriptions are now present in the Azure Active Directory to which the subscription is associated. This enables you to manage what they can do, as well as revoke their access to all Azure subscriptions by disabling their account in the directory.

Role Permissions

In this first preview we are pre-defining three built-in Azure roles that give you a choice of granting restricted access:

  • An Owner can perform all management operations for a resource and its child resources, including access management.
  • A Contributor can perform all management operations for a resource, including creating and deleting resources. A Contributor cannot grant access to others.
  • A Reader has read-only access to a resource and its child resources. A Reader cannot read secrets.

In the RBAC model, users who have been configured to be the service administrator and co-administrators of an Azure subscription are mapped as belonging to the Owners role of the subscription. Their access to both the current and preview management portals remains unchanged.

Additional users and groups that you then assign to the new RBAC roles will have only those permissions, and will only be able to manage Azure resources using the new Azure preview portal and Azure Resource Manager APIs. RBAC is not supported in the current Azure management portal or via older management APIs (since neither of these was built with role-based security in mind).
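
To make the scoping model concrete, here is a small, purely illustrative Python sketch (a toy model, not the Azure Resource Manager API) of how a role granted at the subscription level flows down to every resource, while a more permissive role can be layered onto a single resource. The users, scopes and resource names are hypothetical.

    # Toy model of RBAC scope inheritance: an assignment made on a subscription
    # or resource group applies to everything beneath it, and the most
    # permissive role granted at any level wins for that resource.
    ROLE_RANK = {"Reader": 1, "Contributor": 2, "Owner": 3}

    assignments = {
        # scope (tuple of path segments) -> {user: role}; names are hypothetical
        ("sub",): {"David": "Reader", "Fred": "Reader"},
        ("sub", "prod-rg", "web-vm"): {"David": "Contributor"},
    }

    def effective_role(user, scope):
        """Most permissive role granted at this scope or any ancestor scope."""
        best = None
        for i in range(1, len(scope) + 1):
            role = assignments.get(scope[:i], {}).get(user)
            if role and (best is None or ROLE_RANK[role] > ROLE_RANK[best]):
                best = role
        return best

    print(effective_role("David", ("sub", "prod-rg", "web-vm")))   # Contributor
    print(effective_role("David", ("sub", "prod-rg", "sql-db")))   # Reader (inherited)
    print(effective_role("Fred",  ("sub", "dev-rg", "test-site"))) # Reader

The real service resolves users and groups against Azure Active Directory, but the shape is the same as the walkthrough that follows: a broad Reader grant at the subscription flows down everywhere, and a narrower Contributor grant applies only to the one resource it was made on.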

Restricting Access based on Role Based Permissions

Let’s assume that your team is using Azure for development, as well as to host the production instance of your application. When doing this you might want to separate the resources employed in development and testing from the production resources using Resource Groups.

You might want to allow everyone in your team to have a read-only view of all resources in your Azure subscription – including the ability to read and review production analytics data. You might then want to only allow certain users to have write/contributor access to the production resources.  Let’s look at how to set this up:

Step 1: Setting up Roles at the Subscription Level

We’ll begin by mapping some users to roles at the subscription level.  These will then by default be inherited by all resources and resource groups within our Azure subscription.

To set this up, open the Billing blade within the Preview Azure Portal (http://portal.azure.com), and within the Billing blade select the Azure subscription that you wish to set up roles for:

image

Then scroll down within the blade of the subscription you opened, and locate the Roles tile within it:

image

Clicking the Roles tile will bring up a blade that lists the pre-defined roles we provide by default (Owner, Contributor, Reader). You can click any of the roles to bring up a list of the users assigned to the role. Clicking the Add button will then allow you to search your Azure Active Directory and add either a user or group to that role.

Below I’ve opened up the default Reader role and added David and Fred to it:

image

Once we do this, David and Fred will be able to log into the Preview Azure Portal and will have read-only access to the resources contained within our subscription. They will not be able to make any changes, though, nor will they be able to see secrets (passwords, etc.).

Note that in addition to adding users and groups from within your directory, you can also use the Invite button above to invite users who are not currently part of your directory, but who have a Microsoft Account (e.g. scott@outlook.com), to also be mapped into a role.

Step 2: Setting up Roles at the Resource Level

Once you’ve defined the default role mappings at the subscription level, they will by default apply to all resources and resource groups contained within it. 

If you wish to scope permissions even further at just an individual resource (e.g. a VM or Website or Database) or at a resource group level (e.g. an entire application and all resources within it), you can also open up the individual resource/resource-group blade and use the Roles tile within it to further specify permissions.

For example, earlier we granted David reader role access to all resources within our Azure subscription.  Let’s now grant him contributor role access to just an individual VM within the subscription.  Once we do this he’ll be able to stop/start the VM as well as make changes to it.

To enable this, I’ve opened up the blade for the VM below.  I’ve then scrolled down the blade and found the Roles tile within the VM.  Clicking the contributor role within the Roles tile will then bring up a blade that allows me to configure which users will be contributors (meaning have read and modify permissions) for this particular VM.  Notice below how I’ve added David to this:

image

Using this resource/resource-group level approach enables you to have really fine-grained access control permissions on your resources.

Command Line and API Access for Azure Role Based Access Control

The enforcement of the access policies that you configure using RBAC is done by the Azure Resource Manager APIs.  Both the Azure preview portal as well as the command line tools we ship use the Resource Manager APIs to execute management operations. This ensures that access is consistently enforced regardless of what tools are used to manage Azure resources.

With this week’s release we’ve included a number of new PowerShell APIs that enable you to automate setting up as well as controlling role-based access.

Learn More about Role Based Access

Today’s Role Based Access Control preview provides a lot more flexibility in how you manage the security of your Azure resources. It is easy to set up and configure. And because it integrates with Azure Active Directory, you can easily sync/federate it with the existing Active Directory configuration you might already have in your on-premises environment.

Getting started with the new Azure Role Based Access Control support is as simple as assigning the appropriate users and groups to roles on your Azure subscription or individual resources. You can read more detailed information on the concepts and capabilities of RBAC here. Your feedback on the preview features is critical for all improvements and new capabilities coming in this space, so please try out the new features and provide us with your feedback.

Alerts: General Availability of Azure Alerting and new Alerts on Events support

I’m excited to announce the release of Azure Alerting to General Availability. Azure Alerting supports the ability to create alert thresholds on metrics that you are interested in, and then have Azure automatically send an email notification when that threshold is crossed. As part of the general availability release, we are removing the 10 alert rule cap per subscription.
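
Conceptually, a metric alert rule is just a threshold watched over time. The Python sketch below is only a mental model of that behaviour, not the Azure alerting API; the metric samples, threshold and email address are made up.

    # Conceptual model of a metric threshold alert: fire a notification the
    # first time the watched metric crosses the configured threshold.
    rule = {
        "metric": "CPU percentage",
        "threshold": 80.0,            # hypothetical threshold
        "notify": "ops@example.com",  # hypothetical recipient
    }

    def evaluate(rule, samples):
        for timestamp, value in samples:
            if value > rule["threshold"]:
                print(f"ALERT to {rule['notify']}: "
                      f"{rule['metric']} = {value} at {timestamp}")
                return
        print("No alert: threshold never crossed")

    evaluate(rule, [("09:00", 43.0), ("09:05", 67.5), ("09:10", 91.2)])
    # ALERT to ops@example.com: CPU percentage = 91.2 at 09:10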

Alerts are available in the full Azure portal by clicking Management Services in the left navigation bar:

image

Also, alerting is available on most of the resources in the Azure preview portal:

image

You can create alerts on metrics from 8 different services today (and we are adding more all the time):

  • Cloud Services
  • Virtual Machines
  • Websites
  • Web hosting plans
  • Storage accounts
  • SQL databases
  • Redis Cache
  • DocumentDB accounts

In addition to general availability for alerts on metrics, we are also previewing the ability to create alerts on operational events. This means you can get an email if someone stops your website, if your virtual machines are deleted, or if your Azure Resource Manager template deployment failed. Like alerts on metrics, you can route these alerts to the service and co-administrators, or to a custom email address you provide. You can configure these events on a resource in the Azure Preview Portal. We have enabled this within the Portal for Websites – we’ll be extending it to all resources in the future.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Community of Practice: Killers

A community of interest needs a common focus!

Not every community of practice (COP) is successful, or at least stays successful. While there can be many issues that cause a COP to fail, three very typical problems kill off COPs.

  1. Poor leadership – All groups have a leader that exerts influence on the direction of the group. The best COP leaders I have observed (best being defined in terms of the health of the COP) are servant leaders. In a community of practice, the servant leader will work to empower and serve the community. Empowerment of the COP is reflected in removing impediments and coaching the team so it meets its goals of connection, encouragement and sharing. In COPs with a poor leader, the goals of the group generally shift towards control of the message or the aggrandizement of a specific group or person. Earlier in my career I was involved with a local SPIN (software process improvement network) group that had existed for several years. The SPIN group was a community of practice that drew members from 20 to 30 companies in my area. At one point a leader emerged whose goal was to generate sales leads for himself. Membership fell precipitously before a new leader emerged and re-organized the remnants.
  2. Lack of a common interest – A group put together without a common interest reminds me of sitting in the back of a station wagon with my four siblings on long Sunday drives in the country. Not exactly pointless, but to be avoided if possible. A community of practice without a common area of interest isn’t a community of practice.
  3. Natural life cycle – Ideas and groups have a natural life cycle. When a COP’s purpose passes or fades, the group should either be re-purposed or shut down. As an example, the SPIN mentioned above reached its zenith during the heyday of the CMMI and faded as that framework became less popular. I have often observed that as a COP’s original purpose wanes, the group seeks to preserve itself by finding a new purpose. Re-purposing often fails because the passion the group had for the original concept does not transfer. Re-purposing works best when the ideas being pursued are a natural evolutionary path. I recently observed a Scrum Master COP that was in transition. Scrum was institutionalized within the organization and there was a general feeling that the group had run its course unless something was done to energize it. The group decided to begin exploring the Scaled Agile Framework as a potential extension of their common interest in Agile project and program management.

In general, a community of practice is not an institution that lasts forever. Ideas and groups follow a natural life cycle. COPs generally hit their zenith when members get the most benefit from sharing and connecting. The amount of benefit that a member of the community perceives they get from participation is related to the passion they have for the group. As ideas and concepts become mainstream or begin to fade, the need for a COP can also fade. As passion around the idea fades, leaders can emerge that have motives other than serving the community, which hastens the breakdown of the COP. When the need for a COP begins to fade, it is generally time to disband or re-purpose it.


Categories: Process Management

Cost, Value & Investment: How Much Will This Project Cost? Part 2

This post is continued from Cost, Value & Investment: How Much Will This Project Cost, Part 1

We’ve established that you need to know how much this project will cost. I’m assuming you have more than a small project.

If you have to estimate a project, please read the series starting at Estimating the Unknown: Dates or Budget, Part 1. Or, you could get Essays on Estimation. I’m in the midst of fixing it so it reads like a real book. I have more tips on estimation there.

For a program, each team does this for its own ranked backlog:

  • Take every item on the backlog and roadmap, and use whatever relative sizing approach you use now to estimate. You want to use relative sizing, because you need to estimate everything on the backlog.
  • Tip: If each item on the backlog/roadmap is about a team-day or smaller, this is easy. The farther out you go, the more uncertainty you have and the more difficult the estimation is. That’s why this is a tip.
  • Walk through the entire backlog, estimating as you proceed. Don’t worry about how large the features are. Decide as a team whether each feature is larger than two or three team-days; if it is, keep a count of those large features. The larger the features, the more uncertainty you have in your estimate.
  • Add up your estimate of relative points. Add up the number of large features. Now, you have a relative estimate, which, based on your previous velocity, means something to you. You also have a number of large features, which will decrease the confidence in that estimate.
  • Do you have 50 features, of which only five are large? Maybe you have 75% confidence in your estimate. On the other hand, maybe all your features are large. You might only have 5-10% confidence in the estimate. Why? Because the team hasn’t completed any work yet and you have no idea how long your work will take. (A minimal sketch of this arithmetic follows this list.)
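
For teams that like to see the arithmetic, here is a minimal Python sketch of the count-and-discount approach above. The backlog contents and the confidence rule of thumb are invented purely for illustration; they are not part of the original advice.

    # Back-of-the-envelope version of the backlog walk-through above: sum the
    # relative points and let the share of large features knock down confidence.
    backlog = [
        # (feature, relative points, is_large i.e. bigger than 2-3 team-days)
        ("login", 3, False), ("reporting", 8, True), ("search", 5, False),
        ("billing", 13, True), ("admin UI", 5, False),
    ]

    total_points = sum(points for _, points, _ in backlog)
    large_count = sum(1 for _, _, is_large in backlog if is_large)
    large_share = large_count / len(backlog)

    # Crude rule of thumb: start at 75% confidence and lose it as large,
    # uncertain features dominate the backlog (with a 5% floor when every
    # feature is large and nothing has been tested by real work yet).
    confidence = max(0.05, 0.75 * (1 - large_share))

    print(f"{total_points} points, {large_count} large features, "
          f"~{confidence:.0%} confidence in the prediction")
    # 34 points, 2 large features, ~45% confidence in the prediction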

Technical Program with Communities of Practice

As a software program team, get together, and assess the total estimate. Why the program team? Because the program team is the cross-functional team whose job is to get the software product to done. It’s not just the software teams—it’s everyone involved in the technical program team.

Note: the teams have to trust Sally, Joe, Henry and Tom to represent them to the software program team. If the teams do not, no one has confidence in any estimate at all. The estimate is a total SWAG.

The delegates to the program team know what their estimates mean individually. Now, they “add” them together, whatever that means. Do you realize why we call this a prediction? Do Sally, Joe, Henry, and Tom have feature teams, service teams, or component teams? Do they have to add time for the experiments as they transition to agile? Do they need to gain the trust of their management? Or, are they already experienced agile feature teams?

The more experienced the teams are at agile, the better the estimate is. The more the teams are feature teams, the better the estimate is. If you are new to agile, don’t have feature teams, or have a mixed program (agile and non-agile teams), you know that estimate is way off.

Is it time for the software program manager to say, “We have an initial order-of-magnitude prediction. But we haven’t tested this estimate with any work, so we don’t know how accurate our estimates are. Right now our confidence is about 5-10% (or whatever it is) in our estimate. We’ve spent only a day or so estimating, because we would rather spend time delivering than estimating. We need to do a week or two of work, deliver a working skeleton, and then we can tell you more about our prediction. We can better our prediction as we proceed. Remember, back in the waterfall days, we spent a month estimating and we were wrong. This way, you’ll get to see product as we work.”

You want to use the word “prediction” as much as possible, because people understand the word prediction. They hear weather predictions all the time, and they know what those mean. But when they hear estimates of work, they assume you are correct; even if you use confidence numbers, they think you are accurate. Use the word prediction.

Beware of These Program Estimation Traps

There are plenty of potential traps when you estimate programs. Here are some common problems:

  • The backlog is full of themes. You haven’t even gotten to epics, never mind stories. I don’t see how you can make a prediction. That’s like me saying, “I can go from Boston to China on an airplane. Yes, I can. It will take time.” I need more data: which time of year? Mid-week, weekend? Otherwise, I can only provide a ballpark, not a real estimate.
  • Worse, the backlog is full of tasks, so you don’t know the value of a story. “Fix the radio button” does not tell me the value of a story. Maybe we can eliminate the button instead of fix it.
  • The people estimating are not the ones who will do the work, so the estimate is full of estimation bias. Just because work looks easy or looks hard does not mean it is.
  • The estimate becomes a target. This never works, but managers do it all the time. “Sure, my team can do that work by the end of Q1.”
  • The people on your program multitask, so the estimate is wrong. Have you read the Cost of Delay due to Multitasking?
  • Managers think they can predict team size from the estimate. This is the problem of splitting work in the mistaken belief that more people make it easier to do more work. More people make the communications burden heavier.

Estimating a program is more difficult, because bigness makes everything harder. A better way to manage the issues of a program is to decide if it’s worth funding in the project portfolio. Then, work in an agile way. Be ready to change the order of work in the backlog, for teams and among teams.

As a program manager, you have two roles when people ask for estimates. You want to ask your sponsors these questions:

  • How much do you want to invest before we stop? Are you ready to watch the program grow as we build it?
  • What is the value of this project or program?

You want to ask the teams and product owners these questions:

  • Please produce walking skeletons (of features in the product) and build on them
  • Please produce small features, so we can see the product evolve every day

The more the sponsors see the product take shape, the less interested they will be in an overall estimate. They may ask for more specific estimates (when can you do this specific feature), which is much easier to answer.

Delivering working software builds trust. Trust obviates many needs for estimates. If your managers or customers have never had trust with a project or program team before, they will start asking for estimates. Your job is to deliver working software every day, so they stop asking.

Categories: Project Management

Managing Stakeholders’ Expectations - What to do when your SMEs think your project will solve all their problems….

Software Requirements Blog - Seilevel.com - Thu, 09/11/2014 - 17:00
There are lots of times that stakeholders have unrealistic expectations and that you, as the product manager/business analyst will have to manage them so that the scope of the project doesn’t balloon out of proportion. In this blog post, however, I am going to speak to a very specific type of stakeholder expectation: that your […]
Categories: Requirements

Ten at Ten Meetings

Ten at Ten Meetings are a very simple tool for helping teams stay focused, stay connected, and collaborate more effectively, the Agile way.

I’ve been leading distributed teams and v-teams for years. I needed a simple way to keep everybody on the same page, expose issues, and help everybody on the team increase their awareness of results and progress, as well as unblock and break through blocking issues.

Why Ten at Ten Meetings?

When people are remote, it’s easy to feel disconnected, and it’s easy to start to feel like different people are just a “black box” or have “gone dark.”

Ten at Ten Meetings have been my friend and have helped me help everybody on the team stay in sync and appreciate each other’s work, while finding better ways to team up on things and drive to results in a collaborative way. I believe I started Ten at Ten Meetings back in 2003 (before that, I wasn’t as consistent … I think 2003 is when I realized that a quick sync each day keeps the “black box” away).

Overview of Ten at Ten Meetings

I’ve written about Ten at Ten Meetings before in my posts on How To Lead High-Performance Distributed Teams, How I Use Agile Results, Interview on Timeboxing for HBR (Harvard Business Review), Agile Results Works for Teams and Leaders Too,  and 10 Free Leadership Tools for Work and Life, but I thought it would be helpful to summarize some of the key information at a glance.

Here is an overview of Ten at Ten Meetings:

This is one of my favorite tools for reducing email and administration overhead and getting everybody on the same page fast.  It's simply a stand-up meeting.  I tend to have them at 10:00, and I set a limit of 10 minutes.  This way people look forward to the meeting as a way to very quickly catch up with each other, and to stay on top of what's going on, and what's important.  The way it works is I go around the (virtual) room, and each person identifies what they got done yesterday, what they're getting done today, and any help they need.  It's a fast process, although it can take practice in the beginning.  When I first started, I had to get in the habit of hanging up on people if it went past 10 minutes.  People very quickly realized that the ten minute meeting was serious.  Also, as issues came up, if they weren't fast to solve on the fly and felt like a distraction, then we had to learn to take them offline.  Eventually, this helped build a case for a recurring team meeting where we could drill deeper into recurring issues or patterns, and focus on improving overall team effectiveness.

3 Steps for Ten at Ten Meetings

Here is more of a step-by-step approach:

  1. I schedule ten minutes for Monday through Thursday, at whatever time the team can agree to, but in the AM. (no meetings on Friday)
  2. During the meeting, we go around and ask three simple questions:  1)  What did you get done?  2) What are you getting done today? (focused on Three Wins), and 3) Where do you need help?
  3. We focus on the process (the 3 questions) and the timebox (10 minutes) so it’s a swift meeting with great results.   We put issues that need more drill-down or exploration into a “parking lot” for follow up.  We focus the meeting on status and clarity of the work, the progress, and the impediments.

You’d be surprised at how quickly people start to pay attention to what they’re working on and to what’s worth working on. It also helps team members very quickly see each other’s impact and results, and it helps people raise their bar, especially when they get to hear and experience what good looks like from their peers.

Most importantly, it shines the light on little, incremental progress, and, if you didn’t already know, progress is the key to happiness in work and life.

You Might Also Like

10 Free Leadership Tools for Work and Life

How I Use Agile Results

How To Lead High-Performance Distributed Teams

Categories: Architecture, Programming

Soft Skills is the Manning Deal of the Day!

Making the Complex Simple - John Sonmez - Thu, 09/11/2014 - 15:00

Great news! Early access to my new book Soft Skills: The Software Developer’s Life Manual, is on sale today (9/11/2014) only as Manning’s deal of the day! If you’ve been thinking about getting the book, now is probably the last chance to get it at a discounted rate. Just use the code: dotd091114au to get the […]

The post Soft Skills is the Manning Deal of the Day! appeared first on Simple Programmer.

Categories: Programming

Open as Many Doors as Possible

Making the Complex Simple - John Sonmez - Thu, 09/11/2014 - 15:00

Even though specialization is important, it doesn’t mean you shouldn’t strive to open up as many doors as possible in your life.

The post Open as Many Doors as Possible appeared first on Simple Programmer.

Categories: Programming

Software Development Linkopedia September 2014

From the Editor of Methods & Tools - Thu, 09/11/2014 - 09:36
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about the software developer condition, scaling Agile, technical debt, behavior-driven development, Agile metrics, UX (user eXperience), NoSQL databases and software design. Blog: The Developer is Dead, Long Live the Developer Blog: Scaling Agile at Gilt Blog: Technical debt 101 Blog: Behaviour Driven Development: Tips for Writing Better Feature Files Article: Acceptance Criteria – Demonstrate Them by Drawing a House Article: Actionable Metrics At Siemens Health Services Article: Adapting Scrum to a ...

Xcode 6 GM & Learning Swift (with the help of Xebia)

Xebia Blog - Thu, 09/11/2014 - 07:32

I guess re-iterating the announcements Apple did on Tuesday is not needed.

What is most interesting to me about everything that happened on Tuesday is the fact that iOS 8 has now reached GM status and Apple sent the call to bring in your iOS 8 uploads to iTunes Connect. iOS 8 is around the corner, about a week from now, bringing some great new features to the platform and ... Swift.

Swift

I was thinking about putting together a list of excellent links about Swift. But obviously somebody has done that already:
https://github.com/Wolg/awesome-swift
(And best of all, if you find/notice an epic Swift resource out there, submit a pull request to that REPO, or leave a comment on this blog post.)

If you are getting started, check out:
https://github.com/nettlep/learn-swift
It's a GitHub repo filled with extra Playgrounds to learn Swift in a hands-on manner. It elaborates a bit further on the later chapters of the Swift language book.

But the best way to learn Swift I can come up with is to join Xebia for a day (or two) and attend one of our special purpose update training offers hosted by Daniel Steinberg on 6 and 7 November. More info on that:

Booting a Raspberry Pi B+ with the Raspbian Debian Wheezy image

Agile Testing - Grig Gheorghiu - Thu, 09/11/2014 - 01:32
It took me a while to boot my brand new Raspberry Pi B+ with a usable Linux image. I chose the Raspbian Debian Wheezy image available on the downloads page of the official raspberrypi.org site. Here are the steps I needed:

1) Bought a micro SD card. Note: DO NOT get a regular SD card for the B+ because it will not fit in the SD card slot. You need a micro SD card.

2) Inserted the SD card via an SD USB adaptor in my MacBook Pro.

3) Went to the command line and ran df to see which volume the SD card was mounted as. In my case, it was /dev/disk1s1.

4) Unmounted the SD card. I initially tried 'sudo umount /dev/disk1s1' but the system told me to use 'diskutil unmount', so the command that worked for me was:

diskutil unmount /dev/disk1s1

5) Used dd to copy the Raspbian Debian Wheezy image (which I previously downloaded) per these instructions. Important note: the target of the dd command is /dev/disk1 and NOT /dev/disk1s1. I tried initially with the latter, and the Raspberry Pi wouldn't boot (one of the symptoms that something was wrong, besides the fact that nothing appeared on the monitor, was that the green light was solid and not flashing; a Google search revealed that one possible cause for that was a problem with the SD card). The dd command I used was:

dd if=2014-06-20-wheezy-raspbian.img of=/dev/disk1 bs=1m

6) At this point, I inserted the micro SD card into the SD slot on the Raspberry Pi, then connected the Pi to a USB power cable, a monitor via an HDMI cable, a USB keyboard and a USB mouse. I was able to boot and change the password for the pi user. The sky is the limit next ;-)




Community of Practice: Owning The Message

Sometimes controlling the message is important!

The simplest definition of a community of practice (COP) is people connecting, encouraging each other and sharing ideas and experiences. The power of COPs is generated by the interchange between people in a way that helps both the individuals and the group achieve their goals. Who owns the message that the COP focuses on will affect how well the interchange occurs. Ownership, viewed in a black-and-white mode, generates two kinds of COPs. In the first type of COP, the group owns the message. In the second, the organization owns the message. “A Community of Practice: An Example” described a scenario in which the organization created a COP for a specific practice and made attendance mandatory. The inference in this scenario is that the organization is using the COP to deliver a message. The natural tendency is to view COPs in which the organization controls the message and membership as delivering less value.

Organizational ownership of a COP’s message and membership is generally viewed as an anti-pattern. The problem is that owning the message and controlling membership can impair the COP’s ability to:

  1. Connect like-minded colleagues and peers
  2. Share experiences safely if they do not conform to the organization’s message
  3. Innovate and create new ideas that are viewed as outside-the-box.

The exercise of control constrains the COP’s focus, and in an organization implementing concepts such as self-organizing and self-managing teams (Agile concepts), it sends very mixed messages to the organization.

The focus that control generates can be used to implement, reinforce and institutionalize new ideas that are being rolled out on an organizational basis. Control of message and membership can:

  1. Accelerate learning by generating focus
  2. Validate and build on existing knowledge, the organization’s message
  3. Foster collaboration and consistency of process

In the short run this behavior may well be a beneficial mechanism to deliver and then reinforce the organization’s message. However, the positives that the constraints generate will quickly be overwhelmed once the new idea loses its bright and shiny status.

In organizations that use top-down process improvement methods, COPs can be used to deliver a message and then to reinforce the message as implementation progresses. However, as soon as institutionalization begins, the organization should get out of the COP control business. This does not mean that support, such as providing budget and logistics, should be withdrawn. Support does not have to equate to control. Remember that control might be effective in the short run; however, COPs in which the message and membership are explicitly controlled will not, in the long term, be able to evolve and support their members effectively.


Categories: Process Management