
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

SPaMCAST 314 – Crispin, Gregory, More Agile Testing

www.spamcast.net


Listen to the interview here!

SPaMCAST 314 features our interview with Janet Gregory and Lisa Crispin. We discussed their new book More Agile Testing. Testing is core to success in all forms of development, and Agile development and testing are no different. More Agile Testing builds on Gregory and Crispin’s first collaborative effort, the extremely successful Agile Testing, to ensure that everyone who uses an Agile framework delivers the most value possible.

The Bios!

Janet Gregory is an agile testing coach and process consultant with DragonFire Inc. Janet is the co-author with Lisa Crispin of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009) and More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley, 2014). She is also a contributor to 97 Things Every Programmer Should Know. Janet specializes in showing Agile teams how testers can add value in areas beyond critiquing the product; for example, guiding development with business-facing tests. Janet works with teams to transition to Agile development, and teaches Agile testing courses and tutorials worldwide. She contributes articles to publications such as Better Software, Software Test & Performance Magazine and Agile Journal, and enjoys sharing her experiences at conferences and user group meetings around the world. For more about Janet’s work and her blog, visit www.janetgregory.ca. You can also follow her on Twitter @janetgregoryca.

Lisa Crispin is the co-author, with Janet Gregory, of More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley 2014), Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009), co-author with Tip House of Extreme Testing (Addison-Wesley, 2002), and a contributor to Experiences of Test Automation by Dorothy Graham and Mark Fewster (Addison-Wesley, 2011) and Beautiful Testing (O’Reilly, 2009). Lisa was honored by her peers by being voted the Most Influential Agile Testing Professional Person at Agile Testing Days 2012. Lisa enjoys working as a tester with an awesome Agile team. She shares her experiences via writing, presenting, teaching and participating in agile testing communities around the world. For more about Lisa’s work, visit www.lisacrispin.com, and follow @lisacrispin on Twitter.

Call to action!

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com. What will we do with this list? We have two ideas. First, we will compile a list and publish it on the blog. Second, we will use the list to drive “Re-read” Saturday. Re-read Saturday is an exciting new feature we will begin on the Software Process and Measurement blog on November 8th with a re-read of Leading Change. So feel free to choose your platform and send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 315 features our essay on Scrum Masters. Scrum Masters are the voice of the process at the team level. A Scrum Master is a critical member of every Agile team. The team’s need for a Scrum Master is not transitory, because they evolve together as a team.

Upcoming Events

DCG Webinars:

How to Split User Stories
Date: November 20th, 2014
Time: 12:30pm EST
Register Now

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST
Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and cool new equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself, was published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Loewensberg re-animated

Phil Trelford's Array - Sun, 11/02/2014 - 14:51

Verena Loewensberg was a Swiss painter and graphic designer associated with the concrete art movement. I came across some of her work while searching for pieces by Richard Paul Lohse.

Again I’ve selected some pieces and attempted to draw them procedurally.

Spiral of circles and semi-circles

Based on Verena Loewensberg’s Untitled, 1953

Loewensberg step-by-step[4]

The piece was constructed from circles and semi-circles arranged around 5 concentric rectangles drawn from the inside out. The lines of the rectangles are drawn in a specific order. The size of each circle seems only to be related to the size of its rectangle. If a placed circle would overlap an existing circle, it is drawn as a semi-circle.

Shaded spiral

Again based on Verena Loewensberg’s Untitled, 1953

Loewensberg colors

Here I took the palette of another one of Verena’s works.

Four-colour Snake

Based on Verena Loewensberg’s Untitled, 1971

Loewensberg snake[4]

The original piece reminded me of the snake video game.

Rotating Red Square

Based on Verena Loewensberg’s Untitled, 1967

Loewensberg square rotating

This abstract piece was a rotated red square between blue and green squares.

Multi-coloured Concentric Circles

Based on Verena Loewensberg’s Untitled, 1972

Loewensberg circles shakeup

This piece is made up of overlapping concentric circles, with the bottom set clipped by a rectangular region. The shape and colour remind me a little of the Mozilla Firefox logo.

Method

Each image was procedurally generated using the Windows Forms graphics API inside an F# script. Typically a parameterized function is used to draw a specific frame to a bitmap, which can then be saved out as an animated GIF.
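
The original scripts are written in F# against Windows Forms and are not reproduced here. Purely as an illustration of the frame-per-parameter idea, here is a minimal sketch in Python using Pillow; the geometry and colours are invented placeholders that loosely echo the rotating red square piece.

# A minimal sketch of the frame-per-parameter approach described above,
# written in Python with Pillow rather than F#/Windows Forms; sizes and
# colours are placeholders, not taken from the original pieces.
import math
from PIL import Image, ImageDraw

SIZE = 400

def rotated_square(cx, cy, half, angle):
    """Corner coordinates of a square centred on (cx, cy), rotated by angle radians."""
    corners = []
    for dx, dy in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
        x = dx * half * math.cos(angle) - dy * half * math.sin(angle)
        y = dx * half * math.sin(angle) + dy * half * math.cos(angle)
        corners.append((cx + x, cy + y))
    return corners

def frame(angle):
    """Render one frame: a red square rotated between two fixed squares."""
    im = Image.new("RGB", (SIZE, SIZE), "white")
    draw = ImageDraw.Draw(im)
    draw.rectangle([60, 60, 340, 340], fill="blue")
    draw.polygon(rotated_square(SIZE / 2, SIZE / 2, 110, angle), fill="red")
    draw.rectangle([160, 160, 240, 240], fill="green")
    return im

# Draw a frame per parameter value and save the sequence as an animated GIF.
frames = [frame(2 * math.pi * i / 36) for i in range(36)]
frames[0].save("rotating-square.gif", save_all=True,
               append_images=frames[1:], duration=80, loop=0)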

I guess you could think of each piece as a coding kata.

Scripts

Have fun!

Categories: Programming

Announcing Re-read Saturday!


There are a number of books that have had a huge impact on my life and career. Many readers of the Software Process and Measurement blog and listeners to the podcast have expressed similar sentiments. Re-read Saturday is a new feature that begins next Saturday with a re-read of Leading Change. The re-read will extend over six to eight Saturdays as I share my current interpretation of a book that has had a major impact on how I think about the world around me. When the re-read of Leading Change is complete we will dive into the list of books I am compiling from you, the readers and listeners.

Currently the list includes:

  • The Seven Habits of Highly Effective People – Stephen Covey (re-read in early 2014)
  • Out of the Crisis – W. Edwards Deming
  • To Be or Not to Be Intimidated? – Robert Ringer
  • Pulling Your Own Strings – Wayne Dyer
  • The Goal: A Process of Ongoing Improvement – Eliyahu M. Goldratt

So far each book has gotten one “vote”.

How can you get involved in Re-read Saturday? You can begin by answering, “What are the two books that have most influenced your career (business, technical or philosophical)?” Send the titles to spamcastinfo@gmail.com, or post the titles on the blog, Facebook or Twitter with the hashtag #SPaMCAST. We will continue to add to the list and republish it on the blog. When we are close to the end of the re-read of Leading Change, we will publish a poll based on the current list (unless there is a clear leader) to select the next book to re-read.

There are two other ways to get involved. First, add your perspective to each of the re-read entries by commenting. Second, when the re-read is complete I will invite all of the commenters to participate in a discussion (just like a book club) that will be recorded and published on the Software Process and Measurement Cast.

We begin Re-read Saturday next week, but you can get involved today!

 


Categories: Process Management

Splitting Users Stories: More Anti-patterns

This hood is making an ineffective split of his face.

An anti-pattern is a typical response to a problem that is usually ineffective. There are numerous patterns for splitting user stories that are generally effective and there are an equal number that are generally ineffective. Splitting User Stories: When Things Go Wrong described some of the common anti-patterns such as waterfall splits, architectural splits, splitting with knowledge and keeping non-value remainders. Here are other equally problematic patterns for splitting user stories that I have observed since writing the original article:

  1. Vendor splits. Vendor splits are splits that are made based on work assignment or reporting structure. Organizations might use personnel from one company to design a solution, another company to code it and yet another to test the functionality. Teams and stories are constructed based on the organization the team’s members report to rather than on a holistic view of the functionality. I recently observed a project that had split stories between project management and control activities and technical activities. The rationale was that, since the technical component of the project had been outsourced, the work should be put in separate stories so that it would be easy to track work that was the vendor’s responsibility to complete. Scrumban or Kanban is often a better choice in these scenarios than other lean/Agile techniques.
  2. Generic persona splits. Splitting stories based on a generic persona, or into stories where only a generic persona can be identified, typically suggests that team members are unsure who really needs the functionality in the user story. Splitting stories without knowing who the story is trying to serve will make it difficult to hold the conversations needed to develop and deliver the value identified in the user story. Conversations are critical to fleshing out requirements and generating feedback in Agile projects.
  3. Too-thin splits. While rare, occasionally teams get obsessed with splitting user stories into thinner and thinner slices. While I have often said that smaller stories are generally better, there comes a time when splitting further is not worth the time it will take to make the split. Teams that get overly obsessed with splitting user stories into thinner and thinner slices generally spend more time in planning and less in delivering. Each team should apply the INVEST criteria to each user story to ensure good splits. In addition to using INVEST, each team should adopt a sizing guideline that maximizes the team’s productivity/velocity. Guidelines of this type are a reflection of the capacity of the team to a greater extent and the capacity of the organization to a lesser extent.

Splitting stories well can deliver huge benefits to the team and to the organization. Benefits include increased productivity and velocity, improved quality and higher morale. Splitting user stories badly delivers none of the value we would expect from the process and may even cause teams, stakeholders and whole organizations to develop a negative impression of Agile.


Categories: Process Management

R: Converting a named vector to a data frame

Mark Needham - Sat, 11/01/2014 - 00:47

I’ve been playing around with igraph’s page rank function to see who the most central nodes in the London NoSQL scene are and I wanted to put the result in a data frame to make the data easier to work with.

I started off with a data frame containing pairs of people and the number of events that they’d both RSVP’d ‘yes’ to:

> library(dplyr)
> data %>% arrange(desc(times)) %>% head(10)
       p.name     other.name times
1  Amit Nandi Anish Mohammed    51
2  Amit Nandi Enzo Martoglio    49
3       louis          zheng    46
4       louis     Raja Kolli    45
5  Raja Kolli Enzo Martoglio    43
6  Amit Nandi     Raja Kolli    42
7       zheng Anish Mohammed    42
8  Raja Kolli          Rohit    41
9  Amit Nandi          zheng    40
10      louis          Rohit    40

I actually had ~ 900,000 such rows in the data frame:

> length(data[,1])
[1] 985664

I ran page rank over the data set like so:

g = graph.data.frame(data, directed = F)
pr = page.rank(g)$vector

If we evaluate pr we can see the person’s name and their page rank:

> head(pr)
Ioanna Eirini          Mjay       Baktash      madhuban    Karl Prior   Keith Bolam 
    0.0002190     0.0001206     0.0001524     0.0008819     0.0001240     0.0005702

I initially tried to convert this to a data frame with the following code…

> head(data.frame(pr))
                     pr
Ioanna Eirini 0.0002190
Mjay          0.0001206
Baktash       0.0001524
madhuban      0.0008819
Karl Prior    0.0001240
Keith Bolam   0.0005702

…which unfortunately didn’t create a column for the person’s name.

> colnames(data.frame(pr))
[1] "pr"

Nicole pointed out that I actually had a named vector and would need to explicitly extract the names from that vector into the data frame. I ended up with this:

> prDf = data.frame(name = names(pr), rank = pr)
> head(prDf)
                       name      rank
Ioanna Eirini Ioanna Eirini 0.0002190
Mjay                   Mjay 0.0001206
Baktash             Baktash 0.0001524
madhuban           madhuban 0.0008819
Karl Prior       Karl Prior 0.0001240
Keith Bolam     Keith Bolam 0.0005702

We can now sort the data frame to find the most central people on the NoSQL London scene based on meetup attendance:

> data.frame(prDf) %>%
+   arrange(desc(pr)) %>%
+   head(10)
             name     rank
1           louis 0.001708
2       Kannappan 0.001657
3           zheng 0.001514
4    Peter Morgan 0.001492
5      Ricki Long 0.001437
6      Raja Kolli 0.001416
7      Amit Nandi 0.001411
8  Enzo Martoglio 0.001396
9           Chris 0.001327
10          Rohit 0.001305
Categories: Programming

iOS localization tricks for Storyboard and NIB files

Xebia Blog - Fri, 10/31/2014 - 23:36

Localization of an Interface Builder designed UI in iOS has never been without problems. The right way of doing localization is by having multiple Strings files. Duplicating Nib or Storyboard files and then changing the language is not an acceptable method. Luckily Xcode 5 has improved this for Storyboards by introducing Base Localization, but I've personally come across several situations where this didn't work at all or seemed buggy. Also, Nib (Xib) files without a ViewController don't support it.

In this post I'll show a couple of tricks that can help with the Localization of Storyboard and Nib files.

Localized subclasses

When you use this method, you create specialized subclasses of view classes that handle the localization in the awakeFromNib() method. This method is called for each view that is loaded from a Storyboard or Nib, and all properties that you've set in Interface Builder will already be set.

For UILabels, this means getting the text property, localizing it and setting the text property again.

Using Swift, you can create a single file (e.g. LocalizationView.swift) in your project and put all your subclasses there. Then add the following code for the UILabel subclass:

class LocalizedLabel : UILabel {
    override func awakeFromNib() {
        if let text = text {
            self.text = NSLocalizedString(text, comment: "")
        }
    }
}

Now you can drag a label onto your Storyboard and fill in the text in your base language as you would normally. Then change the Class to LocalizedLabel and it will get the actual label from your Localizable.strings file.

Screen Shot 2014-10-31 at 22.45.46

Screen Shot 2014-10-31 at 22.46.56

No need to make any outlets or write any code to change it!

You can do something similar for UIButtons, even though they don't have a single property for the text on a button.

class LocalizedButton : UIButton {
    override func awakeFromNib() {
        for state in [UIControlState.Normal, UIControlState.Highlighted, UIControlState.Selected, UIControlState.Disabled] {
            if let title = titleForState(state) {
                setTitle(NSLocalizedString(title, comment: ""), forState: state)
            }
        }
    }
}

This will even allow you to set different labels for the different states like Normal and Highlighted.

User Defined Runtime Attributes

Another way is to use the User Defined Runtime Attributes. This method requires slightly more work, but has two small advantages:

  1. You don't need to use subclasses. This is nice when you already use another custom subclass for your labels, buttons and other view classes.
  2. Your keys in the Strings file and texts that show up in the Storyboard don't need to be the same. This works well when you use localization keys such as myCoolTableViewController.header.subtitle. It doesn't look very nice to see those everywhere in your Interface Builder labels and buttons.

So how does this work? Instead of creating a subclass, you add a computed property to an existing view class. For UILabels you use the following code:

extension UILabel {

    var localizedText: String {
        set (key) {
            text = NSLocalizedString(key, comment: "")
        }
        get {
            return text!
        }
    }

}

Now you can add a User Defined Runtime Attribute with the key localizedText to your UILabel and have the Localization key as its value.

Screen Shot 2014-10-31 at 23.05.18

Screen Shot 2014-10-31 at 23.06.16

Also here if you want to make this work for buttons, it becomes slightly more complicated. You will have to add a property for each state that needs a label.

extension UIButton {
    var localizedTitleForNormal: String {
        set (key) {
            setTitle(NSLocalizedString(key, comment: ""), forState: .Normal)
        }
        get {
            return titleForState(.Normal)!
        }
    }

    var localizedTitleForHighlighted: String {
        set (key) {
            setTitle(NSLocalizedString(key, comment: ""), forState: .Highlighted)
        }
        get {
            return titleForState(.Highlighted)!
        }
    }
}

Conclusion

Always try to pick the best solution for your problem. Use Storyboard Base Localization if that works well for you. If it doesn't, use the approach with subclasses if you don't need to use another subclass and you don't mind using your base language strings as localization keys. Otherwise, use the last approach with User Defined Runtime Attributes.

Better search on developers.google.com

Google Code Blog - Fri, 10/31/2014 - 18:56
Posted by Aaron Karp, Product Manager, developers.google.com

We recently launched a major upgrade for the search box on developers.google.com: it now provides clickable suggestions as you type.
We hope this becomes your new favorite way to navigate developers.google.com, and to make things even easier we enabled “/” as a shortcut key for accessing the search field on any page.

If you have any feedback on this new feature, please let us know by leaving a comment below. And we have more exciting upgrades coming soon, so stay tuned!

Categories: Programming

Stuff The Internet Says On Scalability For October 31st, 2014

Hey, it's HighScalability time:


A CT scanner without its clothes on. Sexy.
  • 255Tbps: all of the internet’s traffic on a single fiber; 864 million: daily Facebook users
  • Quotable Quotes:
    • @chr1sa: "No dominant platform-level software has emerged in the last 10 years in closed-source, proprietary form”
    • @joegravett: Homophobes boycotting Apple because of Tim Cook's brave announcement are going to lose it when they hear about Turing.
    • @CloudOfCaroline: #RICON MySQL revolutionized Scale-out. Why? Because it couldn't Scale-up. Turned a flaw into a benefit - @martenmickos
    • chris dixon: We asked for flying cars and all we got was the entire planet communicating instantly via pocket supercomputers
    • @nitsanw: "In the majority of cases, performance will be programmer bound" - Barker's Law
    • @postwait: @coda @antirez the only thing here worth repeating: we should instead be working with entire distributions (instead of mean or q(0.99))
    • Steve Johnson: inventions didn't come about in a flash of light — the veritable Eureka! moment — but were rather the result of years' worth of innovations happening across vast networks of creative minds.

  • On how Google is its own VC. cromwellian: The ads division is mostly firewalled off from the daily concerns of people developing products at Google. They supply cash to the treasury, people think up cool ideas and try to implement them. It works just like startups, where you don't always know what your business model is going to be. Gmail started as a 20% project, not as a grand plan to create an ad channel. Lots of projects and products at Google have no business model, no revenue model, the company does throw money at projects and "figure it out later" how it'll make money. People like their apps more than the web. Mobile ads are making a lot of money.  

  • Hey mobile, what's for dinner? "The world," says chef Benedict Evans, who has prepared for your pleasure a fine gourmet tasting menu: Presentation: mobile is eating the world. Smart phones are now as powerful as Thor and Hercules combined. Soon everyone will have a smart phone. And when tech is fully adopted, it disappears. 

  • How much bigger is Amazon’s cloud vs. Microsoft and Google?: Amazon’s cloud revenue at more than $4.7 billion this year. TBR pegs Microsoft’s public cloud IaaS revenue at $156 million and Google’s at $66 million. If those estimates are correct, then Amazon’s cloud revenue is 30 times bigger than Microsoft’s.

  • Great discussion on the Accidental Tech Podcast (at about 25 minutes in) on how the time of open APIs has ended. People who made Twitter clients weren't competing with Twitter, they were helping Twitter become who they are today. For Apple, developers add value to their hardware and since Apple makes money off the hardware this is good for Apple, because without apps Apple hardware is way less valuable. With their new developer focus Twitter and developer interests are still not aligned as Twitter is still worried about clients competing with them. Twitter doesn't want to become an infrastructure company because there's no money in it. In the not so distant past services were expected to have an open API, in essence services were acting as free infrastructure, just hoping they would become popular enough that those dependent on the service could be monetized. New services these days generally don't have full open APIs because it's hard to justify as a business case. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

hdiutil: could not access / create failed – Operation canceled

Mark Needham - Fri, 10/31/2014 - 10:45

Earlier in the year I wrote a blog post showing how to build a Mac OS X DMG file for a Java application and I recently revisited this script to update it to a new version and ran into a frustrating error message.

I tried to run the following command to create a new DMG file from a source folder…

$ hdiutil create -volname "DemoBench" -size 100m -srcfolder dmg/ -ov -format UDZO pack.temp.dmg

…but was met with the following error message:

...could not access /Volumes/DemoBench/DemoBench.app/Contents/Resources/app/database-agent.jar - Operation canceled
 
hdiutil: create failed - Operation canceled

I was initially a bit stumped and thought maybe the flags to hdiutil had changed but a quick look at the man page suggested that wasn’t the issue.

I decided to go back to my pre-command-line approach for creating a DMG – Disk Utility – and see if I could create it that way. This helped reveal the actual problem:

2014 10 31 09 42 20

I increased the volume size to 150 MB…

$ hdiutil create -volname "DemoBench" -size 150m -srcfolder dmg/ -ov -format UDZO pack.temp.dmg

and all was well:

....................................................................................................
..........................................................................
created: /Users/markneedham/projects/neo-technology/quality-tasks/park-bench/database-agent-desktop/target/pack.temp.dmg

And this post will serve as documentation to stop it catching me out next time!

Categories: Programming

Azure: Announcing New Real-time Data Streaming and Data Factory Services

ScottGu's Blog - Scott Guthrie - Fri, 10/31/2014 - 07:39

The last three weeks have been busy ones for Azure.  Two weeks ago we announced a partnership with Docker to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Last week we held our Cloud Day event and announced our new G-Series of Virtual Machines as well as Premium Storage offering.  The G-Series VMs provide the largest VM sizes available in the public cloud today (nearly 2x more memory than the largest AWS offering, and 4x more memory than the largest Google offering).  The new Premium Storage offering (which will work with both our D-series and G-series of VMs) will support up to 32TB of storage per VM, >50,000 IOPS of disk IO per VM, and enable sub-1ms read latency.  Combined they provide an enormous amount of power that enables you to run even bigger and better solutions in the cloud.

Earlier this week, we officially opened our new Azure Australia regions – which are our 18th and 19th Azure regions open for business around the world.  Then at TechEd Europe we announced another round of new features – including the launch of the new Azure MarketPlace, a bunch of great network improvements, our new Batch computing service, general availability of our Azure Automation service and more.

Today, I’m excited to blog about even more new services we have released this week in the Azure Data space.  These include:

  • Event Hubs: a scalable service for ingesting and storing data from websites, client apps, and IoT sensors.
  • Stream Analytics: a cost-effective event processing engine that helps uncover real-time insights from event streams.
  • Data Factory: a service that enables better information production by orchestrating and managing diverse data and data movement.

Azure Event Hub is now generally available, and the new Azure Stream Analytics and Data Factory services are now in public preview.

Event Hubs: Log Millions of events per second in near real time

The Azure Event Hub service is a highly scalable telemetry ingestion service that can log millions of events per second in near real time.  You can use the Event Hub service to collect data/events from any IoT device, from any app (web, mobile, or a backend service), or via feeds like social networks.  We are using it internally within Microsoft to monitor some of our largest online systems.

Once you collect events with Event Hub you can then analyze the data using any real-time analytics system (like Apache Storm or our new Azure Stream Analytics service) and store/transform it into any data storage system (including HDInsight and Hadoop based solutions).

Event Hub is delivered as a managed service on Azure (meaning we run, scale and patch it for you and provide an enterprise SLA).  It delivers:

  • Ability to log millions of events per second in near real time
  • Elastic scaling support with the ability to scale-up/down with no interruption
  • Support for multiple protocols including support for HTTP and AMQP based events
  • Flexible authorization and throttling device policies
  • Time-based event buffering with event order preservation

The pricing model for Event Hubs is very flexible – for just $11/month you can provision a basic Event Hub with guaranteed performance capacity to capture 1 MB/sec of events sent to your Event Hub.  You can then provision as many additional capacity units as you need if your event traffic goes higher. 

Getting Started with Capturing Events

You can create a new Event Hub using the Azure Portal or via the command-line.  Choose New->App Service->Service Bus->Event Hub in the portal to do so:

image

Once created, events can be sent to an Event Hub with either a strongly-typed API (e.g. .NET or Java client library) or by just sending a raw HTTP or AMQP message to the service.  Below is a simple example of how easy it is to log an IoT event to an Event Hub using just a standard HTTP post request.  Notice the Authorization header in the HTTP post – you can use this to optionally enable flexible authentication/authorization for your devices:

POST https://your-namespace.servicebus.windows.net/your-event-hub/messages?timeout=60&api-version=2014-01 HTTP/1.1
Authorization: SharedAccessSignature sr=your-namespace.servicebus.windows.net&sig=tYu8qdH563Pc96Lky0SFs5PhbGnljF7mLYQwCZmk9M0%3d&se=1403736877&skn=RootManageSharedAccessKey
ContentType: application/atom+xml;type=entry;charset=utf-8
Host: your-namespace.servicebus.windows.net
Content-Length: 42
Expect: 100-continue

{ "DeviceId":"dev-01", "Temperature":"37.0" }

Your Event Hub can collect up to millions of messages per second like this, each storing whatever data schema you want within them, and the Event Hubs service will store them in-order for you to later read/consume.
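
Purely as an illustration (this is not an official Azure SDK sample), the snippet below sketches the same raw HTTP call from Python using the requests library; the namespace, hub name and SAS token are placeholders you would replace with your own values.

# A sketch of posting the IoT event shown above over plain HTTPS with the
# Python requests library; namespace, hub name and SAS token are placeholders.
import requests

namespace = "your-namespace"     # placeholder
event_hub = "your-event-hub"     # placeholder
sas_token = "SharedAccessSignature sr=...&sig=...&se=...&skn=..."  # placeholder

url = (f"https://{namespace}.servicebus.windows.net/"
       f"{event_hub}/messages?timeout=60&api-version=2014-01")

response = requests.post(
    url,
    headers={
        "Authorization": sas_token,
        "Content-Type": "application/atom+xml;type=entry;charset=utf-8",
    },
    data='{ "DeviceId":"dev-01", "Temperature":"37.0" }',
)
response.raise_for_status()  # raises if the service did not accept the event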

Downstream Event Processing

Once you collect events, you no doubt want to do something with them.  Event Hubs includes an intelligent processing agent that allows for automatic partition management and load distribution across readers.  You can implement any logic you want within readers, and the data sent to the readers is delivered in the order it was sent to the Event Hub.

In addition to supporting the ability for you to write custom Event Readers, we also have two easy ways to work with pre-built stream processing systems: including our new Azure Stream Analytics Service and Apache Storm.  Our new Azure Stream Analytics service supports doing stream processing directly from Event Hubs, and Microsoft has created an Event Hubs Storm Spout for use with Apache Storm clusters.

The below diagram helps express some of the many rich ways you can use Event Hubs to collect and then hand-off events/data for processing:

image

Event Hubs provides a super flexible and cost effective building-block that you can use to collect and process any events or data you can stream to the cloud.  It is very cost effective, and provides the scalability you need to meet any needs.

Learning More about Event Hubs

For more information about Azure Event Hubs, please review the following resources:

Stream Analytics: Distributed Stream Processing Service for Azure

I’m excited to announce the preview of our new Azure Stream Analytics service – a fully managed, real-time, distributed stream computation service that provides low-latency, scalable processing of streaming data in the cloud with an enterprise-grade SLA. The new Azure Stream Analytics service easily scales from small projects with just a few KB/sec of throughput to a gigabyte/sec or more of streamed data messages/events.

Our Stream Analytics pricing model enables you to run low-throughput streaming workloads continuously at low cost, and to scale up only as your business needs increase. We do this while maintaining built-in guarantees of event delivery and state management for fast recovery, which enables mission-critical business continuity.

Dramatically Simpler Developer Experience for Stream Processing Data

Stream Analytics supports a SQL-like language that dramatically lowers the bar of the developer expertise required to create a scalable stream processing solution. A developer can simply write a few lines of SQL to do common operations including basic filtering, temporal analysis operations, joining multiple live streams of data with other static data sources, and detecting stream patterns (or lack thereof).

This dramatically reduces the complexity and time it takes to develop, maintain and apply time-sensitive computations on real-time streams of data. Most other streaming solutions available today require you to write complex custom code, but with Azure Stream Analytics you can write simple, declarative and familiar SQL.

Fully Managed Service that is Easy to Setup

With Stream Analytics you can dramatically accelerate how quickly you can derive valuable real time insights and analytics on data from devices, sensors, infrastructure, or applications. With a few clicks in the Azure Portal, you can create a streaming pipeline, configure its inputs and outputs, and provide SQL-like queries to describe the desired stream transformations/analysis you wish to do on the data. Once running, you are able to monitor the scale/speed of your overall streaming pipeline and make adjustments to achieve the desired throughput and latency.

You can create a new Stream Analytics Job in the Azure Portal, by choosing New->Data Services->Stream Analytics:

image

Setup Streaming Data Input

Once created, your first step will be to add a Streaming Data Input.  This allows you to indicate where the data you want to perform stream processing on is coming from.  From within the portal you can choose Inputs->Add An Input to launch a wizard that enables you to specify this:

image

We can use the Azure Event Hub service to deliver a stream of data to perform processing on. If you already have an Event Hub created, you can choose it from a list populated in the wizard above. You will also be asked to specify the format that is being used to serialize incoming events in the Event Hub (e.g. JSON, CSV or Avro formats).

Setup Output Location

The next step to developing our Stream Analytics job is to add a Streaming Output Location.  This will configure where we want the output results of our stream processing pipeline to go.  We can choose to easily output the results to Blob Storage, another Event Hub, or a SQL Database:

image

Note that being able to use another Event Hub as a target provides a powerful way to connect multiple streams into an overall pipeline with multiple steps.

Write Streaming Queries

Now that we have our input and output sources configured, we can now write SQL queries to transform, aggregate and/or correlate the incoming input (or set of inputs in the event of multiple input sources) and output them to our output target.  We can do this within the portal by selecting the QUERY tab at the top.

image

There are a number of interesting queries you can write to process the incoming stream of data. For example, in the previous Event Hub section of this blog post I showed how you can use an HTTP POST command to submit JSON-based temperature data from an IoT device to an Event Hub, with data in JSON format like so:

{ "DeviceId":"dev-01", "Temperature":"37.0" }

When multiple devices are streaming events simultaneously into our Event Hub like this, it would feed into our Stream Analytics job as a stream of continuous data events that look like the sequence below:

Wouldn’t it be interesting to be able to analyze this data using a time-window perspective instead?  For example, it would be useful to calculate in real-time what the average temperature of each device was in the last 5 seconds of multiple readings.

With the Stream Analytics Service we can now dynamically calculate this over our incoming live stream of data just by writing a SQL query like so:

SELECT DateAdd(second,-5,System.TimeStamp) as WinStartTime, system.TimeStamp as WinEndTime, DeviceId, Avg(Temperature) as AvgTemperature, Count(*) as EventCount 
    FROM input
    GROUP BY TumblingWindow(second, 5), DeviceId

Running this query in our Stream Analytics job will aggregate/transform our incoming stream of data events and output data like below into the output source we configured for our job (e.g. a blob storage file or a SQL Database):

The great thing about this approach is that the data is being aggregated/transformed in real time as events are being streamed to us, and it scales to handle literally gigabytes of data event streamed per second.

Scaling your Stream Analytics Job

Once defined, you can easily monitor the activity of your Stream Analytics Jobs in the Azure Portal:

image

You can use the SCALE tab to dynamically increase or decrease scale capacity for your stream processing – allowing you to pay only for the compute capacity you need, and enabling you to handle jobs with gigabytes/sec of streamed data. 

Learning More about Stream Analytics Service

For more information about Stream Analytics, please review the following resources:

Data Factory: Fully managed service to build and manage information production pipelines

Organizations are increasingly looking to fully leverage all of the data available to their business.  As they do so, the data processing landscape is becoming more diverse than ever before – data is being processed across geographic locations, on-premises and cloud, across a wide variety of data types and sources (SQL, NoSQL, Hadoop, etc), and the volume of data needing to be processed is increasing exponentially. Developers today are often left writing large amounts of custom logic to deliver an information production system that can manage and co-ordinate all of this data and processing work.

To help make this process simpler, I’m excited to announce the preview of our new Azure Data Factory service – a fully managed service that makes it easy to compose data storage, processing, and data movement services into streamlined, scalable & reliable data production pipelines. Once a pipeline is deployed, Data Factory enables easy monitoring and management of it, greatly reducing operational costs. 

Easy to Get Started

The Azure Data Factory is a fully managed service. Getting started with Data Factory is simple. With a few clicks in the Azure preview portal, or via our command line operations, a developer can create a new data factory and link it to data and processing resources.  From the new Azure Marketplace in the Azure Preview Portal, choose Data + Analytics –> Data Factory to create a new instance in Azure:

image

Orchestrating Information Production Pipelines across multiple data sources

Data Factory makes it easy to coordinate and manage data sources from a variety of locations – including ones both in the cloud and on-premises.  Support for working with data on-premises inside SQL Server, as well as Azure Blob, Tables, HDInsight Hadoop systems and SQL Databases is included in this week’s preview release. 

Access to on-premises data is supported through a data management gateway that allows for easy configuration and management of secure connections to your on-premises SQL Servers.  Data Factory balances the scale & agility provided by the cloud, Hadoop and non-relational platforms, with the management & monitoring that enterprise systems require to enable information production in a hybrid environment.

Custom Data Processing Activities using Hive, Pig and C#

This week’s preview enables data processing using Hive, Pig and custom C# code activities.  Data Factory activities can be used to clean data, anonymize/mask critical data fields, and transform the data in a wide variety of complex ways.

The Hive and Pig activities can be run on an HDInsight cluster you create, or alternatively you can allow Data Factory to fully manage the Hadoop cluster lifecycle on your behalf.  Simply author your activities, combine them into a pipeline, set an execution schedule and you’re done – no manual Hadoop cluster setup or management required. 

Built-in Information Production Monitoring and Dashboarding

Data Factory also offers an up-to-the-moment monitoring dashboard, which means you can deploy your data pipelines and immediately begin to view them as part of your monitoring dashboard. Once you have created and deployed pipelines to your Data Factory you can quickly assess end-to-end data pipeline health, pinpoint issues, and take corrective action as needed.

Within the Azure Preview Portal, you get a visual layout of all of your pipelines and data inputs and outputs. You can see all the relationships and dependencies of your data pipelines across all of your sources so you always know where data is coming from and where it is going at a glance. We also provide you with a historical accounting of job execution, data production status, and system health in a single monitoring dashboard:

image

Learning More about the Data Factory Service

For more information about Data Factory, please review the following resources:

Other Great Data Improvements

Today’s releases make it even easier for customers to stream, process and manage the movement of data in the cloud. Over the last few months we’ve released a bunch of other great data updates as well that make Azure a great platform for any data needs. Since August:

We released a major update of our SQL Database service, which is a relational database as a service offering.  The new SQL DB editions (Basic/Standard/Premium ) support a 99.99% SLA, larger database sizes, dedicated performance guarantees, point-in-time recovery, new auditing features, and the ability to easily setup active geo-DR support. 

We released a preview of our new DocumentDB service, which is a fully-managed, highly-scalable, NoSQL Document Database service that supports saving and querying JSON based data.  It enables you to linearly scale your document store and scale to any application size.  Microsoft MSN portal recently was rewritten to use it – and stores more than 20TB of data within it.

We released our new Redis Cache service, which is a secure/dedicated Redis cache offering, managed as a service by Microsoft. Redis is a popular open-source solution that enables high-performance data types, and our Redis Cache service enables you to stand up an in-memory cache that can make the performance of any application much faster.

We released major updates to our HDInsight Hadoop service, which is a 100% Apache Hadoop-based service in the cloud. We have also added built-in support for using two popular frameworks in the Hadoop ecosystem: Apache HBase and Apache Storm.

We released a preview of our new Search-As-A-Service offering, which provides a managed search offering based on ElasticSearch that you can easily integrate into any Web or Mobile Application.  It enables you to build search experiences over any data your application uses (including data in SQLDB, DocDB, Hadoop and more).

And we have released a preview of our Machine Learning service, which provides a powerful cloud-based predictive analytics service.  It is designed for both new and experienced data scientists, includes 100s of algorithms from both the open source world and Microsoft Research, and supports writing ML solutions using the popular R open-source language.

You’ll continue to see major data improvements in the months ahead – we have an exciting roadmap of improvements ahead.

Summary

Today’s Microsoft Azure release enables some great new data scenarios, and makes building applications that work with data in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Words Have Power

Remember that the race does not always go to the just; however not running will ensure that you can’t win. Don’t let your words keep you from running the race.

Words have power, when used correctly — the power to sell ideas, to rally the troops and to provide motivation. Or words can be a tacit signal of resistance to change and an abrogation of responsibility. In the latter set of scenarios, perfectly good words go bad. I want to highlight three words which, when used to declare that action won’t be taken or as a tool to deny responsibility for taking action, might as well be swear words. The words that are, in my opinion, the worst offenders are ‘but’, ‘can’t’ and ‘however’. The use of any of these three words should send up a red flag that a course is being charted to a special process improvement hell for an organization.

‘But’

The ugliest word in the process improvement world, at least in the English language, is ‘but’ (not butt with two t’s). ‘But’ is a fairly innocuous word, so why do I relegate it to such an august spot on my bad words list? Because the term is usually used to explain why the speaker (even though they know better) can’t or won’t fight for what is right. As an example, I recently participated in a discussion about involving the business as an equal partner in a project (a fairly typical discussion for an organization that is transitioning to Agile). Everyone involved thought that the concept was important, made sense and would help the IT department deliver more value to the organization, ‘but’ in their opinion, the business would not be interested in participating. Not that anyone would actually discuss having the business be involved with them or invite them to the project party. A quick probe exposed excuses like “but they do not have time, so we won’t ask” and the infamous “but that isn’t how we do it here.” All of the reasons why they would not participate were rationalizations, intellectual smoke screens, for not taking the more difficult step of asking the business to participate in the process of delivering projects. It was too frightening to ask and risk rejection, or worse yet acceptance, and then have to cede informational power through knowledge sharing. The word ‘but’ is used to negate anything out of the ordinary, which gives the speaker permission not to get involved in rectifying the problem. By not working to fix the problem, the consequences belong to someone else.

‘Can’t’

A related negation word is ‘can’t’. ‘Can’t’ is generally a more personal negation word than ‘but.’ Examples of usage include ‘I can’t’ or ‘we can’t’. Generally this bad word is used to explain why someone or some group lacks the specific power to take action. Again, like ‘but’, ‘can’t’ is used to negate what the person using the word admits is a good idea. The use of the term reflects an abrogation of responsibility and shifts the responsibility elsewhere. For example, I was discussing daily stand-ups with a colleague recently. He told me a story about a team that had stopped doing daily stand-up meetings because the product owner deemed them overhead. He quoted the Scrum Master as saying, “It is not my fault that we can’t do stand-ups because our product owner doesn’t think meetings are valuable.” In short he is saying, “It isn’t my fault that the team is not in control of how the work is being done.” The abrogation of responsibility for the consequences of the team’s actions is what makes ‘can’t’ a bad word in this example. ‘Can’t’ reinforces the head-trash that steals power from the practitioner and makes it easy to walk away from the struggle to change rather than looking for a way to embrace change. When you empower someone else to manage your behavior, you are reinforcing your lack of power and reducing your motivation and the motivation of those around you.

‘However’

The third of this unholy trinity of negation words is ‘however’. The struggle I have with this word is that it can be used insidiously, with a false appearance of logic, to cut off debate. A number of years ago, while reviewing an organization that had decided to use Scrum and two-week iterations for projects, I was told, “we started involving the team in planning what was going to be done during the iterations, however they were not getting the work done fast enough, so we decided to tell them what they needed to do each iteration.” The use of ‘however’ suggests a cause-and-effect relationship that may or may not be true and tends to deflect discussion from the root cause of the problem. The conversation went on for some time, during which we came to the conclusion that by telling the team what to do, the project had actually fared even worse. What had occurred was that the responsibility had been shifted away from poor portfolio planning and onto the team’s shoulders.

In past essays I have discussed that our choices sometimes rob us of positional power.  The rationalization of those individual choices acts as an intellectual smokescreen to make us feel better about our lack of power. Rationalization provides a platform to keep a clinical distance from the problem. Rationalization can be a tool to avoid the passion and energy needed to generate change.

All of these unholy words can be used for good, so it might be useful to have more instruction on how to recognize when they are being used in a bad way: a sort of field guide to avoid mistaken recognition. One easy mechanism for recognizing a poor use of ‘but’, ‘can’t’ or ‘however’ is to break the sentence or statement into three parts: everything before the unholy word, the unholy word, and everything after the unholy word. By looking at the phrase that follows the unholy word, all is exposed. If the phrase rejects or explains why the original and perfectly reasonable premise is bat poop crazy, then you have a problem. I decided to spend some of my ample time in airports collecting observations of some of the negation phrases people use. Some of the shareable examples I heard included:

  1. I told you so.
  2. It is not my fault.
  3. Just forget it.
  4. We tried that before.
  5. That will take too long.
  6. It doesn’t matter (passive aggressive).
  7. We don’t do it that way.
  8. My manager won’t go for it.

There were others that I heard that can’t be shared, and I am sure there are many other phrases that can be used to lull the listener into thinking that the speaker agrees and then pull the rug out from under the listener.

The use of negation words can be a sign that you are trying to absolve yourself from the risk of action. I would like to suggest we ban the use of these three process improvement swear words and substitute enabling phrases such as “and while it might be difficult, here is what I am going to do about it.” Our goal should be to act on problems that are blockers and issues rather than to ignore them or, by saying grace over them, establish them as reality. In my opinion, acting and failing is a far better course of action than doing nothing at all and putting your head in the sand. The responsibility to act does not go away; rather, it affixes more firmly to those who do nothing than to those who are trying to change the world! When you pretend to not have power you become a victim. Victims continually cede their personal and positional power to those around them. Remember that the race does not always go to the just; however not running will ensure that you can’t win. Don’t let your words keep you from running the race.


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 10/30/2014 - 22:50

The most unsuccessful three years in the education of cost estimators appears to be fifth-grade arithmetic - Norman R. Augustine

Opening line of the Welcome section of Software Estimation: Demystifying the Black Art, by Steve McConnell.

Augustine is the former Chairman and CEO of Martin Marietta. His seminal book, Augustine's Laws, describes the complexities and conundrums of today's business management and offers solutions. Anyone interested in learning how complex, technology-based firms are successfully managed should read that book. As well, read McConnell's book and see if you can find support for the claim in the screenshot below.

[Screenshot of the claim in question]

Because I sure can't find that proof, or any mention that estimates don't work, other than for those who failed to pay attention in fifth-grade arithmetic class.

Categories: Project Management

Make your emails stand out in Inbox

Google Code Blog - Thu, 10/30/2014 - 19:30
Originally posted on the Google Apps Developers Blog

As we announced last week, Inbox is a whole new take on, well, the inbox. It’s built by the Gmail team, but it’s not Gmail—it’s a new product designed to help users succeed in today’s world of email overload and multiple devices. At the same time, Inbox can also help you as a sender by offering new tools to make your emails more interactive!

Specifically, you can now take advantage of a new feature called Highlights.

Exactly like it sounds, Highlights “highlight,” or surface, key information and actions from an email and display them as easy-to-see chips in the inbox. For example, if you’re an airline that sends flight confirmation emails, Highlights can surface the “Check-in for your flight” action and display live flight status information right in the recipient’s main list. The same applies if you send customers hotel reservations, event details, event invitations, restaurant reservations, purchases, or other tickets. Highlights help ensure that your recipients see your messages and the important details at a glance.

To take advantage of Highlights, you can mark up your email messages to specify which details you want surfaced for your customers. This will make it possible for not only Inbox, but also Gmail, Google Now, Google Search, and Maps to interact more easily with your messages and give your recipients the best possible experience across Android, iOS and the web.

As an example, the following JSON-LD markup can be used by restaurants to send reservation confirmations to their users/customers:
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "FoodEstablishmentReservation",
"reservationNumber": "WTA1EK",
"reservationStatus": "http://schema.org/Confirmed",
. . . information about dining customer . . .
"reservationFor": {
"@type": "FoodEstablishment",
"name": "Charlie’s Cafe",
"address": {
"@type": "PostalAddress",
"streetAddress": "1600 Amphitheatre Parkway",
"addressLocality": "Mountain View",
"addressRegion": "CA",
"postalCode": "94043",
"addressCountry": "United States"
},
"telephone": "+1 650 253 0000"
},
"startTime": "2015-01-01T19:30:00-07:00",
"partySize": "2"
}
</script>

When your confirmation is received, users will see a convenient Highlight with the pertinent details at the top of their inbox, and they can then open the message to obtain the full details of their reservation.
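
The same pattern extends to other confirmation types via other schema.org types. As a rough, hypothetical sketch (not taken from the original post), a flight confirmation email could embed FlightReservation markup along these lines; the passenger, airline, flight number, airports, and times below are invented placeholders:

<!-- Hypothetical sketch: every name, code, and time below is a placeholder -->
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "FlightReservation",
  "reservationNumber": "RXJ34P",
  "reservationStatus": "http://schema.org/Confirmed",
  "underName": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "reservationFor": {
    "@type": "Flight",
    "flightNumber": "110",
    "airline": {
      "@type": "Airline",
      "name": "Example Airlines",
      "iataCode": "EA"
    },
    "departureAirport": {
      "@type": "Airport",
      "name": "San Francisco Airport",
      "iataCode": "SFO"
    },
    "departureTime": "2015-03-04T20:15:00-08:00",
    "arrivalAirport": {
      "@type": "Airport",
      "name": "John F. Kennedy International Airport",
      "iataCode": "JFK"
    },
    "arrivalTime": "2015-03-05T06:30:00-05:00"
  }
}
</script>

Markup along these lines is what would allow the “Check-in for your flight” action and live flight status described earlier to appear as a Highlight, once the sender registration described below is in place.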

Getting started is simple: read about email markup, check out more markup examples, then register at developers.google.com/gmail/markup and follow the instructions from there!


by Shalini Agarwal, Product Management, Inbox by Gmail
Categories: Programming

Driving Business Transformation by Reenvisioning Your Customer Experience

You probably hear a lot about the Mega-Trends (Cloud, Mobile, Social, and Big Data), or the Nexus of Forces (the convergence of social, mobility, cloud and data insights patterns that drive new business scenarios), or the Mega-Trend of Mega-Trends (Internet of Things).

And you are probably hearing a lot about digital transformation, and maybe even about the rise of the CDO (Chief Digital Officer).

All of this digital transformation is about creating business change, driving business outcomes, and driving better business results.

But how do you create your digital vision and strategy? And where do you start?

In the book Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee share some of the lessons learned from digital masters: companies that created their digital visions and are driving business change.

3 Perspectives of Digital Vision

When it comes to creating your digital vision, you can focus on reenvisioning the customer experience, the operational processes, or your business model.

Via Leading Digital:

“Where should you focus your digital vision? Digital visions usually take one of three perspectives: reenvisioning the customer experience, reenvisioning operational processes, or combining the previous two approaches to reenvision business models.  The approach you take should reflect your organization’s capabilities, your customer’s needs, and the nature of competition in your industry.”

Start with Your Customer Experience

One of the best places to start is with your customer experience.  After all, a business exists to create a customer, and the success of a business is measured by how well it creates value and serves the needs of its customers.

Via Leading Digital:

“Many organizations start by reenvisioning the way they interact with customers.  They want to make themselves easier to work with, and they want to be smarter in how they sell to (and serve) customers.  Companies start from different places when reenvisioning the customer experience.”

Transform the Relationship

You can use the waves of technologies (Cloud, Mobile, Social, Data Insights, and Internet of Things), to transform how you interact with your customers and how they experience your people, your products, and your services.

Via Leading Digital:

“Some companies aim to transform their relationships with their customers.  Adam Brotman, chief digital officer of Starbucks, shared this vision: 'Digital has to help more partners and help the company be the way we can ... tell our story, build our brand, and have a relationship with our customers.' Burberry's CEO Angela Ahrendts focused on multichannel coherence. 'We had a vision, and the vision was to be the first company who was fully digital end-to-end ... A customer will have total access to Burberry across any device, anywhere.'  Marc Menesquen, managing director of strategic marketing at cosmetics giant L'Oreal, said, 'The digital world multiplies the way our brands can create an emotion-filled relationship with their customers.’”

Serve Your Customers in Smarter Ways

You can use technology to personalize the experience for your customers, and create better interactions along the customer experience journey.

Via Leading Digital:

“Other companies envision how they can be smarter in serving (and selling to) their customers through analytics.  Caesars started with a vision of using real-time customer information to deliver a personalized experience to each customer.  The company was able to increase customer satisfaction and profits per customer using traditional technologies.  Then, as new technologies arose, it extended the vision to include a mobile, location-based concierge in the palm of every customer's hand.”

Learn from Customer Behavior

One of the most powerful things you can now do with the combination of Cloud, Mobile, Social, Big Data and Internet of Things is gain better customer insights.  For example, you can learn from the wealth of social media insights, or you can learn through better integration and analytics of your existing customer data.

Via Leading Digital:

“Another approach is to envision how digital tools might help the company to learn from customer behavior.  Commonwealth Bank of Australia sees new technologies as a key way of integrating customer inputs in its co-creation efforts.  According to CEO Ian Narev, 'We are progressively applying new technology to enable customers to play a greater part in product design.  That helps us create more intuitive products and services, readily understandable to our customers and more tailored to their individual needs.'”

Change Customers’ Lives

If you focus on high-value activities, you can create breakthroughs in the daily lives of your customers.

Via Leading Digital:

“Finally, some companies are extending their visions beyond influencing customer experience to actually changing customers' lives.  For instance, Novartis CEO Joseph Jimenez wrote of this potential: ‘The technologies we use in our daily lives, such as smart phones and tablet devices, could make a real difference in helping patients to manage their own health.  We are exploring ways to use these tools to improve compliance rates and enable health-care professionals to monitor patient progress remotely.’”

If you want to change the world, one of the best places to start is right from wherever you are.

With a Cloud and a dream, what can you do to change the world?

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

The Future of Jobs

Management Innovation is at the Top of the Innovation Stack

McKinsey on Unleashing the Value of Big Data

Categories: Architecture, Programming

Aligning User Expectations with Business Objectives

Software Requirements Blog - Seilevel.com - Thu, 10/30/2014 - 17:00
Projects with clearly defined business objectives can and do fail, even when they deliver functionality that syncs closely with those objectives, if they do not meet user expectations. This may seem counterintuitive at first blush, since the primary purpose of any enterprise software development effort is to deliver tangible financial […]
Categories: Requirements

Are You Living In The Past Or Future?

Making the Complex Simple - John Sonmez - Thu, 10/30/2014 - 15:00

In this video I talk about the importance of living in the present moment and give some tips on how you can make sure you aren’t living in the past or future.

The post Are You Living In The Past Or Future? appeared first on Simple Programmer.

Categories: Programming

#Workout – Premium Print Edition

NOOP.NL - Jurgen Appelo - Thu, 10/30/2014 - 14:46

The new Management 3.0 #Workout book is already available in PDF, Kindle and ePUB versions. And in just a few weeks, readers will be able to get the Premium Print Edition as well. Nearly 500 pages of high-quality, full-color paper, with great design, awesome photos, crisp illustrations, and concrete practices. It will be the most colorful and practical management book ever published!

The post #Workout – Premium Print Edition appeared first on NOOP.NL.

Categories: Project Management

Why Trust is Hard

Herding Cats - Glen Alleman - Thu, 10/30/2014 - 14:06

Hugh MacLeod's art for Zappos provides the foundation for trust in that environment.

If I'm the head of HR, I'm responsible for filling the desks at my company with amazing employees. I can hold people to all the right standards. But ultimately I can't control what they do. This is why hiring for culture works. What Zappos does is radical because it trusts. It says "Go do the best job you can do for the customer, without policy". And leaves employees to come up with human solutions. Something it turns out they're quite good at, if given the chance.

Now let's take another domain, one I'm very familiar with - fault-tolerant process control systems: software and support hardware applied to the emergency shutdown of exothermic chemical reactors (the ones that make the unleaded gasoline for our cars), nuclear reactors and conventionally fired power generation, gas turbine controls, and other must-work-properly machines. A similar domain is DO-178C flight control systems, which must equally work without fail and provide all the needed capabilities on day one.

At Zappos, the HR Director describes a work environment where employees are free to do the best job they can for the customer. In the domains above, employees also work to do the best job they can for the customer, but flight safety, life safety, and equipment safety are part of that best job. In other domains where we work, doing the best job for the customer means processing transactions worth hundreds of millions of dollars with extremely low error rates in the enterprise IT paradigm: medical insurance provider services, HHS enrollment, and enterprise IT in a variety of domains.

Zappos can recover from an error; these other domains can't. Nonrecoverable errors mean serious loss of revenue, or even loss of life. In those domains, failure carries similar consequences. I come from those domains, and they inform my view of the software development world - a world where fail-safe software and fault tolerance are the basis of business success.

So when we hear about the freedom to fail early and fail often in the absence of a domain or context, care is needed. Without a domain and context, it is difficult to assess the credibility of any concept, let alone one that is untested outside of personal anecdote. It comes down to Trust Alone or Trust But Verify. I could also guarantee that Zappos has some of the verify process; it is doubtful employees are left to do anything they wish for their customer. The simple reason is that there is a business governance process at any firm, no matter its size. Behavior, even full-trust behavior, fits inside that governance process.
Categories: Project Management

Progressive F# Tutorials London 2014

Phil Trelford's Array - Thu, 10/30/2014 - 10:23

There’s just a week to go until the Progressive F# Tutorials return to Skills Matter in London on Nov 6-7, and it’s never too late to book.

The tutorials are a 2-day, 2-track community event made up of 3-hour-long hands-on sessions with industry experts, suitable for beginners and experts alike.

The first day will start with a keynote from Don Syme, F# community contributor and Principal Researcher at Microsoft Research, on the F# way to reconciliation.

On the beginners track we have:

And on the advanced track:

  • Jérémie Chassaing, Hypnotizer founder, on CQRS with F#
  • Mathias Brandewinder, F# MVP, on Treasures, Traps & F#
  • Michael Newton, of 15below, on Metaprogramming in F#
  • Tomas Petricek, Real World FP author, on the F# compiler

There’ll also be a Park Bench panel on the Thursday with experts including Pluralsight author Kit Eason and SQL Provider contributor Ross McKinlay.

Read this review of this year’s Progressive F# in New York.

Check out what happened at the last 3 tutorials in London: 2013, 2012 and 2011.


Plus a free F# Hackathon on the Saturday!

Categories: Programming