
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Assessing Value Produced By Investments

Herding Cats - Glen Alleman - Tue, 11/04/2014 - 14:28

Speaking at the Integrated Program Management Conference in Bethesda MD this week. The keynote speaker Monday was Katrina McFarland, Assistant Secretary of Defense (Acquisition)(ASD(A)), the principal adviser to the Secretary of Defense and Under Secretary of Defense for Acquisition. 

During her talk she spoke of the role of Earned Value Management. Here's a mashup of her remarks...

EV is a thoughtful tool as the basis of a conversation for determining the value (BCWP) produced by the investment (BCWS). This conversation is an assessment of the efficacy of our budget. 

We can determine the efficacy of our budget through:

  • Measures of Effectiveness of the deliverables in accomplishing the mission or fulfilling the technical and operational needs of the business.
  • Measures of Performance of these deliverables to perform the needed functions to produce the needed effectiveness.
  • Technical Performance Measures of these functions to perform at the needed technical level.

These measures answer the question: what is the efficacy of our budget in delivering the outcomes of our project?
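To make the budget-versus-value conversation concrete, the standard Earned Value arithmetic behind these terms can be sketched in a few lines of Python. This is my illustration of the well-known indices, not material from the keynote, and the numbers are purely illustrative:

# Standard EVM indices computed from the three basic measures.
def earned_value_indices(bcws, bcwp, acwp):
    """BCWS = budgeted cost of work scheduled (the investment/plan),
    BCWP = budgeted cost of work performed (the value earned),
    ACWP = actual cost of work performed (what was spent)."""
    spi = bcwp / bcws  # schedule performance index: earned value per unit of planned value
    cpi = bcwp / acwp  # cost performance index: earned value per unit of actual spend
    return spi, cpi

spi, cpi = earned_value_indices(bcws=100_000, bcwp=85_000, acwp=110_000)
print(f"SPI={spi:.2f}, CPI={cpi:.2f}")  # SPI=0.85, CPI=0.77 -> behind plan and over cost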

The value of the project outcomes must be traceable to a strategy for the business or mission. Once this strategy has been identified, the Measures of Effectiveness, Measures of Performance, and Technical Performance Measures can be assigned to the elements of the project, as shown in the figure below.


This approach is scalable up and down the complexity of projects based on five immutable principles of project success.

5 Immutable Principles

Without credible answers to each of these questions, the project is on the path to failure on day one.

Categories: Project Management

Scaling Agile: Agile Release Trains


The Scaled Agile Framework (SAFe) is one of the frameworks that has emerged for scaling Agile development. The Agile Release Train (ART) is one of the core concepts introduced as part of SAFe. An ART is a group of logically related work (train cars) traveling in the same direction (destination) at a predictable cadence (speed). At its heart, nearly everyone on a train is focused on achieving a common goal: delivering all types of cargo to a destination. The ART in SAFe is a long-lived team of teams organized around a significant value stream and working toward a common goal. The attributes of an ART interlock to keep it on track to deliver value by ensuring it adheres to Agile and lean principles. An ART has the following attributes:

  1. The team of teams within an ART typically encompasses 50 to 125 people. The size limits reflect that large groups have issues with communication and cohesion, which is problematic when pursuing a common goal, while smaller groups have trouble absorbing the process overhead, which makes development expensive.
  2. ART teams identify and use additional roles not typically found in some Agile frameworks, such as the Release Train Engineer, release managers, product management, DevOps and shared resources, to name a few.
  3. Teams work with the ART on a long-term basis, 18+ months.
  4. The majority of team members are 100% dedicated to the teams they are involved with.
  5. ARTs are driven by business goals based on the organization's strategic vision and themes, portfolio management constraints and enterprise architectural vision.
  6. ARTs are operationalized and synchronized by cadence at two levels. At the macro level, ARTs leverage a longer cadence called a product increment, which is generally 8 to 12 weeks; at the team level, Scrum teams run on a shorter cadence (two weeks is typical). A toy sketch of this two-level cadence follows this list.
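Here is that toy sketch, purely as an illustration: the start date, the 10-week increment length and the 2-week iteration length are assumptions within the ranges mentioned above, not part of SAFe itself.

from datetime import date, timedelta

def iteration_calendar(pi_start, pi_weeks=10, iteration_weeks=2):
    """Slice a product increment into fixed-length team iterations."""
    iterations = []
    start = pi_start
    for _ in range(pi_weeks // iteration_weeks):
        end = start + timedelta(weeks=iteration_weeks) - timedelta(days=1)
        iterations.append((start, end))
        start = end + timedelta(days=1)
    return iterations

for number, (start, end) in enumerate(iteration_calendar(date(2014, 11, 10)), 1):
    print(f"Iteration {number}: {start} to {end}")  # five 2-week iterations in a 10-week increment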

The Agile Release Train is a tool to help scale Agile to the organizational/product level. Conceptually the ART is similar to the operating system (OS) release trains I was first exposed to in the 1990s. The OS manufacturer, and every product manufacturer I have observed from automobile to ATM makers, plans the introduction of features over some period of time. An ART provides a mechanism to translate that plan into Agile teams so that organizations can communicate a forecast for the delivery of features to stakeholders and customers. Some degree of predictability is critical if other businesses or parts of a business need to use your output so they can plan their own business. Real profits and jobs usually ride on those release trains.

Other frameworks can be used to scale Agile, including DAD, DSDM and arguably Scrum itself. The need for frameworks with additional overhead to scale Agile is controversial. Whether you are an adherent of scaled frameworks or not is less important than understanding the solutions these frameworks deliver. An Agile Release Train is a tool to organize activity to deliver a common business goal while being both predictable and responsive to a dynamic business environment. At its most basic level, isn't that just a definition of Agile?


Categories: Process Management

Material Design Gets More Material

Google Code Blog - Mon, 11/03/2014 - 20:32

A few weeks ago, we published our first significant update to the material design guidelines. Today, we’re addressing even more of the comments and suggestions you’ve provided with another major update to the spec. Check out some of the highlights below.

  • Links to Android developer docs. One of the biggest requests we’ve heard from developers and designers is that the guidelines should offer quicker access to related developer documentation. We’ve started to add key links for Android developers, and we’re committed to more tightly integrating the spec with both the Polymer and Android docs.
  • A new What is Material? section. While the introduction offers a succinct bird's-eye view of material design, it left open some questions that were previously only answered in video content (the Google I/O videos and DesignBytes). This new section dives deeper into the environment established by material design, including material properties and how we work with objects in 3D space.
  • A What’s New section. We view the material design spec as a living document, meaning we’ll be continually adding and refining content. The What’s New section, which was a highly requested feature, should help designers track the spec’s evolution.

In addition to these major new features and sections, there’s even more in today’s update, including:

Stay tuned for even more updates as we continue to integrate the relevant developer docs, refine existing spec sections, and expand the spec to cover more ground. And as always, remember to leave your suggestions on Google+!

Posted by Roman Nurik, Design Advocate
Categories: Programming

Improve small job completion times by 47% by running full clones.

The idea is that most jobs are small: researchers found 82% of jobs on Facebook's cluster had fewer than 10 tasks, and clusters have a median utilization of under 20%. Since small jobs are particularly sensitive to stragglers, the audacious solution is to proactively launch clones of a job as it is submitted and pick the result from the earliest clone to finish. The result: the average completion time of small jobs improved by 47% using cloning, at the cost of just 3% extra resources.
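The pattern itself is easy to picture: run a handful of copies of the same small job and keep whichever finishes first. Here is a toy Python sketch of that speculative-cloning idea (my illustration only, not Dolly's actual scheduler; the job and its timings are simulated):

import concurrent.futures
import random
import time

def run_job(job_id, clone_id):
    # Simulate a small job; roughly 1 in 5 runs is a straggler that takes much longer.
    time.sleep(random.uniform(0.1, 0.3) + (2.0 if random.random() < 0.2 else 0.0))
    return f"result of {job_id} from clone {clone_id}"

def run_with_clones(job_id, clones=3):
    with concurrent.futures.ThreadPoolExecutor(max_workers=clones) as pool:
        futures = [pool.submit(run_job, job_id, c) for c in range(clones)]
        done, _ = concurrent.futures.wait(futures, return_when=concurrent.futures.FIRST_COMPLETED)
        # Take the earliest clone's answer; in this toy version the slower clones
        # simply run to completion and are ignored.
        return next(iter(done)).result()

print(run_with_clones("small-query"))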

For more details take a look at the very interesting Why Let Resources Idle? Aggressive Cloning of Jobs with Dolly.

Categories: Architecture

Drive Business Transformation by Reenvisioning Your Operations

When you create your digital vision, you have a few places to start.

One place to start is by reenvisioning your customer experience. Another is by reenvisioning your operations. And a third is by reenvisioning your business model.

In this post, let’s take a look at reenvisioning your operations.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned from companies that are digital masters that created their digital visions and are driving business change.

Start with Reenvisioning Operations When Financial Performance is Tied to Your Supply Chain

If your financial performance is closely connected to the performance of your core operations and supply chain, then reenvisioning your operations can be a great place to start.

Via Leading Digital:

“Organizations whose fortunes are closely tied to the performance of their core operations and supply chains often start with reenvisioning their operations.”

Increase Process Visibility, Decision Making Speed, and Collaboration

There are many great business reasons to focus on improving your operations.   A few of the best include increasing process visibility, increasing speed of decision making, and improving collaboration across the board.

Via Leading Digital:

“The business drivers of operational visions include efficiency and the need to integrate disparate operations.  Executives may want to increase process visibility and decision making speed or to collaborate across silos.”

Procter & Gamble Reenvisions Operational Excellence

Procter & Gamble changed its game by focusing on operational excellence.  The key was being able to manage the business in real time so it could keep up with an ever-changing world.

Via Leading Digital:

“For instance, in 2011, Procter & Gamble put operational excellence at the center of its digital vision: 'Digitizing P&G will enable us to manage the business in real time and on a very demand-driven basis.  We'll be able to collaborate more effectively and efficiently, inside and outside the company.'  Other companies in industries from banking to manufacturing, have transformed themselves through similar operationally focused visions.”

Operational Visions are Key to Businesses that Sell to Other Businesses

If your business is a provider of products or services to other businesses, then your operational vision is especially important as it can have a ripple effect on what your customers do.

Via Leading Digital:

“Operational visions are especially useful for businesses that sell largely to other businesses.  When Codelco first launched its Codelco Digital initiative, the aim was to improve mining operations radically through automation and data integration.  As we described in chapter 3, Codelco continued to extend this vision to include new mining automation and integration operations-control capability.  Now, executives are envisioning radical new ways to redefine the mining process and possibly the industry itself.”

Operational Visions Can Change the Industry

When you change your operations, you can change the industry.

Via Leading Digital:

“The operational visions of some companies go beyond an internal perspective to consider how the company might change operations in its industry or even with its customers.“

Changes to Operations Can Enable Customers to Change Their Own Operations

When you improve your operations,  you can help others move up the solution stack.

Via Leading Digital:

“For example, aircraft manufacturer Boeing envisions how changes to its products may enable customers to change their own operations.  'Boeing believes the future of the aviation industry lies in 'the digital airline,' the company explained on its website. 'To succeed in the marketplace, airlines and their engineering and IT teams must take advantage of the increasing amount of data coming off of airplanes, using advanced analytics and airplane technology to take operational efficiency to the next level.' “

Get Information to the People Who Need it Most, When They Need It Most

One of the best things you can do when you improve operations is to put the information in the hands of the people that need it most, when they need it most, where they need it most.

Via Leading Digital:

“The manufacturer goes on to paint a clear picture of what a digital airline means in practice: 'The key to the digital airline is delivering secure, detailed operational and maintenance information to the people who need it most, when they need it most.  That means that engineering will share data with IT, but also with the finance, accounting, operational and executive functions.' “

Better Operations Enables New Product Designs and Services

When you improve operations, you enable and empower business breakthroughs in all parts of the business.

Via Leading Digital:

“The vision will improve operations at Boeing's customers, but will also help Boeing's operations as the information from airplanes should help the company identify new ways to improve its product designs and services.  The data may also lead to new business models as Boeing uses the information to provide new services to customers.”

When you create your digital vision, while there are lots of places you could start, the key is to take an end-to-end view.

If your financial performance is tied to your core operations and your supply chain, and/or you are a provider of products and services to others, then consider starting your business transformation by reenvisioning your operations.

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

The Future of Jobs

Management Innovation is at the Top of the Innovation Stack

Reenvision Your Customer Experience

Categories: Architecture, Programming

How to Create a Simple Backup Solution That You Can Trust

Making the Complex Simple - John Sonmez - Mon, 11/03/2014 - 16:00

Backing up your data is really important. We’ve all heard too many stories of hard drives crashing or computers getting lost or stolen without having a backup, and their owners suffering a horrible loss of irreplaceable data. So, if we all know that backing up data is so important, why don’t we do it? Well, […]

The post How to Create a Simple Backup Solution That You Can Trust appeared first on Simple Programmer.

Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Mon, 11/03/2014 - 05:27

Never attribute to malice that which is adequately explained by stupidity - Hanlon's Razor

 

Categories: Project Management

SPaMCAST 314 – Crispin, Gregory, More Agile Testing

http://www.spamcast.net

Listen to the interview here!

SPaMCAST 314 features our interview with Janet Gregory and Lisa Crispin.  We discussed their new book More Agile Testing. Testing is core to success in all forms of development, and Agile development and testing are no different. More Agile Testing builds on Gregory and Crispin’s first collaborative effort, the extremely successful Agile Testing, to ensure that everyone who uses an Agile framework delivers the most value possible.

The Bios!

Janet Gregory is an agile testing coach and process consultant with DragonFire Inc. Janet is the co-author with Lisa Crispin of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009) and More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley, 2014). She is also a contributor to 97 Things Every Programmer Should Know. Janet specializes in showing Agile teams how testers can add value in areas beyond critiquing the product; for example, guiding development with business-facing tests. Janet works with teams to transition to Agile development, and teaches Agile testing courses and tutorials worldwide. She contributes articles to publications such as Better Software, Software Test & Performance Magazine and Agile Journal, and enjoys sharing her experiences at conferences and user group meetings around the world. For more about Janet’s work and her blog, visit www.janetgregory.ca. You can also follow her on Twitter @janetgregoryca.

Lisa Crispin is the co-author, with Janet Gregory, of More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley 2014), Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009), co-author with Tip House of Extreme Testing (Addison-Wesley, 2002), and a contributor to Experiences of Test Automation by Dorothy Graham and Mark Fewster (Addison-Wesley, 2011) and Beautiful Testing (O’Reilly, 2009). Lisa was honored by her peers by being voted the Most Influential Agile Testing Professional Person at Agile Testing Days 2012. Lisa enjoys working as a tester with an awesome Agile team. She shares her experiences via writing, presenting, teaching and participating in agile testing communities around the world. For more about Lisa’s work, visit www.lisacrispin.com, and follow @lisacrispin on Twitter.

Call to action!

What are the two books that have most influenced your career (business, technical or philosophical)?  Send the titles to spamcastinfo@gmail.com.  What will we do with this list?  We have two ideas.  First, we will compile a list and publish it on the blog.  Second, we will use the list to drive “Re-read” Saturday. Re-read Saturday is an exciting new feature we will begin on the Software Process and Measurement blog on November 8th with a re-read of Leading Change. So feel free to choose your platform: send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 315 features our essay on Scrum Masters.  The Scrum Master is the voice of the process at the team level and a critical member of every Agile team. The team’s need for a Scrum Master is not transitory, because the Scrum Master and the team evolve together.

Upcoming Events

DCG Webinars:

How to Split User Stories
Date: November 20th, 2014
Time: 12:30pm EST
Register Now

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST
Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Loewensberg re-animated

Phil Trelford's Array - Sun, 11/02/2014 - 14:51

Verena Loewensberg was a Swiss painter and graphic designer associated with the concrete art movement. I came across some of her work while searching for pieces by Richard Paul Lohse.

Again I’ve selected some pieces and attempted to draw them procedurally.

Spiral of circles and semi-circles

Based on Verena Loewensberg’s Untitled, 1953


The piece was constructed from circles and semi-circles arranged around 5 concentric rectangles drawn from the inside out. The lines of the rectangles are drawn in a specific order. The size of each circle seems only to be related to the size of its rectangle. If a placed circle would overlap an existing circle, it is drawn as a semi-circle.

Shaded spiral

Again based on Verena Loewensberg’s Untitled, 1953


Here I took the palette of another one of Verena’s works.

Four-colour Snake

Based on Verena Loewensberg’s Untitled, 1971


The original piece reminded me of the snake video game.

Rotating Red Square

Based on Verena Loewensberg’s Untitled, 1967


This abstract piece was a rotated red square between blue and green squares.

Multi-coloured Concentric Circles

Based on Verena Loewensberg’s Untitled, 1972


This piece is made up of overlapping concentric circles with the bottom set clipped by a rectangular region. The shape and colour remind me a little of the Mozilla Firefox logo.

Method

Each image was procedurally generated using the Windows Forms graphics API inside an F# script. Typically a parameterized function is used to draw a specific frame to a bitmap, which can then be saved out as an animated GIF.
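The same parameterized-frame idea carries over to other stacks. As a rough illustration only (the original F# scripts are not reproduced here), a minimal Python sketch with Pillow that renders frames and writes them out as an animated GIF might look like this:

from PIL import Image, ImageDraw

def draw_frame(t, size=200):
    """Draw one parameterized frame: concentric circles that grow with t."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    centre = size // 2
    for i in range(1, 6):
        r = i * 15 + t * 2
        draw.ellipse([centre - r, centre - r, centre + r, centre + r], outline="black")
    return img

frames = [draw_frame(t) for t in range(10)]
frames[0].save("animation.gif", save_all=True, append_images=frames[1:], duration=100, loop=0)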

I guess you could think of each piece as a coding kata.

Scripts

Have fun!

Categories: Programming

Announcing Re-read Saturday!


There are a number of books that have had a huge impact on my life and career. Many readers of the Software Process and Measurement blog and listeners to the podcast have expressed similar sentiments. Re-read Saturday is a new feature that will begin next Saturday. We will begin Re-read Saturday with a re-read of Leading Change. The re-read will extend over six to eight Saturdays as I share my current interpretation of a book that has had a major impact on how I think about the world around me. When the re-read of Leading Change is complete we will dive into the list of books I am compiling from you, the readers and listeners.

Currently the list includes:

  • The Seven Habits of Highly Effective People – Stephen Covey (re-read in early 2014)
  • Out of the Crisis – W. Edwards Deming
  • To Be or Not to Be Intimidated? – Robert Ringer
  • Pulling Your Own Strings – Wayne Dyer
  • The Goal: A Process of Ongoing Improvement – Eliyahu M. Goldratt

So far each book has gotten one “vote” apiece.

How can you get involved in Re-read Saturday? You can begin by answering, “What are the two books that have most influenced your career (business, technical or philosophical)?” Send the titles to spamcastinfo@gmail.com, or post the titles on the blog, Facebook or Twitter with the hashtag #SPaMCAST. We will continue to add to the list and republish it on the blog. When we are close to the end of the re-read of Leading Change we will publish a poll based on the current list (unless there is a clear leader) to select the next book to re-read.

There are two other ways to get involved. First, add your perspective to each of the re-read entries by commenting. Second, when the re-read is complete I will invite all of the commenters to participate in a discussion (just like a book club) that will be recorded and published on the Software Process and Measurement Cast.

Re-read Saturday begins next week, but you can get involved today!

 


Categories: Process Management

Splitting Users Stories: More Anti-patterns

This hood is making an ineffective split of his face.

An anti-pattern is a typical response to a problem that is usually ineffective. There are numerous patterns for splitting user stories that are generally effective and there are an equal number that are generally ineffective. Splitting User Stories: When Things Go Wrong described some of the common anti-patterns such as waterfall splits, architectural splits, splitting with knowledge and keeping non-value remainders. Here are other equally problematic patterns for splitting user stories that I have observed since writing the original article:

  1. Vendor splits. Vendor splits are splits made based on work assignment or reporting structure. Organizations might use personnel from one company to design a solution, another company to code and another to test functionality. Teams and stories are constructed based on the organization the team’s members report to rather than on a holistic view of the functionality. I recently observed a project that had split stories between project management and control activities and technical activities. The rationale was that since the technical component of the project had been outsourced, the work should be put in separate stories so that it would be easy to track the work that was the vendor’s responsibility to complete. Scrumban or Kanban is often a better choice in these scenarios than other lean/Agile techniques.
  2. Generic persona splits. Splitting stories based on a generic persona, or into stories where only a generic persona can be identified, typically suggests that team members are unsure who really needs the functionality in the user story. Splitting stories without knowing who the story is trying to serve will make it difficult to hold the conversations needed to develop and deliver the value identified in the user story. Conversations are critical for fleshing out requirements and generating feedback in Agile projects.
  3. Too-thin splits. While rare, occasionally teams get obsessed with splitting user stories into thinner and thinner slices. While I have often said that smaller stories are generally better, there comes a time when splitting further is not worth the time it takes to make the split. Teams that get overly obsessed with splitting user stories into thinner and thinner slices generally spend more time planning and less time delivering. INVEST should be applied to each user story as criteria to ensure good splits. In addition to using INVEST, each team should adopt a sizing guideline that maximizes the team’s productivity/velocity. Guidelines of this type are a reflection of the capacity of the team to a greater extent and the capacity of the organization to a lesser extent.

Splitting stories well can deliver huge benefits to the team and to the organization. Benefits include increased productivity and velocity, improved quality and higher morale. Splitting user stories badly delivers none of the value we would expect from the process and may even cause teams, stakeholders and whole organizations to develop a negative impression of Agile.


Categories: Process Management

R: Converting a named vector to a data frame

Mark Needham - Sat, 11/01/2014 - 00:47

I’ve been playing around with igraph’s page rank function to see who the most central nodes in the London NoSQL scene are and I wanted to put the result in a data frame to make the data easier to work with.

I started off with a data frame containing pairs of people and the number of events that they’d both RSVP’d ‘yes’ to:

> library(dplyr)
> data %>% arrange(desc(times)) %>% head(10)
       p.name     other.name times
1  Amit Nandi Anish Mohammed    51
2  Amit Nandi Enzo Martoglio    49
3       louis          zheng    46
4       louis     Raja Kolli    45
5  Raja Kolli Enzo Martoglio    43
6  Amit Nandi     Raja Kolli    42
7       zheng Anish Mohammed    42
8  Raja Kolli          Rohit    41
9  Amit Nandi          zheng    40
10      louis          Rohit    40

I actually had ~ 900,000 such rows in the data frame:

> length(data[,1])
[1] 985664

I ran page rank over the data set like so:

g = graph.data.frame(data, directed = F)
pr = page.rank(g)$vector

If we evaluate pr we can see the person’s name and their page rank:

> head(pr)
Ioanna Eirini          Mjay       Baktash      madhuban    Karl Prior   Keith Bolam 
    0.0002190     0.0001206     0.0001524     0.0008819     0.0001240     0.0005702

I initially tried to convert this to a data frame with the following code…

> head(data.frame(pr))
                     pr
Ioanna Eirini 0.0002190
Mjay          0.0001206
Baktash       0.0001524
madhuban      0.0008819
Karl Prior    0.0001240
Keith Bolam   0.0005702

…which unfortunately didn’t create a column for the person’s name.

> colnames(data.frame(pr))
[1] "pr"

Nicole pointed out that I actually had a named vector and would need to explicitly extract the names from that vector into the data frame. I ended up with this:

> prDf = data.frame(name = names(pr), rank = pr)
> head(prDf)
                       name      rank
Ioanna Eirini Ioanna Eirini 0.0002190
Mjay                   Mjay 0.0001206
Baktash             Baktash 0.0001524
madhuban           madhuban 0.0008819
Karl Prior       Karl Prior 0.0001240
Keith Bolam     Keith Bolam 0.0005702

We can now sort the data frame to find the most central people on the NoSQL London scene based on meetup attendance:

> data.frame(prDf) %>%
+   arrange(desc(pr)) %>%
+   head(10)
             name     rank
1           louis 0.001708
2       Kannappan 0.001657
3           zheng 0.001514
4    Peter Morgan 0.001492
5      Ricki Long 0.001437
6      Raja Kolli 0.001416
7      Amit Nandi 0.001411
8  Enzo Martoglio 0.001396
9           Chris 0.001327
10          Rohit 0.001305
Categories: Programming

iOS localization tricks for Storyboard and NIB files

Xebia Blog - Fri, 10/31/2014 - 23:36

Localization in iOS from Interface Builder designed UI has never been without problems. The right way of doing localization is by having multiple Strings files. Duplicating Nib or Storyboard files and then changing the language is not an acceptable method. Luckily Xcode 5 has improved this for Storyboards by introducing Base Localization, but I've personally come across several situations where this didn't work at all or seemed buggy. Also, Nib (Xib) files without a ViewController don't support it.

In this post I'll show a couple of tricks that can help with the Localization of Storyboard and Nib files.

Localized subclasses

When you use this method, you create specialized subclasses of view classes that handle the localization in the awakeFromNib() method. This method is called for each view that is loaded from a Storyboard or Nib and all properties that you've set in Interface Builder will be set already.

For UILabels, this means getting the text property, localizing it and setting the text property again.

Using Swift, you can create a single file (e.g. LocalizationView.swift) in your project and put all your subclasses there. Then add the following code for the UILabel subclass:

class LocalizedLabel : UILabel {
    override func awakeFromNib() {
        if let text = text {
            self.text = NSLocalizedString(text, comment: "")
        }
    }
}

Now you can drag a label onto your Storyboard and fill in the text in your base language as you would normally. Then change the Class to LocalizedLabel and it will get the actual label from your Localizable.strings file.


No need to make any outlets or write any code to change it!

You can do something similar for UIButtons, even though they don't have a single property for the text on a button.

class LocalizedButton : UIButton {
    override func awakeFromNib() {
        for state in [UIControlState.Normal, UIControlState.Highlighted, UIControlState.Selected, UIControlState.Disabled] {
            if let title = titleForState(state) {
                setTitle(NSLocalizedString(title, comment: ""), forState: state)
            }
        }
    }
}

This will even allow you to set different labels for the different states like Normal and Highlighted.

User Defined Runtime Attributes

Another way is to use the User Defined Runtime Attributes. This method requires slightly more work, but has two small advantages:

  1. You don't need to use subclasses. This is nice when you already use another custom subclass for your labels, buttons and other view classes.
  2. Your keys in the Strings file and texts that show up in the Storyboard don't need to be the same. This works well when you use localization keys such as myCoolTableViewController.header.subtitle. It doesn't look very nice to see those everywhere in your Interface Builder labels and buttons.

So how does this work? Instead of creating a subclass, you instead add a computed property to an existing view class. For UILabels you use the following code:

extension UILabel {

    var localizedText: String {
        set (key) {
            text = NSLocalizedString(key, comment: "")
        }
        get {
            return text!
        }
    }

}

Now you can add a User Defined Runtime Attribute with the key localizedText to your UILabel and have the Localization key as its value.


Also here if you want to make this work for buttons, it becomes slightly more complicated. You will have to add a property for each state that needs a label.

extension UIButton {
    var localizedTitleForNormal: String {
        set (key) {
            setTitle(NSLocalizedString(key, comment: ""), forState: .Normal)
        }
        get {
            return titleForState(.Normal)!
        }
    }

    var localizedTitleForHighlighted: String {
        set (key) {
            setTitle(NSLocalizedString(key, comment: ""), forState: .Highlighted)
        }
        get {
            return titleForState(.Highlighted)!
        }
    }
}
Conclusion

Always try and pick the best solution for your problem. Use Storyboard Base Localization if that works well for you. If it doesn't, use the approach with subclasses if you don't need another custom subclass and you don't mind using your base language strings as localization keys. Otherwise, use the last approach with User Defined Runtime Attributes.

Better search on developers.google.com

Google Code Blog - Fri, 10/31/2014 - 18:56
Posted by Aaron Karp, Product Manager, developers.google.com

We recently launched a major upgrade for the search box on developers.google.com: it now provides clickable suggestions as you type.
We hope this becomes your new favorite way to navigate developers.google.com, and to make things even easier we enabled “/” as a shortcut key for accessing the search field on any page.

If you have any feedback on this new feature, please let us know by leaving a comment below. And we have more exciting upgrades coming soon, so stay tuned!

Categories: Programming

Stuff The Internet Says On Scalability For October 31st, 2014

Hey, it's HighScalability time:


A CT scanner without its clothes on. Sexy.
  • 255Tbps: all of the internet’s traffic on a single fiber; 864 million: daily Facebook users
  • Quotable Quotes:
    • @chr1sa: "No dominant platform-level software has emerged in the last 10 years in closed-source, proprietary form”
    • @joegravett: Homophobes boycotting Apple because of Tim Cook's brave announcement are going to lose it when they hear about Turing.
    • @CloudOfCaroline: #RICON MySQL revolutionized Scale-out. Why? Because it couldn't Scale-up. Turned a flaw into a benefit - @martenmickos
    • chris dixon: We asked for flying cars and all we got was the entire planet communicating instantly via pocket supercomputers
    • @nitsanw: "In the majority of cases, performance will be programmer bound" - Barker's Law
    • @postwait: @coda @antirez the only thing here worth repeating: we should instead be working with entire distributions (instead of mean or q(0.99))
    • Steve Johnson: inventions didn't come about in a flash of light — the veritable Eureka! moment — but were rather the result of years' worth of innovations happening across vast networks of creative minds.

  • On how Google is its own VC. cromwellian: The ads division is mostly firewalled off from the daily concerns of people developing products at Google. They supply cash to the treasury, people think up cool ideas and try to implement them. It works just like startups, where you don't always know what your business model is going to be. Gmail started as a 20% project, not as a grand plan to create an ad channel. Lots of projects and products at Google have no business model, no revenue model, the company does throw money at projects and "figure it out later" how it'll make money. People like their apps more than the web. Mobile ads are making a lot of money.  

  • Hey mobile, what's for dinner? "The world," says chef Benedict Evans, who has prepared for your pleasure a fine gourmet tasting menu: Presentation: mobile is eating the world. Smart phones are now as powerful as Thor and Hercules combined. Soon everyone will have a smart phone. And when tech is fully adopted, it disappears. 

  • How much bigger is Amazon’s cloud vs. Microsoft and Google?: Amazon’s cloud revenue at more than $4.7 billion this year. TBR pegs Microsoft’s public cloud IaaS revenue at $156 million and Google’s at $66 million. If those estimates are correct, then Amazon’s cloud revenue is 30 times bigger than Microsoft’s.

  • Great discussion on the Accidental Tech Podcast (at about 25 minutes in) on how the time of open APIs has ended. People who made Twitter clients weren't competing with Twitter, they were helping Twitter become who they are today. For Apple, developers add value to their hardware and since Apple makes money off the hardware this is good for Apple, because without apps Apple hardware is way less valuable. With their new developer focus Twitter and developer interests are still not aligned as Twitter is still worried about clients competing with them. Twitter doesn't want to become an infrastructure company because there's no money in it. In the not so distant past services were expected to have an open API, in essence services were acting as free infrastructure, just hoping they would become popular enough that those dependent on the service could be monetized. New services these days generally don't have full open APIs because it's hard to justify as a business case. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

hdiutil: could not access / create failed – Operation canceled

Mark Needham - Fri, 10/31/2014 - 10:45

Earlier in the year I wrote a blog post showing how to build a Mac OS X DMG file for a Java application and I recently revisited this script to update it to a new version and ran into a frustrating error message.

I tried to run the following command to create a new DMG file from a source folder…

$ hdiutil create -volname "DemoBench" -size 100m -srcfolder dmg/ -ov -format UDZO pack.temp.dmg

…but was met with the following error message:

...could not access /Volumes/DemoBench/DemoBench.app/Contents/Resources/app/database-agent.jar - Operation canceled
 
hdiutil: create failed - Operation canceled

I was initially a bit stumped and thought maybe the flags to hdiutil had changed but a quick look at the man page suggested that wasn’t the issue.

I decided to go back to my pre-command-line approach for creating a DMG – Disk Utility – and see if I could create it that way. This helped reveal the actual problem: the 100 MB volume was too small to hold the contents.


I increased the volume size to 150 MB…

$ hdiutil create -volname "DemoBench" -size 150m -srcfolder dmg/ -ov -format UDZO pack.temp.dmg

and all was well:

....................................................................................................
..........................................................................
created: /Users/markneedham/projects/neo-technology/quality-tasks/park-bench/database-agent-desktop/target/pack.temp.dmg

And this post will serve as documentation to stop it catching me out next time!

Categories: Programming

Azure: Announcing New Real-time Data Streaming and Data Factory Services

ScottGu's Blog - Scott Guthrie - Fri, 10/31/2014 - 07:39

The last three weeks have been busy ones for Azure.  Two weeks ago we announced a partnership with Docker to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Last week we held our Cloud Day event and announced our new G-Series of Virtual Machines as well as Premium Storage offering.  The G-Series VMs provide the largest VM sizes available in the public cloud today (nearly 2x more memory than the largest AWS offering, and 4x more memory than the largest Google offering).  The new Premium Storage offering (which will work with both our D-series and G-series of VMs) will support up to 32TB of storage per VM, >50,000 IOPS of disk IO per VM, and enable sub-1ms read latency.  Combined they provide an enormous amount of power that enables you to run even bigger and better solutions in the cloud.

Earlier this week, we officially opened our new Azure Australia regions – which are our 18th and 19th Azure regions open for business around the world.  Then at TechEd Europe we announced another round of new features – including the launch of the new Azure MarketPlace, a bunch of great network improvements, our new Batch computing service, general availability of our Azure Automation service and more.

Today, I’m excited to blog about even more new services we have released this week in the Azure Data space.  These include:

  • Event Hubs: a scalable service for ingesting and storing data from websites, client apps, and IoT sensors.
  • Stream Analytics: a cost-effective event processing engine that helps uncover real-time insights from event streams.
  • Data Factory: a service that enables better information production by orchestrating and managing diverse data and data movement.

Azure Event Hubs is now generally available, and the new Azure Stream Analytics and Data Factory services are now in public preview.

Event Hubs: Log Millions of Events per Second in Near Real Time

The Azure Event Hub service is a highly scalable telemetry ingestion service that can log millions of events per second in near real time.  You can use the Event Hub service to collect data/events from any IoT device, from any app (web, mobile, or a backend service), or via feeds like social networks.  We are using it internally within Microsoft to monitor some of our largest online systems.

Once you collect events with Event Hub you can then analyze the data using any real-time analytics system (like Apache Storm or our new Azure Stream Analytics service) and store/transform it into any data storage system (including HDInsight and Hadoop based solutions).

Event Hub is delivered as a managed service on Azure (meaning we run, scale and patch it for you and provide an enterprise SLA).  It delivers:

  • Ability to log millions of events per second in near real time
  • Elastic scaling support with the ability to scale-up/down with no interruption
  • Support for multiple protocols including support for HTTP and AMQP based events
  • Flexible authorization and throttling device policies
  • Time-based event buffering with event order preservation

The pricing model for Event Hubs is very flexible – for just $11/month you can provision a basic Event Hub with guaranteed performance capacity to capture 1 MB/sec of events sent to your Event Hub.  You can then provision as many additional capacity units as you need if your event traffic goes higher. 

Getting Started with Capturing Events

You can create a new Event Hub using the Azure Portal or via the command-line.  Choose New->App Service->Service Bus->Event Hub in the portal to do so:


Once created, events can be sent to an Event Hub with either a strongly-typed API (e.g. .NET or Java client library) or by just sending a raw HTTP or AMQP message to the service.  Below is a simple example of how easy it is to log an IoT event to an Event Hub using just a standard HTTP post request.  Notice the Authorization header in the HTTP post – you can use this to optionally enable flexible authentication/authorization for your devices:

POST https://your-namespace.servicebus.windows.net/your-event-hub/messages?timeout=60&api-version=2014-01 HTTP/1.1
Authorization: SharedAccessSignature sr=your-namespace.servicebus.windows.net&sig=tYu8qdH563Pc96Lky0SFs5PhbGnljF7mLYQwCZmk9M0%3d&se=1403736877&skn=RootManageSharedAccessKey
Content-Type: application/atom+xml;type=entry;charset=utf-8
Host: your-namespace.servicebus.windows.net
Content-Length: 42
Expect: 100-continue

{ "DeviceId":"dev-01", "Temperature":"37.0" }

Your Event Hub can collect up to millions of messages per second like this, each storing whatever data schema you want within them, and the Event Hubs service will store them in-order for you to later read/consume.
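If you would rather use a client library than hand-roll the HTTP request, the same call is a few lines of Python with the requests package. This is just a sketch mirroring the example above; the namespace, hub name and SAS token are the same placeholders, with the signature abbreviated:

import json
import requests

url = ("https://your-namespace.servicebus.windows.net/"
       "your-event-hub/messages?timeout=60&api-version=2014-01")
headers = {
    # Same SharedAccessSignature token as in the raw HTTP example (abbreviated here).
    "Authorization": "SharedAccessSignature sr=your-namespace.servicebus.windows.net&sig=...&se=1403736877&skn=RootManageSharedAccessKey",
    "Content-Type": "application/atom+xml;type=entry;charset=utf-8",
}
event = {"DeviceId": "dev-01", "Temperature": "37.0"}

response = requests.post(url, headers=headers, data=json.dumps(event))
print(response.status_code)  # a 201 status indicates the event was accepted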

Downstream Event Processing

Once you collect events, you no doubt want to do something with them.  Event Hubs includes an intelligent processing agent that allows for automatic partition management and load distribution across readers.  You can implement any logic you want within readers, and the data sent to the readers is delivered in the order it was sent to the Event Hub.

In addition to supporting the ability for you to write custom Event Readers, we also have two easy ways to work with pre-built stream processing systems: including our new Azure Stream Analytics Service and Apache Storm.  Our new Azure Stream Analytics service supports doing stream processing directly from Event Hubs, and Microsoft has created an Event Hubs Storm Spout for use with Apache Storm clusters.

The below diagram helps express some of the many rich ways you can use Event Hubs to collect and then hand-off events/data for processing:


Event Hubs provides a super flexible and cost effective building-block that you can use to collect and process any events or data you can stream to the cloud.  It is very cost effective, and provides the scalability you need to meet any needs.

Learning More about Event Hubs

For more information about Azure Event Hubs, please review the following resources:

Stream Analytics: Distributed Stream Processing Service for Azure

I’m excited to announce the preview of our new Azure Stream Analytics service – a fully managed real-time distributed stream computation service that provides low latency, scalable processing of streaming data in the cloud with an enterprise grade SLA. The new Azure Stream Analytics service easily scales from small projects with just a few KB/sec of throughput to a gigabyte/sec or more of streamed data messages/events.  

Our Stream Analytics pricing model enables you to run low-throughput streaming workloads continuously at low cost, and to scale up only as your business needs increase. We do this while maintaining built-in guarantees of event delivery and state management for fast recovery, which enables mission-critical business continuity.

Dramatically Simpler Developer Experience for Stream Processing Data

Stream Analytics supports a SQL-like language that dramatically lowers the bar of the developer expertise required to create a scalable stream processing solution. A developer can simply write a few lines of SQL to do common operations including basic filtering, temporal analysis operations, joining multiple live streams of data with other static data sources, and detecting stream patterns (or lack thereof).

This dramatically reduces the complexity and time it takes to develop, maintain and apply time-sensitive computations on real-time streams of data. Most other streaming solutions available today require you to write complex custom code, but with Azure Stream Analytics you can write simple, declarative and familiar SQL.

Fully Managed Service that is Easy to Setup

With Stream Analytics you can dramatically accelerate how quickly you can derive valuable real time insights and analytics on data from devices, sensors, infrastructure, or applications. With a few clicks in the Azure Portal, you can create a streaming pipeline, configure its inputs and outputs, and provide SQL-like queries to describe the desired stream transformations/analysis you wish to do on the data. Once running, you are able to monitor the scale/speed of your overall streaming pipeline and make adjustments to achieve the desired throughput and latency.

You can create a new Stream Analytics Job in the Azure Portal, by choosing New->Data Services->Stream Analytics:


Setup Streaming Data Input

Once created, your first step will be to add a Streaming Data Input.  This allows you to indicate where the data you want to perform stream processing on is coming from.  From within the portal you can choose Inputs->Add An Input to launch a wizard that enables you to specify this:


We can use the Azure Event Hub Service to deliver us a stream of data to perform processing on. If you already have an Event Hub created, you can choose it from a list populated in the wizard above.  You will also be asked to specify the format that is being used to serialize incoming events in the Event Hub (e.g. JSON, CSV or Avro formats).

Setup Output Location

The next step to developing our Stream Analytics job is to add a Streaming Output Location.  This will configure where we want the output results of our stream processing pipeline to go.  We can choose to easily output the results to Blob Storage, another Event Hub, or a SQL Database:


Note that being able to use another Event Hub as a target provides a powerful way to connect multiple streams into an overall pipeline with multiple steps.

Write Streaming Queries

Now that we have our input and output sources configured, we can now write SQL queries to transform, aggregate and/or correlate the incoming input (or set of inputs in the event of multiple input sources) and output them to our output target.  We can do this within the portal by selecting the QUERY tab at the top.


There are a number of interesting queries you can write to process the incoming stream of data.  For example, in the previous Event Hub section of this blog post I showed how you can use an HTTP POST command to submit JSON based temperature data from an IoT device to an Event Hub with data in JSON format like so:

{ "DeviceId":"dev-01", "Temperature":"37.0" }

When multiple devices are streaming events simultaneously into our Event Hub like this, it feeds into our Stream Analytics job as a continuous sequence of such data events.

Wouldn’t it be interesting to be able to analyze this data using a time-window perspective instead?  For example, it would be useful to calculate in real-time what the average temperature of each device was in the last 5 seconds of multiple readings.

With the Stream Analytics Service we can now dynamically calculate this over our incoming live stream of data just by writing a SQL query like so:

SELECT DateAdd(second,-5,System.TimeStamp) as WinStartTime, system.TimeStamp as WinEndTime, DeviceId, Avg(Temperature) as AvgTemperature, Count(*) as EventCount 
    FROM input
    GROUP BY TumblingWindow(second, 5), DeviceId

Running this query in our Stream Analytics job will aggregate/transform our incoming stream of data events and write the results into the output we configured for the job (e.g. a blob storage file or a SQL Database).

The great thing about this approach is that the data is being aggregated/transformed in real time as events are streamed to us, and it scales to handle literally gigabytes of event data streamed per second.
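To make the tumbling-window semantics concrete, here is a toy Python illustration of the same grouping logic the query expresses (the sample readings are made up for illustration; Stream Analytics evaluates the SQL above for you, and this is not how the service is implemented):

from collections import defaultdict

# (timestamp in seconds, device id, temperature) - illustrative sample readings
events = [
    (0.5, "dev-01", 36.5), (1.2, "dev-02", 40.1), (3.9, "dev-01", 37.5),
    (5.1, "dev-01", 38.0), (6.7, "dev-02", 39.4), (9.8, "dev-01", 38.4),
]

windows = defaultdict(list)
for ts, device, temp in events:
    window_start = int(ts // 5) * 5  # tumbling: fixed, non-overlapping 5-second buckets
    windows[(window_start, device)].append(temp)

for (start, device), temps in sorted(windows.items()):
    avg = sum(temps) / len(temps)
    print(f"[{start}s, {start + 5}s) {device}: AvgTemperature={avg:.1f}, EventCount={len(temps)}")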

Scaling your Stream Analytics Job

Once defined, you can easily monitor the activity of your Stream Analytics Jobs in the Azure Portal:


You can use the SCALE tab to dynamically increase or decrease scale capacity for your stream processing – allowing you to pay only for the compute capacity you need, and enabling you to handle jobs with gigabytes/sec of streamed data. 

Learning More about Stream Analytics Service

For more information about Stream Analytics, please review the Stream Analytics documentation on the Azure website.

Data Factory: Fully managed service to build and manage information production pipelines

Organizations are increasingly looking to fully leverage all of the data available to their business.  As they do so, the data processing landscape is becoming more diverse than ever before – data is being processed across geographic locations, on-premises and cloud, across a wide variety of data types and sources (SQL, NoSQL, Hadoop, etc), and the volume of data needing to be processed is increasing exponentially. Developers today are often left writing large amounts of custom logic to deliver an information production system that can manage and co-ordinate all of this data and processing work.

To help make this process simpler, I’m excited to announce the preview of our new Azure Data Factory service – a fully managed service that makes it easy to compose data storage, processing, and data movement services into streamlined, scalable & reliable data production pipelines. Once a pipeline is deployed, Data Factory enables easy monitoring and management of it, greatly reducing operational costs. 

Easy to Get Started

The Azure Data Factory is a fully managed service. Getting started with Data Factory is simple. With a few clicks in the Azure preview portal, or via our command line operations, a developer can create a new data factory and link it to data and processing resources.  From the new Azure Marketplace in the Azure Preview Portal, choose Data + Analytics –> Data Factory to create a new instance in Azure:

image

Orchestrating Information Production Pipelines across multiple data sources

Data Factory makes it easy to coordinate and manage data sources from a variety of locations – including ones both in the cloud and on-premises.  Support for working with on-premises data inside SQL Server, as well as Azure Blobs, Tables, HDInsight Hadoop systems and SQL Databases, is included in this week’s preview release.

Access to on-premises data is supported through a data management gateway that allows for easy configuration and management of secure connections to your on-premises SQL Servers.  Data Factory balances the scale & agility provided by the cloud, Hadoop and non-relational platforms, with the management & monitoring that enterprise systems require to enable information production in a hybrid environment.

Custom Data Processing Activities using Hive, Pig and C#

This week’s preview enables data processing using Hive, Pig and custom C# code activities.  Data Factory activities can be used to clean data, anonymize/mask critical data fields, and transform the data in a wide variety of complex ways.

The Hive and Pig activities can be run on an HDInsight cluster you create, or alternatively you can allow Data Factory to fully manage the Hadoop cluster lifecycle on your behalf.  Simply author your activities, combine them into a pipeline, set an execution schedule and you’re done – no manual Hadoop cluster setup or management required. 

Built-in Information Production Monitoring and Dashboarding

Data Factory also offers an up-to-the-moment monitoring dashboard, which means you can deploy your data pipelines and immediately begin to view them as part of your monitoring dashboard.  Once you have created and deployed pipelines to your Data Factory you can quickly assess end-to-end data pipeline health, pinpoint issues, and take corrective action as needed.

Within the Azure Preview Portal, you get a visual layout of all of your pipelines and data inputs and outputs. You can see all the relationships and dependencies of your data pipelines across all of your sources so you always know where data is coming from and where it is going at a glance. We also provide you with a historical accounting of job execution, data production status, and system health in a single monitoring dashboard:

image

Learning More about Data Factory Service

For more information about Data Factory, please review the Data Factory documentation on the Azure website.

Other Great Data Improvements

Today’s releases make it even easier for customers to stream, process and manage the movement of data in the cloud.  Over the last few months we’ve released a bunch of other great data updates as well that make Azure a great platform for any data need.  Since August:

We released a major update of our SQL Database service, which is a relational database as a service offering.  The new SQL DB editions (Basic/Standard/Premium) support a 99.99% SLA, larger database sizes, dedicated performance guarantees, point-in-time recovery, new auditing features, and the ability to easily set up active geo-DR support.

We released a preview of our new DocumentDB service, which is a fully-managed, highly-scalable, NoSQL Document Database service that supports saving and querying JSON-based data.  It enables you to linearly scale your document store to any application size.  The Microsoft MSN portal was recently rewritten to use it – and stores more than 20TB of data within it.
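
As a rough sketch of what working with it looks like, the snippet below saves and queries a JSON document with the DocumentDB .NET SDK; the account endpoint, auth key and collection link are placeholders (in practice you would use the collection's self-link from the SDK), so treat it as illustrative rather than definitive:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;   // DocumentDB .NET SDK NuGet package

class DocumentDbSketch
{
    static async Task Main()
    {
        // Placeholder account endpoint and auth key from the Azure portal.
        var client = new DocumentClient(
            new Uri("https://your-account.documents.azure.com:443/"),
            "your-auth-key");

        // Placeholder collection link - in practice use the collection's self-link.
        var collectionLink = "dbs/your-db/colls/your-collection";

        // Save a JSON document (any serializable object works).
        await client.CreateDocumentAsync(collectionLink,
            new { id = "dev-01-reading-1", DeviceId = "dev-01", Temperature = 37.0 });

        // Query it back using DocumentDB's SQL-like syntax.
        var readings = client.CreateDocumentQuery(collectionLink,
            "SELECT * FROM c WHERE c.DeviceId = 'dev-01'");

        foreach (var reading in readings)
        {
            Console.WriteLine(reading);
        }
    }
}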

We released our new Redis Cache service, which is a secure/dedicated Redis cache offering, managed as a service by Microsoft.  Redis is a popular open-source solution that enables high-performance data types, and our Redis Cache service enables you to stand up an in-memory cache that can make the performance of any application much faster.
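
For instance, with the popular StackExchange.Redis client the cache can be used as in the minimal sketch below; the host name and access key are placeholders for the values shown on the cache's portal dashboard:

using System;
using StackExchange.Redis;   // StackExchange.Redis NuGet package

class RedisCacheSketch
{
    static void Main()
    {
        // Placeholder host name and access key from the Azure portal.
        var connection = ConnectionMultiplexer.Connect(
            "your-cache.redis.cache.windows.net,ssl=true,password=your-access-key");

        IDatabase cache = connection.GetDatabase();

        // Write and then read a simple value through the in-memory cache.
        cache.StringSet("greeting", "Hello from the Azure Redis Cache");
        string value = cache.StringGet("greeting");

        Console.WriteLine(value);
    }
}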

We released major updates to our HDInsight Hadoop service, which is a 100% Apache Hadoop-based service in the cloud. We have also added built-in support for using two popular frameworks in the Hadoop ecosystem: Apache HBase and Apache Storm.

We released a preview of our new Search-As-A-Service offering, which provides a managed search offering based on ElasticSearch that you can easily integrate into any Web or Mobile Application.  It enables you to build search experiences over any data your application uses (including data in SQLDB, DocDB, Hadoop and more).

And we have released a preview of our Machine Learning service, which provides a powerful cloud-based predictive analytics service.  It is designed for both new and experienced data scientists, includes 100s of algorithms from both the open source world and Microsoft Research, and supports writing ML solutions using the popular R open-source language.

You’ll continue to see major data improvements in the months ahead – we have an exciting roadmap of improvements planned.

Summary

Today’s Microsoft Azure release enables some great new data scenarios, and makes building applications that work with data in the cloud even easier.

If you don’t already have an Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Words Have Power

Remember that the race does not always go to the just; however not running will ensure that you can’t win. Don’t let your words keep you from running the race.

Words have power, when used correctly — the power to sell ideas, to rally the troops and to provide motivation. Or words can be a tacit signal of resistance to change and an abrogation of responsibility.  In the latter set of scenarios, perfectly good words go bad.  I want to highlight three words, which when used to declare that action won’t be taken or as a tool to deny responsibility for taking action, might as well be swear words.  The words that are, in my opinion, the worst offenders are ‘but’, ‘can’t’ and ‘however’.  The use of any of these three words should send up a red flag that a course is being charted to a special process improvement hell for an organization.

‘But’

The ugliest word in the process improvement world, at least in the English language, is ‘but’ (not ‘butt’ with two t’s).  ‘But’ is a fairly innocuous word, so why do I relegate it to such an august spot on my bad words list?  Because the term is usually used to explain why the speaker (even though they know better) can’t or won’t fight for what is right.  As an example, I recently participated in a discussion about involving the business as an equal partner in a project (a fairly typical discussion for an organization that is transitioning to Agile).  Everyone involved thought that the concept was important, made sense and would help the IT department deliver more value to the organization, ‘but’ in their opinion, the business would not be interested in participating.  Not that anyone had actually discussed involvement with the business or invited them to the project party.  A quick probe exposed excuses like “but they do not have time, so we won’t ask” and the infamous, “but that isn’t how we do it here.”  All of the reasons why the business would not participate were rationalizations, intellectual smoke screens, for not taking the more difficult step of asking the business to participate in the process of delivering projects.  It was too frightening to ask and risk rejection, or worse yet acceptance, and then have to cede informational power through knowledge sharing.  The word ‘but’ is used to negate anything out of the ordinary, which gives the speaker permission not to get involved in rectifying the problem.  By not working to fix the problem, the speaker leaves the consequences to someone else.

‘Can’t’

A related negation word is ‘can’t’. ‘Can’t’ is generally a more personal negation word than ‘but.’ Examples of usage include ‘I can’t’ or ‘we can’t’. Generally this bad word is used to explain why someone or some group lacks the specific power to take action.  Like ‘but’, ‘can’t’ is used to negate what the person using the word admits is a good idea.  The use of the term reflects an abrogation of responsibility and shifts that responsibility elsewhere. For example, I was discussing daily stand-ups with a colleague recently.  He told me a story about a team that had stopped doing daily stand-up meetings because the product owner deemed them overhead.  He quoted the Scrum Master as saying, “It is not my fault that we can’t do stand-ups because our product owner doesn’t think meetings are valuable.” In short, the Scrum Master was saying, “It isn’t my fault that the team is not in control of how the work is being done.”  The abrogation of responsibility for the consequences of the team’s actions is what makes ‘can’t’ a bad word in this example.  ‘Can’t’ reinforces the head-trash that steals power from the practitioner and makes it easy to walk away from the struggle to change rather than to look for a way to embrace it.  When you empower someone else to manage your behavior, you are reinforcing your lack of power and reducing your motivation and the motivation of those around you.

‘However’

The third of this unholy trinity of negation words is ‘however’.  The struggle I have with this word is that it can be used insidiously, with a false veneer of logic, to cut off debate.  A number of years ago, while reviewing an organization that had decided to use Scrum and two-week iterations for projects, I was told, “we started involving the team in planning what was going to be done during the iterations, however they were not getting the work done fast enough, so we decided to tell them what they needed to do each iteration.”  The use of ‘however’ suggests a cause-and-effect relationship that may or may not be true and tends to deflect discussion from the root cause of the problem. The conversation went on for some time, during which we came to the conclusion that, by telling the team what to do, the project had actually fared even worse.  What had occurred was that the responsibility for poor portfolio planning had been shifted onto the team’s shoulders.

In past essays I have discussed that our choices sometimes rob us of positional power.  The rationalization of those individual choices acts as an intellectual smokescreen to make us feel better about our lack of power. Rationalization provides a platform to keep a clinical distance from the problem. Rationalization can be a tool to avoid the passion and energy needed to generate change.

All of these unholy words can be used for good, so it might be useful to have some instruction on how to recognize when they are being used in a bad way.  A sort of field guide to avoid mistaken identification.  One easy mechanism for recognizing a poor use of ‘but’, ‘can’t’ and ‘however’ is to break the sentence or statement into three parts: everything before the unholy word, the unholy word itself, and everything after the unholy word.  The phrase that follows the unholy word exposes everything.  If that phrase rejects the original, perfectly reasonable premise or explains why it is bat-poop crazy, then you have a problem.  I decided to spend some of my ample time in airports collecting observations of the negation phrases people use.  Some of the shareable examples I heard included:

  1. I told you so.
  2. It is not my fault.
  3. Just forget it.
  4. We tried that before.
  5. That will take too long.
  6. It doesn’t matter (passive aggressive).
  7. We don’t do it that way.
  8. My manager won’t go for it.

There were others that I heard that can’t be shared, and I am sure there are many other phrases that can be used to lull the listener into thinking that the speaker agrees and then pull the rug out from under the listener.

The use of negation words can be a sign that you are trying to absolve yourself from the risk of action.  I would like to suggest we ban the use of these three process improvement swear words and substitute enabling phrases such as “and while it might be difficult, here is what I am going to do about it.”  Our goal should be to act on problems that are blockers and issues rather than to ignore them or, by doing nothing, to establish their reality by saying grace over them.  In my opinion, acting and failing is a far better course of action than doing nothing at all and putting your head in the sand.  The responsibility to act does not go away but rather affixes more firmly to those who do nothing than to those who are trying to change the world!  When you pretend not to have power, you become a victim.  Victims continually cede their personal and positional power to those around them.  Remember that the race does not always go to the just; however not running will ensure that you can’t win. Don’t let your words keep you from running the race.


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 10/30/2014 - 22:50

The most unsuccessful three years in the education of cost estimators appears to be fifth-grade arithmetic - Norman R. Augustine

Opening line in the Welcome section of Software Estimation: Demystifying the Black Art, Steve McConnell.

Augustine is the former Chairman and CEO of Martin Marietta. His seminal book, Augustine's Laws, describes the complexities and conundrums of today's business management and offers solutions. Anyone interested in learning how successful management of complex, technology-based firms is done should read that book. As well, read McConnell's book and see if you can find support for the claim shown in the screenshot below:

Screen Shot 2014-10-30 at 3.47.41 PM

Because I sure can't find that proof or any mention that estimates don't work, other than for those who failed to pay attention in the 5th grade arithmetic class.

Categories: Project Management