
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Capacity Planning and the Project Portfolio

I was problem-solving with a potential client the other day. They want to manage their project portfolio. They use Jira, so they think they can see everything everyone is doing. (I’m a little skeptical, but, okay.) They want to know how much the teams can do, so they can do capacity planning based on what the teams can do. (Red flag #1)

The worst part? They don’t have feature teams. They have component teams: front end, middleware, back end. You might, too. (Red flag #2)

Problem #1: They have a very large program, not a series of unrelated projects. They also have projects.

Problem #2: They want to use capacity planning, instead of flowing work through teams.

They are setting themselves up to optimize at the lowest level, instead of optimizing at the highest level of the organization.

If you read Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects, you understand this problem. A program is a strategic collection of projects where the business value of all of the projects together is greater than that of any one project by itself. Each project has value. Yes. But all together, the program has much more value. You have to consider the program as a whole.

Don’t Predict the Project Portfolio Based on Capacity

If you are considering doing capacity planning based on the teams' estimates or previous capacity, don't do it.

First, you can’t possibly know based on previous data. Why? Because the teams are interconnected in interesting ways.

When you have component teams, not feature teams, their interdependencies are significant and unpredictable. Your ability to predict the future based on past velocity? Zero. Nada. Zilch.

This is legacy thinking from waterfall. Well, you can try to do it this way. But you will be wrong in many dimensions:

  • You will make mistakes because of prediction based on estimation. Estimates are guesses. When you have teams using relative estimation, you have problems.
  • Your estimates will be off because of the silent interdependencies that arise from component teams. No one can predict these if you have large stories, even if you do awesome program management. The larger the stories, the more your estimates are off. The longer the planning horizon, the more your estimates are off.
  • You will miss all the great ideas for your project portfolio that arise from innovation that you can’t predict in advance. As the teams complete features, and as the product owners realize what the teams do, the teams and the product owners will have innovative ideas. You, the management team, want to be able to capitalize on this feedback.

It’s not that estimates are bad. It’s that estimates are off. The more teams you have, the less your estimates are normalized between teams. Your t-shirt sizes are not my Fibonacci numbers, are not that team’s swarming or mobbing. (It doesn’t matter if you have component teams or feature teams for this to be true.)

When you have component teams, you have the additional problem of not knowing how the interdependencies affect your estimates. Your estimates will be off, because no one’s estimates take the interdependencies into account.

You don’t want to normalize estimates among teams. You want to normalize story size. Once you make story size really small, it doesn’t matter what the estimates are.

When you make the story size really small, the product owners are in charge of the team's capacity and release dates. Why? Because they are in charge of the backlogs and the roadmaps.

The more a program stops trying to estimate at the low level and uses small stories and manages interdependencies at the team level, the more the program has momentum.

The part where you gather all the projects? Do that part. You need to see all the work. Yes, that part works and helps the program see where it is going.

Use Value for the Project Portfolio

Okay, so you try to estimate the value of the features, epics, or themes in the roadmap of the project portfolio. Maybe you even use the cost of delay as Jutta and I suggest in Diving for Hidden Treasures: Finding the Real Value in Your Project Portfolio (yes, this book is still in progress). How will you know if you are correct?

You don’t. You see the demos the teams provide, and you reassess on a reasonable time basis. What’s reasonable? Not every week or two. Give the teams a chance to make progress. If people are multitasking, not more often than once every two months, or every quarter. They have to get to each project. Hint: stop the multitasking and you get tons more throughput.

Categories: Project Management

Vert.x with core.async. Handling asynchronous workflows

Xebia Blog - Mon, 08/25/2014 - 12:00

Anyone who has written code that has to coordinate complex asynchronous workflows knows it can be a real pain, especially when you limit yourself to using only callbacks directly. Various tools have arisen to tackle these issues, like Reactive Extensions and JavaScript promises.

Clojure's answer comes in the form of core.async: An implementation of CSP for both Clojure and Clojurescript. In this post I want to demonstrate how powerful core.async is under a variety of circumstances. The context will be writing a Vert.x event-handler.
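If you haven't seen core.async before, here is a minimal warm-up sketch of my own (not from the original post, and assuming org.clojure/core.async is on the classpath) showing the primitives the examples below rely on: channels, go blocks, put! and the parking take <!.

(require '[clojure.core.async :refer [chan go put! <!]])

;; A channel with a buffer of one value.
(def ch (chan 1))

;; Asynchronously put a value on the channel.
(put! ch "hello")

;; Take the value inside a go block; <! parks the go block, not a thread.
(go (println (<! ch)))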

Vert.x is a young, light-weight, polyglot, high-performance, event-driven application platform on top of the JVM. It has an actor-like concurrency model, where the coarse-grained actors (called verticles) can communicate over a distributed event bus. Although Vert.x is still quite young, it's sure to grow into a big player in the future of the reactive web.

Scenarios

The scenario is as follows. Our verticle registers a handler on some address and depends on 3 other verticles.

1. Composition

Imagine the new Mars rover got stuck against some Mars rock and we need to send it instructions to destroy the rock with its inbuilt laser. Also imagine that the controlling software is written with Vert.x. There is a single verticle responsible for handling the necessary steps:

  1. Use the sensor to locate the position of the rock
  2. Use the position to scan hardness of the rock
  3. Use the hardness to calibrate and fire the laser. Report back status
  4. Report success or failure to the main caller

As you can see, in each step we need the result of the previous step, meaning composition.
A straightforward callback-based approach would look something like this:

(ns example.verticle
  (:require [vertx.eventbus :as eb]))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (let [reply-msg eb/*current-message*]
      (eb/send "rover.scope" (scope-msg instructions)
        (fn [coords]
          (eb/send "rover.sensor" (sensor-msg coords)
            (fn [data]
              (let [power (calibrate-laser data)]
                (eb/send "rover.laser" (laser-msg power)
                  (fn [status]
                    (eb/reply* reply-msg (parse-status status))))))))))))

A code structure quite typical of composed async functions. Now let's bring in core.async:

(ns example.verticle
  (:refer-clojure :exclude [send])
  (:require [vertx.eventbus :as eb]
            [clojure.core.async :refer [go go-loop chan put! <! alts! timeout]]))

(defn send [addr msg]
  (let [ch (chan 1)]
    (eb/send addr msg #(put! ch %))
    ch))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (go (let [coords (<! (send "rover.scope" (scope-msg instructions)))
              data (<! (send "rover.sensor" (sensor-msg coords)))
              power (calibrate-laser data)
              status (<! (send "rover.laser" (laser-msg power)))]
          (eb/reply (parse-status status))))))

We created our own reusable send function which returns a channel on which the result of eb/send will be put.

2. Concurrent requests

Another thing we might want to do is query different handlers concurrently. Although we could use composition, that is not very performant, as we do not need to wait for a reply from service-A in order to call service-B.

As a concrete example, imagine we need to collect atmospheric data about some geographical area in order to make a weather forecast. The data will include the temperature, humidity and wind speed, which are requested from three different independent services. Once all three asynchronous requests return, we can create a forecast and reply to the main caller. But how do we know when the last callback has fired? We need to keep some memory (mutable state) which is updated when each of the callbacks fires, and process the data when the last one returns.
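For contrast, here is a rough sketch of my own (not from the original post) of what that callback-plus-mutable-state coordination could look like, using an atom as the shared memory; it reuses the reply-msg / eb/reply* idiom from the first example, and the service addresses and create-forecast function are the same hypothetical names used in the core.async version below.

(eb/on-message
  "forecast.report"
  (fn [coords]
    ;; An atom accumulates the three results; the forecast is sent
    ;; once the last callback fires.
    (let [reply-msg eb/*current-message*
          results   (atom {})
          collect   (fn [k]
                      (fn [v]
                        (let [data (swap! results assoc k v)]
                          (when (= 3 (count data))
                            (eb/reply* reply-msg (create-forecast data))))))]
      (eb/send "temperature.service" coords (collect :temperature))
      (eb/send "humidity.service" coords (collect :humidity))
      (eb/send "wind-speed.service" coords (collect :wind-speed)))))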

core.async easily accommodates this scenario without adding extra mutable state for coordination inside your handlers. The state is contained in the channel.

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan 3)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go (let [data (merge (<! ch) (<! ch) (<! ch))
                forecast (create-forecast data)]
            (eb/reply forecast))))))

3. Fastest response

Sometimes there are multiple services at your disposal providing similar functionality and you just want the fastest one. With just a small adjustment, we can make the previous code work for this scenario as well.

(eb/on-message
  "server.request"
  (fn [msg]
    (let [ch (chan 3)]
      (eb/send "service-A" msg #(put! ch %))
      (eb/send "service-B" msg #(put! ch %))
      (eb/send "service-C" msg #(put! ch %))
      (go (eb/reply (<! ch))))))

We just take the first result on the channel and ignore the other results. After the go block has replied, there are no more takers on the channel. The results from the services that were too late are still put on the channel, but after the request finished, there are no more references to it and the channel with the results can be garbage-collected.

4. Handling timeouts and choice with alts!

We can create timeout channels that close themselves after a specified amount of time. Closed channels cannot be written to anymore, but any messages in the buffer can still be read. After that, every read will return nil.
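As a quick aside (a REPL sketch of my own, not from the original post), the closing behaviour is easy to see with the blocking take <!!, which is meant for use outside go blocks:

(require '[clojure.core.async :refer [timeout <!!]])

;; Blocks for roughly 100 ms until the timeout channel closes,
;; then returns nil because the channel is closed and empty.
(<!! (timeout 100))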

One thing core.async provides that most other tools don't is choice. From the examples:

One killer feature for channels over queues is the ability to wait on many channels at the same time (like a socket select). This is done with `alts!!` (ordinary threads) or `alts!` in go blocks.

This, combined with timeout channels, gives the ability to wait on a channel up to a maximum amount of time before giving up. By adjusting example 2 a bit:

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan)
          t-ch (timeout 3000)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go-loop [n 3 data {}]
        (if (pos? n)
          ;; alts! returns [value channel]; value is nil once t-ch times out and closes
          (let [[result _] (alts! [ch t-ch])]
            (if (some? result)
              (recur (dec n) (merge data result))
              (eb/fail 408 "Request timed out")))
          (eb/reply (create-forecast data)))))))

This will do the same thing as before, but we will wait a total of 3s for the requests to finish, otherwise we reply with a timeout failure. Notice that we did not put the timeout parameter in the Vert.x API call of eb/send. Having a first-class timeout channel allows us to coordinate these timeouts much more easily than adding timeout parameters and failure-callbacks.

Wrapping up

The above scenarios are clearly simplified to focus on the different workflows, but they should give you an idea of how to start using core.async in Vert.x.

One question that arose for me was whether core.async can play nicely with Vert.x, which was the original motivation for this blog post. Verticles are single-threaded by design, while core.async introduces background threads to dispatch go-blocks or state machine callbacks. Since the dispatched go-blocks carry the correct message context, the functions eb/send, eb/reply, etc. can be called from these go blocks and all goes well.

There is of course a lot more to core.async than is shown here. But that is a story for another blog.

How Not to Standardize Testing (ISO 29119)

James Bach’s Blog - Mon, 08/25/2014 - 09:15

Many years ago I took a management class. One of the exercises we did was on achieving consensus. My group did not reach an agreement because I wouldn’t lower my standards. I wanted to discuss the matter further, but the other guys grew tired of arguing with me and declared “consensus” over my objections. This befuddled me, at first. The whole point of the exercise was to reach a common decision, and we had failed, by definition, to do that– so why declare consensus at all? It’s like getting checkmated in chess and then declaring that, well, you still won the part of the game that you cared about… the part before the checkmate.

Later I realized this is not so bizarre. What they had effectively done is ostracize me from the team. They had changed the players in the game. The remaining team did come to consensus. In the years since, I have found that changing the boundaries or membership of a community is indeed an important pillar of consensus building. I have used this tactic many times to avoid unhelpful debate. It is one reason why I say that I’m a member of the Context-Driven School of Testing. My school does not represent all schools, and the other schools do not represent mine. Therefore, we don’t need consensus with them.

Then what about ISO 29119?

The ISO organization claims to have a new standard for software testing. But ISO 29119 is not a standard for testing. It cannot be a standard for testing.

A standard for testing would have to reflect the values and practices of the world community of testers. Yet, the concerns of the Context-Driven School of thought, which has been in development for at least 15 years, have been ignored and our values shredded by this so-called standard and the process used to create it. They have done this by excluding us. There are two organizations explicitly devoted to Context-Driven values (AST and ISST) and our community holds several major conferences a year. Members of our community speak at all the major practitioners' conferences, and our ideas are widely cited. Some of the most famous testers in the world, including me, are Context-Driven testers. We exist, and together with the Agilists, we are the source of nearly every new idea in testing in the last decade.

The reason they have excluded us is that they know we won’t agree to any simplistic standard based on templates or simple formulae. We know those things look pretty but they don’t help. If ISO doesn’t exclude us, they worry they will never finish. They know we will challenge their evidence, and even their ethics and basic competence. This is why I say the craft is not ready for standards. It will be years before all the recognized experts in testing can come together and agree on anything substantial.

The people running the ISO effort know exactly who we are. I personally have had multiple public debates with Stuart Reid, on stage. He cannot pretend we don’t exist. He cannot pretend we are some sort of lunatic fringe. Tens of thousands of testers have watched my video lectures or bought my books. This is not a case where ISO can simply declare us to be outsiders.

The Burden of Proof

The Context-Driven community stands for excellence in testing. This is why we must reject this depraved attempt by ISO to grab power and assert control over our craft. Our craft is still an open marketplace of ideas, and it is full of strong debates. We must protect that marketplace and allow it to evolve. I want the fair chance to put my competitors out of business (or get them to change their business) with the high quality of my work. Context-Driven testing has been growing in strength and numbers over the years. Whereas this ISO effort appears to be a job protection program for people who can’t stomach debate. They can’t win the debate so they want to remake the rules.

The burden of proof is not on me or any of us to show that the standard is wrong, nor is it our job to make it right. The burden is on those who claim that the craft can be standardized to study the craft and recognize and resolve the deep differences among us. Failing that, there can be no ethical or rational basis for standardization.

This blog post puts me on record as opposing the ISO 29119 standard. Together with my colleagues, we constitute a determined and sustained and principled opposition.

Categories: Testing & QA

Docker on a raspberry pi

Xebia Blog - Mon, 08/25/2014 - 07:11

This blog describes how easy it is to use docker in combination with a Raspberry Pi. Because of docker, deploying software to the Raspberry Pi is a piece of cake.

What is a raspberry pi?
The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It is a capable little computer which can be used in electronics projects and for many things that your desktop PC does, like spreadsheets, word-processing and games. It also plays high-definition video. A raspberry pi runs Linux, has a 700 MHz ARM processor and 512 MB of internal memory. Last but not least, it only costs around 35 euros.

A raspberry pi version B

Because of the price, size and performance, the raspberry pi is a step towards the 'Internet of Things' principle. With a raspberry pi it is possible to control and connect everything to everything. For instance, my home project is a raspberry pi controlling a robot.

 

Raspberry Pi in action

What is docker?
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications. With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere. A dockerized app contains the application, its environment, dependencies and even the OS.

Why combine docker and raspberry pi?
It is nice to work with a Raspberry Pi because it is a great platform to connect devices. Deploying anything, however, is kind of a pain. With dockerized apps we can develop and test our application on our own home machine and, when it works, deploy it to the raspberry pi. We can do this without any pain or worries about corrupting the underlying operating system and tools. And last but not least, you can easily undo your tryouts.

What is better than I expected
First of all, it was relatively easy to install docker on the raspberry pi. When you use the Arch Linux operating system, docker is already part of the package manager! I expected to have to do a lot of cross-compiling of the docker application, because the raspberry pi uses an ARM architecture (instead of the default x86 architecture), but someone had already done this for me!

Second of all, there are a bunch of ready-to-use docker images built especially for the raspberry pi. To run dockerized applications on the raspberry pi you depend on base images. These base images must also support the ARM architecture. For each situation there is an image, whether you want to run node.js, python, ruby or just java.

The thing that worried me most was the performance of running virtualized software on a raspberry pi. But it all went well and I did not notice any performance reduction. Docker requires far fewer resources than running virtual machines. A docker process runs straight on the host, giving native CPU performance. Using Docker only requires a small overhead for memory and network.

What I don't like about docker on a raspberry pi
The slogan of docker to 'build, ship and run any app anywhere' is not entirely valid. You cannot develop your Dockerfile on your local machine and deploy the same application directly to your raspberry pi. This is because each dockerfile includes a core image. For running your application on your local machine, you need an x86-based docker image. For your raspberry pi you need an ARM-based image. That is a pity, because this means you can only build your docker image for your Raspberry Pi on the raspberry pi, which is slow.

I tried several things.

  1. I used the emulator QEMU to emulate the Raspberry Pi on a fast MacBook. But, because of the inefficiency of the emulation, it is just as slow as building your dockerfile on a raspberry pi.
  2. I tried cross-compiling. This wasn't possible, because the commands in your dockerfile are replayed on a running image and the running raspberry-pi image can only be run on ... a raspberry pi.

How to run a simple node.js application with docker on a raspberry pi  

Step 1: Installing Arch Linux
The first step is to install Arch Linux on an SD card for the raspberry pi. The preferred OS for the raspberry pi is a Debian-based OS, Raspbian, which is nicely configured to work with a raspberry pi. But in this case Arch Linux is better, because we use the OS only to run docker on it. Arch Linux is a much smaller and more barebones OS. The best way is to follow the steps at http://archlinuxarm.org/platforms/armv6/raspberry-pi. In my case, I use version 3.12.20-4-ARCH. In addition to the tutorial:

  1. After downloading the image, install it on a sd-card by running the command:
    sudo dd if=path_of_your_image.img of=/dev/diskn bs=1m
  2. When there is no HDMI output at boot, remove the config.txt on the SD-card. It will magically work!
  3. Login using root / root.
  4. Arch Linux will use 2 GB by default. If you have an SD card with a higher capacity you can resize it using the following steps http://gleenders.blogspot.nl/2014/03/raspberry-pi-resizing-sd-card-root.html

Step 2: Installing a wifi dongle
In my case I wanted to connect a wireless dongle to the raspberry pi, by following these simple steps

  1. Install the wireless tools:
        pacman -Syu
        pacman -S wireless_tools
        
  2. Setup the configuration, by running:
    wifi-menu
  3. Autostart the wifi with:
        netctl list
        netctl enable wlan0-[name]
    

Because the raspberry pi is now connected to the network you are able to SSH to it.

Step 3: Installing docker
The actual install of docker is relatively easy. There is a docker version compatible with the ARM processor (the one used within the Raspberry Pi). This docker version is part of the Arch Linux package manager; the packaged version is 1.0.0, while at the time of writing this blog the latest docker release is version 1.1.2. The missing features are:

  1. Enhanced security for the LXC driver.
  2. .dockerignore support.
  3. Pause containers during docker commit.
  4. Add --tail to docker logs.

You install docker and start it as a service on system boot with the following commands:

pacman -S docker
systemctl enable docker

Installing docker with pacman

Step 4: Run a single nodejs application
After we've installed docker on the raspberry pi, we want to run a simple nodejs application. The application we will deploy is inspired by the nodejs web app in the tutorial on the docker website: https://github.com/enokd/docker-node-hello/. This nodejs application prints a "hello world" in the web browser. We have to change the dockerfile to:

# DOCKER-VERSION 1.0.0
FROM resin/rpi-raspbian

# install required packages
RUN apt-get update
RUN apt-get install -y wget dialog

# install nodejs
RUN wget http://node-arm.herokuapp.com/node_latest_armhf.deb
RUN dpkg -i node_latest_armhf.deb

COPY . /src
RUN cd /src; npm install

# run application
EXPOSE 8080
CMD ["node", "/src/index.js"]

And it works!


The webpage that runs in nodejs on a docker image on a raspberry pi

 

Just by following four little steps, you are able to use docker on your raspberry pi! Good luck!

 

C4 model poster

Coding the Architecture - Simon Brown - Sun, 08/24/2014 - 23:20

A few people have recently asked me for a poster/cheat sheet/quick reference of the C4 model that I use for communicating and diagramming software systems. You may have seen an old copy floating around the blog, but I've made a few updates and you can grab the new version from http://static.codingthearchitecture.com/c4.pdf (PDF, A3 size).

Software architecture and the C4 model

Enjoy!

Categories: Architecture

SPaMCAST 304 - Jamie Lynn Cooke, Power of the Agile Business Analyst

Software Process and Measurement Cast - Sun, 08/24/2014 - 22:00

Software Process and Measurement Cast number 304 features our interview with Jamie Lynn Cooke. Jamie Lynn Cooke is the author of The Power of the Agile Business Analyst. We discussed the definition of an Agile business analyst and what they actually do in Agile projects. Jamie provides a clear and succinct explanation of the role and the huge value Agile business analysts bring to projects!

Jamie Lynn’s Bio:
Jamie Lynn Cooke has 24 years of experience as a senior business analyst and solutions consultant, working with more than 130 public and private sector organizations throughout Australia, Canada, and the United States.

She is the author of The Power of the Agile Business Analyst: 30 surprising ways a business analyst can add value to your Agile development team, which details how Agile business analysts can increase the relevance, quality and overall business value of Agile projects; Agile Principles Unleashed, a book written specifically to explain Agile in non-technical business terms to managers and executives outside of the IT industry; Agile: An Executive Guide: Real results from IT budgets, which gives IT executives the tools and strategies needed for bottom-line business decisions on using Agile methodologies; and Everything You Want to Know About Agile: How to get Agile results in a less-than-Agile organization, which gives readers strategies for aligning Agile work within the reporting, budgeting, staffing, and governance constraints of their organization. Also check out Agile Productivity Unleashed: Proven Approaches for Achieving Real Productivity Gains in Any Organization (Second Edition)!

Jamie has a Bachelor of Science in Engineering Psychology (Human Factors Engineering) from Tufts University in Medford, Massachusetts; and a Graduate Certificate in e-Business/Business Informatics from the University of Canberra in Australia.

You can find her website here.

 

Next

Software Process and Measurement Cast number 305 will feature our essay on estimation (here is our essay on specific topics within estimation). Estimation is a hotbed of controversy. But perhaps first we should synchronize on just what we think the word means. Once we have a common vocabulary we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

Upcoming Events

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Velocity and Productivity

 

Velocity and productivity are different.

Mention productivity to adherents of Agile methods and you will get a range of responses. Some of the typical responses include blank stares, tirades against organization-level control mentality or discussions on why velocity is more relevant. Similar reactions (albeit 180 degrees out of phase) will be experienced when you substitute the word velocity and have discussions with adherents of other methodologies.

Fantasy movies and novels have taught us that in the realm of magic, knowing the name of a person or thing confers power. In fantasy novels the power conferred is that of control. In real life, the power of having a name for a concept is the power of spin. Spin and control are a pair of highly related terms. Spin is to provide an interpretation of something (a statement or event, for example), especially in a way meant to sway public opinion.

Naming a concept, even if many similar concepts have already been given names, creates an icon that can rally followers and be used to heap derision on non-followers. Maybe because the proportion of fantasy and science fiction followers in the IT professions is higher than in the general population, the pattern of naming a concept to focus attention has risen to a fine art. Examples abound in the IT world, such as the use of the term logical files in IFPUG Function Points (where are the illogical files?) and Agile methods (the others must be the inflexible methods). Productivity and velocity are named concepts that reflect this rule. Each can move followers to alter their behavior or generate violent rage in what began as a civil conversation. The irony is that these terms represent highly related concepts. Both seek to describe the amount of output that will be delivered in a specific period of time. The difference is a matter of perspective.

If they are so similar, why are there two terms describing similar concepts? Let's peel back another layer.

Dion Hinchcliffe has defined project velocity as the measurement of the event rate of a project. A simpler definition is simply work divided by time. In both cases velocity is used to describe the speed at which a specific team delivers results. Typical velocity metrics include story points per person month, requirements per sprint and stories or story points per iteration. The units of measure are targeted at the level of requirements or user stories. The granularity of the unit of measure and the collection time frame (iterations or sprints) ensures that the metric is generated and collected multiple times throughout the project. Repetition makes it easy for this process to become repeatable through rote memory. Because of the short time horizon and the use of measures that can be derived at a team level, the data is useful to the team as they plan and monitor their work. Useful equals metrics that get collected, in my book. Unfortunately, because relative measures (measures based on perception) are used to size requirements, these metrics tend to be less useful for organizational comparison than more classic productivity measures. Productivity is also a relatively simple metric. It is simply the output (numerator) of a project divided by the input(s) required to produce the output (denominator).

The productivity equation's denominator uses more esoteric units than calendar time, such as hours of effort or FTE (full-time equivalent) months, that relate to the entire project. The units of measure for the numerator range from the venerable line of code to functional units such as function points. Because productivity is generally collected and used at an overall project level it is very useful for parametric estimation or comparing projects, but far less effective for planning day-to-day activities than velocity is. It should be noted that some organizations collect many separate units to create a lower-level view of productivity. I would suggest this can be done, albeit it will require a substantial amount of effort to implement and maintain.

So if velocity and productivity are both useful and related, which one should we use? The first place to start is to decide what question you are trying to answer. Once the problem you are trying to solve is identified, the unit of measure and the collection time horizon both become manageable decisions. The question of whether we have to choose one over the other is, I would suggest, a false question. I propose that if we focus on selecting the proper numerator we can have measures that are useful at both the project and organization level. One solution is to substitute Quick and Early Function Points (QEFP, a rules-based functional metric) for the typical story points (a relative measure). QEFP can be applied at a granular level and then aggregated, because it is rules based, for reporting at different levels. By understanding the relationship between the two measures, we can devise a solution that lets us have our cake and eat it too.


Categories: Process Management

Xebia IT Architects Innovation Day

Xebia Blog - Sat, 08/23/2014 - 17:51

Friday August 22nd was Xebia’s first Innovation Day. We spent a full day experimenting with technology. I helped organize the day for XITA, Xebia’s IT Architects department (Hmm. Department doesn’t feel quite right to describe what we are, but anyway). Innovation days are intended to inspire as well as educate. We split up in small teams and each focused on a particular technology. Below is a list of project teams:

• Docker-izing enterprise software
• Run a web application high-available across multiple CoreOS nodes using Kubernetes
• Application architecture (team 1)
• Application architecture (team 2)
• Replace Puppet with Salt
• Scale "infinitely" with Apache Mesos

In the coming weeks we will publish what we learned in separate blogs.

First Xebia Innovation Day

CRC - Components, Responsibilities, Collaborations

Coding the Architecture - Simon Brown - Sat, 08/23/2014 - 09:46

I was reading Dan North Visits Osper and was pleasantly surprised to see Dan mention CRC modelling. CRC is a great technique for the process of designing software, particularly when used in a group/workshop environment. It's not a technique that many people seem to know about nowadays though.

A Google search will yield lots of good explanations on the web, but basically CRC is about helping to identify the classes needed to implement a particular feature, use case, user story, etc. You basically walk through the feature from the start and whenever you identify a candidate class, you write the name of it on a 6x4 index card, additionally annotating the card with the responsibilities of the class. Every card represents a separate class. As you progress through the feature, you identify more classes and create additional cards, annotating the cards with responsibilities and also keeping a note of which classes are collaborating with one another. Of course, you can also refactor your design by splitting classes out or combining them as you progress. When you're done, you can dry-run your feature by walking through the classes (e.g. A calls B to do X, which in turn requests Y from C, etc).

Much of what you'll read about CRC on the web discusses how the technique is useful for teaching OO design at the class level, but I like using it at the component level when faced with architecting a software system given a blank sheet of paper. The same principles apply, but you're identifying components (or services, microservices, etc) rather than classes. When you've done this for a number of significant use cases, you end up with a decent set of CRC cards representing the core components of your software system.

From this, you can start to draw some architecture diagrams using something like my C4 model. Since the cards represent components, you can simply lay out the cards on paper, draw lines between them and you have a component diagram. Each of those components needs to be running in an execution environment (e.g. a web application, database, mobile app, etc). If you draw boxes around groups of components to represent these execution environments, you have a containers diagram. Step up one level further and you can create a simple system context diagram to show how your system interacts with the outside world. My Simple Sketches for Diagramming your Software Architecture article provides more information about the C4 model and the resulting diagrams, but hopefully you get the idea.

CRC then ... yes, it's a great technique for collaborative design, particularly when applied at the component level. And it's a nice starting point for creating software architecture diagrams too.

Categories: Architecture

Stockholm Syndrome and Outsourcing

Cultural disconnects are a major contributor to problems in outsourcing.

Managing risk is one of the keys to success in an outsourcing arrangement. There are many control mechanisms used to manage outsourcing deals. Control mechanisms range from full-scale contract offices, PMOs, metrics, scorecard reporting and audits to CMMI assessments and on-site oversight teams. In real life, these are typically applied in combination.

Cultural disconnects are a major contributor to problems in outsourcing, and they increase as the distance between client and outsourcer grows. This sounds like a truism; however, examples abound even today as organizations fall prey to misunderstandings driven by cultural disconnects. The misunderstandings that occur can range from differences in semantics to deep-seated cultural differences.

A tool to manage and minimize this type of disconnect is to co-locate an on-site account management team with the outsourcer. This type of arrangement provides an avenue to mitigate cultural differences, to translate both intent and words and, most importantly, to build trust. All positives; however, darker possibilities exist and, as personal observation proves, they do happen!

On-site management of outsourced projects provides a number of impressive benefits that other forms of control do not provide.  The first and most important of these is a simple visible presence that reinforces that work is important.  Secondly, an onsite presence can provide a bridge between cultures (both organizational and sociological cultures).  Further, a presence provides a mechanism for translation, and for ironing out semantic differences quickly and efficiently.  Finally, an onsite presence is a basis to build a common history and understanding, which yields trust.

A powerful tool with equally powerful drawbacks: what happens when an onsite lead or team loses perspective? I participated in an assessment of a team that supported a group of outsourced applications. During initial discussions it was impossible to determine who worked for the outsourcer and who worked for the onsite account management. Collaboration, you might ask? True, but only if the arrangement is structured as such and all parties perceive it that way, which was not the case. The on-site team had lost perspective and aligned themselves with the organization they were overseeing, an application of the 'Stockholm Syndrome'.

When an onsite team gets too close, they lose perspective, and they begin to believe they are part of the company they are supposed to interface with. When perspective is lost, who will advocate for the project, and how will a critical point of translation be interpreted? Even if the closeness is merely an appearance, it will be difficult for others to understand how to act.

How do the best make co-located teams work? The best observed application of the technique begins with the sourcer deploying a cross-functional team. The skills that are required include project management, business and systems analysis. The very best include personnel with both facilitation and negotiation skills (negotiation is more typical). All team members require a strong sense of loyalty to their company. Note that if the work is 'offshored' (not just outsourced), then the team members must have command of the local language. The rationale for teams rather than an individual is two-fold: the first is that a team can field more skills; the second is that a team is far less likely to "go native" than an individual (teams create their own support structure). Note, using teams is a best practice only if the amount of work supports it. Smaller outsourcing agreements may not have that luxury, which means they must roll all of these skills into a single individual.

Despite the downside risks, co-locating sourcer and outsourcer teams of any size is a best practice. How organizations structure their co-location program to keep the personnel fresh and useful is what separates the wheat from the chaff. Observed tactical best practices to maintain the crispness of on-site teams include:

  • Rotation of personnel (not everyone at once unless there is only one person) reinforces the attachment to the parent company.  A secondary form of rotation includes making a spot on the team a step on a job progression.
  • Leveraging PPQA reviews provides an assessment of whether the outsourcer's processes are being followed.  Non-compliances are identified and an action plan is put in place for remediation.
  • External audits, using models such as the CMMI, ITIL or ISO Standards, provide a far more formal reading of whether processes are followed (typically with more consequences if they are not).

On-site teams are a best practice for reducing the risk of an outsourcing agreement, but it is a best practice with a downside unless the teams are carefully managed.


Categories: Process Management

What I Don't Know Can Actually Hurt Me

Herding Cats - Glen Alleman - Fri, 08/22/2014 - 19:23

Ostrich Head in Sand

The notion of not knowing the impact of decisions, choices, and approaches is like putting your head in the sand because you don't like the answer or don't want to know the answer.

In our domain there are three common root causes for program difficulties when the program overruns its budget, shows up late, or the delivered product doesn't work as required:

  1. They couldn't know - the source of the problem was in fact unknowable.
  2. They didn't know - the source of the problem was knowable, but the effort to discover it was not made.
  3. They didn't want to know - the source of the problem was there, but if it had been made visible the project would have been canceled or never started.

When I read about decision making processes like ...

Screen Shot 2014-08-22 at 12.07.24 PM

I'm struck by the 3rd statement. Knowing something about the cost of a decision, the outcome of an investment, the risks, scope impact, progress reporting of future values requires you make estimates. Since all project variables are random variables that interact in random ways - technically non-stationary stochastic processes - knowing about the impacts from deciding anything using these variables means estimating the three core variables of all projects, shown below.

When I hear the suggestion that decisions can be made in the absence of those estimates, I think of the ostrich and ask the following:

  • Can I decide about some future outcome without estimating the impact of that outcome?
  • If I'm going to invest - spend money - can I do the ROI calculation in the absence of the estimate of the cost or the value - since neither of those are known on day one?
  • Risk - by its very definition - is an uncertainty. These uncertainties are probabilistic outcomes of future events. Estimates are needed.

If there are ways to decide these things without estimates, let's hear them. Each of the three variables and each of their drivers is a random variable whose value is not known in the present, but can only be known through an estimate of its possible future values.

Screen Shot 2014-08-22 at 12.11.42 PM

Categories: Project Management

Stuff The Internet Says On Scalability For August 22nd, 2014

Hey, it's HighScalability time:


Exterminate! 1,024 small, mobile, three-legged machines that can move and communicate using infrared laser beams.
  • 1.6 billion: facts in Google's Knowledge Vault built by bots; 100: lightning strikes every second
  • Quotable Quotes:
    • @stevendborrelli: There's a common feeling here at #MesosCon that we at the beginning of a massive shift in the way we manage infrastructure.
    • @deanwampler: 2000 machine service will see > 10 machine crashes per day. Failure is normal. (Google) #Mesoscon
    • @peakscale: "not everything revolves around docker" /booted from room immediately
    • @deanwampler: Twitter has most of their critical infrastructure on Mesos, O(10^4) machines, O(10^5) tasks, O(10^0) SREs supporting it. #Mesoscon
    • @adrianco: Dig yourself a big data hole, then drown in your data lake...
    • bbulkow:  I saw huge Go uptake at OSCON. I met one guy doing log processing easily at 1M records per minute on a single amazon instance, and knew it would scale.
    • @julian_dunn: clearly, running Netflix on a mainframe would have avoided this problem

  • Programming is the new way in an old tradition of using new ideas to explain old mysteries. Take the new Theory of Everything, doesn't it sound a lot like OO programming?: According to constructor theory, the most fundamental components of reality are entities—“constructors”—that perform particular tasks, accompanied by a set of laws that define which tasks are actually possible for a constructor to carry out. Then there's Our Mathematical Universe, which posits that the attributes of objects are the objects: all physical properties of an electron, say, can be described mathematically; therefore, to him, an electron is itself a mathematical structure. Any data modeler knows how faulty is this conceit. We only model our view relative to a problem, not universally. Modelers also have another intuition, that all attributes arise out of relationships between entities and that entities may themselves not have attributes. So maybe physics and programming have something to do with each other after all?

  • Love this. Multi-Datacenter Cassandra on 32 Raspberry Pi’s. Over the top lobby theatrics is a signature Silicon Valley move.

  • Computation is all around us. Jellyfish Use Novel Search Strategy: instead of using a consistent Lévy walk approach, barrel jellyfish also employ a bouncing technique to locate prey. These large jellies ride the currents to a new depth in search of food. If a meal is not located in the new location, the creature rides the currents back to its original location. 

  • Two months early. 300k under budget. Building a custom CMS using a Javascript based Single Page App (SPA), a Clojure back end, a set of small Clojure based micro services sitting on top of MongoDB, hosted in Rackspace.

  • While Twitter may not fight against the impersonation of certain Journalism professors, it does fight spam with a large sword. Here's how that sword of righteousness was forged: Fighting spam with BotMaker. The main challenge: applying rules defined using their own rule language with a low latency. Spam is detected in three stages: real-time, before the tweet enters the system; near real-time, on the write path; periodic, in the background. The result: a 40% reduction in spam and faster response time to new spam attacks.

  • An architecture of small apps. A PHP/Symfony CMS called Megatron takes 10 seconds to render a page. Pervasive slowness leads to constant problems with cache clearing, timeouts, server spin ups and downs, cache warmup. What to do? As an answer an internal Yammer conversation on different options is shared. The major issue is dumping their CMS for a microservices based approach. Interesting discussion that covers a lot of ground.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Neo4j: LOAD CSV – Handling empty columns

Mark Needham - Fri, 08/22/2014 - 13:51

A common problem that people encounter when trying to import CSV files into Neo4j using Cypher’s LOAD CSV command is how to handle empty or ‘null’ entries in said files.

For example let’s try and import the following file which has 3 columns, 1 populated, 2 empty:

$ cat /tmp/foo.csv
a,b,c
mark,,

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
SET p.b = row.b, p.c = row.c
RETURN p

When we execute that query we’ll see that our Person node has properties ‘b’ and ‘c’ with no value:

==> +-----------------------------+
==> | p                           |
==> +-----------------------------+
==> | Node[5]{a:"mark",b:"",c:""} |
==> +-----------------------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 3
==> Labels added: 1
==> 26 ms

That isn’t what we want – we don’t want those properties to be set unless they have a value.

To achieve this we need to introduce a conditional when setting the ‘b’ and ‘c’ properties. We’ll assume that ‘a’ is always present as that’s the key for our Person nodes.

The following query will do what we want:

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
FOREACH(ignoreMe IN CASE WHEN trim(row.b) <> "" THEN [1] ELSE [] END | SET p.b = row.b)
FOREACH(ignoreMe IN CASE WHEN trim(row.c) <> "" THEN [1] ELSE [] END | SET p.c = row.c)
RETURN p

Since there are no if or else statements in Cypher, we create our own conditional statement by using FOREACH. If there’s a value in the CSV column then we’ll loop once and set the property, and if not we won’t loop at all and therefore no property will be set.

==> +-------------------+
==> | p                 |
==> +-------------------+
==> | Node[4]{a:"mark"} |
==> +-------------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 1
==> Labels added: 1
Categories: Programming

R: Rook – Hello world example – ‘Cannot find a suitable app in file’

Mark Needham - Fri, 08/22/2014 - 12:05

I’ve been playing around with the Rook library and struggled a bit getting a basic Hello World application up and running so I thought I should document it for future me.

I wanted to spin up a web server using Rook and serve a page with the text ‘Hello World’. I started with the following code:

library(Rook)
s <- Rhttpd$new()
 
s$add(name='MyApp',app='helloworld.R')
s$start()
s$browse("MyApp")

where helloworld.R contained the following code:

function(env){ 
  list(
    status=200,
    headers = list(
      'Content-Type' = 'text/html'
    ),
    body = paste('<h1>Hello World!</h1>')
  )
}

Unfortunately that failed on the ‘s$add’ line with the following error message:

> s$add(name='MyApp',app='helloworld.R')
Error in .Object$initialize(...) : 
  Cannot find a suitable app in file helloworld.R

I hadn’t realised that you actually need to assign that function to a variable ‘app’ in order for it to be picked up:

app <- function(env){ 
  list(
    status=200,
    headers = list(
      'Content-Type' = 'text/html'
    ),
    body = paste('<h1>Hello World!</h1>')
  )
}

Once I fixed that everything seemed to work as expected:

> s
Server started on 127.0.0.1:27120
[1] MyApp http://127.0.0.1:27120/custom/MyApp
 
Call browse() with an index number or name to run an application.
Categories: Programming

Vitamins, Aspirin and The Little Blue Pill


Why do some products make a bigger splash than others? I recently heard an analogy that explains why some products have naturally higher market demand than others. It states that there are really only three macro classes of products: products that avoid problems, products that solve a specific pain, and products that provide new functionality. The analogy used vitamins, aspirin and the little blue pill as a metaphor for these three categories. Process improvement can easily be classified using these metaphors. As change agents, we can use this analogy as a tool to guide how we conceive and implement process changes within our organizations.

One of the failings of many software process improvement programs is that they are framed as means of avoiding a problem. I call this the futurist point of view. The futurist point of view translates to the “vitamins” of the organizational change world. If I asked you directly, I am certain that you would understand the need to take precautions so that the future will be better. The big unvoiced “however” in your answer would be that it is hard to get motivated to make a change now for a nebulous payoff in the future. Just remember the last time you toyed with the idea of starting an exercise program. The linkage between a current change in behavior (taking vitamins) and future benefits is just not direct enough to create a groundswell of acceptance. Bottom-line: Selling potential benefits in the future is, at the best of times, a difficult proposition.

A few years ago I took sales training (and I am proud of it). I learned how to identify pain during the training program. Ask any professional salesperson and they will tell you that an immediate pain is an important motivator to making a sale, maybe the most important motivator. At least 99.9% of the people in the world want pain to go away when they have it, which is why an aspirin is an easier sale than an exercise program for back pain. The “gotcha” is that the pain initially expressed is usually not the root cause of the pain (there are a lot of reasons for this behavior, but that is a topic for another day). The art of persuasion, sales and requirements gathering is the ability to peel back the layers until you can expose the root cause so the pain can be solved, not just masked. The ability to successfully navigate the “pain” conversation to get to the root cause without irritating the person feeling the pain is a skill not consistently found on IT project teams. Bottom-line: I highly recommend a course in salesmanship for all project managers and requirements analysts. Make sure your process improvement program solves current problems, and always carry aspirin.

I have been shocked and amazed at how the little blue pill and other similar drugs became and stayed blockbuster drugs. Going back to our analogy, the little blue pill represents products that deliver additional functionality. The little blue pill of process improvement projects delivers the ability to do something that was not possible before, to work with greater flexibility, and/or to decide with greater flexibility. I believe that most IT personnel have a bit of a libertarian streak, which conflates flexibility and choice in how we accomplish our job with the functionality of the process. For example, having more than one way to do a retrospective and the choice of which technique to use makes a process for retrospectives better than one without options. IT personnel are problem solvers; solving problems is central to our identity. Process improvement projects that deliver functionality which makes it easier (or even possible) to deliver solutions to IT’s customers address the core needs of IT developers and IT leaders. Bottom-line: Make sure your process improvement projects make it possible to do work that could not be done before, or at the very least provide a more flexible, choice-driven approach.

The analogy of vitamins, aspirin and the little blue pill frames a discussion that many process improvement leaders do not have when choosing process improvement projects. I suggest that to survive when budgets are being cut, process improvement programs must deliver real benefits NOW to solve the pain IT organizations feel NOW; but to be true to our nature, changes must also provide for a better future. These considerations are not just packaging or salesmanship; addressing them is central to providing real value now and in the future.

PS: Take your vitamins, carry aspirin and if you have processes that stay the same for more than four cycles seek medical attention immediately.


Categories: Process Management

Azure: New DocumentDB NoSQL Service, New Search Service, New SQL AlwaysOn VM Template, and more

ScottGu's Blog - Scott Guthrie - Thu, 08/21/2014 - 21:39

Today we released a major set of updates to Microsoft Azure. Today’s updates include:

  • DocumentDB: Preview of a New NoSQL Document Service for Azure
  • Search: Preview of a New Search-as-a-Service offering for Azure
  • Virtual Machines: Portal support for SQL Server AlwaysOn + community-driven VMs
  • Web Sites: Support for Web Jobs and Web Site processes in the Preview Portal
  • Azure Insights: General Availability of Microsoft Azure Monitoring Services Management Library
  • API Management: Support for API Management REST APIs

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them:

DocumentDB: Announcing a New NoSQL Document Service for Azure

I’m excited to announce the preview of our new DocumentDB service - a NoSQL document database service designed for scalable and high performance modern applications.  DocumentDB is delivered as a fully managed service (meaning you don’t have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

As a NoSQL store, DocumentDB is truly schema-free. It allows you to store and query any JSON document, regardless of schema. The service provides built-in automatic indexing support – which means you can write JSON documents to the store and immediately query them using a familiar document oriented SQL query grammar. You can optionally extend the query grammar to perform service side evaluation of user defined functions (UDFs) written in server-side JavaScript as well. 

DocumentDB is designed to linearly scale to meet the needs of your application. The DocumentDB service is purchased in capacity units, each offering a reservation of high performance storage and dedicated performance throughput. Capacity units can be easily added or removed via the Azure portal or REST based management API based on your scale needs. This allows you to elastically scale databases in fine grained increments with predictable performance and no application downtime simply by increasing or decreasing capacity units.

Over the last year, we have used DocumentDB internally within Microsoft for several high-profile services.  We now have DocumentDB databases that are each 100s of TBs in size, each processing millions of complex DocumentDB queries per day, with predictable performance of low single digit ms latency.  DocumentDB provides a great way to scale applications and solutions like this to an incredible size.

DocumentDB also enables you to tune performance further by customizing the index policies and consistency levels you want for a particular application or scenario, making it an incredibly flexible and powerful data service for your applications.   For queries and read operations, DocumentDB offers four distinct consistency levels - Strong, Bounded Staleness, Session, and Eventual. These consistency levels allow you to make sound tradeoffs between consistency and performance. Each consistency level is backed by a predictable performance level ensuring you can achieve reliable results for your application.

DocumentDB has made a significant bet on ubiquitous formats like JSON, HTTP and REST – which makes it easy to start taking advantage of from any Web or Mobile application.  With today’s release we are also distributing .NET, Node.js, JavaScript and Python SDKs.  The service can also be accessed through RESTful HTTP interfaces and is simple to manage through the Azure preview portal.

Provisioning a DocumentDB account

To get started with DocumentDB you provision a new database account. To do this, use the new Azure Preview Portal (http://portal.azure.com), click the Azure gallery and select the Data, storage, cache + backup category, and locate the DocumentDB gallery item.

image

Once you select the DocumentDB item, choose the Create command to bring up the Create blade for it.

In the create blade, specify the name of the service you wish to create, the amount of capacity you wish to scale your DocumentDB instance to, and the location around the world that you want to deploy it (e.g. the West US Azure region):

image

Once provisioning is complete, you can start to manage your DocumentDB account by clicking the new instance icon on your Azure portal dashboard. 

image

The Keys tile can be used to retrieve the security keys used to access the DocumentDB service programmatically.

Developing with DocumentDB

DocumentDB provides a number of different ways to program against it. You can use the REST API directly over HTTPS, or you can choose from either the .NET, Node.js, JavaScript or Python client SDKs.

The JSON data I am going to use for this example are two families:

// AndersonFamily.json file

{
    "id": "AndersenFamily",
    "lastName": "Andersen",
    "parents": [
        { "firstName": "Thomas" },
        { "firstName": "Mary Kay" }
    ],
    "children": [
        { "firstName": "John", "gender": "male", "grade": 7 }
    ],
    "pets": [
        { "givenName": "Fluffy" }
    ],
    "address": { "country": "USA", "state": "WA", "city": "Seattle" }
}

and

// WakefieldFamily.json file

{
    "id": "WakefieldFamily",
    "parents": [
        { "familyName": "Wakefield", "givenName": "Robin" },
        { "familyName": "Miller", "givenName": "Ben" }
    ],
    "children": [
        {
            "familyName": "Wakefield",
            "givenName": "Jesse",
            "gender": "female",
            "grade": 1
        },
        {
            "familyName": "Miller",
            "givenName": "Lisa",
            "gender": "female",
            "grade": 8
        }
    ],
    "pets": [
        { "givenName": "Goofy" },
        { "givenName": "Shadow" }
    ],
    "address": { "country": "USA", "state": "NY", "county": "Manhattan", "city": "NY" }
}

Using the NuGet package manager in Visual Studio, I can search for and install the DocumentDB .NET package into any .NET application. With the URI and Authentication Keys for the DocumentDB service that I retrieved earlier from the Azure Management portal, I can then connect to the DocumentDB service I just provisioned, create a Database, create a Collection, insert some JSON documents and immediately start querying for them:

using (client = new DocumentClient(new Uri(endpoint), authKey))
{
    var database = new Database { Id = "ScottsDemoDB" };
    database = await client.CreateDatabaseAsync(database);

    var collection = new DocumentCollection { Id = "Families" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);

    //DocumentDB supports strongly typed POCO objects and also dynamic objects
    dynamic andersonFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\AndersonFamily.json"));
    dynamic wakefieldFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\WakefieldFamily.json"));

    //persist the documents in DocumentDB
    await client.CreateDocumentAsync(collection.SelfLink, andersonFamily);
    await client.CreateDocumentAsync(collection.SelfLink, wakefieldFamily);

    //very simple query returning the full JSON document matching a simple WHERE clause
    var query = client.CreateDocumentQuery(collection.SelfLink, "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'");
    var family = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Anderson family have the following pets:");
    foreach (var pet in family.pets)
    {
        Console.WriteLine(pet.givenName);
    }

    //select JUST the child record out of the Family record where the child's gender is male
    query = client.CreateDocumentQuery(collection.DocumentsLink, "SELECT * FROM c IN Families.children WHERE c.gender='male'");
    var child = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Andersons have a son named {0} in grade {1} ", child.firstName, child.grade);

    //cleanup test database
    await client.DeleteDatabaseAsync(database.SelfLink);
}

As you can see above – the .NET API for DocumentDB fully supports the .NET async pattern, which makes it ideal for use with applications you want to scale well. 

Server-side JavaScript Stored Procedures

If I want to perform updates affecting multiple documents within a transaction, I can define a stored procedure in JavaScript that swaps pets between families. In this scenario it is important to ensure that one family doesn’t end up with all the pets and the other with none because something unexpected happened. If an error occurs during the swap, the database must roll back the transaction and leave things in a consistent state. I can do this with the following stored procedure, which runs within the DocumentDB service:

function SwapPets(family1Id, family2Id) {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f where f.id  = "' + family1Id + '"', {},
    function (err, documents, responseOptions) {
        var family1 = documents[0];

        collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f where f.id = "' + family2Id + '"', {},
        function (err2, documents2, responseOptions2) {
            var family2 = documents2[0];

            var itemSave = family1.pets;
            family1.pets = family2.pets;
            family2.pets = itemSave;

            collection.replaceDocument(family1._self, family1,
                function (err, docReplaced) {
                    collection.replaceDocument(family2._self, family2, {});
                });

            response.setBody(true);
        });
    });
}

If an exception is thrown in the JavaScript function (for instance, due to a concurrency violation when updating a record), the transaction is reversed and the system is returned to the state it was in before the function began.

It’s easy to register the stored procedure in code like below (for example: in a deployment script or app startup code):

    //register a stored procedure
    StoredProcedure storedProcedure = new StoredProcedure
    {
        Id = "SwapPets",
        Body = File.ReadAllText(@".\JS\SwapPets.js")
    };

    storedProcedure = await client.CreateStoredProcedureAsync(collection.SelfLink, storedProcedure);

And just as easy to execute the stored procedure from within your application:

    //execute stored procedure passing in the two family documents involved in the pet swap
    dynamic result = await client.ExecuteStoredProcedureAsync<dynamic>(storedProcedure.SelfLink, "AndersenFamily", "WakefieldFamily");

If we checked the pets now linked to the Anderson Family we’d see they have been swapped.

Learning More

It’s really easy to get started with DocumentDB and create a simple working application in a couple of minutes.  The above was but one simple example of how to start using it.  Because DocumentDB is schema-less you can use it with literally any JSON document.  Because it performs automatic indexing on every JSON document stored within it, you get screaming performance when querying those JSON documents later. Because it scales linearly with consistent performance, it is ideal for applications you think might get large.

You can learn more about DocumentDB from the new DocumentDB development center here.

Search: Announcing preview of new Search as a Service for Azure

I’m excited to announce the preview of our new Azure Search service.  Azure Search makes it easy for developers to add great search experiences to any web or mobile application.   

Azure Search provides developers with all of the features needed to build out their search experience without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service.  It is delivered as a fully managed service with an enterprise grade SLA.  We are also releasing a Free tier of the service today that enables you to use it with small-scale solutions on Azure at no cost.

Provisioning a Search Service

To get started, let’s create a new search service.  In the Azure Preview Portal (http://portal.azure.com), navigate to the Azure Gallery, and choose the Data storage, cache + backup category, and locate the Azure Search gallery item.

image

Locate the “Search” service icon and select Create to create an instance of the service:

image

You can choose from two Pricing Tier options: Standard which provides dedicated capacity for your search service, and a Free option that allows every Azure subscription to get a free small search service in a shared environment.

The standard tier can be easily scaled up or down and provides dedicated capacity guarantees to ensure that search performance is predictable for your application.  It also supports the ability to index 10s of millions of documents with lots of indexes.

The free tier is limited to 10,000 documents and up to 3 indexes, and has no dedicated capacity guarantees. However, it is totally free and provides a great way to learn and experiment with all of the features of Azure Search.

Managing your Azure Search service

After provisioning your Search service, you will land in the Search blade within the portal - which allows you to manage the service, view usage data and tune the performance of the service:

image

I can click on the Scale tile above to bring up the details of the number of resources allocated to my search service. If I had created a Standard search service, I could use this to increase the number of replicas allocated to my service to support more searches per second (or to provide higher availability) and the number of partitions to support a higher number of documents within my search service.

Creating a Search Index

Now that the search service is created, I need to create a search index that will hold the documents (data) that will be searched. To get started, I need two pieces of information from the Azure Portal: the service URL to access my Azure Search service (accessed via the Properties tile) and the Admin Key to authenticate against the service (accessed via the Keys tile).

image

Using this search service URL and admin key, I can start using the search service APIs to create an index and later upload data and issue search requests. I will be sending HTTP requests against the API using that key, so I’ll set up a .NET HttpClient object to do this as follows:

HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Add("api-key", "19F1BACDCD154F4D3918504CBF24CA1F");

I’ll start by creating the search index. In this case I want an index I can use to search for contacts in my dataset, so I want searchable fields for their names and tags; I also want to track the last contact date (so I can filter or sort on that later on) and their address as a lat/long location so I can use it in filters as well. To make things easy I will be using JSON.NET (to do this, add the NuGet package to your VS project) to serialize objects to JSON.

var index = new
{
    name = "contacts",
    fields = new[]
    {
        new { name = "id", type = "Edm.String", key = true },
        new { name = "fullname", type = "Edm.String", key = false },
        new { name = "tags", type = "Collection(Edm.String)", key = false },
        new { name = "lastcontacted", type = "Edm.DateTimeOffset", key = false },
        new { name = "worklocation", type = "Edm.GeographyPoint", key = false },
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/?api-version=2014-07-31-Preview",
                                new StringContent(JsonConvert.SerializeObject(index), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

You can run this code as part of your deployment code or as part of application initialization.

Populating a Search Index

Azure Search uses a push API for indexing data. You can call this API with batches of up to 1000 documents to be indexed at a time. Since it’s your code that pushes data into the index, the original data may be anywhere: in a SQL Database in Azure, DocumentDb database, blob/table storage, etc.  You can even populate it with data stored on-premises or in a non-Azure cloud provider.

Note that indexing is rarely a one-time operation. You will probably have an initial set of data to load from your data source, but then you will want to push new documents as well as update and delete existing ones. If you use Azure Websites, this is a natural scenario for Webjobs that can run your indexing code regularly in the background.

Regardless of where you host it, the code to index data needs to pull data from the source and push it into Azure Search. In the example below I’m just making up data, but you can see how I could be using the result of a SQL or LINQ query or anything that produces a set of objects that match the index fields we identified above.

var batch = new
{
    value = new[]
    {
        new
        {
            id = "221",
            fullname = "Jay Adams",
            tags = new string[] { "work" },
            lastcontacted = DateTimeOffset.UtcNow,
            worklocation = new
            {
                type = "Point",
                coordinates = new [] { -122.131577, 47.678581 }
            }
        },
        new
        {
            id = "714",
            fullname = "Catherine Abel",
            tags = new string[] { "work", "personal" },
            lastcontacted = DateTimeOffset.UtcNow,
            worklocation = new
            {
                type = "Point",
                coordinates = new [] { -121.825579, 47.1419814 }
            }
        }
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs/index?api-version=2014-07-31-Preview",
                                new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

Searching an Index

After creating an index and populating it with data, I can now issue search requests against the index. Searches are simple HTTP GET requests against the index, and responses contain the data we originally uploaded as well as accompanying scoring information.

I can do a simple search by executing the code below, where searchText is a string containing the user input, something like abel work for example:

var response = client.GetAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs?api-version=2014-07-31-Preview&search=" + Uri.EscapeDataString(searchText)).Result;
response.EnsureSuccessStatusCode();

dynamic results = JsonConvert.DeserializeObject(response.Content.ReadAsStringAsync().Result);

foreach (var result in results.value)
{
    Console.WriteLine("FullName:" + result.fullname + " score:" + (double)result["@search.score"]);
}
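
Because these are plain REST calls, the same search can be issued from just about any language or HTTP client. As a rough illustration, here is a minimal Python sketch using the requests library (an assumption; any HTTP client would do), reusing the service URL, index name, admin key and api-version from the snippets above:

# Minimal sketch only: assumes Python 3, the 'requests' library, and the
# service URL, 'contacts' index, admin key and api-version shown above.
import requests

SERVICE_URL = "https://scottgu-dev.search.windows.net"
ADMIN_KEY = "19F1BACDCD154F4D3918504CBF24CA1F"   # admin key retrieved from the portal
API_VERSION = "2014-07-31-Preview"

def search_contacts(search_text):
    # Issue the GET request against the 'contacts' index.
    response = requests.get(
        SERVICE_URL + "/indexes/contacts/docs",
        params={"api-version": API_VERSION, "search": search_text},
        headers={"api-key": ADMIN_KEY},
    )
    response.raise_for_status()
    # Each hit carries the uploaded fields plus an @search.score annotation.
    for result in response.json()["value"]:
        print(result["fullname"], result["@search.score"])

search_contacts("abel work")

The response shape is the same one the C# snippet iterates over: a value array of matching documents, each annotated with scoring information.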

Learning More

The above is just a simple scenario of what you can do.  There are a lot of other things we could do with searches. For example, I can use query string options to filter, sort, project and page over the results. I can use hit-highlighting and faceting to create a richer way to navigate results and suggestions to implement auto-complete within my web or mobile UI.

In this example, I used the default ranking model, which uses statistics of the indexed text and search string to compute scores. You can also author your own scoring profiles that model scores in ways that match the needs of your application.

Check out the Azure Search documentation for more details on how to get started, and some of the more advanced use-cases you can take advantage of.  With the free tier now available at no cost to every Azure subscriber, there is no longer any reason not to have Search fully integrated within your applications.

Virtual Machines: Support for SQL Server AlwaysOn, VM Depot images

Last month we added support for managing VMs within the Azure Preview Portal (http://portal.azure.com).  We also released built-in portal support that enables you to easily create multi-VM SharePoint Server Farms as well as a slew of additional Azure Certified VM images.  You can learn more about these updates in my last blog post.

Today, I’m excited to announce new support for automatically deploying SQL Server VMs with AlwaysOn configured, as well as integrated portal support for community supported VM Depot images.

SQL Server AlwaysOn Template

AlwaysOn Availability Groups, released in SQL Server 2012 and enhanced in SQL Server 2014, guarantee high availability for mission-critical workloads. Last year we started supporting SQL Availability Groups on Azure Infrastructure Services. In such a configuration, two SQL replicas (primary and secondary), each in its own Azure VM, are configured for automatic failover, and a listener (DNS name) is configured for client connectivity. Other components required are a file share witness to guarantee quorum in the configuration to avoid “split brain” scenarios, and a domain controller to join all VMs to the same domain. The SQL as well as the domain controller replicas are each deployed to an availability set to ensure they are in different Azure failure and upgrade domains.

Prior to today’s release, setting up the Availability Group configuration could be tedious and time consuming. We have dramatically simplified this experience through a new SQL Server AlwaysOn template in the Azure Gallery. This template fully automates the configuration of a highly available SQL Server deployment on Azure Infrastructure Services using an Availability Group.

You can find the template by navigating to the Azure Gallery within the Azure Preview Portal (http://portal.azure.com), selecting the Virtual Machine category on the left and selecting the SQL Server 2014 AlwaysOn gallery item. In the gallery details page, select Create. All you need to provide is some basic configuration information, such as the administrator credentials for the VMs; the rest of the settings are defaulted for you. You may want to change the default Listener name, as this is what your applications will use to connect to SQL Server.

image

Upon creation, 5 VMs are created in the resource group: 2 VMs for the SQL Server replicas, 2 VMs for the Domain Controller replicas, and 1 VM for the file share witness.

Once created, you can RDP to one of the SQL Server VMs to see the Availability Group configuration as depicted below:

image

Try out the SQL Server AlwaysOn template in the Azure Preview Portal today and give us your feedback!

VM Depot in Azure Gallery

Community-driven VM Depot images have been supported on the Azure platform for a couple of years now. But prior to today’s release they weren’t fully integrated into the mainline user experience.

Today, I’m excited to announce that we have integrated community VMs  into the Azure Preview Portal and the Azure gallery. With this release, you will find close to 300 pre-configured Virtual Machine images for Microsoft Azure.

Using these images, fully functional Virtual Machines can be deployed in the Preview Portal in minutes and customized for specific use cases. Starting from base operating system distributions (such as Debian, Ubuntu, CentOS, Suse and FreeBSD) through developer stacks (such as LAMP, Ruby on Rails, Node and Django), to complete applications (such as Wordpress, Drupal and Apache Solr), there is something for everyone in VM Depot.

Try out the VM Depot images in the Azure gallery from within the Virtual Machine category.

Web Sites: WebJobs and Process Management in the Preview Portal

Starting with today’s Azure release, Web Site WebJobs are now supported in the Azure Preview Portal.  You can also now drill into your Web Sites and monitor the health of any processes running within them (both those that host your web code and your WebJobs).

Web Site WebJobs

Using WebJobs, you can now run any code within your Azure Web Sites – and do so in a way that is readily parallelizable, globally scalable, and complete with remote debugging, full VS support and an optional SDK to facilitate authoring. For more information about the power of WebJobs, visit Azure WebJobs recommended resources.

With today’s Azure release, we now support two types of WebJobs: On Demand and Continuous.  To use WebJobs in the preview portal, navigate to your web site and select the WebJobs tile within the Web Site blade. Notice that the tile also now shows the count of WebJobs available.

image

By drilling into the tile, you can view existing WebJobs as well as create new OnDemand or Continuous WebJobs. Scheduled WebJobs are not yet supported in the preview portal, but expect to see this in the near future.

Web Site Processes

I’m excited to announce a new feature in the Azure Web Sites experience in the Preview Portal - Websites Processes. Using Websites Processes you can enumerate the different instances of your site, browse through the different processes on each instance, and even drill down to the handles and modules associated with each process. You can then check for detailed information like version, language and more.

image

In addition, you also get rich monitoring for CPU, Working Set and Thread count at the process level.  Just like with Task Manager for Windows, data collection begins when you open the Websites Processes blade, and stops when you close it.

image

This feature is especially useful when your site has been scaled out and is misbehaving in some specific instances but not in others. You can quickly identify runaway processes, find open file handles, and even kill a specific process instance.

Monitoring and Management SDK: Programmatic Access to Monitoring Data

The Azure Management Portal provides built-in monitoring and management support that makes it easy for you to track the health of your applications and solutions deployed within Azure.

If you want to programmatically access monitoring and management features in Azure, you can also now use our .NET SDK from Nuget. We are releasing this SDK to general availability today, so you can now use it for your production services!

For example, if you want to build your own custom dashboard that shows metric data from across your services, you can get that metric data via the SDK:

// Create the metrics client by obtaining the certificate with the specified thumbprint.
MetricsClient metricsClient = new MetricsClient(new CertificateCloudCredentials(SubscriptionId, GetStoreCertificate(Thumbprint)));

// Build the resource ID string.
string resourceId = ResourceIdBuilder.BuildWebSiteResourceId("webtest-group-WestUSwebspace", "webtests-site");

// Get the metric definitions.
MetricDefinitionCollection metricDefinitions = metricsClient.MetricDefinitions.List(resourceId, null, null).MetricDefinitionCollection;

// Display the available metric definitions.
Console.WriteLine("Choose metrics (comma separated) to list:");
int count = 0;
foreach (MetricDefinition metricDefinition in metricDefinitions.Value)
{
    Console.WriteLine(count + ":" + metricDefinition.DisplayName);
    count++;
}

// Ask the user which metrics they are interested in.
var desiredMetrics = Console.ReadLine().Split(',').Select(x => metricDefinitions.Value.ToArray()[Convert.ToInt32(x.Trim())]);

// Get the metric values for the last 20 minutes.
MetricValueSetCollection values = metricsClient.MetricValues.List(
    resourceId,
    desiredMetrics.Select(x => x.Name).ToList(),
    "",
    desiredMetrics.First().MetricAvailabilities.Select(x => x.TimeGrain).Min(),
    DateTime.UtcNow - TimeSpan.FromMinutes(20),
    DateTime.UtcNow
).MetricValueSetCollection;

// Display the metric values to the user.
foreach (MetricValueSet valueSet in values.Value)
{
    Console.WriteLine(valueSet.DisplayName + " for the past 20 minutes:");
    foreach (MetricValue metricValue in valueSet.MetricValues)
    {
        Console.WriteLine(metricValue.Timestamp + "\t" + metricValue.Average);
    }
}

Console.Write("Press any key to continue:");
Console.ReadKey();

We support metrics for a variety of services with the monitoring SDK:

Service | Typical metrics | Frequencies
Cloud services | CPU, Network, Disk | 5 min, 1 hr, 12 hrs
Virtual machines | CPU, Network, Disk | 5 min, 1 hr, 12 hrs
Websites | Requests, Errors, Memory, Response time, Data out | 1 min, 1 hr
Mobile Services | API Calls, Data Out, SQL performance | 1 hr
Storage | Requests, Success rate, End2End latency | 1 min, 1 hr
Service Bus | Messages, Errors, Queue length, Requests | 5 min
HDInsight | Containers, Apps running | 15 min

If you’d like to manage advanced autoscale settings that aren’t possible to do in the Portal, you can also do that via the SDK. For example, you can construct autoscale based on custom metrics – you can autoscale by anything that is returned from MetricDefinitions.

All of the documentation on the SDK is available on MSDN.

API Management: Support for Services REST API

We launched the Azure API Management service into preview in May of this year.  The API Management service enables  customers to quickly and securely publish APIs to partners, the public development community, and even internal developers.

Today, I’m excited to announce the availability of the API Management REST API which opens up a large number of new scenarios. It can be used to manage APIs, products, subscriptions, users and groups in addition to accessing your API analytics. In fact, virtually any management operation available in the Management API Portal is now accessible programmatically - opening up a host of integration and automation scenarios, including directly monetizing an API with your commerce provider of choice, taking over user or subscription management, automating API deployments and more.

We've even provided an additional SAS (Shared Access Signature) security option. An integrated experience in the publisher portal allows you to generate SAS tokens - so securely calling your API service couldn’t be easier. In just three easy steps:

  1. Enable the API on the System Settings page on the Publisher Portal
  2. Acquire a time-limited access token either manually or programmatically
  3. Start sending requests to the API, providing the token with every request
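
As a rough sketch of step 3, the request below shows the general shape of a call that carries the token (here in Python with the requests library). The service name, resource path, api-version value and Authorization header format are illustrative assumptions only; see the REST API reference for the authoritative details.

# Illustrative sketch only: the service name, resource path, api-version and
# Authorization header format are assumptions; consult the REST API reference.
import requests

SAS_TOKEN = "uid=...&ex=...&sn=..."                    # token generated in the publisher portal (placeholder)
BASE_URL = "https://contoso.management.azure-api.net"  # hypothetical service management endpoint

response = requests.get(
    BASE_URL + "/apis",                                # for example, listing the published APIs
    params={"api-version": "<current api version>"},   # placeholder version string
    headers={"Authorization": "SharedAccessSignature " + SAS_TOKEN},
)
response.raise_for_status()
print(response.json())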

image 

See the REST API reference for full details.

Delegation of user registration and product subscription

The new API Management REST API makes it easy to automate and integrate other processes with API management. Many customers integrating in this way already have a user account system and would prefer to use this existing resource, instead of the built-in functionality provided by the Developer Portal. This feature, called Delegation, enables your existing website or backend to own the user data, manage subscriptions and seamlessly integrate with API Management's dynamically generated API documentation.

image

It's easy to enable Delegation: in the Publisher Portal, navigate to the Delegation section and enable Delegated Sign-in and Sign up, provide the endpoint URL and validation key, and you're good to go. For more details, check out the how-to guide.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Product Ownership

Software Requirements Blog - Seilevel.com - Thu, 08/21/2014 - 17:00
When a company decides, for whatever reason, that Agile is the right idea for them and they are going to go all out to switch over to Scrum as a methodology, the people they usually look to for taking on the new Product Owner (PO) role are the existing Product Managers (PdMs). This makes all […]
Categories: Requirements

How to move your files to Google Drive

Google Code Blog - Thu, 08/21/2014 - 17:00
Posted by Chuck Coulson, Drive Technology Partnerships, Google

Google Drive for Work is a new premium offering for businesses that includes unlimited storage, advanced audit reporting and new security controls and features, such as encryption at rest.

If you're getting ready to move your company to Drive, one of the first things on your mind is how to migrate all your existing files with as little hassle as possible. It's easy to migrate your files by uploading them directly to Drive or using the Drive Sync client. But, what if you have files stored elsewhere that you want to consolidate? Or what if you want to migrate multiple users at once? Many independent software vendors (ISVs) have built solutions to help organizations migrate their files from different File Sync and Share (FSS) solutions, local hard drives and other data sources. Here are some of the options available for you to use:
  • Cloud Migrator, by Cloud Technology Solutions, migrates user accounts and files to Google Drive and other Google Apps services. (website, blog post)
  • Cloudsfer, by Tzunami, transfers files from Box, Dropbox and Microsoft OneDrive to Google Drive. (website)
  • Migrator for Google Apps, by Backupify, migrates and consolidates personal Google Drive or other Google Apps for Business accounts into a single domain. (website, blog post)
  • Mover migrates data from 23 cloud services providers, web services, and databases into Google Drive. (website, blog post)
  • Nava Certus, by LinkGard, provides a migration and synchronization solution for on-premise and cloud-based storage platforms, including Dropbox, Microsoft OneDrive, Amazon S3, as well as local file systems. (website, blog post)
  • SkySync, by Portal Architects, integrates existing on-site storage systems as well as other cloud storage providers to Google Drive. (website, blog post)
These are just a few companies that offer migration solutions. Please visit the Google Apps Marketplace for a complete listing of tools and offerings that add value to the Google Apps platform.
Categories: Programming

The Plague of Methods and Frameworks

NOOP.NL - Jurgen Appelo - Thu, 08/21/2014 - 16:33
Methods and Frameworks

I know of no industry in the world that is as infested with methods and frameworks as the software business. Whether it’s RUP, XP, Scrum, AUP, DAD, or SAFe, it seems IT businesses are always looking for yet another method or framework that they can “implement” next month.

The post The Plague of Methods and Frameworks appeared first on NOOP.NL.

Categories: Project Management