
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Neo4j: LOAD CSV – Processing hidden arrays in your CSV documents

Mark Needham - Thu, 07/10/2014 - 15:54

I was recently asked how to process an ‘array’ of values inside a column in a CSV file using Neo4j’s LOAD CSV tool and although I initially thought this wouldn’t be possible as every cell is treated as a String, Michael showed me a way of working around this which I thought was pretty neat.

Let’s say we have a CSV file representing people and their friends. It might look like this:

name,friends
"Mark","Michael,Peter"
"Michael","Peter,Kenny"
"Kenny","Anders,Michael"

And what we want is to have the following nodes:

  • Mark
  • Michael
  • Peter
  • Kenny
  • Anders

And the following friends relationships:

  • Mark -> Michael
  • Mark -> Peter
  • Michael -> Peter
  • Michael -> Kenny
  • Kenny -> Anders
  • Kenny -> Michael

We’ll start by loading the CSV file and returning each row:

$ load csv with headers from "file:/Users/markneedham/Desktop/friends.csv" AS row RETURN row;
+------------------------------------------------+
| row                                            |
+------------------------------------------------+
| {name -> "Mark", friends -> "Michael,Peter"}   |
| {name -> "Michael", friends -> "Peter,Kenny"}  |
| {name -> "Kenny", friends -> "Anders,Michael"} |
+------------------------------------------------+
3 rows

As expected, the ‘friends’ column is being treated as a String, which means we can use the split function to get an array of the people we want to be friends with:

$ load csv with headers from "file:/Users/markneedham/Desktop/friends.csv" AS row RETURN row, split(row.friends, ",") AS friends;
+-----------------------------------------------------------------------+
| row                                            | friends              |
+-----------------------------------------------------------------------+
| {name -> "Mark", friends -> "Michael,Peter"}   | ["Michael","Peter"]  |
| {name -> "Michael", friends -> "Peter,Kenny"}  | ["Peter","Kenny"]    |
| {name -> "Kenny", friends -> "Anders,Michael"} | ["Anders","Michael"] |
+-----------------------------------------------------------------------+
3 rows

Now that we’ve got them as an array we can use UNWIND to get pairs of friends that we want to create:

$ load csv with headers from "file:/Users/markneedham/Desktop/friends.csv" AS row 
  WITH row, split(row.friends, ",") AS friends 
  UNWIND friends AS friend 
  RETURN row.name, friend;
+-----------------------+
| row.name  | friend    |
+-----------------------+
| "Mark"    | "Michael" |
| "Mark"    | "Peter"   |
| "Michael" | "Peter"   |
| "Michael" | "Kenny"   |
| "Kenny"   | "Anders"  |
| "Kenny"   | "Michael" |
+-----------------------+
6 rows

And now we’ll introduce some MERGE statements to create the appropriate nodes and relationships:

$ load csv with headers from "file:/Users/markneedham/Desktop/friends.csv" AS row 
  WITH row, split(row.friends, ",") AS friends 
  UNWIND friends AS friend  
  MERGE (p1:Person {name: row.name}) 
  MERGE (p2:Person {name: friend}) 
  MERGE (p1)-[:FRIENDS_WITH]->(p2);
+-------------------+
| No data returned. |
+-------------------+
Nodes created: 5
Relationships created: 6
Properties set: 5
Labels added: 5
373 ms

And now if we query the database to get back all the nodes + relationships…

$ match (p1:Person)-[r]->(p2) RETURN p1,r, p2;
+------------------------------------------------------------------------+
| p1                      | r                  | p2                      |
+------------------------------------------------------------------------+
| Node[0]{name:"Mark"}    | :FRIENDS_WITH[0]{} | Node[1]{name:"Michael"} |
| Node[0]{name:"Mark"}    | :FRIENDS_WITH[1]{} | Node[2]{name:"Peter"}   |
| Node[1]{name:"Michael"} | :FRIENDS_WITH[2]{} | Node[2]{name:"Peter"}   |
| Node[1]{name:"Michael"} | :FRIENDS_WITH[3]{} | Node[3]{name:"Kenny"}   |
| Node[3]{name:"Kenny"}   | :FRIENDS_WITH[4]{} | Node[4]{name:"Anders"}  |
| Node[3]{name:"Kenny"}   | :FRIENDS_WITH[5]{} | Node[1]{name:"Michael"} |
+------------------------------------------------------------------------+
6 rows

…you’ll see that we have everything.
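
As an aside (not in the original post): on anything bigger than this toy dataset you'd want the MERGE lookups on :Person(name) to be backed by an index. A uniqueness constraint, assuming Neo4j 2.x syntax, gives you that plus a guarantee against duplicate people; run it before the import:

$ create constraint on (p:Person) assert p.name is unique;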

If instead of a comma separated list of people we have a literal array in the cell…

name,friends
"Mark", "[Michael,Peter]"
"Michael", "[Peter,Kenny]"
"Kenny", "[Anders,Michael]"

…we’d need to tweak the part of the query which extracts our friends to strip off the first and last characters:

$ load csv with headers from "file:/Users/markneedham/Desktop/friendsa.csv" AS row 
  RETURN row, split(substring(row.friends, 1, length(row.friends) -2), ",") AS friends;
+-------------------------------------------------------------------------+
| row                                              | friends              |
+-------------------------------------------------------------------------+
| {name -> "Mark", friends -> "[Michael,Peter]"}   | ["Michael","Peter"]  |
| {name -> "Michael", friends -> "[Peter,Kenny]"}  | ["Peter","Kenny"]    |
| {name -> "Kenny", friends -> "[Anders,Michael]"} | ["Anders","Michael"] |
+-------------------------------------------------------------------------+
3 rows

And then if we put the whole query together we end up with this:

$ load csv with headers from "file:/Users/markneedham/Desktop/friendsa.csv" AS row 
  WITH row, split(substring(row.friends, 1, length(row.friends) -2), ",") AS friends 
  UNWIND friends AS friend  
  MERGE (p1:Person {name: row.name}) 
  MERGE (p2:Person {name: friend}) 
  MERGE (p1)-[:FRIENDS_WITH]->(p2);
+-------------------+
| No data returned. |
+-------------------+
Nodes created: 5
Relationships created: 6
Properties set: 5
Labels added: 5
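
One caveat worth flagging (my addition, not from the original post): if the array cells ever contain spaces after the commas, e.g. "[Michael, Peter]", split alone will leave leading whitespace in the names. A sketch that cleans each element using Cypher's extract and trim functions:

$ load csv with headers from "file:/Users/markneedham/Desktop/friendsa.csv" AS row 
  WITH row, extract(x IN split(substring(row.friends, 1, length(row.friends) - 2), ",") | trim(x)) AS friends 
  UNWIND friends AS friend 
  MERGE (p1:Person {name: row.name}) 
  MERGE (p2:Person {name: friend}) 
  MERGE (p1)-[:FRIENDS_WITH]->(p2);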
Categories: Programming

Fitting In With Corporate Culture

Making the Complex Simple - John Sonmez - Thu, 07/10/2014 - 15:00

In this video I answer a question about fitting into corporate culture when you come from a different background.

The post Fitting In With Corporate Culture appeared first on Simple Programmer.

Categories: Programming

Registration Opens for More Workshops

NOOP.NL - Jurgen Appelo - Thu, 07/10/2014 - 09:51

I’m very happy to say that the Management 3.0 Workout sessions so far have been well received, and they’re getting better every time! I’m almost ready for the summer break, but registration has opened for the remaining workshops this year. Please remember, I will personally visit each city only once!
It was a very joyful and insightful workshop

The post Registration Opens for More Workshops appeared first on NOOP.NL.

Categories: Project Management

Agile Charter Barnacles

Barnacles slow down the boat.

The basic Agile charter is brief and to the point. This brevity seems to beg practitioners to add components from classic charters. A sample of the types of additions includes:

  1. Out of Scope:  Sometimes it is important to establish boundaries for a team.  Identifying what is not in scope sets limits for the team.  Identifying what is out of scope is more important as projects get larger.  Programs will almost always need to spend time defining which groups should focus on what areas.  I often keep this data outside of the charter and post copies (on flip charts) in team rooms.
  2. Involved Groups: This possible addition identifies other teams, groups and stakeholders that the team will have to interface with to deliver the solution.  I generally find this section to be of value for new teams, teams involved with a business solution outside of their normal area of expertise or programs.  A quick test for whether the section is needed is to ask whether it can be completed by cutting and pasting the information from somewhere else.  If you can complete the involved groups section using a cut and paste from the last charter, the section is not needed.
  3. Risks: Risks are potential problems that could affect the team’s ability to deliver value or the value of that delivery. While it makes sense to identify and discuss risks when crafting a charter, I do not recommend adding risks to ANY charter.  In classic projects, add risks directly to the risk plan, and in Agile projects, add risks directly to the backlog.  Having information in multiple locations can require extra time to maintain OR lead to losing information.
  4. Proposed Release Plan: Developing a release plan helps a team answer questions about when a project will be delivered.   Embedding release plans in the charter generates an anchoring bias (setting a date or a set of dates in the team's and stakeholders' minds) and can therefore be problematic. Recognize that release plans developed early in a project are subject to high levels of variability, since significant levels of discovery occur as a project develops.
  5. Proposed Solution: In most cases I question how the solution can be known before the project begins, therefore I tend to resist including it in the charter.
  6. Constraints: Everyone on the team should understand the constraints the team faces.  Capturing known constraints as an additional flip chart makes sense and does not add significantly to the burden of the process.  Constraints can include: fixed delivery dates, budgets, key personnel absences or hardware/software constraints, such as COTS usage requirements.

For another view on what can or can’t be in an Agile charter take a look at the Agile Warrior’s blog.  The blog has a longer list of items that compose his proposed inception deck/charter.

The decision tree I use for whether to include anything other than the basics in an Agile charter (which I recommend doing as flip charts and posting in the project’s team room) is:

  1. Will the team be able to use the information to guide their performance?
    If yes, consider including and go to next question.
  2. Is the data available somewhere else?
    If yes, don’t include it. If no, consider including and go to the next question.
  3. Will I have to replicate the data in another document or tool later?
    If no, I will typically include it; if yes, I will not.

My default answer to adding stuff to an Agile charter is no, and I require a significant level of convincing to change my mind.  The Agile team charter is a tool to help the team focus on achieving a goal and delivering specific value. Anything that does not help achieve that goal does not belong in the charter.


Categories: Process Management

Should the Product Owner test everything?

Good Requirements - Jeffrey Davidson - Thu, 07/10/2014 - 00:24

A scrum master I’ve coached recently sent me this question and I wanted to share my answer. Would you have answered the same way? What did I miss? What do you ask (demand?) from your product owner?

 

Question: Hi, Jeffrey,

Quick question for you: Does Product Owner (PO) approval need to be on a per story basis, per feature basis, or both?

We are facing a situation where some of the system environments were not in place and completed work has remained in Dev until today. We learned today that the Test environment is ready. The stage environment is due to be completed at some point in the near future. Meanwhile, our team has modified the team’s “Definition of Done” so that completion criteria are more aligned with our capabilities while the system environments remain incomplete. Hence, the above question.

Signed,
The Client

 

Answer: Hello, Client.

First, it makes sense the PO is included in the conversation around “Definition of Done.” I’m not sure based on the question if they are in the loop, or not. I say this because the team is building and meeting expectations for the PO. It’s the polite thing to do to notify them and explain the new definition. In some cases, it may be more appropriate to ask their permission to change rather than simply notify them of the change.

Second, this change makes sense to me; you didn’t have the right environments previously and now you do. It makes sense the definition should change to accompany the environment and how the team is working.

Third, what’s happened to date and how much trust is there between the PO and team? If the PO has already tested all the existing stories, then they may not want to do more than audit the existing stories in the new environment(s). If the PO has trust in the team and testers, they may never do more than audit the stories. If the PO doesn’t have time, they may never get to more than auditing stories. In the end, it’s a great big “it depends” kind of answer.

What do I want from the PO? I want more involvement, as much as I can get. I want the PO to test every story as it’s finished and at least audit functionality and features as they are delivered. I don’t often get it, but it’s my request.

Categories: Requirements

One view or many?

Coding the Architecture - Simon Brown - Wed, 07/09/2014 - 22:16

In Diagramming Spring MVC webapps, I presented an approach that allows you to create a fairly comprehensive model of a software system in code. It starts with you creating a simple base model that includes software systems, people and containers. With this in place, all of the components can then be automatically populated into the model via a scan of the compiled Java code. This is all based upon Software architecture as code.

Once you have a model to work with, it's relatively straightforward to visualise it via a number of views. In the Spring PetClinic example, three separate views (one each of a system context, containers and components view) are sufficient to show everything. With larger software systems, however, this isn't the case.

As an example, here's what a single component diagram for the web application of my techtribes.je system looks like.

A mess

Yup, it's a mess. The components around the left, top and right edges are Spring MVC controllers, while those in the centre are the core components. There are clearly three hotspots here - the LoggingComponent, ActivityComponent and ContentSourceComponent. The reason for the first should be obvious, in that almost all components use the LoggingComponent. The latter two are used by all controllers, simply because some common information is displayed on the header of all pages on the website. I don't mind excluding the LoggingComponent from the view, but I'd quite like to keep the other two. That aside, even excluding the ActivityComponent and ContentSourceComponent doesn't actually solve the problem here. The resulting diagram is still a mess because it's showing far too much information. Instead, another approach is needed.

With this in mind, what I've done instead is use a programmatic approach to create a number of views for the techtribes.je web application, one per Spring MVC controller. The code looks like this.

The result is a larger number of simple diagrams, but I think that the trade-off is worth it. It's a much better way to navigate a large model.

Not so much of a mess

And here's an example component diagram that focusses on a single Spring MVC controller.

Not so much of a mess

The JSON representing the techtribes.je model can be found on GitHub and you can copy-paste it into my (still in-progress) diagramming tool if you'd like to explore the model yourself. I'm still experimenting with much of this but I really like the opportunities provided by having the software architecture model in code. This really is "software architecture for developers". :-)

Categories: Architecture

Putting your Professional Group on the Map

Google Code Blog - Wed, 07/09/2014 - 17:30
By Sarah Maddox, Google Developer Relations team

People love to know what's happening in their area of expertise around the world. What better way to show it than on a map? Tech Comm on a Map puts technical communication tidbits onto an interactive map, together with the data and functionality provided by Google Maps.


I'm a technical writer at Google. In this post I share a project that uses the new Data layer in the Google Maps JavaScript API, with a Google Sheets spreadsheet as a data source and a location search provided by Google Places Autocomplete.

Although this project is about technical communication, you can easily adapt it for other special interest groups too. The code is on GitHub.

The map in action

Visit Tech Comm on a Map to see it in action. Here's a screenshot:
The colored circles indicate the location of technical communication conferences, societies, groups and businesses. The 'other' category is for bits and pieces that don't fit into any of the categories. You can select and deselect the checkboxes at top left of the map, to choose the item types you're interested in.

When you hover over a circle, an info window pops up with information about the item you chose. If you click a circle, the map zooms in so that you can see where the event or group is located. You can also search for a specific location, to see what's happening there.

Let's look at the building blocks of Tech Comm on a Map.
Getting hold of a mapI'm using the Google Maps JavaScript API to display and interact with a map. Where does the data come from?When planning this project, I decided I want technical communicators to be able to add data (conferences, groups, businesses, and so on) themselves, and the data must be immediately visible on the map.

I needed a data entry and storage tool that provided a data entry UI, user management and authorization, so that I didn't have to code all that myself. In addition, contributors shouldn't need to learn a new UI or a new syntax in order to add data items to the map. I needed a data entry mechanism that is familiar to most people - a spreadsheet, for example.

In an episode of Google Maps Developer Shortcuts, Paul Saxman shows how to pull data from Google Drive into your JavaScript app. That's just what I needed. Here's how it works.

The data for Tech Comm on a Map is in a Google Sheets spreadsheet. It looks something like this:


Also in the spreadsheet is a Google Apps Script that outputs the data in JSON format:

var SPREADSHEET_ID = '[MY-SPREADSHEET-ID]';
var SHEET_NAME = 'Data';

function doGet(request) {
  var callback = request.parameters.jsonp;
  var range = SpreadsheetApp
      .openById(SPREADSHEET_ID)
      .getSheetByName(SHEET_NAME)
      .getDataRange();
  var json = callback + '(' +
      Utilities.jsonStringify(range.getValues()) + ')';
  return ContentService
      .createTextOutput(json)
      .setMimeType(ContentService.MimeType.JAVASCRIPT);
}


Follow these steps to add the script to the spreadsheet and make it available as a web service:
  1. In Google Sheets, choose 'Tools' > 'Script Editor'.
  2. Add a new script as a blank project.
  3. Insert the above code.
  4. Choose 'File' > 'Manage Versions', and name the latest version of the script.
  5. Choose 'Publish' >  'Deploy as web app'. Make it executable by 'anyone, even anonymous'. Note: This means anyone will be able to access the data in this spreadsheet via a script.
  6. Choose 'Deploy'.
  7. Copy the URL of the web service. You'll need to paste it into the JavaScript on your web page.

In your JavaScript, define a variable to contain the URL of the Google Apps script, and add the JSONP callback parameter:
var DATA_SERVICE_URL =
    "https://script.google.com/macros/s/[MY-SCRIPT-ID]/exec?jsonp=?";

Then use jQuery's Ajax function to fetch and process the rows of data from the spreadsheet. Each row contains the information for an item: type, item name, description, website, start and end dates, address, latitude and longitude.

$.ajax({
  url: DATA_SERVICE_URL,
  dataType: 'jsonp',
  success: function(data) {
    // Get the spreadsheet rows one by one.
    // First row contains headings, so start the index at 1 not 0.
    for (var i = 1; i < data.length; i++) {
      map.data.add({
        properties: {
          type: data[i][0],
          name: data[i][1],
          description: data[i][2],
          website: data[i][3],
          startdate: data[i][4],
          enddate: data[i][5],
          address: data[i][6]
        },
        geometry: {
          lat: data[i][7],
          lng: data[i][8]
        }
      });
    }
  }
});

The new Data layer in the Maps JavaScript API
Now that I could pull the tech comm information from the spreadsheet into my web page, I needed a way to visualize the data on the map. The new Data layer in the Google Maps JavaScript API is designed for just such a purpose. Notice the method map.data.add() in the above code. This is an instruction to add a feature in the Data layer.

With the basic JavaScript API you can add separate objects to the map, such as a polygon, a marker, or a line. But by using the Data layer, you can define a collection of objects and then manipulate and style them as a group. (The Data layer is also designed to play well with GeoJSON, but we don't need that aspect of it for this project.)

The tech comm data is represented as a series of features in the Data layer, each with a set of properties (type, name, address, etc) and a geometry (latitude and longitude).

Style the markers on the map, with different colors depending on the data type (conference, society, group, etc):


function techCommItemStyle(feature) {
  var type = feature.getProperty('type');
  var style = {
    icon: {
      path: google.maps.SymbolPath.CIRCLE,
      fillOpacity: 1,
      strokeWeight: 3,
      scale: 10
    },
    // Show the markers for this type if
    // the user has selected the corresponding checkbox.
    visible: (checkboxes[type] != false)
  };
  // Set the marker colour based on type of tech comm item.
  switch (type) {
    case 'Conference':
      style.icon.fillColor = '#c077f1';
      style.icon.strokeColor = '#a347e1';
      break;
    case 'Society':
      style.icon.fillColor = '#f6bb2e';
      style.icon.strokeColor = '#ee7b0c';
      break;
    // . . . SNIPPED SOME DATA TYPES FOR BREVITY
    default:
      style.icon.fillColor = '#017cff';
      style.icon.strokeColor = '#0000ff';
  }
  return style;
}
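
The post doesn't show where this style function gets registered; assuming the standard Data layer API, it's a single call once the map is initialised:

map.data.setStyle(techCommItemStyle);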

Set listeners to respond when the user hovers over or clicks a marker. For example, this listener opens an info window on hover, showing information about the relevant data item:

map.data.addListener('mouseover', function(event) {
  createInfoWindow(event.feature);
  infoWindow.open(map);
});

The Place Autocomplete search
The last piece of the puzzle is to let users search for a specific location on the map, so that they can zoom in and see the events in that location. The location search box on the map is provided by the Place Autocomplete widget from the Google Places API.
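
The wiring for the search box isn't shown in the post, but the standard pattern for the widget looks roughly like this (the 'pac-input' element id and the fallback zoom level are assumptions):

var input = document.getElementById('pac-input');  // assumed id of the search box
var autocomplete = new google.maps.places.Autocomplete(input);

// When the user picks a suggestion, zoom the map to that place.
google.maps.event.addListener(autocomplete, 'place_changed', function() {
  var place = autocomplete.getPlace();
  if (!place.geometry) {
    return;  // free-text entry with no geocoded result
  }
  if (place.geometry.viewport) {
    map.fitBounds(place.geometry.viewport);
  } else {
    map.setCenter(place.geometry.location);
    map.setZoom(10);  // assumed zoom level for a point result
  }
});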
What's next?

Tech Comm on a Map is an ongoing project. We technical communicators are using a map to document our presence in the world!

Would you like to contribute? The code is on GitHub.

Posted by Louis Gray, Googler
Categories: Programming

Using SSD as a Foundation for New Generations of Flash Databases - Nati Shalom

“You just can't have it all” is a phrase that most of us are accustomed to hearing and that many still believe to be true when discussing the speed, scale and cost of processing data. To reach high-speed data processing, it is necessary to utilize more memory resources, which increases cost, because memory, on average, tends to be more expensive than commodity disk drives. The idea of data systems being unable to reliably provide you with both memory and fast access—not to mention at the right cost—has long been debated, though the idea of such limitations was cemented by computer scientist Eric Brewer, who introduced us to the CAP theorem.

The CAP Theorem and Limitations for Distributed Computer Systems

Categories: Architecture

A Typical Conversation on Twitter

NOOP.NL - Jurgen Appelo - Wed, 07/09/2014 - 15:58

writer: “I believe that A is true.”

reader: “That’s stupid. Everyone knows that A’ is true.”
writer: “I wasn’t referring to A’. I was talking about A.”

reader: “There’s nothing wrong with A’.”

The post A Typical Conversation on Twitter appeared first on NOOP.NL.

Categories: Project Management

Agile Project Charter


All charters have similar goals: to establish a clear and agreed upon definition of success for the team.

“If you don’t have a clear definition of your destination when you set sail, any port will do.”
– (unknown)

Where an Agile charter differs is in its single-minded approach to simplicity.  The Agile charter strives to be the simplest tool needed to concentrate the team’s focus by generating a clear statement of the problem to be solved and its solution.

While all charters would claim simplicity, some classic project charters can challenge the size of small novels (100k words between content and template) and can take months to write, wordsmith and gain authorizing signatures.  Note that this generally occurs after strenuous budgeting. Agile team charters take a different approach and can generally be completed in a single afternoon.  An Agile team charter has three attributes that immediately set it in a different universe from classic charters.

  • Concise – Agile team charters are time boxed (done in a very specific time) and constrained.  Examples of constraints that I have used include one typed page or four handwritten flip charts (my favorite).  Time and size constraints force the team to be very concise.
  • Organic – Whenever possible use a flip chart rather than PowerPoint (or presentation software du jour).  Flip charts and other organic mechanisms for completing the charter generate engagement, which increases the charter's value to the team.
  • Use language that is understandable to the whole team – Do not use legal mumbo jumbo.  Reducing the amount of legal mumbo jumbo reduces the time spent on wordsmithing.

Once the team understands the criteria for the charter, there are four typical components of an Agile team charter.

  1. Establish team values and norms – Teams should establish a set of norms or agreements to reduce conflict and promote positive behavior.  Norms can include: respecting team members, not talking over others, showing up on time, and living up to commitments. The whole team (Scrum master, product owner and EVERYONE else) brainstorms, discusses, records and then commits to living by the norms.  Everyone needs to be able to commit to the norms or the team will struggle with institutionalizing the norms.
  2. Develop an elevator speech – It is a short summary used to quickly and simply define a project and its value proposition.  The elevator speech helps team members understand the project goal and why the project is important. A simple elevator speech outline is:
    For [target customer],
    who [statement of need or opportunity]
    the [project name]
    is a [product category]
    that [key benefit, compelling reason to buy].
    Unlike [primary competitive alternative]
    our project [statement of primary differentiation].
  3. Create a product box – The product box represents the team's understanding of the central metaphor for the project. Think of a product box as the box that would contain the functionality generated by the project if you could buy it at the store.  The team uses the metaphor of designing a box for the software to lock in a common understanding of the project vision. It is critical that the team spend the time needed to draw the box as a picture to set the metaphor of the vision deeply into the team’s psyche.
      Components of the product box should include: 

    • Product Name
    • Picture/Drawing
    • Slogan
    • 3 – 4 Key Selling Points or Objectives
    • Optional (back of box):
      1. Product Description
      2. Features List
      3. Operating Requirements
  4. Capture success criteria – The fourth flip chart in the four flip chart method is success criteria.  Success criteria provide the team with a set of goals to use when they need to work through issues or differing expectations.  Generating the success criteria as a team ensures the criteria do not represent the point of view of a single person.

An Agile project charter is a tool to focus the team on what needs to be done and why it needs to be done.  Because this charter process is relentlessly focused on simplicity, an Agile team charter can be delivered with far less overhead than a classic project charter.


Categories: Process Management

It's Not Bottoms Up or Top Down, It's Capabilities Based Delivery

Herding Cats - Glen Alleman - Tue, 07/08/2014 - 21:36

There is a popular notion that agile is bottoms up and traditional is top down. Neither is actually effective in delivering value to the customer based on the needed capabilities, time-phased to match the business or mission need. 

The traditional - read PMI, and an over-generalization - project life cycle is requirements elicitation based. Go gather the requirements, arrange them in some order that makes sense and start implementing them. The agile approach (this is another over-generalization) is to let the requirements emerge, implement them in the priority the customer says - or discovers.

Both these approaches have serious problems, as evidenced by the statistics of software development:

  • Traditional approaches take too long
  • Agile approaches ignore the underlying architectural impacts of mashing up the requirements
Categories: Project Management

Sponsored Post: Surge, Apple, Dreambox, Chartbeat, Monitis, Netflix, Salesforce, Blizzard Entertainment, Cloudant, CopperEgg, Logentries, Gengo, ScaleOut Software, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, LogicMonitor, AppDynamics, ManageEngin

Who's Hiring?

  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here.
    • Senior Security Engineer. As a Senior Security Engineer on our team, you will be the ‘tip of the spear’ and will have direct impact on the Point-of-Sale system that powers Apple Retail globally. You will contribute to implementing standards and processes across multiple groups within the organization. You will also help lead the organization through a continuous process of learning and improving secure practices. Please apply here
    • Quality Assurance Engineer - Mobile Platforms. Apple’s Mobile Services/Emerging Technology group is looking for a highly motivated, result-oriented Quality Assurance Engineer. You will be responsible for overseeing quality engineering of mobile server and client platforms and applications in a fast-paced dynamic environment. Your job is to exceed our business customer's aggressive quality expectations and take the QA team forward on a path of continuous improvement. Please apply here.
    • Sr Software Engineer. Join Apple's Internet Applications Team, within the Information Systems and Technology group, as a Senior Software Engineer. Be involved in challenging and fast paced projects supporting Apple's business by delivering Java based IS Systems. Please apply here.
    • Senior Payment Engineer. You will be responsible for working with cross-functional teams and developing Java server-based solutions to address business and technological needs. You will be helping design and build next generation retail solutions. You will be reviewing design and code developed by others on the team. You will build services and integrate with both internal as well as external services in a SOA environment. You will design and develop frameworks to be used by a large community of developers within the organization. Please apply here
    • Software Developer in Test. The iOS Systems team is looking for a Quality Assurance engineer. In this role you will be expected to work hand-in-hand with the software engineering team to find and diagnose software defects. The ideal candidate will also seek out ways to further automate all aspects of our existing process. This is a highly technical role and requires in-depth knowledge of both white-box and black-box testing methodologies. Please apply here
    • Senior Software Engineer - iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of a fast-paced environment make your heart leap? Do you want your technical abilities to be challenged every day, and for your work to make a difference in the lives of millions of people? If so, the iOS Systems Carrier Services team is looking for a talented software engineer who is not afraid to share knowledge, think outside the box, and question assumptions. Please apply here.

  • Asana. As an infrastructure engineer you will be designing software to process, query, search, analyze, and store data for applications that are continually growing in scale. You will work with a world-class team of engineers on deploying and operating existing systems, and building new ones for problems that are unique to our problem space. Please apply here.

  • Operations Engineer - AWS Cloud. Want to grow and extend a cutting-edge cloud deployment? Take charge of an innovative 24x7 web service infrastructure on the AWS Cloud? Join DreamBox Learning’s creative team of engineers, designers, and educators. Help us radically change education in an environment that values collaboration, innovation, integrity and fun. Please apply here. http://www.dreambox.com/careers

  • Chartbeat measures and monetizes attention on the web. Our traffic numbers are growing, and so is our list of product and feature ideas. That means we need you, and all your unparalleled backend engineer knowledge, to help us scale, extend, and evolve our infrastructure to handle it all. If you've got these chops: www.chartbeat.com/jobs/be, come join the team!

  • The Salesforce.com Core Application Performance team is seeking talented and experienced software engineers to focus on system reliability and performance, developing solutions for our multi-tenant, on-demand cloud computing system. Ideal candidate is an experienced Java developer, likes solving real-world performance and scalability challenges and building new monitoring and analysis solutions to make our site more reliable, scalable and responsive. Please apply here.

  • Sr. Software Engineer - Distributed Systems. Membership platform is at the heart of Netflix product, supporting functions like customer identity, personalized profiles, experimentation, and more. Are you someone who loves to dig into data structure optimization, parallel execution, smart throttling and graceful degradation, SYN and accept queue configuration, and the like? Is the availability vs consistency tradeoff in a distributed system too obvious to you? Do you have an opinion about asynchronous execution and distributed co-ordination? Come join us

  • Java Software Engineers of all levels, your time is now. Blizzard Entertainment is leveling up its Battle.net team, and we want to hear from experienced and enthusiastic engineers who want to join them on their quest to produce the most epic customer-facing site experiences possible. As a Battle.net engineer, you'll be responsible for creating new (and improving existing) applications in a high-load, high-availability environment. Please apply here.

  • Human Translation Platform Gengo Seeks Sr. DevOps Engineer. Build an infrastructure capable of handling billions of translation jobs, worked on by tens of thousands of qualified translators. If you love playing with Amazon’s AWS, understand the challenges behind release-engineering, and get a kick out of analyzing log data for performance bottlenecks, please apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • OmniTI has a reputation for scalable web applications and architectures, but we still lean on our friends and peers to see how things can be done better. Surge started as the brainchild of our employees wanting to bring the best and brightest in Web Operations to our own backyard. Now in its fifth year, Surge has become the conference on scalability and performance. Early Bird rate in effect until 7/24! 

  • FoundationDB has announced a new course on concurrency which is free and fully browser-accessible. The course is targeted at developers who are familiar with the FoundationDB Key-Value Store API and want to achieve high throughput in their applications.
Cool Products and Services
  • Now track your log activities with Log Monitor and be on the safe side! Monitor any type of log file and proactively define potential issues that could hurt your business' performance. Detect your log changes for: Error messages, Server connection failures, DNS errors, Potential malicious activity, and much more. Improve your systems and behaviour with Log Monitor.

  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • Aerospike in-Memory NoSQL database is now Open Source. Read the news and see who scales with Aerospike. Check out the code on github!

  • consistent: to be, or not to be. That’s the question. Is data in MongoDB consistent? It depends. It’s a trade-off between consistency and performance. However, does performance have to be sacrificed to maintain consistency? more.

  • Do Continuous MapReduce on Live Data? ScaleOut Software's hServer was built to let you hold your daily business data in-memory, update it as it changes, and concurrently run continuous MapReduce tasks on it to analyze it in real-time. We call this "stateful" analysis. To learn more check out hServer.

  • LogicMonitor is the cloud-based IT performance monitoring solution that enables companies to easily and cost-effectively monitor their entire IT infrastructure stack – storage, servers, networks, applications, virtualization, and websites – from the cloud. No firewall changes needed - start monitoring in only 15 minutes utilizing customized dashboards, trending graphs & alerting.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Critical Thinking Skills Needed for Any Change To Be Effective

Herding Cats - Glen Alleman - Tue, 07/08/2014 - 15:40

Why is it hard to think beyond our short term vision? Rapid delivery of incremental value is common sense; no one would object to that - within the ability of the business to absorb this value, of course. This is called the Business Rhythm.

But that rapid delivery of incremental value is only a means to an end. The end is a set of business capabilities that allows the business to accomplish its Mission - to do something as a whole with those incremental features. That is, to turn the features into a capability.

Think about a voice over IP system whose feature set was incrementally delivered to 5,000 users at a nationwide firm. This week we can call people and receive calls from people, but we don't have the Hold feature yet. Are you really interested in taking that product and putting it to use? 

How about an insurance enrollment system, where you can sign up, provide your financial and health background, choose between policies, but can't see which doctors in your town take the insurance, because the Provider Network piece isn't complete yet.

These are not notional examples; they're real projects I work on. For these types of projects - most projects in the enterprise IT world - an All In feature set is needed. Not the Minimum Viable Product (MVP), but the set of Required Capabilities to meet the business case goals of providing a service or product to customers. No half-baked release with missing market features.

You might say that incremental release of features could be a market strategy, but looking at actual products or integrated services, it seems there is little room for partial capabilities in anything, let alone Enterprise-class products. Either the target market gets the set of needed capabilities to capture market share or provide the business service, or it doesn't and someone else does.

An internal system may have different behaviours; I can't say, since I don't work in that domain. But we've heard loud and strident voices telling us to deliver fast and deliver often when there is no consideration for the Business Rhythm of the market or user community for those incremental - which is a code word for partially working - capabilities.

Of course the big bang design, code, test paradigm was nonsense to start with. That's not what I'm talking about here. I'm talking about the lack of critical assessment of what the value flow of the business is, and only then applying a specific set of processes to deliver that value. Outcome first, then method.

So Now The Hard Part

The conversation around software delivery seems to be dominated by those writing software, rather than by those paying for the software to be written. Where are the critical thinking skills to ask those hard-nosed business questions:

  • When will you be done with all the features I need to implement my business strategy?
  • How much will it cost for all those features needed to provide those capabilities that fulfill my business plan?

Questions like that have been replaced with platitudes and simple - many times simple-minded - phrases:

  • Deliver early and often - without consideration of the business needs
  • Unit testing is a waste - because those tests, like the internal documentation that provides a long-term maintainability platform, aren't what the customer bought
  • We can decide about all kinds of things in the software business without having to estimate anything - a complete violation of the principle of microeconomics, which requires we know the impact of our choices in some unit of measure meaningful to the decision maker. You know, something like money.
Related articles:

  • Is There Such a Thing As Making Decisions Without Knowing the Cost?
  • Capabilities Based Planning and Development
  • Business Rhythm Drives Process
  • Do It Right or Do It Twice
  • What Does It Mean To Be DONE?
  • How To Estimate Almost Any Software Deliverable
  • Alan Kay: The pitfalls of incrementalism
  • Don't Start With Requirements Start With Capabilities
  • How To Create Confusion About Root Causes
Categories: Project Management

Choose Backlog Items That Serve Two Purposes

Mike Cohn's Blog - Tue, 07/08/2014 - 15:00

I've been playing a fair amount of Go lately. If you're not familiar with Go, it's a strategy game that originated in China 2,500 years ago. It's along the lines of chess, but it's about marking territory with black and white pieces played on the intersections of a grid of 19x19 lines.

Like Scrum, there are very few rules in Go. Also like Scrum, there are a fair number of principles, often called proverbs in the Go world. One of these Go principles is that a move that does two things is better than a move that does one. For example, a single move may defend a group of the player's stones while threatening the opponent's stones.

Even if you don't know Go, I think this proverb is understandable--after all, something that does two things sounds like it would be better than something that achieves only one goal. But this isn't hard-and-fast advice. It's a principle or proverb because sometimes a move that does one thing--but does that thing very well--will be the better choice.

In addition to playing more Go over the past week or two, I've also spent a lot of time thinking about approaches to prioritizing product backlogs. I hope to share some new thoughts on that here in the coming months.

One thing that has become increasingly clear is that when prioritizing product backlog items, an item that can achieve two goals should often be higher priority.

Let’s see how a product backlog item can help achieve two goals.

Normally, product backlog items are prioritized based on how desirable the new functionality is. This is often, of course, adjusted by how long that functionality will take to develop. That is, a product owner wants something pretty badly until finding out that feature will take the entire next quarter. So, desirability often tempered by cost (usually in development team time) determines priorities.

But there can be other important considerations. And prioritizing highly a product backlog item that is moderately desirable (given its cost) but that also achieves one of these secondary goals may make that item a better first choice overall.

One such consideration is how much learning will occur if the product backlog item is developed. If the team or product owner is going to learn from developing a feature, do it sooner.

Learning can occur in a variety of ways. Developers may learn from doing a product backlog item that the new technology they’d counted on being easy isn’t. A product owner may learn by showing a new user interface built for a specific product backlog item is not something users are excited about as the product owner thought.

Another consideration is how much risk will be reduced or eliminated by developing a product backlog item. If a certain product backlog item needs to be developed and doing so will be risky, my preference is to do that product backlog item early so that I have time to recover from the risk if it hits.

Selecting product backlog items to work on that achieve two (or all three!) of these goals can be a very powerful prioritization strategy. Adding value while simultaneously reducing risk or accelerating learning is just as good as playing a Go stone that achieves two goals.


Developer Productivity Tool Review: Telerik’s Devcraft

Making the Complex Simple - John Sonmez - Tue, 07/08/2014 - 15:00

I don’t do many product reviews on this blog–and there is a good reason for it. I get plenty of requests from companies asking me to “pimp” their stuff on this blog, but most of the stuff I am asked to write about I just don’t use or would never really find myself using. However, […]

The post Developer Productivity Tool Review: Telerik’s Devcraft appeared first on Simple Programmer.

Categories: Programming

Pragmatic Manager Posted: Standup or Handoff

I published a Pragmatic Manager yesterday to my subscribers. Normally, I let them enjoy the pleasure of being “in-the-know” about what I have to say this month for a while before I post the emails to my site.

Read the Pragmatic Manager here: Standup or Handoff.

However, I made a Big Mistake in where I will be speaking this week. I fat-fingered July 10 into July 19. What made it into the newsletter was July 19. Oops. I’m actually a panelist this Thursday, July 10, at Agile New England. The topic: Agile: Massive Success or Empty Buzzword?

My fellow panelists are Ken Schwaber and Linda Cook. We will have no shortage of opinions!

For example, I suspect that Ken and I might disagree on this very issue, of whether you can do agile with a geographically distributed team, and if you can have handoffs or you must do standups.

If you are near the Boston area, this Thursday, July 10, and want to see some sparks fly—or at least engage in lively debate—come to the Agile New England meeting July 10.

If you’re wondering how this got past my reviewers, my copyeditor, and me? Well, we all make mistakes, don’t we?

Categories: Project Management

Identifying Architectural Elements in Current Systems

Coding the Architecture - Simon Brown - Tue, 07/08/2014 - 10:27

Simon recently talked about the gap between Software Architecture and Code and how to close this with architecturally-evident coding. He's also creating tools to allow Software Architecture to be expressed as code.

If you're working on a greenfield project then including annotations to help with navigation is a great solution but what if you've inherited a large system with a model-code gap? Or if you only realise, sometime into a project, that you lack a model to help you understand its growing complexity? Well, Simon also had some thoughts on scanning Spring annotations to provide this data. This works quite well and it got me thinking about other artifacts in code that can be extracted for these diagrams.

(In more formal terms - for those of you that like to quote ISO42010 - we are trying to extract architectural elements from code that can be displayed within architectural views. Of course the elements may be from a variety of differing abstractions, levels and granularity and therefore need to be placed within differing views.)

So what can we extract from a current/legacy system to give us a view into it? Some suggestions include:

  • Annotations – As already suggested, dependency injection systems such as Spring provide some annotations that can be extracted to give a basic, high-level model. Annotations are also present in Java EE applications and other enterprise frameworks.
  • XML DI Configuration files – Many (legacy) Spring projects use xml configuration files to define beans. Having scanned a few examples, this seems to create a relatively low-level model which would need some manual tweaking after generation. With a sensible naming convention for beans you can produce models for a desired abstraction. The bean properties indicate the connections between these elements.
  • Module Bundling Systems – Modular systems such as OSGi define bundles of components and services, including lifecycle and service registry. The deployment information should provide a high-level overview.
  • Packages – If you have used 'package-by-component' then your packages will relate one-to-one with your components. The links between components should be identifiable by the links between the classes within them (the has-a relationships). If you have package-by-layer then this is much harder or impossible to use. Experience tells me that most real-world systems are actually a combination of the two, so you should have some useful information.
  • Class Names – It's very common for class names to contain a strong indication of their role e.g. XyzService, XyzConnector, XyzDao, XyzFacade etc. Scanning for known patterns should identify the element names and roles (see the sketch after this list).
  • Interfaces and class hierarchies – If you implement interfaces (or extend base classes) then the interfaces used may show the abstraction level and type e.g. implementing Service, DAO, Connector, Repository etc.
  • Delegation or Library Dependency – Shared delegates used by a set of classes/functions may indicate their purpose e.g. components delegating to a database utility might indicate a DAO component, or using a CORBA utility might indicate a service. This is likely to be time consuming as you need to identify and scan for each delegate you are interested in.
  • Comments/Javadoc/JSDoc/NDoc/Doclets – Comments and javadoc-style API comments can provide a large amount of meta-information about a class or package. In fact, many UML modeling tools enrich code using custom comments and tags. This has the advantage of not affecting the compiled code or introducing library dependencies, but may not be consistently used.
  • Tests – Tests can provide a lot of meta-data about your system. Unit tests tend to be concentrated around important classes and often construct entire components to test. Simply extracting the names of classes that are directly tested will produce a useful list of components. The higher level system tests will reveal the important services.
  • Build Systems – Build systems such as ant, maven, NuBuild etc all have hooks into the code base for building and deployment. A simple extraction of the build targets will give you the deployment modules (which is a very helpful view for operations teams). This may give you the required information for a Containers view.
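
As a toy illustration of the "Class Names" idea above (my addition, not from the original post), here's a sketch that buckets class names by well-known suffixes; the role names and patterns are assumptions you'd tune for your own codebase:

// Guess an architectural role from a class name by matching common suffixes.
var ROLE_PATTERNS = [
  { role: 'Controller', pattern: /Controller$/ },
  { role: 'Service',    pattern: /Service$/ },
  { role: 'DAO',        pattern: /(Dao|Repository)$/ },
  { role: 'Facade',     pattern: /Facade$/ }
];

function guessRole(className) {
  for (var i = 0; i < ROLE_PATTERNS.length; i++) {
    if (ROLE_PATTERNS[i].pattern.test(className)) {
      return ROLE_PATTERNS[i].role;
    }
  }
  return 'Unknown';  // candidates for manual classification
}

// e.g. guessRole('CustomerDao') returns 'DAO'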

What do you think? Of course all of this is very dependent on your codebase, but if none of the above works then you have to question the quality and structure of the code! The data extracted may need filtering and manual correction, as it won't give you exactly what you want. You might consider creating Structurizr annotations from an initial scan and then maintaining them by hand (a sketch of what this might look like follows). One of my tasks for the next few weeks is to try this out on some legacy codebases I maintain.
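As a rough sketch of that idea, here is a hypothetical @ArchitecturalComponent annotation standing in for the ones provided by libraries such as Structurizr (whose real annotation names and attributes may differ). An initial scan could generate these from the evidence above, after which the team maintains them alongside the code:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation recording an element's role in the architecture.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ArchitecturalComponent {
    String description();
    String technology() default "";
}

// Generated by an initial scan (e.g. from the class name), then maintained by hand.
@ArchitecturalComponent(description = "Reads and writes customer records", technology = "JDBC")
class CustomerDao {
    // ...
}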

What other ways of identifying architectural elements can you think of?

Categories: Architecture

What is an Estimate?

Software Development Today - Vasco Duarte - Tue, 07/08/2014 - 04:00
If you don’t know what an estimate is, you can’t avoid using them. So here’s my attempt to define what an estimate is.
The "estimates" that I'm concerned about are those that can easily (by omission, incompetence or malice) be turned into commitments. I believe Software Development is better seen as a discovery process (even simple software projects). In this context, commitments remove options and force the teams/companies to follow a plan instead of responding to change.

Here's my definition: "Estimates, in the context of #NoEstimates, are all estimates that can be (on purpose, or by accident) turned into a commitment regarding project work that is not being worked on at the moment when the estimate is made."
The principles behind this definition of an estimate

In this definition I have the following principles in mind:
  • Delay commitment, create options: When we commit to a particular solution up front we forgo other possible solutions and, as a consequence, make the plan harder to change. Each solution comes with explicit and implicit assumptions about what we will tackle in the future, therefore I prefer to commit only to what is needed in order to validate or deliver value to the customer now. This way, I keep my options open regarding the future.
  • Responding to change over following a plan: Following a plan is easy and comforting, especially when plans are detailed and very clear: good plans. That’s why we create plans in the first place! But being easy does not make it right. Sooner or later we are surprised by events we could not predict, events that are not compatible with the plan we created upfront. Estimation up front makes it harder for us to change the plan because, as we define the plan in detail, we commit ourselves to following it, mentally and emotionally.
  • Collaboration over contract negotiation: Perhaps one of the most important Agile principles. Even when you invest time in creating a “perfect” contract there will be situations you cannot foresee. What do you do then? Hopefully by then you’ve established a relationship of trust with the other party. In that case, a simple conversation to review options and choose the best path will be enough. Estimation locks us down and tends to put people on the defensive when things don’t happen as planned. Leaving the estimate open and relying on incremental development with a constant focus on validating the value delivered will help both parties come to an agreement when things don’t go as planned, focusing on collaboration instead of justifying why an estimated release date was missed.
Here are some examples that fit the definition of Estimates that I outlined above:
  • An estimate of time/duration for work that is several days, weeks or months in the future.
  • An estimate of value that is not part of an experiment (the longer the time-frame the more of a problem it is).
  • A long term estimate of time and/or value that we can only validate after that long term is over.

How do you define Estimates in your work? Are you still doing estimates that fit the definition above? What is making it hard to stop doing such estimates? Share below in the comments what you think of this definition and how you would improve it.

This definition of an estimate was sent to the #NoEstimates mailing list a few weeks ago. If you want to receive exclusive content about #NoEstimates, just sign up below. You will receive a regular email on the topics of #NoEstimates and Agile Software Development.

Subscribe to our mailing list: Email Address (required), First Name, Last Name

Picture credit: Pascal @ Flickr

Project Charters

This team is eager to begin work on its charter.

Project charters play an extremely important role in the initiation of projects. The Project Management Institute (PMI) has suggested that no project should begin without a charter. A project charter creates a foundation to guide project activities. The primary roles of the charter are to:

  1. Authorize the project to move forward. In many methodologies the charter acts as the authorizing document. The charter will include items such as the business need, project scope, budget and the signatures of sponsors. In some organizations the sponsor or the sponsor’s organization will author the document and use it as a form of transmittal or work ticket. Note: While the literature is rich with processes in which the charter is written outside of the development team, I have never seen this process in action. What is more typical is a charter written by IT and signed off on by the stakeholders, or some sort of electronic budget authorization. Regardless of methodology, in a large organization an authorization to spend money is important.
  2. Act as a communication vehicle. The charter is often used to capture the initial thoughts of the sponsors, product owner and team about the goals of the project and the rules they will use to govern themselves (or be governed by, in classic command-and-control methodologies).
  3. Provide structure for the team. Who is on the project team is not always as evident as it should be. The charter should identify the team members (and some demographics, like time zone, if the team is distributed) along with the roles they are playing, if needed. In many IT organizations, team members play specified specialist roles. While specialization is mainly thought of as a feature of organizations using classic development and project management techniques, it can also be seen, albeit to a lesser extent, in organizations using Agile. Depending on the method used, the product owner and Scrum Master should be identified. Other team attributes such as team norms and expectations should be identified and agreed upon.
  4. Identify needed resources. Remember that the resources needed, like business needs and requirements, continually evolve; therefore, over time the charter must evolve or be replaced. For example, a charter I recently reviewed for a new Agile project and team identified the need for a team work room and licenses for a specific automated testing tool.

The macro goal for the charter is to get the project going and headed in the right direction. The needs and requirements of most projects of moderate to large size/duration are relatively dynamic. The issue is that while charters are supposed to change and evolve, once created they are prone to become shelfware.

A charter can contain additional information such as risks, constraints, project context and deadlines. The range of data that can be housed in a charter is limitless. I have seen large projects spend months writing and wordsmithing charters that were hundreds of pages long. The process of thinking through all of the charter topics might be valuable; however, the document is rarely used after the sponsors and other stakeholders sign off on it. Whether Agile or classic plan-based, the charter will rarely see the light of day (or an update cycle) unless the topics covered are important to the team.


Categories: Process Management