
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Traceability: Interpreting the Model

Tallying Up the Answers:
After assessing the three components (customer involvement, criticality and complexity), count the number of “yes” and “no” answers for each model axis. Plotting the results is merely a matter of indicating the number of yes and no answers on each axis. For example, if an appraisal yields:

Customer Involvement: 8 Yes, 1 No

Criticality: 7 Yes, 2 No

Complexity: 5 Yes, 4 No

The responses could be shown graphically as:

[Figure: the yes/no counts plotted on the three model axes]
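
A minimal sketch of how such a plot could be produced, using ggplot2 as in the R posts later in this digest; the data frame and column names are illustrative, not part of the original article:

library(ggplot2)

# Appraisal results from the example above: "Yes" answers out of nine
# questions per model axis (the "No" count is nine minus this value).
appraisal <- data.frame(
  axis = c("Customer Involvement", "Criticality", "Complexity"),
  yes  = c(8, 7, 5)
)

ggplot(appraisal, aes(x = axis, y = yes)) +
  geom_bar(stat = "identity") +
  ylim(0, 9) +
  xlab("Model axis") +
  ylab("Number of 'Yes' answers (out of 9)")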

The Traceability model is premised on the idea that as criticality and complexity increase, the need for communication intensifies. Communication becomes more difficult as customer involvement shifts from intimate to arm’s length. Each component of the model influences the others to some extent. In circumstances where customer involvement is high, different planning and control tools must be utilized than when involvement is lower. The relationships between the axes will suggest different implementations of traceability. In a perfect world, the model would be implemented as a continuum with an infinite number of nuanced implementations of traceability. In real life, continuums are difficult to implement. Therefore, for ease of use, I suggest an implementation of the model with three basic levels of traceability (the Three Bears approach): Papa Bear, or formal/detailed tracking; Mama Bear, or formal with function-level tracking; and Baby Bear, or informal (but disciplined)/anecdote-based tracking. The three bears analogy is not meant to be pejorative; heavy, medium and light would work as well.

Interpreting the axes:
Assemble the axes you have plotted with the zero intercept at the center (see example below).

[Figure: the three axes assembled with the zero intercept at the center]

As noted earlier, I suggest three levels of traceability, ranging from agile to formal. In general, if the accumulated “No” answers exceed three on any axis, an agile approach is not appropriate. An accumulated total of 7, 8 or 9 strongly suggests that as formal an approach as possible should be used. Note that certain “No” answers are more equal than others. For example, in the Customer Involvement category, if ‘Agile Methods Used’ is no, it probably makes sense to raise the level of formality immediately. A future refinement of the model will create a hierarchy of questions and vary the impact of the responses based on that hierarchy. All components of the model are notional rather than carved in stone – implementing the model in specific environments will require tailoring. Apply the model through the filter of your experience. Organizational culture and experience will be most important on the cusps (the 3-4 and 6-7 yes-answer ranges).

Informal – Anecdote Based Tracing

Component Scores: No axis with more than three “No” answers.

Traceability will be accomplished through a combination of stories, test cases and, later, test results, coupled with the tight interplay between customer and developers found in agile methods. This ensures that what was planned (and nothing unplanned) is implemented, and that what was implemented is what was planned.

Moderately Formal – Function Based Tracking

Component Scores: No axis with more than six “No” answers.

The moderately formal implementation of traceability links requirements to functions (each organization needs to define the precise unit – tracing use cases can be very effective when a detailed level of control is not indicated) and to test cases (development and user acceptance). This type of linkage is typically accomplished using matrices and numbering, requirements tools, or some combination of the two.

Formal – Detailed Traceability

Component Scores: One or more axes with more than six “No” answers.

The most formal version of traceability links individual requirements (detailed, granular requirements) through design components, code, test cases and results. This level of traceability provides the highest level of control and oversight. This type of traceability can be accomplished using paper and pencil for small projects; however, for projects of any size, tools are required.
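
To make the thresholds above concrete, here is a minimal sketch of the decision rule in R, assuming a hypothetical helper (not part of the original model) that takes the number of “No” answers per axis:

# Hypothetical helper: map per-axis "No" counts to a suggested traceability level.
# no_counts is a named vector, e.g. c(involvement = 1, criticality = 2, complexity = 4).
suggest_traceability <- function(no_counts) {
  if (all(no_counts <= 3)) {
    "Informal - anecdote based tracing"
  } else if (all(no_counts <= 6)) {
    "Moderately formal - function based tracking"
  } else {
    "Formal - detailed traceability"
  }
}

suggest_traceability(c(involvement = 1, criticality = 2, complexity = 4))
# [1] "Moderately formal - function based tracking"

The real model applies judgment on top of the raw counts (some “No” answers weigh more than others), so treat this purely as a way to see how the cut-offs interact.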

Caveats – As with all models, the proposed traceability model is a simplification of the real world; therefore, customization is expected. Three distinct levels of traceability may be too many for some organizations or too few for others. One implemented version of the model swings between an agile approach (primarily for web-based projects where Scrum is being practiced) and the moderately formal model for other types of projects. For the example organization, adding additional layers has been difficult without support to ensure high degrees of consistency. We found that leveraging project-level tailoring for specific nuances has been the most practical means for dealing with “one off” issues.

In practice, teams have reported major benefits to using the model.

The first benefit is that using the model ensures an honest discussion of risks, complexity and customer involvement early in the life of the project. The model works best when all project team members (within reason) participate in the discussion and assessment of the model. Facilitation is sometimes required to ensure that discussion paralysis does not occur. One organization I work with has used this mechanism as a team building exercise.

The second benefit is that the model allows project managers, coaches and team members to define the expectations for the processes to be used for traceability in a transparent, collaborative manner. The framework presented allows all parties to understand what is driving where on the formality continuum their implementation of traceability will fall. It should be noted that once the scalability topic is broached for traceability, it is difficult to contain the discussion to just this topic. I applaud those who embrace the discussion and would suggest that all project processes need to be scalable, based on a disciplined and participative process that can be applied early in a project.

Examples:

Extreme examples are easy to assess without leveraging a model, a questionnaire or a graph. An extreme example would be a critical system where defects could be life threatening – such as a project to build an air traffic control system. The attributes of this type of project would include extremely high levels of complexity, a large system, many groups of customers (each with differing needs), and probably a hard deadline with large penalties for missing the date or any misses on anticipated functionality. For such a project, the model recommends detailed traceability as a component of the path to success. A similar example could be constructed for a model agile project, in which intimate customer involvement can substitute for detailed traceability.

A more illustrative example would be for projects that inhabit gray areas. The following example employs the model to suggest a traceability approach.

An organization (The Org) engaged a firm (WEB.CO), after evaluating a series of competitive bids, to build a new ecommerce web site. The RFP required the use of several Web 2.0 community and ecommerce functions. The customer that engaged WEB.CO felt they had defined the high-level requirements in the RFP. WEB.CO uses some agile techniques on all projects in which they are engaged. The techniques include defining user stories, two-week sprints, a coach to support the team, co-located teams and daily builds. The RFP and negotiations indicated that the customer would not be on-site and at times would have constraints on their ability to participate in the project. These early pronouncements on involvement were deemed to be non-negotiable. The contract included performance penalties that WEB.CO wished to avoid. The site was considered critical to the customer’s business, and delivery of the site was timed to coincide with the initial introduction of the business. Let’s consider how we would apply the questionnaire in this case.

Question   Involvement       Complexity   Criticality
1          Yes               Yes          No
2          No                Yes          No
3          No                Yes          Unknown (need to know)
4          Yes               Yes          Yes
5          Yes (inferred)    Yes          Yes
6          Yes               Yes          No
7          Yes               Yes          No
8          Yes               Yes          No
9          Yes               Yes          Yes

 

Graphically the results look like:

[Figure: radar plot of the WEB.CO appraisal results on the three model axes]

Running the numbers on the individual radar plot axes highlights the high degree of perceived criticality for this project. The model recommends the moderate level of traceability documentation. As a final note, if this were a project I was involved in, I would keep an eye on the weakness in the involvement category; knowing that there are weaknesses in customer involvement will make sure you do not rationalize away the criticality score.
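
Using the hypothetical suggest_traceability() helper sketched earlier, and treating the single “Unknown” criticality answer conservatively as a “No”, the counts from the table work out as follows:

# "No" counts per axis from the WEB.CO example (Unknown counted as "No")
suggest_traceability(c(involvement = 2, complexity = 0, criticality = 6))
# [1] "Moderately formal - function based tracking"

which matches the moderate recommendation above.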


Categories: Process Management

Let's Build Maker Cities for Maker People Around New Resources Like Bandwidth, Compute, and Atomically-Precise Manufacturing

TL;DR: There’s a lot of unused space in North America. Yet cities like San Francisco are becoming ever more expensive because of a bubble created by high tech jobs that seemingly can be done anywhere. Historically, cities have been built around resources that provide some service to humans. The age of infrastructure rising around physical resources is declining while the age of digital resource exploitation is rising. Cities are still valuable because they are amazing idea and problem solving machines. How about we create thousands of new Maker Cities in the vast emptiness that is North America and build them around digital resources like bandwidth, compute power, Atomically-Precise Manufacturing (APM), and all things future and bright?

Observation Number One: There’s lots of empty space out there.
Categories: Architecture

R: ggplot – Cumulative frequency graphs

Mark Needham - Sun, 08/31/2014 - 23:10

In my continued playing around with ggplot I wanted to create a chart showing the cumulative growth of the number of members of the Neo4j London meetup group.

My initial data frame looked like this:

> head(meetupMembers)
  joinTimestamp            joinDate  monthYear quarterYear       week dayMonthYear
1  1.376572e+12 2013-08-15 13:13:40 2013-08-01  2013-07-01 2013-08-15   2013-08-15
2  1.379491e+12 2013-09-18 07:55:11 2013-09-01  2013-07-01 2013-09-12   2013-09-18
3  1.349454e+12 2012-10-05 16:28:04 2012-10-01  2012-10-01 2012-10-04   2012-10-05
4  1.383127e+12 2013-10-30 09:59:03 2013-10-01  2013-10-01 2013-10-24   2013-10-30
5  1.372239e+12 2013-06-26 09:27:40 2013-06-01  2013-04-01 2013-06-20   2013-06-26
6  1.330295e+12 2012-02-26 22:27:00 2012-02-01  2012-01-01 2012-02-23   2012-02-26

The first step was to transform the data so that I had a data frame where a row represented a day where a member joined the group. There would then be a count of how many members joined on that date.

We can do this with dplyr like so:

library(dplyr)
> head(meetupMembers %.% group_by(dayMonthYear) %.% summarise(n = n()))
Source: local data frame [6 x 2]
 
  dayMonthYear n
1   2011-06-05 7
2   2011-06-07 1
3   2011-06-10 1
4   2011-06-12 1
5   2011-06-13 1
6   2011-06-15 1

To turn that into a chart we can plug it into ggplot and use the cumsum function to generate a line showing the cumulative total:

ggplot(data = meetupMembers %.% group_by(dayMonthYear) %.% summarise(n = n()), 
       aes(x = dayMonthYear, y = n)) + 
  ylab("Number of members") +
  xlab("Date") +
  geom_line(aes(y = cumsum(n)))
[Chart: cumulative number of members over time, grouped by day]

Alternatively we could bring the call to cumsum forward and generate a data frame which has the cumulative total:

> head(meetupMembers %.% group_by(dayMonthYear) %.% summarise(n = n()) %.% mutate(n = cumsum(n)))
Source: local data frame [6 x 2]
 
  dayMonthYear  n
1   2011-06-05  7
2   2011-06-07  8
3   2011-06-10  9
4   2011-06-12 10
5   2011-06-13 11
6   2011-06-15 12

And if we plug that into ggplot we’ll get the same curve as before:

ggplot(data = meetupMembers %.% group_by(dayMonthYear) %.% summarise(n = n()) %.% mutate(n = cumsum(n)), 
       aes(x = dayMonthYear, y = n)) + 
  ylab("Number of members") +
  xlab("Date") +
  geom_line()

If we want the curve to be a bit smoother we can group it by quarter rather than by day:

> head(meetupMembers %.% group_by(quarterYear) %.% summarise(n = n()) %.% mutate(n = cumsum(n)))
Source: local data frame [6 x 2]
 
  quarterYear   n
1  2011-04-01  13
2  2011-07-01  18
3  2011-10-01  21
4  2012-01-01  43
5  2012-04-01  60
6  2012-07-01 122

Now let’s plug that into ggplot:

ggplot(data = meetupMembers %.% group_by(quarterYear) %.% summarise(n = n()) %.% mutate(n = cumsum(n)), 
       aes(x = quarterYear, y = n)) + 
    ylab("Number of members") +
    xlab("Date") +
    geom_line()
[Chart: cumulative number of members over time, grouped by quarter]
Categories: Programming

SPaMCAST 305 – Estimation

http://www.spamcast.net

 

Click this link to listen to SPaMCAST 305

Software Process and Measurement Cast number 305 features our essay on estimation. Estimation is a hotbed of controversy. We begin by synchronizing on what we think the word means. Then, once we have a common vocabulary, we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

The essay begins:

Software project estimation is a conflation of three related but different concepts. The three concepts are budgeting, estimation and planning. These are typical in a normal commercial organization; however, they might be called different things depending on your business model. For example, organizations that sell software services typically develop sales bids instead of budgets. Once the budget is developed, the evolution from budget to estimate and then to plan follows a unique path as the project team learns about the project.

Next

Software Process and Measurement Cast number 306 features our interview with Luis Gonçalves.  We discussed getting rid of performance appraisals.  Luis makes the case that performance appraisals hurt people and companies.

Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives – September 18, 2014, 11:30 EDT. Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.

Agile Risk Management – It Is Still Important! – October 24, 2014, 11:30 EDT. Has the adoption of Agile techniques magically erased risk from software projects? Or have we just changed how we recognize and manage risk? Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and me and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Traceability: Criticality

The final axis in the model is ‘criticality’. Criticality is defined as the quality, state or degree of being of the highest importance. The problem with criticality is that the concept is far easier to recognize than to define precisely. This attribute of projects fits the old adage, ‘I will know it when I see it’. Each person on a project will be able to easily identify what they think is critical. The difficulty is that each person has their own perception of what is most important, and that perception will change over time. This makes it imperative to define a set of questions or status indicators to appraise criticality consistently. The appraisal process uses “group think” to find the central tendency in teams and consolidate the responses. Using a consensus model to develop the appraisal will also help ensure that a broad perspective is leveraged. It is also important to remember that any appraisal is specific to a point in time and that the responses to the assessment can and will change over time. I have found that the following factors can be leveraged to assess importance and criticality:

Perceived moderate level of business impact (positive or negative) – Y/N
Project does not show significant time sensitivity – Y/N
A fall back position exists if the project fails – Y/N
Low possibility of impacting important customers – Y/N
Project is not linked to other projects – Y/N
Project not required to pay the bills – Y/N
Project is not labeled “Mission Critical” – Y/N
Normal perceived value to the stakeholders – Y/N
Neutral impact on the organizational architecture – Y/N

Since each project has its own set of hot button issues, other major contributors can be substituted. However, be careful to understand the impact of the questions and the inter-relationships between the categories. The model does recognize that there will always be some overlap between responses.

Perceived Moderate Business Impact: Projects that are perceived to have a significant business impact are treated as more important than those that are not. There are two aspects to the perception of importance. The first is whether the project team believes that their actions will have an impact on the outcome. The second is whether the organization’s management acts as if they believe that the project will have a significant business impact (acting as if there will be an impact is more important than whether it is “true” – at least in the short term). The perception of whether the impact will be positive or negative is less important than the perception of the degree of the impact (a perception of a large impact will cause a large reaction). Assessment Tip: If both the project team and the organization’s management perceive that the project will have only a moderate business impact, appraise this attribute as a “Y”. If management does not perceive the significance, does not act as if there is significance, or acts as if nothing out of the ordinary is occurring, I would strongly suggest rating this attribute as a “Y”.

Lack of Significant Time Sensitivity for Delivery: Time sensitivity is the relationship between value and when the project is delivered. An example is the implied time sensitivity when trying to be first to market with a new product; the perception of time sensitivity creates a sense of urgency, which is central to criticality. While time is one of the legs of the project management iron triangle (identified in the management constraints above), this attribute measures the relationship between business value and delivery date. Assessment Tip: If the team perceives a higher than normal time sensitivity to delivery, appraise this attribute as ‘N’.

Fall Back Available: All-or-nothing projects, or projects without fallbacks, impart a sense of criticality that can easily be recognized (usually by the large bottles of TUMS on project managers’ desks). These types of projects occur, but are rare. Assessment Tip: A simple test for whether the project is ‘all or nothing’ is to ask whether the team understands that if the project is implemented and works, everybody is good, and if it does not work, everyone gets to look for a job; if so, appraise this as an ‘N’. Note: This assumes that the project is planned as an all-or-nothing scenario (must be done) and is not just an artifact of poor planning, albeit the impact might be the same.

Low Possibility of Impacting Important Customers: Any software has a possibility of impacting an important customer or block of customers. However, determining the level of that possibility, and the significance of the impact if one occurs, can be a bit of an art form (or at least risk analysis). Impact is defined, for this attribute, as an effect that, if noticed, would be outside of the customers’ expectations. Assessment Tip: If the project is targeted at delivering functionality for an important customer, assess this as ‘N’; if it is not directly targeted but there is a high probability of an impact regardless of to whom the change is targeted, also assess this attribute as ‘N’.

Projects Not Interlinked: Projects whose outcomes are linked to other projects require closer communication. The situation is analogous to building a bridge from both sides of the river and hoping the two halves meet in the middle. Tools – such as traceability – that formally identify, communicate and link the requirements of one project to another substantially increase the chances of the bridge meeting in the middle. Note: that is not to say that formally documented traceability is the only method that will deliver results; the model’s strength is that it is driven by the confluence of multiple attributes to derive recommendations. Assessment Tip: If the outcome of a project is required for another project (or vice versa), assess this attribute as an ‘N’. Note: “required” means that one project cannot proceed without the functionality delivered by the other project. It is easy to mistake the interlinking of people for interlinked functionality; I would suggest that the former is a different management problem than the one we are trying to solve.

Not Directly Linked to Paying the Bills: Some projects are a paler reflection of a “bet the farm” scenario. While there are very few true “bet the farm” projects, there are many in the tier just below. These ‘second tier’ projects would cause substantial damage to the business and/or to your CIO’s career if they fail, as they are tied to delivering business value (i.e., paying the bills). Assessment Tip: Projects that meet the “bet the farm” test, or at least the “bet the pasture” test (major impact on revenue or the CIO’s career), can be smelled a mile away; these should be assessed as an “N”. It should be noted that if a project has been raised to this level of urgency artificially, it should be appraised as “Y”. Another tip: projects with the words SAP or PeopleSoft should automatically be assessed as an “N”.

Indirectly Important to Mission: The title “important to mission” represents a long view of the impact of the functionality being delivered by a project. An easy gauge of importance is to determine whether the project can be directly linked to the current or planned core products of the business. Understanding linkages is critical to determining whether a project is important to the mission of the organization. Remember, projects can be important to paying the bills but not core to the mission of the business. For example, a major component of a clothing manufacturer that I worked for after I left university was its transportation group. Projects for this division were important for paying the bills, but at the same time they were not directly related to the mission of the business, which was the design and manufacture of women’s clothing. As an outsider, one quick test for importance to mission is to simply ask, “What is the mission of the organization and how does the project support it?” Not knowing the answer is either a sign to ask a lot more questions or a sign that the project is not important to mission. Assessment Tip: If the project is directly linked to the delivery of a core (current or planned) product, assess this attribute as an ‘N’. Appraisal of this attribute can engender passionate debate; most project teams want to believe that the project they are involved in is important to mission. Perception is incredibly important: if there is a deeply held passion that the project is directly important to the mission of the organization, assess it as an ‘N’.

Moderate Perceived Value to the Stakeholders: Any perception of value is difficult at more than an individual level. Where stakeholders are concerned, involvement clouds rational assessment. Simply put, stakeholders perceive most of the projects they are involved in as having more than a moderate level of value. Somewhere in their mind, stakeholders must be asking, why would I be involved with anything of moderate value? The issue is that most projects will deliver, at best, average value. Assessment Tip: Assuming that you have access to the projected ROI (quantitative and non-quantitative) for the project you are involved in, you have the basis for a decision. As a rule of thumb, for projects projected to deliver an ROI that is 10% or more of the organization’s or department’s value, appraise this as an ‘N’. Using the derived ROI assumes that the evaluations are worth more than the paper they are printed on; if you are not tracking the delivery of benefits after the project, any published ROI is suspect.

Neutral to Organizational Architecture: This attribute assesses the degree of impact the functionality/infrastructure to be delivered will have on the organization’s architecture. This attribute has a degree of covariance with the ‘architectural impact’ attribute in the previous model component. While related, they are not exactly the same. As an example, the delivered output of a project can be critical (important and urgent) but cause little change (low impact). An explicit example is the installation of a service pack within Microsoft Office: the service pack is typically critical (usually for security reasons), but it does not change the architecture of the desktop. Assessment Tip: If delaying the delivery of the project would cause raised voices and gnashing of teeth, appraise this as an ‘N’ and argue impact versus criticality over a beer.

An overall note on the concept of criticality: you will need to account for ‘false urgency’. More than a few organizations oversell the criticality of a project. The process of overselling is sometimes coupled with yelling, threats and table-pounding in order to generate a major visual effect. False urgency can have short-term benefits, generating concerted action; however, as soon as the team figures out the game, a whipsaw effect (reduced productivity and attention) typically occurs. Gauge whether the words being used to describe how critical a project is match the appraisal you just created. Mismatches will sooner or later require action to synchronize the two points of view.

The concept of criticality requires a deft touch to assess. It is rarely as cut and dried as making checkmarks on a form. A major component of the assessment has to be the evaluation of what the project team believes. Teams that believe a project is critical will act as if the stress of criticality is real, regardless of other perceptions of reality. Alternately, if a team believes a project is not critical, they will act on that belief, regardless of the truth. Make sure you know how all project stakeholders perceive criticality, or be ready for surprises.


Categories: Process Management

Why do I use Leanpub?

Coding the Architecture - Simon Brown - Sat, 08/30/2014 - 11:35

There's been some interesting discussion over the past few days about Leanpub, both on Twitter and blogs. Jurgen Appelo posted Why I Don't Use Leanpub and Peter Armstrong responded. I think the biggest selling points of Leanpub as a publishing platform from an author's perspective may have been lost in the discussion. So, here's my take on why I use Leanpub for Software Architecture for Developers.

Some history

I pitched my book idea to a number of traditional publishing companies in 2008 and none of them were very interested. "Nice idea, but it won't sell" was the basic summary. A few years later I decided to self-publish my book instead and I was about to head down the route of creating PDF and EPUB versions using a combination of Pages and iBooks Author on the Mac. Why? Because I love books like Garr Reynolds' Presentation Zen and I wanted to do something similar. At first I considered simply giving the book away for free on my website but, after Googling around for self-publishing options, I stumbled across Leanpub. Despite the Leanpub bookstore being fairly sparse at the start of 2012, the platform piqued my interest and the rest is history.

The headline: book creation, publishing, sales and distribution as a service

I use Leanpub because it allows me to focus on writing content. Period. The platform takes care of creating and selling e-books in a number of different formats. I can write some Markdown, sync the files via Dropbox and publish a new version of my book within minutes.

Typesetting and layout

I frequently get asked for advice about whether Leanpub is a good platform for somebody to write a book. The number one question to ask is whether you have specific typesetting/layout needs. If you want to produce a "Presentation Zen" style book or if having control of your layout is important to you, then Leanpub isn't for you. If, however, you want to write a traditional book that mostly consists of words, then Leanpub is definitely worth taking a look at.

Leanpub uses a slightly customised version of Markdown, which is a super-simple language for writing content. Here's an example of a Markdown file from my book, and you can see the result in the online sample of my book. Leanpub does allow you to tweak things like PDF page size, font size, page breaking, section numbering, etc but you're not going to get pixel perfect typesetting. I think that Leanpub actually does a pretty fantastic job of creating good looking PDF, EPUB and MOBI format ebooks based upon the very minimal Markdown. Especially when you consider that all ebook reader software is different and the readers themselves can mess with the fonts/font sizes anyway.

Book formatting on Leanpub

It's like building my own server at Rackspace versus using a "Platform as a Service" such as Cloud Foundry. You need to make a decision about the trade-off between control and simplicity/convenience. Since authoring isn't my full-time job and I have lots of other stuff to be getting on with, I'm more than happy to supply the content and let Leanpub take care of everything else for me.

Toolchain

My toolchain as a Leanpub author is incredibly simple: Dropbox and Mou. From a structural perspective, I have one Markdown file per essay and that's basically it. Leanpub does now provide support for using GitHub to store your content and I can see the potential for a simple Leanpub-aware authoring tool, but it's not rocket science. And to prove the point, a number of non-technical people here in Jersey have books on Leanpub too (e.g. Thrive with The Hive and a number of books by Richard Rolfe).

Iterative and incremental delivery

Before starting, I'd already decided that I'd like to write the book as a collection of short essays and this was cemented by the fact that Leanpub allows me to publish an in-progress ebook. I took an iterative and incremental approach to publishing the book. Rather than starting with essay number one and progressing in order, I tried to initially create a minimum viable book that covered the basics. I then fleshed out the content with additional essays once this skeleton was in place, revisiting and iterating upon earlier essays as necessary. I signed up for Leanpub in January 2012 and clicked the "Publish" button four weeks later. That first version of my book was only about ten pages in length but I started selling copies immediately.

Variable pricing and coupons

Another thing that I love about Leanpub is that it gives you full control over how you price your book. The whole pricing thing is a balancing act between readership and royalties, but I like that I'm in control of this. My book started out at $4.99 and, as content was added, that price increased. The book currently has a minimum price of $20 and a recommended price of $30. I can even create coupons for reduced price or free copies too. There's some human psychology that I don't understand here, but not everybody pays the minimum price. Far from it, and I've had a good number of people pay more than the recommended price too. Leanpub provides all of the raw data, so you can analyse it as needed.

An incubator for books

As I've already mentioned, I pitched my book idea to a bunch of regular publishing companies and they weren't interested. Fast-forward a few years and my book is currently the "bestselling" book on Leanpub this week, fifth by lifetime earnings and twelfth in terms of number of copies sold. I've used quotes around "bestselling" because Jurgen did. ;-)

Leanpub bestsellers

In his blog post, Peter Armstrong emphasises that Leanpub is a platform for publishing in-progress ebooks, especially because you can publish using an iterative and incremental approach. For this reason, I think that Leanpub is a fantastic way for authors to prove an idea and get some concrete feedback in terms of sales. Put simply, Leanpub is a fantastic incubator for books. I know of a number of books that were started on Leanpub and have since been taken on by traditional publishing companies. I've had a number of offers too, including some for commercial translations. Sure, there are other ways to publish in-progress ebooks, but Leanpub makes this super-easy and the barrier to entry is incredibly low.

The future for my book?

What does the future hold for my book then? I'm not sure that electronic products are ever really "finished" and, although I consider my book to be "version 1", I do have some additional content that is being lined up. And when I do this, thanks to the Leanpub platform, all of my existing readers will get the updates for free.

I've so far turned down the offers that I've had from publishing companies, primarily because they can't compete in terms of royalties and I'm unconvinced that they will be able to significantly boost readership numbers. Leanpub is happy for authors to sell their books through other channels (e.g. Amazon) but, again, I'm unconvinced that simply putting the book onto Amazon will yield an increased readership. I do know of books on the Kindle store that haven't sold a single copy, so I take "Amazon is bigger and therefore better" arguments with a pinch of salt.

What I do know is that I'm extremely happy with the return on my investment. I'm not going to tell you how much I've earned, but a naive calculation of $17.50 (my royalty on a $20 sale) x 4,600 (the total number of readers) is a little high but gets you into the right ballpark. In summary, Leanpub allows me to focus on content, takes care of pretty much everything else and gives me an amazing author royalty as a result. This is why I use Leanpub.

Categories: Architecture

Traceability: Complexity


The second component, complexity, is a measure of the number of properties of a project that are judged to be outside of the norm. The applicable norm is relative to the person or group making the judgment. Assessing the team’s understanding of complexity is important because when a person or group perceives something to be complex, they act differently. The concept of complexity can be decomposed into many individual components; for this model, the technical components of complexity are appraised in this category. The people- or team-driven attributes of complexity are dealt with in the customer involvement section (above). Higher levels of complexity are an important reason for pursuing traceability, because complexity decreases the ability of a person to hold a consistent understanding of the problem and solution in their mind. There are just too many moving parts. The inability to develop and hold an understanding in the forefront of your mind increases the need to document understandings and issues to improve consistency.

The model assesses technical complexity by evaluating the following factors:

  1. The project is the size you are used to doing
  2. There is a single manager or right-sized management
  3. The technology is well known to the team
  4. The business problem(s) is well understood
  5. The degree of technical difficulty is normal or less
  6. The requirements are stable (ish)
  7. The project management constraints are minor
  8. The architectural impact is minimal
  9. The IT staff perceives the impact to be minimal

As with customer involvement, the assessment process for complexity uses a simple yes or no scale for rating each of the factors. Each factor will require some degree of discussion and introspection to arrive at an answer. An overall assessment tip: a maybe is equivalent to a ‘no’. Remember that there is no prize for under- or over-estimating the impact of these variables; value is only gained through an honest self-evaluation.

Project is Normal Size: The size of the project is a direct contributor to complexity; all things being equal, a larger than usual project will require more coordination, communication and interaction than a smaller project. A common error when considering the size of a project is to use cost as a proxy. Size is not the same thing as cost. I suggest estimating the size of the project using standard functional size metrics. Assessment Tip: Organizations with a baseline will be able to statistically determine the point where size causes a shift in productivity. That shift is a signpost for where complexity begins to weigh on the processes being used. In organizations without a baseline, develop and use a rule of thumb; consider ‘bigger than anything you have done before’ or the corollary ‘the same size as your biggest project’ as rules of thumb. These equate to an ‘N’ rating.

Single Manager/Right-Sized Management: There is an old saying that ‘too many cooks in the kitchen spoil the broth’. A cadre of managers supporting a single project can fit the ‘too many cooks’ bill. While it is equally true that a large project will require more than one manager or leader, it is important to understand the implications that the number of managers and leaders will have on a project. Having the right number of managers and leaders can smooth out issues that are discovered, and assemble and provide status without impacting the team dynamic, while providing feedback to team members. Having the wrong number of managers will gum up the works of a project (measure the ratio of meeting time to a standard eight-hour day; anything over 25% is a sign to closely check the level of management communication overhead). The additional layers of communication and coordination are the downside of a project with multiple managers (it is easy for a single manager to communicate with himself or herself). One of the most important lessons to be gleaned from the agile movement is that communication is critical (which leads to the conclusion that communication difficulties may trump benefits) and that any process that gets in the way of communication should be carefully evaluated before it is implemented. A larger communication web will need to be traversed with every manager added to the structure, which will require more formal techniques to ensure consistent and effective communication. Assessment Tip: Projects with more than five managers and leaders, or a worker-to-manager ratio lower than eight workers to one manager/leader (with more than one manager), should assess this attribute as an ‘N’.

Well Known Technology: The introduction of a technology that is unfamiliar to the project team will require more coordination and interaction. While the introduction of one or two hired guns into a group with experience is a good step to ameliorate the impact, it may not be sufficient (and may complicate communication in its own right). I would suggest that, until all relevant team members surmount the learning curve, new technologies will require more formal communication patterns. Assessment Tip: If less than 50% of the project team has worked with the technology on previous projects, assess this attribute as an ‘N’.

Well Understood Business Problem: A project team that understands the business problem being solved by the project will have a higher chance of solving it. The amount of organizational knowledge the team has will dictate the level of analysis and communication required to find a solution. Assessment Tip: If the business problem is not well understood or has not been dealt with in the past, this attribute should be assessed as an ‘N’.

Low Technical Difficulty: The term ‘technical difficulty’ has many definitions. The plethora of definitions means that measuring technical difficulty requires reflecting on many project attributes. The attributes that define technical difficulty can initially be seen when there are difficulties in describing the solutions and alternatives for solving the problem. Technical difficulty can include algorithms, hardware, software, data, logic or any combination of components. Assessment Tip: When assessing the level of technical difficulty, if it is difficult to frame the business problem in technical terms, assess this attribute as ‘N’.

Stable Requirements: Requirements typically evolve as a project progresses (and that is a good thing). Capers Jones indicates that requirements grow approximately 2% per calendar month across the life of a project (over a twelve-month project, that compounds to roughly 27% growth). Projects that are difficult to define, or where project personnel or processes allow requirements to be amended or changed in an ad hoc manner, should anticipate above-average scope creep or churn. Assessment Tip: If historical data indicates that the project team, customer and application combination tends to have scope creep or churn above the norm, assess this attribute as an ‘N’ unless there are procedural or methodological methods to control change. (Note: Control does not mean stopping change, but rather ensuring that it happens in an understandable manner.)

Minor Project Management Constraints: Project managers have three macro levers (cost, scope and time) available to steer a project.   When those levers are constrained or locked (by management, users or contract) any individual problem becomes more difficult to address.  Formal communication becomes more important as options are constrained.  Assessment Tip:  If more than one of the legs of the project management iron triangle is fixed, assess this attribute as an ‘N’.

Minimal Architectural Impact: Changes to the standard architecture of the application(s) or organization will increase complexity on an exponential scale.  This change of complexity will increase the amount of communication required to ensure a trouble free change. Assessment Tip:  If you anticipate modifications (small or wholesale) to the standard architectural footprint of the application or organization, assess this attribute as an ‘N’.

Minimal IT Staff Impact: There are many ways a project can impact an IT staff, ranging from process-related changes (how work is done) to outcome-related changes (employment or job duties). Negative impacts are most apt to require increased formal communication, and therefore the use of traceability methods that are more highly documented and granular. Negative process impacts are those driven by the processes used or organizational constraints (e.g. death marches, poorly implemented processes, galloping requirements and resource constraints). Outcome-related impacts are those driven by the solution delivered (e.g. outsourcing, downsizing, and new applications/solutions). Assessment Tip: Any perceived negative impact on the team, or on the organization that is closely associated with the team, should be viewed as not neutral (assess as an ‘N’), unless you are absolutely certain you can remediate the impact on the team doing the work. Reassess often to avoid surprises.


Categories: Process Management

Stuff The Internet Says On Scalability For August 29th, 2014

Hey, it's HighScalability time:


In your best Carl Sagan voice...Billions and Billions of Habitable Planets.
  • Quotable Quotes:
    • @Kurt_Vonnegut: Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance.
    • @neil_conway: "The paucity of innovation in calculating join predicate selectivities is truly astounding."
    • @KentBeck: power law walks into a bar. bartender says, "i've seen a hundred power laws. nobody orders anything." power law says, "1000 beers, please".
    • @CompSciFact: RT @jfcloutier: Prolog: thinking in proofs Erlang: thinking in processes UML: wishful thinking

  • For your acoustic listening pleasure let me present...The Orbiting Vibes playing Scaling Doesn't Matter. I don't quite understand how it relates to scaling, but my deep learning algorithm likes it. 

  • The Rise of the Algorithm. Another interesting podcast with James Allworth and Ben Thompson. Much pondering of how to finance content. Do you trust content with embedded affiliate links? Do you trust content written by writers judged on their friendliness to advertisers? Why trust at all is the bigger question. Facebook is the soft news advertisers love. Twitter is the hard news advertisers avoid. A traditional newspaper combined both. Humans are the new horses. < Capitalism doesn't care if people are employed anymore than it cared about horses being employed. Employment is simply a byproduct of inefficient processes. The Faith that the future will provide is deliciously ironic given the rigorous rationalism underlying most of the episodes.

  • Great reading list for Berkeley CS286: Implementation of Database Systems, Fall 2014. 

  • Is it just me or is it totally weird that all the spy systems use the same diagrams that any other project would use? It makes it seem so...normal. The Surveillance Engine: How the NSA Built Its Own Secret Google.

  • The Mathematics of Herding Sheep. Little border collie Annie embodies a very smart algorithm to herd sheep: when sheep become dispersed beyond a certain point, dogs put their effort into rounding them up, reintroducing predatory pressure into the herd, which responds according to selfish herd principles, bunching tightly into a more cohesive unit. < What's so disturbing is how well this algorithm works with people.

  • Inside Google's Secret Drone-Delivery Program. What I really want are pick-up drones, where I send my drone to pick stuff up. Or are pick-up and delivery cars a better bet? Though I can see swarms of drones delivering larger objects in parts that self-assemble.

  • Lambda Architecture at Indix: "break down the various stages in your data pipeline into the layers of the architecture and choose technologies and frameworks that satisfy the specific requirements of each layer."

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Why I Don’t Use Leanpub

NOOP.NL - Jurgen Appelo - Fri, 08/29/2014 - 15:21

It seems not a week goes by without someone asking me, “Why don’t you publish on Leanpub?” or “Have you considered writing on Leanpub?” or some other variation of the same question.

The post Why I Don’t Use Leanpub appeared first on NOOP.NL.

Categories: Project Management

Quote of the Day — Just a Little Process Check

Herding Cats - Glen Alleman - Fri, 08/29/2014 - 15:12

Everyone is entitled to his own opinion, but not his own facts. – Daniel Patrick Moynihan

When engaging in exchanges about complex topics like cost, schedule, and technical performance management, I always get a smile when someone says oh, that problem can be solved with this simple approach. Or I bet that organization has no motivation whatsoever to solve the problem.

Then the next quote is applicable...

For every complex problem there is an answer that is clear, simple, and wrong. – H. L. Mencken

Solving complex problems is hard, stating there are simple solutions without having worked on complex problems is easy.

Related articles: Complex Problems Require Better Solutions; The Three Elements of Project Work and Their Estimates; The Power of Misattributed and Misquoted Quotes; Is There Such a Thing As Making Decisions Without Knowing the Cost?
Categories: Project Management

The Web Search API is Retiring

Google Code Blog - Fri, 08/29/2014 - 13:00
Posted by Dan Ciruli, Product Manager


On November 1, 2010, we announced the deprecation of the Web Search API. As per our policy at the time, we supported the API for a three year period (and beyond), but as all things come to an end, so has its deprecation window.

We are now announcing the turndown of the Web Search API. You may wish to look at our Custom Search API (note: it has a free quota of 100 queries per day).
The service will cease operations on September 29th, 2014.
Categories: Programming

R: dplyr – group_by dynamic or programmatic field / variable (Error: index out of bounds)

Mark Needham - Fri, 08/29/2014 - 10:13

In my last blog post I showed how to group timestamp based data by week, month and quarter and by the end we had the following code samples using dplyr and zoo:

library(RNeo4j)
library(zoo)
 
timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
 
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinTimestamp"
meetupMembers = cypher(graph, query)
 
meetupMembers$joinDate <- timestampToDate(meetupMembers$joinTimestamp)
meetupMembers$monthYear <- as.Date(as.yearmon(meetupMembers$joinDate))
meetupMembers$quarterYear <- as.Date(as.yearqtr(meetupMembers$joinDate))
 
meetupMembers %.% group_by(week) %.% summarise(n = n())
meetupMembers %.% group_by(monthYear) %.% summarise(n = n())
meetupMembers %.% group_by(quarterYear) %.% summarise(n = n())

As you can see there’s quite a bit of duplication going on – the only thing that changes in the last 3 lines is the name of the field that we want to group by.

I wanted to pull this code out into a function and my first attempt was this:

groupMembersBy = function(field) {
  meetupMembers %.% group_by(field) %.% summarise(n = n())
}

And now if we try to group by week:

> groupMembersBy("week")
 Error: index out of bounds

It turns out if we want to do this then we actually want the regroup function rather than group_by:

groupMembersBy = function(field) {
  meetupMembers %.% regroup(list(field)) %.% summarise(n = n())
}

And now if we group by week:

> head(groupMembersBy("week"), 20)
Source: local data frame [20 x 2]
 
         week n
1  2011-06-02 8
2  2011-06-09 4
3  2011-06-16 1
4  2011-06-30 2
5  2011-07-14 1
6  2011-07-21 1
7  2011-08-18 1
8  2011-10-13 1
9  2011-11-24 2
10 2012-01-05 1
11 2012-01-12 3
12 2012-02-09 1
13 2012-02-16 2
14 2012-02-23 4
15 2012-03-01 2
16 2012-03-08 3
17 2012-03-15 5
18 2012-03-29 1
19 2012-04-05 2
20 2012-04-19 1

Much better!
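
As an aside (my addition, not part of the original post): regroup and the %.% operator were later removed from dplyr, so with a recent version of the package a sketch along the following lines should do the same job; the across(all_of(...)) idiom looks the grouping column up by its name at run time:

library(dplyr)
 
groupMembersBy <- function(field) {
  meetupMembers %>%
    group_by(across(all_of(field))) %>%   # group by the column named in 'field'
    summarise(n = n())
}
 
head(groupMembersBy("week"), 20)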

Categories: Programming

Inspirational Work Quotes at a Glance

What if your work could be your ultimate platform? … your ultimate channel for your growth and greatness?

We spend a lot of time at work. 

For some people, work is their ultimate form of self-expression.

For others, work is a curse.

Nobody stops you from using work as a chance to challenge yourself, to grow your skills, and become all that you’re capable of.

But that’s a very different mindset than work is a place you have to go to, or stuff you have to do.

When you change your mind, you change your approach.  And when you change your approach, you change your results.   But rather than just try to change your mind, the ideal scenario is to expand your mind, and become more resourceful.

You can do so with quotes.

Grow Your “Work Intelligence” with Inspirational Work Quotes

In fact, you can actually build your “work intelligence.”

Here are a few ways to think about “intelligence”:

  1. the ability to learn or understand things or to deal with new or difficult situations (Merriam Webster)
  2. the more distinctions you have for a given concept, the more intelligence you have

In Rich Dad, Poor Dad, Robert Kiyosaki says, “intelligence is the ability to make finer distinctions.”   And Tony Robbins says, “intelligence is the measure of the number and the quality of the distinctions you have in a given situation.”

If you want to grow your “work intelligence”, one of the best ways is to familiarize yourself with the best inspirational quotes about work.

By drawing from the wisdom of the ages and modern sages, you can operate at a higher level and turn work from a chore into a platform for lifelong learning, a dojo for personal growth, and a chance to master your craft.

You can use inspirational quotes about work to fill your head with ideas, distinctions, and key concepts that help you unleash what you’re capable of.

To give you a giant head start and to help you build a personal library of profound knowledge, here are two work quotes collections you can draw from:

37 Inspirational Quotes for Work as Self-Expression

Inspirational Work Quotes

10 Distinct Ideas for Thinking About Your Work

Let’s practice.   This will only take a minute, and if you happen to hear the right words (the keys for you), your insight or “ah-ha” can be just the breakthrough you needed to get more out of your work and, as a result, more out of life (or at least out of your moments).

Here is a sample of distinct ideas you can use to change how you perceive your work and/or how you do your work:

  1. “Either write something worth reading or do something worth writing.” — Benjamin Franklin
  2. “You don’t get paid for the hour. You get paid for the value you bring to the hour.” — Jim Rohn
  3. “Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do.” — Steve Jobs
  4. “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” -- Bill Gates
  5. “We must each have the courage to transform as individuals. We must ask ourselves, what idea can I bring to life? What insight can I illuminate? What individual life could I change? What customer can I delight? What new skill could I learn? What team could I help build? What orthodoxy should I question?” – Satya Nadella
  6. “My work is a game, a very serious game.” — M. C. Escher
  7. “Hard work is a prison sentence only if it does not have meaning. Once it does, it becomes the kind of thing that makes you grab your wife around the waist and dance a jig.” — Malcolm Gladwell
  8. “The test of the artist does not lie in the will with which he goes to work, but in the excellence of the work he produces.” -- Thomas Aquinas
  9. “Are you bored with life? Then throw yourself into some work you believe in with all you heart, live for it, die for it, and you will find happiness that you had thought could never be yours.” — Dale Carnegie
  10. “I like work; it fascinates me. I can sit and look at it for hours.” -– Jerome K. Jerome

For more ideas, take a stroll through my inspirational work quotes.

As you can see, there are lots of ways to think about work and what it means.  At the end of the day, what matters is how you think about it, and what you make of it.  It’s either an investment, or it’s an incredible waste of time.  You can make it mundane, or you can make it matter.

The Pleasant Life, The Good Life, and The Meaningful Life

Here’s another surprise about work.   You can use work to live the good life.   According to Martin Seligman, a master in the art and science of positive psychology, there are three paths to happiness:

  1. The Pleasant Life
  2. The Good Life
  3. The Meaningful Life

In The Pleasant Life, you simply try to have as much pleasure as possible.  In The Good Life, you spend more time in your values.  In The Meaningful Life, you use your strengths in the service of something that is bigger than you are.

There are so many ways you can live your values at work and connect your work with what makes you come alive.

There are so many ways to turn what you do into service for others and become a part of something that’s bigger than you.

If you haven’t figured out how yet, then dig deeper, find a mentor, and figure it out.

You spend way too much time at work to let your influence and impact fade to black.

You Might Also Like

40 Hour Work Week at Microsoft

Agile Avoids Work About Work

How Employees Lost Empathy for Their Work, for the Customer, and for the Final Product

Satya Nadella on Live and Work a Meaningful Life

Short-Burst Work

Categories: Architecture, Programming

R: Grouping by week, month, quarter

Mark Needham - Fri, 08/29/2014 - 01:25

In my continued playing around with R and meetup data I wanted to have a look at when people joined the London Neo4j group based on week, month or quarter of the year to see when they were most likely to do so.

I started with the following query to get back the join timestamps:

library(RNeo4j)
query = "MATCH (:Person)-[:HAS_MEETUP_PROFILE]->()-[:HAS_MEMBERSHIP]->(membership)-[:OF_GROUP]->(g:Group {name: \"Neo4j - London User Group\"})
         RETURN membership.joined AS joinTimestamp"
meetupMembers = cypher(graph, query)
 
> head(meetupMembers)
      joinTimestamp
1 1.376572e+12
2 1.379491e+12
3 1.349454e+12
4 1.383127e+12
5 1.372239e+12
6 1.330295e+12

The first step was to convert joinTimestamp into a nicer date format (joinDate) that we can use in R more easily:

timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
meetupMembers$joinDate <- timestampToDate(meetupMembers$joinTimestamp)
 
> head(meetupMembers)
  joinTimestamp            joinDate
1  1.376572e+12 2013-08-15 13:13:40
2  1.379491e+12 2013-09-18 07:55:11
3  1.349454e+12 2012-10-05 16:28:04
4  1.383127e+12 2013-10-30 09:59:03
5  1.372239e+12 2013-06-26 09:27:40
6  1.330295e+12 2012-02-26 22:27:00

Much better!

I started off with grouping by month and quarter and came across the excellent zoo library which makes it really easy to transform dates:

library(zoo)
meetupMembers$monthYear <- as.Date(as.yearmon(meetupMembers$joinDate))
meetupMembers$quarterYear <- as.Date(as.yearqtr(meetupMembers$joinDate))
 
> head(meetupMembers)
  joinTimestamp            joinDate  monthYear quarterYear
1  1.376572e+12 2013-08-15 13:13:40 2013-08-01  2013-07-01
2  1.379491e+12 2013-09-18 07:55:11 2013-09-01  2013-07-01
3  1.349454e+12 2012-10-05 16:28:04 2012-10-01  2012-10-01
4  1.383127e+12 2013-10-30 09:59:03 2013-10-01  2013-10-01
5  1.372239e+12 2013-06-26 09:27:40 2013-06-01  2013-04-01
6  1.330295e+12 2012-02-26 22:27:00 2012-02-01  2012-01-01

The next step was to create a new data frame which grouped the data by those fields. I’ve been learning dplyr as part of Udacity’s EDA course so I thought I’d try and use that:

> head(meetupMembers %.% group_by(monthYear) %.% summarise(n = n()), 20)
 
    monthYear  n
1  2011-06-01 13
2  2011-07-01  4
3  2011-08-01  1
4  2011-10-01  1
5  2011-11-01  2
6  2012-01-01  4
7  2012-02-01  7
8  2012-03-01 11
9  2012-04-01  3
10 2012-05-01  9
11 2012-06-01  5
12 2012-07-01 16
13 2012-08-01 32
14 2012-09-01 14
15 2012-10-01 28
16 2012-11-01 31
17 2012-12-01  7
18 2013-01-01 52
19 2013-02-01 49
20 2013-03-01 22
> head(meetupMembers %.% group_by(quarterYear) %.% summarise(n = n()), 20)
 
   quarterYear   n
1   2011-04-01  13
2   2011-07-01   5
3   2011-10-01   3
4   2012-01-01  22
5   2012-04-01  17
6   2012-07-01  62
7   2012-10-01  66
8   2013-01-01 123
9   2013-04-01 139
10  2013-07-01 117
11  2013-10-01  94
12  2014-01-01 266
13  2014-04-01 359
14  2014-07-01 216

Grouping by week number is a bit trickier but we can do it with a bit of transformation on our initial timestamp:

meetupMembers$week <- as.Date("1970-01-01")+7*trunc((meetupMembers$joinTimestamp / 1000)/(3600*24*7))
 
> head(meetupMembers %.% group_by(week) %.% summarise(n = n()), 20)
 
         week n
1  2011-06-02 8
2  2011-06-09 4
3  2011-06-16 1
4  2011-06-30 2
5  2011-07-14 1
6  2011-07-21 1
7  2011-08-18 1
8  2011-10-13 1
9  2011-11-24 2
10 2012-01-05 1
11 2012-01-12 3
12 2012-02-09 1
13 2012-02-16 2
14 2012-02-23 4
15 2012-03-01 2
16 2012-03-08 3
17 2012-03-15 5
18 2012-03-29 1
19 2012-04-05 2
20 2012-04-19 1
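
As a side note (my own addition, not from the original post), base R's cut() can produce a similar weekly bucketing; its weeks start on Mondays rather than being counted in 7-day blocks from the Unix epoch, so the group boundaries will differ slightly:

# Hypothetical alternative: bucket join dates into calendar weeks (Monday start)
meetupMembers$weekStarting <- cut(as.Date(meetupMembers$joinDate), breaks = "week")
 
head(meetupMembers %.% group_by(weekStarting) %.% summarise(n = n()))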

We can then plug that data frame into ggplot if we want to track membership sign-ups over time at different levels of granularity and create some bar charts or scatter plots, depending on what we feel like!
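
For example, a minimal ggplot2 sketch (my addition; byWeek is a hypothetical name for the weekly data frame built above) might look like this:

library(ggplot2)
 
byWeek <- meetupMembers %.% group_by(week) %.% summarise(n = n())
 
# Bar chart of new members per week
ggplot(byWeek, aes(x = week, y = n)) +
  geom_bar(stat = "identity") +
  labs(x = "Week commencing", y = "New members")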

Categories: Programming

Traceability: Assessing Customer Involvement

Ruminating on Customer Involvement

Customer involvement can be defined as the amount of time and effort applied to a project by the customers (or users) of the project.  Involvement can be both good (e.g. knowledge transfer and decision making) and bad (e.g. interference and indecision).  The goal in using the traceability model is to force the project team to predict both the quality and quantity of customer involvement as accurately as possible across the life of a project.  While the question of the quality and quantity of customer involvement is important for all projects, it becomes even more important as Agile techniques are leveraged.  Customer involvement is required for the effective use of Agile techniques and to reduce the need for classic traceability.  Involvement is used to replace documentation with a combination of lighter documentation and interaction with the customer.

Quality can be unpacked to include attributes such as competence: knowledge of the problem space, knowledge of the process and the ability to make decisions that stick.  Assessing the quality attributes of involvement requires understanding how having multiple customer and/or user constituencies involved in the project outcome can change the complexity of the project.  For example, the impact of multiple customer and user constituencies on decision making, specifically the ability to make decisions correctly or on a timely basis, will influence how a project needs to be run.  Multiple constituencies complicate the ability to make decisions, which drives the need for structure.  As the number of groups increases, the number of communication nodes increases, making it more difficult to get enough people involved in a timely manner.   Although checklists are used to facilitate the model, model users should remember that knowledge of the project and of project management is needed to use the model effectively.  Users of the model should not see the lists of attributes and believe that the model can be used merely as a check-the-box method.

The methodical assessment of the quantity and quality of customer involvement requires determining the leading indicators of success.  Professional experience suggests a standard set of predictors for customer involvement, which are incorporated into the appraisal questions below.
These predictors are as follows:

  1. Agile methods will be used                        y/n
  2. The customer will be available more than 80% of the time         y/n
  3. User/customer will be co-located with the project team            y/n
  4. Project has a single primary customer                    y/n
  5. The customer has adequate business knowledge            y/n
  6. The customer has knowledge of how development projects work         y/n
  7. Correct business decision makers are available                y/n
  8. Team members have a high level of interpersonal skills            y/n
  9. Process coaches are available                    y/n

The assessment simplifies evaluation by using a simple yes-no scale.  Gray areas like ‘maybe’ are evaluated as equivalent to a ‘no’.  While the rating scale is simple, the discussion required to get to a yes-no decision is typically far less simple.
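
To make the tallying concrete, here is a small sketch (my own illustration with hypothetical answers; R is used only because it is handy) that counts the yes/no responses for the customer involvement axis:

# Hypothetical answers to the nine customer involvement questions above;
# record any "maybe" as "N" before tallying
answers <- c("Y", "Y", "N", "Y", "Y", "N", "Y", "Y", "N")
 
c(yes = sum(answers == "Y"), no = sum(answers == "N"))
# yes  no
#   6   3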

Agile methods will be used:  The first component in the evaluation is to determine whether the project intends to use disciplined Agile methods for the project being evaluated.  The term ‘disciplined’ is used on purpose.  Agile methods like XP are a set of practices that interact to create development supported by intimate communication.  Without the discipline or without the critical practices, the communication alone will not suffice.  Assessment tip:  Using a defined, agile process equates to a ‘Y’; making it up as you go equates to an ‘N’.

Customer availability (>80%):  Intense customer interaction is required to ensure effective development and to reduce reliance on classically documented traceability.  Availability is defined as the total amount of time the primary customer is available.  If customers are not available, a lack of interaction is a foregone conclusion.  I have found that agile methods (which require intense communication) tend to lose traction when customer availability drops below 80%.   Assessment Tip: Assess this attribute as a ‘Y’ if primary customer availability is above 80%.  Assess it as an ‘N’ if customer availability is below 80% (in other words, unless there are very special circumstances, if your customers are not around at least 80% of the time during the project, rate this as a ‘No’).

Co-located customer/user:  Co-location is an intimate implementation scenario of customer/user availability.  The intimacy that co-location provides can be leveraged as a replacement for documentation-based communication by using less formal techniques like white boards and sticky notes.  Assessment Tip:  Stand up and look around; if you don’t have a high probability of seeing your primary customer (unless it is lunch time), you should rate this attribute as an ‘N’.  Metaverse tools (e.g. Second Life or similar) can be used to mitigate some of the problems of disparate physical locations.

Project Has A Single Customer:  As the number of primary customers increases, the number of communication paths required for creating and deploying the project grows rapidly.  The impact that the number of customers has on communication is not linear; it can be more easily conceived of as a web.  Each node in the web will require attention (attention = communication) to coordinate activities.  Assessment Tip: Count the number of primary customers; if you need more than one finger, assess this question as an ‘N’.
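
As a rough illustration (my addition, using the familiar n(n-1)/2 rule of thumb for pairwise communication paths rather than anything prescribed by the model), the number of paths grows much faster than the number of parties involved:

# Pairwise communication paths among n parties
communicationPaths <- function(n) n * (n - 1) / 2
communicationPaths(1:6)
# [1]  0  1  3  6 10 15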

Business Knowledge:  The quality and quantity of business knowledge the team has to draw upon are inversely related to the amount of documentation-based communication needed.  The availability of solid business knowledge affects the amount of background that needs to be documented in order to establish the team’s bona fides.  It should be noted that it can be argued that sourcing long-term business knowledge in human repositories is a risk.  Assessment Tip:  Assessing the quality and quantity of business knowledge will require introspection and fairly brutal honesty, but do not sell the team or yourself short.

Knowledge of How Development Projects Work:  All team members, whether they are filling a hardcore IT role or the most ancillary user role, need to understand both their project responsibilities and how they will contribute to the project.  The more intrinsically participants understand their roles and responsibilities, the less effort a project will typically waste on non-value-added activities (like re-explaining how work is done).  Assessment Tip:  This is an area that can be addressed after assessment through training.  If team members cannot be trained or educated as to their role, appraise this attribute as an ‘N’.

Decision Makers:  The “decision makers” attribute concerns the process that leads to the selection of a course of action.  Most IT projects have a core set of business customers that are the decision makers for requirements and business direction.  Knowing who can make a decision (and have it stick), and then having access to them, is critical.  Having a set of customers available or co-located is not effective if they are not decision makers (‘the right people’).  The perfect customer for a development project is available, co-located and able to make decisions that stick (and is very apt not to be the person you are actually given).  Assessment Tip:  This is another area that can only be answered after soul-searching introspection (i.e. thinking about it over a beer).  If your customer has to check with a nebulous puppet master before making critical decisions, then the assessment response should be an “N”.

High Level of Interpersonal Skills:  All team members must be able to interact and perform as a team.  Insular or other behavior that is not conducive to teamwork will cause communication to pool and stagnate as team members either avoid the non-team player or the offending party holds on to information at inopportune times.  Non-team behavior within a team is bad regardless of the development methodology being used.  Assessment Tip:  Teams that have worked together and crafted a good working relationship can typically answer this with a “Y”.

Facilitation: Projects perform more consistently with coaching (and seem to deliver better solutions); however, coaching as a practice has not been universally adopted.  The role that has been universally embraced is project manager (PM).  Coaches and project managers typically play two very different roles.  The PM role has an external focus and acts as the voice of the process, while the role of coach has an internal focus and acts as the voice of the team (outside vs. inside, process vs. people).  Agile methods implement the roles of coach and PM as two very different roles, even though they can co-exist.  Coaches nurture the personnel on the project, helping them to do their best (remember your last coach).  Shouldn’t the same facility be leveraged on all projects?  Assessment Tip:  Evaluate whether a coach is assigned; if yes, answer affirmatively.  If the role is not formally recognized within the group or organization, care should be taken, even if a coach is appointed.


Categories: Process Management

Training – Lessons Learned from Training a Group of Indian Analysts

Software Requirements Blog - Seilevel.com - Thu, 08/28/2014 - 17:00
I recently trained a couple of groups of analysts in India on Seilevel methodology. This was the first time we had done training in an Indian setting and honestly, I have to confess to being more than a bit apprehensive when I set out. My fears, set out in no particular order of importance, included: […]
Categories: Requirements

Managers Manage Ambiguity

I was thinking about Glen Alleman’s post, All Things Project Are Probabilistic. In it, he says,

Management is Prediction

as an inference from Deming. When I read this quote,

If you can’t describe what you are doing as a process, you don’t know what you’re doing. –Deming

I infer from Deming that managers must manage ambiguity.

Here’s where Glen and I agree. Well, I think we agree. I hope I am not putting words into Glen’s mouth. I am sure he will correct me if I am.

Managers make decisions based on uncertain data. Some of that data is predictive data.

For example, I suggest that people provide, where necessary, order-of-magnitude estimates of projects and programs. Sometimes you need those estimates. Sometimes you don’t. (Yes, I have worked on programs where we didn’t need to estimate. We needed to execute and show progress.)

Now, here’s where I suspect Glen and I disagree:

  1. Asking people for detailed estimates at the beginning of a project and expecting those estimates to be true for the entire project. First, the estimates are guesses. Second, software is about learning. If you work in an agile way, you want to incorporate learning and change into the project or program. I have some posts about estimation in this blog queue where I discuss this.
  2. Using estimation for the project portfolio. I see no point in using estimates instead of value for the project portfolio, especially if you use agile approaches to your projects. If we finish features, we can end the project at any time. We can release it. This makes software different than any other type of project. Why not exploit that difference? Value makes much more sense. You can incorporate cost of delay into value.
  3. If you use your estimate as a target, you have some predictable outcomes unless you get lucky: you will shortchange the feature by decreasing scope, incur technical debt, or increase the defects. Or all three.

What works for projects is honest status reporting, which traffic lights don’t provide. Demos provide that. Transparency about obstacles provides that. The ability to be honest about how to solve problems and work through issues provides that.

Much has changed since I last worked on a DOD project. I’m delighted to see that Glen writes that many government projects are taking more agile approaches. However, if we always work on innovative, new work, we cannot predict with perfect estimation what it will take at the beginning, or even through the project. We can better our estimates as we proceed.

We can have a process for our work. Regardless of our approach, as long as we don’t do code-and-fix, we do have one. (In Manage It! Your Guide to Modern, Pragmatic Project Management, I say to choose an approach based on your context, and to choose any lifecycle except for code-and-fix.)

We can refine our estimates, if management needs them. The question is this: why does management need them? For predicting future cost for a customer? Okay, that’s reasonable. Maybe on large programs, you do an estimate every quarter for the next quarter, based on what you completed, as in released, and what’s on the roadmap. You already know what you have done. You know what your challenges were. You can do better estimates. I would even do an EQF for the entire project/program. Nobody has an open spigot of money.

But, in my experience, the agile project or program will end before you expect it to. (See the comments on Capacity Planning and the Project Portfolio.) However, the project will only end early if you evaluate features based on value and if you collaborate with your customer. The customer will say, “I have enough now. I don’t need more.” It might occur before the last expected quarter. It might occur before the last expected half-year.

That’s the real ambiguity that managers need to manage. Our estimates will not be correct. Technical leaders, project managers and product owners need to manage risks and value so the project stays on track. Managers need to ask the question: What if the project or program ends early?

Ambiguity, anyone?

Categories: Project Management

Speaking in September

Coding the Architecture - Simon Brown - Thu, 08/28/2014 - 16:01

After a lovely summer (mostly) spent in Jersey, September is right around the corner and is shaping up to be a busy month. Here's a list of the events where you'll be able to find me.

It's going to be a fun month and besides, I have to keep up my British Airways frequent flyer status somehow, right? ;-)

Categories: Architecture