
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Project Success: On-Schedule

On-schedule!


Understanding the project team’s definition of success is important because it helps us understand how the team will behave in crunch time. And yes, crunch time still happens. Recently I asked a selection of podcast listeners and blog readers to weigh in on whether being on-budget, on-scope or on-schedule was the most important attribute of project success. A majority answered that on-schedule was the most important attribute. One of the respondents summed up their rationale for choosing on-schedule by saying, “Given the high-paced world we find ourselves in, I think on-schedule is what many now use to judge project ‘success.’”

Dates are projected when a project is conceived: a project will begin on x date and end on y date. These dates create an anchor that will be used to compare performance. Both the business and IT plan based on these dates. A simple example illustrates the cascade effect caused by a project running longer than anticipated. If an e-commerce project has to add two extra two-week sprints to complete a minimum viable product, the teams working on the project cannot then start the other project they were scheduled for when the basic site was completed. That project is now delayed by four weeks, impacting the business and other stakeholders. The rollout of the new site will also have an effect, both from an operational and a marketing perspective. Plans will need to be adjusted. Because the site is late, the anticipated revenues will be delayed and the ROI calculation for the project will be impacted. Finally, if the project is large enough, the delay may impact earnings, making life uncomfortable for managers and executives. While there are often many reasons for missing the schedule, the schedule miss will be a drag on the perception of success. As another respondent noted, “Oftentimes, I find that the driver of most projects is time…after all, everyone wants things done quickly. These drivers influence successful partnerships or solicitations.”

The news media of all types, popular and technical, continually remind us that we are living in a very dynamic era. New technologies and businesses rise and fall in a metaphorical blink of an eye. Software and technology are seen as critical enablers to support the pace of change. Therefore, there is an expectation levied on projects to meet the need for new and changed functionality. Agile methods, which deliver functionality either quickly or continuously, fit well into dynamic environments. One respondent suggested that the pace and the focus on being on-schedule exist because “IT projects are no longer simply supporting back office functions. They are responsible for driving the customer experience.” Customers expect change and systems must evolve to meet that expectation.

The Project Management Institute defines a project as a temporary endeavor with a start and end date. Plans are made by the business as well as on a more tactical level within development organizations. When projects don’t happen when they are planned, other projects and events are impacted. Change causes a ripple effect. Methods and techniques like Agile change the playing field by doubling down on the concept of on-schedule. Agile and continuous delivery techniques promise delivery of potentially implementable functionality either as a continuous flow or on a short, known cadence. The customer’s expectation for when they will get functionality is set when we provide a date. Teams tend to pursue the commitments they make, and the schedule is often the most obvious of those commitments.


Categories: Process Management

Full width iOS Today Extension

Xebia Blog - Thu, 09/18/2014 - 10:57

Apple decided that the new Today Extensions of iOS 8 should not be the full width of the notification center. They state in their documentation:

Because space in the Today view is limited and the expected user experience is quick and focused, you shouldn’t create a widget that's too big by default. On both platforms, a widget must fit within the width of the Today view, but it can increase in height to display more content.

Source: https://developer.apple.com/library/ios/documentation/General/Conceptual/ExtensibilityPG/NotificationCenter.html

This means that developers who create Today Extensions can only use a width of 273 points instead of the full 320 points (for iPhones prior to the iPhone 6) and have a left offset of the remaining 47 points. Though with the release of iOS 8, several apps like Dropbox and Evernote do seem to have a Today Extension that uses the full width. This raises the question whether Apple noticed this and how it got through the approval process. Does Apple not care?

Should you want to create a Today Extension with the full width yourself as well, here is how to do it (in Swift):

override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    if let superview = view.superview {
        // Stretch the superview (the Today view container) to the left edge of
        // the screen: drop its x offset and add that offset back to its width.
        var frame = superview.frame
        frame = CGRectMake(0, CGRectGetMinY(frame), CGRectGetWidth(frame) + CGRectGetMinX(frame), CGRectGetHeight(frame))
        superview.frame = frame
    }
}

This changes the superview (the Today view) of your Today Extension's view. It doesn't use any private APIs, but Apple might reject it for not following their rules. So think carefully before you use it.

Become high performing. By being happy.  

Xebia Blog - Thu, 09/18/2014 - 03:59

The summer holidays are over. Fall is coming. Like the start of every new year, a good moment for new inspiration.

Recently, I went twice to the Boston area for a client of Xebia. There I met (I dislike the word “assessments”...) a number of experienced Scrum teams. They had an excellent understanding of Scrum, but were not able to convert this into an excellent performance. Actually, they were somewhat frustrated and their performance was slightly going down.

So, they were great teams with great team members, and their agile processes were running smoothly, but still there was not a single winning team. Which left, in my opinion, only one option: a lack of spirit. Spirit is the fertilizer of Scrum and, in fact, of every framework, methodology and innovation. But how to boost the spirit?

Until a few years ago, I would “just” organize team-building sessions to boost this, in parallel with fixing/escalating the root causes. Noble, but far from effective. It’s much more about mindset and happiness, and taking your own responsibility there. Let me explain this a little bit more here.

These are definitely awkward times. Terrible wars and epidemics we can no longer turn our backs on, an economic system that hardly survives, an ever-accelerating and highly demanding society. In all of this we have to find “time” for our friends, family, ourselves and our job or study. The last ones are essential to regain balance in an increasingly depressing world. But how?

One of the most important building blocks of the agile mindset and of life is happiness. Happiness is the fuel of acceleration and success. But what is happiness? Happiness is the ultimate moment when you’re not thinking, but enjoying the moment and forgetting the world around you. For example, craftsmanship will do this to you: losing track of time while exercising the craft you love.

But too often we prevent ourselves from being happy. Why should I be happy in this crazy world? With this mentality you’re kept hostage by your worrying mind and ignore the ultimate state you were born in: pure, happy and ready to explore the world (and make mistakes!). It’s not a bad thing to be egocentric sometimes and switch off your dominant mind now and then. Every human being has the state of mind and ability to do this. But we do so too rarely.

On the other hand, it’s also not a bad thing to be angry, frightened or sad sometimes. These emotions will help you enjoy your moments of happiness more. But often your mind will resist these emotions. They are perceived as a sign of weakness or as a negative thing you should avoid. A wrong assumption. The harder you try to resist these emotions, the longer they will stay in your system and prevent you from being happy.

Being aware of the mechanisms I’ve explained above will make you happier, more productive and better company for your family, friends and colleagues. Parties will no longer be a forced attempt to create happiness, but a celebration of personal responsibility, success and happiness.

Project Success: Runner Up – On-scope

Looking Back

Keep on-scope.

I recently asked a selection of readers and listeners of the Software Process and Measurement podcast and blog what they perceived as the single most important attribute that determines project success. Everyone who answered had, and expressed, an opinion on the topic. In order to focus those opinions, I constrained the respondents by asking them to choose from the classic on-budget, on-scope and on-schedule triangle. In Project Success: An Overview I went through the results at a high level; on-scope was a clear, but strenuously argued, second-place finisher (far ahead of on-budget). Respondents rationalized the importance of being on-scope as connecting with and satisfying their customers, and as an attribute they had influence over given their span of control.

One respondent stated the argument for on-scope in a very straightforward manner: “I try to remind myself that the business requirements are most important, even if it results in a less-than-technically elegant solution.” The value of any project is tied closely to the functionality being delivered. In a similar vein, another respondent suggested that cutting scope hurts the people that need to use the system (paraphrased). If the software, or its interface with users of all types, does not address the needs of the business, it has little value. Scope, whether documented as requirements or user stories, represents a common understanding between customer and developer.

As projects evolve, the scope changes, whether through adding stories to and reprioritizing the project backlog or via change requests in classic project management methods. The process of accepting change and determining how that change is implemented is usually a negotiation between the team and stakeholders. One of the respondents put it succinctly: “a clear definition of scope and a transparent understanding between our business customers and IT is what makes a project successful.” Involvement in the discussion of what is in scope, and in the implicit business discussion that accompanies it, is required for transparency and confers importance on the on-scope attribute. If we compare the perceived level of importance of on-scope to on-budget we see the impact of involvement. Budgets are typically set from outside the team’s span of control and act as a constraint on the project team; therefore practitioners see the attribute of on-budget as less important than on-scope.

One response suggested that being on-scope implies meeting functional and performance requirements. While this might not be exactly true, in the hard light of a project the connection between scope and requirements establishes a link between the development team and their customers. That linkage to the customer’s needs makes the relationship between the two groups more intimate and provides the team with motivation, hence the perceived importance of the on-scope attribute.


Categories: Process Management

Continuous Delivery is about removing waste from the Software Delivery Pipeline

Xebia Blog - Wed, 09/17/2014 - 15:44

On October 22nd I will be speaking at the Continuous Delivery and DevOps Conference in Copenhagen, where I will share experiences from a very successful implementation of a new website serving about 20,000,000 page views a month.

Components and content for this site were developed by five(!) different vendors, and for this project the customer took the initiative to work according to DevOps principles and implement a fully automated Software Delivery Process as they went along. This was a big win for the project, as development teams could now focus on delivering new software instead of fixing issues within the delivery process itself, and I was the lucky one who got to implement this.

This post is about visualizing the 'waste' we addressed within the project; you might find the diagrams handy when communicating Continuous Delivery principles within your own organization.

To enable yourself to work according to Continuous Delivery principles, an effective starting point is to remove waste from the Software Delivery Process. If you look at a traditional Software Delivery Process you'll find that there are probably many areas in your existing process that do not add any value for the customer at all.

These areas should be seen as pure waste, not adding any value to your customer, costing you either time or money (or both) over and over again. Each time new features are developed and pushed to production, many people will perform a lot of costly manual work and run into the same issues over and over again. The diagram below provides an example of common areas where you might find waste in your existing Software Delivery Pipeline. Imagine this process repeating every time a development team delivers new software. Within your conversation, you might want to use a similar diagram to explain pain points within your current Software Delivery Process.

a traditional software delivery process


Automation of the Software Delivery Process within this project was all about eliminating known waste as much as possible. This resulted in setting up an Agile project structure and starting to work according to DevOps principles, enabling the team to deliver software on a frequent basis. Next to that, we automated the central build with Jenkins CI, which checks out code from a Git version management system, compiles code using Maven, stores components in Apache Archiva, kicks off static, unit and functional tests covering both the JEE and PHP codebases, and creates Deployment Units for further processing down the line. Deployment automation itself was implemented by introducing XL Deploy. By doing so, every time a developer pushed new JEE or PHP code into the Git repository, freshly baked deployment units were instantly deployed to the target landscape, which in turn was managed by Puppet. An abstract diagram of this approach and the chosen tooling is provided below.

overview of tooling for automating the software delivery process


When paving the way for Continuous Delivery, I often like to refer to this as working on the six A's: Setting up Agile (Product Focused) Delivery teams, Automating the build, Automating tests, Automating Deployments, Automating the Provisioning Layer and clean, easy-to-handle Software Architectures. The A for Architecture is about making sure that the software being delivered actually supports automation of the Software Delivery Process itself and puts the customer in the position to work according to Continuous Delivery principles. This A is not visible in the diagram.

After automation of the Software Delivery Process, the customer's software development process behaved like the optimized process below, giving the team the opportunity to push out a constant and fluid flow of new features to the end user. Within your conversation, you might want to use this diagram to explain the advantages to your organization.

an optimized software delivery process


As we automated the Software Delivery Pipeline for the customer, we positioned them to go live at the press of a button. And on the go-live date, it was just that: a press of the button, and five minutes later the site was completely live, making this the most boring go-live event I've ever experienced. The project itself was really good fun though! :)

Needless to say, subsequent updates now move into the live state in a matter of minutes, as the whole process has become very reliable. Deploying code has become a non-event. More details on how we made this project a complete success, how we implemented this environment, the project setting and the chosen tooling, along with technical details, I will happily share at the Continuous Delivery and DevOps Conference in Copenhagen. But of course you can also contact me directly. For now, I just hope to meet you there.

Michiel Sens.

Project Success: Third Place – On-budget

 

Are budgets always a bit askew?


I recently asked a selection of the Software Process and Measurement Podcast listeners and blog readers which of the classic three factors, on-time, on-budget or on-scope, defines project success. While overall the results are not surprising, I expected that being on-budget would have had more mentions than the results showed. Of the respondents, only one person put on-budget at the top of their list. On the other hand, another respondent took the time to explicitly point out why budget could not be at the top of their list. These are two very different takes on the impact being “on-budget” has on determining whether a project is successful.

The first comment was, “From the perspective of the sponsor(s), budget is clearly the thing. This drives a tremendous amount of organizational behavior.” Budget performance is often tied very closely to managers’ performance objectives, and therefore is tied closely to how they are paid or bonused. In order to maximize their individual pay, managers often manage the budget by constraining what is delivered or how work is done (ranging from the processes used to where the work is sourced). I have observed the outcome of budget management in projects over the years, which is why I would have thought being on-budget would be higher on the list. Budget management can be a blunt instrument, and improper budget management can cause managers to cut scope to meet the budget, lower quality by constraining testing as the budget runs out, or even log time to other projects with available budget. In Software Process and Measurement Cast 306, Luis Gonçalves suggested that many poor organizational behaviors can be traced back to performance objectives and the misuse of goals and objectives.

The second comment was, “No project is ever completely on budget.” This comment reflects a certain fatalism toward budgets (and the budgeting process). As we are all aware, budgets are typically developed and set before the organization truly knows what is being asked for and how it will be delivered. Therefore, unless stated as a range based on the probability of budgetary outcomes, they cannot be correct. Most budgets are not given or recorded as ranges, but rather as a specific number. The false precision based on imperfect information makes the budget a poor goal with little motivational power on its own.

Being on-budget has impact. Organizations allocate funds for work with the expectation of a return. This simple statement is just as true for software maintenance as it is for new product development. Variance between what was planned and what was spent will have ramifications. Development personnel for the most part can’t impact the budget directly (with the possible exception of product owners); therefore teams spend less time thinking about whether they are on-budget than about the attributes they can directly influence.


Categories: Process Management

Don’t Equate Story Points to Hours

Mike Cohn's Blog - Tue, 09/16/2014 - 15:00

I’ve been quite adamant lately that story points are about time, specifically effort. But that does not mean you should say something like, “One story point = eight hours.”

Doing this obviates the main reason to use story points in the first place. Story points are helpful because they allow team members who perform at different speeds to communicate and estimate collaboratively.

Two developers can start by estimating a given user story as one point even if their individual estimates of the actual time on task differ. Starting with that estimate, they can then agree to estimate something as two points if each agrees it will take twice as long as the first story.

When story points are equated to hours, team members can no longer do this. If someone instructs team members that one point equals eight (or any number of) hours, the benefits of estimating in an abstract but relatively meaningful unit like story points are lost.

When told to estimate this way, the team member will mentally estimate first in number of hours and then convert that estimate to points. Something the developer estimates to be 16 hours will be converted to 2 points.

Contrast this with a team member’s thought process when estimating in story points as they are truly intended. In this case, team members will consider how long each new story will take in comparison to other stories. You and I might agree this new story will take twice as long as a one-point story, and so we agree it’s a two.

However, you might be thinking that’s five hours of work, and I might be thinking it’s 10. In this way, story points are still about time (effort), but the amount of time per point is not pegged to the same amount for all team members.

If someone in your company wants to peg one point to some number of hours, just stop calling them points and use hours or days instead. Calling them points when they’re really just hours introduces needless complexity (and loses one of the main benefits of points).
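For readers who like to see it spelled out, here is a tiny sketch (in Swift, purely illustrative, using the numbers from the examples above) of the two thought processes being contrasted:

// Pegged points: a "point" is just an hour estimate in disguise.
let hoursPerPoint = 8.0
let peggedPoints = 16.0 / hoursPerPoint   // a 16-hour estimate becomes exactly 2 points

// Relative estimation: we disagree on hours but agree the new story is twice the baseline.
let yourHoursForBaseline = 5.0            // you think the one-point story takes 5 hours
let myHoursForBaseline = 10.0             // I think it takes 10 hours
let agreedPoints = 1.0 * 2.0              // we both still call the new story a two

println("pegged: \(peggedPoints) points; relative: \(agreedPoints) points, even though our hour estimates are \(yourHoursForBaseline) and \(myHoursForBaseline)")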



Quote of the Month September 2014

From the Editor of Methods & Tools - Tue, 09/16/2014 - 13:36
The important thing is not your process, the important thing is the process for improving your process. Source: Henrik Kniberg, http://blog.crisp.se/wp-content/uploads/2013/08/20130820-What-is-Agile.pdf

Project Success: An Overview

I recently asked a sample of Software Process and Measurement blog readers and listeners of the Software Process and Measurement Cast how they defined project success. I constrained the answers to the classic project management three-legged stool of on-schedule, on-budget and on-scope, to avoid the predominant answer: “it depends.” Even though many of the respondents found the question difficult, everyone had a succinct opinion. The ranked responses were:

  1. On-schedule
  2. On-scope
  3. On-budget

When I asked which of the three “ons” defined project success, the top response hands down was on-schedule, with nearly twice as many responses as on-scope. On-budget was a distant third, but interestingly, it was mentioned as the second most important attribute in more than a few responses.

I am not surprised by how the responses stacked up. Schedule is the most easily measured and monitored of the success criteria. Everyone knows how to read a calendar, and given that most projects are created with a due date, whether or not a project is on-time is obvious.

On-scope, while representing the voice of the customer, is more difficult for most projects to interpret and measure. Projects (whether Agile or plan-based) generally evolve over the life of the project. That evolution means that the picture any stakeholder, developer or product owner had in their head at the beginning of the project may not represent what has been delivered when the project is completed. Since scope is less tangible, it is perceived to be less important (or at least more easily debated). Being on-scope may be more of a dissatisfier (if what is delivered is not close to what was wanted, people will be dissatisfied; however, just being on-target or close does not move the needle) than a satisfier.

The final leg on our stool was on-budget. In most cases the true budget of a project is an outcome impacted by schedule and scope; therefore it is a metric that all project managers monitor but have few levers to control. Given that budget performance at a project level is largely determined by the other two legs, the respondents perceived it as less important. Had I asked a room of IT finance personnel, I suspect the answer might have been different.

One of the most interesting observations is that even without context everyone had an opinion about the definition of success. While context, if given, may have shifted the respondent’s perspective, their initial response represents the respondent’s natural cognitive bias. When asked to identify whether being on-schedule, on-budget or on-scope was the most important attribute of project success everyone I asked had an opinion. And, that opinion focused on the very tangible and measurable calendar metric: on-schedule. The basic definition of success that a project manager or leader carries with themselves is important because that opinion will guide how they will try to influence project behavior.

Note: over the next few days we will explore the rationale behind why each leg of the stool was important to those who responded.


Categories: Process Management

SPaMCAST 307 – Integration Testing and Agile, Software Sensei


http://www.spamcast.net

 

Listen to the Software Process and Measurement Cast 307 (podcast)

Software Process and Measurement Cast number 307 features our essay on integration testing and Agile. Integration testing is defined as testing in which components (software and hardware) are combined to confirm that they interact according to expectations and requirements.  Good integration testing is critical to effective development whether you are using Agile techniques or not.

Link and pictures noted in the essay:

Beer glass logo screen

Application Diagram

We also have a new installment from the Software Sensei.  Kim Pries, the Software Sensei, discusses layered process audits and software inspections.  The techniques are a powerful approach to deliver high quality software.

Next

SPaMCAST 308 features our interview with Michael West, author of Return On Process (ROP): Getting Real Performance Results from Process Improvement, and more! We had a great discussion about why some process improvements impact the organization’s bottom line and some don’t. Impacting the bottom line is not an accident.

 

Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives, September 18, 2014, 11:30 EDT. Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.

Agile Risk Management – It Is Still Important!, October 24, 2014, 11:30 EDT. Has the adoption of Agile techniques magically erased risk from software projects? Or have we just changed how we recognize and manage risk? Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast also gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Community of Practice: A Meeting Checklist

Hand Drawn Checklist


Hand Drawn Chart Saturday

The simplest definition of a community of practice (COP) is people connecting, encouraging each other and sharing ideas and experiences. There are a few basic logistics that will affect the efficiency of a community of practice. On the surface, logistics impact the ease and comfort of a meeting but, in a deeper sense, they impact members’ ability to connect and share information. A basic logistics checklist would include meeting announcements, facilities and agenda.

Community of practice meeting agenda:

Basics

__ Meeting date

__ Meeting location

__ Meeting time

__ Agenda

Meeting time and place can include lunch and learns (getting together over lunch to connect and learn). For teams or groups that are not co-located, videoconferences/teleconferences might be the only option.

Communities of practice are typically more effective when members can physically meet. Teleconferences and videoconferences tend to foster partial attention during presentations and do not support random networking. Where possible, each COP should be local. That means that in larger organizations each location will have its own COP for an area of interest. The local COP will connect periodically with like COPs outside its specific location. For example, I recently observed an Agile COP in a large multinational organization (one that was supported and funded by the corporate headquarters). A team from headquarters facilitated the development of local COPs in each location (where there was interest). Facilitation included ensuring rooms were available for meetings and that funding was provided for lunches, activities and, at least once, for beer and wine. Most importantly, the corporate sponsorship ensured time to meet was made available. On a quarterly basis an organizational videoconference was held (led by one of the local groups), and annually an in-person conference was held. This has been going on for seven years.

The meeting program is crucial for gaining and holding interest. There are a number of programming items for a community of practice that can be considered:

Category 1 – Networking:

__ Networking time with food or other sorts of social lubricant (sharing meals/food has been shown to be an effective team building tool)

__ Open sharing rounds (go around the room and share a success or failure)

Category 2 – Content:

__ Process demonstrations (demonstrate process/technique used within the organization or review a project that was of interest)

__ Agile or process related games (interactive games generate involvement and interest)

__ YouTube videos (introduce new ideas from outside experts without having to arrange for a speaker)

__ Outside presentations (new ideas from outside the team boundary to challenge biases)

The goal of the programming for the sessions is to keep the community interested, learning and interacting based on a common interest. For a typical one-hour session I generally select one option from each category.

Note: Each programming element will have different logistics. For example, if the session includes lunch, someone will need to order lunch. A short and probably incomplete list of items to consider is shown below:

__ Food / drink ordered

__ Audio visual items

__ Projector

__ Internet

__ Telephone / conference number

__ Webinar session

__ Speaker (with any needed security approval)

__ Props for Agile or process games

Wrap-up

__ Solicit ideas for the next session

__ Set the time and date for next session

__ Short retrospective (for example identify one thing you learned and one thing that can be done better next time)

Getting a community of practice together so that it can meet its goals of generating support among the community members, providing a platform to share trials and successes, and injecting new ideas and energy into the community is not an ad-hoc process. Solid, long-running COPs work diligently to ensure that each interaction holds members’ attention and delivers value!


Categories: Process Management

iOS Today Widget in Swift - Tutorial

Xebia Blog - Sat, 09/13/2014 - 18:51

Since the new app extensions of iOS 8 and Swift are both fairly new, I created a sample app that demonstrates how a simple Today extension can be made for iOS 8 in Swift. This widget will show the latest blog posts of the Xebia Blog within the Today view of the notification center.

The source code of the app is available on GitHub. The app is not available on the App Store because it's an example app, though it might be quite useful for anyone following this Blog.

In this tutorial, I'll go through the following steps:

      • New Xcode project
      • Add dependencies with cocoapods
      • Load RSS feed
      • Show items in a table view
      • Preferred content size
      • Open post in Safari
      • More and Less buttons
      • Caching

New Xcode project

Even though an extension for iOS 8 is a separate binary from an app, it's not possible to create an extension without an app. That unfortunately makes it impossible to create standalone widgets, which this sample would otherwise be, since its only purpose is to show the latest posts in the Today view. So we create a new project in Xcode and implement a very simple view. The only thing the app will do for now is tell the user to swipe down from the top of the screen to add the widget.
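The container app therefore needs little more than a label with that instruction. A minimal sketch of such a placeholder view controller could look like the following (the label text and layout are just an assumption; the actual sample app may differ):

import UIKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // A single multi-line label telling the user how to add the widget.
        let label = UILabel(frame: CGRectInset(view.bounds, 20, 0))
        label.numberOfLines = 0
        label.textAlignment = .Center
        label.text = "Swipe down from the top of the screen and tap Edit to add the widget to your Today view."
        view.addSubview(label)
    }
}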


Time to add our extension target. From the File menu we choose New > Target... and from the Application Extension choose Today Extension.


We'll name our target XebiaBlogRSSWidget and of course use Swift as Language.


The created target will have the following files:

      • TodayViewController.swift
      • MainInterface.storyboard
      • Info.plist

Since we'll be using a storyboard approach, we're fine with this setup. If however we wanted to create the view of the widget programmatically, we would delete the storyboard and replace the NSExtensionMainStoryboard key in the Info.plist with NSExtensionPrincipalClass and TodayViewController as its value. Since (at the time of this writing) Xcode cannot find Swift classes as extension principal classes, we would also have to add the following line to our TodayViewController:

@objc (TodayViewController)
Add dependencies with cocoapods

The widget will get the latest blog posts from the RSS feed of the blog: http://blog.xebia.com/feed/. That means we need something that can read and parse this feed. A search for RSS on CocoaPods gives us BlockRSSParser as the first result. It seems to do exactly what we want, so we don't need to look any further and create our Podfile with the following contents:

platform :ios, "8.0"

target "XebiaBlog" do

end

target "XebiaBlogRSSWidget" do
pod 'BlockRSSParser', '~> 2.1'
end

It's important to only add the dependency to the XebiaBlogRSSWidget target, since Xcode will build two binaries, one for the app itself and a separate one for the widget. If we added the dependency to all targets it would be included in both binaries, increasing the total download size of our app. Always add only the necessary dependencies to your app target and widget target(s).

Note: Cocoapods or Xcode might give you problems when you have a target without any Pod dependencies. In that case you may add a dependency to your main target and run pod install, after which you might be able to delete it again.

The BlockRSSParser is written in Objective-C, which means we need to add an Objective-C bridging header in order to use it from Swift. We add the file XebiaBlogRSSWidget-Bridging-Header.h to our target and add the import.

#import "RSSParser.h"

We also have to tell the Swift compiler about it in our build settings:


Load RSS feed

Finally time to do some coding. The generated TodayViewController has a function called widgetPerformUpdateWithCompletionHandler. This function gets called every once in a while to ask for new data. It also gets called right after viewDidLoad when the widget is displayed. The function has a completion handler as a parameter, which we need to call when we're done loading data. A completion handler is used instead of a return value so we can load our feed asynchronously.

In objective-c we would write the following code to load our feed:

[RSSParser parseRSSFeedForRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://blog.xebia.com/feed/"]] success:^(NSArray *feedItems) {
  // success
} failure:^(NSError *error) {
  // failure
}];

In Swift this looks slightly different. Here is the complete implementation of widgetPerformUpdateWithCompletionHandler:

func widgetPerformUpdateWithCompletionHandler(completionHandler: ((NCUpdateResult) -> Void)!) {
  let url = NSURL(string: "http://blog.xebia.com/feed/")
  let req = NSURLRequest(URL: url)

  RSSParser.parseRSSFeedForRequest(req,
    success: { feedItems in
      self.items = feedItems as? [RSSItem]
      completionHandler(.NewData)
    },
    failure: { error in
      println(error)
      completionHandler(.Failed)
  })
}

We assign the result to a new optional variable of type RSSItem array:

var items : [RSSItem]?

The completion handler gets called with either NCUpdateResult.NewData if the call was successful or NCUpdateResult.Failed when the call failed. A third option is NCUpdateResult.NoData which is used to indicate that there is no new data. We'll get to that later in this post when we cache our data.

Show items in a table view

Now that we have fetched our items from the RSS feed, we can display them in a table view. We replace our normal View Controller with a Table View Controller in our storyboard, change the superclass of TodayViewController accordingly and add three labels to the prototype cell. This is no different than in iOS 7, so I won't go into too much detail here (the complete project is on GitHub).

We also create a new Swift class for our custom Table View Cell subclass and create outlets for our 3 labels.

import UIKit

class RSSItemTableViewCell: UITableViewCell {

    @IBOutlet weak var titleLabel: UILabel!
    @IBOutlet weak var authorLabel: UILabel!
    @IBOutlet weak var dateLabel: UILabel!

}

Now we can implement our Table View Data Source functions.

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return items.count
  }
  return 0
}

Since items is an optional, we use Optional Binding to check that it's not nil and then assign it to a temporary non-optional variable: let items. It's fine to give the temporary variable the same name as the class variable.

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  let cell = tableView.dequeueReusableCellWithIdentifier("RSSItem", forIndexPath: indexPath) as RSSItemTableViewCell

  if let item = items?[indexPath.row] {
    cell.titleLabel.text = item.title
    cell.authorLabel.text = item.author
    cell.dateLabel.text = dateFormatter.stringFromDate(item.pubDate)
  }

  return cell
}

In our storyboard we've set the type of the prototype cell to our custom class RSSItemTableViewCell and used RSSItem as identifier so here we can dequeue a cell as a RSSItemTableViewCell without being afraid it would be nil. We then use Optional Binding to get the item at our row index. We could also use forced unwrapping since we know for sure that items is not nil here:

let item = items![indexPath.row]

But the optional binding makes our code safer and prevents a future crash in case our code changes.

We also need to create the date formatter that we use above to format the publication dates in the cells:

let dateFormatter : NSDateFormatter = {
        let formatter = NSDateFormatter()
        formatter.dateStyle = .ShortStyle
        return formatter
    }()

Here we use a closure to create the date formatter and to initialise it with our preferred date style. The return value of the closure will then be assigned to the property.

Preferred content size

To make sure that we can actually see the table view, we need to set the preferred content size of the widget. We'll add a new function to our class that does this.

func updatePreferredContentSize() {
  preferredContentSize = CGSizeMake(CGFloat(0), CGFloat(tableView(tableView, numberOfRowsInSection: 0)) * CGFloat(tableView.rowHeight) + tableView.sectionFooterHeight)
}

Since the widgets all have a fixed width, we can simply specify 0 for the width. The height is calculated by multiplying the number of rows with the height of the rows. Since this will set the preferred height greater than the maximum allowed height of a Today widget it will automatically shrink. We also add the sectionFooterHeight to our calculation, which is 0 for now but we'll add a footer later on.

When the preferred content size of a widget changes, it will animate the resizing of the widget. To have the table view animate nicely along with this transition, we add the following function to our class, which gets called automatically:

override func viewWillTransitionToSize(size: CGSize, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
  coordinator.animateAlongsideTransition({ context in
    self.tableView.frame = CGRectMake(0, 0, size.width, size.height)
  }, completion: nil)
}

Here we simply set the size of the table view to the size of the widget, which is the first parameter.

Of course we still need to call our update method as well as reloadData on our tableView. So we add these two calls to our success closure when we load the items from the feed:

success: { feedItems in
  self.items = feedItems as? [RSSItem]
  
  self.tableView.reloadData()
  self.updatePreferredContentSize()

  completionHandler(.NewData)
},

Let's run our widget:


It works, but we can make it look better. Table views by default have a white background color and black text color, and that's no different within a Today widget. We'd like to match the style of the standard iOS Today widgets, so we give the table view a clear background and make the text of the labels white. Unfortunately that does make our labels practically invisible in the storyboard editor, since Xcode will still show a white background for views that have a clear background color.
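The post does this styling in the storyboard, but a rough sketch of the same styling in code could look like this (treat it as an illustration rather than the project's actual code; the outlet names come from the cell class above). The table view itself gets a clear background in viewDidLoad, for example with tableView.backgroundColor = UIColor.clearColor(), and the cell clears its own background and whitens its labels:

import UIKit

class RSSItemTableViewCell: UITableViewCell {

    @IBOutlet weak var titleLabel: UILabel!
    @IBOutlet weak var authorLabel: UILabel!
    @IBOutlet weak var dateLabel: UILabel!

    override func awakeFromNib() {
        super.awakeFromNib()

        // Blend in with the standard Today widgets: no opaque cell background
        // and white text for all three labels.
        backgroundColor = UIColor.clearColor()
        contentView.backgroundColor = UIColor.clearColor()
        titleLabel.textColor = UIColor.whiteColor()
        authorLabel.textColor = UIColor.whiteColor()
        dateLabel.textColor = UIColor.whiteColor()
    }
}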

If we run again, we get a much better looking result:


Open post in Safari

To open a Blog post in Safari when tapping on an item we need to implement the tableView:didSelectRowAtIndexPath: function. In a normal app we would then use the openURL: method of UIApplication. But that's not available within a Today extension. Instead we need to use the openURL:completionHandler: method of NSExtensionContext. We can retrieve this context through the extensionContext property of our View Controller.

override func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
  if let item = items?[indexPath.row] {
    if let context = extensionContext {
      context.openURL(item.link, completionHandler: nil)
    }
  }
}
More and Less buttons

Right now our widget takes up a bit too much space within the notification center. So let's change this by showing only 3 items by default and 6 items maximum. Toggling between the default and expanded state can be done with a button that we'll add to the footer of the table view. When the user closes and opens the notification center, we want to show it in the same state as it was before so we need to remember the expand state. We can use the NSUserDefaults for this. Using a computed property to read and write from the user defaults is a nice way to write this:

let userDefaults = NSUserDefaults.standardUserDefaults()

var expanded : Bool {
  get {
    return userDefaults.boolForKey("expanded")
  }
  set (newExpanded) {
    userDefaults.setBool(newExpanded, forKey: "expanded")
    userDefaults.synchronize()
  }
}

This allows us to use it just like any other property without noticing it gets stored in the user defaults. We'll also add variables for our button and number of default and maximum rows:

let expandButton = UIButton()

let defaultNumRows = 3
let maxNumberOfRows = 6

Based on the current value of the expanded property we'll determine the number of rows that our table view should have. Of course it should never display more than the actual items we have so we also take that into account and change our function into the following:

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return min(items.count, expanded ? maxNumberOfRows : defaultNumRows)
  }
  return 0
}

Then the code to make our button work:

override func viewDidLoad() {
  super.viewDidLoad()

  updateExpandButtonTitle()
  expandButton.addTarget(self, action: "toggleExpand", forControlEvents: .TouchUpInside)
  tableView.sectionFooterHeight = 44
}

override func tableView(tableView: UITableView, viewForFooterInSection section: Int) -> UIView? {
  return expandButton
}

Depending on the current value of our expanded property we will show either "Show less" or "Show more" as the button title.

func updateExpandButtonTitle() {
  expandButton.setTitle(expanded ? "Show less" : "Show more", forState: .Normal)
}

When we tap the button, we'll invert the expanded property, update the button title and preferred content size and reload the table view data.

func toggleExpand() {
  expanded = !expanded
  updateExpandButtonTitle()
  updatePreferredContentSize()
  tableView.reloadData()
}

And as a result we can now toggle the number of rows we want to see.


Caching

At this moment, each time we open the widget, we first get an empty list, and then once the feed is loaded the items are displayed. To improve this, we can cache the retrieved items and display those when the widget is opened, before we load the items from the feed. The TMCache library makes this possible with little effort. We can add it to our Podfile and bridging header the same way we did for the BlockRSSParser library.
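Concretely, that means one extra pod in the widget target of the Podfile and one extra import in the bridging header, along these lines (a sketch; whether and how you pin the TMCache version is up to you):

target "XebiaBlogRSSWidget" do
pod 'BlockRSSParser', '~> 2.1'
pod 'TMCache'
end

And in XebiaBlogRSSWidget-Bridging-Header.h:

#import "RSSParser.h"
#import "TMCache.h"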

Also here, a computed property works nicely for caching the items and hiding the actual implementation:

var cachedItems : [RSSItem]? {
  get {
    return TMCache.sharedCache().objectForKey("feed") as? [RSSItem]
  }
  set (newItems) {
    TMCache.sharedCache().setObject(newItems, forKey: "feed")
  }
}

Since the RSSItem class of the BlockRSSParser library conforms to the NSCoding protocol, we can use the items directly with TMCache. When we retrieve the items from the cache for the first time, we'll get nil since the cache is empty. Therefore cachedItems needs to be an optional, as does the downcast, which is why we need to use the as? operator.

We can now update the cache once the items are loaded simply by assigning a value to the property. So in our success closure we add the following:

self.cachedItems = self.items

And then to load the cached items, we add two more lines to the end of viewDidLoad:

items = cachedItems
updatePreferredContentSize()

And we're done. Now each time the widget is opened it will first display the cached items.

There is one last thing we can do to improve our widget. As mentioned earlier, the completionHandler of widgetPerformUpdateWithCompletionHandler can also be called with NCUpdateResult.NoData. Now that we have the items that we loaded previously we can compare newly loaded items with the old and use NoData in case they haven't changed. Here is our final implementation of the success closure:

success: { feedItems in
  if self.items == nil || self.items! != feedItems {
    self.items = feedItems as? [RSSItem]
    self.tableView.reloadData()
    self.updatePreferredContentSize()
    self.cachedItems = self.items
    completionHandler(.NewData)
  } else {
    completionHandler(.NoData)
  }
},

And since it's Swift, we can simply use the != operator to see if the arrays have unequal content.

Source code on GitHub

As mentioned in the beginning of this post, the source code of the project is available on GitHub with some minor changes that are not essential to this blog post. Of course pull requests are always welcome. Also let me know in the comments below if you'd like to see this widget released on the App Store.

Making Tangible The Intangible

Sometimes the intangible obscures the tangible.

In Software Process and Measurement Cast 37, Kenji Hiranabe suggested that both software and the processes that create it are intangible and opaque. In SPaMCAST 36, Phil Armour put forth the thought that software is a container for knowledge. Knowledge is only tangible when demonstrated or, in software terms, executed. Combining both ideas means that a software product is a harness for knowledge to deliver business value, delivered using what is perceived to be an intangible process. The output of the process is only fully recognizable and testable for the brief period that the code executes. On top of all that, there is every expectation that the delivery of the product will be on-time, on-budget, have high quality and be managed and orderly. No wonder most development managers have blood pressure issues!

Intangibility creates the need for managers and customers to apply controls to understand what is happening in a project and why it is happening.  The level of control required for managers and customers to feel comfortable will cost a project time, effort and money that could be better spent actually delivering functionality (or dare I say it, reducing the cost of the project).  Therefore, finding tools and techniques to make software and the process used to create software more tangible, while at the same time more transparent to scrutiny, is generally a good goal.  I use the term “generally” on purpose. The steps taken to increase tangibility and transparency need to be less invasive than those typically seen in command and control organizations. Otherwise, why would you risk the process change?

Agile projects have leveraged wikis, standup meetings, big-picture measurements and customer involvement to increase visibility into the process, and functional code to make their knowledge visible.  I will attest that when well-defined agile processes are coupled with the proper corporate culture, an environment is created that is highly effective for breaking down walls.  But (and as you and I know, there had to be a “but”) the processes aren’t always well defined or applied with discipline, and not all organizational cultures can embrace agile methods. There needs to be another way to solve the tangibility and transparency problems without resorting to draconian command and control procedures that cost more than they are normally worth.

In his two SPaMCAST interviews, Mr. Hiranabe laid out two processes that are applicable across the perceived divide between waterfall and agile projects.  In 2007 on SPaMCAST 7, Kenji talked about mind mapping.  Mind mapping is a tool used to visualize and organize data and ideas.  Mind mapping provides a method for capturing data concepts, visualizing the relationships between them and, in doing so, making ideas and knowledge tangible.  In SPaMCAST 37, Kenji proposes a way to integrate kanban into the software development process.  According to Wikipedia,  “Kanban is a signaling system to trigger action which in Toyota Production System leverages physical cards as the signal”.  In other words, the signal is used to indicate when new tasks should start and, by inference, the status of current work.  Kenji does a great job of explaining how kanban can be used in system development.  The bottom line is that the signal, whether physical or electronic, provides a low-impact means of indicating how the development process is functioning and how functionality is flowing through the process. This increases the visibility of the process and makes it more tangible to those viewing from outside the trenches of IT.

Code that, when executed, does what was expected is the ultimate evidence that we have successfully captured knowledge and harnessed it to provide the functionality our customer requested.  The sprint demos in Scrum are a means of providing a glimpse into that knowledge and of building credibility with customers.  However, if your project is not leveraging Scrum, then daily or weekly builds with testing can be leveraged to provide some assurance that knowledge is being captured and assembled into a framework that functions the way you would expect.  You should note that demos and daily builds are not an either/or situation.  Do both!

The lack of tangibility and transparency in the process of capturing knowledge and building the knowledgeware we call software has been a sore point between developers and managers since the first line of code was written.  We are now finally getting to the point where we have recognized that we have to address these issues, not just from a command and control perspective, but also from a social engineering perspective.  Even if Agile as a movement were to disappear tomorrow, there is no retreat from integrating tools and techniques like mind mapping and kanban while embracing software engineering within the social construct of the organization and perhaps the wider world outside the organization.  Our goal is to make tangible what is intangible and visible that which is opaque.


Categories: Process Management

React In Modern Web Applications: Part 2

Xebia Blog - Fri, 09/12/2014 - 22:32

As mentioned in part 1 of this series, React is an excellent Javascript library for writing highly dynamic client-side components. However, React is not meant to be a complete MVC framework. Its strength is the View part of the Model-View-Controller pattern. AngularJS is one of the best known and most powerful Javascript MVC frameworks. One of the goals of the course was to show how to integrate React with AngularJS.

Integrating React with AngularJS

Developers can create reusable components in AngularJS by using directives. Directives are very powerful but complex, and the code quickly tends to become messy and hard to maintain. Therefore, replacing the directive content in your AngularJS application with React components is an excellent use-case for React. In the course, we created a Timesheet component in React. To use it with AngularJS, create a directive that loads the WeekEntryComponent:

angular.module('timesheet')
    .directive('weekEntry', function () {
        return {
            restrict: 'A',
            link: function (scope, element) {
                // Render the React component into the #weekEntry element,
                // passing two values from the directive scope as properties.
                React.renderComponent(new WeekEntryComponent({
                        companies: scope.companies,
                        model: scope.model
                    }), element.find('#weekEntry')[0]);
            }
        };
    });

As you can see, it's a simple bit of code. Since this is AngularJS code, I prefer not to use JSX, and write the React-specific code in plain Javascript. However, if you prefer, you could use JSX syntax (do not forget to add the directive at the top of the file!). We are basically creating a component and rendering it on the element with id weekEntry. We pass two values from the directive scope to the WeekEntryComponent as properties. The component will render based on the values passed in.

Reacting to changes in AngularJS

However, the code as shown above has one fatal flaw: the component will not re-render when the companies or model values change in the scope of the directive. The reason is simple: React does not know these values are dynamic, and the component gets rendered once when the directive is initialised. React re-renders a component in only two situations:

  • If the value of a property changes: this allows parent components to force re-rendering of child components
  • If the state object of a component changes: a component can create a state object and modify it

State should be kept to a minimum, and preferably be located in only one component. Most components should be stateless. This makes for simpler, easier to maintain code.

To make the interaction between AngularJS and React dynamic, the WeekEntryComponent must be re-rendered explicitly by AngularJS whenever a value changes in the scope. AngularJS provides the watch function for this:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(new WeekEntryComponent({
                rows: newData[1].companies,
                clients: newData[0]
              }
            ), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

In this code, AngularJS watches two values in the scope, clients and model, and when one of the values is changed by a user action, the function gets called, re-rendering the React component only when both values are defined. If most of the scope is needed from AngularJS, it is better to pass in the complete scope as a property and put a watch on the scope itself. Remember, React is very smart about re-rendering to the browser. Only if a change in the scope properties leads to a real change in the DOM will the browser be updated!

Accessing AngularJS code from React

When your component needs to use functionality defined in AngularJS, you need to use a function callback. For example, in our code, saving the timesheet is handled by AngularJS, while the save button is managed by React. We solved this by creating a saveTimesheet function and adding it to the scope of the directive. In the WeekEntryComponent we added a new property: saveMethod:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(WeekEntryComponent({
              rows: newData[1].companies,
              clients: newData[0],
              saveMethod: scope.saveTimesheet
            }), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

Since this function is not going to change, it does not need to be watched. Whenever the save button in the TimeSheetComponent is clicked, the saveTimesheet function is called and the new state of the timesheet is saved.

What's next

In the next part we will look at how to deal with state in your components and how to avoid passing callbacks through your entire component hierarchy when changing state.

Community of Practice: Killers

A community of interest needs a common focus!

Not all communities of practice (COPs) are successful, or at least they don’t stay successful.  While there can be many issues that cause a COP to fail, there are three very typical problems that kill off COPs.

  1. Poor leadership – All groups have a leader that exerts influence on the direction of the group. The best COP leaders I have observed (best being defined in terms of the health of the COP) are servant leaders.  In a community of practice, the servant leader will work to empower and serve the community. Empowerment of the COP is reflected in the removal of impediments and the coaching of the team so it meets its goals of connection, encouragement and sharing.  In COPs with a poor leader, the goals of the group generally shift towards control of the message or the aggrandizement of a specific group or person.  Earlier in my career I was involved with a local SPIN (software process improvement network) group that had existed for several years.  The SPIN group was a community of practice that drew members from 20 – 30 companies in my area. At one point a leader emerged whose goal was to generate sales leads for himself.  Membership fell precipitously before a new leader emerged and re-organized the remnants.
  2. Lack of a common interest – A group put together without a common interest reminds me of sitting in the back of a station wagon with my four siblings on long Sunday drives in the country.  Not exactly pointless, but to be avoided if possible.  A community of practice without a common area of interest isn’t a community of practice.
  3. Natural life cycle – Ideas and groups have a natural life cycle.  When a COP’s purpose passes or fades, the group should either be re-purposed or shut down. As an example, the SPIN mentioned above reached its zenith during the heyday of the CMMI and faded as that framework became less popular. I have often observed that as a COP’s original purpose wanes, the group seeks to preserve itself by finding a new purpose. Re-purposing often fails because the passion the group had for the original concept does not transfer.  Re-purposing works best when the ideas being pursued are a natural evolutionary path.  I recently observed a Scrum Master COP that was in transition. Scrum was institutionalized within the organization and there was a general feeling that the group had run its course unless something was done to energize it. The group decided to begin exploring the Scaled Agile Framework as a potential extension of their common interest in Agile project and program management.

In general, a community of practice is not an institution that lasts forever. Ideas and groups follow a natural life cycle.  COPs generally hit their zenith when members finally get the most benefit from sharing and connecting.  The amount of benefit that a member of the community perceives they get from participation is related to the passion they have for the group. As ideas and concepts become mainstream or begin to fade, the need for a COP can also fade. As passion around the idea fades, leaders can emerge that have motives other than serving the community, which hastens the breakdown of the COP. When the need for a COP begins to fade, it is generally time to disband or re-purpose the COP.


Categories: Process Management

Software Development Linkopedia September 2014

From the Editor of Methods & Tools - Thu, 09/11/2014 - 09:36
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about the software developer condition, scaling Agile, technical debt, behavior-driven development, Agile metrics, UX (user eXperience), NoSQL databases and software design. Blog: The Developer is Dead, Long Live the Developer Blog: Scaling Agile at Gilt Blog: Technical debt 101 Blog: Behaviour Driven Development: Tips for Writing Better Feature Files Article: Acceptance Criteria – Demonstrate Them by Drawing a House Article: Actionable Metrics At Siemens Health Services Article: Adapting Scrum to a ...

Xcode 6 GM & Learning Swift (with the help of Xebia)

Xebia Blog - Thu, 09/11/2014 - 07:32

I guess reiterating the announcements Apple made on Tuesday is not needed.

What is most interesting to me about everything that happened on Tuesday is the fact that iOS 8 has now reached GM status and Apple has sent out the call to bring in your iOS 8 uploads to iTunes Connect. iOS 8 is around the corner, about a week from now, bringing some great new features to the platform and ... Swift.

Swift

I was thinking about putting together a list of excellent links about Swift. But obviously somebody has done that already:
https://github.com/Wolg/awesome-swift
(And best of all, if you find/notice an epic Swift resource out there, submit a pull request to that repo, or leave a comment on this blog post.)

If you are getting started, check out:
https://github.com/nettlep/learn-swift
It's a GitHub repo filled with extra Playgrounds to learn Swift in a hands-on manner. It elaborates a bit further on the later chapters of the Swift language book.

But the best way to learn Swift I can come up with is to join Xebia for a day (or two) and attend one of our special purpose update trainings hosted by Daniel Steinberg on 6 and 7 November. More info on that:

Community of Practice: Owning The Message

Sometimes controlling the message is important!

The simplest definition of a community of practice (COP) is people connecting, encouraging each other and sharing ideas and experiences. The power of COPs is generated by the interchange between people in a way that helps both the individuals and the group achieve their goals. Who owns the message that the COP focuses on will affect how well that interchange occurs. Ownership, viewed in a black and white mode, generates two kinds of COPs. In the first type of COP, the group owns the message. In the second, the organization owns the message. “A Community of Practice: An Example” described a scenario in which the organization created a COP for a specific practice and made attendance mandatory. The inference in this scenario is that the organization is using the COP to deliver a message. The natural tendency is to view COPs in which the organization controls the message and membership as delivering less value.

Organizational ownership of a COP’s message and membership are generally viewed as anti-patterns. The problem is that ownership of the message and control of membership can impact the COP’s ability to:

  1. Connect like-minded colleagues and peers
  2. Share experiences safely if they do not conform to the organization’s message
  3. Innovate and to create new ideas that are viewed as outside-the-box.

The exercise of control will constrain the COP’s focus and, in an organization implementing concepts such as self-organizing and self-managing teams (Agile concepts), will send very mixed messages to the organization.

The focus that control generates can be used to implement, reinforce and institutionalize new ideas that are being rolled out on an organizational basis. Control of message and membership can:

  1. Accelerate learning by generating focus
  2. Validate and build on existing knowledge, the organization’s message
  3. Foster collaboration and consistency of process

In the short run, this behavior may well be a beneficial mechanism to deliver and then reinforce the organization’s message.  However, the positives that the constraints generate will quickly be overwhelmed once the new idea loses its bright and shiny status.

In organizations that use top-down process improvement methods, COPs can be used to deliver a message and then to reinforce the message as implementation progresses. However, as soon as institutionalization begins, the organization should get out of the COP control business. This does not mean that support, such as providing budget and logistics, should be withdrawn.  Support does not have to equate to control. Remember that control might be effective in the short run; however, COPs in which the message and membership are explicitly controlled will not, in the long term, be able to evolve and support their members effectively.


Categories: Process Management