Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Feed aggregator

Story Points Are Not Always A Good Metric For Project Planning

Story points?

Recently I did a webinar on User Stories for my day job. During my preparation for the webinar, I asked everyone who had registered to provide the questions they wanted to be addressed. I received a number of fantastic questions, and I felt it was important to share the answers with a broader audience.

One of the questions, from Grigory Kolesnikov, was indicative of a second group of questions:

“Which is better to use as a metric for project planning:

  1. User stories,
  2. Local self-made proxies,
  3. Function points, or
  4. Any other options?”

Given the topic of the webinar, the answer focused on whether story points were the best metric for project planning.

Size is one of the predictors of how much work will be required to deliver a project. Assuming all project attributes, with the exception of size, stay the same, a larger project will require more effort to complete than a smaller project. Therefore, knowing size is an important factor in answering questions like “how long will this take?” or “how much will this project cost?” While these questions are fraught with danger, they are always asked, and if you have to compete for work they are generally difficult not to answer. While not a perfect analogy, I do not know anyone who builds or is involved in building a home who can’t answer those questions (on either side of the transaction). Which metric you should use to plan the project depends on the type of project or program and whether you are an internal or external provider (i.e. whether you have to compete for work). Said a different way, as all good consultants know, the answer is: it depends.

User stories are very useful for both release planning and iteration planning in projects that are being done with one or a small number of stable teams. Stability matters because it allows a team to develop a common frame of reference for applying story points. When teams are unable to develop a common frame of reference (or need to redevelop it due to changes in the team), their application of story points will vary widely. A feature that might have been 5 story points in sprint 1 might be 11 in sprint 3. While this might not seem like a big shift, the variability in how the team perceives size will also show up in the team’s velocity, and velocity is used in release planning and iteration planning. The higher the variability in the team’s performance from sprint to sprint, the less predictive it becomes: if performance measured in story points (velocity) is highly variable, it will be less useful for project planning. Simply put, if you struggle to remember who is on your team on a day-to-day basis, story points are not going to be very valuable.

External providers generally have strong contractual incentives to deliver based on a set of requirements in a statement of work, RFP or some other binding document. While contracts can (and should) be tailored to address how Agile manages the flow of work through a dynamic backlog, most are not, and until accounting, purchasing and legal are brought into the world of Agile, contracts will be difficult. For example, outsourcing contracts often include performance expectations. These expectations need to be observable, understandable and independently measurable in order to be binding and to build trust. Relative measures like story points fail on this point. Story points, as noted in other posts, are also not useful for benchmarking.

Story points are not the equivalent of duct tape. You can do almost anything with duct tape. Story points are a team-based mechanism for planning sprints and releases. Teams with a revolving door for membership or projects that have specific contractual performance stipulations need to use more formal sizing tools for planning.


Categories: Process Management

An Android Wear Design Story

Android Developers Blog - Tue, 06/03/2014 - 20:41
By Roman Nurik and Timothy Jordan, Design and Developer Advocates on Android Wear

A few weeks ago, Timothy and I were chatting about designing apps for wearables to validate some of the content we’re planning for Google I/O 2014 [1]. We talked a lot about how these devices require scrutiny to preserve user attention while exposing some unique new surface areas for developers. We also discussed user context and how the apps we make should be opportunistic, presenting themselves in contexts where they’re useful; it’s more important than ever to think of apps on wearable devices not as icons on a grid but rather as functional overlays on the operating system itself.

But while I’d designed a number of touch UIs for Android in the past and Timothy had a ton of experience with Glass, neither of us had really gone through the exercise of actually designing an app for Android Wear. So we set out to put our ideas in practice and see what designing for this new platform is like.

Before we got started, we needed an idea. Last year, I participated in an informal Glass design sprint in NYC run by Nadya Direkova, and my sprint team came up with a walking tour app. The idea was you’d choose from a set of nearby tours, walk between the stops, and at each stop on the tour, learn about the destination.

My rough mocks of a walking tour app from a Glass design sprint.

While the design sprint ended at rough mocks, the idea stuck around in my mind, and came up again during this exercise. It seemed like a perfect example of a contextually aware app that could enhance your Android Wear experience.

Designing a walking tour app for Android Wear

We started fleshing out the idea by thinking through the app’s entry points: how will users “launch” this app? While exposing a “start XYZ walking tours app” voice command is pretty standard, it’d be interesting to also suggest nearby walking tours as you go about your day by presenting notifications in the user’s context stream. These notifications would be “low priority,” so you’d only see them after addressing the more important stuff like text messages from friends. And with today’s geofencing and location functionality in Google Play services, this type of contextual awareness is possible in a battery-friendly way.
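
To make the “low priority” notion concrete, here is a minimal sketch (my illustration with hypothetical class, resource and string names, not code from the actual app) of how such a contextual notification could be posted with the Android support library once a geofence around a nearby tour fires:

import android.app.Notification;
import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class TourSuggestionNotifier {

    // Called when a geofence around a tour starting point fires.
    public void suggestNearbyTour(Context context, String tourName) {
        Notification notification = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_tour) // hypothetical drawable
                .setContentTitle("Walking tour nearby")
                .setContentText(tourName)
                // PRIORITY_LOW keeps the card below more important items
                // (like incoming messages) in the Wear context stream.
                .setPriority(NotificationCompat.PRIORITY_LOW)
                .build();
        NotificationManagerCompat.from(context).notify(0, notification);
    }
}

Because the notification is posted with low priority, it waits quietly below higher-priority cards rather than interrupting the wearer.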

At this point we were pretty excited and decided to begin mocking up the UI. Rather than starting from scratch, we used Taylor Ling’s excellent Android Wear 0.1 design template as a baseline, which includes templates for both square and round devices. We started with square since we were most familiar with rectangle UI design:

Idea: You get a notification in the context stream when a walking tour is available nearby.

I’ve got to admit, it was pretty thrilling designing in such a constrained environment. 140x140 dp (280x280 px @ XHDPI) isn’t a lot of space to work with, so you need to make some tough choices about when and how to present information. But these are exactly the types of problems that make design really, really fun. You end up spending more time thinking and less time actually pushing pixels around in Photoshop or Sketch.

We pretty quickly fleshed out the rest of the app for square devices. It included just a handful of additional screens: a dynamic notification showing the distance to your next stop, and a 4-page detail screen for when you arrive at the tour stop, where you can spend a few moments reading about where you’re standing.

A notification guiding you to your next stop, and a multi-page stop detail screen for learning about the stop when you get there.

Seeing our design in real life

Here’s the thing—there’s only so much you can do in Photoshop. To truly understand a platform as a designer, you really need to use (and ideally live with) a real device, and see your work on that device. Only then can you fully evaluate the complexity of your flows, the size of your touch targets, or the legibility of your text.

Luckily, Timothy and I both had test devices—I sported an LG G Watch prototype and Timothy carried a Moto 360 prototype. We then needed a way to quickly send screens to our devices so we could iterate on the design. A few years ago I’d published the Android Design Preview tool that lets you mirror a part of your screen to a connected Android device. Much to our delight, the tool worked great with Android Wear! After seeing our mocks show up on my LG G Watch, we made a few small tweaks and felt much more confident that the overall idea “felt right” on the wrist.

Android Design Preview mirrors a part of your computer screen to an Android device. It’s especially awesome seeing your UI running on an LG G Watch prototype.

Designing for round devices

We’d never designed round UIs before, so we weren’t sure what this new adventure would be like. Quite frankly, it ended up being unbelievably easy: tweaking all 8 of our screen mocks for round took under an hour. When you’re only showing the most important 2 or 3 pieces of information on screen at a time, that’s only 2 or 3 pieces of information you need to optimize for round devices. All in all, there were only a few types of minor tweaks we made:

  • Scaled up backgrounds to 160x160 dp (320x320 px @ XHDPI)
  • Bumped up content margins from 12dp on square to 26dp on round; this means content was 116x116 dp on square and only a little smaller at 108x108 dp on round
  • Pushed down circular actions like “Continue tour” to better vertically center with the watch frame
  • Center-aligned certain short snippets of text on round devices as opposed to left-aligning on square
  • Dropped the side padding for context stream cards (the platform automatically does this for notifications, so there isn’t any actual work to do here)

These weren’t completely different layouts—rather, the same layout with slightly tweaked metrics.

It’s hard to articulate the excitement we felt when we mirrored the mocks to Timothy’s Moto 360 prototype with Android Design Preview. To put it lightly, our minds were blown.

There’s something special and awe-inspiring about seeing one of your UIs running on a round screen.

And that was it—with round and square mocks complete, and mirrored on our devices, we’d gotten our first glimpse at designing apps for this exciting new platform. Below are our completed mocks for the tour discovery and engagement flows, not a grid of app icons in sight. You can download the full PSDs here.

An eye-opening experience

Designing for Android Wear is pretty different from designing for the desktop, phones or tablets. Just like with Glass, you really need to think carefully about the information and actions you present to the user, and even more so about the contexts in which your app will come to the surface.

As a designer, that’s the fun part—working with constraints involving scarce resources like device size and user attention means it’s more important than ever to think deeply about your ideas and iterate on them early and often. The actual pixel-pushing part of the process is far, far easier.

So there we were, putting our ideas into practice, on real actual device prototypes that we could’ve only dreamed about a few years ago. It was the most fun I’ve had designing UIs in a long time. Remember that feeling when you first dreamed up an app, mocked or even coded it up, and ran it on your Android phone? It was that same feeling all over again, but amplified, because you were actually wearing your app. I can’t wait for you all to experience it!

[1] Have we mentioned #io14 will have tons of great content around both design and wearable computing? Make sure to tune in June 25th and 26th!

Categories: Programming

Why Agile?

I thought I had written about “Why Agile” before, but I don’t see anything crisp enough.

Anyway, here’s my latest rundown on Why Agile?

  1. Increase customer involvement which can build empathy and avoid rework
  2. Learn faster which means you can adapt to change
  3. Improve quality through focus
  4. Reduce risk through shorter feedback loops and customer interaction
  5. Simplify by getting rid of overhead and waste
  6. Reduce cycle time through timeboxing and parallel development
  7. Improve operational awareness through transparency
  8. Drive process improvement through continuous improvement
  9. Empower people through less mechanics and more interaction, continuous learning, and adaptation
  10. Flow more value through more frequent releases and less “big bang”

Remember that nature favors the flexible and agility is the key to success.


Categories: Architecture, Programming

Germany, Austria, Switzerland, Denmark and… Finland

NOOP.NL - Jurgen Appelo - Tue, 06/03/2014 - 17:38

Registration for Austria and Switzerland also opened last week! And we added Denmark into the mix, just for fun!

Would you like to see a video impression of the very first Management 3.0 Workout workshop in Finland? Check it out here:

The post Germany, Austria, Switzerland, Denmark and… Finland appeared first on NOOP.NL.

Categories: Project Management

How Serving Is Your Leadership?

I once worked for a manager who thought everyone should bow down and kiss his feet. Okay, I’m not sure if he actually thought that, but that’s how it felt to me. He regularly canceled his one-on-ones with me. He interrupted me when I spoke at meetings. He tried to tell the people in my group what to do. (I put a stop to that, pretty darn quick.)

He undermined my self-confidence and everything I tried to accomplish in my organization.

When I realized what was going on, I gathered my managers. At the time, I was a Director of Many Things. I said, “Our VP is very busy. I think he has too many things on his plate. Here is what I would like to do. If he interrupts your work with a request, politely acknowledge him, and say, ‘Johanna will put that in our queue. She is managing our project portfolio.’ If he interrupts you in a meeting, feel free to manage him the same way you manage me.” That got a laugh. “I am working with him on some customer issues, and I hope to resolve them soon.”

My managers and project managers kept on track with their work. We finished our deliverables, which was key to our success as an organization.

My relationship with my manager, however, deteriorated even further. In three months, he canceled every single one-on-one. He was rude to me in every public meeting. I started looking for a new job.

I found a new job, and left my two weeks’ notice on his desk. He ran down the hall, swept into my office and slammed the door. He slammed my notice on my desk and yelled at me, “I don’t accept this! You can’t do this to me. You can’t leave. You’re the only director here accomplishing anything.”

I said, “Are you ready to have a one-on-one now?”

He said, “No. I’m busy. I’m too busy for a one-on-one.”

I said, “I’m leaving. We have nothing to discuss. You can put your head in the sand and try to not accept my resignation. Or, we can make my last two weeks here useful. What would you like?”

“You’re not done with me, Rothman!”

He stalked out of my office, and slammed the door on his way out. I got up and opened the door. I was never so happy to leave a job in my entire life.

Some managers don’t realize that they are not their title. Some managers don’t realize that the value they bring is the plus: the management, plus their relationship with their peers, the people they manage, the systems and environment they enable/create. This guy had created an environment of distrust.

That’s what this month’s management myth is all about: believing that I am More Valuable Than Other People.

If you are a manager, you do provide a valuable service: servant leadership. Make sure you do so.

Categories: Project Management

Schedule vs. Cost: The Tradeoff in Agile

Mike Cohn's Blog - Tue, 06/03/2014 - 15:00

To a large extent, agile is about making tradeoffs. Product owners learn they can trade scope for schedule: get more later or less sooner. Agile projects need to strike a balance between no upfront thinking and too much upfront thinking, a subject I’ve written about before.

I want to write now about a tradeoff that isn’t talked about a lot in agile circles. And this is the tradeoff between schedule and cost. We’ve all heard the old story that nine women can’t make a baby in one month. But on software projects, it is possible (to some extent) to deliver a project faster by adding more people.

For example, suppose a project would take one person one year to do. Two people might be able to do it in six-and-a-half months. That’s one more total person-month to account for the overhead of communicating, for misunderstandings between the two, and so on. Adding a third, fourth or fifth person to the project will likely bring the calendar date in, but probably at the expense of more total person-months on the project.

Adding person-months to a project will presumably make the project more expensive to deliver. There is, then, a tradeoff to be made between schedule and cost. And most of the time, schedule wins. Companies always want things at the lowest cost they can—but not at the expense of schedule.

In an influential 1988 article in Harvard Business Review, George Stalk, Jr., declared that time was the next source of competitive advantage. Shortening development cycles so that new products are released faster is a competitive advantage—often a more important one than developing a new product at the lowest price.

This distinction is glossed over in many agile discussions. Just because a shorter schedule is more important most of the time does not mean it is more important all of the time. This difference can lead to important but subtle differences in how an agile process may be applied.

For example, standard agile advice is that designers should work as closely as possible with programmers, testers and others on the team. User interface designs are ideally done in the same sprint in which the rest of the development work will occur. Occasionally, exceptions are made for particularly complex interfaces for which a designer may be allowed to work perhaps a sprint ahead of the others.

This is optimizing for schedule. Having the user interface designer work with the team will shorten the schedule. But this could come at the expense of some rework due to design inconsistencies discovered over a number of sprints. It could also add costs when programmers sit idle while waiting for the newest design.

In an agile process optimized for cost, however, the designer would not work as little ahead as possible. The designer would instead work as far ahead as possible while avoiding the opposite problems of designing hard-to-implement ideas and designing things that are no longer needed.

The designer in the latter case of our example should not strive to create a pixel-perfect design of every screen that will be part of the system. However, if cost is a more important consideration than schedule, it may very well be best for the designer to work a couple of sprints ahead.

I’d love to know what you think. What’s more important on your projects? Schedule or cost? (No fair saying both. You cannot optimize for two things.) And, to lower cost, do your designers sometimes work further ahead than an agile process might otherwise say they should?

Is there a future for Map/Reduce?


Google’s Jeffrey Dean and Sanjay Ghemawat filed the patent request and published the map/reduce paper 10 years ago (2004). According to Wikipedia, Doug Cutting and Mike Cafarella created Hadoop, with its own implementation of Map/Reduce, one year later at Yahoo – both these implementations were done for the same purpose: batch indexing of the web.

Back then, the web began its “web 2.0” transition: pages became more dynamic and people began to create more content, so an efficient way to reprocess and build the web index was needed – and map/reduce was it. Web indexing was a great fit for map/reduce since the initial processing of each source (web page) is completely independent of any other – i.e. a very convenient map phase – and you need to combine the results to build the reverse index. That said, even the core Google algorithm – the famous PageRank – is iterative (so less appropriate for map/reduce), not to mention that as the internet got bigger and the updates became more and more frequent, map/reduce wasn’t enough. Again Google (who seem to be consistently a few years ahead of the industry) began coming up with alternatives like Google Percolator or Google Dremel (both papers were published in 2010; Percolator was introduced that year, and Dremel had been used at Google since 2006).

So now it is 2014, and it is time for the rest of us to catch up with Google and get over Map/Reduce, for multiple reasons:

  • end-users’ expectations (who hear “big data” but interpret that as “fast data”)
  • iterative problems like graph algorithms, which are inefficient under map/reduce as you need to load and reload the data on each iteration
  • continuous ingestion of data (increments coming in as small batches or streams of events), where joining to existing data can be expensive
  • real-time problems – both queries and processing

In my opinion, Map/Reduce is an idea whose time has come and gone – it won’t die in a day or a year; there are still a lot of working systems that use it and the alternatives are still maturing. I do think, however, that if you need to write or implement something new that would build on map/reduce, you should use other options, or at the very least carefully consider them.

So how is this change going to happen? Luckily, Hadoop has recently adopted YARN (you can see my presentation on it here), which opens up the possibility of going beyond map/reduce without changing everything … even though, in effect, a lot will change. Note that some of the new options have migration paths, and we still retain access to all that “big data” we have in Hadoop, as well as extended reuse of some of the ecosystem.

The first type of effort to replace map/reduce is to actually subsume it by offering more flexible batch processing. After all, saying map/reduce is not relevant doesn’t mean that batch processing is not relevant; it does mean that there’s a need for more complex processes. There are two main candidates here, Tez and Spark: Tez offers a nice migration path, as it is replacing map/reduce as the execution engine for both Pig and Hive, while Spark has a compelling offer by combining batch and stream processing (more on this later) in a single engine.
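
As a rough illustration of what “more flexible batch” looks like in practice, here is a hedged sketch of a word count using Spark’s Java API (Spark 1.x lambda signatures; the HDFS paths are hypothetical). The whole pipeline is expressed and scheduled as one program instead of as chained map/reduce jobs:

import java.util.Arrays;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local", "wordcount");

        JavaRDD<String> lines = sc.textFile("hdfs:///data/pages.txt"); // hypothetical input
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+"))) // Spark 1.x flatMap returns an Iterable
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey((a, b) -> a + b);
        counts.saveAsTextFile("hdfs:///data/word-counts"); // hypothetical output

        sc.stop();
    }
}

Because stages chain freely (joins, caching, iteration), this style generalises to the iterative and multi-step workloads that classic map/reduce handles poorly.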

The second type of effort, or processing capability, that will help kill map/reduce is MPP databases on Hadoop. Like the “flexible batch” approach mentioned above, this replaces a functionality that map/reduce was used for – unleashing the data already processed and stored in Hadoop. The idea here is twofold:

  • To provide fast query capabilities* – by using specialized columnar data format and database engines deployed as daemons on the cluster
  • To provide rich query capabilities – by supporting more and more of the SQL standard and enriching it with analytics capabilities (e.g. via MADlib)

Efforts in this arena include Impala from Cloudera, Hawq from Pivotal (which is essentially Greenplum over HDFS), startups like Hadapt, or even Actian trying to leverage their ParAccel acquisition with the recently announced Actian Vector. Hive is somewhere in the middle, relying on Tez on one hand and using vectorization and a columnar format (ORC) on the other.

The third type of processing that will help dethrone Map/Reduce is stream processing. Unlike the two previous types of effort, this covers ground that map/reduce can’t cover, even inefficiently. Stream processing is about handling a continuous flow of new data (e.g. events) and processing it (enriching, aggregating, etc.) in seconds or less. The two major contenders in the Hadoop arena seem to be Spark Streaming and Storm though, of course, there are several other commercial and open source platforms that handle this type of processing as well.
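
For a flavour of what that looks like, here is a minimal sketch (mine, not from the post) using Spark Streaming’s Java API; the socket source and port are hypothetical stand-ins for a real event feed:

import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class EventCounter {
    public static void main(String[] args) throws InterruptedException {
        // One-second micro-batches: each batch is processed as it arrives.
        JavaStreamingContext jssc =
                new JavaStreamingContext("local[2]", "event-counter", Durations.seconds(1));

        // Hypothetical source: events arriving as lines of text on a socket.
        JavaReceiverInputDStream<String> events = jssc.socketTextStream("localhost", 9999);

        // A continuously updated aggregation, produced in (near) real time
        // rather than recomputed in bulk.
        JavaDStream<Long> counts = events.count();
        counts.print();

        jssc.start();
        jssc.awaitTermination();
    }
}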

In summary – Map/Reduce is great. It has served us (as an industry) for a decade, but it is now time to move on and bring the richer processing capabilities we have elsewhere to solve our big data problems as well.

Last note – I focused on Hadoop in this post even though there are several other platforms and tools around. I think that regardless of whether Hadoop is the best platform, it is the one becoming the de-facto standard for big data (remember Betamax vs. VHS?).

One really, really last note – if you read up to here, and you are a developer living in Israel, and you happen to be looking for a job – I am looking for another developer to join my Technology Research team @ Amdocs. If you’re interested, drop me a note: arnon.rotemgaloz at amdocs dot com or via my twitter/linkedin profiles

* esp. in regard to analytical queries – operational SQL on Hadoop, with efforts like Phoenix, IBM’s BigSQL or Splice Machine, is also happening, but that’s another story

The illustration idea was found in James Mickens’s talk at Monitorama 2014 (which is, by the way, a really funny presentation – go watch it)… oh yeah, and Pulp Fiction :)

Categories: Architecture

Business Analysts need to ask “Why”?

Software Requirements Blog - Seilevel.com - Tue, 06/03/2014 - 12:21
I attended BA World – Atlanta a few weeks back, and attended a wonderful presentation by Paul Mulvey, “Why Should a Business Analyst Care About Essential Processes?“  Paul started off his session with a story about his daughter, and how she asked for a new iPhone.  His first question was “why?”  Why did she need […]

Business Analysts need to ask “Why”? is a post from: http://requirements.seilevel.com/blog

Categories: Requirements

Story Points as an Organizational Measure of Software Size

Story points make a poor organizational measure of software size.

Recently I did a webinar on User Stories for my day job as Vice President of Consulting at the David Consulting Group. During my preparation for the webinar, I asked everyone who had registered to provide the questions they wanted to be addressed. I received quite a few responses. I did my best to answer the questions; however, I thought it would be a good idea to circle back and address a number of them more formally. Several of the questions concerned using story points.

The first set of questions focused on using story points to compare teams with one another and with other organizations.

Questions Set 1: Story Points as an Organizational Measure of Software Size

Story points make a poor organizational measure of software size because they represent an individual team’s perspective and can’t be used to benchmark performance between teams or organizations.

Story points (vs. function points) are a relative measure based on the team’s perception of the size of the work. The determination of size is based on the level of understanding, how complex the work is, and how much work is required compared to other units of work. Every team will have a different perception of the size of work. For example, one team might think that adding a backup to their order entry system is fairly easy and call the work five story points, while a second team might size the same work as eight story points. Does the difference mean that the second team thinks the work is nearly twice as difficult, or does it represent a different frame of reference? Story points do not provide that level of explanatory power and should not be used in this fashion. Inferring the degree of real difficulty, or the length of time required to deliver the function, based on an outsider’s perception of the reported story point size will lead to wrong answers.

There are many published and commercially available benchmarks for function points, including the IFPUG, COSMIC, NESMA and MkII varieties (all of which are ISO standards). These benchmarks represent data collected or reported using a set of internationally published standards for sizing software. Given that story points are by definition a measure based on a specific team’s perception and not on a set of published rules, there are no industry standards for story point performance.

In order to benchmark and compare performance between groups, an organization needs to adopt a measure or metric based on a set of published and industry-accepted rules. Story points, while valuable at a team level, by definition fail on this point. Story points, as they are currently defined, can’t be used to compare between teams or organizations. Any organization that is publishing industry performance standards based on story points has either redefined story points or just does not understand what story points represent.


Categories: Process Management

New Demographic Stats in Google Play Games Services

Android Developers Blog - Mon, 06/02/2014 - 18:17

By Ben Frenkel, Google Play Games team

Hey game developers, back in March you may remember we added new game statistics in the Google Play Developer Console for those of you who had implemented Google Play Games: our cross-platform game services for Android, iOS and the web.

Starting today, we're providing more insights into how your games are being used by adding country, age, and gender dimensions to the existing set of reports available in the Developer console. You’ll see demographics integrated into Overview stats as well as the Players reports for New and Active users.

In the Overview stats you can now see highlights of activity by age group, most active countries, and gender.

With a better understanding of your users’ demographic composition, you'll be able to make more effective decisions to improve retention and monetization. Here are a few ways you could imagine using these new stats:

  • You just launched your new game globally, and expected it to do particularly well in Germany. Using country demographic data, you see that Germany is much less active than expected. After some digging, you realize that your tutorial was not properly translated into German. Based on this insight, you immediately roll out a fix to see if you can improve active users in Germany.

In the Players stats section the new metrics reveal trends in how your app is doing across age groups, countries, and gender.

  • After looking at your new demographics report, you realize that your game is really popular with women in their mid-20s. Your in-app purchase data corroborates this, showing that the one female hero character is the most popular purchase. Empowered by this data, you race to add female hero characters to your game.

Additionally, if you're already using Google Play game services, there's no extra integration needed! By logging in to the Google Play Developer Console you can start using demographics to better inform your decisions today.

Categories: Programming

How NOT to Market Yourself as a Software Developer

Making the Complex Simple - John Sonmez - Mon, 06/02/2014 - 16:00

I’ve talked quite a bit about ways to market yourself as a software developer over the years (I’ve even created a course on the subject), but I haven’t really talked about how NOT to market yourself as a software developer. Arguably it is just as important to know how to NOT market yourself as it […]

The post How NOT to Market Yourself as a Software Developer appeared first on Simple Programmer.

Categories: Programming

Tips for Newbie Business Analysts – Part I

Software Requirements Blog - Seilevel.com - Mon, 06/02/2014 - 12:40
One of the pillars of employee development here at Seilevel is a robust mentorship program. Everyone at the company is assigned a mentor within a few weeks of starting. Your mentor is tasked with ensuring that you are getting the opportunities you need to grow as an employee, solicits feedback from your peers and project […]

Tips for Newbie Business Analysts – Part I is a post from: http://requirements.seilevel.com/blog

Categories: Requirements

Conversation with Dr. John Kotter

NOOP.NL - Jurgen Appelo - Mon, 06/02/2014 - 09:53

Last week I had an inspiring video chat with Dr. John P. Kotter, bestselling author of the books Leading Change and Our Iceberg is Melting. His most recent work is called Accelerate (XLR8), and I talked with Dr. Kotter about hierarchies, networks, and accelerated change.

The post Conversation with Dr. John Kotter appeared first on NOOP.NL.

Categories: Project Management

SPaMCAST 292 – Ginger Levin, Implementing Program Management

Software Process and Measurement Cast - Sun, 06/01/2014 - 22:00

Listen to the Software Process and Measurement Cast 292. SPaMCAST 292 features our interview with Dr. Ginger Levin. Dr. Levin and I discussed her book, Implementing Program Management: Templates and Forms. Dr. Levin and her co-author Allen Green wrote a go-to reference for program practitioners, colleges, universities, and those sitting for the PgMP. Ginger provides great advice for program managers who are interested in consistently delivering value to their clients.


Note: the audio is not perfect this week; however, the content is great. I hope you can stay with the interview!

Dr. Ginger Levin is a Senior Consultant and Educator in project management with over 45 years of experience. Her specialty areas are portfolio management, program management, the PMO, metrics, and maturity assessments. She is a PMP, PgMP (second in the world), and an OPM3 Certified Professional. She presents regularly at PMI Conferences and conducts numerous seminars on various topics. She is the editor, author or co-author of 20 books focusing on program management, portfolio management, the PMO, virtual teams, and interpersonal skills and is a book series editor for CRC Press. She has managed programs and projects of various sizes and complexity for public and private sector organizations. She is an Adjunct Professor at SKEMA University in Lille, France, in its doctoral program in project management and also for the University of Wisconsin-Platteville in its masters program in project management. Dr. Levin received her doctorate in Information Systems Technology and Public Administration from The George Washington University and the Outstanding Dissertation Award for her research on large organizations. Please see: linkedin.com/in/gingerlevin

Buy your copy of Implementing Program Management: Templates and Forms NOW!

Thanks for the feedback on shortening the introduction of the cast this week. Please keep your feedback coming. Get in touch with us anytime or leave a comment here on the blog. Help support the SPaMCAST by reviewing and rating it on iTunes. It helps people find the cast. Like us on Facebook while you’re at it.

Upcoming Events

ITMPI Webinar!
On June 3 I will be presenting the webinar titled “Rescuing a Troubled Project With Agile.” The webinar will demonstrate how Agile can be used to rescue troubled projects. You will learn how to recognize that a project is in trouble and how the discipline, focus, and transparency of Agile can promote recovery. Register now!

Upcoming DCG Webinars:
June 19 11:30 EDT – How To Split User Stories
July 24 11:30 EDT – The Impact of Cognitive Bias On Teams
Check these out at www.davidconsultinggroup.com

I look forward to seeing or hearing all SPaMCAST readers and listeners at all of these great events!

The Software Process and Measurement Cast has a sponsor.
As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.
Available in English and Chinese.

Categories: Process Management

An architecturally-evident coding style

Coding the Architecture - Simon Brown - Sun, 06/01/2014 - 12:51

Okay, this is the separate blog post that I referred to in Software architecture vs code. What exactly do we mean by an "architecturally-evident coding style"? I built a simple content aggregator for the local tech community here in Jersey called techtribes.je, which is basically made up of a web server, a couple of databases and a standalone Java application that is responsible for actually aggregating the content displayed on the website. You can read a little more about the software architecture at techtribes.je - containers. The following diagram is a zoom-in of the standalone content updater application, showing how it's been decomposed.

techtribes.je content updater - component diagram

This diagram says that the content updater application is made up of a number of core components (which are shown on a separate diagram for brevity) and an additional four components - a scheduled content updater, a Twitter connector, a GitHub connector and a news feed connector. This diagram shows a really nice, simple architecture view of how my standalone content updater application has been decomposed into a small number of components. "Component" is a hugely overloaded term in the software development industry, but essentially all I'm referring to is a collection of related behaviour sitting behind a nice clean interface.

Back to the "architecturally-evident coding style" and the basic premise is that the code should reflect the architecture. In other words, if I look at the code, I should be able to clearly identify each of the components that I've shown on the diagram. Since the code for techtribes.je is open source and on GitHub, you can go and take a look for yourself as to whether this is the case. And it is ... there's a je.techtribes.component package that contains sub-packages for each of the components shown on the diagram. From a technical perspective, each of these are simply Spring Beans with a public interface and a package-protected implementation. That's it; the code reflects the architecture as illustrated on the diagram.

So what about those core components then? Well, here's a diagram showing those.

techtribes.je core components

Again, this diagram shows a nice simple decomposition of the core of my techtribes.je system into coarse-grained components. And again, browsing the source code will reveal the same one-to-one mapping between boxes on the diagram and packages in the code. This requires conscious effort, but I like the simple and explicit nature of the relationship between the architecture and the code.

When architecture and code don't match

The interesting part of this story is that while I'd always viewed my system as a collection of "components", the code didn't actually look like that. To take an example, there's a tweet component on the core components diagram, which basically provides CRUD access to tweets in a MongoDB database. The diagram suggests that it's a single black box component, but my initial implementation was very different. The following diagram illustrates why.

techtribes.je tweet component

My initial implementation of the tweet component looked like the picture on the left - I'd taken a "package by layer" approach and broken my tweet component down into a separate service and data access object. This is your stereotypical layered architecture that many (most?) books and tutorials present as a way to build (e.g.) web applications. It's also pretty much how I've built most software in the past too and I'm sure you've seen the same, especially in systems that use a dependency injection framework where we create a bunch of things in layers and wire them all together. Layered architectures have a number of benefits but they aren't a silver bullet.

This is a great example of where the code doesn't quite reflect the architecture - the tweet component is a single box on an architecture diagram but implemented as a collection of classes across a layered architecture when you look at the code. Imagine having a large, complex codebase where the architecture diagrams tell a different story from the code. The easy way to fix this is to simply redraw the core components diagram to show that it's really a layered architecture made up of services collaborating with data access objects. The result is a much more complex diagram but it also feels like that diagram is starting to show too much detail.

The other option is to change the code to match my architectural vision. And that's what I did. I reorganised the code to be packaged by component rather than packaged by layer. In essence, I merged the services and data access objects together into a single package so that I was left with a public interface and a package-protected implementation. Here's the tweet component on GitHub.

But what about...

Again, there's a clean simple mapping from the diagram into the code and the code cleanly reflects the architecture. It does raise a number of interesting questions though.

  • Why aren't you using a layered architecture?
  • Where did the TweetDao interface go?
  • How do you mock out your DAO implementation to do unit testing?
  • What happens if I want to call the DAO directly?
  • What happens if you want to change the way that you store tweets?

Layers are now an implementation detail

This is still a layered architecture, it's just that the layers are now a component implementation detail rather than being first-class architectural building blocks. And that's nice, because I can think about my components as being my architecturally significant structural elements and it's these building blocks that are defined in my dependency injection framework. Something I often see in layered architectures is code bypassing a services layer to directly access a DAO or repository. These sort of shortcuts are exactly why layered architectures often become corrupted and turn into big balls of mud. In my codebase, if any consumer wants access to tweets, they are forced to use the tweet component in its entirety because the DAO is an internal implementation detail. And because I have layers inside my component, I can still switch out my tweet data storage from MongoDB to something else. That change is still isolated.
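
As a small sketch of the consumer side (hypothetical names again), anything outside the package can only be wired against the component's interface, so there is no way to reach the DAO directly:

package je.techtribes.updater;

import je.techtribes.component.tweet.TweetComponent;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ContentUpdateJob {

    private final TweetComponent tweetComponent;

    @Autowired
    public ContentUpdateJob(TweetComponent tweetComponent) {
        // Only the interface is visible here; MongoDBTweetDao can't be
        // referenced from this package, so the layering can't be bypassed.
        this.tweetComponent = tweetComponent;
    }
}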

Component testing vs unit testing

Ah, unit testing. Bundling up my tweet service and DAO into a single component makes the resulting tweet component harder to unit test because everything is package-protected. Sure, it's not impossible to provide a mock implementation of the MongoDBTweetDao, but I need to jump through some hoops. The other approach is to simply not do unit testing and instead test my tweet component through its public interface. DHH recently published a blog post called Test-induced design damage and I agree with the overall message; perhaps we are breaking up our systems unnecessarily just in order to unit test them. There's very little to be gained from unit testing the various sub-parts of my tweet component in isolation, so in this case I've opted to do automated component testing instead, where I test the component as a black box through its component interface. MongoDB is lightweight and fast, with the resulting component tests running acceptably quickly for me, even on my ageing MacBook Air. I'm not saying that you should never unit test code in isolation, and indeed there are some situations where component testing isn't feasible. For example, if you're using asynchronous and/or third party services, you probably do want the ability to provide a mock implementation for unit testing. The point is that we shouldn't blindly create designs where everything can be mocked out and unit tested in isolation.
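
For what it's worth, a component test in this style might look like the following sketch (JUnit 4 with a hypothetical Spring test context and a seeded local MongoDB): the component is exercised as a black box through its public interface only.

import static org.junit.Assert.assertFalse;

import java.util.List;

import org.junit.Before;
import org.junit.Test;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import je.techtribes.component.tweet.Tweet;          // hypothetical domain type
import je.techtribes.component.tweet.TweetComponent; // hypothetical component interface

public class TweetComponentTests {

    private TweetComponent tweetComponent;

    @Before
    public void setUp() {
        // Wire up the real implementation, exactly as the application would.
        ApplicationContext ctx = new ClassPathXmlApplicationContext("test-context.xml"); // hypothetical config
        tweetComponent = ctx.getBean(TweetComponent.class);
    }

    @Test
    public void recentTweetsAreReturned() {
        // Assumes the test database has been seeded with a few tweets.
        List<Tweet> tweets = tweetComponent.getRecentTweets(0, 10);
        assertFalse(tweets.isEmpty());
    }
}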

Food for thought

The purpose of this blog post was to provide some more detail around how to ensure that code reflects architecture and to illustrate an approach to do this. I like the structure imposed by forcing my codebase to reflect the architecture. It requires some discipline and thinking about how to neatly carve-up the responsibilities across the codebase, but I think the effort is rewarded. It's also a nice stepping stone towards micro-services. My techtribes.je system is constructed from a number of in-process components that I treat as my architectural building blocks. The thinking behind creating a micro-services architecture is essentially the same, albeit the components (services) are running out-of-process. This isn't a silver bullet by any means, but I hope it's provided some food for thought around designing software and structuring a codebase with an architecturally-evident coding style.

Categories: Architecture

Rescuing a Troubled Project With Agile: Making the Break With Teams

Make A Break!

Early in my career I worked for a turnaround specialist.  Two lessons have stayed with me over the years. The first was that there is never a “formula” that will solve every problem.  Different Agile techniques will be needed to rescue a troubled or failing project depending on the problems the project is facing.  Second, once the turnaround process begins, everyone must understand that change has begun.  The turnaround specialist I worked for was known for making everyone in the company he was rescuing physically move their desks, with no exceptions. The intent was to let everyone know life would be different from that point forward.  In many troubled projects the implementation of fixed teams focused on a single project at a time can be a watershed event to send the message that things will be different from now on.

Using Agile as a tool to rescue a project (or program) requires ensuring that stable and properly constituted teams exist. In many troubled projects it is common to find most of the people involved working on more than one project at a time and reporting to multiple managers. Groups of specialists gather to address slivers of project work, then hand the work off to another specialist or group of specialists. Matrixed teams find Agile techniques such as self-management and self-organization difficult. A better approach is the creation of fixed, cross-functional teams reporting to a single management chain within the organization.

An example of a type of fixed team structure is the Capability Team described by Karl Scotland (interviewed on SPaMCAST 174).

Teams

The Capability Team is formed around specific groups of organizational capabilities that deliver implementable functionality; things which will enable the business to make an impact. The team focuses on generating a flow of value based on its capabilities. These teams can stay together for as long as the capability is important, building knowledge about all aspects of what they are building and how they build it. This approach is particularly useful in rescue scenarios in which specific critical technical knowledge is limited. By drawing all of the individuals with critical technical knowledge together, they can reinforce each other and share nuances of knowledge, strengthening the whole team.

Teams are a central component of any Agile implementation. Implementing fixed, cross-functional or capability teams in environments where they are not already used will put everyone involved with the project and the organization on notice that change is occurring and that nothing will be the same. Embracing the team concept that is core to most Agile techniques will help provide the focus needed to get back on course.


Categories: Process Management

Neo4j Meetup Coding Dojo Style

Mark Needham - Sat, 05/31/2014 - 23:55

A few weeks ago we ran a “build your first Neo4j app” meetup in the Neo4j London office, during which we worked with the metadata around 1 million images recently released into the public domain by the British Library.

Feedback from previous meetups had indicated that attendees wanted to practice modelling a domain from scratch and understand the options for importing said model into the database. This data set seemed perfect for this purpose.

We started off by scanning the data set and coming up with some potential questions we could ask of it, and then the group split in two and came up with a graph model:

Neo4j dojo

Having spent 15 minutes working on that, one person from each group explained the process they’d gone through to all attendees.

Each group took a similar approach whereby they scanned a subset of the data, sketched out all the properties and then discussed whether or not something should be a node, relationship or property in a graph model.

We then spent a bit of time tweaking the model so we had one everyone was happy with.

We split into three groups to work on the import. One group imported some of the data by generating cypher statements from Java, one imported data using py2neo and the last group imported data using the batch inserter.

You can have a look at the github repository to see what we got up to – specifically, the solution branch for the batch inserter code and the cypher-import branch for the cypher-based approach.

The approach we used throughout the session is quite similar to a Kake coding dojo – something I first tried out when I was a trainer at ThoughtWorks University.

Although there were a few setup-based things that could have been a bit slicker, I think this format worked reasonably well and we’ll use something similar at the next one in a couple of weeks’ time.

Feel free to come along if it sounds interesting!

Categories: Programming

Neo4j/R: Analysing London NoSQL meetup membership

Mark Needham - Sat, 05/31/2014 - 22:32

In my spare time I’ve been working on a Neo4j application that runs on top of meetup.com’s API, and Nicole recently showed me how I could wire up some of the queries to use her Rneo4j library:

@markhneedham pic.twitter.com/8014jckEUl

— Nicole White (@_nicolemargaret) May 31, 2014

The query used in that visualisation shows the number of members that overlap between each pair of groups, but a more interesting query is one which shows the % overlap between groups based on the unique members across the groups.

The query is a bit more complicated than the original:

MATCH (group1:Group), (group2:Group)
OPTIONAL MATCH (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
 
WITH group1, group2, COUNT(*) as commonMembers
MATCH (group1)<-[:MEMBER_OF]-(group1Member)
 
WITH group1, group2, commonMembers, COLLECT(id(group1Member)) AS group1Members
MATCH (group2)<-[:MEMBER_OF]-(group2Member)
 
WITH group1, group2, commonMembers, group1Members, COLLECT(id(group2Member)) AS group2Members
WITH group1, group2, commonMembers, group1Members, group2Members
 
UNWIND(group1Members + group2Members) AS combinedMember
WITH DISTINCT group1, group2, commonMembers, combinedMember
 
WITH group1, group2, commonMembers, COUNT(combinedMember) AS combinedMembers
 
RETURN group1.name, group2.name, toInt(round(100.0 * commonMembers / combinedMembers)) AS percentage
ORDER BY group1.name, group2.name

The next step is to wire that up to use Rneo4j and ggplot2. First we’ll get the libraries installed and loaded:

install.packages("devtools")
devtools::install_github("nicolewhite/Rneo4j")
install.packages("ggplot2")
 
library(Rneo4j)
library(ggplot2)

And now we’ll execute the query and create a chart from the results:

graph = startGraph("http://localhost:7474/db/data/")
 
query = "MATCH (group1:Group), (group2:Group)
         WHERE group1 <> group2
         OPTIONAL MATCH p = (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
         WITH group1, group2, COLLECT(p) AS paths
         RETURN group1.name, group2.name, LENGTH(paths) as commonMembers
         ORDER BY group1.name, group2.name"
 
group_overlap = cypher(graph, query)
 
ggplot(group_overlap, aes(x=group1.name, y=group2.name, fill=commonMembers)) + 
geom_bin2d() +
geom_text(aes(label = commonMembers)) +
labs(x= "Group", y="Group", title="Member Group Member Overlap") +
scale_fill_gradient(low="white", high="red") +
theme(axis.text = element_text(size = 12, color = "black"),
      axis.title = element_text(size = 14, color = "black"),
      plot.title = element_text(size = 16, color = "black"),
      axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1))
 
# as percentage
 
query = "MATCH (group1:Group), (group2:Group)
         WHERE group1 <> group2
         OPTIONAL MATCH path = (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
 
         WITH group1, group2, COLLECT(path) AS paths
 
         WITH group1, group2, LENGTH(paths) as commonMembers
         MATCH (group1)<-[:MEMBER_OF]-(group1Member)
 
         WITH group1, group2, commonMembers, COLLECT(id(group1Member)) AS group1Members
         MATCH (group2)<-[:MEMBER_OF]-(group2Member)
 
         WITH group1, group2, commonMembers, group1Members, COLLECT(id(group2Member)) AS group2Members
         WITH group1, group2, commonMembers, group1Members, group2Members
 
         UNWIND(group1Members + group2Members) AS combinedMember
         WITH DISTINCT group1, group2, commonMembers, combinedMember
 
         WITH group1, group2, commonMembers, COUNT(combinedMember) AS combinedMembers
 
         RETURN group1.name, group2.name, toInt(round(100.0 * commonMembers / combinedMembers)) AS percentage
 
         ORDER BY group1.name, group2.name"
 
group_overlap = cypher(graph, query)
 
ggplot(group_overlap, aes(x=group1.name, y=group2.name, fill=percentage)) + 
  geom_bin2d() +
  geom_text(aes(label = percentage)) +
  labs(x= "Group", y="Group", title="Member Group Member Overlap") +
  scale_fill_gradient(low="white", high="red") +
  theme(axis.text = element_text(size = 12, color = "black"),
        axis.title = element_text(size = 14, color = "black"),
        plot.title = element_text(size = 16, color = "black"),
        axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1))

A first glance at the visualisation suggests that the Hadoop, Data Science and Big Data groups have the most overlap, which seems to make sense as they do cover quite similar topics.

Thanks to Nicole for the library and the idea of the visualisation. Now we need to do some more analysis on the data to see if there are any more interesting insights.

Categories: Programming

Thoughts on meetups

Mark Needham - Sat, 05/31/2014 - 20:50

I recently came across an interesting blog post by Zach Tellman in which he explains a new approach that he’s been trialling at The Bay Area Clojure User Group.

Zach explains that a lecture based approach isn’t necessarily the most effective way for people to learn and that half of the people attending the meetup are likely to be novices and would struggle to follow more advanced content.

He then goes on to explain an alternative approach:

We’ve been experimenting with a Clojure meetup modelled on a different academic tradition: office hours.

At a university, students who have questions about the lecture content or coursework can visit the professor and have a one-on-one conversation.

At the beginning of every meetup, we give everyone a name tag, and provide a whiteboard with two columns, “teachers” and “students”.

Attendees are encouraged to put their name and interests in both columns. From there, everyone can [...] go in search of someone from the opposite column who shares their interests.

While running Neo4j meetups we’ve had similar observations and my colleagues Stefan and Cedric actually ran a meetup in Paris a few months ago which sounds very similar to Zach’s ‘office hours’ style one.

However, we’ve also been experimenting with the idea that one size doesn’t need to fit all by running different styles of meetups aimed at different people.

For example, we have:

  • An introductory meetup which aims to get people to the point where they can follow talks about more advanced topics.
  • A more hands on session for people who want to learn how to write queries in cypher, Neo4j’s query language.
  • An advanced session for people who want to learn how to model a problem as a graph and import data into a graph.

I’m also thinking of running something similar to the Clojure Dojo but focused on data and graphs where groups of people could work together and build an app.

I noticed that Nick Manning has been doing a similar thing with the New York City Neo4j meetup as well, which is cool.

I’d be interested in hearing about different/better approaches that other people have come across, so if you know of any, let me know in the comments.

Categories: Programming