Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

How NOT to Market Yourself as a Software Developer

Making the Complex Simple - John Sonmez - Mon, 06/02/2014 - 16:00

I’ve talked quite a bit about ways to market yourself as a software developer over the years (I’ve even created a course on the subject), but I haven’t really talked about how NOT to market yourself as a software developer. Arguably it is just as important to know how to NOT market yourself as it […]

The post How NOT to Market Yourself as a Software Developer appeared first on Simple Programmer.

Categories: Programming

Tips for Newbie Business Analysts – Part I

Software Requirements Blog - Seilevel.com - Mon, 06/02/2014 - 12:40
One of the pillars of employee development here at Seilevel is a robust mentorship program. Everyone at the company is assigned a mentor within a few weeks of starting. Your mentor is tasked with ensuring that you are getting the opportunities you need to grow as an employee, solicits feedback from your peers and project […]

Tips for Newbie Business Analysts – Part I is a post from: http://requirements.seilevel.com/blog

Categories: Requirements

Conversation with Dr. John Kotter

NOOP.NL - Jurgen Appelo - Mon, 06/02/2014 - 09:53

Last week I had an inspiring video chat with Dr. John P. Kotter, bestselling author of the books Leading Change and Our Iceberg is Melting. His most recent work is called Accelerate (XLR8) and I talk with Dr. Kotter about hierarchies, networks, and accelerated change.

The post Conversation with Dr. John Kotter appeared first on NOOP.NL.

Categories: Project Management

SPaMCAST 292 – Ginger Levin, Implementing Program Management

Software Process and Measurement Cast - Sun, 06/01/2014 - 22:00

Listen to the Software Process and Measurement Cast 292. SPaMCAST 292 features our interview with Dr. Ginger Levin. Dr. Levin and I discussed her book, Implementing Program Management: Templates and Forms. Dr. Levin and her co-author Allen Green wrote this go-to reference for program practitioners, colleges, universities, and those sitting for the PgMP. Ginger provides great advice for program managers who are interested in consistently delivering value to their clients.


Note: the audio is not perfect this week; however, the content is great. I hope you can stay with the interview!

Dr. Ginger Levin is a Senior Consultant and Educator in project management with over 45 years of experience. Her specialty areas are portfolio management, program management, the PMO, metrics, and maturity assessments. She is a PMP, PgMP (second in the world), and an OPM3 Certified Professional. She presents regularly at PMI Conferences and conducts numerous seminars on various topics. She is the editor, author, or co-author of 20 books focusing on program management, portfolio management, the PMO, virtual teams, and interpersonal skills, and is a book series editor for CRC Press. She has managed programs and projects of various sizes and complexity for public and private sector organizations. She is an Adjunct Professor at SKEMA University in Lille, France, in its doctoral program in project management, and also for the University of Wisconsin-Platteville in its master's program in project management. Dr. Levin received her doctorate in Information Systems Technology and Public Administration from The George Washington University and the Outstanding Dissertation Award for her research on large organizations. Please see: linkedin.com/in/gingerlevin

Buy your copy of Implementing Program Management: Templates and Forms NOW!

Thanks for the feedback on shortening the introduction of the cast this week. Please keep your feedback coming. Get in touch with us anytime or leave a comment here on the blog. Help support the SPaMCAST by reviewing and rating it on iTunes. It helps people find the cast. Like us on Facebook while you're at it.

Upcoming Events

ITMPI Webinar!
On June 3 I will be presenting the webinar titled “Rescuing a Troubled Project With Agile.” The webinar will demonstrate how Agile can be used to rescue troubled projects. You will learn how to recognize that a project is in trouble and how the discipline, focus, and transparency of Agile can promote recovery. Register now!

Upcoming DCG Webinars:
June 19 11:30 EDT – How To Split User Stories
July 24 11:30 EDT – The Impact of Cognitive Bias On Teams
Check these out at www.davidconsultinggroup.com

I look forward to seeing or hearing all SPaMCAST readers and listeners at all of these great events!

The Software Process and Measurement Cast has a sponsor.
As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and cool new equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and me and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.
Available in English and Chinese.

Categories: Process Management

An architecturally-evident coding style

Coding the Architecture - Simon Brown - Sun, 06/01/2014 - 12:51

Okay, this is the separate blog post that I referred to in Software architecture vs code. What exactly do we mean by an "architecturally-evident coding style"? I built a simple content aggregator for the local tech community here in Jersey called techtribes.je, which is basically made up of a web server, a couple of databases and a standalone Java application that is responsible for actually aggregating the content displayed on the website. You can read a little more about the software architecture at techtribes.je - containers. The following diagram is a zoom-in of the standalone content updater application, showing how it's been decomposed.

techtribes.je content updater - component diagram

This diagram says that the content updater application is made up of a number of core components (which are shown on a separate diagram for brevity) and an additional four components - a scheduled content updater, a Twitter connector, a GitHub connector and a news feed connector. This diagram shows a really nice, simple architecture view of how my standalone content updater application has been decomposed into a small number of components. "Component" is a hugely overloaded term in the software development industry, but essentially all I'm referring to is a collection of related behaviour sitting behind a nice clean interface.

Back to the "architecturally-evident coding style" and the basic premise is that the code should reflect the architecture. In other words, if I look at the code, I should be able to clearly identify each of the components that I've shown on the diagram. Since the code for techtribes.je is open source and on GitHub, you can go and take a look for yourself as to whether this is the case. And it is ... there's a je.techtribes.component package that contains sub-packages for each of the components shown on the diagram. From a technical perspective, each of these are simply Spring Beans with a public interface and a package-protected implementation. That's it; the code reflects the architecture as illustrated on the diagram.

So what about those core components then? Well, here's a diagram showing those.

techtribes.je core components

Again, this diagram shows a nice simple decomposition of the core of my techtribes.je system into coarse-grained components. And again, browsing the source code will reveal the same one-to-one mapping between boxes on the diagram and packages in the code. This requires conscious effort to do but I like the simple and explicit nature of the relationship between the architecture and the code.

When architecture and code don't match

The interesting part of this story is that while I'd always viewed my system as a collection of "components", the code didn't actually look like that. To take an example, there's a tweet component on the core components diagram, which basically provides CRUD access to tweets in a MongoDB database. The diagram suggests that it's a single black box component, but my initial implementation was very different. The following diagram illustrates why.

techtribes.je tweet component

My initial implementation of the tweet component looked like the picture on the left - I'd taken a "package by layer" approach and broken my tweet component down into a separate service and data access object. This is your stereotypical layered architecture that many (most?) books and tutorials present as a way to build (e.g.) web applications. It's also pretty much how I've built most software in the past too and I'm sure you've seen the same, especially in systems that use a dependency injection framework where we create a bunch of things in layers and wire them all together. Layered architectures have a number of benefits but they aren't a silver bullet.

This is a great example of where the code doesn't quite reflect the architecture - the tweet component is a single box on an architecture diagram but implemented as a collection of classes across a layered architecture when you look at the code. Imagine having a large, complex codebase where the architecture diagrams tell a different story from the code. The easy way to fix this is to simply redraw the core components diagram to show that it's really a layered architecture made up of services collaborating with data access objects. The result is a much more complex diagram but it also feels like that diagram is starting to show too much detail.

The other option is to change the code to match my architectural vision. And that's what I did. I reorganised the code to be packaged by component rather than packaged by layer. In essence, I merged the services and data access objects together into a single package so that I was left with a public interface and a package protected implementation. Here's the tweet component on GitHub.
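
As a rough, hypothetical sketch of what that merged package can look like - the names below are simplified placeholders rather than the actual techtribes.je source - the component interface is the only public type, while the implementation and its data access object stay package-protected:

package je.techtribes.component.tweet;

import java.util.ArrayList;
import java.util.List;

// The public interface is the component's only entry point for other packages.
public interface TweetComponent {
    List<String> recentTweets(int count);
}

// Package-protected implementation (no "public" modifier): code outside this
// package cannot reference it, so consumers are forced through TweetComponent.
class DefaultTweetComponent implements TweetComponent {

    // The data access object is now an internal detail of the component.
    private final TweetDao tweetDao = new InMemoryTweetDao();

    @Override
    public List<String> recentTweets(int count) {
        List<String> all = tweetDao.findAll();
        return all.subList(0, Math.min(count, all.size()));
    }
}

// The "layer" still exists, but only inside the package; swapping MongoDB
// for another store means changing the implementation of this interface only.
interface TweetDao {
    List<String> findAll();
}

class InMemoryTweetDao implements TweetDao {
    @Override
    public List<String> findAll() {
        List<String> tweets = new ArrayList<>();
        tweets.add("A recent tweet");
        return tweets;
    }
}

In a Spring application, the implementation would be registered as a bean, with only the interface appearing in consumers' signatures.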

But what about...

Again, there's a clean simple mapping from the diagram into the code and the code cleanly reflects the architecture. It does raise a number of interesting questions though.

  • Why aren't you using a layered architecture?
  • Where did the TweetDao interface go?
  • How do you mock out your DAO implementation to do unit testing?
  • What happens if I want to call the DAO directly?
  • What happens if you want to change the way that you store tweets?
Layers are now an implementation detail

This is still a layered architecture, it's just that the layers are now a component implementation detail rather than being first-class architectural building blocks. And that's nice, because I can think about my components as being my architecturally significant structural elements and it's these building blocks that are defined in my dependency injection framework. Something I often see in layered architectures is code bypassing a services layer to directly access a DAO or repository. These sorts of shortcuts are exactly why layered architectures often become corrupted and turn into big balls of mud. In my codebase, if any consumer wants access to tweets, they are forced to use the tweet component in its entirety because the DAO is an internal implementation detail. And because I have layers inside my component, I can still switch out my tweet data storage from MongoDB to something else. That change is still isolated.

Component testing vs unit testing

Ah, unit testing. Bundling up my tweet service and DAO into a single component makes the resulting tweet component harder to unit test because everything is package protected. Sure, it's not impossible to provide a mock implementation of the MongoDBTweetDao, but I need to jump through some hoops. The other approach is to simply not do unit testing and instead test my tweet component through its public interface. DHH recently published a blog post called Test-induced design damage and I agree with the overall message; perhaps we are breaking up our systems unnecessarily just in order to unit test them. There's very little to be gained from unit testing the various sub-parts of my tweet component in isolation, so in this case I've opted to do automated component testing instead, where I test the component as a black-box through its component interface. MongoDB is lightweight and fast, with the resulting component tests running acceptably quickly for me, even on my ageing MacBook Air. I'm not saying that you should never unit test code in isolation, and indeed there are some situations where component testing isn't feasible. For example, if you're using asynchronous and/or third party services, you probably do want the ability to provide a mock implementation for unit testing. The point is that we shouldn't blindly create designs where everything can be mocked out and unit tested in isolation.
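
For example, a component test in this style might look like the following sketch (hypothetical, reusing the placeholder names from the sketch above and assuming JUnit 4); it lives inside the component's package but exercises the component only through its public interface:

package je.techtribes.component.tweet;

import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Black-box component test: the component is exercised through its public
// interface, with the real data access code running underneath it.
public class TweetComponentTests {

    private final TweetComponent tweetComponent = new DefaultTweetComponent();

    @Test
    public void returnsAtMostTheRequestedNumberOfTweets() {
        assertTrue(tweetComponent.recentTweets(10).size() <= 10);
    }
}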

Food for thought

The purpose of this blog post was to provide some more detail around how to ensure that code reflects architecture and to illustrate an approach to do this. I like the structure imposed by forcing my codebase to reflect the architecture. It requires some discipline and thinking about how to neatly carve up the responsibilities across the codebase, but I think the effort is rewarded. It's also a nice stepping stone towards micro-services. My techtribes.je system is constructed from a number of in-process components that I treat as my architectural building blocks. The thinking behind creating a micro-services architecture is essentially the same, albeit the components (services) are running out-of-process. This isn't a silver bullet by any means, but I hope it's provided some food for thought around designing software and structuring a codebase with an architecturally-evident coding style.

Categories: Architecture

Rescuing a Troubled Project With Agile: Making the Break With Teams

Make A Break!

Early in my career I worked for a turnaround specialist.  Two lessons have stayed with me over the years. The first was that there is never a “formula” that will solve every problem.  Different Agile techniques will be needed to rescue a troubled or failing project depending on the problems the project is facing.  Second, once the turnaround process begins, everyone must understand that change has begun.  The turnaround specialist I worked for was known for making everyone in the company he was rescuing physically move their desks, with no exceptions. The intent was to let everyone know life would be different from that point forward.  In many troubled projects the implementation of fixed teams focused on a single project at a time can be a watershed event to send the message that things will be different from now on.

Using Agile as a tool to rescue a project (or program) will require ensuring that stable and properly constituted teams exist. In many troubled projects it is common to find most of the people involved working on more than one project at a time and reporting to multiple managers. Groups of specialists gather together to address slivers of project work, then hand the work off to another specialist or group of specialists. Matrixed teams find Agile techniques such as self-management and self-organization difficult. A better approach is the creation of fixed, cross-functional teams reporting to a single management chain within the organization.

An example of a type of fixed team structure is the Capability Team described by Karl Scotland (interviewed on SPaMCAST 174).

Teams

The Capability Team is formed around specific groups of organizational capabilities that deliver implementable functionality; things which will enable the business to make an impact. The team focuses on generating a flow of value based on its capabilities. These teams can stay together for as long as the capability is important, building knowledge about all aspects of what they are building and how they build it. This approach is particularly useful in rescue scenarios in which specific critical technical knowledge is limited. By drawing all of the individuals with critical technical knowledge together, they can reinforce each other and share nuances of knowledge, strengthening the whole team.

Teams are a central component of any Agile implementation. Implementing fixed, cross-functional or capability teams in environments where they are not already used will provide notice to everyone involved with the project and organization that change is occurring and that nothing will be the same. Embracing the team concept that is core to most Agile techniques will help provide the focus needed to start to get back on course.


Categories: Process Management

Neo4j Meetup Coding Dojo Style

Mark Needham - Sat, 05/31/2014 - 23:55

A few weeks ago we ran a "build your first Neo4j app" meetup in the Neo4j London office, during which we worked with the metadata around 1 million images recently released into the public domain by the British Library.

Feedback from previous meetups had indicated that attendees wanted to practice modelling a domain from scratch and understand the options for importing said model into the database. This data set seemed perfect for this purpose.

We started off by scanning the data set and coming up with some potential questions we could ask of it and then the group split in two and came up with a graph model:

Neo4j dojo

Having spent 15 minutes working on that, one person from each group explained the process they’d gone through to all attendees.

Each group took a similar approach whereby they scanned a subset of the data, sketched out all the properties and then discussed whether or not something should be a node, relationship or property in a graph model.

We then spent a bit of time tweaking the model so we had one everyone was happy with.

We split into three groups to work on the import. One group imported some of the data by generating cypher statements from Java, one imported data using py2neo and the last group imported data using the batch inserter.

You can have a look at the github repository to see what we got up to, and specifically the solution branch to see the batch inserter code and the cypher-import branch for the cypher based approach.

The approach we used throughout the session is quite similar to a Kake coding dojo – something I first tried out when I was a trainer at ThoughtWorks University.

Although there were a few setup-based things that could have been a bit slicker, I think this format worked reasonably well and we'll use something similar at the next version in a couple of weeks' time.

Feel free to come along if it sounds interesting!

Categories: Programming

Neo4j/R: Analysing London NoSQL meetup membership

Mark Needham - Sat, 05/31/2014 - 22:32

In my spare time I’ve been working on a Neo4j application that runs on top of meetup.com’s API and Nicole recently showed me how I could wire up some of the queries to use her Rneo4j library:

@markhneedham pic.twitter.com/8014jckEUl

— Nicole White (@_nicolemargaret) May 31, 2014

The query used in that visualisation shows the number of members that overlap between each pair of groups but a more interesting query is the one which shows the % overlap between groups based on the unique members across the groups.

The query is a bit more complicated than the original:

MATCH (group1:Group), (group2:Group)
OPTIONAL MATCH (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
 
WITH group1, group2, COUNT(*) as commonMembers
MATCH (group1)<-[:MEMBER_OF]-(group1Member)
 
WITH group1, group2, commonMembers, COLLECT(id(group1Member)) AS group1Members
MATCH (group2)<-[:MEMBER_OF]-(group2Member)
 
WITH group1, group2, commonMembers, group1Members, COLLECT(id(group2Member)) AS group2Members
WITH group1, group2, commonMembers, group1Members, group2Members
 
UNWIND(group1Members + group2Members) AS combinedMember
WITH DISTINCT group1, group2, commonMembers, combinedMember
 
WITH group1, group2, commonMembers, COUNT(combinedMember) AS combinedMembers
 
RETURN group1.name, group2.name, toInt(round(100.0 * commonMembers / combinedMembers)) AS percentage		 
ORDER BY group1.name, group2.name

The next step is to wire that up to use Rneo4j and ggplot2. First we’ll get the libraries installed and loaded:

install.packages("devtools")
devtools::install_github("nicolewhite/Rneo4j")
install.packages("ggplot2")
 
library(Rneo4j)
library(ggplot2)

And now we’ll execute the query and create a chart from the results:

graph = startGraph("http://localhost:7474/db/data/")
 
query = "MATCH (group1:Group), (group2:Group)
         WHERE group1 <> group2
         OPTIONAL MATCH p = (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
         WITH group1, group2, COLLECT(p) AS paths
         RETURN group1.name, group2.name, LENGTH(paths) as commonMembers
         ORDER BY group1.name, group2.name"
 
group_overlap = cypher(graph, query)
 
ggplot(group_overlap, aes(x=group1.name, y=group2.name, fill=commonMembers)) + 
geom_bin2d() +
geom_text(aes(label = commonMembers)) +
labs(x= "Group", y="Group", title="Member Group Member Overlap") +
scale_fill_gradient(low="white", high="red") +
theme(axis.text = element_text(size = 12, color = "black"),
      axis.title = element_text(size = 14, color = "black"),
      plot.title = element_text(size = 16, color = "black"),
      axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1))
 
# As a percentage
 
query = "MATCH (group1:Group), (group2:Group)
         WHERE group1 <> group2
         OPTIONAL MATCH path = (group1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(group2)
 
         WITH group1, group2, COLLECT(path) AS paths
 
         WITH group1, group2, LENGTH(paths) as commonMembers
         MATCH (group1)<-[:MEMBER_OF]-(group1Member)
 
         WITH group1, group2, commonMembers, COLLECT(id(group1Member)) AS group1Members
         MATCH (group2)<-[:MEMBER_OF]-(group2Member)
 
         WITH group1, group2, commonMembers, group1Members, COLLECT(id(group2Member)) AS group2Members
         WITH group1, group2, commonMembers, group1Members, group2Members
 
         UNWIND(group1Members + group2Members) AS combinedMember
         WITH DISTINCT group1, group2, commonMembers, combinedMember
 
         WITH group1, group2, commonMembers, COUNT(combinedMember) AS combinedMembers
 
         RETURN group1.name, group2.name, toInt(round(100.0 * commonMembers / combinedMembers)) AS percentage
 
         ORDER BY group1.name, group2.name"
 
group_overlap = cypher(graph, query)
 
ggplot(group_overlap, aes(x=group1.name, y=group2.name, fill=percentage)) + 
  geom_bin2d() +
  geom_text(aes(label = percentage)) +
  labs(x= "Group", y="Group", title="Member Group Member Overlap") +
  scale_fill_gradient(low="white", high="red") +
  theme(axis.text = element_text(size = 12, color = "black"),
        axis.title = element_text(size = 14, color = "black"),
        plot.title = element_text(size = 16, color = "black"),
        axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1))

A first glance at the visualisation suggests that the Hadoop, Data Science and Big Data groups have the most overlap which seems to make sense as they do cover quite similar topics.

Thanks to Nicole for the library and the idea of the visualisation. Now we need to do some more analysis on the data to see if there are any more interesting insights.

Categories: Programming

Thoughts on meetups

Mark Needham - Sat, 05/31/2014 - 20:50

I recently came across an interesting blog post by Zach Tellman in which he explains a new approach that he’s been trialling at The Bay Area Clojure User Group.

Zach explains that a lecture based approach isn’t necessarily the most effective way for people to learn and that half of the people attending the meetup are likely to be novices and would struggle to follow more advanced content.

He then goes on to explain an alternative approach:

We’ve been experimenting with a Clojure meetup modelled on a different academic tradition: office hours.

At a university, students who have questions about the lecture content or coursework can visit the professor and have a one-on-one conversation.

At the beginning of every meetup, we give everyone a name tag, and provide a whiteboard with two columns, “teachers” and “students”.

Attendees are encouraged to put their name and interests in both columns. From there, everyone can [...] go in search of someone from the opposite column who shares their interests.

While running Neo4j meetups we’ve had similar observations and my colleagues Stefan and Cedric actually ran a meetup in Paris a few months ago which sounds very similar to Zach’s ‘office hours’ style one.

However, we’ve also been experimenting with the idea that one size doesn’t need to fit all by running different styles of meetups aimed at different people.

For example, we have:

  • An introductory meetup which aims to get people to the point where they can follow talks about more advanced topics.
  • A more hands on session for people who want to learn how to write queries in cypher, Neo4j’s query language.
  • An advanced session for people who want to learn how to model a problem as a graph and import data into a graph.

I’m also thinking of running something similar to the Clojure Dojo but focused on data and graphs where groups of people could work together and build an app.

I noticed that Nick Manning has been doing a similar thing with the New York City Neo4j meetup as well, which is cool.

I’d be interested in hearing about different / better approaches that other people have come across so if you know of any let me know in the comments.

Categories: Programming

Capabilities Based Planning and Development

Herding Cats - Glen Alleman - Sat, 05/31/2014 - 15:29

The Death March project starts when we don't know what DONE looks like. Many of the agile approaches attempt to avoid this by exchanging not knowing for budget and time bounds. In the enterprise IT domain, those providing the money usually have a need for all the features on a specific date to meet the business goal.

An ICD-10 go-live, a new product launch enabled by new enrollment, a company-wide go-live of a new ERP system with incremental transition across divisions or sites, and maintenance support systems for newly fielded products delivered on or before the products are put into service - these are all examples of all features, on budget, on schedule.

The elicitation of the underlying technical and operational requirements has to be incremental, of course, because knowing all the requirements up front is just not possible. Even in the nth install of ERP at the nth plant, there will be new and undiscovered requirements.

It's knowing the needed Capabilities of the system that is the foundation of project success.

Here is a top level view of how to capture and use Capabilities Based Planning in enterprise IT.

Capabilities Based Planning (v2) from Glen Alleman

Related articles:
  • Practices without Principles Does Not Scale
  • Concept of Operations First, then Capabilities, then Requirements
  • Delivering Needed Capabilities
  • Getting to done!
  • Don't Start With Requirements Start With Capabilities
  • The Calculus of Writing Software for Money
  • How to Deal With Complexity In Software Projects?
  • Agile Project Management
  • The "Real" Root Cause of IT Project Failure
  • 5 Questions That Need Answers for Project Success
Categories: Project Management

Neo4j: Cypher – UNWIND vs FOREACH

Mark Needham - Sat, 05/31/2014 - 15:19

I’ve written a couple of posts about the new UNWIND clause in Neo4j’s cypher query language but I forgot about my favourite use of UNWIND, which is to get rid of some uses of FOREACH from our queries.

Let’s say we’ve created a timetree up front and now have a series of events coming in that we want to create in the database and attach to the appropriate part of the timetree.

Before UNWIND existed we might try to write the following query using FOREACH:

WITH [{name: "Event 1", timetree: {day: 1, month: 1, year: 2014}}, 
      {name: "Event 2", timetree: {day: 2, month: 1, year: 2014}}] AS events
FOREACH (event IN events | 
  CREATE (e:Event {name: event.name})
  MATCH (year:Year {year: event.timetree.year }), 
        (year)-[:HAS_MONTH]->(month {month: event.timetree.month }),
        (month)-[:HAS_DAY]->(day {day: event.timetree.day })
  CREATE (e)-[:HAPPENED_ON]->(day))

Unfortunately we can’t use MATCH inside a FOREACH statement so we’ll get the following error:

Invalid use of MATCH inside FOREACH (line 5, column 3)
"  MATCH (year:Year {year: event.timetree.year }), "
   ^
Neo.ClientError.Statement.InvalidSyntax

We can work around this by using MERGE instead in the knowledge that it’s never going to create anything because the timetree already exists:

WITH [{name: "Event 1", timetree: {day: 1, month: 1, year: 2014}}, 
      {name: "Event 2", timetree: {day: 2, month: 1, year: 2014}}] AS events
FOREACH (event IN events | 
  CREATE (e:Event {name: event.name})
  MERGE (year:Year {year: event.timetree.year })
  MERGE (year)-[:HAS_MONTH]->(month {month: event.timetree.month })
  MERGE (month)-[:HAS_DAY]->(day {day: event.timetree.day })
  CREATE (e)-[:HAPPENED_ON]->(day))

If we replace the FOREACH with UNWIND we’d get the following:

WITH [{name: "Event 1", timetree: {day: 1, month: 1, year: 2014}}, 
      {name: "Event 2", timetree: {day: 2, month: 1, year: 2014}}] AS events
UNWIND events AS event
CREATE (e:Event {name: event.name})
WITH e, event.timetree AS timetree
MATCH (year:Year {year: timetree.year }), 
      (year)-[:HAS_MONTH]->(month {month: timetree.month }),
      (month)-[:HAS_DAY]->(day {day: timetree.day })
CREATE (e)-[:HAPPENED_ON]->(day)

Although the lines of code have slightly increased, the query is now correct and we won't accidentally create new parts of our time tree.

We could also pass on the event that we created to the next part of the query, which wouldn't be possible when using FOREACH.

Categories: Programming

Get Up And Code 056: Angel Fitness Sensor With Eugene Jorov

Making the Complex Simple - John Sonmez - Sat, 05/31/2014 - 15:00

I was pretty excited to get to talk to one of the co-creators of the Angel wearable device. I’m really excited about this one because of the ability to monitor heart rate and oxygen level. Check out the episode below. Full transcript below. John: Hey everyone, welcome back to Get Up and CODE Podcast.  […]

The post Get Up And Code 056: Angel Fitness Sensor With Eugene Jorov appeared first on Simple Programmer.

Categories: Programming

Neo4j: Cypher – Neo.ClientError.Statement.ParameterMissing and neo4j-shell

Mark Needham - Sat, 05/31/2014 - 13:44

Every now and then I get sent Neo4j cypher queries to look at and more often than not they’re parameterised which means you can’t easily run them in the Neo4j browser.

For example let’s say we have a database which has a user called ‘Mark’:

CREATE (u:User {name: "Mark"})

Now we write a query to find ‘Mark’ with the name parameterised so we can easily search for a different user in future:

MATCH (u:User {name: {name}}) RETURN u

If we run that query in the Neo4j browser we’ll get this error:

Expected a parameter named name
Neo.ClientError.Statement.ParameterMissing

If we try that in neo4j-shell we’ll get the same exception to start with:

$ MATCH (u:User {name: {name}}) RETURN u;
ParameterNotFoundException: Expected a parameter named name

However, as Michael pointed out to me, the neat thing about neo4j-shell is that we can define parameters by using the export command:

$ export name="Mark"
$ MATCH (u:User {name: {name}}) RETURN u;
+-------------------------+
| u                       |
+-------------------------+
| Node[1923]{name:"Mark"} |
+-------------------------+
1 row

export is a bit sensitive to spaces, so it's best to keep them to a minimum. For example, the following tries to create the variable ‘name ’ (with a trailing space), which is invalid:

$ export name = "Mark"
name  is no valid variable name. May only contain alphanumeric characters and underscores.

The variables we create in the shell don’t have to only be primitives. We can create maps too:

$ export params={ name: "Mark" }
$ MATCH (u:User {name: {params}.name}) RETURN u;
+-------------------------+
| u                       |
+-------------------------+
| Node[1923]{name:"Mark"} |
+-------------------------+
1 row

A simple tip but one that saves me from having to rewrite queries all the time!

Categories: Programming

Clojure: Destructuring group-by’s output

Mark Needham - Sat, 05/31/2014 - 01:03

One of my favourite features of Clojure is that it allows you to destructure a data structure into values that are a bit easier to work with.

I often find myself referring to Jay Fields’ article which contains several examples showing the syntax and is a good starting point.

One recent use of destructuring I had was where I was working with a vector containing events like this:

user> (def events [{:name "e1" :timestamp 123} {:name "e2" :timestamp 456} {:name "e3" :timestamp 789}])

I wanted to split the events in two – those containing events with a timestamp greater than 123 and those less than or equal to 123.

After remembering that the function I wanted was group-by and not partition-by (I always make that mistake!) I had the following:

user> (group-by #(> (->> % :timestamp) 123) events)
{false [{:name "e1", :timestamp 123}], true [{:name "e2", :timestamp 456} {:name "e3", :timestamp 789}]}

I wanted to get 2 vectors that I could pass to the web page and this is fairly easy with destructuring:

user> (let [{upcoming true past false} (group-by #(> (->> % :timestamp) 123) events)] 
       (println upcoming) (println past))
[{:name e2, :timestamp 456} {:name e3, :timestamp 789}]
[{:name e1, :timestamp 123}]
nil

Simple!

Categories: Programming

Rescuing a Troubled Project With Agile: Agile Addresses Tactical and Strategic Project Needs

A rescue makes sense. This kid is miserable.

Projects run into trouble for an infinite number of reasons. Assuming a rescue makes sense, why does applying or reapplying Agile make sense as a rescue technique? Agile can help address all of the more common problems that cause projects to fail.

How would common Agile techniques help address these issues?

Not all of the reasons a project becomes troubled can be addressed. Sometimes the right answer is either to use other rescue techniques or to terminate the project, redeploy the assets and let the people involved do something else. For example, if a true product owner can't be found or deployed, Agile is not an appropriate rescue technique. A second example: a number of years ago the company I was working for had a project to modify the company's product delivery methods. The organization was sold to a competitor that had a different business product model that conflicted with the goal of the project. We spent a month trying to smooth out the clash of goals before shutting the project down. This was not a project that could or should have been rescued. The assessment step answers two questions. First, can or should the project be rescued? Second, what is causing the project's challenges? Once we have an idea of what is causing the problems, we can decide whether using Agile to rescue the project makes sense. From there we can decide which Agile techniques should be placed on top of our process improvement backlog.


Categories: Process Management

Testing on the Toilet: Risk-Driven Testing

Google Testing Blog - Sat, 05/31/2014 - 00:10
by Peter Arrenbrecht

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

We are all conditioned to write tests as we code: unit, functional, UI—the whole shebang. We are professionals, after all. Many of us like how small tests let us work quickly, and how larger tests inspire safety and closure. Or we may just anticipate flak during review. We are so used to these tests that often we no longer question why we write them. This can be wasteful and dangerous.

Tests are a means to an end: To reduce the key risks of a project, and to get the biggest bang for the buck. This bang may not always come from the tests that standard practice has you write, or not even from tests at all.

Two examples:

“We built a new debugging aid. We wrote unit, integration, and UI tests. We were ready to launch.”

Outstanding practice. Missing the mark.

Our key risks were that we'd corrupt our data or bring down our servers for the sake of a debugging aid. None of the tests addressed this, but they gave a false sense of safety and “being done”.
We stopped the launch.

“We wanted to turn down a feature, so we needed to alert affected users. Again we had unit and integration tests, and even one expensive end-to-end test.”

Standard practice. Wasted effort.

The alert was so critical it actually needed end-to-end coverage for all scenarios. But it would be live for only three releases. The cheapest effective test? Manual testing before each release.

A Better Approach: Risks First

For every project or feature, think about testing. Brainstorm your key risks and your best options to reduce them. Do this at the start so you don't waste effort and can adapt your design. Write them down as a QA design so you can point to it in reviews and discussions.

To be sure, standard practice remains a good idea in most cases (hence it’s standard). Small tests are cheap and speed up coding and maintenance, and larger tests safeguard core use-cases and integration.

Just remember: Your tests are a means. The bang is what counts. It’s your job to maximize it.

Categories: Testing & QA

Capitalizing on the Internet of Things: How To Succeed in a Connected World

“Learning and innovation go hand in hand. The arrogance of success is to think that what you did yesterday will be sufficient for tomorrow.” -- William Pollard

The Internet of Things is hot.  But it’s more than a trend.  It’s a new way of life (and business).

It’s transformational in every sense of the word (and world).

A colleague shared some of their most interesting finds with me, and one of them is:

Capitalizing on the Internet of Things: How To Succeed in a Connected World

Here are my key take aways:

  1. The Fourth Industrial Revolution:  The Internet of Things
  2. “For many companies, the mere prospect of remaking traditional products into smart and connected ones is daunting.  But embedding them into the digital world using services-based business models is much more fundamentally challenging.  The new business models impact core processes such as product management, operations, and production, as well as sales and channel management.”
  3. “According to the research database of the analyst firm Machina Research, there will be approx. 14 billion connected devices by 2022 – ranging from IP-enabled cars to heating systems, security cameras, sensors, and production machines.”
  4. “Managers need to envision the valuable new opportunities that become possible when the physical world is merged with the virtual world.”
  5. “The five key markets are connected buildings, automotive, utilities, smart cities, and manufacturing.”
  6. “In order to provide for the IoT’s multifaceted challenges, the most important thing to do is develop business ecosystems comparable to a coral reef, where we can find diversity of species, symbiosis, and shared development.”
  7. “IoT technologies create new ways for companies to enrich their services, gain customer insights, increase efficiency, and create differentiation opportunities.”
  8. “From what we have seen, IoT entrepreneurs also need to follow exploratory approaches as they face limited predictability and want to minimize risks, preferably in units that are small, agile, and independent.”

It’s a fast read, with nice and tight insight … my kind of style.

Enjoy.

You Might Also Like

4 Stages of Market Maturity

E-Shaped People, Not T-Shaped

Trends for 2014

Categories: Architecture, Programming

How to Deal With Complexity In Software Projects?

Herding Cats - Glen Alleman - Fri, 05/30/2014 - 18:57

In a previous post, How to Assure Your Project Will Fail, I addressed the notion that the current project management processes are obsolete; the phrase dealing with complexity on projects is a popular one in the software domain. By the way, that notion is untested, unreviewed, and missing comparable examples of it working outside the specific references in the original paper.

But here's the simplest approach to deal with project complexity...

Don't Let The Project Get Complex

Nice platitude. It's that simple, and it's that hard.

Before the gnashing of teeth is heard, here's a working example of not letting the project get complex.

So how is this possible? First let's make some assumptions:

  • If it is actually a complex project domain, then the value at risk is high.
  • If the value at risk is high, then investing in protecting that value is worthwhile.
  • This means a project governance environment is likely the right thing to do. Skimping on process is probably not the right thing. And thinking that we can get this done with a minimalist approach is probably naïve at best.
  • This also means a model of the project that reveals the complexity element. Tools provide this insight and are applied regularly on complex projects.
  • One final assumption. If the term complexity is used to describe the people aspects of the project - the providers of the solution and the requester of that solution - then that complexity has to be nipped in the bud on the first day.
    • You simply can't allow the complexities of the people aspects to undermine the technical, managerial, and financial aspects of the project.
    • This is an organizational management problem, and the processes to deal with it are also well defined - and most of the time ignored at the expense of the project's success.
    • Here's a case study of how this is done: Making the Impossible Possible. It's hard work, it's difficult, but doable.

Here are the steps to dealing with project complexity that have been shown to work in a variety of domains:

  • Define the structure of the project in a formal manner. SysML is one language for this. This includes the people, processes, and tools.
  • Define the needed capabilities in units of Measures of Effectiveness (MoE):
    • This means defining what business capabilities will be produced by the project and assigning  measures of effectiveness for each capability.
    • How do we discover these capabilities? Look at your business, project, and product strategy document. You have one, right? No? Then go get one.
    • Start with a Balanced Scorecard for your project. Better yet have a Balanced Scorecard for your business to reveal what projects will be needed to implement that strategy at the project level.
    • Here's some guidance on how to construct a Balanced Scorecard in the IT Enterprise Domain.
    • Once the Strategy is in place, apply Capabilities Based Planning. Here's an approach for using Capabilities Based Planning.
  • Define the order of delivery of these capabilities, guided by the strategy and business road map.
    • It is straightforward. Define what capabilities are needed on what dates for the business to meet its strategic needs defined in the Balanced Scorecard.
    • Don't let the notion of emergent  requirements happen in the absence of defined capabilities. You have the defined needs, the defined - monetized - benefits, the Measures of Effectiveness, Measures of Performance, and Technical Performance Measures for these capabilities.
    • Sure there are uncertainties - aleatory, protected by margins, and epistemic, protected by risk buy-down processes or management reserve.
  • Manage the development of the solutions through an Integrated Master Plan (IMP), Integrated Master Schedule (IMS), Systems Engineering Management Plan (SEMP), and the Continuous Risk Management process. The IMP provides:
    • The strategy for delivery of the needed capabilities at the planned time and for the planned cost to meet the business goals.
    • The Measures of Effectiveness (MOE) and Measures of Performance (MOP) needed to assess the fulfillment of the needed capabilities delivered to the customer.
    • Technical performance is stated in Technical Performance Measures (TPM) and Key Performance Parameters (KPP) both derived from the MOE and MOP.
    • The SEMP describes the what, who, when and why of the project.
    • There should be a similar description from the customer stating for what purpose, and when, the capabilities are needed.
    • These two descriptions can be grouped into a Concept of Operations, a Statement of Work, a Statement of Objectives, or some similar narrative.

Let's pause here for a process check. If there is no narrative about what DONE looks like in units of measure meaningful to the decision makers (MOE, MOP, TPM, KPP) then the project participants have no way to recognize  DONE other than when they run out of money and time.

This is the Yourdon definition of a Death March project. Many who use the term  complexity and complex projects are actually speaking about death march projects. We're back to the fundamental problem - we let the project become complex because we don't pay attention to the processes needed to manage the project to keep it from becoming complex. Read Yourdon and the Making the Impossible Possible: Leading Extraordinary Performance - The Rocky Flats Story books to see examples of how to keep out of this condition.

  • Next comes the execution of the project to produce the desired outcomes that deliver the needed capabilities.
    • Detailed planning may or may not be needed. This is a domain dependent decision.
    • But the naïve notion that planning is not needed is just that - naïve.
    • Planning is always needed; it just depends on the fidelity of the plans. Without planning, chaos reigns. From the DOD 5000.02 formality to the Scrum planning session, planning and plans are the strategy for the successful completion of the project.
  • All execution processes are risk reduction processes.
    • Risk Management is how adults manage projects - Tim Lister.
    • If you're working on a complex project and don't have a credible Risk Management Plan you're soon going to be working on a Death March project.
    • So the first step in managing in the presence of uncertainty is to enumerate those uncertainties - epistemic and aleatory - put them in a Risk Register, apply your favorite Risk Management Process (mine is SEI Continuous Risk Management), and deal directly with the chaotic nature of your project to get it under control.
    • Here's a summary diagram of the CRM process.
  • Finally come the measures of progress to plan.
    • With the defined capabilities, some sense of when they are needed, and how much they will cost - in a probabilistic estimating manner - we can now measure progress.
    • Agile likes to use the words we're delivering continuous value to our customers. Good stuff, can't go wrong with that.
    • What exactly are the units of measure of that value?
    • On what day do you plan to deliver those units of measure?
    • Are those deliverables connected to capabilities to do business? Or are they just requirements that have been met at the User Acceptance Test (UAT) level?
    • Here's an example from a health insurance enterprise system of the planned delivery of needed capabilities to meet the business strategy defined by the business owners. This is sometimes called value stream mapping, but it is also found in the formal Capabilities Based Planning paradigm.

The End

When you hear the notion that chaos is the basis of projects in the software world - run away as fast as you can. That is the formula for failure. When failure examples are presented in support of the notion that chaos reigns, with no actual, verifiable, tangible, correctable Root Causes listed - run away as fast as you can. Those proposing that idea have not done their homework.

But the question of dealing with complexity on projects is still open. The Black Swans, a term that gets misused in the project domain since it refers to the economics and finance domain through Taleb, may still be there. They are there because the project management processes have chosen to ignore them, can't afford to seek them out, or don't have enough understanding to realize they are actually there.

So if Black Swans are the source of worry on projects, you're not finished with your project management planning, controlling, and corrective action duties as a manager. Using project complexity as the excuse for project difficulties is easy. Anyone can do that.

Taking corrective actions to eliminate all but the Unknowable uncertainties? Now that's much harder.

Related articles:
  • Elements of Project Success
  • How To Assure Your Project Will Fail
  • We Can Know the Business Value of What We Build
  • When We Say Risk What Do We Really Mean?
  • Uncertainty is the Source of Risk
  • Both Aleatory and Epistemic Uncertainty Create Risk
  • Stakeholders in complexity
  • On The Balanced Scorecard
  • Time to Revisit The Risk Discussion
Categories: Project Management

Episode 204: Anil Madhavapeddy on the Mirage Cloud Operating System and the OCaml Language

Robert talks to Dr. Anil Madhavapeddy of the Cambridge University (UK) Systems research group about the OCaml language and the Mirage cloud operating system, a microkernel written entirely in OCaml. The outline includes: the history of the evolution from dedicated servers running a monolithic operating system, to virtualized servers based on the Xen hypervisor, to micro-kernels; […]
Categories: Programming