
Thoughts on meetups

Mark Needham - Sat, 05/31/2014 - 20:50

I recently came across an interesting blog post by Zach Tellman in which he explains a new approach that he’s been trialling at The Bay Area Clojure User Group.

Zach explains that a lecture-based approach isn’t necessarily the most effective way for people to learn, and that half of the people attending the meetup are likely to be novices who would struggle to follow more advanced content.

He then goes on to explain an alternative approach:

We’ve been experimenting with a Clojure meetup modelled on a different academic tradition: office hours.

At a university, students who have questions about the lecture content or coursework can visit the professor and have a one-on-one conversation.

At the beginning of every meetup, we give everyone a name tag, and provide a whiteboard with two columns, “teachers” and “students”.

Attendees are encouraged to put their name and interests in both columns. From there, everyone can [...] go in search of someone from the opposite column who shares their interests.

While running Neo4j meetups we’ve had similar observations and my colleagues Stefan and Cedric actually ran a meetup in Paris a few months ago which sounds very similar to Zach’s ‘office hours’ style one.

However, we’ve also been experimenting with the idea that one size doesn’t need to fit all by running different styles of meetups aimed at different people.

For example, we have:

  • An introductory meetup which aims to get people to the point where they can follow talks about more advanced topics.
  • A more hands-on session for people who want to learn how to write queries in cypher, Neo4j’s query language.
  • An advanced session for people who want to learn how to model a problem as a graph and import data into a graph.

I’m also thinking of running something similar to the Clojure Dojo but focused on data and graphs where groups of people could work together and build an app.

I noticed that Nick Manning has been doing a similar thing with the New York City Neo4j meetup as well, which is cool.

I’d be interested in hearing about different / better approaches that other people have come across so if you know of any let me know in the comments.

Categories: Programming

Capabilities Based Planning and Development

Herding Cats - Glen Alleman - Sat, 05/31/2014 - 15:29

The Death March project starts when we don't know what DONE looks like. Many of the agile approaches attempt to avoid this by exchanging not knowing for budget and time bounds. In the enterprise IT domain, those providing the money usually need all the features on a specific date to meet the business goal.

An ICD-10 go live, a new product launch enabled by new enrollment, a company-wide Go Live of a new ERP with incremental transition across divisions or sites, maintenance support systems ready on or before new fielded products go into service - these are all examples of all features, on budget, on schedule.

The elicitation of the underlying technical and operational requirements has to be incremental of course, because knowing all the requirements upfront is just not possible. Even in the nth install of ERP at the nth plant, there will be new and undiscovered requirements.

It's knowing the needed Capabilities of the system that is the foundation of project success.

Here is a top level view of how to capture and use Capabilities Based Planning in enterprise IT.

Capabilities based planning (v2) from Glen Alleman
Categories: Project Management

Neo4j: Cypher – UNWIND vs FOREACH

Mark Needham - Sat, 05/31/2014 - 15:19

I’ve written a couple of posts about the new UNWIND clause in Neo4j’s cypher query language but I forgot about my favourite use of UNWIND, which is to get rid of some uses of FOREACH from our queries.

Let’s say we’ve created a timetree up front and now have a series of events coming in that we want to create in the database and attach to the appropriate part of the timetree.

Before UNWIND existed we might try to write the following query using FOREACH:

WITH [{name: "Event 1", timetree: {day: 1, month: 1, year: 2014}}, 
      {name: "Event 2", timetree: {day: 2, month: 1, year: 2014}}] AS events
FOREACH (event IN events | 
  CREATE (e:Event {name: event.name})
  MATCH (year:Year {year: event.timetree.year }), 
        (year)-[:HAS_MONTH]->(month {month: event.timetree.month }),
        (month)-[:HAS_DAY]->(day {day: event.timetree.day })
  CREATE (e)-[:HAPPENED_ON]->(day))

Unfortunately we can’t use MATCH inside a FOREACH statement so we’ll get the following error:

Invalid use of MATCH inside FOREACH (line 5, column 3)
"  MATCH (year:Year {year: event.timetree.year }), "
   ^
Neo.ClientError.Statement.InvalidSyntax

We can work around this by using MERGE instead in the knowledge that it’s never going to create anything because the timetree already exists:

WITH [{name: "Event 1", timetree: {day: 1, month: 1, year: 2014}}, 
      {name: "Event 2", timetree: {day: 2, month: 1, year: 2014}}] AS events
FOREACH (event IN events | 
  CREATE (e:Event {name: event.name})
  MERGE (year:Year {year: event.timetree.year })
  MERGE (year)-[:HAS_MONTH]->(month {month: event.timetree.month })
  MERGE (month)-[:HAS_DAY]->(day {day: event.timetree.day })
  CREATE (e)-[:HAPPENED_ON]->(day))

If we replace the FOREACH with UNWIND we’d get the following:

WITH [{name: "Event 1", timetree: {day: 1, month: 1, year: 2014}}, 
      {name: "Event 2", timetree: {day: 2, month: 1, year: 2014}}] AS events
UNWIND events AS event
CREATE (e:Event {name: event.name})
WITH e, event.timetree AS timetree
MATCH (year:Year {year: timetree.year }), 
      (year)-[:HAS_MONTH]->(month {month: timetree.month }),
      (month)-[:HAS_DAY]->(day {day: timetree.day })
CREATE (e)-[:HAPPENED_ON]->(day)

Although the number of lines has slightly increased, the query is now correct and we won’t accidentally create new parts of our timetree.

We could also pass on the event that we created to the next part of the query which wouldn’t be the case when using FOREACH.
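For example, here’s a minimal sketch (reusing the first event from above) that carries the created event forward and returns it - something the FOREACH variants can’t do because nothing escapes the FOREACH scope:

WITH [{name: "Event 1", timetree: {day: 1, month: 1, year: 2014}}] AS events
UNWIND events AS event
CREATE (e:Event {name: event.name})
WITH e, event.timetree AS timetree
MATCH (year:Year {year: timetree.year }), 
      (year)-[:HAS_MONTH]->(month {month: timetree.month }),
      (month)-[:HAS_DAY]->(day {day: timetree.day })
CREATE (e)-[:HAPPENED_ON]->(day)
RETURN e.name AS createdEvent, day.day AS onDay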

Categories: Programming

Get Up And Code 056: Angel Fitness Sensor With Eugene Jorov

Making the Complex Simple - John Sonmez - Sat, 05/31/2014 - 15:00

I was pretty excited to get to talk to one of the co-creators of the Angel wearable device. I’m really excited about this one because of the ability to monitor heart rate and oxygen level. Check out the episode below. Full transcript below. John: Hey everyone, welcome back to Get Up and CODE Podcast. […]

The post Get Up And Code 056: Angel Fitness Sensor With Eugene Jorov appeared first on Simple Programmer.

Categories: Programming

Neo4j: Cypher – Neo.ClientError.Statement.ParameterMissing and neo4j-shell

Mark Needham - Sat, 05/31/2014 - 13:44

Every now and then I get sent Neo4j cypher queries to look at and more often than not they’re parameterised which means you can’t easily run them in the Neo4j browser.

For example let’s say we have a database which has a user called ‘Mark’:

CREATE (u:User {name: "Mark"})

Now we write a query to find ‘Mark’ with the name parameterised so we can easily search for a different user in future:

MATCH (u:User {name: {name}}) RETURN u

If we run that query in the Neo4j browser we’ll get this error:

Expected a parameter named name
Neo.ClientError.Statement.ParameterMissing

If we try that in neo4j-shell we’ll get the same exception to start with:

$ MATCH (u:User {name: {name}}) RETURN u;
ParameterNotFoundException: Expected a parameter named name

However, as Michael pointed out to me, the neat thing about neo4j-shell is that we can define parameters by using the export command:

$ export name="Mark"
$ MATCH (u:User {name: {name}}) RETURN u;
+-------------------------+
| u                       |
+-------------------------+
| Node[1923]{name:"Mark"} |
+-------------------------+
1 row

export is a bit sensitive to spaces, so it’s best to keep them to a minimum. For example, the following tries to create the variable ‘name ‘ (note the trailing space), which is invalid:

$ export name = "Mark"
name  is no valid variable name. May only contain alphanumeric characters and underscores.

The variables we create in the shell don’t have to only be primitives. We can create maps too:

$ export params={ name: "Mark" }
$ MATCH (u:User {name: {params}.name}) RETURN u;
+-------------------------+
| u                       |
+-------------------------+
| Node[1923]{name:"Mark"} |
+-------------------------+
1 row

A simple tip but one that saves me from having to rewrite queries all the time!

Categories: Programming

Clojure: Destructuring group-by’s output

Mark Needham - Sat, 05/31/2014 - 01:03

One of my favourite features of Clojure is that it allows you to destructure a data structure into values that are a bit easier to work with.

I often find myself referring to Jay Fields’ article which contains several examples showing the syntax and is a good starting point.

One recent use of destructuring I had was where I was working with a vector containing events like this:

user> (def events [{:name "e1" :timestamp 123} {:name "e2" :timestamp 456} {:name "e3" :timestamp 789}])

I wanted to split the events in two – those with a timestamp greater than 123 and those with a timestamp less than or equal to 123.

After remembering that the function I wanted was group-by and not partition-by (I always make that mistake!) I had the following:

user> (group-by #(> (->> % :timestamp) 123) events)
{false [{:name "e1", :timestamp 123}], true [{:name "e2", :timestamp 456} {:name "e3", :timestamp 789}]}

I wanted to get 2 vectors that I could pass to the web page and this is fairly easy with destructuring:

user> (let [{upcoming true past false} (group-by #(> (->> % :timestamp) 123) events)] 
       (println upcoming) (println past))
[{:name e2, :timestamp 456} {:name e3, :timestamp 789}]
[{:name e1, :timestamp 123}]
nil

Simple!
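One caveat worth knowing: group-by only creates keys for groups that actually occur, so if every event fell on one side, the other binding would be nil rather than an empty vector. A small sketch (using the same events) that guards against this with :or defaults:

user> (let [{upcoming true past false :or {upcoming [] past []}}
            (group-by #(> (->> % :timestamp) 123) events)]
        [upcoming past])
[[{:name "e2", :timestamp 456} {:name "e3", :timestamp 789}] [{:name "e1", :timestamp 123}]]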

Categories: Programming

Rescuing a Troubled Project With Agile: Agile Addresses Tactical and Strategic Project Needs

A rescue makes sense. This kid is miserable.

Projects run into trouble for an infinite number of reasons. Assuming a rescue makes sense, why does applying or reapplying Agile make sense as a rescue technique? Agile can help address all of the more common problems that cause projects to fail.

How would common Agile techniques help address these issues?


Not all of the reasons a project becomes troubled can be addressed. Sometimes the right answer is to use other rescue techniques, or to terminate the project, redeploy the assets and let the people involved do something else. For example, if a true product owner can’t be found or deployed, Agile is not an appropriate rescue technique. A second example: a number of years ago the company I was working for had a project to modify the company’s product delivery methods. The organization was sold to a competitor whose business product model conflicted with the goal of the project. We spent a month trying to smooth out the clash of goals before shutting the project down. This was not a project that could or should have been rescued.

The assessment step answers two questions. First, can or should the project be rescued? Second, what is causing the project’s challenges? Once we have an idea of what is causing the problems, we can decide whether using Agile to rescue the project makes sense. From there we can decide which Agile techniques should be placed on top of our process improvement backlog.


Categories: Process Management

Testing on the Toilet: Risk-Driven Testing

Google Testing Blog - Sat, 05/31/2014 - 00:10
by Peter Arrenbrecht

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

We are all conditioned to write tests as we code: unit, functional, UI—the whole shebang. We are professionals, after all. Many of us like how small tests let us work quickly, and how larger tests inspire safety and closure. Or we may just anticipate flak during review. We are so used to these tests that often we no longer question why we write them. This can be wasteful and dangerous.

Tests are a means to an end: To reduce the key risks of a project, and to get the biggest bang for the buck. This bang may not always come from the tests that standard practice has you write, or not even from tests at all.

Two examples:

“We built a new debugging aid. We wrote unit, integration, and UI tests. We were ready to launch.”

Outstanding practice. Missing the mark.

Our key risks were that we'd corrupt our data or bring down our servers for the sake of a debugging aid. None of the tests addressed this, but they gave a false sense of safety and “being done”.
We stopped the launch.

“We wanted to turn down a feature, so we needed to alert affected users. Again we had unit and integration tests, and even one expensive end-to-end test.”

Standard practice. Wasted effort.

The alert was so critical it actually needed end-to-end coverage for all scenarios. But it would be live for only three releases. The cheapest effective test? Manual testing before each release.

A Better Approach: Risks First

For every project or feature, think about testing. Brainstorm your key risks and your best options to reduce them. Do this at the start so you don't waste effort and can adapt your design. Write them down as a QA design so you can point to it in reviews and discussions.

To be sure, standard practice remains a good idea in most cases (hence it’s standard). Small tests are cheap and speed up coding and maintenance, and larger tests safeguard core use-cases and integration.

Just remember: Your tests are a means. The bang is what counts. It’s your job to maximize it.

Categories: Testing & QA

Capitalizing on the Internet of Things: How To Succeed in a Connected World

“Learning and innovation go hand in hand. The arrogance of success is to think that what you did yesterday will be sufficient for tomorrow.” -- William Pollard

The Internet of Things is hot.  But it’s more than a trend.  It’s a new way of life (and business.)

It’s transformational in every sense of the word (and world.)

A colleague shared some of their most interesting finds with me, and one of them is:

Capitalizing on the Internet of Things: How To Succeed in a Connected World

Here are my key take aways:

  1. The Fourth Industrial Revolution:  The Internet of Things
  2. “For many companies, the mere prospect of remaking traditional products into smart and connected ones is daunting.  But embedding them into the digital world using services-based business models is much more fundamentally challenging.  The new business models impact core processes such as product management, operations, and production, as well as sales and channel management.”
  3. “According to the research database of the analyst firm Machina Research, there will be approx. 14 billion connected devices by 2022 – ranging from IP-enabled cars to heating systems, security cameras, sensors, and production machines.”
  4. “Managers need to envision the valuable new opportunities that become possible when the physical world is merged with the virtual world.”
  5. “The five key markets are connected buildings, automotive, utilities, smart cities, and manufacturing.”
  6. “In order to provide for the IoT’s multifaceted challenges, the most important thing to do is develop business ecosystems comparable to a coral reef, where we can find diversity of species, symbiosis, and shared development.”
  7. “IoT technologies create new ways for companies to enrich their services, gain customer insights, increase efficiency, and create differentiation opportunities.”
  8. “From what we have seen, IoT entrepreneurs also need to follow exploratory approaches as they face limited predictability and want to minimize risks, preferably in units that are small, agile, and independent.”

It’s a fast read, with nice and tight insight … my kind of style.

Enjoy.


Categories: Architecture, Programming

How to Deal With Complexity In Software Projects?

Herding Cats - Glen Alleman - Fri, 05/30/2014 - 18:57

In a previous post, How to Assure Your Project Will Fail, I looked at the notion that current project management processes are obsolete; the phrase dealing with complexity on projects is a popular one in the software domain. By the way, that notion is untested, unreviewed, and missing comparable examples of it working outside the specific references in the original paper.

But here's the simplest approach to deal with project complexity...

Don't Let The Project Get Complex

Nice platitude. It's that simple and it's that hard.

Before the gnashing of teeth is heard, here's a working example of not letting the project get complex.

So how is this possible? First let's make some assumptions:

  • If it is actually a complex project domain, then the value at risk is high.
  • If the value at risk is high, then investing in protecting that value is worthwhile.
  • This means a project governance environment is likely the right thing to do. Skimping on process is probably not the right thing. And thinking that we can get this done with a minimalist approach is probably naïve at best.
  • This also means building a model of the project that reveals the complexity element. Tools provide this insight and are applied regularly on complex projects.
  • One final assumption. If the term complexity is used to describe the people aspects of the project - the providers of the solution and the requester of that solution - then that complexity has to be nipped in the bud on the first day.
    • You simply can't allow the complexities of the people aspects to undermine the technical, managerial, and financial aspects of the project.
    • This is an organizational management problem, and the processes to deal with it are also well defined - and most of the time ignored at the expense of the project's success.
    • Here's a case study of how this is done: Making the Impossible Possible. It's hard work, it's difficult, but doable.

Here are the steps to dealing with project complexity that have been shown to work in a variety of domains:

  • Define the structure of the project in a formal manner. SysML is one language for this. This includes the people, processes, and tools.
  • Define the needed capabilities in units of Measures of Effectiveness (MoE):
    • This means defining what business capabilities will be produced by the project and assigning  measures of effectiveness for each capability.
    • How do we discover these capabilities? Look at your business, project, and product strategy document. You have one, right? No? Then go get one.
    • Start with a Balanced Scorecard for your project. Better yet have a Balanced Scorecard for your business to reveal what projects will be needed to implement that strategy at the project level.
    • Here's some guidance on how to construct a Balanced Scorecard in the IT Enterprise Domain.
    • Once the Strategy is in place, apply Capabilities Based Planning. Here's an approach for using Capabilities Based Planning.
  • Define the order of delivery of these capabilities, guided by the strategy and business road map.
    • It is straightforward. Define what capabilities are needed on what dates for the business to meet its strategic needs defined in the Balanced Scorecard.
    • Don't let the notion of emergent requirements happen in the absence of defined capabilities. You have the defined needs, the defined - monetized - benefits, the Measures of Effectiveness, Measures of Performance, and Technical Performance Measures for these capabilities.
    • Sure, there are uncertainties: aleatory, protected by margins, and epistemic, protected by risk buy-down processes or management reserve.
  • Manage the development of the solutions through an Integrated Master Plan (IMP), Integrated Master Schedule (IMS), Systems Engineering Management Plan (SEMP), and the Continuous Risk Management process. The IMP provides:
    • The strategy for delivery of the needed capabilities at the planned time and for the planned cost to meet the business goals.
    • The Measures of Effectiveness (MOE) and Measures of Performance (MOP) needed to assess the fulfillment of the needed capabilities delivered to the customer.
    • Technical performance is stated in Technical Performance Measures (TPM) and Key Performance Parameters (KPP) both derived from the MOE and MOP.
    • The SEMP describes the what, who, when and why of the project.
    • There should be a similar description from the customer stating for what purpose and when the capabilities are needed.
    • These two descriptions can be grouped into a Concept of Operations, a Statement of Work, a Statement of Objectives, or some similar narrative.

Let's pause here for a process check. If there is no narrative about what DONE looks like in units of measure meaningful to the decision makers (MOE, MOP, TPM, KPP) then the project participants have no way to recognize  DONE other than when they run out of money and time.

This is the Yourdon definition of a Death March project. Many who use the term  complexity and complex projects are actually speaking about death march projects. We're back to the fundamental problem - we let the project become complex because we don't pay attention to the processes needed to manage the project to keep it from becoming complex. Read Yourdon and the Making the Impossible Possible: Leading Extraordinary Performance - The Rocky Flats Story books to see examples of how to keep out of this condition.

  • Next comes the execution of the project to produce the desired outcomes that deliver the needed capabilities.
    • Detailed planning may or may not be needed. This is a domain dependent decision.
    • But the naïve notion that planning is not needed is just that: naïve.
    • Planning is always needed; only the fidelity of the plans varies. Without planning, chaos reigns. From the DOD 5000.02 formality to the Scrum planning session, planning and plans are the strategy for the successful completion of the project.
  • All execution processes are risk reduction processes.
    • Risk Management is how adults manage projects - Tim Lister.
    • If you're working on a complex project and don't have a credible Risk Management Plan you're soon going to be working on a Death March project.
    • So the first step in managing in the presence of uncertainty is to enumerate those uncertainties - epistemic and aleatory - put them in a Risk Register, apply your favorite Risk Management Process, mine is SEI Continuous Risk Management, and deal directly with the chaotic nature of your project to get it under control.
    • Here's a summary diagram of the CRM process.
  • Finally come the measures of progress to plan.
    • With the defined capabilities, some sense of when they are needed, and how much they will cost - in a probabilistic estimating manner - we can now measure progress.
    • Agile likes to use the words we're delivering continuous value to our customers. Good stuff, can't go wrong with that.
    • What exactly are the units of measure of that value?
    • On what day do you plan to deliver those units of measure?
    • Are those deliverables connected to capabilities to do business? Or are they just requirements that have been met at the User Acceptance Test (UAT) level?
    • Here's an example from a health insurance enterprise system of the planned delivery of needed capabilities to meet the business strategy defined by the business owners. This is sometimes called value stream mapping, but it is also found in the formal Capabilities Based Planning paradigm.

The End

When you hear the notion that chaos is the basis of projects in the software world - run away as fast as you can. That is the formula for failure. When failure examples are presented in support of the notion that chaos reigns, and there are no actual, verifiable, tangible, correctable Root Causes listed - run away as fast as you can. Those proposing that idea have not done their homework.

But the question of dealing with complexity on projects is still open. The Black Swans - a term often misused in the project domain, since it comes from the economics and finance domain through Taleb - may still be there. They are there because the project management processes have chosen to ignore them, can't afford to seek them out, or don't have enough understanding to realize they are actually there.

So if Black Swans are the source of worry on projects, you're not finished with your project management planning, controlling, and corrective action duties as a manager. Using project complexity as the excuse for project difficulties is easy. Anyone can do that.

Taking corrective actions to eliminate all but the Unknowable uncertainties? Now that's much harder.

Categories: Project Management

Episode 204: Anil Madhavapeddy on the Mirage Cloud Operating System and the OCaml Language

Robert talks to Dr. Anil Madhavapeddy of the Cambridge University (UK) Systems research group about the OCaml language and the Mirage cloud operating system, a microkernel written entirely in OCaml. The outline includes: history of the evolution from dedicated servers running a monolithic operating system to virtualized servers based on the Xen hypervisor to micro-kernels; […]
Categories: Programming

Pragmatic Manager Posted: Time for a Decision

I published another Pragmatic Manager this week, Time for a Decision. It’s about program management decisions, and collaborating across the organization.

Do you receive my Pragmatic Manager emails? If you think you are on my list, but are not receiving my emails, let me know. Some of you long-time subscribers are not receiving my emails because of your hosts. I am working on that. Some of you don’t subscribe yet. You can subscribe. I write something valuable at least once a month. I post the newsletter on my site when I get around to it, later.

I hope you enjoy this newsletter. If you don’t already subscribe, I hope you decide to sign up.

Categories: Project Management

Business Analyst Tip: Template Usability

Software Requirements Blog - Seilevel.com - Fri, 05/30/2014 - 12:50
Many of us produce document templates using Microsoft Word. In addition to the heavy lifting–the content–you should consider the usability of the templates. My basic approach with templates is that they should take care of the bells and whistles once so each user doesn’t have to. Applying the tips below really doesn’t take that long, […]

Business Analyst Tip: Template Usability is a post from: http://requirements.seilevel.com/blog

Categories: Requirements

Rescuing a Troubled Project with Agile: Making the Decision to Rescue

Assessment Requires Observing The Process!

Shakespeare wrote “to be or not to be”, but when deciding whether a project requires intervention we might paraphrase the quote a bit to say “to act or not to act”. In each case we need to decide whether a project warrants or needs intervention. Not all projects viewed as troubled are in bad enough shape to require an intervention, and by intervening we might waste resources or a learning opportunity. For example, does a project that is two weeks behind schedule after a year, with an estimated year of duration remaining, warrant an intervention? What if the implementation date has been committed to in the marketplace? All of the indicators of a project under duress need to be interpreted as part of the entire organization, and at the same time through the lens of a known set of tolerances.

Assessments can be done using a variety of methods.  Each method has its own set of strengths and weaknesses.  The overall strength in using a method (any method) is that the results become less about opinions and feelings and more about facts.  There are three basic forms of assessment: model, process and quantitative.

Three Types of Assessments

 

Quantitative assessments are easily the most common of the three approaches used to identify troubled projects. This method is the most common because a majority of projects have a known budget, timeline and acceptable level of quality (even if not stated) that are reported against on status reports. The assessment process is fairly simple: has the project spent (or is it planning to spend) more than the budget? If yes, then it is in trouble. The same comparison can be made to the promised date, or to the number of defects the project has found and logged into the backlog. The bigger the difference, the bigger the hole the project will find itself in. What the quantitative assessment does not answer is why the project is in trouble and what to do about it.

Model-based assessments use industry standard frameworks such as the CMMI to look for process gaps. The process gaps can be used to explain why the project is having problems and suggest areas that need to be implemented or tuned to help the project recover. Model-based assessments are generally very formal in nature, which can require substantial effort and be quite invasive.

Process assessments look at the qualitative attributes of the targeted process. Targeting is generally done based on interviews with project leaders and stakeholders. These interviews, and the later qualitative process appraisals, require skilled interviewers who are experienced in project rescues. These methods are generally less invasive than formal model-based appraisals and can be deployed faster; however, they can be skewed by the biases of the interviewers. I suggest combining qualitative and quantitative assessments to help minimize potential biases.

Once the assessment has been completed, the final step is to determine an overall direction for the course of action. We suggest that if an intervention is required, there are only two possible strategies. The first is to reset the project (build on what has occurred using new techniques and team structures) and the second is to blow it up and start over. Again, a set of criteria should be leveraged to lessen the passion around the decision process. The following table shows an example of a set of decision criteria:

Criteria → Action

  • Is there a coherent vision of the project’s goals? → If yes, then reset.
  • Does an external estimate to complete indicate that humans can actually deliver the project? → If yes, then reset.
  • Can the organization afford what is required to deliver the project? → If yes, then reset.

If the answer to any of the criteria is “No” then stop the project. Every organization will have to develop specific criteria that fit its culture. However, recognize that trying to turn around a project that is either not feasible or is not pursuing a coherent goal will generally lead to throwing good money after bad, which is why assessing the project using a process will help an organization make a less emotional decision.


Categories: Process Management

Scrum at a Glance (Visual)

I’ve shared a Scrum Flow at a Glance before, but it was not visual.

I think it’s helpful to know how to whiteboard a simple view of an approach so that everybody can quickly get on the same page. 

Here is a simple visual of Scrum:

[Visual: a simple whiteboard view of Scrum]
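Since the image doesn’t reproduce here, a rough text rendering of the standard Scrum loop (following the roles, events and artifacts as described in The Scrum Guide):

Product Backlog -> Sprint Planning -> Sprint Backlog
    -> Sprint (fixed time box, with Daily Scrum)
    -> Potentially Shippable Increment
    -> Sprint Review + Retrospective -> (repeat)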

There are a lot of interesting tools and concepts in scrum.  The definitive guide on the roles, events, artifacts, and rules is The Scrum Guide, by Jeff Sutherland and Ken Schwaber.

I like to think of Scrum as an effective Agile project management framework for shipping incremental value.  It works by splitting big teams into smaller teams, big work into smaller work, and big time blocks into smaller time blocks.

I try to keep whiteboard visuals pretty simple so that they are easy to do on the fly, and so they are easy to modify or adjust as appropriate.

I find the visual above is pretty helpful for getting people on the same page pretty fast, to the point where they can go deeper and ask more detailed questions about Scrum, now that they have the map in mind.


Categories: Architecture, Programming

How To Beat Laziness

Making the Complex Simple - John Sonmez - Thu, 05/29/2014 - 16:00

One of the worst diseases to plague humanity is laziness. Left to myself, I am as lazy as they get, but I’ve found a way to overcome it. In this video I share how I overcome my own laziness by breaking things down into smaller pieces and making sure the end is always in sight. […]

The post How To Beat Laziness appeared first on Simple Programmer.

Categories: Programming

Software architecture vs code

Coding the Architecture - Simon Brown - Thu, 05/29/2014 - 13:01

I presented two talks last week with the title "Software architecture vs code" - first as the opening keynote for the inaugural Software Design and Development conference and also the next day as a regular conference session at GOTO Chicago. Videos from both should be available at some point and the slides are available now. The talk itself seems to polarise people, with responses ranging from “Without a doubt, Simon delivered one of the best keynotes I have seen. I got a lot from it, with plenty ‘food for thought’ moments.” through to “hmmm, meh”.

Separating software architecture from code

The basic premise of the talk is that the architecture and code of a software system never quite match up. The traditional way to communicate the architecture of a software system is with diagrams based upon a number of views ... a logical view, a functional view, a module view, a physical view, etc, etc. Philippe Kruchten's 4+1 model is an example often cited as a starting point for such approaches. I've followed these approaches in the past myself and, although I can get my head around them, I don't find them an optimal way to describe a software system. The "why?" has taken me a while to figure out, but the thing I dislike is the way in which you get an artificial separation between the architecture-related views (logical, module, functional, etc) and the code-related views (implementation, design, etc). I don't like treating the architecture and the code as two separate things, but this seems to be the starting point for many of the ways in which software systems are communicated/documented. If you want a good example of this, take a look at the first chapter of "Software Architecture in Practice" where it describes the relationship between modules, components, and component instances. It makes my head hurt.

The model-code gap

This difference between the architecture and code views is also exaggerated by what George Fairbanks calls the "model-code gap" in his book titled "Just Enough Software Architecture" (highly recommended reading, by the way). George basically says that your architecture models will include abstract concepts (e.g. components, services, modules, etc) but the code usually doesn't reflect this. This matches my own experience of helping people communicate their software systems ... people will usually draw components or services, but the actual implementation is a bunch of classes sitting inside a traditional layered architecture. Actually, if I'm being honest, this matches my own experience of building software myself because I've done the same thing! :-)

The intersection of software architecture and code

My approach to all of this is to ensure that the architecture and code views of a software system are one and the same thing, albeit from different levels of abstraction. In other words, my primary focus when describing a software system is the static structure, which ranges from code (classes) right up through components and containers. I model this with my C4 approach, which recognises that software developers are the primary stakeholders in software architecture. Other views of the software system (deployment, infrastructure, etc) slot into place really easily when you understand the static structure.

To put this all very simply, your code should reflect the architecture diagrams that you draw. If your diagrams include abstract concepts such as components, your code should reflect this. If the diagrams and code don't line up, you have to question the value of the diagrams because they're creating a fantasy and there's little point in referring to them.

Challenging the traditional layered approach

This deserves a separate blog post, but something I also mentioned during the talk was that teams should challenge the traditional layered architecture and the way that we structure our codebase. One way to achieve a nice mapping between architecture and code is to ensure that your code reflects the abstract concepts shown on your architecture diagrams, which can be achieved by writing components rather than classes in layers. Another side-effect of changing the organisation of the code is less test-induced design damage. The key question to ask here is whether layers are architecturally significant building blocks or merely an implementation detail, which should be wrapped up inside of (e.g.) components. As I said, this needs a separate blog post.
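In the meantime, here’s a rough sketch of what “writing components rather than classes in layers” could look like in Java. All names are hypothetical, and this is just one illustrative option rather than the approach from the talk verbatim:

// Package-by-layer: the "component" exists only on the diagram.
//   com.mycompany.app.web.OrderController
//   com.mycompany.app.service.OrderService
//   com.mycompany.app.repository.OrderRepository

// Package-by-component: the component is a first-class building block in the code.
// Everything below lives in com.mycompany.app.orders; only the interface is public.
public interface OrdersComponent {
    Order findOrder(String orderId);
    void placeOrder(Order order);
}

// Package-protected implementation - other components can only use the interface.
class OrdersComponentImpl implements OrdersComponent {
    private final OrdersDao dao = new OrdersDao();

    @Override
    public Order findOrder(String orderId) { return dao.load(orderId); }

    @Override
    public void placeOrder(Order order) { dao.save(order); }
}

// The data access code is an implementation detail hidden inside the component,
// rather than a class sitting in a shared "repository" layer.
class OrdersDao {
    Order load(String orderId) { /* database access would live here */ return null; }
    void save(Order order) { /* ... */ }
}

class Order { /* fields elided */ }

The layering still exists inside the component, but it becomes an implementation detail rather than the organising principle of the codebase.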

Thoughts?

As I said, the slides are here. Aligning the architecture and the code raises a whole bunch of interesting questions but provides some enormous benefits for a software development team. A clean mapping between diagrams and code makes a software system easy to explain, the impact of change becomes easier to understand and architectural refactorings can seem much less daunting if you know what you have and where you want to get to. I'm interested in your thoughts on things like the following:

  • Aligning the architecture and code - is this something you do? If so, how? If not, why not?
  • Is your codebase more than just a bunch of classes in layers? Do you follow what George Fairbanks calls an "architecturally-evident coding style"? If yes, how? If not, why not?
  • If you have any architecture documentation (e.g. diagrams), is it useful? If not, why not?

Convincing people to structure the code underlying their monolithic systems as a bunch of collaborating components seems to be a hard pill to swallow, yet micro-service architectures are going to push people to reconsider how they structure a software system, so I think this discussion is worth having. Thoughts?

Categories: Architecture

Conversation with Marshall Goldsmith

NOOP.NL - Jurgen Appelo - Thu, 05/29/2014 - 08:07

Marshall Goldsmith is the bestselling author of What Got You Here Won’t Get You There, and several other books about coaching and leadership. I talked with Marshall about Managers as Mentors, coaching versus mentoring, feedback versus feed forward, and questions that you can ask yourself each day.

The post Conversation with Marshall Goldsmith appeared first on NOOP.NL.

Categories: Project Management

Rescuing a Troubled Project with Agile: A Basic Intervention Process

Make sure the path is clear before you embark.

Rescuing a troubled project takes more than waving a magic wand, and it almost never makes sense to immediately dive in and begin making changes. As noted before, projects get in trouble one decision at a time; having a process to gather information about the project and then plan the intervention reduces the risk of adding to the burden of bad decisions.

Assess the Project: The first step is to determine whether the project is really troubled and warrants an intervention or, even more importantly, whether the project can be saved. The process for assessment will be reviewed in the next installment.

Plan the Intervention: Once you have assessed a project and determined that a rescue makes sense, the intervention needs to be planned. Use an Agile approach to planning the rescue process. That is, start by building and prioritizing a backlog based on the assessment. Using the prioritized backlog, the changes can be grouped into releases (a change release plan) that avoid a big bang approach. A big bang approach might be required if the project needs a critical level of intervention; however, it requires a lot more coordination. Just like in any Agile project, the backlog is dynamic and will change as the intervention progresses. It is important to involve the appropriate team(s) in planning the implementation because it will help to build commitment.

JumpStart℠ the Rescue: The JumpStart℠ should use the following steps:

  1. The team should stop what is to be changed cold turkey,
  2. Show the team how to do the new technique by performing the task with the team,
  3. Gather immediate feedback and tailor the process,
  4. Transfer technique ownership with just-in-time training and coaching, and
  5. Withdraw coaching as the team gains confidence.

Coach the Team: After the JumpStart℠, you need to maintain the effort to keep the project on the straight and narrow until the new set of behaviors becomes muscle memory. A coach provides a constant addition of energy into the process so that the team does not revert to old behaviors. Think about any sports team you have practiced on. The coach’s role is to support the team, but it is outside of the three standard Scrum roles. One of the primary roles of the coach (as opposed to the manager) is to provide training and feedback so the team members can improve.

Celebrate: Change is difficult and those doing the changing will require feedback to continue to change. Feedback needs to include both positive and negative components to be effective in the long term. Celebration is a component of the positive feedback.  Celebrate getting the project moving in the right direction; don’t wait for the end of the project or a release to provide positive feedback.

There are lots of troubled projects and there is a wide variety of reasons projects get in trouble. Before we intervene, we need to decide whether intervening makes sense, and if it does, then we need to make sure we have a plan. A poor or unplanned intervention can lead to project failure just as easily as doing nothing will. Most IT departments would rather avoid the black eye of a failed project. Therefore you need to have a process in place for intervention. It will pay benefits because then there will always be a bias for action.


Categories: Process Management

Google Play services 4.3

Android Developers Blog - Wed, 05/28/2014 - 21:00

Google Play services 4.3 has now been rolled out to the world, and it contains a number of features you can use to improve your apps. Specifically, this version adds some new members to the Google Play services family: Google Analytics API, Tag Manager, and the Address API. We’ve also made some great enhancements to the existing APIs; everything to make sure you stay on top of the app game out there.

Here are the highlights of the 4.3 release.


Google Analytics and Google Tag Manager

The Analytics API and Google Tag Manager have existed for Android for some time as standalone technologies, but with this release we are incorporating them as first class citizens in Google Play services. Those of you who are used to the API will find it very similar to previous versions, and if you have not used it before we strongly encourage you to take a look at it.

Google Analytics allows you to get detailed statistics on how your app is being used by your users, for example what functionality of your app is being used the most, or which activity triggers users to convert from an advertised version of an app to a paid one. Google Tag Manager lets you change characteristics of your app on-the-fly, for example colors, without having to push an update from Google Play.
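As a rough sketch of what sending data looks like with the Play services version of the API (the tracking ID is a placeholder, and the exact builder and method names may vary between releases - check the Analytics documentation):

// Get a Tracker for your Google Analytics property and record an event.
GoogleAnalytics analytics = GoogleAnalytics.getInstance(context);
Tracker tracker = analytics.newTracker("UA-XXXXXXXX-Y");  // placeholder property ID

tracker.setScreenName("MainActivity");
tracker.send(new HitBuilders.EventBuilder()
        .setCategory("ui")            // which part of the app
        .setAction("purchase_tapped") // what the user did
        .build());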


Google Play Games services Update

The furious speed of innovation in Android mobile gaming has not slowed down and neither have we when it comes to packing the Google Play Game services API with features.

With this release, we are introducing game gifts, which allow players to send virtual in-game requests to anyone in their Google+ circles or through player search. Using this feature, a player can send a 'wish' request to ask another player for an in-game item or benefit, or a 'gift' request to grant an item or benefit to another player.

This is a great way for a game to be more engaging by increasing cross-player collaboration and social connections. We are therefore glad to add this functionality as an inherent part of the Games API; it is a much-wanted extension to the multi-player functionality included a couple of releases ago. For more information, see: Unlocking the power of Google for your games.


Drive API

The Google Drive for Android API was just recently added as a member of the Google Play services API family. This release adds a number of important features:

  • Pinning - You can now pin files that should be kept up to date locally, ensuring that they are available when the user is offline. This is great for users that need to use your app with limited or no connectivity.
  • App Folders - An app often needs to create files which are not visible to the user, for example to store temporary files in a photo editor. This can now be done using App Folders, a feature analogous to Application Data Folders in the Google Drive API.
  • Change Notifications - You can now register a callback to receive notifications when a file or folder is changed. This means you no longer need to query Drive continuously to check if the data has changed, just put a change notification on it.

In addition to the above, we've also added the ability to access a number of new metadata fields.
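For example, pinning a file might look something like this sketch. It assumes an already-connected GoogleApiClient and a DriveId obtained earlier (e.g. from a query); see the Drive API docs for the exact surface:

// Ask Drive to keep this file up to date and available offline.
DriveFile file = Drive.DriveApi.getFile(googleApiClient, driveId);
MetadataChangeSet changeSet = new MetadataChangeSet.Builder()
        .setPinned(true)
        .build();

file.updateMetadata(googleApiClient, changeSet).setResultCallback(
        new ResultCallback<DriveResource.MetadataResult>() {
            @Override
            public void onResult(DriveResource.MetadataResult result) {
                // result.getStatus().isSuccess() tells us whether the pin took effect.
            }
        });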


Address API

This release also includes a new Address API, which allows developers to request access to addresses, for example to fill out a delivery address form. The kicker is the convenience for the user; a user interface component is presented where they select the desired address, and bang, the entire form is filled out. Developers have been relying on Location data, which works very well, but this API caters for cases where the Location data is either not accurate or the user actually wants to use a different address than their current physical location. This should sound great to anyone who has done any online shopping during the last decade or so.

That’s it for this time. Now go to work and incorporate these new features to make your apps even better!
And stay tuned for future updates.

For the release video, please see:
DevBytes: Google Play Services 4.3

For details on the APIs, please see:
Google Analytics
Google Tag Manager
Google Play Games services Gifts
Google Drive Android API - Change Events
Google Drive Android API - Pinning
Google Drive Android API - App Folder
Address API







Categories: Programming