
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Neo4j Backup: java.lang.ClassCastException: org.jboss.netty.buffer.BigEndianHeapChannelBuffer cannot be cast to org.neo4j.cluster.com.message.Message

Mark Needham - Sun, 01/19/2014 - 20:29

When using Neo4j’s online backup facility there are two ways of triggering it: the ‘single://’ syntax or the ‘ha://’ syntax, and the two behave slightly differently.

If you’re using the ‘single://’ syntax and don’t specify a port then it will connect to port 6362 by default:

./neo4j-backup -from single://192.168.1.34 -to /mnt/backup/neo4j-backup

If you’ve changed the backup port via the ‘online_backup_server’ property in conf/neo4j.properties you’ll need to set the port explicitly:

online_backup_server=192.168.1.34:6363
./neo4j-backup -from single://192.168.1.34:6363 -to /mnt/backup/neo4j-backup

If you’re using the ‘ha://’ syntax then the backup client joins the HA cluster, works out which machine is the master and then creates a backup from that machine.

In order for the backup client to join the cluster it connects to port 5001 by default:

./neo4j-backup -from ha://192.168.1.34 -to /mnt/backup/neo4j-backup

If you’ve changed the ‘ha.cluster_server’ property then you’ll need to set the port explicitly:

ha.cluster_server=192.168.1.34:5002
./neo4j-backup -from ha://192.168.1.34:5002 -to /mnt/backup/neo4j-backup

A mistake that I made when first using this utility was to use the ‘ha://’ syntax with the backup port, e.g.

./neo4j-backup -from ha://192.168.1.34:6362 -to /mnt/backup/neo4j-backup

If you do this you’ll end up with the following exception:

2014-01-19 19:24:30.842+0000 ERROR [o.n.c.c.NetworkSender]: Receive exception:
java.lang.ClassCastException: org.jboss.netty.buffer.BigEndianHeapChannelBuffer cannot be cast to org.neo4j.cluster.com.message.Message
	at org.neo4j.cluster.com.NetworkSender$NetworkMessageSender.messageReceived(NetworkSender.java:409) ~[neo4j-cluster-2.0.0.jar:2.0.0]
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) ~[netty-3.6.3.Final.jar:na]
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) ~[netty-3.6.3.Final.jar:na]
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) ~[netty-3.6.3.Final.jar:na]
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107) ~[netty-3.6.3.Final.jar:na]
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) ~[netty-3.6.3.Final.jar:na]
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88) ~[netty-3.6.3.Final.jar:na]
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) ~[netty-3.6.3.Final.jar:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_45]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_45]
	at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]

Let me know in the comments if any of this doesn’t make sense. There are lots of other examples on the neo4j-backup man page in the manual as well.

Categories: Programming

Technical Debt: Process Debt

Development Debt


Hand Drawn Chart Saturday

In Technical Debt, we noted that technical debt is a metaphor coined by Ward Cunningham to represent the work not done or the shortcuts taken when delivering a product. Cunningham uses the term to describe issues in software code. As with any good metaphor, it can stand in for shortcuts in other deliverables that affect the delivery of customer value. In fact, a quick Google search shows numerous other uses. These “extensions” of technical debt include:

  • Quality Debt,
  • Design Debt,
  • Configuration Management Debt,
  • User Experience Debt, and
  • Architectural Debt.

Before I suggest that the explosion of types of technical debt marks a jumped-the-shark moment, I’d like to propose one more extension: process debt. Process debt reflects shortcuts taken in the process of doing work that don’t end up in the code. Shortcuts in the process can affect the outcome of a sprint, a release, a whole project, or an organization’s ability to deliver value. For example, teams that abandon retrospectives are incurring process debt that could have long-term impact.

Short-term process debt can be incurred when a process is abridged due to a specific one-time incident, for example failing to update application support documentation while implementing an emergency change. In the long run, support personnel might deliver poor advice based on outdated documentation, reducing customer satisfaction. The one-time nature of the scenario suggests that the team would not continually incur the debt on purpose; however, if process debt becomes chronic, an anti-process bubble could form around the team. Consider how a leader with a poor attitude can infect a team. High quantities of process debt can reflect a kind of bad-lead syndrome that will be very difficult to remediate.

An example of process debt I recently discussed with a fellow coach occurred when a team that was supposed to check code in every evening for their nightly build took a week-long hiatus. The overall process was that code was checked in, a consolidated build of the software was run, and the build was then subjected to a set of automated smoke and regression tests. A significant amount of effort had been put into making sure the process could be executed; however, the person who oversaw the process got the flu and no one else had been trained. Without the nightly build, the code drifted enough that it could not be integrated. It took twelve days and a significant amount of rework to get to a point where a nightly build could be done again. The process debt was incurred when no backup was trained and the process stopped being used.

Peer pressure, coaching and process and product quality assurance (PPQA) audits are tools for finding, avoiding and, when needed, remediating process debt.  In all three cases someone, whether a fellow team member or an outsider, will need to look at how the work has been done in order to understand how the process was followed.

Process debt equates to shortcuts in the use of a process that can impact a team, a project, project deliverables, or the long-term ability of the organization to deliver value to its customers. Process debt is not experimentation a team does consciously in order to improve, nor is it a change, adopted during a retrospective, to a standard process that does not fit the team’s context. Process debt is incurred when teams abandon a process without a better process to replace it. It stands as a worthy extension of the technical debt metaphor, reminding us that shortcuts in how we do our work can have ramifications even if the code is not affected.


Categories: Process Management

Get Up And Code 37: Food Politics With Marion Nestle

Making the Complex Simple - John Sonmez - Sat, 01/18/2014 - 17:30

Pretty excited about this episode. Iris and I had the rare honor of getting to interview Marion Nestle in this episode of Get Up and CODE. Marion is the Paulette Goddard Professor in the Department of Nutrition, Food Studies, and Public Health and Professor of Sociology at New York University. She’s well known and highly […]

The post Get Up And Code 37: Food Politics With Marion Nestle appeared first on Simple Programmer.

Categories: Programming

Basic Tuples & Pattern Matching

Phil Trelford's Array - Sat, 01/18/2014 - 14:42

Over the last couple of weeks I’ve been building my own parser, interpreter and compiler for Small Basic, a dialect of BASIC with only 14 keywords aimed at beginners. Despite, or perhaps because of, Small Basic’s simplicity some really fun programs have been developed, from games like Tetris and 3D Maze to a parser for the language itself.

Small Basic provides primitive types for numbers, strings and associative arrays. There is no syntax provided for structures, but these can be easily modelled with associative arrays. For example, a 3D point can be constructed with named items or ordinals:

Named items:

Point["X"] = 1.0
Point["Y"] = 2.0
Point["Z"] = 3.0

Ordinals:

Point[0] = 1.0
Point[1] = 2.0
Point[2] = 3.0

In languages like Erlang and Python this could be more concisely expressed as a tuple:

Erlang:

Point = {1.0, 2.0, 3.0}

Python:

point = (1.0, 2.0, 3.0)

In fact sophisticated Erlang programs are built entirely from tuples and lists; there is no explicit class or inheritance syntax in the language. Messages can be easily expressed with tuples, and behaviour via pattern matching.

Alan Kay, inventor of the Smalltalk language, has said:

The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about messages.

In Erlang a hierarchy of shapes can simply be modelled using tuples with atoms for names:

Circle = { circle, 5.0 }
Square = { square, 7.0 }
Rectangle = { rectangle, 10.0, 5.0 }

The area of a shape can be expressed using pattern matching:

area(Shape) ->
  case Shape of
    { circle, R } -> math:pi() * R * R;
    { square, W } -> W * W;
    { rectangle, W, H } -> W * H
  end.

Select Case

The Visual Basic family’s Select Case functionality is quite rich, richer than the switch/case statements of the mainstream C dialects (Java, C# and C++), which only match literals.

In Visual Basic it is already possible to match values with literals, conditions or ranges:

Select Case agerange
  Case Is < 16
    MsgBox("Minor")
  Case 16 To 21
    MsgBox("Still Young")
  Case 50 To 64
    MsgBox("Start Lying")
  Case Is > 65
    MsgBox("Be Yourself") 
  Case Else
    MsgBox("Inbetweeners")
End Select

Given that Select Case in VB is already quite expressive, support for tuples and pattern matching over them would feel quite natural in the language.

Extended Small Basic

To this end I have extended my Small Basic parser and compiler implementation with tuple and pattern matching support.

Tuples

Inspiration for construction and deconstruction was taken from F# and Python:

F#:

let person = ("Phil", 27)
let (name, age) = person

Python:

person = ("Phil", 27)
name, age = person

So that tuples use explicit parentheses in the extended Small Basic implementation:

Person = ("Phil", 27)
(Name, Age) = Person

Internally tuples are represented using Small Basic’s built-in associative arrays.

Pattern Matching

First I implemented VB’s Select Case statement, which is not hugely dissimilar to parsing and compiling Small Basic’s If/ElseIf/Else statements.

Then I extended Select Case to support matching tuples with similar functionality to F#:

F#:

let xor x y =
  match (x,y) with
  | (1,1) -> 0
  | (1,0) -> 1
  | (0,1) -> 1
  | (0,0) -> 0

Extended Small Basic:

Function Xor(a,b)
  Select Case (a,b)
    Case (1,1)
      Xor = 0
    Case (1,0)
      Xor = 1
    Case (0,1)
      Xor = 1
    Case (0,0)
      Xor = 0
  EndSelect
EndFunction

Constructing, deconstructing and matching nested tuples is also supported.

Example

Putting it all together, FizzBuzz can now be expressed in my extended Small Basic implementation with functions, tuples and pattern matching:

Function Mod(Dividend,Divisor)
  Mod = Dividend
  While Mod >= Divisor
    Mod = Mod - Divisor
  EndWhile
EndFunction

Sub Echo(s)
  TextWindow.WriteLine(s)
EndSub

For A = 1 To 100 ' Iterate from 1 to 100
  Select Case (Mod(A,3),Mod(A,5))
    Case (0,0)
      Echo("FizzBuzz")
    Case (0,_)
      Echo("Fizz")
    Case (_,0)
      Echo("Buzz")
    Case Else
      Echo(A)
  EndSelect
EndFor

Conclusions

Extending Small Basic with first-class support for tuples was relatively easy, and the result feels quite natural in the language. It provides object orientated programming without the need for a verbose class syntax. I think this is something that would probably work pretty well in other BASIC dialects, including Visual Basic.

Source code is available on BitBucket: https://bitbucket.org/ptrelford/smallbasiccompiler

Categories: Programming

Technical Debt: Transference?

A dark alley…

If someone were to approach you on a dark foggy night as you stood below an underpowered streetlight and ask if you wanted some “technical debt,” I am sure the answer would be an emphatic NO. While there are reasons we might intentionally accept technical debt, any development or support team would be better off if they could avoid being shouldered with it. One strategy to get rid of it is to transfer ownership of the technical debt to someone else. Buying packaged software can do this: COTS (commercial off-the-shelf software) or SAAS (software as a service) can be used as a strategy to avoid owning technical debt.

When organizations buy a supported package or purchase SAAS, they avoid developing specific software development experience (who wants to code an accounting package if you can buy one?), shorten the time from needs-recognition to using the software, and avoid having to keep a staff to maintain it. In short, the day-to-day management of technical debt becomes someone else’s problem . . . sort of. One of the laws of software (or at least it should be) is that all software includes accrued technical debt. Some portion of the initial cost and of the annual support or subscription cost of the software pays for the management and maintenance of that technical debt.

Most software companies do not explicitly publish measures of the technical debt in their software; therefore, as a user of the software you are more or less blind to the level of technical debt in the package. Why does it matter? The level of technical debt can still affect your organization even when you don’t “own” the debt. Technical debt in COTS or SAAS impacts ease of installation, ease of interfacing with your legacy systems, support costs, the existence of latent defects and, maybe most importantly, function agility. Function agility represents how quickly new features and functions can be rolled into the software. The longer it takes to develop and integrate new features, the lower the function agility of a software package. The level of technical debt in a package or in SAAS can only be inferred by observing these areas of performance. Review performance in these areas when you are doing your due diligence, and remember that since technical debt tends to build up, performance generally gets worse before it gets better. And of course, changing software packages is very similar to trying to switch horses while riding (only done by trained stuntmen in the movies, not in real life). So, try to get it right the first time.

A simple cost analysis for determining whether to shoulder the technical debt or to transfer it elsewhere would use the following criteria (assuming the original build/buy decision has been made).

Costs to internally own the technical debt:

  1. Support Staff Costs                 _____
  2. Support Staff Overhead              _____
  3. Support Staff Tools                 _____
  4. Support Staff Training              _____
  A. Total Costs                         _____

Costs of someone else owning the technical debt:

  1. Support Cost / Subscription Cost    _____
  2. Support Staff Costs                 _____
  3. Support Staff Overhead              _____
  4. Support Staff Tools                 _____
  5. Support Staff Training              _____
  B. Total Costs                         _____

Net Cost (A – B)                         _____
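The worksheet boils down to a single subtraction. As an illustration only (every figure below is invented for the example, not taken from the post), the comparison might be coded as:

```python
# Hypothetical line items; plug in your own figures.
own = {                        # A: internally owning the debt
    "support_staff": 120_000,
    "overhead": 30_000,
    "tools": 10_000,
    "training": 8_000,
}
transfer = {                   # B: someone else owning the debt
    "subscription": 90_000,
    "residual_staff": 40_000,  # interfacing with legacy systems, first-line triage
    "overhead": 10_000,
    "tools": 2_000,
    "training": 3_000,
}

a_total = sum(own.values())
b_total = sum(transfer.values())
net = a_total - b_total  # positive means transferring the debt looks cheaper
print(a_total, b_total, net)  # 168000 145000 23000
```

Note that the intangibles discussed below, such as function agility, sit outside this arithmetic and still need to be weighed.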

This is a relatively simple view of the support costs. Note that there will more than likely be some residual support staff costs to interface the new package with legacy systems or to act as a help desk that initially triages problems. Intangibles, some of which are affected by technical debt, such as function agility, will need to be factored into the decision process.

Transferring the servicing and management of technical debt by buying packages or using SAAS can make sense. In many cases COTS and SAAS can reduce the amount organizations spend on support and maintenance, including technical debt. A cost-benefit analysis can easily be done to verify whether this is true. What is less obvious is whether the software’s owner will manage the level of technical debt well. If technical debt is managed poorly, the ability of the package to keep pace with the necessary functionality may suffer. Caveat emptor; do your due diligence and look before you leap.


Categories: Process Management

Stuff The Internet Says On Scalability For January 17th, 2014

Hey, it's HighScalability time:


From the stunning Scale of the Universe - Interactive Flash Animation
  • $7 trillion: US spend on patrolling oil sea-lanes; 82 billion: files served by MaxCDN in 5 months
  • Quotable Quotes: 
    • @StephenFleming: "Money doesn’t solve scaling problems, but the actual solutions to scaling problems always cost money." http://daringfireball.net/2014/01/googles_acquisition_of_nest
    • David Rosenthal: Robert Puttnam in Making Democracy Work and Bowling Alone has shown the vast difference in economic success between high-trust and low-trust societies.
    • @kylefox: That's a huge advantage of SaaS businesses: you can be liberal with refunds & goodwill credits w/o impacting the bottom line much.
    • Thomas B. Roberts: That’s the essence of science: Ask the impertinent question, and you are on your way to pertinent science.
    • Benjamin K. Bergen: Simulation is an iceberg. By consciously reflecting, as you just have been doing, you can see the tip—the intentional, conscious imagery. But many of the same brain processes are engaged, invisibly and unbeknownst to you, beneath the surface during much of your waking and sleeping life. Simulation is the creation of mental experiences of perception and action in the absence of their external manifestation.

  • Urbane apps are the future: 80% of the world’s population will live in cities by 2045.

  • Knossos: Redis and linearizability. Kyle Kingsbury delivers an amazingly in-depth, model-based analysis of "a hypothetical linearizable system built on top of Redis WAIT and a strong coordinator." The lesson: don't get Kyle mad.

  • If a dead startup had a spirit, this is what it would look like: About Everpix. A truly fine memorial. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

How Not To Develop What "Done" Looks Like

Herding Cats - Glen Alleman - Fri, 01/17/2014 - 04:17

The common picture of requirements elicitation looks like this; it is, of course, an example of doing stupid things on purpose. Using the picture as an example of what not to do, because things didn't turn out right, is a further example of doing stupid things.

Swing Project

Let's see where the gaps appear that result in the outcome in the last panel:

  • How the customer explained it - was there a Concept of Operations? How about a Statement of Objectives? Some narrative of what the system would do when it is working? Use Cases? Scenarios? NO? Then what the customer explained cannot be tested in any meaningful way.
  • How the project leader understood it - here's where the failure starts. There is no confirmation in units of measure meaningful to the decision makers on what DONE looks like. What are the Measures of Effectiveness that are the goals the customer can assess and agree to?
  • How the analyst designed it - what is the baseline description of the system the analyst starts with. What are the Measures of Performance of the system? How will those be measured?
  • How the programmer wrote it - what's the build-to plan? How does the developer know what to build? How will it be tested?
  • etc etc etc

The wheels fall off when there is no description of DONE shared between the customer and the provider. If the customer doesn't know what DONE looks like, who does? The developers? Probably not. Is DONE emerging? Then the customer has to pay for that? 

There is no way out of the need to know what DONE looks like in Measures of Effectiveness, Measures of Performance, Key Performance Parameters, and Technical Performance Measures. Without these the success of the project is in doubt from the start.

The notion that requirements emerge is well established. But the capabilities the customer needs must be stable enough to establish an Estimate At Complete and an Estimated Duration to Complete. Without these the customer has no understanding of the all-in cost of the project. These change as the requirements change; that's part of the project management process. This can't be ignored if the customer is to have any confidence in the project providing value at the time it is needed to be put to use.

So a reminder one more time...

You can't assess the value of the project outcome without knowing something about the cost to deliver that value and the time frame for that delivery.

No matter what anyone says, this is an immutable business principle. Anything else is simply fooling yourself into believing that the customer doesn't care how you spend her money or when you spend it.

Focus on value, that's what the customer bought. Actually they bought a capability to do something to improve their business or mission. But ignoring the cost of delivering that value is ignoring the balance sheet of the business.

 

Categories: Project Management

8 Reasons Why Estimates Are Too Low

Herding Cats - Glen Alleman - Fri, 01/17/2014 - 04:05

Dead Horse on a Stick

The post 8 Reasons Estimates Are Too Low is one of those pieces of material that on the surface seems plausible but has serious flaws. First it restates bad management practices and then argues against them. This seems all too common in the Agile domain for some reason.

A poster campaign at Rocky Flats in the late 90s for safety and safeguards, but usable everywhere, is...

Don't Do Stupid Things on Purpose

If we ignore the red herring approach of doing stupid things on purpose and then tilting at the resulting windmill, let's look further at each idea in the post. The picture to the right is used when engaging in a conversation about making improvements while starting with a credible baseline. This is called Dead Horse on a Stick. Thanks to the Master Systems Engineer on our program for this concept. He uses it when he starts a review and the ideas are dead before he got there. It's also appropriate for concepts that are dead on arrival, like suggesting that those paying for products don't need to know how much it's going to cost before they start spending money.

  1. Super Hero Estimates - relying on a super hero is a well-known flaw on all projects. Don't do this. Just say no to the super hero. Use Reference Class Forecasting. Use past performance. Use reference designs. Find like products. Have a Red Team assess all estimates. Have the super hero contribute an estimate - the possible anchor; this number may well be the most optimistic. Then adjust that estimate accordingly. This anchoring-and-adjustment process is well defined in Tversky and Kahneman's work.
  2. Wrong Team - simple: get the right team. Much harder than this of course. But without the right team, the project is headed to the ditch. Knowing it's headed to the ditch might provide the incentive to get the right people. This is called triage.
  3. Guessing in the Dark - stop guessing in the dark. Do your job as an informed engineer. Start with Reference Classes. Install a Red Team. This is one issue with estimating, but it can be fixed with good estimating processes. There are lots of sources for advice.
  4. Forgetting Stuff - this is the role of the WBS. It's the all-in list of products and services for the project. If you forget things, you didn't do a good job on the WBS. The WBS is usually derived from the Statement of Work (SOW), Concept of Operations (ConOps), Statement of Objectives (SOO), or some other description of what the customer wants. If you don't have one, go get something that describes enough detail to start the project, then develop that understanding as you go. In the end, if you don't know what capabilities the customer wants, your project is not likely to be a success before you even start.
  5. Ball of Mud Project - this is bad architecture. It's more common than desired. But it's a bad basis on which to start a project. Spend the time and money to untangle the project; otherwise you're sowing the seeds of disappointment before you start.
  6. Multitasking - there should be ZERO doubt that multi-tasking is bad. Don't do it.
  7. Mythical Man Month - this is bad management. Don't do it. 
  8. Lazy Developer - really? Time to replace that person.

Quick Summary

  1. Get a reference class. Don't have one? Build one.
  2. Get the right people.
  3. Build a model of the cost and schedule drivers.
  4. List the deliverables for the project against the needed capabilities.
  5. Unscramble the mess before starting. Can't do that? Don't start.
  6. Stay focused.
  7. Plan needed staff.
  8. Get people who will contribute.

So what's the point?

When we look at making improvements to anything, from projects and spending other people's money to better pedaling my road bike on century rides, start with

stop doing stupid things on purpose.

Only then can you make real and lasting improvements. If you don't do that, you are beating a dead horse.

Categories: Project Management

Technical Debt: Impact of Aggressive Remediation

You can more aggressively remediate debt if you have more monitors, right?

One of the more enjoyable parts of writing these blog entries is the interesting conversations I get into. The conversation I had at 3:30 AM this morning was even more interesting than normal. (I was discussing this topic at 3:30 AM because I needed to find a convenient time for a multi-time zone phone call that would not conflict with running a 10k before 8 AM!) The discussion was about whether an aggressive approach to remediating technical debt can have measurable productivity benefits. My friend had some data that argues that technical debt should be remediated continually (at least at some level).

In my experience, most organizations only react to technical debt when it reaches some trigger level. That is not to say that they don’t attempt to avoid technical debt through reviews, pair programming, tools and other techniques as a normal practice. There are two basic categories of triggers. The first is a limit that fits into the organization’s scheme for measuring technical debt. The second is when someone decides that too much time or money is being spent just keeping an application or portfolio running. When the latter case is extreme, words like replacement or outsourcing tend to get used rather than remediation or refactoring. In either case, the logic is that effort and budget are better spent on delivering functionality than on refactoring, until the technical debt begins to impede progress.

My friend suggested that rather than waiting, some portion of all project time should be consciously and publicly committed to remediating known technical debt above and beyond the normal technical debt avoidance techniques such as reviews, tools, standards and pair programming.  The argument is that the technical debt scenario is very similar to the scenario that teams face when they decide to improve how they will work in a retrospective.  We could easily say that addressing issues found in a retrospective represents the remediation of process debt.  Why wouldn’t we pursue technical debt in a similar manner? I think the logic is hard to argue with, if it can be shown to benefit the customer and the organization.

As noted earlier, my friend came with data. He described the scenario as two moderately mature Agile teams. Both teams included 7 people and were mostly dedicated resources (they did not work on projects outside the team); most people on both teams had been together doing a mix of Scrum, XP and lean for 3 years; and the teams had approximately the same level of technical capability. The organization uses IFPUG function points to measure team productivity to identify macro-level process improvements and best practices. One team actively includes one story each sprint to either refactor some portion of code or address a specific item of technical debt from their backlog. The other does not follow this practice. The data from the last three quarterly releases is shown below:

[Chart: quarterly productivity data for the two teams]

Team Apple was the team that actively remediated. The data, while not statistically valid, suggests that an active remediation strategy increases productivity. What we were not able to conclude was whether the cost/benefit of remediation (effort, a direct cost, and delayed functionality, an opportunity cost) was positive. My friend’s organization’s current thinking is a big “probably”.

Does aggressive or active technical debt remediation make sense? The answer is probably not cut and dried, and we certainly need more data to draw a conclusion for these two teams. In order to decide on an approach, each organization is going to have to evaluate its own context. For example, as discussed in Technical Debt, remediating consciously accrued technical debt in applications that have a very short planned life probably does not make sense. If you are dealing with a core ERP application the answer may be very different. However, if productivity is enhanced by aggressively remediating technical debt, waiting until you hit some arbitrary trigger might not make sense.


Categories: Process Management

Test Jumpers: One Vision of Agile Testing

James Bach’s Blog - Fri, 01/17/2014 - 00:42
Many software companies, these days, are organized around a number of small Agile teams. These teams may be working on different projects or parts of the same project. I have often toured such companies with their large open plan offices; their big tables and whiteboards festooned with colorful Post-Its occasionally fluttering to the floor like leaves in a perpetual autumn display; their too many earbuds and not nearly enough conference rooms. Sound familiar, Spotify? Skype?

(This is a picture of a smoke jumper. I wish test jumpers looked this cool.)

I have a proposal for skilled Agile testing in such places: a role called a “test jumper.” The name comes from the elite “smoke jumper” type of firefighter. A test jumper is a trained and enthusiastic test lead (see my Responsible Tester post for a description of a test lead) who “jumps” into projects and from project to project: evaluating the testing, doing testing, or organizing people in other roles to do testing. A test jumper can function as a test team of one (what I call an omega tester) or join a team of other testers.

The value of a role like this arises because in a typical dedicated Agile situation, everyone is expected to help with testing, and yet having staff dedicated solely to testing may be unwarranted. In practice, that means everyone remains chronically an amateur tester, untrained and unmotivated. The test jumper role could be a role held by one person, dedicated to the mastery of testing skills and tools, who is shared among many projects. This is a role that I feel close to, because it’s sort of what I already do. I am a consulting software tester who likes to get his hands dirty doing testing and running in-house testing events. I love short-term assignments and helping other testers come up to speed.
What Does a Test Jumper Do?

A test jumper basically asks, How are my projects handling the testing? How can I contribute to a project? How can I help someone test today?

Specifically a test jumper:

  • may spend weeks on one project, acting as an ordinary responsible tester.
  • may spend a few days on one project, organizing and leading testing events, coaching people, and helping to evaluate the results.
  • may spend as little as 90 minutes on one project, reviewing a test strategy and giving suggestions to a local tester or developer.
  • may attend a sprint planning meeting to assure that testing issues are discussed.
  • may design, write, or configure a tool to help perform a certain special kind of testing.
  • may coach another tester about how to create a test strategy, use a tool, or otherwise learn to be a better tester.
  • may make sense of test coverage.
  • may work with designers to foster better testability in the product.
  • may help improve relations between testers and developers, or if there are no other testers help the developers think productively about testing.

Test jumping is a time-critical role. You must learn to triage and split your time across many task threads. You must reassess project and product risk pretty much every day. I can see calling someone a test jumper who never “jumps” out of the project, but nevertheless embodies the skills and temperament needed to work in a very flexible, agile, self-managed fashion, on an intense project.

Addendum #1: Commenter Augusto Evangelisti suggests that I emphasize the point about coaching. It is already in my list, above, but I agree it deserves more prominence. In order to safely “jump” away from a project, the test jumper must constantly lean toward nudging, coaching, or even training local helpers (who are often the developers themselves, and who are not testing specialists, even though they are super-smart and experienced in other technical realms) and local responsible testers (if there are any on that project). The ideal goal is for each team to be reasonably self-sufficient, or at least for the periodic visits of the test jumper to be enough to keep them on a good track.

What Does a Test Jumper Need?

  • The ability and the enthusiasm for plunging in and doing testing right now when necessary.
  • The ability to pull himself out of a specific test task and see the big picture.
  • The ability to recruit helpers.
  • The ability to coach and train testers, and people who can help testing.
  • A wide knowledge of tools and ability to write tools as needed.
  • A good respectful relationship with developers.
  • The ability to speak up in sprint planning meetings about testing-related issues such as testability.
  • A keen understanding of testability.
  • The ability to lead ad hoc groups of people with challenging personalities during occasional test events.
  • The ability to speak in front of people and produce useful and concise documentation as necessary.
  • The ability to manage many threads of work at once.
  • The ability to evaluate and explain testing in general, as well as with respect to particular forms of testing.

A good test jumper will listen to advice from anyone, but no one needs to tell a test jumper what to do next. Test jumpers manage their own testing missions, in consultation with such clients as arise. A test jumper must be able to discover and analyze the testing context, then adapt to it or shape it as necessary. It is a role made for the Context-Driven school of testing.

Does a Test Jumper Need to be a Programmer?

Coding skills help tremendously in this role, but being a good programmer is not absolutely required. What is required is that you learn technical things very quickly and have excellent problem-solving and social skills. Oh, and you ought to live and breathe testing, of course.

How Does a Test Jumper Come to Be?

A test jumper is mostly self-created, much as good developers are. A test jumper can start as a programmer, as I did, and then fall in love with the excitement of testing (I love the hunt for bugs). A test jumper may start as a tester, learn consulting and leadership skills, but not want to be a full-time manager. Management has its consolations and triumphs, of course, but some of us like to do technical things. Test jumping may be part of extending the career path for an experienced and valuable tester.

Categories: Testing & QA

Discover DevTools course updated and translated

Google Code Blog - Thu, 01/16/2014 - 22:52
By Peter Lubbers, Program Manager and MOOC Manufacturer

Last year, we teamed up with Code School to launch an interactive course that teaches you how to take advantage of the powerful resources available in Chrome DevTools and speed up the development and debugging of your web apps.

Today, we’ve launched a major course update that features new videos that reflect the most up-to-date Chrome DevTools UI and functionality as well as Spanish and Brazilian Portuguese subtitles.

The Discover DevTools course is (still) available for free, and includes lessons on the DOM and styles, working with the Console, debugging JavaScript, and additional ways to improve performance. By adding Spanish and Portuguese subtitles to the course, we're eager to see more talented developers deepen their understanding of how the Chrome DevTools can accelerate their web development workflow.

We hope you’ll take a moment to rediscover DevTools and see how Chrome DevTools can make you a more productive developer.


+Peter Lubbers is a Program Manager on the Chrome Developer Relations Team, spreading HTML5 and Open Web goodness.

Posted by Scott Knaster, Editor
Categories: Programming

Windows Azure: Staging Publishing Support for Web Sites, Monitoring Improvements, Hyper-V Recovery Manager GA, and PCI Compliance

ScottGu's Blog - Scott Guthrie - Thu, 01/16/2014 - 20:53

This morning we released another great set of enhancements to Windows Azure.  Today’s new capabilities and announcements include:

  • Web Sites: Staged Publishing Support and Always On Support
  • Monitoring Improvements: Web Sites + SQL Database Alerts
  • Hyper-V Recovery Manager: General Availability Release
  • Mobile Services: Support for Sencha Touch
  • PCI Compliance: Windows Azure Now Validated for PCI DSS Compliance

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Web Sites: Staged Publishing Support

With today’s release, you can now enable staged publishing to your Windows Azure Web Sites.  This new feature is really powerful, and enables you to deploy updates of your web apps/sites to a staging version of the site that can be accessed via a URL that is different from your main site.  You can use this staged site to test your site/app deployment and then, when ready, instantaneously swap the content and configuration between the live site and the staging version. 

This new feature enables you to deploy changes with more confidence.  And it ensures that your site is never in an inconsistent state (where some files have been updated and others not) - now you can immediately swap all changes to all of the files in one shot.

Enabling Staged Publishing Support

To setup staged publishing go to the DASHBOARD tab of a web site and click Enable staged publishing from the quick glance section:

image

Clicking this link will cause Azure to create a new staging version of the web-site and link it to the existing site.  This linkage is represented in the navigation of the Windows Azure Management Portal – the staging site will show up as a sub-node of the primary site:

image

If you look closely at the name of the staging site, you’ll notice that its URL by default is sitename-staging (e.g. if the primary site name was “scottgu”, the staging site would be “scottgu-staging”):

image

You can optionally map any custom DNS name you want to the staging site (using either a CNAME or an A record) – just like you would for a normal site.  So your staging domain doesn’t have to have an azurewebsites.net extension.  In the scenario above I could remap the staging domain to be staging.scottgu.com, or scottgu-staging.com, or even foobar.scottgu.com if I wanted to. 

The staging URL doesn’t change between deployments of a site – so you can configure a custom DNS name once, and then use it across all subsequent deployments.  You can also optionally enable SSL on the staging site and upload an SSL certificate to use with the staging domain (ensuring you can fully test/validate your SSL scenarios before swapping live).
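For example, assuming a DNS provider that accepts zone-file syntax, the custom staging domain from the scenario above could be mapped with a single CNAME record (the host names here are illustrative):

```
; Hypothetical zone-file entry: point a custom staging subdomain
; at the Azure-generated staging host name.
staging.scottgu.com.    3600    IN    CNAME    scottgu-staging.azurewebsites.net.
```

Because the staging host name is stable across deployments, this record only needs to be created once.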

Configuring the Staging Site

You can click on the staging site to manage it just like any other site:

image 

The SCALE tab and the LINKED RESOURCES tabs are disabled for staging sites, but all other tabs work as expected.  You can use the CONFIGURE tab to set configuration settings like database and application connection-strings (if you set these at the site level they override anything you might have in a web.config file).

One thing you’ll also notice when you open the staging site is that there is a new SWAP button in the bottom command-bar of it – we’ll talk about how to use that in a little bit.

Deploying to the Staging Site

Deploying a new instance of your web-app/site to the staging site is really easy.  Simply deploy to it just like you would any normal site.  You can use FTP, the built-in “Publish” dialog inside Visual Studio, Web Deploy or Git, TFS, VS Online, GitHub, BitBucket, DropBox or any of the other deployment mechanisms we already support.  You configure these just like you would a normal site.

Below I’m going to use the built-in VS publish wizard to publish a new version of the site to the staging site:

image

Once this new version of the app is deployed to the staging site we can access a page in it using the staging domain (in this case http://scottgu-staging):

image

Note that the new version of the site we deployed is only in the staging site.  This means that if we hit the primary site domain (in this case http://scottgu) we wouldn’t see this new “V2” update - it would instead show any older version that had been previously deployed:

image

This allows us to do final testing and validation of the staging version without impacting users visiting the live production site.

Swapping Deployments

At some point we’ll be ready to roll our staged version to be the live production site version.  Doing this is easy – all we need to do is push the SWAP button within the command-bar of either our live site or staging site using the Windows Azure Portal (you can also automate this from the command-line or via a REST call):

image

When we push the SWAP button we’ll be prompted with a confirmation dialog explaining what is about to happen:

image

If we confirm we want to proceed with the swap, Azure will immediately swap the content of the live site (in this case http://scottgu) with the newer content in the staging site (in this case http://scottgu-staging).  This will take place immediately – and ensure that all of the files are swapped in a single shot (so that you never have mismatched files).

Some settings from the staged version will automatically copy to the production version – including things like connection string overrides, handler mappings, and other settings you might have configured.  Other settings like the DNS endpoints, SSL bindings, etc. will not change (ensuring that you don’t need to worry about the SSL cert used for the staging domain overriding the production URL cert, etc.).

Once the swap is complete (the command takes only a few seconds to execute), you’ll find that the content that was previously in the staging site is now in the live production site:

image

And the content that had been in the older live version of the site is now in the staging site.  Having the older content available in the staging site is useful – as it allows you to quickly swap it back to the previous site if you discover an issue with the version that you just deployed (just click the SWAP button again to do this).  Once you are sure the new version is fine you can just overwrite the staging site again with V3 of your app and repeat the process again.

Deployment with Confidence

We think you’ll find that the new staged publishing feature is both easy to use and very powerful, and enables you to handle deployments of your sites with an industrial strength workflow.

Web Sites: Always On Support

One of the other useful Web Site features that we are introducing today is a feature we call “Always On”.  When Always On is enabled on a site, Windows Azure will automatically ping your Web Site regularly to ensure that the Web Site is always active and in a warm/running state.  This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests). 

It is also useful as a way to keep a Web Site active for scenarios where you want to run background code within it, irrespective of whether it is actively processing external HTTP customer requests.  We have another new feature we are enabling this week called “Web Jobs” that makes it really easy to write this background code and run it within a Web Site. I’ll blog more about this feature and how to use it in the next few days.

You can enable Always On support for Web Sites running in Standard mode by navigating to the CONFIGURE tab within the portal, and then toggling the Always On button that is now within it:

image

Monitoring Improvements: Web Sites + SQL Database Alerts

With almost every release we make improvements to our monitoring functionality of Azure services. Today’s update brings two nice new improvements:

  1. Metrics updated every minute for Windows Azure Web Sites
  2. Alerting for more metrics from Windows Azure Websites and Windows Azure SQL Databases

Monitoring Data Every Minute

With today’s release we are now updating statistics on the monitoring dashboard of a Web Site every minute, so you can get much fresher information on exactly how your website is being used (prior to today the granularity was not as fine grained):

image

Viewing data at this higher granularity can make it easier to observe changes to your website as they happen. No additional configuration is required to get data every minute – it is now automatically enabled for all Azure Websites.

Expanding Alerting

When you create alerts you can now choose between six different services:

  • Cloud Service
  • Mobile Service
  • SQL Database (New Today!)
  • Storage
  • Virtual Machine
  • Web Site (More Metrics Today!)

To get started with Alerting, click on the Management Services extension on the left navigation tab of the Windows Azure Management Portal:

image

Then, click the Add Rule button in the command bar at the bottom of the screen. This will open a wizard for creating an alert rule. You can see all of the services that now support alerts:

image

New Web Site Alert Metrics

With today’s release we are adding the ability to alert on any metric that you see for a Web Site in the portal (previously we only supported alerts on Uptime and Response Time metrics). Today’s new metrics include support for setting threshold alerts for errors as well as CPU time and total requests:

image

The CPU time and Data Out metric alerts are particularly useful for Free or Shared websites – you can now use these alerts to email you if you’re getting close to exceeding your quotas for a free or shared website (and need to scale up instances).

New SQL Alert Metrics

With today’s release you can also now define alerts for your SQL Databases. For Web and Business tier databases you can set up alerts on the storage used by the database.  There are also additional metrics and alerts for SQL Database Premium (which is currently in preview), such as CPU Cores and IOPS.

Once you’ve set up these new alerts, they behave just like alerts for other services. You’ll be informed when they cross the thresholds you establish, and you can see the recent alert history in the dashboard:

image

Windows Azure Hyper-V Recovery Manager: General Availability Release

I’m excited to announce the General Availability of Windows Azure Hyper-V Recovery Manager (HRM). This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

Windows Azure Hyper-V Recovery Manager helps protect your on-premises applications and services by orchestrating the protection and recovery of Virtual Machines running in a System Center Virtual Machine Manager 2012 R2 or System Center Virtual Machine Manager 2012 SP1 private cloud to a secondary location. With simplified configuration, automated protection, continuous health monitoring and orchestrated recovery, the Hyper-V Recovery Manager service can help you implement disaster recovery and recover applications accurately, consistently, and with minimal downtime.

image

The service leverages the Hyper-V Replica technology available in Windows Server 2012 and Windows Server 2012 R2 to orchestrate the protection and recovery of Hyper-V Virtual Machines from one on-premises site to another. Application data always travels on your on-premises replication channel. Only the metadata needed for orchestration (such as the names of logical clouds, virtual machines, networks, etc.) is sent to Azure. All traffic sent to/from Azure is encrypted.

Getting Started

To get started, use the Windows Azure Management Portal to create a Hyper-V Recovery Manager Vault. Browse to Data Services > Recovery Services and click New to create a New Hyper-V Recovery Manager Vault. You can name the vault and specify a region where you would like the vault to be created.

clip_image002

Once the Hyper-V Recovery Manager vault is created, you’ll be presented with a simple tutorial that will help guide you on how to register your SCVMM Servers and configure protection and recovery of Virtual Machines.

clip_image004

To learn more about setting up Hyper-V Recovery Manager in your deployment follow our detailed step-by-step guide.

Key Benefits of Hyper-V Recovery Manager

Hyper-V Recovery Manager offers the following key benefits that differentiate it from other disaster recovery solutions:

  • Simple Setup and Configuration: HRM dramatically simplifies configuration and management operations across a large number of Hyper-V hosts, Virtual Machines and data-centers.
  • Automated Protection: HRM leverages the capabilities of Windows Server and System Center to provide on-going replication of VMs and ensures protection throughout the lifecycle of a VM.
  • Remote Monitoring: HRM leverages the power and reach of Azure to provide a remote monitoring and DR management service that can be accessed from anywhere.
  • Orchestrated Recovery: Recovery Plans enable automated DR orchestration by sequencing the failover of different application tiers, with customization via scripts and manual actions.

New Improvements

The Hyper-V Recovery Manager service has been enhanced since the initial October Preview with several nice improvements:

  • Improved Failback Support: The Failback support has been improved in scenarios where the primary host cluster has been rebuilt after an outage.
  • Support for Kerberos based Authentication: Cloud configuration now allows selecting Kerberos based authentication for Hyper-V Replica. This is useful in scenarios where customers want to use 3rd party WAN optimization and compression and have AD trust available between primary and secondary sites.
  • Support for Upgrade from VMM 2012 SP1 to VMM 2012 R2: HRM service now supports upgrades from VMM 2012 SP1 to VMM 2012 R2.
  • Improved Scale: The UI and service has been enhanced for better scale support.

Please visit the Windows Azure web site for more information on Hyper-V Recovery Manager. You can also refer to the additional product documentation, or visit the HRM forum on MSDN for more information and to engage with other customers.

Mobile Services: Support for Sencha Touch

I’m excited to announce that in partnership with our friends at Sencha, we are today adding support for Sencha Touch to Windows Azure Mobile Services. Sencha Touch is a well-known HTML/JavaScript-based development framework for building cross-platform mobile apps and web sites. With today’s addition, you can easily use Mobile Services with your Sencha Touch app.

You can download the Windows Azure extension for Sencha here, configure the Sencha loader with the location of the Azure extension, and add the Azure package to your app.json file:

{ "name": "Basic", "requires": ["touch-azure"] }

Once you have the Azure extension added to your Sencha project, you can connect your Sencha app to your Mobile Service simply by adding the following initialization code:

Ext.application({
    name: 'Basic',
    requires: ['Ext.azure.Azure'],
    azure: {
        appKey: 'myazureservice-access-key',
        appUrl: 'myazure-service.azure-mobile.net'
    },
    launch: function () {
        // Call Azure initialization
        Ext.Azure.init(this.config.azure);
    }
});

From here on you can data-bind your data model to Azure Mobile Services, authenticate users, and use push notifications. Follow this detailed Getting Started tutorial to get started with Sencha Touch and Mobile Services. Read more detailed documentation at the Mobile Services Sencha extension resources page.

Windows Azure Now Validated for PCI DSS Compliance

We are very excited to announce that Windows Azure has been validated for compliance with the Payment Card Industry (PCI) Data Security Standards (DSS) by an independent Qualified Security Assessor (QSA).

The PCI DSS is the global standard that any organization of any size must adhere to in order to accept payment cards, and to store, process, and/or transmit cardholder data. By providing PCI DSS validated infrastructure and platform services, Windows Azure delivers a compliant platform for you to run your own secure and compliant applications. You can now achieve PCI DSS certification for those applications using Windows Azure.

To assist customers in achieving PCI DSS certification, Microsoft is making the Windows Azure PCI Attestation of Compliance and Windows Azure Customer PCI Guide available for immediate download.

Visit the Trust Center for a full list of in scope features or for more information on Windows Azure security and compliance.

Summary

Today’s release includes a bunch of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Documentation Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Google Play Services 4.1

Android Developers Blog - Thu, 01/16/2014 - 20:13

The latest release of Google Play services is now available on Android devices worldwide. It includes new Turn Based Multiplayer support for games, and a preliminary API for integrating Google Drive into your apps. This update also improves battery life for all users with Google Location Reporting enabled.

You can get started developing today by downloading the Google Play services SDK from the SDK Manager.

Turn Based Multiplayer

Play Games now supports turn-based multiplayer! Developers can build asynchronous games to play with friends and auto-matched players, supporting 2-8 players per game. When players take turns, their turn data is uploaded to Play Services and shared with other players automatically.
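As a rough sketch of the flow described above, submitting a turn with this era’s Games.TurnBasedMultiplayer client looks roughly like the following. The mGoogleApiClient, match, and helper methods here are assumed to exist in your game, and the exact method signatures should be checked against the Play Services reference:

```java
// Sketch only: upload this player's turn and hand the match to the next
// participant. `match` is a TurnBasedMatch and mGoogleApiClient is a
// connected GoogleApiClient; buildTurnData() and pickNextParticipant()
// are hypothetical game-specific helpers.
byte[] turnData = buildTurnData();
String nextParticipantId = pickNextParticipant(match);

Games.TurnBasedMultiplayer
        .takeTurn(mGoogleApiClient, match.getMatchId(), turnData, nextParticipantId)
        .setResultCallback(new ResultCallback<TurnBasedMultiplayer.UpdateMatchResult>() {
            @Override
            public void onResult(TurnBasedMultiplayer.UpdateMatchResult result) {
                // On success, Play Services stores the turn data and
                // notifies the next player automatically.
            }
        });
```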

We are also providing an optional new “Connecting to Play Games” transition animation during sign-in, before the permission dialog appears. This helps contextualize the permission dialog, especially in games that ask for sign in on game start.

Google Drive

This version of Google Play Services includes a developer preview of the new Google Drive API for Android. You can use it to easily read and write files in Google Drive so they're available across devices and on the web. Users can work with files offline too — changes are synced with Google Drive automatically when they reconnect.

The API also includes common UI components including a file picker and save dialog.

Google Mobile Ads

With Google Play services 4.1, the Google Mobile Ads SDK now fully supports DoubleClick for Publishers, DoubleClick Ad Exchange, and Search Ads for Mobile Apps. You can also use a new publisher-provided location API to provide Google with the location when requesting ads. Location-based ads can improve your app monetization.

Google+

An improved Google+ sharing experience makes it even easier for users to share with the right people from your app. It includes better auto-complete and suggested recipients from Gmail contacts, device contacts and people on Google+.

More About Google Play Services

To learn more about Google Play services and the APIs available to you through it, visit the Google Services area of the Android Developers site. Details on the APIs are available in the API reference.

For information about getting started with Google Play services APIs, see Set Up Google Play Services SDK.

Join the discussion on

+Android Developers
Categories: Programming

Introducing the Google Drive Android API

Google Code Blog - Thu, 01/16/2014 - 19:00
By Magnus Hyttsten, Developer Advocate, Google Drive

With today's developer preview of the Google Drive Android API in Google Play Services 4.1, you can add the convenience of Google Drive cloud storage to your apps without breaking a sweat.

While Drive integration on Android was possible in the past, the new API creates a faster, seamless experience that enables your apps to integrate with the Drive backend within minutes.

The new API offers a number of benefits:

1. Transparent use and syncing of local storage

The Google Drive Android API temporarily uses a local data store in case the device is not connected to a network. So, no need to worry about failed API calls in your app because the user is offline or experiencing a network connectivity problem. Data stored locally in this fashion will automatically and transparently be stored in the Google Drive cloud by Android’s sync scheduler when connectivity is available to minimize impact on battery life, bandwidth, and other resources.


2. Designed for Android and available everywhere

The API was developed for Android and conforms to the latest Android design paradigms, such as using the new uniform client API GoogleApiClient. And being part of the latest release of Google Play Services provides additional benefits:
  • There’s minimal impact on the weight of your apps. As the client library is a stub to Google Play Services, incorporating the API has minimal impact on the size of your .apk binaries, resulting in faster downloads, fewer updates, and smaller execution footprint.
  • User files are automatically synced between different devices (provided the app has the same namespace and is signed with the same key).
  • Any device running the Gingerbread or later releases of Android and Google Play Services will automatically have support for the Google Drive Android API.

3. User interface components

File picker and creator user interface components are provided with this initial release of the Google Drive Android API, enabling users to select files and folders in Google Drive.


For example, the file picker is implemented as an Intent and allows you to develop a native Android user experience with just a couple of lines of code. The following code snippet launches the picker and allows the user to select a text file:
// Launch user interface and allow user to select file
IntentSender i = Drive.DriveApi
        .newOpenFileActivityBuilder()
        .setMimeType(new String[] { "text/plain" })
        .build(mGoogleApiClient);
startIntentSenderForResult(i, REQ_CODE_OPEN, null, 0, 0, 0);

The result is provided in the onActivityResult callback as usual.
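In that callback, the selected file’s DriveId can be read back out of the result intent. A minimal sketch, assuming the REQ_CODE_OPEN request code from the snippet above (check the extra constant name against the Drive API reference):

```java
// Sketch: handle the picker result and recover the chosen file's DriveId.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQ_CODE_OPEN && resultCode == RESULT_OK) {
        DriveId driveId = (DriveId) data.getParcelableExtra(
                OpenFileActivityBuilder.EXTRA_RESPONSE_DRIVE_ID);
        // The DriveId can now be used to open the file and read its contents.
    }
}
```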

4. Direct access to Drive functionality

You may be wondering how the Google Drive Android API relates to the Storage Access Framework released as part of Android 4.4 KitKat.

The Storage Access Framework is a generic client API that works with multiple storage providers, including cloud-based and local file systems. While apps can use files stored on Google Drive using this generic framework, the Google Drive API offers specialized functionality for interacting with files stored on Google Drive — including access to metadata and sharing features.

Additionally, as part of Google Play services the Google Drive APIs are supported on devices running Android 2.3 Gingerbread and above.

How to get started

As you incorporate the Google Drive Android API into your apps, we hope it makes your life a little bit easier, and enables you to create fun, powerful apps that take advantage of all that Android and Google Drive can do together.

For more information visit our documentation or explore our API demo and other sample applications on the official Google Drive GitHub repository.

Also check out the official launch video:


Let’s keep the discussions going on +GoogleDrive, and Stack Overflow (google-drive-sdk).


Magnus Hyttsten is a Developer Advocate on the Google Drive team. Beyond work, he enjoys trying out new technologies, thinking about product strategies, and exploring California.

Posted by Scott Knaster, Editor
Categories: Programming

Finding The Right Pace

Making the Complex Simple - John Sonmez - Thu, 01/16/2014 - 17:30

Go too fast and you’ll burn out early. Go too slow and you’ll be wasting time. Finding the right pace is hard – but it is very important. In this video I talk about how you can find the right pace for yourself and why it is so important to do so.

The post Finding The Right Pace appeared first on Simple Programmer.

Categories: Programming

The Unlimited Vacation Policy

NOOP.NL - Jurgen Appelo - Thu, 01/16/2014 - 13:26
Unlimited Vacation

Some companies have stopped defining how many days in the year employees can go on a vacation.

The post The Unlimited Vacation Policy appeared first on NOOP.NL.

Categories: Project Management

RST Methodology: “Responsible Tester”

James Bach’s Blog - Thu, 01/16/2014 - 04:58

In Rapid Software Testing methodology, we recognize three main roles: Leader, Responsible Tester, and Helper. These roles are situational distinctions. The same person might be a helper in one situation, a leader in another, and a responsible tester in yet another.

Responsible Tester

Rapid Software Testing is a human-centered approach to testing, because testing is a performance and can only be done by humans. Therefore, testing must be traceable to people, or else it is literally and figuratively irresponsible. Hence, a responsible tester is that tester who bears personal responsibility for testing a particular thing in a particular way for a particular project. The responsible tester answers for the quality of that testing, which means the tester can explain and defend the testing, and make it better if needed. Responsible testers also solicit and supervise helpers, as needed (see below).

This contrasts with factory-style testing, which relies on tools and texts rather than people. In the Factory school of testing thought, it should not matter who does the work, since people are interchangeable. Responsibility is not a mantle on anyone’s shoulders in that world, but rather a sort of smog that one seeks to avoid breathing too much of.

Example of testing without a responsible tester: Person A writes a text called a “test case” and hands it to person B. Person B reads the text and performs the instructions in the text. This may sound okay, but what if Person B is not qualified to evaluate whether he has understood and performed the test, while at the same time Person A, the designer, is not watching and so also isn’t in a position to evaluate it? In such a case, it’s like a driverless car. No one is taking responsibility. No one can say if the testing is good or take action if it is not good. If a problem is revealed later, they may both rightly blame the other.

That situation is a “sin” in Rapid Testing. To be practicing RST, there must always be a responsible tester for any work that the project relies upon. (Of course students and otherwise non-professional testers can work unsupervised as practice or in the hopes of finding one more bug. That’s not testing the project relies upon.)

Being a responsible tester is like being the driver of an automobile or the pilot-in-command of an aircraft.

Helper

A helper is someone who contributes to the testing without taking responsibility for the quality of the work AS testing. In other words, if a responsible tester asks someone to do something simple, such as pressing a button, the helper may press the button without worrying about whether that has actually helped fulfill the mission of testing. Helpers should not be confused with inexperienced or low-skilled people. Helpers may be very skilled or have little skill. A senior architect who comes in to do testing might be asked to test part of the product and find interesting bugs without being expected to explain or defend his strategy for doing that. It’s the responsible tester whose job it is to supervise people who offer help and evaluate the degree to which their work is acceptable.

Beta testing is testing that is done entirely by helpers. Without responsible testers in the mix, it is not possible to evaluate in any depth what was achieved. One good way to use beta testers is to have them organized and engaged by one or more responsible testers.

Leader

A leader is someone whose responsibility is to foster and maintain the project conditions that make good testing possible, and to train, support, and evaluate responsible testers. There are at least two kinds of leader: a test lead and a test manager. The test manager is a test lead with the additional responsibilities of hiring, firing, performance reviews, and possibly budgeting.

In any situation where a leader is responsible for testing and yet has no responsible testers on his team, the leader IS the acting responsible tester. A leader surrounded by helpers is the responsible tester for that team.

 

Categories: Testing & QA

Technical Debt: Can Technical Debt Be Measured?

In Technical Debt we defined technical debt as the work not done or the shortcuts taken when delivering a product. Then we discussed the sources of technical debt in Technical Debt: Where Does It Come From? The next question is: can we measure technical debt? If we have a consistent measure, we can determine whether the technical debt in our code is growing or shrinking, or even whether the level of debt passes some threshold of acceptability that requires remediation. There are three common approaches to identifying and measuring technical debt. Each of these approaches has pluses and minuses. They are:

  1. Tool Based Approaches: There are several software tools that scan code to determine whether the code meets coding or structural standards (extensibility is a structural standard). These tools include Cast AIP, SonarQube, SonarJ, Structure101 and others.  Some tools are proprietary and some are open source.  Tools are very useful for identifying technical debt that has crept into an application unintentionally and that impacts code quality, but less useful for identifying partial requirements, dropped testing or intentional architecture deviations.
  2. Custom Measures: Many organizations create custom measures of technical debt. Almost all measures of technical debt are proxies. Examples of proxy measures include automated test code coverage, audit results and size-to-lines-of-code ratios ([lines of code / function points delivered] / industry language-specific function point backfire calibration number). Since there are any number of custom techniques, it is difficult to say where each can best be applied. However, audit techniques are very valuable for identifying technical debt generated by process problems that are not reported. For example, when pressed, a developer may not complete all of his or her unit testing (the same behavior occurs in independent testing as well). Unless it is self-reported or observed by a peer, this type of behavior may leave unrecognized technical debt in the code that can’t be seen through a scan. Audits are a mechanism to formally inspect code for standards and processes that can’t be automated.
  3. Self-Reporting: Team-level self-reporting is a fantastic mechanism for tracking intentionally accrued technical debt. The list of identified technical debt can be counted, sized or valued, and then prioritized. The act of sizing or valuing converts the list into a true measure that can be prioritized. The sizing mechanisms I have used include function points, story points and effort to correct (usually hours or days). When using value instead of size or effort, I always express it in a currency (e.g. dollars, rupees, euros); everyone understands money. I usually ask teams I am working with to generate a value for each piece of technical debt they intentionally accrue as they make the decision. More than once I have seen a team change their decision to incur the technical debt once they considered its value. Self-reporting is very useful for capturing intentionally accrued technical debt, such as partially implemented requirements, intentional architectural deviations and knowingly constrained testing.
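The size-to-lines-of-code proxy from the custom-measures approach can be sketched in a few lines. This is an illustrative sketch only: the backfire calibration numbers below are placeholder values (published backfire tables vary by source and language), and the interpretation of the ratio is one possible reading, not a standard.

```python
# Illustrative backfire calibration numbers: average lines of code per
# function point for a language. Placeholder values, not from a real table.
BACKFIRE_LOC_PER_FP = {"java": 53, "c": 128, "python": 27}


def debt_proxy_ratio(lines_of_code, function_points_delivered, language):
    """Return (LOC / function points delivered) divided by the language's
    backfire calibration number.

    A ratio near 1.0 suggests the code size is roughly in line with the
    functionality delivered; a ratio well above 1.0 can be read as a proxy
    for excess code, one possible symptom of technical debt.
    """
    loc_per_fp = lines_of_code / function_points_delivered
    return loc_per_fp / BACKFIRE_LOC_PER_FP[language]


# Example: 120,000 lines of Java delivering 1,500 function points.
ratio = debt_proxy_ratio(120_000, 1_500, "java")
```

Like any proxy, the ratio only flags a possible problem; it cannot say which code constitutes the debt or why it exists.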

Technical debt is a powerful tool to help teams and organizations think about the quality of their code base. Measures are also important tools for understanding how much technical debt exists and whether or not it makes sense to remediate it. Each category of measuring technical debt is useful; however, each has its own strengths and weaknesses. A combination is usually needed to get the view of technical debt that best meets the needs of a specific team or organization.
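The self-reporting approach above amounts to keeping a valued, prioritized register of intentional shortcuts. A minimal sketch, with hypothetical entries and field names (no particular tool is implied), might look like this:

```python
# A team-level register of intentionally accrued technical debt.
# Each entry records the shortcut and an estimated remediation cost in
# currency, since everyone understands money. Entries are hypothetical.
debt_register = [
    {"item": "Skipped integration tests for billing", "est_cost_usd": 12_000},
    {"item": "Hard-coded configuration in deploy scripts", "est_cost_usd": 3_500},
    {"item": "Partial implementation of audit-log requirement", "est_cost_usd": 20_000},
]


def total_debt(register):
    """Total estimated cost to remediate everything in the register."""
    return sum(entry["est_cost_usd"] for entry in register)


def prioritized(register):
    """Register sorted with the costliest items first, for remediation planning."""
    return sorted(register, key=lambda e: e["est_cost_usd"], reverse=True)
```

Valuing each item at the moment the shortcut is taken is what makes the register useful: the number can change the team's decision before the debt is ever incurred.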


Categories: Process Management

Vedis - An Embedded Implementation of Redis Supporting Terabyte Sized Databases

I don't know about you, but when I first learned about Redis my initial thought was wow, why hasn't anyone done this before? My next thought was why put this functionality in a separate process? Why not just embed it in your own server code and skip the network path completely? Especially in a Service Oriented Architecture there's no need for an extra hop or extra software installation and configuration.

Now you can embed Redis-like code directly into your server with Vedis - an embeddable datastore C library built with over 70 commands similar in concept to Redis, but without the networking layer, since Vedis runs in the same process as the host application. It's transactional, cross-platform, thread-safe, and key-value based; it supports terabyte-sized databases, has a GPL-like license (which isn't great for commercial apps), and offers an on-disk as well as an in-memory datastore.

More about Vedis:

Categories: Architecture

Quote of the Day

Herding Cats - Glen Alleman - Wed, 01/15/2014 - 14:18

Such as are your habitual thoughts, such also will be the character of your mind; for the soul is dyed by the thoughts. - Marcus Aurelius

Categories: Project Management