Friday, November 4, 2011

Closing thoughts on an acceptance test

I spent over a year in my current organization getting to the point where my team completed a project's acceptance test for the first time. Now, a little over two years in, a second project's acceptance test is coming to an end.

I'm a test manager on quite a number of projects, projects long enough that I feel most of my time goes into waiting to actually get to the testing. Two more will complete in the upcoming months, and another, longer project is just getting started.

I wanted to share a few notes on the acceptance testing that is now about to finish.

Having reached this point, I can reflect back a month and admit that I believed our contractor would not be able to find the relevant problems. I had reviewed some of their results and noted that:
  • system testing defects were mostly minor, and there were not that many of them
  • on average, it took about 4 days to plan and execute a test, and 1 day to fix a bug found during system testing
  • on average, it took about 2 days to plan and execute a test, and 0.7 days to fix a bug found during integration testing
When setting up the project, I had tried to convince the steering group to let the customer-side testers participate in the contractor's testing phases in a testing role, but was refused. We wanted to see how things would go when we allocated that responsibility entirely to the contractor side. So I wasn't convinced there would not be a number of "change requests" - bugs that the contractor can't or doesn't care to find, since they can't be read directly from the specifications.

Now that we're almost done with our testing, I can outline our results. We found a small yet relevant portion of problems during the project. I had estimated that if I could count the bugs to fix on the fingers of one hand, we would be able to do this in the allocated one-month timeframe, and that's where we ended up. Another batch was must-have change requests: bugs by another name. And about 80% of the issues we logged reflected either bugs in current production (mostly ones we don't fix, although it's not so straightforward) or problems in setting up the test scenario for the chain of synchronized data. I just counted that finding an issue took us 4.6 days on average; if I take out the ones that did not result in changes, 20 days per relevant bug. I have no right to complain about the contractor's efficiency. And I had better do them justice and make our management clearly aware of this too.

So, I guess I have to admit it: the contractor succeeded even though I remained a sceptic up until the very end. I had a really good working relationship with the customer-side test manager, and at this point I'm convinced she is a key element in this success story, and in another one still going on, a project that is over schedule with still-relevant concerns about whether the testing we do on the customer side will ever be enough.

I just wanted to share, amongst all the not-so-fortunate stories I have, that this one went really well. The only glitch during the project came when our own project manager needed planning support to take the testing through. After that, she too was brilliant and allowed me to work on multiple projects in a less managerial role, more as a contributor to the actual testing.