Wednesday, May 28, 2014

Quality Auditing - A bad review worth having ...

I was taking another look at Lessons Learned in Software Testing on Amazon today.  I've used the copy at work extensively, and really want "my own" edition, but I'm torn between a paper copy and an electronic version (I use my Kindle a lot).

I have to admit I've not really finished it yet - but I do find it full of useful insights.  I often read things in there that, I'm afraid to say, I agree with, because I've seen companies and projects learn those lessons the hard way.

What interested me was the reviews, or rather the profile of the reviews ...


Most people think it's an amazing and indispensable book ... except two people who think it's dreadful (out of 50, that's about 4%).  Fascinated, perhaps with a shade of rubbernecking, I had to look at these reviews.  Maybe these "thought leaders" of testing had an interesting take on the book that I shouldn't be dismissive of.

This is by far the more interesting ...

Promises Much, Delivers [little]

This book was praised by several colleagues as THE way to work on testing methods and thinking. After reading it and talking with each of them, it was apparent they were excited based on false credentials about ideas that were easy and comfortable but ineffective long term. 

This book is VERY dangerous to a serious testing organization because it focuses on minimal documentation (which means in 6 months when you're asked if you tested X and you can't remember, you'll get 5mins to get out of the building), downplays automation in regression testing (what!!?), and admits openly that it is proposing ideas that are NOT proven (contrary to what the title states) but rather are ideas that "seem to be working" (see pg 176) but no formal nor long term studies support their claims. 

Well, long term studies that have already been done directly contradict their findings: process is driven by a need to be effective and if you don't know what you're doing before you do it, then you don't know what you did when you're done. ... This is a book for those who advocate ad hoc testing to their own discredit and need a means of justificating their apathy and laziness to those who actually know effective testing techniques.

Now that is a bad review worth having!  Sadly, I'm surprised this kind of opinion represents only 4% of reviewers - though the figures will be skewed by the fact that people who feel the way this individual does are, I suspect, unlikely to buy the book in the first place.

But this view - held by someone we'll call Rex, for simplicity - is one many software testers have to face.

The phrase in Rex's review that struck me with a level of absolute terror, and which sums up his style of thought leadership, was this: "it focuses on minimal documentation (which means in 6 months when you're asked if you tested X and you can't remember, you'll get 5mins to get out of the building)".

That isn't a model of testing - that's a model of QA, in this case Quality Auditors.  In his model, QAs go around almost as accountants of software: recording, noting, writing - constantly producing more and more reports and documents which just stack up.

I have to admit that early in my career I worked on a project like that (to be honest, we didn't know better).  When we ran a "formal test", we needed three people - one to read the script, one to perform the action, and a third person who "witnessed" our test:
  • at the end of a successful step, the reader would tick a box in the printed hardcopy of the test script.  
  • at the end of a successful test case, we'd each sign off in our respective roles
  • at the end of a successful test phase, all our signed test scripts would be filed in our office in case of audit
And sometimes we'd have the customer come along to witness one of our tests - so we'd have a special signature box for those occasions too!

Pretty soon, of course, the office just couldn't keep up with all the paperwork we were generating.  In truth, we had probably become a fire hazard.  We looked like something from the movie Brazil ...


What we eventually learned was that having up to four people witness every test step was kind of inefficient.  Oh, sometimes it was worth having two people testing together to challenge each other, but mostly it was wasteful, slow, and boring.  At the end of my two years there I heard two testers running one of my scripts, and I was actually ashamed of how slow and boring I'd made testing on the project.

The problem with the Quality Audit model of testing is this:
  • just because you're recording everything you do doesn't mean you're doing the right thing.  It just means you can defend what you did.  And if you missed something because (a) people wouldn't give you feedback, or (b) you were committed to following your scripted, planned process even as you found issues, then too bad.
  • you're to blame for the software.  Notice that in Rex's take on testing, if there's a problem with the software, it's not the project manager, the developer, or the BA who is made to pack up their desk.  It's the tester.  The "QA tester" here is the fall guy - he owns quality, so if the product has issues it's because he didn't add quality at the end.  Fire him!  Why would anyone want to work in that role?  Personally, I'd rather be a wedding planner in Game of Thrones!
  • it creates the idea that testing is "just a very bureaucratic layer of software development".  We had three people performing one test - and doing it by slowly reading instructions and carrying them out.  Did we get the best value from those three testers using that model?  Hell no!  But the documented process was to die for, and it kept our auditors happy!  This feeds the idea many managers have that testing is full of waste.  Yes, we do need to document anything important (see my piece on exploratory testing for more), but to get the most value from our testing time, we need to spend as much of that time as possible hands-on with the software, trying out different ideas and pathways.
Of course, the problem with that last point on bureaucracy is that testing inevitably finds itself squeezed, and it's very tempting to do the wrong thing in that position (you keep the level of documentation, but test less, yes?).  As I've said, I'm a huge fan of tools like QTrace, which just take notes for me as I test, so I don't need to either follow a script or take time-consuming screenshots as I go.  Sometimes I know I have a problem area and want to try a few things out, and having it recorded helps me go back and show someone.  Even so, if a recording doesn't show anything, I don't save it "just because" - I want to avoid building up a huge library of files when I don't think there's anything useful in them.

Testing isn't about auditing software; it's about trying things on your software (the actual doing, not the documentation).  Probably the documentation that matters most is the bug report - but even so, I see test managers who have a hard time coping with teams that fix simple defects "using sticky notes", because then they can't use their tool to "see what defects were fixed 6 months ago".

I'm glad I see fewer and fewer "thought leaders" like Rex.  Their approach to software and to testing is pure bureaucratic toxicity.  They imagine a career path where, as long as you can generate a documented trail of evidence, you can claim the high ground in any project stuff-up.  I think somewhere down the line, Rex is in for a nasty shock ...

I'll wrap up with a superb and concise reply from Scott Barber to one of the reviewers,

"This review makes it clear that what you really want is to not have to think. Testing is a thought-engaged activity. I suggest that if you want to be told the "only" way to do a thing, that testing might not be the best career path for you."

1 comment:

  1. It is always interesting to read the comment sections. I have found that it is pretty rare to find anything that 100 percent of people are thrilled about. I mean even when you look at movies. Do you ever see any that have a 100 percent rating? The bad ratings are from people who don't know how the product works.

    Kent Gregory @ ARMATURE Corporation
