03/24/09

Hurrah for Automation, But Neglect Manual Testing at High Cost

Buckflow’s board of directors had been alarmed by the report of deployment failures and was adamant that this must never happen again. Because of the number of customizations, and because of the frequency of upgrades, the VP believed that the only solution was full automation. He understood that this could only be a long-term goal, but he wanted my company’s help in making it happen, along with whatever bridge solutions were required between now and then.

Their routine for QA and testing, which until now had worked satisfactorily, lacked sophistication. The development team provided unit testing, and business analysts provided acceptance testing. A small QA team of four full-time test engineers spent all its time developing and executing test cases for the features and fixes to be rolled into the next scheduled monthly patch. A week before the scheduled release, everyone in the product development group installed the release candidate on their machines and, for a couple of days, ran scenarios selected from a distributed list. The test team barely had time to run their new test cases. They had no modular tests, and they had stopped running regression tests at least two years earlier.

Lack of sophistication in test strategy, the obvious problem at Buckflow, is not unusual. I pointed out that bugs found in the design stage are far less expensive to fix than bugs found during product integration testing. Also, and this applied especially to Buckflow, every bug fix is a weak link: because the product’s original design did not anticipate it, the fix must be retested in every release. The VP nodded with evident regret and said that they had thought disciplined development combined with unit testing would be sufficient.

It’s also not unusual to find companies that continue to place naive faith in automation, in spite of the evidence against such disturbingly resilient illusions as the following:

– automation eliminates human errors that result from fatigue, boredom, and disinterest;

– automation can be re-used indefinitely (write once, run many);

– automation provides more coverage than manual testing;

– automation eliminates the need for costly manual testing.

Every one of the above statements makes sense if properly qualified. Automation may eliminate some or all of the errors that result from manual testers growing weary, but it can also introduce errors that stem just as surely from fatigue, boredom, and disinterest, this time on the part of the people who develop and maintain the automation.

Automation can be re-used indefinitely, provided that the application or system under test does not change, and that nothing else changes that might affect execution, such as common libraries or runtime environments (e.g. Java). Once such changes occur, whatever return on investment has been realized from automation can quickly be wiped out by maintenance costs, at which point the remaining advantages of automation are strategic, not economic.
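To make the maintenance problem concrete, here is a minimal sketch; the application, endpoint behavior, and field names are hypothetical. An automated check that passes against one release silently becomes a maintenance item the moment the response schema changes, even though the product still works.

```python
# Hypothetical example: an automated check coupled to one release's schema.
# When release 2.0 renames "balance" to "available_balance", the test
# breaks and must be maintained, eroding the automation's return on
# investment even though the product behavior is unchanged.

def get_account(release: str) -> dict:
    """Stand-in for the system under test across two releases."""
    if release == "1.0":
        return {"balance": 100.0, "currency": "USD"}
    return {"available_balance": 100.0, "currency": "USD"}  # schema changed

def test_balance(release: str) -> None:
    account = get_account(release)
    assert account["balance"] == 100.0  # breaks on release 2.0

if __name__ == "__main__":
    test_balance("1.0")      # passes
    try:
        test_balance("2.0")  # KeyError: the test failed, not the product
    except KeyError as exc:
        print(f"automation broke on schema change: missing key {exc}")
```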

If “coverage” means “test coverage” (rather than “code coverage”), then yes, automation can even provide 100% coverage: one need only automate every available test case. A more significant measure, however, is the degree of code function or code path coverage that those test cases provide. And while achieving 80% code path coverage may be better than 70%, the more significant consideration is what has not been covered, and why.
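A small sketch of the distinction, with a hypothetical function and test suite: automating every documented test case yields 100% “test coverage”, yet an entire code path goes unexercised, which is exactly what a code coverage tool would reveal.

```python
# Hypothetical example: every documented test case is automated (100%
# "test coverage"), yet one code path is never exercised.

def shipping_fee(order_total: float, is_member: bool) -> float:
    if order_total < 0:
        raise ValueError("order total cannot be negative")  # path never tested
    if is_member:
        return 0.0
    return 5.99 if order_total < 50 else 0.0

# The complete documented test suite -- all of it automated:
def test_member_ships_free():
    assert shipping_fee(20.0, is_member=True) == 0.0

def test_small_order_pays_fee():
    assert shipping_fee(20.0, is_member=False) == 5.99

def test_large_order_ships_free():
    assert shipping_fee(75.0, is_member=False) == 0.0

if __name__ == "__main__":
    test_member_ships_free()
    test_small_order_pays_fee()
    test_large_order_ships_free()
    # Run under a coverage tool (e.g. "coverage run" from coverage.py) and
    # the negative-total branch shows up as uncovered; the interesting
    # question is why no test case for it ever existed.
    print("all documented test cases pass; negative-total path untested")
```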

To avoid manual testing at all costs would be the costliest option, because only in manual testing can all of a test engineer’s understanding, probity, logic, and cleverness be put to work. Security testing of Buckflow at the application level, for example, depends on how the application was developed, where it stores its cookies, what scripts it runs during initialization and various transactions, how stateful sessions are maintained over the inherently stateless HTTP protocol, and so on.
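As one illustration of the kind of exploratory probing a test engineer does here, the sketch below (the host is hypothetical) pulls an application’s raw Set-Cookie headers so a tester can see whether session cookies carry flags such as Secure and HttpOnly. Deciding what the findings mean for this particular application remains manual work.

```python
# A small helper a tester might script while probing session handling.
# The host is hypothetical; the script only surfaces raw headers, since
# judging their significance still requires a human.
import http.client

def session_cookie_report(host: str, path: str = "/") -> None:
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.request("GET", path)
    response = conn.getresponse()
    for name, value in response.getheaders():
        if name.lower() == "set-cookie":
            flags = [f for f in ("Secure", "HttpOnly", "SameSite")
                     if f.lower() in value.lower()]
            print(f"{value.split(';', 1)[0]}  flags: {flags or 'NONE'}")
    conn.close()

if __name__ == "__main__":
    session_cookie_report("app.example.com")  # hypothetical host
```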

While there are commercial test tools that can verify an application’s defenses against cookie-cutter varieties of denial of service, or even exercise perhaps 50% of the threat model for a typical application, interoperation with new applications and with new versions of underlying technologies still requires at least a significant investment in manual testing.

More obvious needs for manual testing include penetration testing, usability testing, and localization testing. But Buckflow had a particularly acute need: testing massively configurable applications in diverse environments. While there was room to talk about keyword-driven automation (sketched below), it was clear that only manual testing would be able to identify configuration issues. In the end, we agreed that the best approach would combine carefully orchestrated automated tests with rigorous manual testing.
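For readers unfamiliar with the pattern, here is a minimal sketch of keyword-driven automation; the keywords, steps, and URL are hypothetical. Test steps live in a data table, and a small interpreter maps each keyword to an action, so test authors can compose new tests from an existing vocabulary without writing code.

```python
# Minimal keyword-driven runner; keywords, steps, and URL are hypothetical.
# Test authors edit the STEPS table; only the keyword library is code.

def open_page(url):      print(f"opening {url}")
def type_text(field, s): print(f"typing {s!r} into {field}")
def click(button):       print(f"clicking {button}")
def verify_text(s):      print(f"verifying page shows {s!r}")

KEYWORDS = {
    "open": open_page,
    "type": type_text,
    "click": click,
    "verify": verify_text,
}

# One test expressed as data, not code:
STEPS = [
    ("open",   "https://app.example.com/login"),
    ("type",   "username", "qa_user"),
    ("type",   "password", "secret"),
    ("click",  "Sign in"),
    ("verify", "Welcome"),
]

def run(steps):
    for keyword, *args in steps:
        KEYWORDS[keyword](*args)  # look up the action and execute it

if __name__ == "__main__":
    run(STEPS)
```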
