I took a break from blogging because I could not find time to write blogs worth reading. Also because a colleague urged me to stop writing blogs and focus on writing articles and books for 3rd-party publication, because their ROI is much better – ROI not as in money, but as in how many people read what you write. My standard for “blogs worth reading” has been articles and whitepapers and books. But there are many fascinating bloggers whose tone is more conversational than expository. They encourage conversation, and welcome disagreement. So, with this blog, I turn over a new leaf. If you find my blog uninteresting, please let me know – bloggers who put uninteresting stuff on the web might as well de-blog themselves.
Many engineers will tell you that unit tests find bugs early. Well, yes. If they find bugs, they will necessarily be early. How to write unit tests, though, is not obvious, and what you write to test a USB passthrough will differ radically from the unit tests for a codec algorithm. This year I’ve talked to three VPs in IT who were convinced that if you write unit tests for all system modules, then you can eliminate, or at least drastically reduce, functional and end-to-end testing. They would also like to hear that unit tests will find all the bugs that their functional tests have been missing.
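To make the contrast concrete, here is a hypothetical sketch of the easy case: an algorithmic module like a codec, whose output depends only on its input, is a natural fit for a plain unit test. The run-length encoder below is my own invented example, not anything from a real codebase.

```python
import unittest

# Hypothetical codec-style algorithm: pure function, output depends
# only on input, so a unit test needs no environment at all.
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Run-length encode a string into (char, count) pairs."""
    out: list[tuple[str, int]] = []
    for ch in data:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)  # extend the current run
        else:
            out.append((ch, 1))             # start a new run
    return out

class RleEncodeTest(unittest.TestCase):
    def test_empty_input(self):
        self.assertEqual(rle_encode(""), [])

    def test_runs_are_counted(self):
        self.assertEqual(rle_encode("aaab"), [("a", 3), ("b", 1)])

if __name__ == "__main__":
    unittest.main()
```

A USB passthrough gives you nothing this clean: its interesting behavior is timing, hardware state, and host interaction, none of which fits in a self-contained test like this one.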
I wish it were true. But comprehensive unit tests would have to cover both unconditional and conditional state transitions. Conditional state transitions are triggered by specific conditions, and where do those specific conditions arise? Not within the class. I think that to cover conditional transitions, you’re already into integration testing. It’s only words, but for this stage of the cosmic cycle we’re stuck with using words to communicate. Once you’re into integration testing, you’re no longer in unit testing, and so, at that early stage, you’ve already understood that “comprehensive testing”, that elusive Holy Grail, will at least include both unit tests and integration tests. Some other time it’d be interesting to blog about end-to-end testing, which is where we should be spending at least as much of our testing investment as we spend on unit testing, including getting the “smartest people”.
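Here’s a made-up example of what I mean. The Order class below (my own illustration, not from any real system) has an unconditional transition, place, and a conditional one, ship, whose triggering condition – stock availability – arises in a collaborator, not in Order itself. At unit level you can only fake that condition; exercising the real one is, by definition, integration.

```python
import unittest
from unittest import mock

class Order:
    def __init__(self, inventory):
        self.inventory = inventory  # collaborator that supplies the condition
        self.state = "new"

    def place(self):
        self.state = "placed"       # unconditional transition

    def ship(self, sku: str):
        # Conditional transition: triggered by a condition the class
        # does not control and cannot produce on its own.
        if self.state == "placed" and self.inventory.in_stock(sku):
            self.state = "shipped"

class OrderUnitTest(unittest.TestCase):
    def test_ship_when_in_stock(self):
        # At unit level the condition must be faked with a mock; only
        # an integration test exercises the real inventory behind it.
        inventory = mock.Mock()
        inventory.in_stock.return_value = True
        order = Order(inventory)
        order.place()
        order.ship("sku-1")
        self.assertEqual(order.state, "shipped")

    def test_ship_when_out_of_stock(self):
        inventory = mock.Mock()
        inventory.in_stock.return_value = False
        order = Order(inventory)
        order.place()
        order.ship("sku-1")
        self.assertEqual(order.state, "placed")

if __name__ == "__main__":
    unittest.main()
```

The mock tells you Order reacts correctly to the condition; it tells you nothing about whether the real inventory ever produces it.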
In the last couple of years we’ve been developing automated API tests using an approach we called “usecase-based API testing”: figure out what APIs are exercised in moving from step x to step y of a usecase, and group those APIs’ tests into an automated test suite. Reuse of such API test suites across usecases on one platform further justifies the cost of their development. Is it worth writing a paper to present at a conference? Not sure yet. What interests me is the similarity to what Binder called the thread-based strategy for integration testing, where you line up unit tests for all the classes that would be exercised in responding to an external input (could be user input or a web service feed or whatever), and execute them as a group. He contrasted that to uses-based integration testing, which first tests classes that don’t use (or use very few) server classes, then those that use a few more, on up to full integration. The terminology is flipped, but I think our usecase-based API testing is Binder’s thread-based testing.
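A minimal sketch of the grouping idea, using unittest’s TestSuite. Everything here is hypothetical – the fake_login and fake_add_item stand-ins, the test names, and the “logged in → item in cart” segment are all invented for illustration; a real suite would call the platform’s actual APIs.

```python
import unittest

# Stand-ins for real API calls (hypothetical; a real suite would hit
# the platform's actual endpoints).
def fake_login(user: str) -> str:
    return f"token-{user}"

def fake_add_item(sku: str) -> str:
    return "added"

# One TestCase per API, as usual.
class LoginApiTest(unittest.TestCase):
    def test_login_returns_token(self):
        self.assertTrue(fake_login("alice").startswith("token-"))

class CartApiTest(unittest.TestCase):
    def test_add_item(self):
        self.assertEqual(fake_add_item("sku-1"), "added")

def cart_segment_suite() -> unittest.TestSuite:
    """Group the tests for the APIs exercised in one usecase segment,
    say from 'logged in' to 'item in cart'."""
    suite = unittest.TestSuite()
    suite.addTest(LoginApiTest("test_login_returns_token"))
    suite.addTest(CartApiTest("test_add_item"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(cart_segment_suite())
```

The same TestCases get re-grouped into different suites for different usecase segments, which is where the reuse comes from.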
Donald Knuth, the great Stanford-rooted author of The Art of Computer Programming, said that he had no use for unit tests, but I think he was talking about developing code single-handedly. He said he’d use them in “a totally unknown environment”, which I think (I’m only half-joking) is how he might classify writing a datagrid renderer using third-party libraries. The reason he could be so confident that he didn’t need unit tests was that he could rely on his Meta-Simulator (for MMIX anyway).