Published on O'Reilly (http://oreilly.com/)

The Importance of Unit Testing

by Steven Feuerstein



What is Unit Testing?

A unit test is a test that a developer writes to ensure that his or her "unit" -- usually a single program -- works properly. A unit test is very different from a system or functional test; those latter types of test are oriented to application features or overall testing of the system. You cannot properly or effectively perform a system test until you know that the individual programs behave as expected.
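Even without a framework, the idea can be made concrete. Here is a minimal sketch of a hand-rolled unit test for a single program; the function name expand_status and its expected behavior are hypothetical:

```sql
-- Hypothetical unit under test: a function expand_status that is
-- supposed to map the code 'C' to the word 'CLOSED'.
BEGIN
   IF expand_status ('C') = 'CLOSED'
   THEN
      DBMS_OUTPUT.put_line ('expand_status: SUCCESS');
   ELSE
      DBMS_OUTPUT.put_line ('expand_status: FAILURE');
   END IF;
END;
/
```

A system test would exercise whole screens or business flows; this checks just the one program, which is what makes it a unit test.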

You would therefore expect that programmers do lots of unit testing and have a correspondingly high level of confidence in their programs. Ah, if only that were the case! The reality is that developers generally perform an inadequate number of inadequate tests and figure that if the users don't find a bug, there is no bug. Why does this happen? Let me count the ways...

  1. The psychology of success and failure. We are so focused on getting our code to work correctly that we generally shy away from bad news, from even wanting to take the chance of getting it. Better to do some cursory testing, confirm that it seems to be working OK, and then wait for others to find bugs, if there are any (as if there were any doubt).

  2. Deadline pressures. Hey, it's Internet time! Time to market determines all. We need everything yesterday, so let's be just like Microsoft and Netscape: release pre-beta software as production and let our users test/suffer through our applications.

  3. Management's lack of understanding. IT management is notorious for not really understanding the software development process. If we are not given the time and authority to write (in the broadest sense, including testing, documentation, refinement, etc.) our own code properly, we will always end up with buggy junk that no one wants to admit ownership of.

  4. Overhead of setting up and running tests. If it's a big deal to write and run tests, they won't get done: "I don't have time; there is always something else to work on." One consequence of this point is that more and more of the testing is handed over to the QA department, if there is one. That transfer of responsibility is, on the one hand, positive: quality assurance professionals can have a tremendous impact on application quality. Yet developers must take and exercise responsibility for unit testing their own code; otherwise the testing/QA process becomes much more frustrating and extended.

The bottom line is that our code almost universally needs more testing. I can't help with deadline pressures, and my ability to improve your manager's understanding of the need to take more time to test is limited. So how about if I instead offer you a "framework" -- a set of processes and code elements -- that will allow you to test your code more easily?

You might even find that by using my framework (code-named utPLSQL for "Unit Test PL/SQL") testing becomes something you look forward to!

About Extreme Programming

First let me share with you how I came up with my idea for a unit test framework. I love to learn from the experience of others -- and I like to make sure to give them credit.

Have you heard of "Extreme Programming"? Scary name, but some really great ideas! Check out www.xprogramming.com and www.extremeprogramming.org for lots of information and background. I also recommend picking up a copy of Extreme Programming Explained by Kent Beck. I will offer a summary here and then most importantly apply the ideas to the world of PL/SQL.

Work with Human Nature

We don't like to test, but we know we should do lots more of it than we do. To get developers to test more thoroughly, we've got to make the process as easy, fast and painless as possible. The tests are just the means to improving the quality of our code. They are not an end in and of themselves. So the unit testing framework must be "lightweight" -- easy to install and easy to use, easy to modify and easy to run.

We should also be realistic about what we test. We cannot possibly test everything -- but that's not an excuse to do no testing. Any testing at all is better than none. We should instead focus our attention on those portions of the code that we think are most likely to break. Sure, bugs can appear anywhere, but we do need to prioritize our efforts.

Write Tests First!

This may sound strange, but it actually makes a whole lot of sense. You have been tasked to write a program to do X. The first inclination is to sit down and knock out a bunch of code. That's fun! It can also lead to an enormous waste of time and effort. Here's a different approach:
  1. Forget about the internals of the program. Make sure that you understand what it needs to do. Have you figured that out? Great...

  2. Determine the data going in (parameters, hopefully) and the measurable results from running the program ("If I pass in the value 'C', then the function should return 'CLOSED'.").

  3. Think about the different kinds of values that you would want to use to test the program.

  4. Build a separate test case for each different set of data/results. When you build the test case (explained later in this doc), you will be designing (and probably modifying a couple of times) the header of the program to be tested -- even before you have written anything.

  5. Once you have written up your various tests, you are in a much better position to implement the program with fewer false starts.
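To make the steps concrete, here is a sketch of what such a test might look like, written before the program itself exists. It assumes the classic utPLSQL conventions (a test package whose name is the tested program's name prefixed with ut_, with ut_setup and ut_teardown procedures and utAssert assertions); the function expand_status and its expected behavior are hypothetical:

```sql
CREATE OR REPLACE PACKAGE BODY ut_expand_status
IS
   PROCEDURE ut_setup IS BEGIN NULL; END;      -- no test data to prepare
   PROCEDURE ut_teardown IS BEGIN NULL; END;   -- nothing to clean up

   -- One assertion per set of data/results chosen in steps 2-4:
   PROCEDURE ut_expand_status
   IS
   BEGIN
      utAssert.eq ('Known code', expand_status ('C'), 'CLOSED');
      utAssert.eq ('Another known code', expand_status ('O'), 'OPEN');
      utAssert.isnull ('Unknown code', expand_status ('X'));
      utAssert.isnull ('NULL in, NULL out', expand_status (NULL));
   END ut_expand_status;
END ut_expand_status;
/
```

Simply writing these assertions forces you to settle the program's name, parameter list, and return type -- the header design mentioned in step 4 -- before a line of the implementation exists.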

Code a Little, Test Thoroughly

As Kent Beck, the author of Extreme Programming Explained and one of the founders of the Extreme Programming "lightweight methodology", puts it, "XP takes commonsense principles and practices to extreme levels:

"If code reviews are good, we'll review code all the time (pair programming).

"If testing is good, everybody will test all the time (unit testing), even the customers (functional testing)." [page XV]

I'm not going to explore pair programming (in which no one programs alone: everyone works in pairs, with one person working tactically on the task at hand while the other thinks strategically about ways to improve the code) in this text. Instead, let's focus on "test all the time".

XP recommends that as you write code, you also build unit tests to test that code. To build applications most efficiently, you should also test that code incrementally. In other words, don't write a single, humongous 5,000 line procedure implementing some complex business rule and then try to test that big blob all at once. Create smaller, modular programs that can be tested individually and then combined to implement the business rule.

So: you sit in front of your computer ready to implement a new program. First, write the unit test: what is this program supposed to do? How will I know when it is working properly? Notice that as you build the tests, you are designing the interface and functionality of the program (which is by no means always well-understood before a person starts coding). Once the tests are constructed, write the program (a little program). Then run the tests, fix the code, and get it working. Great! Time to move on to the next program.
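As a sketch of the "code a little" step, here is a small, testable implementation of a hypothetical status-expanding function -- deliberately a tiny, single-purpose unit rather than a piece of a 5,000-line blob:

```sql
CREATE OR REPLACE FUNCTION expand_status (code_in IN VARCHAR2)
   RETURN VARCHAR2
IS
BEGIN
   -- A small, modular unit: easy to test in isolation,
   -- easy to combine with other units later.
   RETURN CASE code_in
             WHEN 'C' THEN 'CLOSED'
             WHEN 'O' THEN 'OPEN'
          END;   -- any other value (including NULL) yields NULL
END expand_status;
/
```

Write the test, write this much code, run the test, fix it until it passes -- and only then move on to the next program.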

Isolated, Automated Testing

One of the central tenets of XP is that programmers must write automated unit tests so that "their confidence in the operation of the program can become part of the program itself."

Almost any program you write will have numerous scenarios that need to be tested before you can be confident that the code works as desired. You can throw together one or five or ten individual scripts to run those tests, but how easy will it be to run those tests, again and again? If it is not really, really easy (including setting up and cleaning up data), you just won't do it, right?

It is also important to isolate your tests. If the failure of one test causes one hundred other tests to fail, you lose faith in the results of your tests.

XP recommends the use of a testing framework and supporting code so that you can execute a single unit test or an entire test suite with a single program or even a click on a GUI screen. The tests that you have written run automatically and provide clear indicators of success or failure.
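With utPLSQL, that "single program" is one procedure call; the package and suite names below are, of course, assumptions:

```sql
-- Run every ut_% test procedure for one program with a single call:
SQL> exec utPLSQL.test ('expand_status')

-- Or run an entire suite of test packages in one go:
SQL> exec utPLSQL.testsuite ('my_application')
```

Because setup and teardown live inside the test package itself, the same call works identically today, tomorrow, and after every change to the code.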

Red Light, Green Light

You cannot test very efficiently if it takes you half an hour to analyze the results of your test to see if your code worked properly. You should be able to get a clear, unambiguous "green light" if everything was fine. You should also be notified as clearly of a "red light" situation, a failure of one or more of the tests -- and not just that you had a failure, but which test failed and how the results differed from what was expected.

In the utPLSQL framework, a successful run of a test will display the following message:

<package> - SUCCESS

whereas a failure will give me a very different result:

    Between spaces; expected "is not", got "is not"
    Test negative start; expected "a s", got "a str"
    Between spaces; expected "not muc|h", got "not much"
    Not inclusive; expected "h of a st", got "h of a str"

The utPLSQL software has been designed to offer a robust API so that GUI developers can build front-ends that truly offer Red Light, Green Light visual notification.

Transform Bug Reports into Test Cases

When someone reports a bug, your first inclination will be to dive into the code and try to fix it. Hold off! Before you start mucking around in your software, write a test case that verifies the bug. Add it to the test suite and run the test to make sure that you can reproduce the bug (i.e., the test fails).

Once you have done all that, analyze your code and determine what it is that needs to be fixed. Before you re-run your test, walk through the code and verify to yourself that from a purely logical standpoint it should fix the problem.

Now run your test. Hopefully -- and now much more likely -- you will get a green light.

The result? Your code has been repaired, you can verify the fix, you have expanded your scenarios for future testing, and you can make sure that the bug never comes back.
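Sketched in utPLSQL terms (the bug number, function name, and behavior are hypothetical), the first step is a new assertion in the existing test procedure that reproduces the report:

```sql
-- Reported bug: a lower-case status code is not recognized.
-- This assertion fails until the code is fixed, and from then on
-- guards against the bug ever silently returning:
utAssert.eq ('Bug 1234: lower-case code accepted',
             expand_status ('c'), 'CLOSED');
```

Run the suite, watch the new assertion go red, fix the code, and watch the whole suite go green.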


Copyright © 2009 O'Reilly Media, Inc.