The Evolving Definition of Unit Testing
Back in the day a unit test was defined as something like:
"[a] test that exercises a single class in isolation of all others"
The really big shake-up in unit testing came with the development of JUnit and its clones. These made it a requirement that unit tests be automated and repeatable. Along with the introduction of JUnit, some developers actually started writing unit tests, rather than just reading about them in books. The operational definition of unit testing shifted to:
“tests that are run against a project using a unit testing framework”
After all, if you're using a unit testing framework, what you're writing must be a unit test - surely? The requirement for a test to cover a single class was relaxed. What mattered was that the test was automated and repeatable.
Now lots of people have attempted unit testing, and some have succeeded. But many - I suspect most - have failed. It's just a hard skill to acquire. It's rock hard for a novice unit tester to apply unit testing to an existing code base - the code just isn't testable. Even on a virgin code base it's difficult. You can easily end up with large chunks of the system that you don't know how to test.
The thing that makes unit testing difficult is external dependencies: the points where your code starts to talk to something else, the boundaries of the system. Examples of external dependencies are:
- Database
- SMTP client
- Network
- Active Directory
- Creating a process
- Web services
- File system
- HttpContext
- Message queues
- Windows registry
- Calling outside the CLR
- Calling outside the current AppDomain
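To make that concrete, here is a small, made-up sketch of the kind of code that resists unit testing (the class, method and path names are invented for illustration). The pricing rule is the thing we actually care about, but it is welded to a file-system call, so any test of it has to touch a real disk:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class OrderService {
    // The business logic we would like to unit test...
    public double priceOrder(int quantity, double unitPrice) throws IOException {
        double total = quantity * unitPrice;
        if (quantity > 100) {
            total *= 0.9; // bulk discount
        }
        // ...but it talks straight to the file system, so any test of
        // priceOrder() ends up touching a real disk.
        Files.write(Paths.get("C:\\logs\\orders.log"),
                ("Order priced at " + total + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        return total;
    }
}
```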
The problem with external dependencies is that you have to configure them prior to the test and verify them after it. This set-up and verification is difficult - it makes the tests hard to write and harder still to understand. It also makes the tests slow. This led Michael Feathers to adopt the hard-line position:
A test is not a unit test if:
- It talks to the database
- It communicates across the network
- It touches the file system
- It can't run at the same time as any of your other unit tests
- You have to do special things to your environment (such as editing config files) to run it.
So just having automated tests isn't good enough. To make unit testing work well, you must somehow replace the external dependency during the test. What you need is to replace the dependency with an in-memory object that is easy to set up and verify - a stub (or mock, fake or dummy). But how? You talk to your dependency via an interface; then, during testing, you can replace the real implementation with the stub/mock/fake/dummy version.
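Here is one way that refactoring might look, sticking with the made-up pricing example from above and plain JUnit 4. The file-system call moves behind an AuditLog interface, production code uses a FileAuditLog, and the unit test swaps in an in-memory stub that is trivial to set up and verify:

```java
// AuditLog.java - the external dependency hidden behind an interface
public interface AuditLog {
    void record(String entry);
}

// FileAuditLog.java - production implementation, the only place that touches the file system
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileAuditLog implements AuditLog {
    private final Path logFile;

    public FileAuditLog(Path logFile) {
        this.logFile = logFile;
    }

    @Override
    public void record(String entry) {
        try {
            Files.write(logFile, (entry + System.lineSeparator()).getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}

// InMemoryAuditLog.java - the stub: an in-memory object, easy to set up and verify
import java.util.ArrayList;
import java.util.List;

public class InMemoryAuditLog implements AuditLog {
    public final List<String> entries = new ArrayList<>();

    @Override
    public void record(String entry) {
        entries.add(entry);
    }
}

// OrderService.java - the business logic takes the dependency through its constructor
public class OrderService {
    private final AuditLog auditLog;

    public OrderService(AuditLog auditLog) {
        this.auditLog = auditLog;
    }

    public double priceOrder(int quantity, double unitPrice) {
        double total = quantity * unitPrice;
        if (quantity > 100) {
            total *= 0.9; // bulk discount
        }
        auditLog.record("Order priced at " + total);
        return total;
    }
}

// OrderServiceTest.java - a unit test: no file system, fast, repeatable
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderServiceTest {
    @Test
    public void bulkOrdersGetTenPercentDiscount() {
        InMemoryAuditLog log = new InMemoryAuditLog();
        OrderService service = new OrderService(log);

        double total = service.priceOrder(200, 1.0);

        assertEquals(180.0, total, 0.001);
        assertEquals(1, log.entries.size());
    }
}
```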
Just think about that for a moment. To make unit testing work you have to change the design of the system, so that all external dependencies are accessed via an interface. You have to change the architecture of the system, to make testing possible. That's really quite shocking.
All this leads to my current definition of unit testing:
- Unit test - an automated test, without external dependencies.
- Integration test - an automated test, with external dependencies.
The aim is to test all of the business logic within unit tests. The integration tests are necessary and important, but their scope is limited to testing the interface to the external dependency; they need not test business logic too. The unit and integration tests are kept separate, say in different projects. This means you can run the fast unit tests frequently; the slower integration tests are run less often - say when you know you've changed the interface, or prior to check-in. It is not a strict requirement that a unit test cover only a single class, though in practice if your test covers too much it will be hard to understand.
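For instance, an integration test for the hypothetical FileAuditLog sketched above might look something like this (using JUnit 4's TemporaryFolder rule). It really does touch the file system, which is why it lives apart from the unit tests and runs less often, and it only exercises the interface to that dependency - not the pricing rules:

```java
import static org.junit.Assert.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class FileAuditLogIntegrationTest {
    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Test
    public void recordAppendsEntryToLogFile() throws Exception {
        Path logFile = folder.newFile("orders.log").toPath();
        FileAuditLog log = new FileAuditLog(logFile);

        log.record("Order priced at 180.0");

        String contents = new String(Files.readAllBytes(logFile));
        assertTrue(contents.contains("Order priced at 180.0"));
    }
}
```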
You can see, from the definitions of unit testing described here, the cultural relativism implicit in whichever definition we use. What you think unit testing is depends on whether you actually write any, the tools you use, and how good you are at it.