Multiple Asserts Are OK
Some people suggest we should restrict ourselves to a single assertion per test. Are multiple asserts in a test ever OK?
I first heard the "one assert per test" idea years ago from Dave Astels, and others have picked it up since then.
There are several cases where multiple assertions are used, some bad, some OK:
- Run-on test
- Missing method or object
- De-duplication of similar tests
- Confirming setup
- Probe with multiple assertions
Multiple assertions can be a sign of trouble in tests. But as you'll see by the end, some situations make good use of groups of assertions.
Run-On Test
A run-on test is like a run-on sentence: it meanders along, probably going somewhere, but you're never quite sure where you'll end up.
The test looks like this:
    set up some objects
    trigger some behavior on the tested object
    verify its consequences (one or more asserts)
    trigger some more behavior on the same object
    verify those consequences
    trigger even more behavior
    verify even more
    trigger and verify
    trigger and verify
    etc.
I worked with a testing group that had system tests like that: a 155-step test, where step 4 failed. They couldn't complete that test run until very late in development. (Their original plan was to file 152 defect reports, but I talked them out of that.)
Why are tests like this a problem?
- The test is fragile: any mistake early in the test makes the rest of the test useless or confusing.
- The test is hard to debug: to check something late in the test, you have to look at a lot of irrelevant things along the way.
- The test makes you play computer: if anything goes wrong, you have to re-trace everything step-by-step from the very beginning. You can't pick up in the middle and trust that you know what to expect.
The solution is to split the long test into separate tests. Once you've done this, you can apply other guidelines to improve it further.
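As a minimal sketch of that split, here is a hypothetical stack example (the class and test names are illustrative, not from the article): instead of one long test that pushes, pops, and checks in sequence, each behavior gets its own test with its own fresh setup, so a failure in one doesn't poison the rest.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class RunOnSplitExample {
    // Each test arranges its own state from scratch, so it can be
    // read, run, and debugged independently of the others.
    static void testPushMakesStackNonEmpty() {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("a");
        assertEquals(1, stack.size());
    }

    static void testPopReturnsLastPushed() {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("a");
        stack.push("b");
        assertEquals("b", stack.pop());
    }

    // Tiny stand-in for a test framework's assertEquals.
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    public static void main(String[] args) {
        testPushMakesStackNonEmpty();
        testPopReturnsLastPushed();
        System.out.println("ok");
    }
}
```

Each small test still may use more than one assert, but every assert relates to the single behavior that test is about.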
Is there ever a place for this run-on style? The one time I felt justified was in testing selection on a grid component, where pressing or releasing key combinations would shift future selections. It felt clearer to walk through cases such as: a shift-selection adds a string of items, then an alt-click on one removes a single item, but alt-clicking the same item puts it back.
Were I testing this today, I'd look harder for the state machine or some other separation that would let me test more simply, but clarity of the test is the key.
Missing Method or Object
Consider this test:
    set up some object
    Point p = someObject.triggerBehavior()
    assertEquals(7, p.x)
    assertEquals(8, p.y)
We have multiple assertions because the test is tearing apart the result object to verify everything.
This code uses a Point object, which may be good, but we've got feature envy: our assertion code is all about the insides of some other object. Our assertion is better if we compare whole objects for equality:
    assertEquals(new Point(7, 8), p)
In other cases, there's a missing object. (Imagine if we hadn't returned a Point, but made you query the tested object for x and y separately.) The solution to a missing object is to create it and start using it.
"Primitive Obsession" is a common smell; we often resist creating simple objects for things like points, dates, ranges, money. But once we do, we reduce duplication and increase abstraction across the whole system.
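A whole-object assertion only works if the value object defines equality. Here is a minimal sketch of such a `Point` (the fields match the article's example; the `equals`/`hashCode`/`toString` details are my assumptions):

```java
import java.util.Objects;

class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    // Value equality: two Points are equal if their coordinates match.
    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override public int hashCode() { return Objects.hash(x, y); }

    // A readable toString makes assertion failure messages useful.
    @Override public String toString() { return "Point(" + x + ", " + y + ")"; }
}

class PointEqualityExample {
    public static void main(String[] args) {
        Point p = new Point(7, 8);
        // One whole-object assertion replaces two field-level ones.
        if (!new Point(7, 8).equals(p))
            throw new AssertionError("points differ: " + p);
        System.out.println("ok");
    }
}
```

With `equals` and `toString` in place, a single `assertEquals(new Point(7, 8), p)` both checks everything and reports a readable diff on failure.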
De-Duplication of Similar Tests
You may have a series of tests, each with a very similar-looking group of assertions at the end.
Your duplication detector kicks in, and you think, "I'll tolerate some duplication in tests, but this is ridiculous!"
So you create a custom assertion method that asserts the common things, and make all the tests use it. Now they use only one assertion per test, the custom one.
This can definitely help. I always take a minute more to ask, "Does that custom assertion reflect a responsibility that belongs on the object? Could the object's production clients use it?"
I've especially seen this pattern for higher-level tests, such as acceptance tests written in a unit testing framework. It can be an intermediate step along the way to creating a custom testing language.
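A sketch of such a custom assertion, using a hypothetical `Account` class (all names here are illustrative assumptions): several tests that each checked an account's status and balance collapse their shared expectations into one named assertion.

```java
class Account {
    final int balance;
    final boolean open;
    Account(int balance, boolean open) { this.balance = balance; this.open = open; }
}

class CustomAssertExample {
    // The custom assertion names the shared expectation once; each test
    // now makes a single call instead of repeating the same two asserts.
    static void assertOpenWithBalance(int expectedBalance, Account account) {
        if (!account.open)
            throw new AssertionError("expected an open account");
        if (account.balance != expectedBalance)
            throw new AssertionError("expected balance " + expectedBalance
                + " but was " + account.balance);
    }

    public static void main(String[] args) {
        assertOpenWithBalance(100, new Account(100, true));
        System.out.println("ok");
    }
}
```

Note the question from the article still applies: if "open with a given balance" is a concept production code cares about, it may belong on `Account` itself rather than in the test helper.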
Confirming Setup
We sometimes have an "arrange" that's complicated enough we don't trust it:
    // arrange
    set up something hairy
    // assert on setup
    assert setup looks right
    // act
    probe a behavior
    // assert
    verify its consequences
We don't expect that first assertion to fail, but we don't fully trust the test without it.
We have a few options:
- Live with it. We may judge that the extra safety and clarity in the test are worth the setup assertion.
- Split the test in two:
        test1:
            set up something hairy
            assert that it looks right
        test2:
            set up the same hairy situation
            probe a behavior
            verify its consequences
This is often an improvement.
- Create a separate fixture just for tests that use that hairy setup:
        setup:
            set up something hairy
            assert that it's as expected (or do this in a test method)
        test:
            probe a behavior
            verify its consequences
- Question it. Figure out why it's so hard to set up and trust the objects in the first place. (Geepaw Hill says, "Don't take hard shots.") There can be any number of problems: the design is too complicated, there are too many collaborators, there are missing objects, the test case is doing too much, etc.
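The separate-fixture option above can be sketched like this (a minimal hand-rolled fixture; in a real framework the `setUp` method would run automatically before each test, and all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

class HairyFixture {
    List<String> world;

    // Shared "arrange": every test in this fixture starts from the same
    // hairy state, and the setup is confirmed once, here, instead of
    // being re-asserted in every test.
    void setUp() {
        world = new ArrayList<>();
        world.add("seed");
        if (world.size() != 1)
            throw new AssertionError("setup failed: expected 1 element");
    }

    void testProbeBehavior() {
        setUp();
        world.add("probe");
        if (world.size() != 2)
            throw new AssertionError("expected 2 elements after probe");
    }

    public static void main(String[] args) {
        new HairyFixture().testProbeBehavior();
        System.out.println("ok");
    }
}
```

The tests themselves stay short: they trust the fixture, and a broken setup fails fast with a message pointing at the setup rather than at some downstream assertion.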