Multiple Asserts Are OK
Some people suggest we should restrict ourselves to a single assertion per test. Are multiple asserts in a test ever OK?
I first heard the "one assert per test" idea years ago from Dave Astels, and others have picked it up since then.
There are several cases where multiple assertions are used, some bad, some OK:
- Run-on test
- Missing method or object
- De-duplication of similar tests
- Confirming setup
- Probe with multiple assertions
Multiple assertions can be a sign of trouble in tests. But as you'll see by the end, some situations make good use of groups of assertions.
Run-On Test
A run-on test is like a run-on sentence: it meanders along, probably going somewhere, but you're never quite sure where you'll end up.
The test looks like this:
```
set up some objects
trigger some behavior on the tested object
verify its consequences (one or more asserts)
trigger some more behavior on the same object
verify those consequences
trigger even more behavior
verify even more
trigger and verify
trigger and verify
etc.
```
I worked with a testing group that had system tests like that: a 155-step test, where step 4 failed. They couldn't complete that test run until very late in development. (Their original plan was to file 152 defect reports, but I talked them out of that.)
Why are tests like this a problem?
- The test is fragile: any mistake early in the test makes the rest of the test useless or confusing.
- The test is hard to debug: to check something late in the test, you have to look at a lot of irrelevant things along the way.
- The test makes you play computer: if anything goes wrong, you have to re-trace everything step-by-step from the very beginning. You can't pick up in the middle and trust that you know what to expect.
The solution is to split the long test into separate tests. Once you've done this, you can apply other guidelines to improve it further.
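To make the split concrete, here's a minimal JUnit sketch using a made-up Cart class: each behavior gets its own test with its own fresh setup, so an early failure can't poison later checks.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class CartTest {
    // Minimal stand-in so the sketch compiles; not from any real codebase.
    static class Cart {
        private final List<String> items = new ArrayList<>();
        void add(String item) { items.add(item); }
        void remove(String item) { items.remove(item); }
        int itemCount() { return items.size(); }
    }

    // Instead of one test that adds, then removes, then re-adds (and dies
    // at the first failed step), each behavior stands alone:
    @Test
    void addingAnItemIncreasesTheCount() {
        Cart cart = new Cart();
        cart.add("apple");
        assertEquals(1, cart.itemCount());
    }

    @Test
    void removingAnItemDecreasesTheCount() {
        Cart cart = new Cart();
        cart.add("apple");
        cart.remove("apple");
        assertEquals(0, cart.itemCount());
    }
}
```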
Is there ever a place for this run-on style? The one time I felt justified was in testing selection on a grid component, where pressing or releasing key combinations would shift future selections. It felt clearer to walk through scenarios: a shift-selection adds a run of items, then an alt-click on one of them removes a single item, but alt-clicking the same item puts it back.
Were I testing this today, I'd look harder for a state machine or some other separation that would let me test more simply; either way, clarity of the test is the key.
Missing Method or Object
Consider this test:
```
setup some object
Point p = someObject.triggerBehavior()
assertEquals(7, p.x)
assertEquals(8, p.y)
```
We have multiple assertions because the test tears the result object apart, verifying it field by field.
This code uses a Point object, which may be good, but we've got feature envy: our assertion code is all about the insides of some other object. Our assertion is better if we compare whole objects for equality:
```
assertEquals(new Point(7, 8), p)
```
In other cases, there's a missing object. (Imagine if we hadn't returned a Point, but made you query the tested object for x and y separately.) The solution to a missing object is to create it and start using it.
"Primitive Obsession" is a common smell; we often resist creating simple objects for things like points, dates, ranges, money. But once we do, we reduce duplication and increase abstraction across the whole system.
De-Duplication of Similar Tests
You may have a series of tests, each ending with a very similar-looking group of assertions.
Your duplication detector kicks in, and you think, "I'll tolerate some duplication in tests, but this is ridiculous!"
So you create a custom assertion method that asserts the common things, and make all the tests use it. Now they use only one assertion per test, the custom one.
This can definitely help. But I always take an extra minute to ask, "Does that custom assertion reflect a responsibility that belongs on the object? Could the object's production clients use it?"
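As a sketch of the mechanics, with a made-up Invoice type and made-up checks:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class InvoiceTest {
    // Hypothetical type; three properties every test was re-checking.
    record Invoice(int total, int tax, boolean balanced) {}

    // The shared group of assertions, extracted into one named check.
    static void assertWellFormedInvoice(Invoice invoice, int expectedTotal, int expectedTax) {
        assertEquals(expectedTotal, invoice.total());
        assertEquals(expectedTax, invoice.tax());
        assertTrue(invoice.balanced(), "invoice should balance");
    }

    @Test
    void simpleSale() {
        Invoice invoice = new Invoice(107, 7, true);
        assertWellFormedInvoice(invoice, 107, 7);
    }
}
```

If assertWellFormedInvoice encodes rules that production code also needs, that's the hint that the responsibility belongs on Invoice itself.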
I've especially seen this pattern for higher-level tests, such as acceptance tests written in a unit testing framework. It can be an intermediate step along the way to creating a custom testing language.
Confirming Setup
We sometimes have an "arrange" that's complicated enough that we don't trust it:
```
// arrange
set up something hairy

// assert on setup
assert setup looks right

// act
probe a behavior

// assert
verify its consequences
```
We don't expect that first assertion to fail, but we don't fully trust the test without it.
We have a few options:
- Live with it. We may judge that the extra safety and clarity in the test are worth the setup assertion.
- Split the test in two:
```
test1:
    set up something hairy
    assert that it looks right

test2:
    set up the same hairy situation
    probe a behavior
    verify its consequences
```
This is often an improvement.
- Create a separate fixture just for tests that use that hairy setup (see the sketch at the end of this section):
```
setup:
    set up something hairy
    assert that it's as expected (or do this in a test method)

test:
    probe a behavior
    verify its consequences
```
- Question it. Figure out why it's so hard to set up and trust the objects in the first place. (Geepaw Hill says, "Don't take hard shots.") There can be any number of problems: the design is too complicated, there are too many collaborators, there are missing objects, the test case is doing too much, etc.
For example, I reviewed a system where the constructor for one type of collection required you to pass in its first element. But creating an element required telling it which collection it would be part of. Tests using the collection required a three-step setup that created a dummy element and swapped in the real one later. Tests using multiple elements were even more painful.
Because of the complexity, many tests double-checked that once everything was connected, the right elements were in the right order. Just picture a developer coming out of a painful debugging session, realizing that once again the setup wasn't quite right, and you'll know why they double-checked it.
The solution was to simplify the design. Once using these custom collections was as easy as using a standard collection, tests and production code were both better.
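Returning to the separate-fixture option above, here's a minimal JUnit sketch; the "hairy" setup is stubbed with a simple stand-in, and the fixture confirms itself once so the individual tests don't have to:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class HairySetupTest {
    private Deque<String> pipeline;

    @BeforeEach
    void setUp() {
        // "Set up something hairy" - stubbed here with a simple structure.
        pipeline = new ArrayDeque<>();
        pipeline.push("parser");
        pipeline.push("validator");
        // Confirm the setup once, here, instead of in every test.
        // (In JUnit 5, a failed assertion in @BeforeEach fails the test.)
        assertEquals(2, pipeline.size());
        assertEquals("validator", pipeline.peek());
    }

    @Test
    void poppingReturnsTheMostRecentStage() {
        String top = pipeline.pop();             // act
        assertEquals("validator", top);          // assert, trusting the fixture
        assertEquals("parser", pipeline.peek());
    }
}
```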
Probe with Multiple Assertions

Finally, we get to the biggest point of disagreement between those who argue for one assertion per test and those who think it's not needed.
In this case, we trigger a change in an object, and then call methods to explore what happened; these methods provide different perspectives.
People who argue for one assert per test would prefer a separate test method describing each consequence. That approach is too wordy; I prefer a block of assertions that describe the relevant consequences together.
(If the extra description is needed because the consequences are too subtle, I take it as a sign of other problems to address.)
Consider a test like this:
```
// arrange - set things up
create someObject and its arguments

// act - trigger behavior
someObject.doSomething(arguments)

// assert - check what happened
assertEquals(expected1, someObject.aspect1())
assertEquals(expected2, someObject.aspect2())
assertEquals(expected3, someObject.aspect3())
```
For example: create a stack, and push an item onto it. We can query the stack to find out its top object (which should be what we just pushed) and its length (which should be one more than before the push).
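In JUnit terms, that stack test might look like this (using java.util.ArrayDeque as the stack):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.jupiter.api.Test;

class StackProbeTest {
    @Test
    void pushPutsTheItemOnTopAndGrowsTheStack() {
        // arrange
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        int sizeBefore = stack.size();

        // act - one change...
        stack.push("second");

        // assert - ...probed from two perspectives
        assertEquals("second", stack.peek());       // top is what we pushed
        assertEquals(sizeBefore + 1, stack.size()); // length grew by one
    }
}
```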
Some test frameworks might push you toward a different tradeoff, but with xUnit frameworks it's usually cleaner to just explore the related consequences of the behavior in one place.
This style works especially well with objects that follow the command-query separation principle: methods are either mutators or accessors, but not both at once. The accessors provide you different views into the consequences.
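A tiny sketch of that shape, with a hypothetical Counter class: increment() is the command, value() is the query.

```java
// Command-query separation: commands change state and return nothing;
// queries report state and change nothing.
class Counter {
    private int count = 0;

    void increment() {   // command (mutator)
        count++;
    }

    int value() {        // query (accessor)
        return count;
    }
}
```

After one increment(), a test can probe value() and any other accessors freely, since looking never changes anything.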
Conclusion
Part of TDD is "listening to the code": paying attention when things are difficult, whether it's the tests or the code being developed.
When you see multiple assertions, give them some thought. Is it a run-on test? Are there missing methods or objects, duplicated assertions, or an overly complicated setup? If so, improve the code!
Thanks to Mike Hill (@GeePawHill), Tim Ottinger (@tottinge), Alexandre Freire (@freire_da_silva), and Joshua Kerievsky (@JoshuaKerievsky) for comments on an earlier draft.