
Simple software engineering: Mocks are a last resort

Most tests that rely on automatic mocking frameworks should be rewritten to use either the real dependencies or manually written fakes.

Wait, let’s back up. Tests have a few moving parts. First, there is some code being tested. This is commonly called the unit. The unit might have dependencies. The dependencies are not under test, but they can help determine whether the unit behaves correctly. Ideally, they would be passed into the unit. But dependencies can be many things: static data, global data, files on the filesystem, etc.

Dependencies interact with tests in a few ways. The unit can introduce side effects on the dependencies and vice versa. Automatic mocking frameworks are designed to manage these interactions. Mocks can assert that expected method calls happened with the correct parameters, override return values, and even execute substitute logic. Mocks have almost absolute power to override the behavior of dependencies (within the confines of what the language allows).
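For concreteness, here is a minimal sketch of what that looks like, assuming Mockito and JUnit 5 (the same style of API as the example later in this post). The UserStore and UserCache types are hypothetical, defined inline just so the sketch compiles:

import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class MockBasicsTest {
    // Hypothetical dependency and unit under test, defined inline for the sketch.
    interface UserStore { String get(String key); }

    static class UserCache {
        private final UserStore store;
        UserCache(UserStore store) { this.store = store; }
        String lookup(int id) { return store.get("user:" + id); }
    }

    @Test
    void returnsCachedUser() {
        UserStore store = mock(UserStore.class);      // automatic mock of the dependency
        when(store.get("user:42")).thenReturn("Ada"); // override the return value

        assertEquals("Ada", new UserCache(store).lookup(42));
        verify(store).get("user:42");                 // assert the expected call happened
    }
}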

But mocks aren’t the only way to write tests that involve dependencies. Real objects can be used directly. This isn’t always possible: the real object might be nondeterministic. It might provide random numbers, make a call on the network, etc. Nondeterminism is difficult to test, since there’s not necessarily an expected output. Nondeterministic failures decrease confidence in tests, since it’s difficult to know whether a failure is real. Accordingly, nondeterminism should be avoided in tests.

[Image: statue of Leif Erikson in Reykjavik, Iceland. Caption: Leif Erikson discovered automatic mocking frameworks in the year 998]

“Test fakes” are an alternative. A fake is a simplified but working implementation of a real dependency: for example, a trivial implementation of an interface that the real object implements. Here’s an example from a side project of mine: a clock that can be simulated and advanced for testing. Test fakes have a maintenance cost, but the tradeoff is that the fake can be reused everywhere.
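As a rough illustration (a sketch, not the actual code from that project), a fake clock might look something like this:

import java.time.Duration;
import java.time.Instant;

// Hypothetical interface that production code depends on instead of the system clock.
interface Clock {
    Instant now();
}

// Test fake: a clock that only moves when the test tells it to.
class FakeClock implements Clock {
    private Instant current;

    FakeClock(Instant start) { this.current = start; }

    @Override
    public Instant now() { return current; }

    // Tests call this to simulate the passage of time deterministically.
    void advance(Duration amount) { current = current.plus(amount); }
}

Any test that needs time to pass simply calls advance, so real timestamps and sleeps (and their nondeterminism) never enter the picture.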

How should you pick which one to use?

How I select a dependency to use for testing

  1. Use the real object, if possible.
  2. Use a fake implementation, if possible.
  3. Use a mock.

I try to get as close to the production configuration as possible. Why? When a test fails with a real dependency, it’s likely a real problem. The more differences there are between a test object and a production object, the less likely it is that the failure is real, and the more likely it is that the failure involves the test configuration.

OK, so, where am I going with this? In the next section, I will explain common issues with automatic mocking. Then I will describe the tradeoffs that real objects and test fakes have. I will finish by explaining a few situations where mocking is preferable to the other alternatives.

Automatic mocks are very manual

Consider a unit that uses Redis as a key/value store. Talking to Redis involves I/O. So we mock the return values of Redis anywhere it’s used in tests.

The first mocked test isn’t so bad. It reads one value and writes one value. The second test reads a few values. The third test reads a bunch of objects, but it doesn’t modify the values at all. And so on.

Imagine this Redis class spreading through a codebase. Dozens of usages. After all, everyone loves Redis. Every call must be mocked in the test.

But this requires that every test author behave like a human key/value store. Why hand-write return values for all of these tests? It is simpler to put the Redis key/value store behind an interface and use an in-memory implementation. This would save time per test and make the tests easier to write. The fake would save time the way any code does – by automating a task.

The tests become easier to write because it becomes trivial to assert both the effects and the side effects of the test. Did the unit return the correct value? Did the fake end up in the correct final state? Great!
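A sketch of such a fake, assuming a hypothetical KeyValueStore interface that the real Redis client would also implement:

import java.util.HashMap;
import java.util.Map;

// Hypothetical interface implemented by the production Redis client.
interface KeyValueStore {
    String get(String key);
    void put(String key, String value);
}

// In-memory fake: behaves like a key/value store, with no I/O and no per-test mocking.
class InMemoryKeyValueStore implements KeyValueStore {
    private final Map<String, String> data = new HashMap<>();

    @Override
    public String get(String key) { return data.get(key); }

    @Override
    public void put(String key, String value) { data.put(key, value); }

    // Test-only helper: lets assertions inspect the final state (the side effects).
    Map<String, String> contents() { return data; }
}

A test seeds the fake, passes it to the unit, and then asserts on both the unit’s return value and contents().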

I find that the break-even point for this approach is n=1. As in, implementing the fake often takes roughly the same amount of time as implementing the first mock. And then the fake can be reused, but the mock can’t. There are exceptions to this that are discussed at the end of this post.

Automatic mocks don’t have to behave correctly

Mocks can behave absolutely incorrectly with no consequences. Can one plus one equal three? Sure, why not:

when(mockInstance.addTwoNumbers(1, 1)).thenReturn(3);

A human must supply the return value for every mocked call. This leads to situations where the bug and the test both have errors that mask each other. In fact, mocks can be written by watching the test fail and seeing what value would have made the assertions pass. Then the engineer simply enters the expected values into the mock. People really do this. I’ve done it. I’ve watched other people do it. These errors get past code review.

Granted, this can happen with both real objects and fake objects in testing. But since real and fake objects are not customized per test, the error rate across the codebase will be lower with these approaches.

Automatic mocks can silently break during a refactoring

This is more insidious. Let’s say there is a widely-used dependency, and one of its methods provides the path of a URL. It needs to be changed to provide the full URL string as part of a project to support multiple domains. And it’s being renamed from providePath to provideFullURL or something.

So you rename the method. You change the behavior. The full URL is returned instead of the path. The tests pass. Hooray 🎉 But that method is called in 50ish places, and each of those call sites has tests that were written using mocks. Furthermore, some of those call sites are within code that is itself mocked in other tests. Are you confident that nothing is wrong?

I’d be confident in the opposite: something broke somewhere. The mocks silently hid the problem because the return value was simulated. Imagine the developers of each of those call sites. If even one had a tight deadline and needed the full URL, they’re gonna prepend the server name they expect. They won’t think twice. It could even take days for these errors to appear – when the next nightly big data job runs, when the next weekly marketing email is sent, etc.

A real object would have a better chance of exposing these errors in tests. A fake object would be changed from providing a path to providing a URL, which would also allow the error to be caught across the codebase with a single change. The change would need the same level of scrutiny and QA testing. But with a reasonably complete test suite, it’s less likely that it would lead to real problems.

Tradeoffs

Using a real object has a philosophical tradeoff. Strictly speaking, the test stops being a unit test. It becomes an integration test of the unit and its dependencies. That’s fine. If a test can be written quicker and increase confidence, then it’s a reasonable tradeoff. If the simplest and most maintainable test is an integration test, then write an integration test. Life’s too short for ideological purity.

There are more tradeoffs. A breakage in a real object can cause dozens of failures throughout the codebase. This often makes the failure easier to debug (since there are lots of examples to look at), but it can also obscure the root cause. Conversely, a real object with many call sites can cause failures in just one or two tests. This is often difficult to diagnose. Is the test subtly wrong? Is there a subtle bug in the real object? Is there a subtle bug in how they interact?

Fakes add a maintenance cost. They need to be written and maintained along with the real object or interface. Plus, since they simulate the behavior of an object without being the full implementation, they can easily introduce incorrect behavior that is then reused everywhere. There is also an art to writing them that has to be learned.

A few situations where mocks are the best approach

There are definitely situations where mocks should be used. Here are some common “last resort” cases that I’ve discovered over the years.

Faking complex behavior, like SQL

At a certain point, a fake would be so complicated that an in-memory implementation is totally infeasible. It’s unrealistic to expect an in-memory reimplementation of a SQL server that matches all of the syntactic quirks and features of MySQL. In this situation, using a mock dramatically reduces the maintenance required for the test.
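In practice this usually means mocking at the data-access interface rather than trying to fake SQL itself. A sketch, with a hypothetical OrderRepository whose real implementation runs queries against MySQL:

import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;
import java.util.List;

class OrderReportTest {
    // Hypothetical repository backed by MySQL in production.
    interface OrderRepository {
        List<String> findOrderIdsForCustomer(long customerId);
    }

    @Test
    void stubsTheOneQueryTheTestNeeds() {
        OrderRepository repo = mock(OrderRepository.class);
        // No in-memory SQL engine required; just the rows this test cares about.
        when(repo.findOrderIdsForCustomer(7L)).thenReturn(List.of("A-100", "A-101"));

        // The unit under test would receive repo here; the stub stands in for MySQL.
        assertEquals(List.of("A-100", "A-101"), repo.findOrderIdsForCustomer(7L));
    }
}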

Preventing a method from being called multiple times

Sometimes, calling a method twice is REALLY BAD – maybe it causes a deadlock, maybe a buggy device driver would cause a kernel panic, etc. Code review and instrumentation aren’t enough, and it’s desirable to assert that it can never happen. Mocks excel at this type of assertion.
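A sketch of that kind of assertion using Mockito’s atMost verification mode (the DeviceDriver and DeviceManager types are hypothetical):

import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;

class InitializeOnceTest {
    // Hypothetical driver where calling initialize() twice could deadlock.
    interface DeviceDriver { void initialize(); }

    // Hypothetical unit under test.
    static class DeviceManager {
        private final DeviceDriver driver;
        DeviceManager(DeviceDriver driver) { this.driver = driver; }
        void start() { driver.initialize(); }
    }

    @Test
    void initializeIsCalledAtMostOnce() {
        DeviceDriver driver = mock(DeviceDriver.class);

        new DeviceManager(driver).start();

        verify(driver, atMost(1)).initialize();  // fails if a second call ever sneaks in
    }
}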

Legacy code is poorly structured and there ain’t time to fix it

Sometimes, you have to parachute into a codebase, make a fix, and then get extracted. Sometimes it’s just not reasonable to spend three weeks refactoring to make a one-day change more testable.

Determining whether a delegate is being invoked

A delegate wraps a second object, and is responsible for calling methods on that second object. An automatic mocking library is an easy solution for ensuring that these calls happen as expected.
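A sketch, with a hypothetical PrefixingLogger delegate wrapping a Logger:

import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;

class DelegateTest {
    // Hypothetical wrapped interface.
    interface Logger { void log(String message); }

    // Hypothetical delegate under test: adds a prefix, then forwards the call.
    static class PrefixingLogger implements Logger {
        private final Logger inner;
        PrefixingLogger(Logger inner) { this.inner = inner; }
        @Override public void log(String message) { inner.log("[app] " + message); }
    }

    @Test
    void forwardsCallsToTheWrappedLogger() {
        Logger inner = mock(Logger.class);

        new PrefixingLogger(inner).log("started");

        verify(inner).log("[app] started");  // the delegate invoked the wrapped object as expected
    }
}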

Thank you for attending my Jake Talk

Automatic mocking frameworks are a last resort. Mocks have uses. But real objects and fake objects should be preferred, in that order.