Note: I’m asking about the strategy behind unit / integration / end-to-end tests, not about classifying tests as one or the other.
In the past more than the present, it was expensive to write and run end-to-end tests for every possible scenario. Now, though, with the increased use of test fixtures / emulators, or even lower latency / higher query limits on APIs, it's more feasible. But of course there's always going to be human error: we can never be sure we thought of every scenario.
Still, I'm asking hypothetically: given an oracle that tells us every possible scenario, or, put another way, ignoring any scenario that isn't already covered by an end-to-end test, how might unit tests still be valuable?
What I'm wondering is whether unit/integration tests are kind of an "alternate approach" to thinking about tests, and that's the advantage of writing them even when aiming for "100%" end-to-end coverage: they might catch a scenario missed by human error. But eliminate human error in thinking up scenarios, and what do they do?
Some ideas I can come up with (but I'm hoping for even stronger answers!):
- They encourage a useful coding methodology, e.g. TDD.
- On a breaking change in a dependency (or a dependent API), they reveal precisely where the break is (see the sketch below).
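
To illustrate that last point, here's a minimal sketch (names like `parse_price` are invented for illustration, not from any real codebase): if a dependency change breaks a small helper, the failing unit test names the exact function and expectation, whereas an end-to-end test would only report that the whole workflow failed.

```python
# Hypothetical example of failure localization with a unit test (pytest-style).

def parse_price(raw: str) -> int:
    """Convert a price string like "$12.50" to an integer number of cents."""
    return int(round(float(raw.lstrip("$")) * 100))

def test_parse_price_handles_currency_symbol():
    # If a change in string/number handling breaks this helper, the failing
    # test points straight at parse_price rather than at a whole user journey.
    assert parse_price("$12.50") == 1250

def test_parse_price_handles_plain_number():
    assert parse_price("3") == 300
```

An end-to-end test covering the same behavior might assert on the final order total, so when it fails you still have to trace back through the whole flow to find that the price parser was the culprit.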