I work at a company now that has a whole lot of software that is only ever validated manually, and it has caused nothing but regression after regression. Without calculating anything, I would happily bet anyone, even at odds like 10000:1, that getting a majority of the code, meaning > 50% of it, exercised by automated tests would save the company tremendous money. Even counting the time spent writing the tests, they pay for themselves over the long term. If you could see the never-ending story that is our bug list, you'd know what I'm talking about.
Making tests count is the problem. The "unit-oriented, mock-everything TDD" crowd that wants small little tests on everything seems never to have worked on the kind of problem I always end up writing: a set of services where most of the code manipulates a database. You can think of that database as one gigantic global side effect: you write code, and it tweaks this other thing that lives outside the confines of your process's RAM. What this means is that every unit test really has to be a database-backed integration test, which doesn't fit the definition of "unit" test at all, because often you have:
- Code and schemas.
- Often more than one DB element.
- Often, code that mirrors the schema, and code that stores it (the sketch below shows all of this tangled together).
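To make that concrete, here's a minimal sketch of what I mean. The names (UserStore and friends) are made up, and an in-memory SQLite database stands in for whatever you actually run; the point is that even the smallest useful test of this kind of code goes through the schema and the storage together:

```python
# Hypothetical sketch: the smallest useful "unit" here is code plus schema plus
# storage, so even the smallest test needs a real database behind it.
import sqlite3

SCHEMA = "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)"


class UserStore:
    """Code that mirrors the schema (the row shape) and code that stores it."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(SCHEMA)

    def add(self, email):
        cur = self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        self.conn.commit()
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None


def test_add_and_find():
    # The "unit" test is really an integration test with the database,
    # even when that database is an in-memory SQLite stand-in.
    store = UserStore(sqlite3.connect(":memory:"))
    user_id = store.add("alice@example.com")
    assert store.find(user_id) == "alice@example.com"
```

Swap the in-memory SQLite for the real database and the shape of the test doesn't change; only the setup gets heavier.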
My personal list of testing principles has become:
- Define your components, and test the interactions at that level (there's a sketch after this list).
- Cover as much as you easily can with automated tests.
- If you have code that cannot be reached via your component's interface, you probably don't need it.
- Remove brittle tests.
- Avoid dependencies on specialized configuration.
- Make configuration painless.
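As a rough sketch of what a few of these look like in practice (OrderService and the fixture are hypothetical, and an in-memory SQLite again stands in for the real thing): the component is exercised only through its public interface, and the configuration lives in one small fixture so no test depends on anything special being set up beforehand.

```python
# Hypothetical component-level tests: exercise the component only through its
# public interface, with configuration handled entirely inside one fixture.
import sqlite3

import pytest


class OrderService:
    """The 'component': everything underneath it is tested through this interface."""

    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)"
        )

    def place_order(self, sku, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        cur = self.conn.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", (sku, qty))
        self.conn.commit()
        return cur.lastrowid

    def open_orders(self):
        return self.conn.execute("SELECT sku, qty FROM orders").fetchall()


@pytest.fixture
def service():
    # Painless configuration: each test gets a fresh in-memory database,
    # so nothing depends on a specially configured environment.
    return OrderService(sqlite3.connect(":memory:"))


def test_placed_order_shows_up_as_open(service):
    service.place_order("widget", 3)
    assert service.open_orders() == [("widget", 3)]


def test_rejects_nonpositive_quantity(service):
    with pytest.raises(ValueError):
        service.place_order("widget", 0)
```

If some helper can't be reached through place_order or open_orders, that's a hint it isn't earning its keep, which is the third principle above.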
I'm sure this list will be refined over time.