Sunday, February 1, 2009

Good Automated Testing

Funny how, right after I rant on unit testing a bit, the Stack Overflow podcast also has a rant about unit testing. But I want to make this clear: I like automated tests. A lot.

I now work at a company that has a whole lot of software that is only manually validated. This has caused nothing but regression after regression. Without calculating anything, I would easily bet anyone, even at odds like 10000:1, that if we get a majority of the code, meaning > 50% of it, exercised by automated tests, it will save the company tremendous money. And I mean that even counting the time spent writing the tests, it saves money over the long term. If you had to see the never-ending story that is our bug list, you'd know what I'm talking about.

Making tests count is the problem. These "unit-oriented TDD mocks" that want small little tests on everything seem to be solving a problem I've never had to write code for. Mine is this: a set of services where most of the code manipulates a database. You can see this as one gigantic global side-effect system: you write code, and it tweaks this other thing outside the confines of your process's RAM. What this means is that each unit test really has to be a database-oriented integration test, which doesn't really fit the definition of a "unit" test, because sometimes you have:
  1. Code and schemas.
  2. Often more than one DB element.
  3. Often, code that mirrors the schema, and code that stores it.
So how do you define a unit, as in one, isolated, individual thingy? I throw that definition away. What I really want is "automated testing", because I just don't want people in the process. I'm thinking about calling it automated component testing, because I also want the software to be described at the level where you might redistribute or reuse it - that should be your component.
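
To make that concrete, here is a minimal sketch of what I mean by an automated component test: drive the component through its public interface against a real (if in-memory) database instead of mocking every call it makes. The AccountStore class, the schema, and the HSQLDB in-memory setup are things I'm making up for illustration, not code from any real project.

    import static org.junit.Assert.assertEquals;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.junit.Test;

    public class AccountStoreComponentTest {

        // Hypothetical component under test: the code plus the schema it owns.
        static class AccountStore {
            private final Connection conn;

            AccountStore(Connection conn) {
                this.conn = conn;
            }

            void createSchema() throws SQLException {
                conn.createStatement().execute(
                    "CREATE TABLE account (id INT PRIMARY KEY, balance INT NOT NULL)");
            }

            void open(int id, int balance) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO account (id, balance) VALUES (?, ?)");
                ps.setInt(1, id);
                ps.setInt(2, balance);
                ps.executeUpdate();
            }

            int balanceOf(int id) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                    "SELECT balance FROM account WHERE id = ?");
                ps.setInt(1, id);
                ResultSet rs = ps.executeQuery();
                rs.next();
                return rs.getInt(1);
            }
        }

        @Test
        public void storesAndReadsAnAccountThroughTheComponentInterface() throws Exception {
            // A real database engine, but in-memory: no external server, no DBA,
            // no specialized configuration for anyone to forget about.
            Class.forName("org.hsqldb.jdbcDriver");
            Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:accounts", "sa", "");
            AccountStore store = new AccountStore(conn);
            store.createSchema();
            store.open(42, 100);
            assertEquals(100, store.balanceOf(42));
            conn.close();
        }
    }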

My personal testing-oriented list of principles has become:
  • Define your components, and test the interaction at this level.
  • Cover as much as you easily can with automated tests.
  • If you have code that cannot be called via your component interface, you probably don't need it.
  • Remove brittle tests. 
  • Avoid dependencies on specialized configuration.
  • Make configuration painless (a small sketch follows after this list).
I'm sure this list will be refined over time.
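
On the configuration points, here is roughly what I have in mind, as a tiny sketch: a test helper with a sane in-memory default that only listens to an override when somebody deliberately sets one, so nobody has to edit anything just to run the suite. The class and the property name are invented for the example.

    /** Test configuration with a painless default; the property name is hypothetical. */
    public final class TestConfig {

        private TestConfig() {
        }

        /** JDBC URL for tests: in-memory unless a CI job explicitly overrides it. */
        public static String jdbcUrl() {
            return System.getProperty("test.jdbc.url", "jdbc:hsqldb:mem:testdb");
        }
    }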

Thursday, January 29, 2009

Read, and I mean read, your documentation

It must suck being a technical documentation writer. Nobody reads your crap. Don't try to convince me otherwise. I've written enough planning and specification and API documentation to learn the Cardinal Rule of Technical Documentation: it will not be read. Skimmed, maybe, but not read.

At fault is the culture of technology. We live in a world of instant connection: many thousands of instant messages, microblogs, wall notices, etc. But the computer display is still not good enough for sustained reading. And animation is that wonderful brain candy that everyone loves to hide everywhere on your screen, distracting you from your train of thought at a moment's notice.

For code monkeys, this is a sin. Why? Because our documentation tools are usually paltry, clunky affairs to begin with. Take javadoc, for example. Creating javadoc is a horrible writing experience.
  • It uses HTML, which was never intended to be read raw, so your documentation is not actually readable in its natural state: the source file itself.
  • Almost no text editor I know lets you keep a readable measure (line length) in the source file itself. You end up redistributing newlines, etc., by hand.
  • The default stylesheets of javadoc are, well, horrendous by any kind of typographic standard.
You add on top of this the technology tools of distraction, and sprinkle in code completion, and voila - nobody will read your API documentation. At least never in detail. And most of the time, not even to get things compiling.
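
If you want a picture of the problem, here is a perfectly ordinary javadoc comment as it sits in the source file, which is where most of us actually read it. The class and method are invented for the example; the markup is aimed at the generated HTML, not at the human staring at the text.

    import java.util.HashMap;
    import java.util.Map;

    public class WidgetRegistry {

        private final Map<String, Widget> widgets = new HashMap<String, Widget>();

        /**
         * Returns the <code>Widget</code> registered under the given name.
         * <p>
         * The lookup is case-sensitive. Callers should note that:
         * <ul>
         *   <li>a <code>null</code> name throws {@link IllegalArgumentException},</li>
         *   <li>an unknown name simply returns <code>null</code>.</li>
         * </ul>
         *
         * @param name the registration name, never <code>null</code>
         * @return the matching widget, or <code>null</code> if none is registered
         */
        public Widget lookup(String name) {
            if (name == null) {
                throw new IllegalArgumentException("name");
            }
            return widgets.get(name);
        }

        /** Placeholder type so the example stands on its own. */
        public static class Widget {
        }
    }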

Do yourself a favor: read the documentation you get, thoroughly. And I mean truly read it. Explore it, follow the links, make a meta map. When you know a library well, you will not only start to use it, you will actually think instead of waiting for the compiler to kick in. You will memorize details you never thought possible. You will become an expert, rather than just another dude with a keyboard cranking out the lines like an IDE addict.

Sunday, January 18, 2009

Tristan Can Rant About Testing Like Everyone

Once upon a time, I tried to develop unit tests for some code running on a JBoss server. There was a mishmash of crap dependencies, with various calls to Hibernate (a framework I now loathe), lots of wacky static method calls, the list goes on. I barely got the thing running, and it took forever to initialize (well, like 15 seconds, which is insane for a unit test). And then it broke because someone tweaked the ant build file. And because I'm already bald, well, it's ok, I had no hair to pull out.

The development of this unit test gave me two observations:

1. Mocking DB requests is useless, especially with a persistence framework like Hibernate (a sketch of why follows below).
2. Our internal design sucked balls, and that was 99% of the reason developing a unit test was painful.
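
On the first point, here is a small sketch of what a mock-the-Session test tends to look like, using Mockito against Hibernate's Session and Query interfaces. The User and UserDao classes are made up for the example. Notice that the test is just the DAO's implementation written out a second time.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.Arrays;
    import java.util.List;

    import org.hibernate.Query;
    import org.hibernate.Session;
    import org.junit.Test;

    public class UserDaoMockTest {

        // Hypothetical objects under test.
        static class User {
            final String name;
            User(String name) {
                this.name = name;
            }
        }

        static class UserDao {
            private final Session session;
            UserDao(Session session) {
                this.session = session;
            }

            List findActive() {
                return session.createQuery("from User where active = true").list();
            }
        }

        @Test
        public void mockedSessionTestJustEchoesTheImplementation() {
            Session session = mock(Session.class);
            Query query = mock(Query.class);

            // The test has to repeat the exact HQL string the DAO uses...
            when(session.createQuery("from User where active = true")).thenReturn(query);
            when(query.list()).thenReturn(Arrays.asList(new User("bob")));

            assertEquals(1, new UserDao(session).findActive().size());
            // ...and it still says nothing about whether the HQL parses, the
            // mapping exists, or the schema matches. The mock is the spec.
        }
    }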

In the end, if you make testing easy, it will be fun, and you will write less code, and all will be well in the universe. The trick is making it easy.

Here are a few things I've done to start liking testing a bit more:

1. Get the testing pain out of the way. Use a CI server (Hudson) to drive the slow-ass tests, and see the sketch after this list for one way to fence them off. Keep them running, but don't piss off your fellow developers by having these kinds of tests run at any point during a standard build. I once knew a guy who forced a Maven build to run an insane amount of nitpicky testing whenever you tried to *build the main component*. I now spit upon his name whenever I see his handiwork on the screen.

2. Start writing things in Scala. (We're a Java shop, please don't hold it against me.) You could use Groovy or Ruby here. I just like Scala, because I understand it at a performance level. But the big win is a massive SLOC reduction. Less code wins.

3. Get as much stuff covered with as few lines of test code as possible. This often runs counter to the whole "unit testing" philosophy, but I don't care. It also often drives me to re-architect code: change how exception handling happens, for example, because you realize that all the error handling happens in the middle of a call stack. It's a way of code spelunking. Over time, I refine the conventions I keep, and then break the tests up a bit more.
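
Back on point 1: one lightweight trick I've used (just a sketch, and the "run.slow.tests" property name is made up) is to guard the slow, environment-heavy tests with a JUnit assumption driven by a system property. A plain local build skips them silently; the Hudson job sets the flag and runs everything.

    import static org.junit.Assume.assumeTrue;

    import org.junit.Test;

    public class NightlyMigrationTest {

        @Test
        public void migratesARealisticallyLargeDataset() {
            // Skipped unless someone passes -Drun.slow.tests=true (e.g. the Hudson job).
            assumeTrue(Boolean.getBoolean("run.slow.tests"));

            // ...the slow, database-heavy work would go here...
        }
    }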

In my experience, testing often gets the same short shrift that documentation gets. I'm not simply talking about code comments, but about communicating "what got done". Most code monkeys go the route of writing something like "crap happened" on a trouble ticket, and then making a ton of changes with no described rationale. Or they go into useless rambles where you end up skipping over the really important 5th sentence in the 4th paragraph of their diatribe.

Testing well is in some way like writing well. Elegant testing is fantastic, but it often takes several revisions for you to get there. The whole "unit tests are always great" nonsense reminds me of coders that would say crap like "for every line of code you should have a comment line". ... huh? what?

</rant>