Sunday, February 1, 2009

Good Automated Testing

Funny how, after I rant a bit on unit testing, the Stack Overflow podcast also rants about unit testing. But I want to make this clear: I like automated tests. A lot.

I now work at a company with a whole lot of software that is only validated manually. This has caused nothing but regression after regression. Without calculating anything, I would bet anyone, even at odds like 10000:1, that if we get a majority of the code, meaning > 50%, exercised by automated tests, it will save the company tremendous money. That is, the time spent writing the tests will pay for itself over the long term. If you could see the neverending story that is our bug list, you'd know what I'm talking about.

Making tests count is the problem. These "unit-oriented TDD mocks" that want small little tests on everything seem to be solving a problem I've never had. My problem is a set of services where most of the code manipulates a database. You can see this as one gigantic global side-effect system: you write code, and you tweak this other thing outside the confines of your process's RAM. What this means is that each unit test really has to be a database-oriented integration test. Which doesn't really fit the definition of "unit" test, because often you have:
  1. Code and schemas.
  2. Often more than one DB element.
  3. Often, code that mirrors the schema, and code that stores it.
So how do you define "unit", as in one isolated, individual thingy? I throw that definition away. What I really want is "automated testing", because I just don't want people in the process. I'm thinking about saying automated component testing, because I also want the software to be described at the level where you might redistribute or reuse it - that should be your component.
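To make the distinction concrete, here is a minimal sketch of what I mean by a component-level test. Everything in it is hypothetical (there is no real UserStore in our code); the point is that the test drives the component's public interface together with its storage, instead of mocking the storage away. Swap the in-memory implementation for the real database-backed one and the same test should still run.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical component boundary: the interface you would actually
// redistribute or reuse.
interface UserStore {
    void save(String id, String name);
    String find(String id);
}

// In-memory stand-in for the database-backed implementation; the same
// test should run against either one.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> rows = new HashMap<>();
    public void save(String id, String name) { rows.put(id, name); }
    public String find(String id) { return rows.get(id); }
}

public class UserStoreTest {
    // One test exercising the component through its interface:
    // write a row, read it back, and check the round trip.
    static void exercise(UserStore store) {
        store.save("42", "tristan");
        if (!"tristan".equals(store.find("42")))
            throw new AssertionError("round trip through the component failed");
    }

    public static void main(String[] args) {
        exercise(new InMemoryUserStore());
        System.out.println("component test passed");
    }
}
```

Notice that nothing here cares whether the store is "one unit" or five classes and a schema; the test only knows the interface, which is exactly the level I want to pin down.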

My personal testing-oriented list of principles has become:
  • Define your components, and test the interaction at this level.
  • Cover as much as you easily can, automatically.
  • If you have code that cannot be called via your component interface, you probably don't need it.
  • Remove brittle tests. 
  • Avoid dependencies on specialized configuration.
  • Make configuration painless.
I'm sure this list will be refined over time.

Thursday, January 29, 2009

Read, and I mean read, your documentation

It must suck being a technical documentation writer. Nobody reads your crap. Don't try to convince me otherwise. I've written enough planning, specification, and API documentation to learn the Cardinal Rule of Technical Documentation: it will not be read. Skimmed, maybe, but not read.

At fault is the culture of technology. We live in a world of instant connection: thousands of instant messages, microblogs, wall notices, etc. But the computer display is still not good enough for serious reading. And animation is that wonderful brain candy everyone loves to hide everywhere on your screen, yanking you from your train of thought at a moment's notice.

For code monkeys, this is a sin. Why? Because our documentation tools are usually paltry, clunky affairs to begin with. Take javadoc, for example. Writing javadoc is a horrible experience.
  • It uses HTML markup, which was never meant to be read raw, so your documentation is not actually readable in its natural state: the source file itself.
  • Almost every text editor I know won't maintain a readable measure inside the source file for you. You end up reflowing newlines, etc., yourself.
  • The default stylesheets of javadoc are, well, horrendous by any kind of typographic standard.
Add on top of this the technological tools of distraction, sprinkle in code completion, and voila - nobody will read your API documentation. At least never in detail. And most of the time, not even enough to get things compiling.
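Here is the kind of thing I mean, on a made-up Account class (the banking is beside the point). Every line of the comment below was reflowed by hand to keep a readable measure, the markup is HTML, and the raw text is what your teammates actually see:

```java
// Hypothetical class: the javadoc is the point, not the banking.
public class Account {
    private long cents;

    public Account(long cents) { this.cents = cents; }

    public long balance() { return cents; }

    /**
     * Moves {@code amount} cents from this account into {@code to}.
     *
     * <p>Side effect: both balances are mutated in place; nothing is
     * persisted. Note that every line of this comment had to be
     * rewrapped by hand to stay readable in the source file.
     *
     * @param to     the receiving account
     * @param amount the number of cents to move; must not be negative
     * @throws IllegalArgumentException if {@code amount} is negative
     */
    public void transferTo(Account to, long amount) {
        if (amount < 0) throw new IllegalArgumentException("negative amount");
        this.cents -= amount;
        to.cents += amount;
    }
}
```

The one sentence that matters, the side effect, is exactly the sentence a skimmer will blow past on the way to the method signature.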

Do yourself a favor. Read documentation you get thoroughly. And I mean truly read it. Explore it, find links, make a meta map. When you know a library well, you will not only start to use it, you will actually think instead of waiting for the compiler to kick in. You will memorize details you never thought possible. You will become an expert, rather than just another dude with a keyboard cranking out the lines like an IDE addict. 

Sunday, January 18, 2009

Tristan Can Rant About Testing Like Everyone

Once upon a time, I tried to develop unit tests for some code running on a JBoss server. There was a mishmash of crap dependencies, with various calls to Hibernate (a framework I now loathe), lots of wacky static method calls, the list goes on. I barely got the thing running, and it took forever to initialize (well, like 15 seconds, which is insane for a unit test). And then it broke because someone tweaked the ant build file. And because I'm already bald, well, it's ok, I had no hair to pull out.

The development of this unit test gave me two observations:

1. Mocking DB requests is useless, especially with a persistence framework like Hibernate.
2. Our internal design sucked balls, and that was 99% of the reason developing a unit test was painful.

In the end, if you make testing easy, it will be fun, and you will write less code, and all will be well in the universe. The trick is making it easy.

Here's a few things I've done to start liking testing a bit more:

1. Get the testing pain out of the way. Use a CI server (Hudson) to drive the slow-ass tests. Keep them running, but don't piss your fellow developers off by having these kinds of tests running at any point during a standard build procedure. I once knew this guy who forced a maven build to run an insane amount of nitpicky testing whenever you tried to *build the main component*. In the end, I now spit upon his name whenever I see his handiwork on the screen.

2. Start writing things in Scala. (We're a Java shop, please don't hold it against me.) You could use Groovy or Ruby here. I just like Scala, because I understand what it's doing at a performance level. But the big win is a massive SLOC reduction. Less code wins.

3. Get as much stuff covered with as few lines of test code as possible. Often, this runs counter to the whole "unit testing" philosophy, but I don't care. Often, this drives me to re-architect code: change how exception handling happens, for example, because you realize that all this error handling happens in the middle of a call stack. It's a way of code spelunking. Over time, I start to refine the conventions I keep, and then break the tests up a bit more.
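One way I chase that few-lines-of-test-code goal is a little table of cases driven through one loop. This is only a sketch, on a hypothetical roman-numeral parser (handling a subset of numerals), but the shape is the point: adding coverage costs one line per case, not one method per case.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RomanTest {
    // Hypothetical unit under test: parse a (subset of) roman numerals.
    static int roman(String s) {
        Map<Character, Integer> v =
            Map.of('I', 1, 'V', 5, 'X', 10, 'L', 50, 'C', 100);
        int total = 0;
        for (int i = 0; i < s.length(); i++) {
            int cur = v.get(s.charAt(i));
            // A symbol smaller than its successor subtracts (IV = 4).
            int next = i + 1 < s.length() ? v.get(s.charAt(i + 1)) : 0;
            total += cur < next ? -cur : cur;
        }
        return total;
    }

    public static void main(String[] args) {
        // The whole "suite": one table, one loop.
        Map<String, Integer> cases = new LinkedHashMap<>();
        cases.put("I", 1);
        cases.put("IV", 4);
        cases.put("XIX", 19);
        cases.put("XL", 40);
        cases.put("XCIX", 99);
        for (Map.Entry<String, Integer> c : cases.entrySet())
            if (roman(c.getKey()) != c.getValue())
                throw new AssertionError(c.getKey() + " != " + c.getValue());
        System.out.println(cases.size() + " cases passed");
    }
}
```

Not very "unit", but five behaviors are covered in five lines of test data, and a failure names the exact case that broke.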

In my experience, testing often gets the same short stick that documentation gets. I'm not simply talking about code comments, but communication of "what got done". Most code monkeys go the route of writing something like "crap happened" on a trouble ticket, and then making a ton of changes with no described rationale. Or they go into useless rambles where you end up skipping over the really important 5th sentence in the 4th paragraph of their diatribe.

Testing well is in some way like writing well. Elegant testing is fantastic, but it often takes several revisions for you to get there. The whole "unit tests are always great" nonsense reminds me of coders that would say crap like "for every line of code you should have a comment line". ... huh? what?


Saturday, December 13, 2008


So, I'm taking over the technical management of a web service. One of the first things I did was talk to everyone, mostly to check the temperature. The kinds of questions I asked were basic. If you ask a basic question, you get a better answer. And one of those questions was: "What can we do better?" The one unanimous issue was that we needed better documentation.

Great. Only two problems with this.
  1. Most of the team doesn't write well.
  2. In the past, nobody has gone out of their way to really make documentation happen.
Why I point this out: most teams probably think this is just a scheduling problem, as if you add some time for Documentation and the problem is solved. I think if you just added more time for documentation, people would half-ass some crap on the wall and probably spend the rest of the time you slated for documentation surfing the intertubes.

I need a strategy. I need a system that will help bad writers get better incrementally. Which means that everything must be painfully obvious. The first pieces of the puzzle are the stuff that actually exists, which are the most important parts, anyway.
  • Code/API Documentation — What does your code do? Especially, what side effects are there?
  • Administration Guide — How do we run the system?
  • Users Guide — How do users run the system?
The other part is planning. Essentially, you need to identify groups of features, and then answer two big questions for each feature set. One, what's the business value here? Or, how do we get paid? Two, how is this thing going to work?

And I'm probably still forgetting something completely hugemongous.

Women Are Out of The IT Compartment

Why do we create stupid compartments for everything? At first, I thought it was just Americans with their shared cultural stupidity:
  • You can be black, or white. Or maybe brown or yellow, but those aren't really used. But I've actually checked White on forms in school.
  • Red state means religious conservative Republican, blue state means agnostic liberal Democrat.
But you know what? When it comes to programming, the whole world shares another category:
  • Programmer - a guy who likes tech. Pretty geeky, usually.
It's a shame, really. And it's everyone's fault. Another blog flipped a switch for me about the little ways we help perpetuate the "chicks don't do IT" thing. In the article, it was shopping: a salesman kept answering the brother instead of the sister who was asking the questions.
The entire time we were in the store, despite the fact that it was my sister asking the questions, despite the fact that I only answered questions that she asked of me directly (in other words, I was there to help her, not to help the sales guy sell to her), almost the entire conversation was spent with the sales guy talking to me, even if he was answering her question. His body language was unquestionably that of, "She's clearly not capable of making this decision herself", and addressed everything to me, despite her repeated attempts to catch his eye and have him talk to her, the actual purchaser with the question.
My concern is that this is a kind of "shared cultural discrimination". You stick a saleswoman in there answering questions, and most of the time, I think you'd get the same result. I don't think this is a "boy's club" sort of situation, but a general acceptance that women just aren't interested and don't really know about this sort of stuff.

Wednesday, September 24, 2008

Build some scala-swing documentation for 0.1

Beta? I don't need no stinkin' beta!

One thing about HTML documentation: I love them links. Especially the ability to navigate up and down the inheritance hierarchy. Mmm... hierarchy.

But the new scala-swing project, which looks interesting and is included in the latest release candidate for scala, has no documentation. Ergo, I went about tweaking the src jar just to get things a-goin'.

First, I unjarred the scala-swing-src.jar file into the directory scala-swing-src.

Second, I changed the scala.home property to point at my scala distribution, which I had downloaded.

Third, I added this code to the build.xml file.

<taskdef name="scaladoc" classname="scala.tools.ant.Scaladoc">
  <classpath>
    <pathelement location="${scala.home}/lib/scala-compiler.jar"/>
    <pathelement location="${scala.home}/lib/scala-library.jar"/>
  </classpath>
</taskdef>

<property name="docs.dir" value="api" />
<property name="sources.dir" value=".." />

<target name="docs">
  <mkdir dir="${docs.dir}" />
  <scaladoc srcdir="${sources.dir}" destdir="${docs.dir}"
            deprecation="yes" unchecked="yes">
    <classpath>
      <pathelement location="${scala.home}/lib/scala-compiler.jar"/>
      <pathelement location="${scala.home}/lib/scala-library.jar"/>
    </classpath>
    <include name="**/*.scala" />
  </scaladoc>
</target>
Fourth, I ran the command ant docs from the scala-swing-src/doc directory, and blammo, I've got scaladoc API docs.

Note: this is not a true step-by-step howto, because it's getting late. You will have to engage your brain to fill in 1-2 blank spots in the steps above.

Monday, March 17, 2008

tickets != tasks

My company uses trac, which is so far my favorite of the project management tools I've used. It gets out of the way for the most part. Really, it is perfect for a group of code monkeys. But it is missing something very important - scheduling.

The basic unit of Trac is the ticket, assigned to a milestone. This works well for reporting problems, and also pretty well when you need to see the issues you are solving. That's the core of a ticketing system: issues happen, you write a ticket. A series of issues tracked by milestone does a good job of saying this is what happened, what will happen, what we want. It does not, however, say when anything will happen.

To say when, it's better to start with an issue and then define the small steps that get you there. That seems like a natural one-to-many ratio of issues to tasks, and it is. But the real problem is that you can rarely take the one issue and accurately define the many tasks up front. As time flies, your many will grow and change as you learn things, specs change, whatever. Ultimately, it is very hard to embed task analysis inside an issue-focused system.

While tasks start their lives once someone takes on an issue, the culmination of tasks needs to be seen in the perspective of time. In fact, I would like to see things like milestones on such a calendar as well. A project calendar like this would show the when of a project, and leave the tickets and milestones to simply define the what.