
I’ve recently blogged about many of the steps in the ThoughtWorks’ recruitment process but most of these posts were general in nature.  Here are my thoughts gathered from an actual code review I did yesterday, and it’s fairly typical of the approach I take for all the code reviews I perform…

OK, so the code review has arrived. Unzip the archive.  Quick check.  I’ve been told it’s a Java solution to Problem X and, at first blush, everything I see supports that belief.

Ahhh, it has a Readme – this looks encouraging.  Time to fire up IntelliJ as I read the Readme…

The Readme

The Readme tells me which Java version I need: excellent.  And even better, it’s not exotic and I have it installed: WOOT!

It also tells me where “public static void main” lives.  That’s the first bit of non-test code I look at normally, so that’s handy.

It also tells me how to provide CLI input to run the program.  Nothing surprising here, but it’s a nice touch.  And the program even uses a default file if none is supplied.  And that default file is supplied with the archive!  Easy peasy!
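That fallback pattern is simple enough to sketch. The class and file names here are my own invention, not the candidate’s:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class InputResolver {
    static final String DEFAULT_INPUT = "sample-input.txt"; // hypothetical bundled file

    // Use the first CLI argument if present, otherwise fall back to the default.
    static Path resolve(String[] args) {
        return Path.of(args.length > 0 ? args[0] : DEFAULT_INPUT);
    }

    public static void main(String[] args) throws Exception {
        Path input = resolve(args);
        if (!Files.isReadable(input)) {
            // Fail early with a clear message rather than blowing up later.
            System.err.println("Cannot read input file: " + input);
            return;
        }
        Files.lines(input).forEach(System.out::println);
    }
}
```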

What’s this?  A code coverage report is included as well.  To be honest, I wouldn’t even think to look for one, so having this note in the Readme was certainly useful.  I don’t place a whole lot of value on just code coverage, but it’s nice to see the candidate thought it important enough to include.

And finally some test data input and output.  I’m pretty sure this is exactly the same as what we provide the candidate, so not a lot of value added with that one 😦

No mention of testing libraries unfortunately.  Assuming JUnit is OK for Java solutions, but I’ve been caught out on using the wrong versions before, especially with Hamcrest.

The Tests

Hmmm, not completely green out-of-the-box.  One failure and two ignored.  Not much I can do about the ignored ones (given they’ve been explicitly annotated as such), but what’s the cause of the failure?  Ahhh, it’s throwing an NPE trying to find the default input file… perhaps that wasn’t a good design idea after all.

The big question is: is the test failure expected, or a side effect of how I’ve created the project in IntelliJ?  There’s no build script, so I cannot verify by running tests from the command line 😦  More importantly, how much do I care to dig into this?  Let’s see how the rest of the code looks and if it’s a close decision, I’ll come back.

How do the test names look?  Not too bad: slightly redundant in using both the @Test annotation and a “test” prefix, but I haven’t found a good way to address that myself (note: substituting “should” for “test” doesn’t count).  Some intent in the naming, which is a plus.

Small test methods.   Nice.

One assertion per test.  Nice.

Some use of setup methods, but still obvious setup code in all tests which hasn’t been centralised… not so good.

Also, one of the test classes (FoobarTest) spends its entire time testing Java regex handling and doesn’t even import the Foobar class. WTF?!?

The Implementation

So the main method is small and simple, but as it reads, it doesn’t really convey anything about the high level design of the solution.  There’s a bad mix of high and low level implementation method calls.  Different levels of abstraction.
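To illustrate what I mean by mixed levels of abstraction (with hypothetical names, nothing from the candidate’s code):

```java
public class AbstractionLevels {
    // Mixed levels: the story of the program is drowned in string fiddling.
    static int mainStyleBad(String line) {
        String[] parts = line.split(",");        // low-level detail
        int total = 0;
        for (String p : parts) {
            total += Integer.parseInt(p.trim()); // low-level detail
        }
        return total;                            // ...what was the point again?
    }

    // One consistent level: the method reads as a summary of the design.
    static int mainStyleGood(String line) {
        return sum(parse(line));
    }

    static int[] parse(String line) {
        String[] parts = line.split(",");
        int[] values = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            values[i] = Integer.parseInt(parts[i].trim());
        }
        return values;
    }

    static int sum(int[] values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }
}
```

Both versions compute the same thing; only the second tells you *what* the program does before you care *how*.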

Digging further shows lots of IDE yellow flags suggesting potential issues.  Some of these flags are for inconsequential items (e.g., JavaDoc), but some point to logical code optimisations and potential bugs waiting to rise.  I don’t place a heavy emphasis on IDE warnings, but it’s something to note.

Ahah, here’s a bigger issue: everything’s static.  No state at all in the classes.  Well, apart from one class with non-static methods which, given the usage pattern, could just as easily have been made static too.  Why, why, WHY? (Delilah).
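The difference, in a contrived example of my own: all-static code threads its state through every call, whereas an object can simply own it.

```java
public class Tally {
    // All-static style: nothing owns the state, so every caller
    // has to carry the running total around by hand.
    static int addStatic(int runningTotal, int value) {
        return runningTotal + value;
    }

    // Object style: the class owns its state; callers just say what they want.
    private int total = 0;

    void add(int value) {
        total += value;
    }

    int total() {
        return total;
    }
}
```

One static helper is harmless; a whole codebase of them is procedural programming wearing a Java costume.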

System.exit(0) sprinkled liberally throughout low-level error handling code – this is starting to get a bit nasty.
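Why that’s nasty: a low-level System.exit gives the caller no chance to recover, clean up, or add context.  The usual alternative, sketched with hypothetical names:

```java
public class Loader {
    // Nasty: the helper unilaterally decides to kill the whole JVM.
    static String loadOrDie(String raw) {
        if (raw == null) {
            System.err.println("bad input");
            System.exit(1); // the caller never gets a say
        }
        return raw.trim();
    }

    // Better: signal the problem, and let one place near main decide what to do.
    static String load(String raw) {
        if (raw == null) {
            throw new IllegalArgumentException("input must not be null");
        }
        return raw.trim();
    }

    public static void main(String[] args) {
        try {
            System.out.println(load(args.length > 0 ? args[0] : null));
        } catch (IllegalArgumentException e) {
            System.err.println("Error: " + e.getMessage());
            System.exit(1); // the single, top-level exit point
        }
    }
}
```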

Three levels of nested conditionals is leading to some fairly complicated methods as well.
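Guard clauses are the standard fix here; a contrived example of my own rather than the candidate’s code:

```java
public class Guards {
    // Three levels deep: the happy path is buried at the bottom of the pyramid.
    static String describeNested(Integer age) {
        if (age != null) {
            if (age >= 0) {
                if (age >= 18) {
                    return "adult";
                } else {
                    return "minor";
                }
            } else {
                return "invalid";
            }
        } else {
            return "unknown";
        }
    }

    // Guard clauses flatten it: deal with each edge case and get out.
    static String describe(Integer age) {
        if (age == null) return "unknown";
        if (age < 0) return "invalid";
        return age >= 18 ? "adult" : "minor";
    }
}
```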

And add to the mix inheritance between two classes that would make Barbara Liskov squirm.
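The classic illustration of the Liskov Substitution Principle (not the candidate’s actual classes) is the square/rectangle pair:

```java
public class Shapes {
    static class Rectangle {
        protected int width, height;
        void setWidth(int w)  { width = w; }
        void setHeight(int h) { height = h; }
        int area() { return width * height; }
    }

    // A Square "is-a" Rectangle in English, but not behaviourally:
    // its setters silently change both dimensions.
    static class Square extends Rectangle {
        @Override void setWidth(int w)  { width = w; height = w; }
        @Override void setHeight(int h) { width = h; height = h; }
    }

    // Any code written against Rectangle's contract breaks for a Square.
    static int stretch(Rectangle r) {
        r.setWidth(5);
        r.setHeight(2);
        return r.area(); // a Rectangle caller expects 10
    }
}
```

Passing a Square to `stretch` yields 4, not the 10 the Rectangle contract promises: the subclass can’t be substituted for its parent, which is exactly the smell I hit in this submission.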

But the biggest issue is that the design of the solution is buried in the implementation.  More than any of our other coding problems, Problem X has the most complex domain, and surfacing that domain in the solution is tricky.  Even knowing the shape of past solutions to this problem, I cannot see how it is implemented here without reading the lowest-level code, and this is only 500 lines of Java.

I think I’ve seen enough.


For me, a code review is about building a picture of a developer’s potential without the most important piece of the puzzle: the developer themself.  In the absence of the candidate, I use a variety of proxy metrics: their design, documentation (readme), testing approach, general code quality and the like.  Sometimes I go through all of these things and I still don’t have an obvious overall impression.

In this particular case, there were a number of pluses, but they were soon outweighed by an even larger (and more important) number of issues.

Overall impression: I’m not impressed.

So this is the end of the road for this candidate, right?  Wrong.

Well, at least not always right.  By design, I don’t know anything about the skill level or background of the candidate.  I don’t even know their name.  My impression is formed purely on their solution.  It may be that this candidate is a graduate, or has only recently moved into software development from another career path.  In that case, we may continue the recruitment process, with my code review notes being used as input into subsequent interviews.