
The TDD Practice I wish I had

In Test-Driven Development, it is considered a best practice to write a test for the code you wish you had before writing the actual code. In that spirit, here is an overview of the TDD practice I wish I had. In other words, while the way I currently build apps would not be considered TDD, it is certainly something I aspire to, because I very much believe in its power.

TDD in a nutshell

For those unfamiliar with TDD, or Test-Driven Development: very briefly, it is a practice in which you first write a code-based specification of what some part of your system should do and what result you expect. You then run the test, and it fails, because there is no code yet that does what was expected. Then, you write the least amount of code needed to make that test pass.
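As a concrete sketch of that cycle, here is a tiny hypothetical example in Python. The function name and behavior are invented for illustration: the test is written first and fails (there is no `dollars_to_cents` yet), and the minimal implementation then turns it green.

```python
# Red: this test is written before any implementation exists, so the
# first run fails with a NameError.
# Green: the minimal implementation below then makes it pass.

def dollars_to_cents(dollars):
    # The least code needed to satisfy the current test.
    return round(dollars * 100)

def test_dollars_to_cents():
    assert dollars_to_cents(1.25) == 125
    assert dollars_to_cents(0) == 0
```

Running the test before writing `dollars_to_cents` is what makes this TDD rather than test-after: the failing run proves the test can actually fail.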

A single test has little value; the power of TDD is in numbers.

Now, one lonely test has little value. However, the power of TDD is in numbers. Once you start to build up a suite of tests, when you then add a new feature and run your test suite, you can make sure that your code has not regressed, or in less technical terms, that you didn’t unintentionally break some other part of the system when adding a new feature.

What’s the big deal with regression?

Of the many benefits of TDD, a huge one is that you can push code to production and still sleep at night.

Let’s say I’m working on a banking system, and I add a feature for filtering a list of accounts based on whether there have been any overdrafts in the last n months. But in adding this filter, I unintentionally modify the overall filtering of accounts, such that accounts with a zero balance do not appear in the list at all. Since we don’t have a test suite that checks for this, the feature ships and everything is hunky-dory until a couple of days later, when users of the system realize they suddenly can’t view brand new accounts with a zero balance, and all hell breaks loose.

Now multiply this issue by ten or a hundred and you start to realize the type of pain that TDD can prevent.
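To make the banking example concrete, here is a hypothetical regression test (all names are invented) that pins down the existing behavior — zero-balance accounts still appear — so that adding the new overdraft filter cannot silently break it:

```python
def list_accounts(accounts, overdrafted_only=False):
    # New feature: optionally keep only accounts with overdrafts.
    if overdrafted_only:
        return [a for a in accounts if a["overdrafts"] > 0]
    # The behavior the regression test protects: every account is
    # listed, including brand new ones with a zero balance.
    return list(accounts)

def test_zero_balance_accounts_still_listed():
    accounts = [
        {"name": "new", "balance": 0, "overdrafts": 0},
        {"name": "old", "balance": 500, "overdrafts": 2},
    ]
    assert len(list_accounts(accounts)) == 2

def test_overdraft_filter():
    accounts = [
        {"name": "new", "balance": 0, "overdrafts": 0},
        {"name": "old", "balance": 500, "overdrafts": 2},
    ]
    names = [a["name"] for a in list_accounts(accounts, overdrafted_only=True)]
    assert names == ["old"]
```

If the new filter had accidentally leaked into the unfiltered path, the first test would have gone red before the feature ever shipped.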

Regression is like trying to take a step forward but then discovering it was several steps backward.


Believe me, you do not want to be in this situation. Of the many benefits of TDD, a huge one is that you can push code to production and still sleep at night.

Ok, enough introduction. Let’s get back to the main topic: what I would consider an ideal TDD process.

My Ideal TDD Process

These are what I would see as the main steps of an ideal TDD flow.

1. Wrap TDD in BDD

Make sure any TDD work is driven by an actual user need

I want to make sure that any TDD work is driven by an actual user need, and BDD (or Behavior-Driven Development) can help ensure that, in the form of acceptance tests, which are tests that simulate a user going through the motions of using the given feature.

In terms of process, this means we begin by picking up a user story from the backlog and converting it into acceptance tests, which I would write together with a Product Owner, domain expert, or the like.
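As a sketch of what such an acceptance test might look like, here is a hypothetical example: `BankingApp` and its methods are stand-ins for whatever drives the real system (a browser driver, an API client, etc.), and the test walks through the story from the user’s point of view.

```python
class BankingApp:
    """Hypothetical facade over the system under test."""

    def __init__(self):
        self._accounts = []

    def open_account(self, name, balance=0):
        self._accounts.append({"name": name, "balance": balance})

    def visible_accounts(self):
        return [a["name"] for a in self._accounts]

def test_customer_sees_newly_opened_account():
    # Story: as a customer, when I open an account, I see it in my list.
    app = BankingApp()
    app.open_account("savings")
    assert "savings" in app.visible_accounts()
```

The point is that the test reads like the story itself, which is what makes it something a PO or domain expert can help write and review.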

2. Stub out unit tests in pseudo code

Start by writing tests in plain English

With an understanding of what we need to achieve from a user perspective, instead of jumping straight into code-based tests, I would first write the tests out in plain English, or pseudocode. This is fast, and it creates an outline of the tests that need to be written.
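One way to keep that plain-English outline close to the code (a sketch, not the only way to do it) is to record each line as a skipped test stub, so the outline shows up in the test runner without failing the build:

```python
import unittest

class TestOverdraftFilter(unittest.TestCase):
    # Each skipped stub is one line of the plain-English outline.

    @unittest.skip("todo: filtering by overdrafts returns only overdrafted accounts")
    def test_filter_returns_only_overdrafted_accounts(self):
        pass

    @unittest.skip("todo: zero-balance accounts still appear in the unfiltered list")
    def test_zero_balance_accounts_still_listed(self):
        pass
```

Running the suite now lists two skipped tests — a visible to-do list that step 3 will work through one line at a time.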

3. Convert each line of pseudocode into a test

Next, we turn a line of pseudocode into a code-based test. Very often this process of “translation” leads to additional insights, e.g., something that was easy to express in plain English turns out to be not so trivial in actual code.

4. Get each test to pass before working on the next one

If you convert all your pseudocode stubs into actual tests up front, you might waste a lot of time.

Before moving on to the next code-based test, we get the current one to pass. In other words, we write the corresponding “real” code. Why not convert all the pseudocode tests into code before getting them to pass? Because while getting each test to pass, you are likely to gain insights about your code and how it should work, and you will very likely end up revising your pseudocode. If you had already spent time converting it all into actual test code, much of that work would be wasted.
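As a hypothetical illustration of this step: one pseudocode line (“balances display as dollars and cents”) becomes a real test, and only the least code that passes it is written before the next stub is touched. The names are invented.

```python
def format_balance(cents):
    # Just enough code for the current test; negative balances, other
    # currencies, etc. wait until their own tests demand them.
    return f"${cents / 100:.2f}"

def test_balance_displays_as_dollars_and_cents():
    assert format_balance(125) == "$1.25"
    assert format_balance(0) == "$0.00"
```

Resisting the urge to handle every edge case now is the discipline: each future edge case gets its own red test first.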

5. Keep going until the original acceptance tests pass

This is the essential workflow. We just keep going until the original acceptance tests pass.

Are we Done?

Once those tests pass, and after a quick sanity check of the acceptance tests (e.g., maybe a test is passing but something is obviously wrong when manually completing the task), I would ask the Product Owner to bless the feature as Done.

Usually, there will be some tweaks needed to the feature and we will decide if they can be added later, in the form of a new user story, or if they must be added for the feature to be considered Done. One way or another, we will hopefully get to Done, and we can pick up the next story in the backlog and the process starts all over again.

What about refactoring?

A mantra of TDD is “Red, Green, Refactor” which means you write a failing test, get it to pass and then refactor (or revise) your code. Since you now have a test that checks if your code is working, you can refactor with confidence. (Remember my discussion earlier about regression?) 

My preference is to combine refactoring with writing new code.

That is all well and good, but I am loath to spend too much time refactoring for its own sake. Yes, if there is some obvious cleanup that can be done, by all means, do it. But overall, my preference is to combine refactoring with writing new code. In other words, the mantra becomes a kind of recursive Red/Refactor, Green, where every instance of writing new code is an opportunity to refactor old code.
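Here is a small sketch of what refactoring under a green test looks like (names are hypothetical): the test stays unchanged while the implementation is cleaned up, and the still-passing test is the evidence that behavior was preserved.

```python
def overdraft_count(transactions):
    # Refactored version: a generator expression replaces an earlier
    # hand-rolled loop-and-counter. The test below passed before the
    # refactor and passes after it, which is the whole point.
    return sum(1 for t in transactions if t < 0)

def test_overdraft_count():
    assert overdraft_count([100, -20, 50, -5]) == 2
    assert overdraft_count([]) == 0
```

Without the test, the same cleanup would be a leap of faith; with it, the refactor is just another green bar.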

I also wanted to mention two aspects of TDD that aren’t necessarily process-oriented but still an essential part of what I would consider an ideal practice.

Pairing and TDD

Pair programming is, of course, another staple of good Agile practice, and I am a huge fan of working this way.  In fact, I would say that if you are doing pairing and TDD really well, you’d be hard-pressed not to ship code that is totally rock solid. 

When you are pairing and doing TDD, alternate between writing tests and getting tests to pass.

And this gets to a key relationship between TDD and pairing, which I need to credit the good folks at Pivotal Labs for showing me: when you are pair programming and doing TDD, consider alternating between writing test code and writing the code that gets the test to pass. This lets you continually shift your perspective between looking at the system from the outside in and from the inside out.

Integrate Visual Design into your testing

One of the most common oversights I see with testing in general is that it focuses solely on code. But, as we all know, the visual display has a huge impact on usability and overall success. At the same time, it’s easy as a developer to lose sight of the UI and the visual layer in general. To help prevent that from happening, you can build visual regression testing into your overall testing workflow.
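The core idea can be sketched in a few lines of plain Python (real tools diff actual screenshots and report pixel differences; `render` here is a hypothetical stand-in for capturing the UI): a rendering is hashed and compared against a stored baseline, so any visual change, intended or not, fails the test.

```python
import hashlib

def render():
    # Stand-in for taking a screenshot of the UI under test.
    return b"\x00\x01" * 100

# Baseline captured from a known-good rendering and checked in.
BASELINE = hashlib.sha256(b"\x00\x01" * 100).hexdigest()

def test_ui_matches_baseline():
    assert hashlib.sha256(render()).hexdigest() == BASELINE
```

When a visual change is intentional, the workflow is to review the new rendering and update the stored baseline, just as you would update any other expected test output.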

A demo of visual regression testing.


Props to Ian Yamey for showing me that one.

That, in a nutshell, is an ideal version of a TDD practice from my perspective.  I’d love to hear your thoughts in the comments!