"The ambiguity" of TDD

What the hell are you talking about?

I was seriously considering giving this post the following title: "TDD consultants hate him! Learn this one simple trick to become a TDD champion!", because I love permanent confusion (thanks, Zen koans!). From another perspective, it is the well-known, cliché clickbait we all hate. But! To the point.

Recently, there have been a lot of discussions in the software craftsmanship community about TDD and testing in general. I really liked all the perspectives presented, both from the warlords of industry standards and from folks who are just starting their journey.

There are various brilliant "tales" (I mean - threads) about TDD, like this, that or that, conducted by the community titans. You can read and absorb their experiences, distilled into genius-like conclusions.

As you, dear reader, have probably noticed from previous posts, I like to play with language, explore metaphors and think about how one can think about X (isn't that called metathinking? 🤔). Hence, I would like to explore "the ambiguity" of TDD.

What "ambiguity" am I referring to? Am I attacking the golden, precious practice for creating software?

Let's see!

What is the essence of Domain-Driven Design?

What the DDD practice teaches us is the importance of communication and collaboration between developers and domain experts.

Both of those activities are based on language. Not just any language, but the right language. What does right mean in this context?

I got dazzled by the tactical building blocks of DDD. Yet the essence lives somewhere else and is the root of the entire practice - it starts directly in the language we use.

Language has meaning, and the meaning lives in a particular context that is driven by a set of rules.

It is enormously important to be precise with your words, whether you express something in code, name your resources at one of the cloud vendors or, especially, exchange thoughts with another human being in a discussion.

We are always influenced by The King of all Kings, The Context, and its knights - the different perspectives.

An example? Let's take the concept of a "car". Depending on the perspective, it can be used or it can be the subject of various activities. What are some example contexts?

  • repairing
  • selling
  • transferring (logistics)
  • marketing/branding
  • etc.

One word, many meanings.
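
To make it concrete, here is a minimal sketch in Python (the module layout and field names are invented purely for illustration, not canonical DDD) of how the same word can carry a different model in each context:

```python
from dataclasses import dataclass, field

# In a real codebase, each class would simply be called `Car`,
# living in its own bounded-context module (repairing/, selling/, ...).

# "Repairing" context: a car is a machine with mileage and faults.
@dataclass
class RepairShopCar:
    vin: str
    mileage_km: int
    reported_faults: list[str] = field(default_factory=list)

# "Selling" context: a car is an offer with a price and a margin.
@dataclass
class SalesCar:
    vin: str
    asking_price: float
    dealer_margin: float
```

Same word, two models - only the VIN hints that both describe the same physical object.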

How does it relate to TDD?!

We talked about the language used, so let's work on some definitions.

The definition from Wikipedia:

Test-driven development (TDD) is a software development process relying on software requirements being converted to test cases before software is fully developed

Another definition taken from an arbitrary page:

Test-driven development (TDD for short) is a programming approach in which developers write production code in response to a test case; as opposed to the traditional process where code is written first, and relevant test cases are created later.

Both of the given definitions, but also many others, put the word "test" in the central place. As if it were the most important part of the entire practice. Because it is, isn't it?

Testing units?

Unit tests are the means of practicing TDD. The more consultants you ask, the more definitions of a unit you will get, but that is not the point I want to draw attention to.

No doubt there are various patterns, heuristics and tips on how to make unit tests effective, efficient, maintainable and readable (take Kent Beck's "Test Desiderata" as an example).

But what does the word test actually mean in this context?

"The ambiguity"

I misunderstood TDD for a long time. Mostly because of "the ambiguity".

It's a well-known "anti-practice" to write your test cases after writing your code. You are losing the "design" trait of the tests.

On the other hand, everyone knows (no matter whether it comes from practice or "theory") that starting from the test cases helps your design. Writing your test cases before writing your code helps your components acquire desired properties like loose coupling, single responsibility, etc.

This means that in one context (writing them before) "tests" help achieve "better design", and in another context (writing them after) "tests" don't contribute to "better design".

Are we really talking about the same tests?

I firmly believe this is the source of the confusion. We mixed up the domains, the problem spaces, that tests attempt to solve. We mashed up the contexts!

"What do you want me to do?"

Imagine you start with a blank tab in your favorite IDE. You stare at it and think about how to write your class, function or type - pick whatever you want.

Whether you start from tests (I emphasize the word to indicate that it has a particular meaning in the context I will soon summon) or not, you specify how it needs to work. Implicitly, it is done in your head; explicitly, it might be done in the code, by expressing it with a test.

In this particular context, our "test" friend is just a tool for automating the specification of what needs to be achieved.

So apparently, we used the word "test" in the context of specification. It is a mental shortcut to say that the test is the specification.
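
To illustrate, here is a minimal sketch (the `ShoppingCart` class and its API are hypothetical, invented just for this post): the test is written before any implementation exists and reads like a statement of what the unit should do, not a check of what it already does.

```python
import unittest

# Written BEFORE ShoppingCart exists: this is a specification.
# It states the desired behavior; the implementation must follow.
class ShoppingCartSpecification(unittest.TestCase):
    def test_a_new_cart_is_empty(self):
        cart = ShoppingCart()
        self.assertEqual(cart.total(), 0)

    def test_adding_an_item_increases_the_total_by_its_price(self):
        cart = ShoppingCart()
        cart.add_item(name="book", price=25)
        self.assertEqual(cart.total(), 25)

# The simplest implementation that satisfies the specification above.
class ShoppingCart:
    def __init__(self):
        self._items = []

    def add_item(self, name: str, price: int) -> None:
        self._items.append((name, price))

    def total(self) -> int:
        return sum(price for _, price in self._items)

if __name__ == "__main__":
    unittest.main()
```

Even the naming can follow the context: `ShoppingCartSpecification` instead of `ShoppingCartTest`.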

Could the specification be somewhere else?

In one of the previous posts, I explored the metaphor of using conversation to facilitate design activities. When we specify what our unit (whatever it is) should do, our test converses with us using colors:

"your code works according to the specification"

OR

"your code does not work according to the specification"

So, as our test is "speaking" to us, couldn't we take one of our friends from the team, ask them to sit next to us and test the piece of software we wrote every time we make a change? It would be a tedious as hell, boring and patience-demanding activity, but they would give similar feedback: "works" or "doesn't work".

This brings us to the conclusion that our tests can run in the context of specification, meaning we arrive at the station called "Test-Driven Specification".

Thus, this is the biggest "trick" I would sell to my past self - when you write your "tests" before your code, you specify.

By contrast, the monotonous, exhausting practice of borrowing your colleague's helping hand would become "Human-Driven Specification".

Conclusion 🔍

When you write your "tests" before your code, you specify.

I am quite convinced that we need higher precision when we want to express something from this context. For example, instead of saying "I am writing a test for class A", we should say "I am writing a specification of class A".

"What would you like to verify?"

Let's bring up a hypothetical situation from real life.

You received a support ticket in one of the tools your company uses.

There's a bug in production.

No doubt.

You got the details of how to reproduce it. Immediately you open the application or the Swagger API, you reproduce the steps and boom - there's an inconsistency or an exception thrown. Pick whatever you want.

So what are we actually doing? Are we specifying?

Reproducing this bug over and over manually is similar to "Human-Driven Specification", although now you are not "specifying" what it should do. It's more like "Human-Driven Verification", isn't it?

Now we are in the context of verification. We could set up an automated test to prove that a particular part of the system "works" or "doesn't work". But this is a different "works"/"doesn't work" than the one from "What do you want me to do?".

Call this "test" a regression test or whatever - its intention is completely different from "specification". It lives to verify, to prove whether the results are the ones we expect.
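
A minimal sketch of that intention (the discount function, the bug and the ticket number are all made up for illustration): the test is written after the code exists, pinning down the exact scenario from the ticket.

```python
import unittest

# Existing production code (simplified). The reported bug: a 100%
# discount used to return the full price instead of 0 before the fix.
def apply_discount(price: float, discount_percent: float) -> float:
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return price * (100 - discount_percent) / 100

# Written AFTER the code: this is verification, not specification.
# It reproduces the reported scenario and proves the fix stays fixed.
class DiscountRegressionTest(unittest.TestCase):
    def test_full_discount_from_ticket_1234_yields_zero(self):
        self.assertEqual(apply_discount(price=59.99, discount_percent=100), 0)

if __name__ == "__main__":
    unittest.main()
```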

Conclusion 🔍

When you write your "tests" after your code, you verify.

Is it wrong that we wrote this test after we produced the code that makes money for your company? Nope, because the context is different. This test expresses the "Test-Driven Verification" perspective.

In his famous quote, Edsger W. Dijkstra states:

“Program testing can be used to show the presence of bugs, but never to show their absence!”

I think it fits very well into the context of verification, but not exactly into the context of specification.

What is the responsibility of TDD?

I think that both contexts have their place in the development cycle. One doesn't exist without its counterpart, and together they create something greater.

Almost as in the systems-thinking "mantra": "a system is something more than its parts".

Specification is oriented towards design, maintainability, loose coupling, evolvability - where the subjects could be "things", "subsystems" or "components".

Verification is oriented towards correctness, usefulness, value delivery - where the subjects could be "users" or "customers".

Conclusion 🔍

When you write your "tests" to prove the value it gives to end recipients, you verify too.

So actually, TDD, as the practice of writing "tests" before writing your code, is about specification. It facilitates "good" design, etc.

What about verification? Would it be an actual responsibility of TDD?

Question 🤔

Is verification a responsibility of TDD?

Thanks to the "single responsibility" principle, we know that one shouldn't put two responsibilities on the shoulders of a single "guy".

How does it look from the outside?

Who is responsible for "verification", if not TDD?

The closest assignee would be the so-called "Outside-In" approach, in which we start from the outermost level, keeping in mind what results we expect to get when using this thingy.

Another thought is Behavior-Driven Development (BDD), which concentrates on "business" value - the visible results of using the given system/component.
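
A sketch of that flavor (hypothetical names again; real BDD tooling such as Cucumber or behave uses Gherkin scenarios, but the Given/When/Then structure can be mimicked in plain code): the test is phrased around a business outcome visible to the customer, not around internal design.

```python
import unittest

# Hypothetical production code, driven from the outside in.
def shipping_cost(cart_total: float) -> float:
    FREE_SHIPPING_THRESHOLD = 100.0
    return 0.0 if cart_total >= FREE_SHIPPING_THRESHOLD else 9.99

# A behavior-focused check, named after the business rule it verifies.
class FreeShippingBehaviour(unittest.TestCase):
    def test_customer_gets_free_shipping_above_the_threshold(self):
        # Given a cart worth more than the free-shipping threshold
        cart_total = 120.0
        # When the shipping cost is calculated
        cost = shipping_cost(cart_total)
        # Then the customer pays nothing for shipping
        self.assertEqual(cost, 0.0)

if __name__ == "__main__":
    unittest.main()
```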

Combining those two "responsibilities", we have a perfect mixture covering the two contexts. The Verification guy takes care of the business value, and the Specification fellow makes sure that the design is balanced in terms of all the good traits of software engineering practice. Almost like in an organization, they organize each other to achieve the bigger goal.

I know that I am just scratching the surface of a huuuge topic, whether mentioning BDD or the Outside-In approach, but I think the point is clear - TDD itself is just one "side" of a yin-yang process.

Ok dude, what now?

Am I expecting all professionals to stop using the TDD acronym and start using "Test-Driven Specification"?

Would BDD become "Test-Driven Verification"?

Of course! I am expecting those things to happen. I would love to become an IT influencer, a software engineering celebrity with zealots gathering around (acolytes aren't bad, right?).

Ok, enough jokes. This distinction helped (and still helps) to challenge the way I think about software, patterns and methodologies.

This would be the single piece of advice I would give to my "younger" self when starting a career. The fundamentals are the most difficult to grasp, and I believe the understanding comes only after a longer period of time.

Conclusion 🔍

Test-Driven Development is not about tests themselves.

It is about either verification or specification using tests.

Next time you are faced with the need to write a piece of code, ask yourself this question:

Question 🤔

Am I testing to verify if it works correctly or to specify what it should do?

Happy testing (whatever that means in your current context)!