The ambiguity of documentation

New day, new task

Imagine that we were assigned a task to add something to our existing application.

Let's say this "something" is actually a very serious piece of UI, or maybe a visualization of a huge amount of data that we receive from IoT devices.

We start thinking, asking ChatGPT and other useful tools, and trying to wrap our heads around the problem we are faced with.

Well, I don't know how you would take the next steps, but I would probably check whether this problem belongs to the category of "already solved problems".

It might be that we have just avoided reinventing the wheel, as someone hit a similar issue a long time ago and we might benefit from their learnings.

A few clicks, minutes of googling, a quick evaluation and there it is - a library or a package, name it however you want.

How the hell can I use it?

Let's assume the tool has some docs written. There's a section with examples, how to install it, and what the potential pitfalls are.

Easy.

npm i / yarn add / dotnet add package / pip install - boom, we've got the tool!

We take a simple example and do a brief walkthrough of each line. Looks fairly easy.

Ok, now our case!

Change here, change there. As predicted, should work.

And then this magic moment happens, when we realize we might have actually missed the point of this tool.

Bad docs, bad docs. Didn't guide us decently enough!

Click click, a few keystrokes and here we are - everything wired up, now it's time for success.

Nah, not yet. We forgot to import one last thing and add it to the configuration.

Maybe this time...

A manual?

Does it sound familiar?

Have you experienced such "quick & mindless" stitching just to make things work?

Don't get me wrong, I'm not saying it's a miserable thing to do.

I still fall into this "trap", especially when I really want to do some proof-of-concept work just to gain some understanding or to verify whether my million-dollar idea makes sense.

While in the process of exploration, when I have that "skin in the game" experience, I magically treat documentation as a manual.

Do this, do that and here's the effect you want to get.

Quite imperative, isn't it?

What if there's another level of documentation, one that is typically hidden?

A philosophy?

Let's start from the beginning (in the ideal world).

A humble creator, a mere doer, faced a problem.

No one had faced a similar issue before, or the suggested solutions didn't fit the mental representation of the aforementioned builder.

As there is a problem to solve, we might be exposed to the Language of the problem.

The same was true for our library author - in the "problem area" some concepts existed that were or were not captured by him while designing the solution.

What if "design" was accidental and "just happened"?

Or maybe there was a huge effort put into understanding "what do we mean by that?", which yielded precise words, artistically distilled into abstractions.

Words, concepts, abstractions, building blocks

Have you ever started your evaluation of a tool/a library/a framework based on the concepts it comes with?

How natural did they seem to you?

Of course, sometimes there's "no time" (really?).

Then we see all the memes: "fight with the tool for 4 hours vs. read the docs for 5 minutes".

But going back to the point, the documentation not only informs us about the usage but also embodies the concepts, the philosophy, the motivation behind it.

Sometimes they are not mentioned explicitly, though.

Those levels, or perspectives (what is the difference? you might be interested in Modeling Maturity Levels), might manifest themselves directly in the API of the tool or through so-called "building blocks".

Language designers?

Since we are at the "building blocks" station, is that yet another name for abstraction?

You know, abstracting away what is irrelevant in order to amplify what is essential.

Hence, are our fellow library authors, consciously or not, designing/capturing the language around the problem they try to solve?

We are slowly reaching the main point of this little tale - are the concepts properly described in the documentation?

I believe this might transform the documentation from being "just a manual" into a profound library (pun intended), full of intentional words and expressed ideas that constitute a knowledge base.

A knowledge base that, when "loaded" into our minds, becomes what we "live by" while using the tool.

Modeling around the language

Isn't that the point of the process of modeling?

The tool's author has some ideas in his or her mind.

They get expressed in the code via some abstractions (enchanted concepts?).

The language that binds those abstractions appears in the docs, too (sounds similar to a bounded context from DDD, doesn't it?).

Let's take one of my favorite state management libraries - Overmind, masterfully crafted by Christian Alfoni.

When one goes through the docs, he or she can quickly get to the "code examples".

But those "code snippets", as we saw at the beginning of this little tale, use concepts that were created by the author and are expressed in the code.

When such a library gets adopted by a team, all the team members using the tool might eventually speak the language it brought.

It might sound obvious to you, but isn't that the point - if the author created a well-suited model that solves the problem(s), it's easy to use it within its context?

And when those abstractions and concepts don't capture the essential aspects of the problem - wouldn't such a tool be a nightmare to use, e.g. because its API is too low-level or requires unnecessary boilerplate? (when discussing the essential and the accidental perspectives, you might be interested in Essentially bounded, Accidentally unlimited)

We could take other examples of "tools", like Kubernetes or Docker.

They came with their concepts - pods, containers, images, services, etc.

Those names stuck with us, got adopted, and became a de facto communication standard - language patterns used in day-to-day life (don't you talk with your relatives about images and replica sets?).

We might eventually conclude that exploring, distilling, and naming the concepts living in the problem's world has an enormous impact on the efficiency of the tool.

More accurate models win, blunt models lose?

Can we evaluate how successfully a tool of choice solves the problem by the set of models it provides? (when I say "models", I have in mind concepts captured, e.g. in the code by types)

Going back to Overmind, it comes with a set of core concepts: actions, operators, effects.

Each of them represents a sub-problem of the bigger problem we might encounter while building a rich UI application.
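
To make this a bit more tangible, here's a minimal, illustrative sketch of how such concepts could surface in an Overmind-style configuration. The shape follows my reading of Overmind's documented ideas, but the exact signatures, typings, and the endpoint used here are assumptions rather than a verbatim copy of its API:

```ts
// Illustrative sketch only - the exact API and typings may differ from
// the current Overmind docs; treat the names here as assumptions.
import { createOvermind } from 'overmind'

const config = {
  // "state": the tree describing what our UI cares about
  state: {
    readings: [] as number[],
    isLoading: false,
  },
  // "actions": the things that can happen, named in the problem's language
  actions: {
    // the context parameter is loosely typed to keep the sketch short;
    // the library's docs describe how to derive a proper context type
    async loadReadings({ state, effects }: any) {
      state.isLoading = true
      state.readings = await effects.api.fetchReadings()
      state.isLoading = false
    },
  },
  // "effects": the boundary to the outside world (HTTP, storage, ...)
  effects: {
    api: {
      async fetchReadings(): Promise<number[]> {
        const response = await fetch('/api/readings') // hypothetical endpoint
        return response.json()
      },
    },
  },
}

export const overmind = createOvermind(config)
```

Notice how the vocabulary of the snippet - state, actions, effects - is exactly the vocabulary the docs teach; operators (composing and reusing pieces of actions) would add yet another word to that same language.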

And what's more interesting, there are some "functions" or roles that those abstractions provide or play.

A set of capabilities that must be there to efficiently solve the problem that one is facing.

What if some of those capabilities are not explicitly expressed?

Capabilities, concepts, abstractions

As Mathias and Rebecca showed in their talk, if you care about something, make sure you have it properly represented.
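
Here's a tiny, purely hypothetical TypeScript sketch of what "properly represented" can mean in practice - none of these names come from any real library; they just give the IoT readings from the beginning of this tale their own building block:

```ts
// Hypothetical example - names invented for illustration.
// If "a temperature reading from an IoT device" is something we care about,
// it deserves its own building block instead of a bare number floating around.
type DeviceId = string

interface TemperatureReading {
  deviceId: DeviceId
  celsius: number
  recordedAt: Date
}

// Once the concept has a name, the code starts speaking the problem's language.
function averageCelsius(readings: TemperatureReading[]): number | undefined {
  if (readings.length === 0) return undefined
  const sum = readings.reduce((total, reading) => total + reading.celsius, 0)
  return sum / readings.length
}
```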

Conclusion 🔍

If you care about something, have a building block for it.

Those building blocks, "concepts" or abstractions (name them as you wish, dear Reader), will eventually come from effortful time spent on hard thinking and on "what does it mean?" questions posed multiple times.

All of this is yielded knowledge.

Understanding.

The essence that should have its place in the code, in speech, and in the documentation.

The tool might be flashy and shiny, but don't be fooled by that - there are probably some core, foundational concepts behind it (you might be interested in New tools, old rules).

Next time, when you are learning a new tool/library, a new language, or even a new pattern, try to ask yourself:

Question 🤔

What are the core concepts, motivation and philosophy behind it?