The ambiguity of interfaces

You had one job

Imagine that we were challenged to implement a new piece of functionality in our application.

A typical assignment - read some data, make a decision, perform some actions, and eventually record the decision.

Turns out that we already have some system up and running.

Thank God that someone already thought about dependency injection (following the best practices, right?) and we have a bunch of "injectables".

We need to compose the powers of IApplicationConfigurationService, IDocumentProcessingServiceClient and IDocumentRepository.

Of course, this is a toy example and we don't see such code in real life.

But please bear with me, dear Reader.

There will be fun moments, I promise.

Configuration?

Let's start with our lovely IApplicationConfigurationService collaborator.

It provides an application configuration based on a partner identifier.

public interface IApplicationConfigurationService
{
    public Task<ApplicationConfiguration> Get(Guid partnerId);
}

And here we have the configuration:

public class ApplicationConfiguration
{
    public ConfigurationA ConfigurationA { get; }
    public ConfigurationB ConfigurationB { get; }
    public ConfigurationC ConfigurationC { get; }
    public ConfigurationD ConfigurationD { get; }
    /* much more configuration */
}

public class ConfigurationD
{
    public int ParameterX { get; }
    public string ParameterY { get; }
    public double ParameterZ { get; }
    public bool UseDocumentProcessing { get; }
}

Pretty straightforward configuration with many possible knobs for tuning the application.

We know that the configuration is stored in CosmosDB so we have CosmosDBApplicationConfigurationService.
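
The implementation itself isn't shown here, but a minimal sketch could look like this - assuming the configuration document is stored per partner in an Azure Cosmos DB Container (the container setup and the id scheme are my assumptions, not part of the original design):

public class CosmosDBApplicationConfigurationService : IApplicationConfigurationService
{
    // Container comes from the Microsoft.Azure.Cosmos SDK
    private readonly Container _container;

    public CosmosDBApplicationConfigurationService(Container container)
    {
        _container = container;
    }

    public async Task<ApplicationConfiguration> Get(Guid partnerId)
    {
        // assumes one configuration document per partner, keyed and partitioned by the partner id
        var response = await _container.ReadItemAsync<ApplicationConfiguration>(
            partnerId.ToString(),
            new PartitionKey(partnerId.ToString()));
        return response.Resource;
    }
}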

Not bad, huh?

More processing?

To use additional processing power we are going to utilize a separate document processing API.

To achieve that, we need a client, right?

So IDocumentProcessingServiceClient will be implemented, believe it or not, by DocumentProcessingServiceClient.

No one would have guessed!

Here's the interface definition:

public interface IDocumentProcessingServiceClient
{
    public Task<ImportantDocument> Process(ImportantDocument document);
}

You can probably imagine, dear Reader, what the implementation looks like.

There's an HttpClient involved, and some other typical code for interacting with a web API.
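
For the curious, a hedged sketch using System.Net.Http.Json - the "/process" route and the JSON contract are made up purely for illustration:

public class DocumentProcessingServiceClient : IDocumentProcessingServiceClient
{
    private readonly HttpClient _httpClient;

    public DocumentProcessingServiceClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<ImportantDocument> Process(ImportantDocument document)
    {
        // POST the document to the processing API and read back its processed version
        var response = await _httpClient.PostAsJsonAsync("/process", document);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<ImportantDocument>();
    }
}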

Let's move on.

Processing result?

Time to store the result of our processing - a processed document.

Whatever it means, actually.

But for our little tale it does not really matter.

The process-level logic is that we need to keep it.

We use CosmosDB, as one could guess, so IDocumentRepository will be implemented by CosmosDBDocumentRepository.

The interface itself looks as follows:

public interface IDocumentRepository
{
    public Task<ImportantDocument> GetBy(Guid partnerId, Guid importantDocumentId);
    public Task Save(ImportantDocument importantDocument);
    public Task<IEnumerable<ImportantDocument>> GetBrokenImportantDocuments(Guid partnerId);
    public Task<IEnumerable<ImportantDocument>> GetActiveImportantDocuments(Guid partnerId);
}

I leave it to you, dear Reader, to imagine how the implementation might look.
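
If you'd rather not imagine, here's one possible shape of it - a sketch assuming a single Cosmos DB Container partitioned by partner id, with the two query methods left as stubs:

public class CosmosDBDocumentRepository : IDocumentRepository
{
    private readonly Container _container;

    public CosmosDBDocumentRepository(Container container)
    {
        _container = container;
    }

    public async Task<ImportantDocument> GetBy(Guid partnerId, Guid importantDocumentId)
    {
        var response = await _container.ReadItemAsync<ImportantDocument>(
            importantDocumentId.ToString(),
            new PartitionKey(partnerId.ToString()));
        return response.Resource;
    }

    public async Task Save(ImportantDocument importantDocument)
    {
        await _container.UpsertItemAsync(importantDocument);
    }

    public async Task<IEnumerable<ImportantDocument>> GetBrokenImportantDocuments(Guid partnerId)
    {
        /* a Cosmos DB query filtering by a "broken" status */
    }

    public async Task<IEnumerable<ImportantDocument>> GetActiveImportantDocuments(Guid partnerId)
    {
        /* a Cosmos DB query filtering by an "active" status */
    }
}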

Important documents command handler

We are modern, we use CQRS.

This means we have our command handler ProcessImportantDocumentCommandHandler that will implement our new feature - store important document with possible additional processing.

For now, our business logic is to choose whether the processing is needed, based on the partner's configuration.

And then store the document.

public class ProcessImportantDocumentCommandHandler
{
    private readonly IApplicationConfigurationService _applicationConfigurationService;
    private readonly IDocumentProcessingServiceClient _documentProcessingServiceClient;
    private readonly IDocumentRepository _documentRepository;

    public ProcessImportantDocumentCommandHandler(
        IApplicationConfigurationService applicationConfigurationService,
        IDocumentProcessingServiceClient documentProcessingServiceClient,
        IDocumentRepository documentRepository
    )
    {
        _applicationConfigurationService = applicationConfigurationService;
        _documentProcessingServiceClient = documentProcessingServiceClient;
        _documentRepository = documentRepository;
    }

    public async Task Handle(Guid partnerId, ImportantDocument importantDocument)
    {
        var applicationConfiguration = await _applicationConfigurationService.Get(partnerId);
        if(applicationConfiguration.ConfigurationD.UseDocumentProcessing)
        {
            var processedImportantDocument = await _documentProcessingServiceClient.Process(importantDocument);
            await _documentRepository.Save(processedImportantDocument);
        }
        else
        {
            await _documentRepository.Save(importantDocument);
        }
    }
}

Not that difficult, right?

Typical, boring, enterprise level.

The feature got implemented, everyone is happy.

So happy.

Let's dance.

First change

Turns out that during one of our team discussions we decided to move UseDocumentProcessing flag from ConfigurationD to ConfigurationA.

Just because it makes much more sense.

We learned something new - let's do so.

Easy change.

A tiny change inside the ApplicationConfiguration class causes a change within ProcessImportantDocumentCommandHandler.
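
Concretely, the ripple inside the handler is a single line - before and after:

// before: the flag lived in ConfigurationD
if(applicationConfiguration.ConfigurationD.UseDocumentProcessing)

// after: the flag moved to ConfigurationA
if(applicationConfiguration.ConfigurationA.UseDocumentProcessing)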

This is expected, as ProcessImportantDocumentCommandHandler depends on the configuration, right?

But this also requires changing some of the tests (not many, but still).

Everything adjusted, we are ready to move on.

Second change

Due to various reasons, the service for processing important documents needs to be inlined - the architects said we make too many roundtrips and it is not that hard to integrate Ruby code in C#.

This simply pays off.

Hm, but wait.

IDocumentProcessingServiceClient expresses that it is a client for a separate service, and now we will inline this responsibility.

What will this thing become?

We could have an implementation, let's say InMemoryDocumentProcessingServiceClient, but this does not sound right.

Is this a client?

We could rename it to InMemoryDocumentProcessingService and it would make sense too, right?

Of course, then we would need to remove the Client suffix from our lovely interface - IDocumentProcessingServiceClient, so it becomes IDocumentProcessingService.
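
A minimal sketch of the inlined thing, with the actual processing logic left out (the class body below is my assumption of how it could look):

public class InMemoryDocumentProcessingService : IDocumentProcessingService
{
    public async Task<ImportantDocument> Process(ImportantDocument document)
    {
        /* the processing that used to happen behind a web API now runs in-process */
    }
}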

Wasn't that painful.

Third change

Change, change never changes.

Requirements hit you hard.

We got feedback that we need to check that an incoming important document is not already among the active documents.

We need to alter behavior.

public class ProcessImportantDocumentCommandHandler
{
    private readonly IApplicationConfigurationService _applicationConfigurationService;
    private readonly IDocumentProcessingService _documentProcessingService;
    private readonly IDocumentRepository _documentRepository;

    public ProcessImportantDocumentCommandHandler(
        IApplicationConfigurationService applicationConfigurationService,
        IDocumentProcessingService documentProcessingService,
        IDocumentRepository documentRepository
    )
    {
        _applicationConfigurationService = applicationConfigurationService;
        _documentProcessingService = documentProcessingService;
        _documentRepository = documentRepository;
    }

    public async Task Handle(Guid partnerId, ImportantDocument importantDocument)
    {
        var activeImportantDocuments = await _documentRepository.GetActiveImportantDocuments(partnerId);
        if(activeImportantDocuments.Any(activeDocument => activeDocument.Id == importantDocument.Id))
        {
            throw new CannotProcessAlreadyActiveDocument(importantDocument.Id);
        }
        var applicationConfiguration = await _applicationConfigurationService.Get(partnerId);
        if(applicationConfiguration.ConfigurationA.UseDocumentProcessing)
        {
            var processedImportantDocument = await _documentProcessingService.Process(importantDocument);
            await _documentRepository.Save(processedImportantDocument);
        }
        else
        {
            await _documentRepository.Save(importantDocument);
        }
    }
}

Simple, transparent and straightforward.

But here comes the next change.

Turns out that the information about active important documents will no longer be available in our CosmosDB instance.

Due to various reasons, active important documents are going to be stored in Azure Blob Storage as blobs.

Well, it's just another...Wait.

Another what?

Our implementation of the IDocumentRepository interface is named CosmosDBDocumentRepository.

We could just inject AzureBlobStorageClient and replace the implementation of the relevant method - GetActiveImportantDocuments.

But then the name CosmosDBDocumentRepository wouldn't sound right...

What options do we have?

Wouldn't DocumentRepository be enough? It could accept two dependencies: CosmosClient and AzureBlobStorageClient.

Consumers of DocumentRepository wouldn't notice the structural change, and the behavior is going to be the same.
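
A sketch of that option - the AzureBlobStorageClient name simply follows the one used above, and only the reshuffled method is hinted at:

public class DocumentRepository : IDocumentRepository
{
    private readonly CosmosClient _cosmosClient;
    private readonly AzureBlobStorageClient _azureBlobStorageClient;

    public DocumentRepository(CosmosClient cosmosClient, AzureBlobStorageClient azureBlobStorageClient)
    {
        _cosmosClient = cosmosClient;
        _azureBlobStorageClient = azureBlobStorageClient;
    }

    public async Task<IEnumerable<ImportantDocument>> GetActiveImportantDocuments(Guid partnerId)
    {
        /* now reads active documents as blobs via _azureBlobStorageClient */
    }

    /* GetBy, Save and GetBrokenImportantDocuments keep talking to Cosmos DB via _cosmosClient */
}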

Profit?

Have you ever talked to a command handler?

Let's imagine that we have a quick chat with a command handler.

Yes.

Command handler.

No, I am not kidding.

Discussion
Us: Hey!
ProcessImportantDocumentCommandHandler: Yo, long time no see.
Us: We were wondering about your task.
ProcessImportantDocumentCommandHandler: And?
Us: What do you need to get your job done?
ProcessImportantDocumentCommandHandler: Well, I need someone who provides information about all active important docs. Hmmm, someone who can tell me whether additional processing is required. Then of course someone who does the additional processing. And finally, someone who knows how to save a new important document.
Us: A tiny question - you said "someone who provides information about all active important docs" - do you really need that much?
ProcessImportantDocumentCommandHandler: Now that you ask me so explicitly, I would say no, I don't need that much information. I need someone to check if an incoming important document is already active. That's it.
Us: Cool, it's awesome we could help!

We got some nice feedback from the command handler.

We could now try to be explicit about these needs and embed each required capability in the form of an interface.

So ProcessImportantDocumentCommandHandler needs collaborators that can:

  • check incoming important document activeness
  • check additional processing requirement
  • process incoming important document
  • save incoming important document.

One could say that each collaborator will be responsible for one of those capabilities.

We could name each one a responsibility.

Let's enter a playful mood and represent those responsibilities as interfaces.

public interface ICheckImportantDocumentActiveness
{
    public Task<bool> IsActive(Guid partnerId, ImportantDocument importantDocument);
}

public interface ICheckAdditionalProcessingRequirement
{
    public Task<bool> IsRequired(Guid partnerId);
}

public interface IProcessImportantDocument
{
    public Task<ImportantDocument> Process(ImportantDocument importantDocument);
}

public interface ISaveImportantDocument
{
    public Task Save(ImportantDocument importantDocument);
}

They might look a bit weird.

Verbs in the name - yuck.

Time to provide those capabilities to our friend, ProcessImportantDocumentCommandHandler.

public class ProcessImportantDocumentCommandHandler
{
    private readonly ICheckImportantDocumentActiveness _importantDocumentActiveness;
    private readonly ICheckAdditionalProcessingRequirement _additionalProcessingRequirement;
    private readonly IProcessImportantDocument _importantDocumentAdditionalProcessing;
    private readonly ISaveImportantDocument _importantDocuments;

    public ProcessImportantDocumentCommandHandler(
        ICheckImportantDocumentActiveness importantDocumentActiveness,
        ICheckAdditionalProcessingRequirement additionalProcessingRequirement,
        IProcessImportantDocument importantDocumentAdditionalProcessing,
        ISaveImportantDocument importantDocuments
    )
    {
        _importantDocumentActiveness = importantDocumentActiveness;
        _additionalProcessingRequirement = additionalProcessingRequirement;
        _importantDocumentAdditionalProcessing = importantDocumentAdditionalProcessing;
        _importantDocuments = importantDocuments;
    }

    public async Task Handle(Guid partnerId, ImportantDocument importantDocument)
    {
        if(await _importantDocumentActiveness.IsActive(partnerId, importantDocument))
        {
            throw new CannotProcessAlreadyActiveDocument(importantDocument.Id);
        }
        if(await _additionalProcessingRequirement.IsRequired(partnerId))
        {
            var processedImportantDocument = await _importantDocumentAdditionalProcessing.Process(importantDocument);
            await _importantDocuments.Save(processedImportantDocument);
        }
        else
        {
            await _importantDocuments.Save(importantDocument);
        }
    }
}

Not much changes, right?

Code-wise, it's even worse!

Our fellow handler accepts an additional dependency (but let's call it a collaborator), so more stuff to mock, right?

But let's look closer.

In this round, we didn't take our own perspective (that of a God-like creature, looking from "above") on what the command handler requires to work.

We asked it explicitly and the handler expressed its needs very clearly.

So we've been able to note down the responsibilities its collaborators have to provide so that it can do its job.

In fact, who owns the contract for those needed responsibilities (or should we call them services)?

It's our handler!

The command handler is "the service taker" and it expresses the contracts.

The ICheckImportantDocumentActiveness, ICheckAdditionalProcessingRequirement, IProcessImportantDocument and ISaveImportantDocument capabilities will be satisfied by someone else, somewhere.

From the handler's perspective, it does not matter, as long as the service contracts are satisfied!

Service providers

Turns out that our existing "dependencies" - but let's stop calling them that - are "living" things - we can talk to them, so let's give them a little respect!

Existing collaborators might play certain roles of service providers and eventually deliver those capabilities.

How?

Imagine the following:

public class ApplicationConfigurationAdditionalProcessingRequirement : ICheckAdditionalProcessingRequirement
{
    private readonly IApplicationConfigurationService _applicationConfigurationService;

    public ApplicationConfigurationAdditionalProcessingRequirement(
        IApplicationConfigurationService applicationConfigurationService
    )
    {
        _applicationConfigurationService = applicationConfigurationService;
    }

    public async Task<bool> IsRequired(Guid partnerId)
    {
        var applicationConfiguration = await _applicationConfigurationService.Get(partnerId);
        return applicationConfiguration.ConfigurationA.UseDocumentProcessing;
    }
}

Similarly, for checking document activeness:

public class AzureBlobStorageImportantDocumentActiveness : ICheckImportantDocumentActiveness
{
    private readonly AzureBlobStorageClient _azureBlobStorageClient;

    public AzureBlobStorageImportantDocumentActiveness(
        AzureBlobStorageClient azureBlobStorageClient
    )
    {
        _azureBlobStorageClient = azureBlobStorageClient;
    }

    public async Task<bool> IsActive(Guid partnerId, ImportantDocument importantDocument)
    {
        /* implementation */
    }
}

What about additional processing?

public class DocumentProcessingWebService : IProcessImportantDocument
{
    private readonly HttpClient _httpClient;

    public DocumentProcessingWebService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<ImportantDocument> Process(ImportantDocument importantDocument)
    {
        /* implementation */
    }
}
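
And the last capability - saving - could be provided by something like this (the class name and the Cosmos DB Container usage are my assumptions):

public class CosmosDBImportantDocuments : ISaveImportantDocument
{
    private readonly Container _container;

    public CosmosDBImportantDocuments(Container container)
    {
        _container = container;
    }

    public async Task Save(ImportantDocument importantDocument)
    {
        // upsert keeps the "store the processed or unprocessed document" behavior from before
        await _container.UpsertItemAsync(importantDocument);
    }
}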

We can name the collaborator accordingly, based on the provided capabilities.
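
And if we stick to a typical container, say Microsoft.Extensions.DependencyInjection, the composition root might wire those role players roughly like this (assuming an IServiceCollection called services and the classes sketched above; the lifetimes are just a guess):

services.AddScoped<ICheckImportantDocumentActiveness, AzureBlobStorageImportantDocumentActiveness>();
services.AddScoped<ICheckAdditionalProcessingRequirement, ApplicationConfigurationAdditionalProcessingRequirement>();
services.AddScoped<IProcessImportantDocument, DocumentProcessingWebService>();
services.AddScoped<ISaveImportantDocument, CosmosDBImportantDocuments>();
services.AddScoped<ProcessImportantDocumentCommandHandler>();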

Capabilities, responsibilities and roles

Isn't it that I just made up some fancy new way to inject dependencies, breaking the convention of using interfaces?

Isn't it that this tiny toy example allows for that, while in a big pile of code it won't be that easy?

Might be.

Interface-oriented design might mean something different than using the interface keyword.

It is a mental model, a way of thinking.

Instead of focusing on "doers" (-ers suffix), we directed our magnifying glass to "capabilities" or simply - "behaviors".

At some point "a doer" might and probably should appear. But it does not mean it's required from the start.

As we explored the "weird .NET convention" of putting "I" before each interface's name in I, interface, we are able to amplify the essence of each "object" - its behavior.

This might make our thinking oriented towards treating "objects" more like "units of behavior" rather than "bags of data with some methods attached".

And that's why the interface keyword might be really dangerous.

Why?

Typically a new interface gets created from a God-like perspective - the designer's.

And this comes with a risk of the interface getting polluted by this overarching knowledge.

But wait, what do we really want to express through an interface?

An ability to "mock" it later?

Need-driven design

Imagine that there's a magical language that allows doing this:

public responsibility ISaveImportantDocument
{
    public Task Save(ImportantDocument importantDocument);
}

As there is a need to save an important document, loading might also be relevant:

public responsibility ILoadImportantDocument
{
    public Task<Result<ImportantDocument, LoadingDocumentError>> Load(Guid partnerId, Guid documentId);
}

There is a high chance that those capabilities might be assigned to a specific role:

public role ImportantDocumentsRepository
    : ISaveImportantDocument, ILoadImportantDocument
{}

Everything is living in the conceptual plane and there's no such language, of course.

But no one can stop us, dear Reader, from having a different way of thinking about what we do, right?

Implementation-wise, we would use the interface keyword.

Also, we dropped the conventional I prefix for this role description.

Not having this convention in place might feel weird, but there's a potential benefit - we would need to name whoever is going to play this role accordingly.

So instead of having ImportantDocumentsRepository implementing IImportantDocumentsRepository, we would have ImportantDocumentsCosmosDbRepository playing a role of ImportantDocumentsRepository.
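
In plain C#, that might translate into something like this (a sketch, not a prescription):

// the "role" - an interface without the I prefix, aggregating the responsibilities
public interface ImportantDocumentsRepository : ISaveImportantDocument, ILoadImportantDocument
{
}

// the "role player" - named after how it fulfils the role
public class ImportantDocumentsCosmosDbRepository : ImportantDocumentsRepository
{
    /* implementation backed by Cosmos DB */
}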

Worth noting is that now we can operate on very granular responsibilities when we need to.

Beyond interfaces

interface is just a keyword in various languages we use.

Focusing on the behaviors, capabilities, responsibilities and roles required by particular service takers "decouples" the service providers playing those roles from the context they are acting in.

This might eventually lead to more code.

Code should serve the purpose and should deserve its existence.

If it is there to capture the knowledge, to enhance the understanding - it's totally worth it.

Even though it might initially remind us about The cost of modeling.

By being very explicit about what is needed by a specific collaborator, we were able to shuffle those responsibilities and assign them differently, without making many changes in "the consumers of these capabilities" (the service takers).

This is one of the positive aspects of the "Outside-In approach" - it is "consumer-driven", so we pay attention to the requirements stated by the "service takers".

When dealing with an interface, we shouldn't focus on the "object" side of it.

Look beyond that, dear Reader, think of the consumers, of the "users" - what do they really need?

What capabilities do they ask for so that they can do their job in the best way?

Next time, dear Reader, when working with either an existing interface or an interface-to-be, try to "replace" the interface keyword with responsibility or role - how does it sound?

"By sheer coincidence", this will make our "interfaces" smaller, much more well-defined, and serve the right purpose - the recipient's needs.

Almost as if we "followed" one of "the principles" - the Interface Segregation Principle.

But not following it religiously - knowing what it might eventually yield - a very specific way of thinking!

And remember, dear Reader - we can always talk to command handlers (and not only to them!), so use this great option to learn about their needs.

It will pay off.