Dependency Inversion Principle - part II

Hadi published on


I had to write this one, because I noticed I didn't cover everything in my previous post, Dependency inversion principle.


Above wallpaper reference [1]

In the previous post, I explained how the Dependency Inversion Principle, DIP, is about having independent components with a strictly defined input and output, IO, for each component. Note that "strictly defined IO" is a fancy name for just "naming" things. For instance, the IO of an object named "table" is the output information that can be analyzed to reveal a flat surface above the ground, with a size comparable to that of an ordinary human body. So we name an object by defining its IO. As mentioned before, it is easy to name stuff; the hard part of DIP is making the underlying object independent of the rest of the world beneath the defined IO.

Venn Diagram Dependency Got Inverted

Here, \(A\) depends on \(C\), where \(C\) is the set of interfaces, i.e., the definitions of the IO of \(B\); we call \(B\) the implementation. Overall, the following points summarize DIP from that post.

  • Hiding the details by defining only the inputs and outputs.
  • The IO definition must stay consistent over time.
  • Making independent implementation and interfaces out of those details.
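As a concrete sketch of the \(A \to C \leftarrow B\) relationship, here is a minimal Java example; all names (Table, WoodenTable, Room) are illustrative, not from the original post:

```java
// C: the interface, i.e., the strictly defined IO.
interface Table {
    double surfaceHeightMeters(); // output: height of the flat surface
}

// B: an implementation, free to change as long as the IO stays consistent.
class WoodenTable implements Table {
    @Override
    public double surfaceHeightMeters() {
        return 0.75; // comparable to the size of an ordinary human body
    }
}

// A: depends only on the interface C, never on B directly.
class Room {
    private final Table table;

    Room(Table table) {
        this.table = table;
    }

    boolean fitsUnderCeiling(double ceilingMeters) {
        return table.surfaceHeightMeters() < ceilingMeters;
    }
}
```

Notice that `Room` compiles against `Table` alone, so `WoodenTable` can be swapped for any other implementation without touching `Room`.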

I shouldn't have stopped there! I should have shown you the whole picture!

Bubble DIP

This is what I call a bubble fractal. As the picture above shows, the blue color inside the bubbles is the implementation of the yellow IO that wraps them. Therefore, each component, a bubble, only touches the other ones via their surfaces, their defined IO. Notice, the whole thing is one big bubble, which has its own defined IO. Similarly, the implementation of each small bubble can be split into even smaller bubbles, and so on. This self-similar scaling property is why we can call it a fractal. In the end, we are looking for structures that scale well, so we could use them to scale up to a Kardashev scale [2] Type I civilization, right? Also notice that if you want to address each of these bubbles, the effective data structure would be a tree graph, where each node is a bubble that links to the smaller bubbles inside it. I am going to give some examples of this bubble fractal later in this post.
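To make the tree-graph idea tangible, here is a hypothetical Java sketch; the `Bubble` class and its path-based addressing are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Each node is a bubble; its children are the smaller bubbles inside it.
class Bubble {
    final String name;                               // the bubble's defined IO, its "surface"
    final List<Bubble> children = new ArrayList<>(); // implementation details, hidden inside

    Bubble(String name) {
        this.name = name;
    }

    Bubble add(Bubble child) {
        children.add(child);
        return this;
    }

    // Address a bubble by a path such as "storage/cache"; returns null if absent.
    Bubble find(String path) {
        if (path.isEmpty()) return this;
        String[] parts = path.split("/", 2);
        for (Bubble c : children) {
            if (c.name.equals(parts[0])) {
                return c.find(parts.length > 1 ? parts[1] : "");
            }
        }
        return null;
    }
}
```

The recursion mirrors the fractal: addressing a deep bubble only ever talks to surfaces, one level at a time.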

In the previous post, there is a section named "What's its practical usage?", which contains a list of usages for the Dependency Inversion Principle, DIP. Here are some more usages.

Writing tests

How did I miss this one! The implementation inside each of the above bubbles depends on some other bubbles that are touched via their yellow interfaces. This allows us to replace the inside of those interfaces whenever we need a test environment to verify their behavior. Notice, by test environment we mean having mocks, or fakes, instead of actual implementations. This is a great advantage of using DIP, because you can test all the corner cases of an implementation by replacing its dependencies with whatever is needed to reproduce each corner case.

If you have ever written tests for your program, you should have noticed that decoupling the implementations by adding interfaces between them helps you replace some of them with mocks, or fakes, to write unit tests, or even functional tests, but not end-to-end tests of course! Notice here, by decoupling we are referring to having independent components.
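Here is a hedged sketch of that idea in Java; `PaymentGateway`, `Checkout`, and `FailingGateway` are hypothetical names. The fake reproduces a "gateway is down" corner case without any real network:

```java
// The yellow interface: the strictly defined IO of the payment component.
interface PaymentGateway {
    boolean charge(int cents);
}

// The implementation under test depends only on the interface.
class Checkout {
    private final PaymentGateway gateway;

    Checkout(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    String purchase(int cents) {
        return gateway.charge(cents) ? "paid" : "retry-later";
    }
}

// A fake that reproduces the failure corner case deterministically.
class FailingGateway implements PaymentGateway {
    @Override
    public boolean charge(int cents) {
        return false;
    }
}
```

In a test you construct `Checkout` with `FailingGateway` (or, since the interface has a single method, just a lambda) and verify the behavior for each corner case.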

Parallel Tasks

In a world where we have almost reached the end of Moore's law [3], we have to run our programs across different physical, or virtual, threads to keep up with the demand for computation. This means you need to decouple your runtime to let the pieces of computation run in isolation, in parallel or concurrently. Having isolation, which refers to independent components, can always be handled in the software world by using DIP.

Note that it's not only about small pieces of the runtime, or application. We are talking about the architecture of the application in general. If you scale horizontally for CPU resources, by using a load balancer, or for storage resources, via sharding your data, then even though you can implement it across different programming languages, you still need to have independent components and clear IO; that is DIP. By the way, the clear definition of IO in DIP allows us to measure it, which can be used to determine how effective a component is.

On a smaller scale, DIP gives us a playground for arguments about how to implement parallel jobs. Let's assume you are using the synchronized keyword in Java, or generally using a mutex, in an implementation. These keywords, or methods, can potentially make parts of your runtime dependent on each other, since they block one another, which breaks the independence constraint we have in DIP. Therefore, you should only use these keywords, or methods, to distribute the input into the independent components, or to aggregate the outputs of a bunch of independent components. In other words, the part of the code that is synchronized should call some interfaces, where the independent implementations of those interfaces are hidden. This gives us a good measure of whether we are doing it in a way that runs frictionlessly.
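A minimal sketch, assuming Java's java.util.concurrent: the workers behind the interface run independently with no locks inside, and the blocking (here hidden inside `Future.get`) happens only at the boundary that aggregates their outputs. `Worker` and `Aggregator` are illustrative names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// The defined IO of each independent component.
interface Worker {
    int compute(int input);
}

class Aggregator {
    static int runAll(Worker worker, List<Integer> inputs) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int input : inputs) {
                // Each task runs in isolation; no shared state, no locks inside.
                futures.add(pool.submit(() -> worker.compute(input)));
            }
            int total = 0;
            for (Future<Integer> f : futures) {
                total += f.get(); // blocking only here, at the aggregation boundary
            }
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

The `Aggregator` never knows how `Worker` is implemented; all the synchronization lives at the edge where outputs are collected.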

Ownership principle

Let's jump from parallel tasks to the parallel world lines [4] of employees in a company. If you look carefully, they are not so different. This is clearer in IT companies, where the ownership of the code is well-defined, and where the architecture of the software, which decouples responsibilities in the code, is reflected in the architecture of the company across employees. Of course, this is only true if the company follows the ownership principle, but let's assume it does.

However, you can think of applying DIP to companies' architecture in general, by defining clear, and measurable, IO interfaces for a group of people, then letting them decide how to implement those interfaces. Here, implementing those interfaces translates to handling their tasks. A well-defined IO will lead to frictionless interactions among people, which obviously reduces costs on the one hand, and opens the door to scaling up on the other hand. Notice, when we talk about decision-making by employees to satisfy the defined constraints of the IO interfaces, we're talking about the ownership principle. Those employees are the owners of what they are doing. It would be micromanagement if, instead of defining the IO, management controlled how employees do their jobs. That has proven to be costly. Additionally, it's worth mentioning that fully micromanaging everyone is impossible in practice; therefore, defining, and measuring, the IO is the way to go.

The bubble fractal above will help us imagine how it should be done. For instance, in cross-functional teams [5] it's proven that we have less friction while developing something. By functions here, we mean department-like structures, such as mobile, frontend, backend, design, etc. The trick is to have independent components, where we have agents (developers, designers, QA, PO, DevOps, etc.) in each team to define the IO among the different functions. Because the agent is almost always the same person, the defined IO is almost constant over time, which, if you recall, was a constraint for DIP. Additionally, there could be some guidelines for how that IO should work to strengthen this consistency.

In fact, DIP is what we have partially used to scale the structure of organizations and governments. You can see that when you observe the hierarchical structure of departments, which is the tree graph behind the bubble fractal of DIP. You could say that at least the naming has happened, even though the components are not truly independent everywhere. In my view, neither organizations nor governments are scaling properly in our civilization. That makes understanding independent components very important.

Conclusion

As before, DIP is everywhere, but it needs more attention to make the underlying implementations independent. I tried to be clear, but there are probably some vague areas in my explanation, which I would be glad to cover if you point them out to me on Mastodon.


References