A Programmer's Attitude Towards Effective Test Cases


Questions like "How should one test private methods?" and "Should one test private methods at all?" are perennial favorites in developer communities, and I have long meant to organize my thoughts and write a summary on the subject. The topic itself is relatively narrow, but the answer differs from one developer to another, and it seems especially divisive in international forums. In essence, though, the question boils down to what effective test cases are.

Private methods come from the object-oriented world, and functions hidden inside closures, reachable only through the functions a module exposes, are products of the same idea. Both can be categorized as encapsulated details hidden behind a module's external interface, and for simplicity I will call them the internal implementation. So, should you test the internal implementation? To save you some time, the answer is "no." Well, actually the answer is "yes," and by now it is fair for you to wonder what I'm on. You must refrain from writing test cases that target the internal implementation directly and should only test the exposed external interface. Through that interface, the internal implementation ends up being tested anyway.
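
A minimal sketch of what this looks like in practice, using TypeScript and Jest with hypothetical names (roundToCents, formatPrice): the helper belongs to the internal implementation, and the test only ever touches the exported function.

    // priceFormatter.ts: a hypothetical module
    // roundToCents is internal implementation: not exported, reachable only via formatPrice
    function roundToCents(value: number): number {
      return Math.round(value * 100) / 100;
    }

    // formatPrice is the external interface, the only thing callers and tests should know
    export function formatPrice(value: number, currency: string): string {
      return `${currency} ${roundToCents(value).toFixed(2)}`;
    }

    // priceFormatter.test.ts: the rounding logic is exercised, but only indirectly
    import { formatPrice } from "./priceFormatter";

    test("formats a price rounded to two decimal places", () => {
      expect(formatPrice(19.999, "USD")).toBe("USD 20.00");
    });

If roundToCents later changes shape or disappears during a refactor, this test keeps passing as long as the exposed behavior stays the same.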

It is acceptable to write temporary test cases for the internal implementation to automate repetitive checks, but the final code should only contain test cases for the external interface.

You can write test cases that reflect the current situation, but you must keep only the test cases for the future.

So, what's the gist, physicist?

From here on, let's consider the act of testing to mean writing test cases that automate the tests for a given module. I will not go on and on about the numerous benefits of testing; they are clear and easy to find elsewhere. Different developers emphasize different benefits, and I will also not distinguish between unit tests and integration tests. This article is purely about automated test code, whether or not it is written with the TDD methodology.

It has been around seven years since I started practicing TDD, the methodology that takes tests seriously, and since then my thoughts on testing have only become clearer. The sole purpose of testing is to help the developer (and the project). In practice, it hardly matters how a developer writes test cases as long as they provide some benefit. Whether to include the internal implementation in testing is therefore up to the individual developer, and that preference need not be questioned: you are free to test private methods if you find it necessary, and on that point no one, not even Kent Beck, Martin Fowler, or Uncle Bob, should have a say. However, that freedom is only exercisable when working alone. When you are working on a project with a team, it's a whole new world.

I must admit that I sometimes write test cases for the internal implementation, but only as temporary ones. It is acceptable to write test cases that automate a repetitive sub-routine you would otherwise check by hand, and when you follow the TDD process it often happens that you start out testing an external interface and gradually drill down into the internal implementation; sometimes that interface is later removed or merged with another. As the module changes, test cases that directly target the internal implementation must be removed and folded into the tests for the external interface.

The Dangers of Self-Satisfaction

Directly testing the internal implementation devalues the future worth of the test cases; in the long run, they are of no help. It may feel like you're really testing, but you must never get sucked into TDD's Red-Green-Refactor cycle without truly understanding what the test cases are for, like a code junkie hooked on dopamine. Sometimes test cases simply aren't helpful.

If you have tried it yourself, you will know viscerally that testing the internal implementation directly is far easier than testing through the external interface. It also feels more intuitive, so the satisfaction per line of test code is higher. It may be instinct to write such tests to get a quick view of the whole picture, and they do leave you with a sense of pride and euphoria. But to reiterate, not every test case is helpful. The fewer the test cases, the better: the value of tests is measured by how much effect you get from how few of them. The opposite scenario, with innumerable test cases, hinders the project by interfering with every minuscule step along the way, and that kind of micromanagement is sure to breed an adverse attitude towards testing.

The more unnecessary test cases there are, the less the necessary ones are worth, and trust in the project's test cases plummets. Once that trust is lost, the test cases become a hindrance that rivals legacy code in annoyance. You end up in a place where you can neither stop writing test cases nor remove them, even though they provide no value. Or, in the worst case, you have persuaded yourself that they are beneficial.

The Relationship Between the Module and the Test Cases

Each module within an application has its own responsibility; its features can be modified, or it can be replaced with a better-performing module. Modules are the software version of small cogs in a larger machine.

An outdated cog can be replaced with one that is lighter, faster, or safer. The larger machine does not care one bit about incidental details like the cog's color or material, as long as the cog fits and works; it is satisfied as long as the cog keeps doing its job. Therefore, the only thing that needs to be tested before replacing an old cog with a new one is whether the new cog functions as expected with the other cogs.

Test cases ensure that each module keeps fulfilling its responsibility as features are modified or added. They also provide confidence that the larger machine, the application, still works when internal implementations have been refactored, modified, or even replaced.

Therefore, whatever module is being tested, the test cases must ensure that the whole system can keep performing as expected. It hardly matters what the cog looks like or what it is made of; it doesn't even matter whether the cog is really a cog within a cog, like a set of Russian dolls. The user only needs to know the cog's exposed exterior and must be able to use what is made public to them; that is the cog's function and responsibility. Test cases can be regarded as the module's users too. They must test the module's unchanging responsibility, not its changeable internal implementation. If the external interface through which the module performs its duties is constantly changing, that is a sign of design flaws in the application.

A module must always maintain its external interface regardless of its internal implementation; in other words, it must be replaceable by any module with the identical function and responsibilities. Test cases that target a module should only be aware of the module's abstract responsibilities. In doing so, they acknowledge the module's diversity and autonomy.

If you think you've seen this somewhere before, you are correct: it is the Dependency Inversion Principle (the DIP in SOLID). The match is not literal, but the purpose and the effect are the same. Test cases must look at a module as its user and must not depend on the module's internal details; they must rely on abstractions. The tests target abstract responsibilities, not concrete modules. This is how you keep test cases flexible, and ultimately how you can test any module that shares the same interface.
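
One way to make that concrete, sketched below in Jest-style TypeScript with hypothetical names (Cache, MapCache, LruCache): the test suite depends only on an abstract interface, so any module fulfilling that interface can be run through the same tests.

    // A hypothetical abstract responsibility that the tests depend on
    interface Cache {
      set(key: string, value: string): void;
      get(key: string): string | undefined;
    }

    // A contract suite: it knows the responsibility, not any concrete module
    function cacheContractTests(name: string, makeCache: () => Cache): void {
      describe(`${name} fulfils the Cache responsibility`, () => {
        test("returns what was stored under a key", () => {
          const cache = makeCache();
          cache.set("a", "1");
          expect(cache.get("a")).toBe("1");
        });

        test("returns undefined for an unknown key", () => {
          expect(makeCache().get("missing")).toBeUndefined();
        });
      });
    }

    // Any implementation sharing the interface reuses the same suite, e.g.:
    // cacheContractTests("MapCache", () => new MapCache());
    // cacheContractTests("LruCache", () => new LruCache(100));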

If you still feel that some module needs to be tested internally, it is likely a signal that part of it should be separated out as an independent module with its own responsibility. Once the internal implementation is extracted as its own class or module, test cases can be written against that entity's external interface. This is a good example of promoting an internal implementation into an external interface.
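
For instance, a sketch under assumed names (SignupForm, isValidEmail): a validation helper that used to be a private detail of a form module is extracted into its own module and is then tested through its own public interface.

    // Before (hypothetical): validateEmail was a private helper inside SignupForm,
    // and the only way to cover it was to poke at the form's internals.

    // After: emailValidator.ts, the helper is now a module with its own interface
    export function isValidEmail(input: string): boolean {
      // deliberately simplistic rule, for illustration only
      return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);
    }

    // emailValidator.test.ts: tested through its public interface like any other module
    import { isValidEmail } from "./emailValidator";

    test("accepts a plain address", () => {
      expect(isValidEmail("user@example.com")).toBe(true);
    });

    test("rejects an address without a domain", () => {
      expect(isValidEmail("user@")).toBe(false);
    });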

Consider the test case to be the module's user. The user does not need to know the internal implementation of the module. This is the relationship between the test case and the module, and the test case is also a module that has the target module as a dependency.

Effective Test Cases

Effective test cases can be examined from two perspectives: the present and the future.

Test cases written for the present automate the tests for the code as it is right now. They can run through different input values and cut the time spent checking results by hand. During this process the internal implementation (or what you thought was an external interface) may well get tested directly, simply because it is beneficial from the standpoint of automating the task at hand. Furthermore, as test cases pile up, they help prevent new code from causing side effects in the existing code.
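
A small, present-oriented sketch (Jest's test.each, with a hypothetical slugify function): several inputs are checked in one automated run instead of being verified by hand in a console or browser.

    // slugify.test.ts: slugify is a hypothetical function under development
    import { slugify } from "./slugify";

    test.each([
      ["Hello World", "hello-world"],
      ["  Already trimmed  ", "already-trimmed"],
      ["Multiple   spaces", "multiple-spaces"],
    ])("slugify(%p) returns %p", (input, expected) => {
      expect(slugify(input)).toBe(expected);
    });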

While TDD does not directly help with the application's structural design, it can be incredibly beneficial for sorting out the roles and responsibilities of modules that have to cooperate. Throughout this process, the boundary between a module's inside and outside becomes clearer. Writing test cases before the actual program means programming from the user's perspective.

Test cases written for the future not only describe the target module's roles and responsibilities but also serve as a detailed user's manual. Simply reading their descriptions should give the reader a complete understanding, brief yet to the point. Furthermore, when functionality is changed or added to the target module, the test cases must automatically verify that the changes comply with the existing specs, minimizing risk. In the extreme case, they must be able to guarantee the same functionality even after the module has been replaced entirely.
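
As an illustration, assuming a hypothetical ShoppingCart module, the test titles alone read like a short manual of the module's responsibilities:

    // ShoppingCart.test.ts: ShoppingCart and its methods are assumed, for illustration
    import { ShoppingCart } from "./ShoppingCart";

    describe("ShoppingCart", () => {
      test("starts out empty", () => {
        expect(new ShoppingCart().itemCount()).toBe(0);
      });

      test("adding an item increases the item count", () => {
        const cart = new ShoppingCart();
        cart.add({ name: "apple", price: 500 });
        expect(cart.itemCount()).toBe(1);
      });

      test("total is the sum of the prices of added items", () => {
        const cart = new ShoppingCart();
        cart.add({ name: "apple", price: 500 });
        cart.add({ name: "pear", price: 700 });
        expect(cart.total()).toBe(1200);
      });
    });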

Whether you follow the TDD cycle or not, to get the most out of writing test cases you should write them for the present and gradually reshape them to be future-oriented as the project progresses. Of course, you can also start with future-oriented test cases. The important thing is to remove unnecessary test cases and keep improving the rest along the way.

If you have adopted TDD or test automation for a project, you must commit to continuously developing and refactoring the tests as well as the modules. Test cases are what allow a module to keep its external interface while its internal code is optimized to be faster and lighter. Since there is no such thing as perfect code to begin with, the test cases themselves must also evolve to be faster, lighter, and more intuitive. Test cases are modules that test other modules, and they deserve the same respect. The mere act of writing test cases does not automatically make the program better; they can even become a hurdle and an annoyance. You must continuously strive to write better test cases and find better ways to test.

"The single ultimate way to come up with effective test cases" for all projects does not exist. Only the best given the project's situation can exist. Understanding the benefits the test cases offer and what test cases can be of help is the bare minimum of writing appropriate test cases for the project and the stating point.

Test the Internal Implementation Only Through the Public Interface

As mentioned earlier, test cases should have no direct knowledge of the internal implementation, and the internal implementation should only be tested through the public interface. The internal implementation is bound to be used by the public interface; if it isn't, that code should be removed. It may be easier to test the internal implementation directly, but we are not writing test cases for the sake of writing test cases; we are writing them to reap the benefits of test automation.

Amidst the debate on whether or not to test the internal implementation, some claim that testing it only through the external interface makes it harder to gauge the completeness of the tests, because it is difficult to know how much of the internal implementation is actually exercised. Here is a little tip: coverage is the metric designed for exactly this purpose. It is not merely a tool for judging which programmer has written good test cases.

Coverage is the metric to consult when it is hard to tell how far the tests reach, such as when the internal implementation is tested only through the public interface. It does not validate the quality of the tests, but it does give a fair measure of their extent. Keep in mind that you can write completely useless test cases and still reach nearly 100% coverage. So while coverage does not guarantee quality, it does outline which parts of a module the test cases already cover and where to write the next ones; it is exactly what it says on the tin. Some people ask ambitious questions like "what's your coverage?" the moment you mention that you practice TDD, but considering the underlying intent of that question, I'll hold my tongue and any further judgement.
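
As a concrete, assumed setup (Jest is only one of many tools that report coverage, and a recent version is assumed here), enabling the report is usually a matter of configuration:

    // jest.config.ts: a minimal sketch, assuming Jest is the test runner
    import type { Config } from "jest";

    const config: Config = {
      collectCoverage: true,                 // produce a coverage report on every run
      collectCoverageFrom: ["src/**/*.ts"],  // measure all source files, not only those the tests import
    };

    export default config;

The resulting report maps what the public-interface tests actually reach, which is the sense in which it is used here: a guide for the next test, not a grade.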

If you are a developer seriously intent on applying TDD or test automation to a project, you are more likely to be interested in separating what is testable from what is not. That separation involves criteria, methods, and team consensus, but I will refrain from going into more detail in this article.

Closing Remarks

In conclusion, I believe test cases should carry the same weight as modules. They are, in essence, modules responsible for testing other modules. If you think of the relationship between test cases and modules as a relationship between two modules, you'll get a better sense of how test cases should treat their target modules.

For a front-end developer, test automation has been one of the harder subjects. Test cases written in code are best suited to situations where the inputs and outputs are programmable data; side effects and visible results that cannot be expressed as data have always posed difficult challenges for front-end developers. Fortunately, front-end testing technology has advanced along with the front end itself, and with so many options available, the challenge now is to determine which one yields the best results.

As is generally accepted in programming, there is no ultimate solution; tests and testing methodologies will keep evolving as a field of software development in their own right. Amidst all that change, we must not forget that what matters most is the effectiveness of the tests, and we must keep looking at them objectively. Only tests that are truly beneficial can help you; tests that merely seem helpful may come back to haunt you.

Sungho Kim, 2020.07.08