Your suggestion essentially argues that if your code calculates the length of the hypotenuse (i.e. Pythagoras' theorem), and you test this code, then you don't need to test the underlying methods that calculate the squares and square roots of numbers.
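To make that concrete, here's a minimal sketch of such a layered design (the names `square`, `square_root`, and `hypotenuse` are hypothetical, purely for illustration):

```python
import math

def square(x: float) -> float:
    # Underlying helper: x squared.
    return x * x

def square_root(x: float) -> float:
    # Underlying helper: fails loudly on invalid input.
    if x < 0:
        raise ValueError("cannot take the square root of a negative number")
    return math.sqrt(x)

def hypotenuse(a: float, b: float) -> float:
    # Pythagoras: c = sqrt(a^2 + b^2), composed from the helpers above.
    return square_root(square(a) + square(b))
```

The question, then, is whether a test of `hypotenuse` alone is enough.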
Tests should not assume the implementation of the unit under test.
Otherwise, you'd open the door to selectively not testing some things "because you know that that part works". This goes against the purpose of having an automated testing suite that you can run to detect regressions in the future.
Tests don't exist to succeed, they exist to fail.
This one may seem counterintuitive at first. Let's use an analogy:
We don't buy a smoke detector so the house doesn't catch on fire. We buy a smoke detector so that when the house is on fire, we know about it as soon as possible.
Similarly, we write tests because we want them to fail. That's their entire point: we use their failure to alert us that something's gone amiss in the codebase.
Scenario A: your Pythagorean theorem test fails. As we established, this is a complicated orchestration of several components (theorem calculator, square calculator, square root calculator). Which component failed?
You don't know. That's a problem. Not that you can't debug it, but I would question the quality of a test that cannot tell me specifically which part failed. It clearly means that we haven't asserted each step of our process individually.
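For contrast, here is roughly what individually asserted components could look like, assuming the earlier sketch lives in a hypothetical module `pythag.py`; when one of these tests fails, its name alone tells you which component broke:

```python
import unittest

from pythag import square, square_root, hypotenuse  # hypothetical module from the earlier sketch

class ComponentTests(unittest.TestCase):
    # One test per component: a failure immediately names the broken part.
    def test_square(self):
        self.assertEqual(square(3), 9)

    def test_square_root(self):
        self.assertEqual(square_root(16), 4)

    def test_hypotenuse(self):
        self.assertEqual(hypotenuse(3, 4), 5)

if __name__ == "__main__":
    unittest.main()
```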
Scenario B: your Pythagorean theorem test succeeds. So now we know everything is working as intended, right?
Most likely, yes. I don't want you to think that you should never trust a test when it passes.
However, it's possible that you're not catching any "two wrongs make a right" kinds of scenarios. This requires a different example to explain. Suppose you have a complex process which requires you to parse a string from a particular source, remove the letter "e", and then return the cleaned-up string.
Now consider this scenario:
- [Bug] The censoring logic doesn't actually remove the letter "e"
- [Bug] The parser logic doesn't store any character past the first 4 characters
- [Test data] You happen to have used
"abcde"
as your test string.
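Here's a minimal sketch of how those two bugs cancel each other out (the function names are hypothetical):

```python
def parse(raw: str) -> str:
    # [Bug] silently drops every character past the first 4.
    return raw[:4]

def censor(text: str) -> str:
    # [Bug] supposed to remove every "e", but doesn't.
    return text

def clean(raw: str) -> str:
    return censor(parse(raw))

# The end-to-end test passes despite both bugs:
assert clean("abcde") == "abcd"  # the truncation bug hides the censoring bug

# Component tests would have exposed each bug on its own:
# assert parse("abcde") == "abcde"  # fails: parser truncates the input
# assert censor("abcde") == "abcd"  # fails: censor doesn't remove "e"
```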
This is obviously a cherry-picked bug with a blatantly simple root cause, but I've encountered these kinds of scenarios several times in enterprise-grade applications, albeit with much more convoluted root causes than this simple example.
Not having tests that individually assert each component makes it significantly harder to notice that there is a problem, and harder still to identify where it is located.
It's a trap!
In my experience as a developer and consultant over the years, it is very common for developers to suggest cutting corners when writing their test suites. The intention of cutting down the total effort required is good, and the developer may be putting considerable, genuine effort into achieving what they consider to be an equally high-quality test suite.
However, in a significant number of cases, developers fall into the trap of assuming that two test suites are equivalent if the happy path looks the same, but this is not the correct approach. Ideally, you should consider both paths; but if you're going to focus on one, it should be the unhappy path.
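To illustrate what focusing on the unhappy path can look like, here is a minimal sketch reusing the hypothetical `square_root` helper from earlier: rather than only confirming that good input produces good output, it asserts that bad input fails loudly instead of returning a silently wrong answer:

```python
import unittest

from pythag import square_root  # hypothetical module from the earlier sketch

class UnhappyPathTests(unittest.TestCase):
    def test_square_root_rejects_negative_input(self):
        # Bad input should fail loudly, not produce a silently wrong value.
        with self.assertRaises(ValueError):
            square_root(-1)

if __name__ == "__main__":
    unittest.main()
```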
In essence, a test suite does not succeed because its tests all pass; it succeeds because none of its tests fail. (I'm aware these are semantically equivalent, but I'm trying to get you to focus on the failures rather than the successes.)