This is really a question about proof versus evidence.
A test suite helps with verification and validation, but you can never be sure that the tests cover all the situations your software will go through once it is used in real life. Successful tests only prove that the specific cases tested work as expected under the testing conditions. In more mathematical terms, tests are examples, not a proof.
For example, a test suite could produce perfect results in 100% of the test cases on the test machine and still fail in some situations:
- on another machine, because there's not enough memory and your software cannot cope with this unexpected situation, or because of a flaw in the CPU that only affects some specific values;
- on the same machine, but with other values, because of subtle rounding errors (look at all the Stack Overflow questions about comparisons on floating-point values that unexpectedly fail; see the sketch after this list).
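To illustrate the rounding-error case, here is a minimal Python sketch. The function `accumulate` and the chosen test values are hypothetical, made up just for this illustration: its tests pass on the values the author happened to pick, yet the same property fails on other inputs because of floating-point representation.

```python
def accumulate(step: float, count: int) -> float:
    """Add `step` to zero `count` times (hypothetical example function)."""
    total = 0.0
    for _ in range(count):
        total += step
    return total

# These "tests" pass: the chosen values are exactly representable in binary.
assert accumulate(0.5, 4) == 2.0    # 0.5 is a power of two: exact
assert accumulate(0.25, 8) == 2.0   # so is 0.25

# The same property fails for other values: 0.1 has no exact binary
# representation, so ten additions do not sum to exactly 1.0.
print(accumulate(0.1, 10) == 1.0)   # False on IEEE 754 doubles
print(accumulate(0.1, 10))          # 0.9999999999999999
```

The passing asserts are genuine evidence, but only for those specific inputs; the last two lines are the counter-example that the suite never tried.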
A failed test is therefore a counter-example: it is the evidence that invalidates the assumption that the system works as it should in all cases (by contraposition, spelled out below).
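Spelled out formally (where Correct and Pass are informal predicates introduced here just for illustration):

```latex
% If the system S were correct, every test t in the suite T would pass.
% The contrapositive is what a failing test gives us:
\[
  \mathrm{Correct}(S) \;\Rightarrow\; \forall t \in T:\ \mathrm{Pass}(t)
  \qquad\equiv\qquad
  \bigl(\exists t \in T:\ \neg\mathrm{Pass}(t)\bigr) \;\Rightarrow\; \neg\mathrm{Correct}(S)
\]
```

Note that the converse does not hold: all tests in T passing does not imply Correct(S), because T never covers every possible input.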
Additional information: software engineering also has formal methods to prove that a system satisfies its specification. But in real life these formal methods are extremely difficult and expensive (more than three times the cost of traditional methods), so they are only used in rare, critical cases. Most of the time, a good test suite is considered sufficient.
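To make the example-versus-proof distinction concrete, here is a minimal sketch in Lean 4 (using only its core library). The first declaration checks a single concrete case, which is exactly what a unit test does; the second is a proof that covers every pair of natural numbers at once, which is what formal verification buys you.

```lean
-- A "test": checks commutativity of addition for one concrete pair only.
example : 2 + 3 = 3 + 2 := by decide

-- A proof: commutativity holds for *all* natural numbers a and b.
-- Nat.add_comm comes from Lean's core library.
theorem add_comm_all_cases (a b : Nat) : a + b = b + a := Nat.add_comm a b
```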