Not all unintentional or undesirable aspects of software behavior are bugs. What matters is ensuring that software has a useful, documented range of conditions in which it can be relied upon to operate in useful fashion. Consider, for example, a program which is supposed to accept two numbers, multiply them, and output the result, but which outputs a bogus number if the result is more than 9.95 but less than 10.00, more than 99.95 but less than 100.00, etc. If the program was written for the purpose of processing numbers whose product is between 3 and 7, and will never be called upon to process any others, fixing its behavior near 9.95 wouldn't make it any more useful for its intended purpose. It might, however, make the program more suitable for other purposes.
In a situation like the above, there would be two reasonable courses of action:
1. Fix the problem, if doing so is practical.
2. Specify the ranges in which the program's output is reliable, and state that the program is only suitable for use on data which is known to produce values within those ranges.
Approach #1 would eliminate the bug. Approach #2 might make the program less suitable for some purposes than it otherwise would be, but if there is no need for the program to handle the problematic values, that might not be a problem.
Even if the inability to handle values from 99.95 to 100.00 correctly is the result of a programming mistake [e.g. deciding to output two digits to the left of the decimal point before rounding to one place after it, thus yielding 00.0], it should only be considered a bug if the program would otherwise be specified as producing meaningful output in such cases. [Incidentally, the aforementioned problem occurred in the printf code of Turbo C 2.00; in that context it's clearly a bug, but code which calls the faulty printf would only be buggy if it might produce outputs in the problematic ranges.]