We can see the parity bit as a simple checksum. No checksum system, parity included, can guarantee that the received data is correct. What these systems can do is reduce the probability of accepting bad data as good, which is a very different thing.
Let's start with a one-byte checksum. For every datum (a packet) you receive, there are 256 possible values for the checksum, and of those 256 only one is correct. So 255 out of 256 possible errors are detected. That's not bad.
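To make the idea concrete, here is a minimal sketch of a one-byte additive checksum (the names and values are my own illustration, not any particular protocol):

```python
def checksum8(data: bytes) -> int:
    """One-byte additive checksum: the sum of all bytes, modulo 256."""
    return sum(data) % 256

payload = bytes([0x10, 0x22, 0x7F])             # the data we want to send
packet = payload + bytes([checksum8(payload)])  # sender appends the checksum byte

# Receiver recomputes the checksum over the payload and compares it
# with the received one; a corrupted packet matches only 1 time in 256.
data, received = packet[:-1], packet[-1]
print(checksum8(data) == received)  # True
```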
If we now move to the parity bit, we have only two possible values for the checksum: 0 and 1. So we can detect, on average, only 1 error out of 2.
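The parity bit is therefore just a one-bit checksum. A quick sketch (again my own illustration) shows why it catches a single flipped bit but misses half of all possible corruptions:

```python
def even_parity(byte: int) -> int:
    """Even-parity bit: 1 if the byte has an odd number of 1 bits, else 0."""
    return bin(byte).count("1") % 2

b = 0b0110_1001               # four 1 bits -> parity 0
print(even_parity(b))         # 0
print(even_parity(b ^ 0x01))  # one bit flipped -> parity becomes 1, error detected
print(even_parity(b ^ 0x03))  # two bits flipped -> parity still 0, error missed
```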
The bottom line, I think, is this: we cannot really ask "how many errors can we detect?"; we should ask "how much do we improve error detection?". With no parity bit there is no error detection at all; with a single bit of checksum (the parity) we can potentially detect 50% of the errors. Depending on how we look at the question, either we doubled the quality (because we halved the probability of accepting an undetected error), or we made an infinite step forward, from no detection to some detection.
A last thought: the parity bit protects every single byte transmitted on a serial line. Often a meaningful message is made of several bytes; if we invalidate the whole message as soon as any single byte fails its parity check, then every byte in the message doubles the quality of the error detection, because each byte halves the chance that a corrupted message slips through unnoticed.
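If that reasoning is right, a random corruption of an N-byte message passes all the per-byte parity checks only about (1/2)^N of the time. A rough simulation (my own sketch, nothing standard) agrees:

```python
import random

def even_parity(byte: int) -> int:
    return bin(byte).count("1") % 2

def message_ok(payload: bytes, parities: list[int]) -> bool:
    """Reject the whole message if any byte fails its parity check."""
    return all(even_parity(b) == p for b, p in zip(payload, parities))

# Replace every byte of an N-byte message with a random value and count
# how often the damaged message still passes all the parity checks.
random.seed(0)
N, trials, missed = 4, 100_000, 0
for _ in range(trials):
    original = bytes(random.randrange(256) for _ in range(N))
    parities = [even_parity(b) for b in original]
    corrupted = bytes(random.randrange(256) for _ in range(N))
    if corrupted != original and message_ok(corrupted, parities):
        missed += 1
print(missed / trials)  # close to (1/2)**4 = 0.0625: each byte roughly halves the miss rate
```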