All of the software debouncing routines I've seen involve waiting until some number of sequential reads of a signal all return the same value. That makes sense, of course. But it means there's an inevitable compromise between robustness and latency: the more readings you demand before accepting a change in level, the longer the response time. Roughly what I mean is sketched below.
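Something like this, in C (the raw-pin read and the poll tick aren't shown; `DEBOUNCE_COUNT` and the function names are just placeholders):

```c
/* Conventional approach: accept a new level only after it has been
 * read DEBOUNCE_COUNT times in a row on a fixed poll tick. */
#define DEBOUNCE_COUNT 8

static int debounced_state = 0;
static int candidate_state = 0;
static int count = 0;

int debounce_poll(int raw)          /* call once per poll tick with the raw pin level */
{
    if (raw == debounced_state) {
        count = 0;                  /* no change pending */
    } else if (raw == candidate_state) {
        if (++count >= DEBOUNCE_COUNT) {
            debounced_state = raw;  /* accepted only after N agreeing reads */
            count = 0;
        }
    } else {
        candidate_state = raw;      /* new candidate level, restart the count */
        count = 1;
    }
    return debounced_state;
}
```

So the reported level always lags a real edge by `DEBOUNCE_COUNT` poll periods, and any bounce during that window restarts the count.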
It seems like a simpler alternative would be to just ignore the input readings for a certain amount of time after an edge. If the switch had been reading 0 and a single poll then returns a 1, interpret this as a logical 1 for the duration of the expected bounce period. Likewise for a 1-to-0 transition. See the sketch below.
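In other words, something like this (again just a sketch; `LOCKOUT_TICKS` and the poll rate are assumptions):

```c
/* Proposed approach: accept the first edge immediately, then ignore
 * the input for LOCKOUT_TICKS poll periods while it bounces. */
#define LOCKOUT_TICKS 50            /* e.g. 50 ms at a 1 ms poll rate */

static int state = 0;
static unsigned lockout = 0;

int lockout_debounce_poll(int raw)  /* call once per poll tick with the raw pin level */
{
    if (lockout > 0) {
        lockout--;                  /* still in the bounce window: ignore the input */
    } else if (raw != state) {
        state = raw;                /* accept the new level on the very first read */
        lockout = LOCKOUT_TICKS;    /* then hold it for the expected bounce period */
    }
    return state;
}
```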
Obviously this would still limit the maximum input rate. But it would also bring the latency for a single button press down to nearly zero, even for extremely long debounce times.
Are there problems with this approach? It seems like an obvious way to do software debouncing, so I'm surprised it doesn't appear to be more widely used.