
All of the software debouncing routines I've seen involve waiting until some number of sequential reads of a signal all return the same value. That makes sense, of course. But it means there's an inevitable compromise between robustness and latency: the more readings you demand before accepting a change in level, the longer the response time.

A simpler alternative would be to ignore the input readings for a certain amount of time after an edge. If the switch had been reading 0 and a single poll then returns a 1, interpret the input as a logical 1 for the duration of the expected bounce period. Likewise when transitioning from 1 to 0.

Obviously this would still limit the maximum input rate. But it would also bring the latency for a single button press down to nearly zero, even for extremely long debounce times.
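
To make this concrete, here is a minimal sketch of the idea in C. (poll_input() and millis() are hypothetical helpers standing in for a raw pin read and a free-running millisecond counter, and the 20 ms window is just an example figure.)

    extern unsigned long millis(void);   /* hypothetical: free-running ms counter */
    extern int poll_input(void);         /* hypothetical: raw pin read, 0 or 1 */

    #define INHIBIT_MS 20                /* expected worst-case bounce period */

    static int debounced_state = 0;      /* last accepted level */
    static unsigned long last_edge = 0;  /* tick count at the last accepted edge */

    int read_debounced(void)
    {
        unsigned long now = millis();

        /* Inside the inhibit window: ignore the raw input entirely. */
        if (now - last_edge < INHIBIT_MS)
            return debounced_state;

        int raw = poll_input();
        if (raw != debounced_state) {
            /* Accept the new level on the very first differing read,
               then ignore further changes for the bounce period. */
            debounced_state = raw;
            last_edge = now;
        }
        return debounced_state;
    }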

Are there problems with this approach? It strikes me as an obvious way to do software debouncing, so I'm surprised it doesn't seem to be used.

Sneftel

3 Answers


I would offer a Conditional Yes.

Your suggested approach assumes nice, clean signals. If you were to pick up any noise on the trace, you would run the risk of acting on faulty information.
For example: if there were a voltage spike on the signal when you polled the input, you would read a 1 in your program. Under your proposal, you would assume that the switch had been pressed (when it really had not) until you polled the input again after your "debounce interval", at which point you'd find out that, oh wait, just kidding, the switch really wasn't pressed.
So, for the duration of the "debounce interval", you would be continuing through your program acting on the faulty information that the switch had been pressed.

Basically it comes down to two questions:
  • What are you trying to protect against?
  • What's the worst that can happen if you're wrong?

If you were building a battery-operated kid's game in a plastic box, there probably wouldn't be much electrical noise to screw up your inputs, so your simplified debouncing would be just fine.
However, if this is part of a life-support device in a hospital... you might want to go with a little more robust input debouncing logic.

Adam Head
  • Makes sense. So then the best approach might be a hybrid one -- a short "must all agree" period, followed by a longer "inhibit" period. – Sneftel Feb 03 '14 at 21:42
  • @Sneftel Sure; it's just a question of 'what are you trying to filter out?' – Adam Head Feb 03 '14 at 22:09
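
A sketch of that hybrid, under the same hypothetical poll_input()/millis() helpers as in the question's sketch: a short run of agreeing reads gates each edge, and a longer inhibit window follows it.

    extern unsigned long millis(void);   /* hypothetical ms counter */
    extern int poll_input(void);         /* hypothetical raw pin read */

    #define AGREE_COUNT 3                /* short "must all agree" filter */
    #define INHIBIT_MS  20               /* longer post-edge inhibit window */

    static int debounced_state = 0;
    static int agree = 0;                /* consecutive reads differing from state */
    static unsigned long last_edge = 0;

    /* Call periodically, e.g. from a 1 ms tick. */
    int read_debounced_hybrid(void)
    {
        unsigned long now = millis();

        if (now - last_edge < INHIBIT_MS) {
            agree = 0;                   /* ignore everything inside the window */
            return debounced_state;
        }

        if (poll_input() != debounced_state) {
            if (++agree >= AGREE_COUNT) {
                debounced_state = !debounced_state;  /* accept the edge */
                agree = 0;
                last_edge = now;         /* start the inhibit window */
            }
        } else {
            agree = 0;                   /* one disagreeing read resets the run */
        }
        return debounced_state;
    }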

Yes, it will work.

The conventional approach has the advantage that it will reject noise as well.

Usually mechanical bounce is limited to ≤ 5 msec, so I don't think you'd see the difference in apparent response time.
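
For contrast, the conventional counting approach might look something like the sketch below (same hypothetical poll_input() helper). Notice that a single noise spike merely resets the counter rather than being accepted as an edge.

    extern int poll_input(void);     /* hypothetical raw pin read */

    #define STABLE_COUNT 5           /* reads that must agree; ~5 ms at a 1 ms tick */

    static int debounced_state = 0;
    static int count = 0;

    /* Call once per tick, e.g. every 1 ms. */
    void debounce_tick(void)
    {
        if (poll_input() != debounced_state) {
            if (++count >= STABLE_COUNT) {
                debounced_state = !debounced_state;  /* input stable at new level */
                count = 0;
            }
        } else {
            count = 0;               /* any bounce (or noise spike) restarts the count */
        }
    }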

Spehro Pefhany
  • Do you have a reference for the statement that bounce is limited to less than 5 ms? My recollection is that it can be much longer. – Joe Hass Feb 03 '14 at 21:18
  • On small switches (e.g. tact switches) from generic Asian suppliers, my experience is that it is sufficient. I looked up three with max specs: two said < 1 msec, one said < 3 msec on / 8 msec off. 5 msec is the guarantee on the mechanical encoders we use. I'm sure if you tried you could find a counter-example, for example a big snap-action switch, but on such a switch a 20 msec delay is even less noticeable than on a "keypad" type switch. "One size fits all" Maxim debouncers use 20 msec. If it starts to creep up to a 50 msec debounce delay, it becomes noticeable on a keypad. – Spehro Pefhany Feb 03 '14 at 21:35

That would work, of course. But it can be even simpler: I always debounce by reading the pin(s) at least 50 ms apart. Simple and effective.

That does indeed introduce a latency of up to 50 ms. I don't think any human will notice the difference. (How far does a push button travel in 50 ms?)
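
As a sketch, that scheme is just a slow sampler (hypothetical poll_input()/millis() helpers again). Since bounce lasts only a few milliseconds, samples taken 50 ms apart can see at most one unsettled read.

    extern unsigned long millis(void);   /* hypothetical ms counter */
    extern int poll_input(void);         /* hypothetical raw pin read */

    #define SAMPLE_MS 50                 /* minimum spacing between reads */

    static int debounced_state = 0;
    static unsigned long last_sample = 0;

    int read_slow_sampled(void)
    {
        unsigned long now = millis();
        if (now - last_sample >= SAMPLE_MS) {
            last_sample = now;
            debounced_state = poll_input();  /* any bounce has settled by now */
        }
        return debounced_state;
    }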

Wouter van Ooijen