9

When looking up datasheets for logic gates, propagation delay is usually shown as a range. Sometimes there's a "typical" value in the middle, but a min and a max are always listed. On what does the range depend?

Take this chip for example: https://www.ti.com/lit/ds/symlink/sn74auc2gu04.pdf?ts=1647469241013

[Datasheet excerpt: switching-characteristics table listing min/typ/max propagation delays per supply voltage]

At 1.2 volts, the propagation delay is shown as MIN 0.7 ns and MAX 3.1 ns. Is it just basically random each time? Does it depend on temperature? High->low vs. low->high?

I don't really know how to handle that range, and it's likely to matter for my application.

AltF4
    Especially for a 20-year-old IC like that, the delay may change over the years as manufacturing changes, etc. Think of the spec as giving them a range of values they have to stay within for as long as they sell the product, not necessarily a range that any particular batch of parts will cover. – user1850479 Mar 17 '22 at 00:17
  • 1
    Multiple separate things: how does it vary a) over that family of parts (i.e. manufacturing variation, tolerance) and b) for that specific part (due to voltage and temperature). And c) how does it vary for that specific part, over time (ageing) – smci Mar 19 '22 at 02:19

3 Answers

22

The propagation delay can vary with manufacturing conditions as well as operating conditions, and temperature will almost certainly play a role. The point, however, is that you can't control the propagation delay. You are only guaranteed (assuming an honest manufacturer) that the propagation delay will be within the specified limits if the chip is operated within its specified limits. You are not guaranteed anything more.

I don't really know how to handle that range, and it's likely to matter for my application.

If this is a one-off DIY project, then you can try your circuit, and if it works to your satisfaction, great. If it does not, you can try another chip or tweak some component values.

However, if you plan to make multiple circuits, or if you need guaranteed reliability, then I strongly recommend that you redesign your circuit so that propagation delays (within the specified limits) will not affect its correct operation. Even if the circuit works correctly with one batch of chips, it may fail with another batch. Or it might work in someone's cool basement but fail when placed in a hot attic. Or any of a variety of other situations. Except for one-off DIY projects that you don't mind tweaking, always design so that the circuit will work correctly even if the components happen to be at the limits of their specified tolerances.
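To make that concrete, here is a minimal sketch (Python; the 0.7 ns / 3.1 ns figures are the 1.2 V values quoted in the question, while the clock period and setup/hold numbers are illustrative assumptions, and the check is deliberately simplified) of what designing for the limits means: the slowest corner must still meet setup, and the fastest corner must still meet hold.

```python
# Simplified worst-case timing check: a design is sound only if it works
# with every propagation delay anywhere inside the specified min/max window.
# 0.7/3.1 ns are the 1.2 V figures from the question; everything else
# (clock period, setup/hold times) is an assumed, illustrative number.

TPD_MIN_NS = 0.7   # datasheet minimum propagation delay
TPD_MAX_NS = 3.1   # datasheet maximum propagation delay

def meets_timing(clock_period_ns, setup_ns, hold_ns, n_gates):
    """Check an n-gate combinational path at both delay corners."""
    worst_path = n_gates * TPD_MAX_NS   # slow corner: setup (max-delay) check
    best_path = n_gates * TPD_MIN_NS    # fast corner: hold (min-delay) check
    setup_ok = worst_path + setup_ns <= clock_period_ns
    hold_ok = best_path >= hold_ns
    return setup_ok and hold_ok

# Assumed 10 ns clock with 1 ns setup and 0.5 ns hold requirements:
print(meets_timing(10.0, setup_ns=1.0, hold_ns=0.5, n_gates=2))  # True
print(meets_timing(10.0, setup_ns=1.0, hold_ns=0.5, n_gates=3))  # False: 3*3.1+1 > 10
```

A design that only passes with "typical" delays plugged in is exactly the kind that works with one batch of chips and fails with another.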

Math Keeps Me Busy
  • Are you even guaranteed anything for **sure**? Is it impossible that very rarely the propagation delay will fall outside the stipulated interval even when under optimal operating conditions? I had the impression that logic gates might (rarely) fall into a metastable state, and stipulated operating conditions merely reduce that risk to a very small (but still nonzero) likelihood. – user21820 Mar 17 '22 at 09:10
  • 2
    @user21820: Metastability is really only a thing in flip flops when you have a timing violation on the input. Which kind of means you are operating outside of specification and there are no guarantees on the output level/voltage. – Michael Mar 17 '22 at 12:31
  • @user21820: The propagation time minimum is specified from the time the inputs leave a valid state, and the propagation time maximum is measured from the time the inputs re-enter a valid state. In some cases, valid input conditions depend upon a device's current state, and an input condition which would have been valid before a partial transition may cease to be valid. For example, suppose a Schmitt trigger input is specified to register a change from low to high somewhere between 3 and 4 volts, and from high to low somewhere between 1 and 2 volts. If the input goes from 0.5V (valid low) – supercat Mar 17 '22 at 14:54
  • to 2.5V and sits there, that would be a valid logic level. If it then goes up to 3.01 volts for 1ns and back to 2.1 volts, and then starts ramping up to 4.2 volts, and its propagation delay was specified as 5ns to 10ns, its output would be guaranteed not to switch within the first 5ns of the first time the voltage reached 3.0 volts, and would stabilize within 10ns of the last time the voltage crossed the 4.0 volt threshold if it remained high, but could arbitrarily switch any time between those moments (see the sketch after this comment thread). – supercat Mar 17 '22 at 14:57
  • @supercat: Thanks for the detailed comment, which is helpful. From Michael's and your comments, I gather that "metastability" is absolutely never a concern here. However, I am still curious as to whether there is a possibility of failure to obey the stipulated propagation delay bounds on the order of 1/10^18. My reason for asking is that a single CPU can have billions of MOSFET gates, and within 1 s we would have on the order of 10^9 gate delays, so even a single failure per 10^18 gate operations would actually be observable. – user21820 Mar 18 '22 at 12:06
  • @user21820: Think about a pin at a bowling alley. If a ball hits the pin squarely, it will be obvious within a very small fraction of a second that the pin has been knocked over. If a pin isn't hit at all, it will be obvious that it's going nowhere. If a pin is grazed, however, it might wobble for a while before it ends up either falling down or returning to equilibrium upright. Reliable digital designs need to, whenever practical, ensure that no pin will ever be "grazed", and when that isn't practical they need to include some other means of (possibly artificially) resolving such issues. – supercat Mar 18 '22 at 14:44
  • @user21820: At an actual bowling alley, this would typically be done by having a pin-sweeper activate a couple of seconds after the ball reaches the back of the lane, and then making a judgment as to whether the pin fell before the sweeper hit it. It's possible that there might be a close call where it looks as though a pin might have fallen just as the pin sweeper hit it, but that's far less likely than that a pin might wobble for two seconds. Further, if one wanted to add an extra level of certainty, one could say that if the pin judge rules within two seconds of the sweeper trigger... – supercat Mar 18 '22 at 14:47
  • ...that the pin is down, then the pin scores. If a judge would take more than two seconds to recognize that the pin should score, it doesn't. While a judge might theoretically take exactly two seconds to rule after a pin took almost exactly two seconds to fall, that would be exceptionally improbable. – supercat Mar 18 '22 at 14:48
  • @supercat: Sorry, but your last few comments are not addressing my question at all. It is obvious to anyone that in designing circuits one would have to take into account potential failures. I asked you whether or not there **can be failures on the order of 1/10^18**, because of the sheer number of gates on a single CPU. Can you answer that or not? – user21820 Mar 18 '22 at 15:01
  • @user21820 Parts falling outside of the datasheet specs are a QC issue, not a design issue. QC is always needed, either with outgoing inspection agreements from the supplier or incoming inspection by the manufacturer. – J... Mar 18 '22 at 15:26
  • @J...: I don't think you [] understood the statistical issue here. QC **cannot detect** failure on the order of 1/10^18. [Edited by a moderator.] – user21820 Mar 18 '22 at 15:37
  • @user21820 If your circuit will fail when that order of error is present then obviously you can detect that failure or you wouldn't be concerned about it. – J... Mar 18 '22 at 15:38
  • @J...: [] I was asking about the logic gates mentioned in this question, which are **claimed** to satisfy certain guarantees. Nobody has provided a single bit of evidence that there are no failures on the order of 1/10^18, and QC cannot establish that. My reasons for asking are: (1) I believe such failure is possible; (2) it is not an idle inquiry, since CPUs have billions of MOSFET gates (which are **clearly not** the ones in the question), so I want to know whether people **know** that there is a microscopically small failure probability. [Edited by a moderator.] – user21820 Mar 18 '22 at 15:45
  • @user21820: Ah, so you were wondering about non-metastability-related failures rather than e.g. a potential failure of a double-synchronizer to guarantee propagation delays. In a properly designed, specified, and manufactured part, the kinds of failures you're talking about "just don't happen". The probability of occurrence is so much smaller than the probability of other catastrophic failures (e.g. being struck by a meteorite) that it's not worth investing any engineering effort into addressing them. – supercat Mar 18 '22 at 16:36
  • @supercat: Yes, in practice we don't care whether a consumer-grade TI logic gate fails at such a minuscule rate. But I was just asking whether we know in theory how such failures occur. Is it merely exponential decay with increasing time? Or super-exponential? Why ask? Because such issues for a macroscopic logic gate may also show up for microscopic gates in a CPU. If I don't ask, it's like saying let's not care because it won't happen to me. – user21820 Mar 18 '22 at 16:44
  • 1
    No, you are not guaranteed anything. There is going to be a failure rate of so many parts per thousand or million just due to the manufacturing process, if nothing else. You validate the design, and you build wafers that intentionally lean fast and slow, and you exercise the parts from those wafers. You also have experience from that process: companies ABC and DEF may have run millions of parts before XYZ came along and decided to make parts at that foundry with that process. So some of it is design, some is experience, some is testing. And some parts slip through – old_timer Mar 20 '22 at 00:45
  • 1
    You do not test every part to destruction, nor test them to the point of damage. You need to get each part/wafer through the fixture fast, so you do what you can and expect that there will be some that fall through. Based on failure analysis of the ones that fall through that you can analyze, you may or may not choose to add more tests, add an errata, change the documentation, or just do nothing... – old_timer Mar 20 '22 at 00:47
  • But the folks that buy parts and put them on their boards go through all the same things: design, test the design, and as volumes increase, testing decreases; they tolerate failures in the field and returns. If you knowingly have a part that has specifications and your design is flawed because you did not honor those specifications, that is on you, not the chip folks. – old_timer Mar 20 '22 at 00:48
  • @user21820 Yes, obviously manufacturers are acutely aware of the small probabilities of bad components. The problem is formally managed under the umbrella of [Acceptable Quality Limits (AQL)](https://en.wikipedia.org/wiki/Acceptable_quality_limit). I'll say again - this is a QC issue. Depending on the reliability requirements of the product it can either be eliminated before shipping with stricter QC or you simply RMA the few that do escape to customers and end up not working. – J... Mar 21 '22 at 13:14
  • @old_timer: Thanks for your comment. You are the first in this thread who has directly said "no you are not guaranteed anything for sure", instead of beating around the bush. That is, the above post is incorrect because, even assuming an honest manufacturer, we are **not** guaranteed that the propagation delay will be within the limits specified if the chip is operated under the limits specified. – user21820 Mar 21 '22 at 14:19
  • You also raised a relevant point that even honest manufacturing and testing will still yield a measurable nonzero failure rate (because among other things we do not test to the point of damage). That's also important to know, and in my opinion should be part of the correct answer. So thank you. – user21820 Mar 21 '22 at 14:23
  • @J...: [] I was asking about a chip in **perfectly good condition according to specifications**, and whether it could fail. So far, nobody has actually answered that question. *old_timer* gets close, but focuses on the fact that we cannot catch all failures in reasonable QC testing. Nobody has addressed the possibility that a chip is physically exactly as designed and yet can fail extremely rarely. And nobody has addressed my query about the failure rate in this case: *Is it merely exponential decay with increasing time? Or super-exponential?* [Edited by a moderator.] – user21820 Mar 21 '22 at 14:27
  • @user21820 You are not guaranteed in the sense of a physical law, but you are guaranteed in the sense of legal liability. If a manufacturer sells you parts that do not meet advertised or contractual specifications, then those parts are _defective_ from a legal point of view, and you may be able to recover damages from the manufacturer. Honesty will not necessarily prevent a supplier from selling defective parts, because defective parts can slip through the cracks. However, there are dishonest suppliers who sell defective and/or counterfeit parts as a regular business practice. – Math Keeps Me Busy Mar 21 '22 at 14:30
  • @MathKeepsMeBusy: Um, that makes a big difference in meaning; I suggest you clarify your post if you really meant a **legal** rather than **physical** guarantee. Thanks. – user21820 Mar 21 '22 at 14:32
  • @user21820 No supplier can _physically_ guarantee that none of their parts are defective. All they can do is make a promise that has legal weight, and they can take whatever steps they deem appropriate to ensure their promise is fulfilled. To me, that seems so obvious that it does not generally require explanation. – Math Keeps Me Busy Mar 21 '22 at 14:37
  • @All - This comment thread deteriorated into personal attacks and snarky comments. Some comments have therefore been deleted. It's impossible to please everyone when drawing a line, sorry, but a line needed to be drawn so I did. || If there is a new question then it should be asked separately (referencing this one, if appropriate) since asking a new question is not an allowed use of comments. || Also remember that no-one *has to* answer a question in a comment. And repeating such questions (especially with added snark) may constitute harassment. Remember: [Be Nice](/help/conduct). Thanks. – SamGibson Mar 21 '22 at 17:37
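To put supercat's switching-window rule from the comments above in concrete terms, here is a small sketch (Python; the 5 ns / 10 ns delays and the 3 V / 4 V threshold scenario are the ones used in the comments, the function name is my own):

```python
# Sketch of the switching-uncertainty window described above: the output is
# guaranteed NOT to have switched before tpd_min after the first moment the
# input could have registered high, and guaranteed to have settled by
# tpd_max after the last moment the input crossed the upper threshold.
# Between those two instants it may switch at any time.

TPD_MIN_NS = 5.0    # figures from the comment scenario
TPD_MAX_NS = 10.0

def switching_window(first_low_threshold_cross_ns, last_high_threshold_cross_ns):
    """Return (earliest, latest) times at which the output may switch."""
    earliest = first_low_threshold_cross_ns + TPD_MIN_NS
    latest = last_high_threshold_cross_ns + TPD_MAX_NS
    return earliest, latest

# Input first touches 3.0 V at t = 0 ns and last crosses 4.0 V at t = 7 ns:
print(switching_window(0.0, 7.0))   # (5.0, 17.0) -> 12 ns of uncertainty
```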
5

Is it just basically random each time?

No - from microsecond to microsecond it will be fairly stable. Over longer periods it will drift, especially with temperature.

Does it depend on temperature?

Yes, very much so. Most digital logic is a temperature sensor whether you want it to be or not.
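As a purely illustrative sketch of that dependence (Python; the 1.9 ns baseline and the +0.2 %/°C coefficient are assumptions for illustration, not figures from this or any datasheet):

```python
# Illustrative linear model of gate delay vs. die temperature.
# Both constants below are assumed, order-of-magnitude numbers.

TPD_25C_NS = 1.9       # assumed "typical" delay at 25 degC
TEMPCO_PER_C = 0.002   # assumed +0.2 % delay per degC

def tpd_at(temp_c):
    """Estimated propagation delay at a given die temperature."""
    return TPD_25C_NS * (1 + TEMPCO_PER_C * (temp_c - 25))

for t in (-40, 25, 85, 125):
    print(f"{t:>4} degC: {tpd_at(t):.2f} ns")
```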

High->low vs low->high?

Some of the better data sheets quote them separately. They are not guaranteed to be the same, although they will move together.
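To see why that distinction matters, here is a quick sketch (Python; the particular tPLH/tPHL split is an assumption for illustration, not from the datasheet) of how unequal edge delays reshape a pulse:

```python
# If the low->high and high->low delays differ, a pulse changes width as it
# passes through a gate: through one non-inverting stage, a positive pulse
# of width W comes out with width W + tPHL - tPLH. Values below are assumed.

T_PLH_NS = 2.0   # assumed low->high propagation delay
T_PHL_NS = 1.4   # assumed high->low propagation delay

def output_pulse_width(input_width_ns):
    """Width of a positive pulse after one non-inverting stage."""
    # The rising edge is delayed by tPLH, the falling edge by tPHL.
    return input_width_ns + T_PHL_NS - T_PLH_NS

print(output_pulse_width(10.0))  # 9.4 -- the pulse shrank by 0.6 ns
```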

I don't really know how to handle that range, and it's likely to matter for my application.

You haven't told us what the application is, but generally you have to "design it out" so that the variance in delay doesn't matter.

While the different devices in a package won't have exactly the same delay, it will be very close, and they will tend to move together because they're all the same temperature inside the package. Does that make your life easier?
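For a rough sense of scale, the worst-case skew between two gates bound only by the datasheet limits is simply max minus min (sketch below in Python, using the 1.2 V figures from the question); same-package devices will track far more tightly, though the datasheet itself puts no tighter number on it.

```python
# Worst-case skew between two gates constrained only by the datasheet:
# one part could sit at the minimum delay while another sits at the maximum.
# Figures are the 1.2 V numbers quoted in the question.

TPD_MIN_NS = 0.7
TPD_MAX_NS = 3.1

worst_case_skew_ns = TPD_MAX_NS - TPD_MIN_NS
print(f"worst-case gate-to-gate skew: {worst_case_skew_ns:.1f} ns")  # 2.4 ns
```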

pjc50
-1

Most logic gates use CMOS technology, and the MOSFETs inside the gate have gate-source capacitance that each driving stage must charge and discharge through a finite resistance. That is why there is a delay.
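For what it's worth, a back-of-envelope RC estimate (Python; the on-resistance and load capacitance are assumed round numbers, not datasheet values) shows how charging a capacitance through a finite resistance produces delays of roughly the order the datasheet quotes:

```python
# Back-of-envelope: a CMOS output charges the next stage's input capacitance
# through its on-resistance, so the delay scales roughly like
# t ~ ln(2) * R * C (time for an RC exponential to cross the 50% point).
# Both values below are assumed for illustration.

import math

R_ON_OHMS = 500.0    # assumed driver on-resistance
C_LOAD_F = 5e-12     # assumed load (gate + wiring) capacitance, 5 pF

t_50 = math.log(2) * R_ON_OHMS * C_LOAD_F
print(f"~{t_50 * 1e9:.2f} ns")   # ~1.73 ns -- same order as the 0.7-3.1 ns spec
```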

Jun Seo-He