
[Schematic: current source built from a REF102 voltage reference, an OPA277 buffer, and a current-setting resistor R]

I would like to make a current source from a voltage reference chip (REF102). An OPA277 is used as a buffer to pin the voltage at pin 4 to the bottom of the current-setting resistor R. Hypothetically, if the reference chip outputs a voltage with a 2% error and I use a 1% tolerance resistor to set the current, what would be the typical error of the current delivered to the load?

I know that for uncorrelated error sources we can use the root-sum-of-squares (RSS) tolerance method to calculate the total statistical error, but in this application the current is derived directly from the reference voltage, so the voltage and current are correlated. How does one go about calculating the typical error in this case?
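For context, if the reference error and the resistor tolerance were simply two independent contributions to I = V_REF / R, I would apply first-order error propagation and combine the fractional errors in quadrature, roughly:

$$\frac{\Delta I}{I} \approx \sqrt{\left(\frac{\Delta V}{V}\right)^2 + \left(\frac{\Delta R}{R}\right)^2} = \sqrt{0.02^2 + 0.01^2} \approx 2.2\%$$

but I am not sure whether that is still valid given the dependence of the current on the voltage, which is what prompted this question.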

asked by Efanatic
  • Are you asking for the min and max error, or the statistical distribution of error? For the latter you could probably assume truncated normal distributions for the resistor and reference, but you would need the statistical data from the manufacturer to know the std deviation and limits. You can probably safely assume that the "typical" value is centered around the nominal reference voltage and nominal resistance. – John D Dec 23 '22 at 20:22
  • Let's say I have the statistical distribution of error for both the resistor and the reference. To combine the standard deviations, would I just use the square root of the sum of the squares? I am guessing yes, because the error distributions are uncorrelated. – Efanatic Dec 23 '22 at 20:45
  • https://en.wikipedia.org/wiki/Ratio_distribution – John D Dec 24 '22 at 02:51
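
Following up on John D's suggestion above, here is a minimal Monte Carlo sketch of how I imagine checking this numerically. The nominal 10 V output, the 1 kΩ current-setting resistor, and the assumption that the tolerance limits sit at 3 sigma of a truncated normal are my own placeholders, not datasheet values:

```python
import numpy as np
from scipy.stats import truncnorm

# Assumed numbers (not from the question or a datasheet):
V_NOM = 10.0      # REF102 nominal output voltage [V]
V_TOL = 0.02      # 2% reference error, taken as the truncation limit
R_NOM = 1e3       # hypothetical current-setting resistor [ohm]
R_TOL = 0.01      # 1% resistor tolerance, taken as the truncation limit
K = 3.0           # assume the tolerance limit sits at 3 standard deviations
N = 1_000_000     # number of simulated units

rng = np.random.default_rng(0)

def sample_truncated(nominal, tol, n):
    # Truncated normal centred on the nominal value, cut off at +/- tol,
    # with sigma chosen so the cutoff falls at K standard deviations.
    sigma = nominal * tol / K
    return truncnorm.rvs(-K, K, loc=nominal, scale=sigma, size=n, random_state=rng)

v = sample_truncated(V_NOM, V_TOL, N)   # reference voltage per simulated unit
r = sample_truncated(R_NOM, R_TOL, N)   # resistor value per simulated unit
i = v / r                               # resulting load current per unit

err = (i - V_NOM / R_NOM) / (V_NOM / R_NOM)   # fractional current error
print(f"standard deviation of current error: {err.std():.3%}")
print(f"99% of simulated units within: +/-{np.percentile(np.abs(err), 99):.3%}")
```

The histogram of err is essentially the ratio distribution John D links to; with relative spreads this small, its standard deviation should agree closely with the quadrature (RSS) estimate above.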

0 Answers