
I need a way to predict a delta-temperature curve based on the previous trend. I do a lot of heat-rise testing on electrical conductors, and I would like to be able to at least get an approximation of the curve before the test is done.

For example, if I already have the data points in this curve before the line, I want to be able to predict the rest after the line. It doesn't have to be that accurate; if I am within 10% that would be fine.

[figure: measured temperature curve, with a line marking where prediction should begin]


ETA: I'd like to shed some more light on this question. I think part of the reason an exponential decay doesn't fit my graph is that the ambient temperature rose over time. Below, I added ambient to my graph and created another that shows only the rise over ambient. Disregard the large dip in rise over ambient after it stabilized: this data comes from a test that was done outside, so wind and/or sun/shade could account for that.

[figure: Sample and Ambient Temperatures]
[figure: Rise over Ambient]

TheColonel26
  • I've asked a lot of people this very question. It never occurred to me ask here. You get the vote. – gbarry Apr 19 '15 at 01:38
  • Found a related question at http://electronics.stackexchange.com/questions/33106/predicting-thermal-equilibrium-whats-wrong-with-my-time-constant – gbarry Apr 22 '15 at 21:38

4 Answers


You've already been given good answers, so I won't expand on those. I'll just point out something no one seems to have mentioned: extrapolating a function can be risky if you don't have some theoretical justification guaranteeing that the extrapolated curve follows the model you have chosen. In other words, you must be sure in advance that the "shape" of the actual curve (noise aside) matches that of the model; otherwise you should check that the results make sense. From what you say, it seems your systems all follow the same pattern (some sort of thermal "RC" equivalent circuit with an exponentially flattening response, like the charging of a capacitor), so you should be safe. If you deal with systems or test setups that exhibit a different pattern, be aware that you will have to change your model.

  • +1 In particular, the curve will likely be significantly different from a simple exponential if there are multiple time constants that are close to each other. – Spehro Pefhany Apr 19 '15 at 13:54
  • Very good point, and this is the reason why the green curve in my answer doesn't predict the later behavior well. – sweber Apr 19 '15 at 15:18

As Majenko said, curve fitting using least squares is an appropriate solution. However, you must pick an appropriate function to fit to. A straight-line fit is not appropriate here for long-term predictions, though it is good enough for short-term ones.

A more appropriate function is an exponential decay function: \begin{align*} T = a - b e^{-c t} \end{align*}

This is a non-linear function and thus requires some sort of iterative process to find the fitting parameters a, b, and c. However, it's not too hard to do. The basic idea is to combine a Newton-Raphson root-finding algorithm with a least-squares algorithm. Luckily, there are several pre-existing implementations. If you're interested in the mathematical derivation, see here (though it does get quite technical).

Here is an implementation using SciPy's leastsq function. I use an explicit Jacobian matrix because, for some reason, the estimated Jacobian gives bonkers results.

import numpy as np
from scipy.optimize import leastsq

def exp_decay(args, t):
    '''
    args: list-like with the 3 curve fitting parameters a,b,c
    t: times
    '''
    a,b,c = args
    return a - b*np.exp(-c*t)

def jac(args, T, t):
    a,b,c=args
    res = np.zeros([3,len(t)])
    res[0,:] = -1
    res[1,:] = np.exp(-c*t)
    res[2,:] = -b*t*np.exp(-c*t)
    return res

def find_coeffs(T, t):
    '''
    helper for finding the fitting coefficients
    '''
    resid = lambda args,T,t: T-exp_decay(args, t)
    return leastsq(resid, np.array([1,1,1e-9]), args=(T,t), Dfun=jac, col_deriv=True)[0]

# some dummy data to demonstrate usage
T = np.array([-5,0,5,7,9,12.5,15,17])
t = np.array([1,500,935,1402,1869,2803,3737,4600])

coeffs = find_coeffs(T,t)

# plot the results to show the fit
t_p = np.linspace(1,25000,25000)
from matplotlib.pyplot import *
plot(t_p, exp_decay(coeffs, t_p), label='exponential decay fit')
plot(t,T,'o')
grid(True)
show()
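For comparison, the same model can also be fit with SciPy's curve_fit, which wraps the least-squares machinery and estimates the Jacobian numerically. This is only a sketch on the same dummy data; the p0 starting guess is my own assumption, not something from the code above:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, a, b, c):
    # same model as above: T = a - b*exp(-c*t)
    return a - b * np.exp(-c * t)

# same dummy data as in the leastsq example
T = np.array([-5, 0, 5, 7, 9, 12.5, 15, 17])
t = np.array([1, 500, 935, 1402, 1869, 2803, 3737, 4600])

# p0 is a rough starting guess; curve_fit needs a sensible one to converge here
popt, pcov = curve_fit(exp_decay, t, T, p0=(20.0, 25.0, 1e-4))
a, b, c = popt
```

curve_fit also returns the covariance matrix pcov, whose diagonal gives rough variance estimates for a, b, and c.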

To demonstrate why using an appropriate function matters, I pulled a few rough data points from your plot (marked in green) and fit both a line and the exponential decay function to them. As you can see, the line fit gets the answer roughly right for a very short period, but deviates a lot over longer times. The exponential decay fit is much closer, and adding more data points would give a better final answer.

[figure: line fit vs. exponential decay fit, with the green source points]

helloworld922

I think what you are after is the "Least Squares" algorithm.

I use it in my Average library for Arduino for predicting the temperature rise of my reflow oven to account for the latent heat of the elements so it can turn off in time.

This is a snippet from that template-based library:

template <class T> void Average<T>::leastSquares(float &m, float &c, float &r) {
    float   sumx = 0.0;                        /* sum of x                      */
    float   sumx2 = 0.0;                       /* sum of x**2                   */
    float   sumxy = 0.0;                       /* sum of x * y                  */
    float   sumy = 0.0;                        /* sum of y                      */
    float   sumy2 = 0.0;                       /* sum of y**2                   */

    for (int i=0;i<_count;i++)   {
        sumx  += i;
        sumx2 += sqr(i);
        sumxy += i * _store[i];
        sumy  += _store[i];
        sumy2 += sqr(_store[i]);
    }

    float denom = (_count * sumx2 - sqr(sumx));
    if (denom == 0) {
        // singular matrix. can't solve the problem.
        m = 0;
        c = 0;
        r = 0;
        return;
    }

    m = 0 - (_count * sumxy  -  sumx * sumy) / denom;
    c = (sumy * sumx2  -  sumx * sumxy) / denom;
    r = (sumxy - sumx * sumy / _count) / sqrt((sumx2 - sqr(sumx)/_count) * (sumy2 - sqr(sumy)/_count));
}

template <class T> T Average<T>::predict(int x) {
    float m, c, r;
    leastSquares(m, c, r); // y = mx + c;

    T y = m * x + c;
    return y;
}

The first function takes an array of data (in _store[]) and fits a line of the form y = mx + c. It gives you the m and c (as well as r, the correlation coefficient, which tells you how well the line fits the data). The second function then just uses that formula to predict what the value will be at some point in the future.
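For readers outside the Arduino ecosystem, the same fit-a-line-and-extrapolate idea can be sketched in a few lines of NumPy. The readings below are made up; note that this sketch uses the standard slope sign, whereas the library above negates m, presumably because of how its sample buffer is ordered:

```python
import numpy as np

# made-up recent temperature readings, one per sample interval
store = np.array([20.0, 22.5, 24.8, 27.1, 29.6])
x = np.arange(len(store))

# least-squares line y = m*x + c, like leastSquares() above
m, c = np.polyfit(x, store, 1)

def predict(x_future):
    # extrapolate the fitted line, like predict() above
    return m * x_future + c

ahead = predict(len(store) + 1)  # two intervals past the last reading
```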

Majenko

While I generally agree with the other answers, the methods used to achieve the result are way too complicated. Use the right tool for the job, and you're done in a few lines that concentrate on the job itself. For example, gnuplot:

t(x)=a-b*exp(c*x)
a=25
b=30
c=0.00001
fit t(x) "temperature" via a, b, c
plot "temperature", t(x)

Here, temperature is a simple text file containing time and temperature in two columns.
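For illustration, the temperature file could look like this (the values are made up; gnuplot reads whitespace-separated columns and skips lines starting with #):

```
# time_s   temp_C
0         -4.8
500        0.1
1000       5.3
2000      10.7
4000      16.2
8000      23.5
```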

The initial values are estimated as follows; after the fit command, the variables hold the final fitted values:

a is the final temperature, since the exponential term goes to zero for large x
b is the total change in temperature
c is just a guess

As result, I also get this print-out:

Final set of parameters            Asymptotic Standard Error
=======================            ==========================

a               = 27.7729          +/- 0.2875       (1.035%)
b               = 30.9734          +/- 0.6285       (2.029%)
c               = -0.000225597     +/- 1.115e-05    (4.941%)

If I only take into account data in the xrange 0...4500, I get:

Final set of parameters            Asymptotic Standard Error
=======================            ==========================

a               = 19.192           +/- 0.4385       (2.285%)
b               = 25.5059          +/- 0.3719       (1.458%)
c               = -0.00051065      +/- 2.074e-05    (4.061%)

Here is a plot of the result. (I used WebPlotDigitizer at http://arohatgi.info/WebPlotDigitizer/ to get some data from your plot)

[figure: gnuplot fits (green: x-range 0…4500 only, blue: entire data) over the digitized data]


As you can see, the formula given by @helloworld922 fits the data very well if you look only at the x-range 0...4500 (green curve). But afterwards, the data rises to about 27°C, while that fit says the final temperature is 19°C.

If you use the entire data set, the (blue) curve roughly fits the data overall, but not in detail, especially not in the x-range <4500.

The difference in the final temperatures is ~8.5K.

Does "10%" make sense on a Celsius scale?

The Celsius scale is arbitrary; its origin (0°C) is not any kind of physical zero. Imagine you inherit a bank account and don't know the initial balance, nor will you ever learn the actual balance. So you just say you started with $0.00 and keep track of what you pay in and take out. One day, you know there should be $50 more in the account than when you inherited it; another day, it's $100. Is the balance on the second day twice the balance on the first? What if your grandma left you $1,000,000 in that account, and you just don't know? Clearly, this does not make sense. Percentages only make sense on the Kelvin scale, where 0 K is the lowest physically possible temperature. In this context, 8.5 K relative to ~300 K (room temperature) is about 2.8%...
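The closing arithmetic is easy to check:

```python
# an 8.5 K prediction error, relative to absolute room temperature (~300 K)
error_k = 8.5
room_k = 300.0
percent = error_k / room_k * 100.0  # roughly 2.8 %
```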

As said, the given formula does not describe the entire data set very well; in particular, fitting just the first part of the data does not give a good prediction. You need another formula that fits the whole data range well. That would be a formula consisting of more than one exponential function. (Physically, this would describe heating some material which, at some point, starts to dissipate heat somewhere else.)
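As a sketch of what such a two-time-constant model might look like, here is a fit of a sum of two exponentials with SciPy. All parameter values and the data are synthetic, made up for illustration; nothing here is fitted to the OP's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a, b1, c1, b2, c2):
    # two thermal time constants: fast local heating plus slow heat-sinking
    return a - b1 * np.exp(-c1 * t) - b2 * np.exp(-c2 * t)

# synthetic "measurement" generated from known (made-up) parameters plus noise
t = np.linspace(0, 20000, 200)
true_params = (28.0, 20.0, 1e-3, 12.0, 1e-4)
T = two_exp(t, *true_params) + np.random.default_rng(0).normal(0, 0.1, t.size)

# starting guesses matter a lot when fitting two exponentials
p0 = (25.0, 15.0, 5e-4, 10.0, 5e-5)
popt, _ = curve_fit(two_exp, t, T, p0=p0, maxfev=10000)
```

Two-exponential fits are notoriously ill-conditioned: if the two time constants are close, many parameter combinations fit almost equally well, which echoes the caveat in the top answer.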

But: since the single-exponential formula describes the first part of the data very well (so there is no deviation carrying extra information), a more complex function would still fail to give you a better prediction from the early data alone.

sweber
  • A percentage makes perfect sense as long as one defines it as celsius *difference* and not an absolute scale. I doubt if the OP is trying to measure some thermodynamic variable, so just shift the origin of the scale to some convenient and meaningful value such as the initial temperature. It looks like there are two different thermal time constants here, which is not surprising given the physical model. If one knows these, it is indeed possible to get a better prediction. – Oleksandr R. Apr 19 '15 at 15:58