
I have a system that's sampling two 50Hz sine waves at 1kHz. What's the best way to estimate the phase shift between the two?

The measurements are somewhat noisy and the sampling period is not very even. I know the sample times to a high resolution but can't control them to be evenly spaced. The phase shift needs to be estimated on chunks of data covering approximately one second.

The measurement noise makes the zero-crossing method unreliable; if the noise causes a measured signal to cross zero twice at one "real" zero-crossing then the algorithm breaks down somewhat.

At present I'm doing something like this (python code):

import numpy

t, v, a = ...  # NumPy arrays: sample times, voltage samples, current samples

p_inst = v * a                                # instantaneous product
s = numpy.sign(p_inst)                        # +1 where the signs agree, -1 where they differ
s_avg = numpy.trapz(s, x=t) / (t[-1] - t[0])  # time-average of the sign
shift_fraction = s_avg / 2 + 0.5              # fraction of the time the signs agree
shift_angle = (1 - shift_fraction) * numpy.pi

Here s is the sign of the product of the two sine waves. Finding the average of this using numpy.trapz gives a value representing how often the two signals have the same sign, with 1 meaning "all of the time", -1 meaning "none of the time" and zero meaning "half of the time". This is converted to a fraction of the time the signals have the same sign (shift_fraction) and this is converted to a phase-shift angle.

The biggest problem with this is that it doesn't tell you whether the phase difference is leading or lagging.

Does anyone have a better method?

Edit For those suggesting finding the correlation peak, am I right in thinking that, with 20 samples per cycle, that method will necessarily limit the resolution of the estimate to 18 degrees? What would you think of instead calculating numpy.correlate(v, -a), finding the first zero crossing and then interpolating to estimate the actual zero point? I think that, for sinusoidal signals, the correlation should also be sinusoidal, and so the small angle approximation should make the interpolation reasonably good.

Can anyone compare the correlation and demodulation methods?

Further edit And, for those giving the demodulation method, am I right in thinking that if I don't care about the actual phase difference, just the power factor, then I can do this:

s = numpy.sin(t * 2 * numpy.pi * f)
c = numpy.cos(t * 2 * numpy.pi * f)
a_s_d = s * a
v_s_d = s * v
a_c_d = c * a
v_c_d = c * v
s_a = numpy.trapz(a_s_d, t)
c_a = numpy.trapz(a_c_d, t)
s_v = numpy.trapz(v_s_d, t)
c_v = numpy.trapz(v_c_d, t)
# cos ( a - b ) = cos a * cos b + sin a * sin b
power_factor = c_a * c_v + s_a * s_v

Or can I not rely on 0 < s_a, c_a, s_v, c_v < 1?

Tom
  • Have you considered taking a cross-correlation of the two signals, using `numpy`'s [`correlate`](https://docs.scipy.org/doc/numpy-1.12.0/reference/generated/numpy.correlate.html) or a library such as `scipy` (i.e., `scipy.signal.signaltools.correlate(A, B)`)? The time-shift of the peak of the cross-correlation captures the phase difference. There's a [good answer](https://stackoverflow.com/a/6157997/5209610) on StackOverflow that discusses this (it's more complex though). – Vladislav Martin Jun 15 '17 at 14:31
  • cross correlation, but you will get peaks every 2*PI if signals are sine. – Marko Buršič Jun 15 '17 at 15:11
  • Why is this tagged "power factor correction"? Power factor correction has to do with high voltage lines and not sampling. –  Jun 15 '17 at 15:46
  • @KingDuken, he's estimating the phase difference between voltage and current waveforms (based on the names of the variables in the pseudo-code). Probably for the purpose of estimating the power factor. – The Photon Jun 15 '17 at 16:11
  • Filtering could help with the noise. Maybe zero crossing would then be more reliable. – user57037 Jun 16 '17 at 06:27
  • @MarkoBuršič You're right. However, given two finite sinusoids, the tallest of those peaks will correspond to (0 <= time-shift < sine's period). The rest of those peaks will degrade with every additional (2 * pi) phase-shift. – Vladislav Martin Jun 16 '17 at 17:16
  • If you are trying to estimate PF then look at my answer here: https://electronics.stackexchange.com/questions/76213/how-to-sense-the-zero-crossing-of-current - if you are trying to estimate power then check this out too: https://electronics.stackexchange.com/questions/202667/whats-the-most-economical-way-to-digitally-measure-240v-mains-voltage-current and this: https://electronics.stackexchange.com/questions/82188/how-is-the-power-factor-part-involved-in-wattmeter-reading – Andy aka Jun 16 '17 at 17:58

6 Answers


The easy-to-follow but CPU-intensive way

If you want something whose operation is obvious and simple to understand, and you don't mind throwing lots of CPU cycles at it (it sounds like you don't, if you're using Python), then you could just fit sine waves to the two signals and read the phase out from the fit function.

You'd probably want to use scipy.optimize.curve_fit() to fit something like \$V=A \times \sin \left(2 \pi f t+\phi \right)\$ with \$\phi\$ as the only free parameter. This could converge on several different values of \$\phi\$, so take it modulo \$2\pi\$, then do the same for the current and subtract the two values of \$\phi\$, taking care to add or subtract \$2\pi\$ to get it in the meaningful range.

This will be much, much more CPU intensive than filtering, or doing realtime homodyne detection, but it's easy to follow, only a few lines long, and CPU cycles are cheap.
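Roughly, it could look like the following untested sketch, assuming `t`, `v`, `a` and the 50 Hz frequency from the question (I also leave the amplitude free rather than fixing it, a small departure from fitting \$\phi\$ alone):

import numpy as np
from scipy.optimize import curve_fit

def fit_phase(t, x, f=50.0):
    # Model: A * sin(2*pi*f*t + phi), frequency fixed, amplitude and phase free
    model = lambda tt, amp, phi: amp * np.sin(2 * np.pi * f * tt + phi)
    popt, _ = curve_fit(model, t, x, p0=[np.max(np.abs(x)), 0.0])
    return popt[1] % (2 * np.pi)

phase_difference = fit_phase(t, v) - fit_phase(t, a)
phase_difference = (phase_difference + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)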

If you can't afford those CPU cycles

Then you might want to try generating two quadrature sine waves in software, then using them to do IQ demodulation of the two signals. Then calculate the phases and subtract. This will not be as easy to follow, but could be made very fast (even in python) and will be very, very accurate.

Take a chunk of data which is an integer number of cycles (assuming your 50Hz grid is nice and stable, just take 1s of data), and multiply it point-by-point with each of the generated sine waves. Then integrate each one using numpy.trapz() to get two scalars. Take these as the arguments to numpy.arctan2(), and voila, you have the phase of that signal relative to the generated sine wave. Do the same for the second signal and subtract the two to get the difference. As above, add or subtract \$2\pi\$ to get it in the meaningful range.
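A minimal sketch of that recipe, assuming `t`, `v`, `a` from the question and a 50 Hz reference (untested against real data):

import numpy as np

f = 50.0  # assumed mains frequency

def phase_of(x, t, f):
    i = np.trapz(x * np.sin(2 * np.pi * f * t), t)   # in-phase component
    q = np.trapz(x * np.cos(2 * np.pi * f * t), t)   # quadrature component
    return np.arctan2(q, i)                          # phase relative to the generated sine

phase_difference = phase_of(v, t, f) - phase_of(a, t, f)
phase_difference = (phase_difference + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)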

Jack B
  • I ended up using your quadrature demodulation method - sorry it took so long to credit! – Tom Jul 10 '18 at 15:59

I'm going to give two options. I don't know which will give better results.

  1. Interpolate your data onto a regular time grid. Then use all the various well-known methods for phase estimation (zero-crossings, cross-correlation, ...) that you and other answers mentioned.

  2. Construct a new sine and cosine array on your time grid. Estimate

$$A = \int_0^T f(t)\sin\left(2\pi f t\right){\rm d}t$$ and $$B = \int_0^T f(t)\cos\left(2\pi f t\right){\rm d}t$$

for each of your two signals, using trapz as you did to estimate average power.

Now the phase of each signal relative to the reference cosine can be estimated as \$\tan^{-1}\left(\frac{A}{B}\right)\$, or in Python numpy.arctan2(A, B). And the phase difference between the two signals can be obtained by subtraction.
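In code, the estimate for one signal might look like this minimal sketch (using trapz on your irregular t, with the variable names from your question and f = 50.0 assumed); the caveat below about sign and quadrant conventions applies here too:

import numpy as np

def phase_vs_cosine(x, t, f=50.0):
    A = np.trapz(x * np.sin(2 * np.pi * f * t), t)   # the A integral above
    B = np.trapz(x * np.cos(2 * np.pi * f * t), t)   # the B integral above
    return np.arctan2(A, B)                          # phase relative to the reference cosine

# e.g. phase difference: phase_vs_cosine(v, t) - phase_vs_cosine(a, t)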

Caveat: There's a chance I made an error in the above that will give an error of \$\pi/2\$ in the individual signals' phase estimates. Also be careful of possible \$2\pi\$ wrap-around in the final subtraction. Check your results for reasonableness before using them.

The Photon

In the world of signal processing, phase-difference is commonly estimated with the cross-correlation of two signals; let's call them a(t) and b(t). Given a(t) and b(t) are similarly shaped wave-forms, cross-correlation should be a reliable approach.

Here's a Python script (part of which was inspired by this SO answer) that uses the correlate function found in the numpy library to calculate the phase shift between two sine waves, a(t) and b(t). You should be able to extrapolate the contents of this script for your own application:

import numpy as np

# Create the time axis (seconds)
num_samples = 10001
samples_per_second = 1000
freq_Hz = 50.0
t = np.linspace(0.0, ((num_samples - 1) / samples_per_second), num_samples)
# Create a sine wave, a(t), with a frequency of 50 Hz
a = np.sin((2.0 * np.pi) * freq_Hz * t)
# Create b(t), a (pi / 2.0) phase-shifted replica of a(t)
b_shift = (np.pi / 2.0)
b = np.sin((2.0 * np.pi) * freq_Hz * t + b_shift)

# Cross-correlate the signals, a(t) & b(t)
ab_corr = np.correlate(a, b, "full")
dt = np.linspace(-t[-1], t[-1], (2 * num_samples) - 1)
# Calculate time & phase shifts
t_shift_alt = (1.0 / samples_per_second) * ab_corr.argmax() - t[-1]
t_shift = dt[ab_corr.argmax()]
# Limit phase_shift to [-pi, pi]
phase_shift = ((2.0 * np.pi) * ((t_shift / (1.0 / freq_Hz)) % 1.0)) - np.pi

manual_t_shift = (b_shift / (2.0 * np.pi)) / freq_Hz

# Print out applied & calculated shifts
print ("Manual time shift: {}".format(manual_t_shift))            --> 0.005
print ("Alternate calculated time shift: {}".format(t_shift_alt)) --> 0.005000000000000782
print ("Calculated time shift: {}".format(t_shift))               --> 0.005000000000000782                         
print ("Manual phase shift: {}".format(b_shift))                  --> 1.5707963267948966, b(t) relative to a(t)                            
print ("Calculated phase shift: {}".format(phase_shift))          --> -1.570796326794651, a(t) relative to b(t)

When phase_shift is negative, then you know that the 1st signal input in correlate() is lagging behind the 2nd signal input. In our case, this means that a(t) is lagging behind b(t) by (pi / 2). When phase_shift is positive, then you know that the 1st signal input is leading the 2nd signal input.

As you can see, the cross-correlation of two signals can simply be used to detect the phase difference between them. The error between the actual & calculated time / phase shifts will remain low as long as your signals are linearly related, sampled at a rate above their Nyquist limit, and have a decent SNR.

Fun fact: you can still use cross-correlation for this application even if the signals are aperiodic! Take caution when taking the cross-correlation of two signals that are related non-linearly.

  • Thanks for a detailed answer. Could you have a look at the edit to the question and respond? In your example, you have 1000 samples per cycle, while I have 20 - doesn't this rather limit the resolution of this method? – Tom Jun 16 '17 at 08:26
  • Also, isn't generating an array of `[-t:t/num_samples:t]` and using an index lookup rather a wasteful way of calculating the time shift? What's wrong with `2 * t * (ab_corr.argmax() - num_samples/2) / num_samples`? – Tom Jun 16 '17 at 08:29
  • @Tom See my results; they match the (pi / 2) phase shift we expect well. I don't believe that cross-correlation is limited by the sampling rate, as long as you're obeying Nyquist's Theorem (i.e., not under-sampling your signals). Regarding the `dt` array; I've included an alternative method for calculating time shift that avoids an index look-up (producing same time shift as with the `dt` index look-up method). One thing that is wrong with your suggestion is that it will also generate an array (remember, `t` is an array)... – Vladislav Martin Jun 16 '17 at 16:48

You do not mention it, but based on your variables it appears that you are attempting to measure the phase shift between current and voltage on a 50 Hz line.

To do proper phase measurements it is essential that the signal be clean and you specifically mentioned it is not. To get a clean signal, you can do low pass filtering either in an analog fashion prior to the ADC or digitally post the ADC. Naturally any phase shift from filtering must be equal for both signals or the introduced phase error must be predictable.

Two common means of implementing a digital low-pass filter are IIR and FIR algorithms. Fortunately, the scipy library has them built in; for example, look at scipy.signal.lfilter and related functions. Be aware, however, that your data is non-uniformly sampled in time, so you will first need to resample it using a spline or another appropriate interpolation algorithm.

Once the signals have gone through the LP filter, the most common technique for finding the phase difference is to find the peak of each signal, compute the time difference between the peaks, and convert this to degrees or radians.
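A rough sketch of that chain (resample, zero-phase low-pass filter, peak timing), with `t`, `v`, `a` assumed from the question and the filter order and cutoff picked arbitrarily; filtfilt is used here because it is zero-phase, so it introduces no phase error into either signal:

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 1000.0                                  # nominal uniform rate to resample onto
t_u = np.arange(t[0], t[-1], 1.0 / fs)
v_u = np.interp(t_u, t, v)                   # linear interpolation; a spline would also work
a_u = np.interp(t_u, t, a)

b_lp, a_lp = butter(4, 100.0 / (fs / 2.0))   # 4th-order low-pass, 100 Hz cutoff (arbitrary)
v_f = filtfilt(b_lp, a_lp, v_u)              # zero-phase filtering of both signals
a_f = filtfilt(b_lp, a_lp, a_u)

pv, _ = find_peaks(v_f)
pa, _ = find_peaks(a_f)
n = min(len(pv), len(pa))
dt_pk = np.mean(t_u[pv[:n]] - t_u[pa[:n]])   # crude peak pairing; can mis-pair by one
                                             # cycle for shifts near 180 degrees
phase = 2.0 * np.pi * 50.0 * dt_pk           # convert the time difference to radians

If you prefer zero crossings to peaks after filtering, the same resampled, filtered arrays work for that too.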

Glenn W9IQ
  • Actually it'd be more accurate to say that I'm not very confident about the noise in the signals, rather than that I have any hard evidence to say it's bad. As for your method of calculating the phase shift, how is it better than the zero-crossing method? It seems to me that the small slope at the peak of a sinusoid would make estimating the peak worse than estimating the zero crossing. – Tom Jun 16 '17 at 08:23
  • I have used it before quite successfully. With your high sample rate, it shouldn't be a problem. – Glenn W9IQ Jun 16 '17 at 11:08

One way is to use phase-sensitive detection. At 50 Hz sampled at 1000 samples per second, you are getting 20 samples per cycle. Use 20-sample blocks, and subtract the sum of the second 10 samples from the sum of the first 10 samples. This gives you a value that is related, via the sine function, to the phase relationship between the block boundary and the signal (you will have to scale for amplitude; you can use an average of the peak-to-peak voltage). It will be at a positive peak when the signal is exactly in phase with the block and at a negative peak when it is 180° out. The approach removes any DC component, and you can average the value over many cycles to mitigate your noise.

If you measure both signals with the same block timing, you get the phase relationship between each signal and your sampling system, which gets you an answer. Alternatively, you can build a loop that moves the block boundary (by shifting samples) and keeps the output at zero, but with that approach your resolution is 1/20th of a full cycle, which is 18 degrees (assuming your sample times are fixed at 1 kHz).
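A minimal sketch of the block arithmetic, assuming exactly 20 uniformly spaced samples per cycle (which, per the question, the real data only approximates) and a chunk trimmed to whole cycles:

import numpy as np

def half_cycle_difference(x, samples_per_cycle=20):
    # Sum of the first half of each cycle minus the sum of the second half,
    # averaged over all whole cycles in the chunk.
    n_cycles = len(x) // samples_per_cycle
    blocks = x[:n_cycles * samples_per_cycle].reshape(n_cycles, samples_per_cycle)
    half = samples_per_cycle // 2
    return np.mean(blocks[:, :half].sum(axis=1) - blocks[:, half:].sum(axis=1))

Doing this for both signals with the same block timing, and scaling each result by its signal's amplitude, gives each signal's phase relative to the block boundary; the difference of those two angles is the phase shift.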

John Birckhead
  • Does this method work with irregular sample intervals? – The Photon Jun 15 '17 at 16:15
  • No. You are relying on two things: the amount of time for the positive samples and the amount of time for the inverted samples are exactly the same, and that they are at the frequency of interest. I think you can see that you are basically taking the data and subtracting one half of the samples from the other half, which should yield zero for any random stuff. Only data that has the characteristic of having a value of different polarity in each half cycle at your block rate will yield any output over the long haul. – John Birckhead Jun 15 '17 at 20:11

If your only problem with the measurement is the ambiguity, you can resolve it by going back to the data, using some method to harmonize zero crossings too close to one another (take their mean arrival time, maybe), and then calculating the angle using the zero crossing method. The result you want will be whichever of the two possible phase shifts is closer to the zero crossing result.

While I'm on the subject, one method to find the phase angle that I've never tried, but which should work (it also gives an ambiguous result), is to measure the voltage of each sine wave and then the voltage difference between them. That gives you the three sides of a triangle, and you can find the angles with the law of sines/law of cosines.
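For the triangle idea, a minimal law-of-cosines sketch, assuming the three measurements are RMS (or any consistent) magnitudes with hypothetical names Va, Vb and Vd (the difference signal); the result lies in [0, π], so the leading/lagging ambiguity remains:

import numpy as np

def phase_from_magnitudes(Va, Vb, Vd):
    # Phasor identity: |Va - Vb|^2 = Va^2 + Vb^2 - 2*Va*Vb*cos(phi)
    return np.arccos((Va**2 + Vb**2 - Vd**2) / (2.0 * Va * Vb))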

toiler