19

I apologize if this question is not well-posed. I'm reading a paper that claims the following:

The magnetometer vectors are sampled at 100 Hz. The detector filters and down samples the vectors down to 10 Hz to remove signal noise and reduce the computation required for live processing on the smartwatch.

My question is: if they wanted the sampling frequency to be 10 Hz, why did they not just sample at 10 Hz initially?

colglaz
  • 193
  • 1
  • 5

5 Answers

43

if they wanted the sampling frequency to be 10 Hz, why did they not just sample at 10 Hz initially?

In order to avoid aliasing, the signal has to be lowpass-filtered before sampling. No frequencies above Fs/2 should be present in the analog signal (or, realistically, they should be attenuated enough to be buried in the noise, or to a level low enough to meet the specifications you want).

If you sample at Fs=10Hz and want to acquire say, 4Hz signals, your filter will need to let them through, yet provide strong attenuation above 5Hz, so it will need a flat transfer function in the passband, then a steep fall-off after the cutoff frequency.
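To make that steepness concrete, here is a back-of-the-envelope calculation (my own illustration, the 60 dB figure is an assumed spec, not from the paper): an analog Butterworth filter that is flat to 4 Hz yet provides 60 dB of attenuation by 5 Hz needs roughly order 31, since far-band Butterworth roll-off at the stopband edge is about 20·n·log10(f_stop/f_pass) dB.

```python
import math

# Hypothetical spec (illustration only): passband edge 4 Hz,
# stopband edge 5 Hz (Nyquist for Fs = 10 Hz), 60 dB stopband attenuation.
f_pass = 4.0
f_stop = 5.0
atten_db = 60.0

# Butterworth magnitude: |H(f)|^2 = 1 / (1 + (f/fc)^(2n)), so the
# attenuation at f_stop (with cutoff at f_pass) is ~ 20*n*log10(f_stop/f_pass) dB.
order = math.ceil(atten_db / (20.0 * math.log10(f_stop / f_pass)))
print(order)  # 31 -- far beyond what is practical with analog components
```

A 31st-order analog filter is unbuildable in practice, which is exactly why the filtering is pushed into the digital domain.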

These high-order filters are difficult and expensive to implement in the analog domain, but very simple to do in the digital domain. Digital filters are also very accurate, the cutoff frequency does not depend on the tolerance of capacitors for example.

Thus, it is much cheaper to use a low-order analog lowpass, oversample by a large factor, then use a sharp digital filter to downsample to the final sample rate you actually want.
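As a sketch of that scheme (all numbers are my own assumptions, not from the paper): sample at 100 Hz, apply a sharp digital FIR lowpass, then keep every tenth sample. A ~101-tap windowed-sinc filter is trivial in software yet far steeper than any practical analog filter:

```python
import numpy as np

fs = 100.0                       # oversampled rate (Hz)
decim = 10                       # 100 Hz -> 10 Hz
t = np.arange(0, 10, 1 / fs)     # 10 s of data

# Assumed test signal: a wanted 2 Hz component plus 33 Hz interference,
# which would alias to 3 Hz if we simply kept every 10th sample.
x = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 33 * t)

# Sharp digital lowpass: 101-tap windowed-sinc, cutoff 4 Hz.
ntaps = 101
fc = 4.0 / fs                    # cutoff normalized to the sample rate
n = np.arange(ntaps)
h = 2 * fc * np.sinc(2 * fc * (n - (ntaps - 1) / 2)) * np.hamming(ntaps)
h /= h.sum()                     # unity gain at DC

filtered = np.convolve(x, h, mode="same")
y = filtered[::decim]            # final 10 Hz stream, interference removed
```

The Hamming-windowed design attenuates the 33 Hz component by roughly 50 dB before decimation; decimating the raw signal instead would fold it down to an indistinguishable 3 Hz tone.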

The same digital hardware can be used for several channels too. At this low sampling frequency, the computing power requirements are very low, and a modern microcontroller will easily implement many channels of digital filtering at a very cheap price.

bobflux
  • 70,433
  • 3
  • 83
  • 203
  • How do you determine what to down sample to? Is there a rule of thumb? Can we simply down sample to, say, the audio range in an SDR processing chain as soon as possible? – RichieHH Oct 19 '21 at 02:21
  • Well if the signal won't alias because it's been filtered and contains only bandwidth F, then you can downsample it to a bit above 2F, but not before that... – bobflux Oct 19 '21 at 08:32
10

You mentioned the word magnetometers. This expands the scope a little.

Magnetometers, for those not familiar, measure magnetic flux density and create a proportional output voltage/signal.

It is likely you will also detect a large amount of unwanted "electrical energy", due to the magnetic field radiated by any electrical cables around.

In fact, directly sampling at 10 Hz in the presence of 50 Hz could drive you mad: neither your sample clock nor the mains will be at exactly its nominal frequency, so the interference aliases to near DC and you will see what looks like a slow DC shift up and down over a period of several seconds.

Sampling at 100 Hz becomes significant in helping to null out this unwanted signal: each 100 ms averaging window spans exactly five cycles of 50 Hz, so the pickup cancels. This is typical for places where 50 Hz mains is found; in the US it is 60 Hz of course.
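To illustrate the nulling (toy numbers of my own): averaging each block of ten 100 Hz samples spans exactly five cycles of a 50 Hz interferer, so it cancels regardless of its phase, while sampling straight at 10 Hz turns a slightly-off mains frequency into the slow wander described above.

```python
import numpy as np

fs = 100.0
t = np.arange(0, 10, 1 / fs)

slow = 0.2 * np.sin(2 * np.pi * 0.3 * t)          # wanted slow signal (assumed)
mains = 0.5 * np.sin(2 * np.pi * 50.0 * t + 0.7)  # 50 Hz pickup, arbitrary phase
x = slow + mains

# Average non-overlapping blocks of 10 samples -> 10 Hz output.
y = x.reshape(-1, 10).mean(axis=1)
clean = slow.reshape(-1, 10).mean(axis=1)
print(np.max(np.abs(y - clean)))  # ~0 (floating-point epsilon): 50 Hz averages away

# Contrast: sampling a 50.1 Hz interferer directly at 10 Hz aliases it
# to a 0.1 Hz wander -- the "slow DC shift" effect.
m = np.arange(100)
direct = np.sin(2 * np.pi * 50.1 * m / 10.0)      # indistinguishable from 0.1 Hz
```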

If you are using magnetometers in some countries, the 100 Hz/10 Hz scheme does not work so well; you might find a different model for those markets.

The answers on anti-aliasing/filtering etc. are still correct; this is just more specific to your use case.

user179518
  • 101
  • 2
7

They don't immediately downsample. They "filter and down sample". Presumably the filter is a low-pass that eliminates aliases that might occur in the downsampled signal. The filtering also might reduce noise by using information from several of the 100 Sps samples to contribute to determining each of the sample values in the decimated (10 Sps) signal.

The Photon
  • 126,425
  • 3
  • 159
  • 304
  • 5
    This answer is correct, but just for completeness, to downsample correctly, you *must* low-pass filter (at the Nyquist frequency) before downsampling. The filter is not optional. – Mark Lakata Feb 27 '18 at 00:12
  • @MarkLakata I disagree. The filter itself is not required, what is required is that you don't have signals above Fs/2. If you expect some, then you need to add the so-called anti-aliasing filter. If, by design or by nature of what you measure, you don't expect anything (signal or noise) above Fs/2 then the filter is useless. – Blup1980 Feb 27 '18 at 08:20
  • @Blup1980 Technically true - but only if you are sampling a mathematically-pure signal, with infinite resolution, and with zero jitter on waveform generation and sampling points. Even for post-processing a "pure" computer-generated waveform, this means you need it in all digital sampling because of noise in the LSB (although for high resolutions you may choose to ignore it because it is small). For the OP's case, it is absolutely required and is never optional. – Graham Feb 27 '18 at 10:49
  • @Blup1980 fair enough, it is possible that the signals were stupidly sampled at 100 Hz with a 20 Hz LP filter in place. But in the general case, where your input waveforms are not band-limited, you need to low-pass your data before resampling at a lower frequency. https://en.wikipedia.org/wiki/Sample-rate_conversion In the case of a magnetometer (i.e. compass on a smart phone) you can assume that there is plenty of noise at all frequencies above 20 Hz. – Mark Lakata Feb 28 '18 at 17:37
5

There are many cases where various fast (compared to the signal) noise sources can affect readings. Another example is a photodiode taking slow measurements. It could easily pick up the 50/60/100/120 Hz flicker of various common light sources depending on where you are, and will probably even pick up high-frequency LED/fluorescent light flicker.

In some cases you may be able to use a low-pass filter on the input, but it's often simpler to optimise the filtering in software (e.g. simply oversample and average some number n of samples, where n is user-configurable).
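A minimal sketch of that software approach (the function name and numbers are my own, not from any particular driver): oversample, then average a user-configurable n raw readings per reported value.

```python
import numpy as np

def average_readings(samples: np.ndarray, n: int) -> np.ndarray:
    """Boxcar-average non-overlapping groups of n samples.

    A crude but cheap low-pass: each reported value is the mean of n
    consecutive raw readings; any leftover tail is discarded.
    """
    usable = (len(samples) // n) * n
    return samples[:usable].reshape(-1, n).mean(axis=1)

# e.g. 6 raw ADC readings averaged in groups of 3
print(average_readings(np.array([1.0, 2, 3, 4, 5, 6]), 3))  # [2. 5.]
```

Making n configurable lets the user trade bandwidth for noise without touching any analog hardware.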

Reducing the sampling rate doesn't (necessarily, or linearly) increase the settling time, so you're essentially snapshotting the input signal. In the MCP3002, for example, the settling time is based on the SPI clock speed, which may be set for other reasons and not on the sampling rate at all. This makes sense: the device doesn't know the sampling rate, only that it's being asked to sample, although the data sheet figures assume a clock speed derived from the sampling rate. If the device's performance is set by the clock speed, and the minimum clock speed is higher than you'd like for the performance you need, you may as well read out faster; averaging is cheap.

Chris H
  • 2,331
  • 11
  • 18
  • Very good point, the choice of sampling frequency may be an artefact of some unrelated design choice. – KalleMP Feb 27 '18 at 14:50
3

Oversampling with a SAR ADC eases the anti-aliasing filter requirements and the transient response, while averaging during decimation reduces the noise by √n for n samples in software. If an integrating ADC were available, it could be done in one step.
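The √n figure is easy to check numerically (a toy simulation of my own): averaging n independent samples of white noise shrinks its standard deviation by about √n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                    # samples averaged per output value
noise = rng.standard_normal(100_000)       # unit-variance white noise

averaged = noise.reshape(-1, n).mean(axis=1)
print(noise.std(), averaged.std())         # ~1.0 and ~0.1: a sqrt(100) = 10x reduction
```

Note this holds only for noise that is uncorrelated between samples; correlated interference (like mains hum) needs the nulling tricks discussed in the other answers.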

Tony Stewart EE75
  • 1
  • 3
  • 54
  • 182