By true measurement duration I mean the window of time during which information is being collected from the continuous signal under test.
Since there is only one value per sample, the extremes are:
- The average over the entire measurement window.
- An instantaneous value at the end of the window.
Most data acquisition systems fall somewhere between those two, unless they were designed to prevent aliasing across the entire sampling frequency range.
There's also a question of whether the measurement window is equal to the sampling period or shorter than it.
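To make those two extremes concrete, here is a small numerical sketch - the output rate and the interfering tone are made-up numbers, not tied to any particular instrument. One stream averages each measurement window, the other takes the instantaneous value at the end of the window; only the latter lets an out-of-band tone through at full (aliased) amplitude.

```python
import numpy as np

fs_out = 100.0      # output sample rate, Hz (assumed for illustration)
f_tone = 930.0      # out-of-band interferer, Hz (assumed for illustration)
dt     = 1e-5       # step of the finely sampled "continuous" signal, s

t = np.arange(0, 1.0, dt)
x = np.sin(2 * np.pi * f_tone * t)

n_win = int(round(1 / (fs_out * dt)))   # fine-grained points per output sample

# Extreme 1: each output sample is the average over the whole window.
avg_samples = x[: len(x) // n_win * n_win].reshape(-1, n_win).mean(axis=1)

# Extreme 2: each output sample is an instantaneous value at the window's end.
point_samples = x[n_win - 1 :: n_win]

print("RMS of window-averaged samples:", np.sqrt(np.mean(avg_samples ** 2)))
print("RMS of instantaneous samples:  ", np.sqrt(np.mean(point_samples ** 2)))
# Averaging acts as a sinc filter and knocks the 930 Hz tone down to a few
# percent; point sampling keeps it at full amplitude, aliased to 30 Hz.
```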
To prevent aliasing independently of the output rate, you either need tunable front-end filters - e.g. switched-capacitor filters - or the filtering has to be done post-sampling. The latter is the cheaper and better-performing approach for general-purpose data acquisition. The input signal is heavily oversampled, e.g. with a sigma-delta converter, then filtered and decimated to the desired data rate. Since the output samples are synthetic anyway - in the sense that filters and decimators are involved - the output data rate doesn't matter much: it's just a matter of setting the filter coefficients and update rate correctly.
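A minimal sketch of that post-sampling approach, using SciPy's polyphase resampler as a stand-in for the on-board filter/decimator chain; the rates, tone frequencies and decimation ratio are arbitrary placeholders:

```python
import numpy as np
from scipy.signal import resample_poly

fs_adc = 1_000_000   # rate the converter actually runs at, Hz (assumed)
fs_out = 10_000      # requested output data rate, Hz (assumed)

t    = np.arange(0, 0.1, 1 / fs_adc)
raw  = np.sin(2 * np.pi * 1_000 * t)            # in-band content at 1 kHz
raw += 0.5 * np.sin(2 * np.pi * 123_456 * t)    # junk far above the output band

# Digital lowpass + decimation: content above fs_out/2 is removed *before*
# the rate reduction, so it cannot alias into the output band.
clean = resample_poly(raw, up=1, down=fs_adc // fs_out)

# Naive decimation for comparison: just throw samples away, no filtering.
naive = raw[:: fs_adc // fs_out]

def worst_spur_above(x, fs, f_lo):
    """Largest windowed-FFT magnitude above f_lo Hz (crude, for illustration)."""
    spec  = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[freqs > f_lo].max()

print("worst spur, naive decimation:   ", worst_spur_above(naive, fs_out, 2_000))
print("worst spur, filtered/decimated: ", worst_spur_above(clean, fs_out, 2_000))
# In the naive stream the 123.456 kHz junk shows up as an alias at 3.456 kHz;
# in the filtered stream it is strongly attenuated. A different output rate is
# just a different decimation ratio and filter design.
```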
Be mindful, though, that the output sample rate is not intrinsically tied to the rate the ADC actually operates at. In advanced data acquisition systems, the ADC runs at its maximum sampling frequency, synchronized to a quartz oscillator, and the output rate is synthesized numerically on an FPGA, an MCU or an APU. That internal numerical timebase can in turn be referenced to an external clock source, with phase-noise performance mostly limited by the internal clock.
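Here's a toy phase-accumulator model of that kind of numerical timebase - purely illustrative numbers, not any vendor's implementation - showing how an arbitrary output rate can be derived from a fixed ADC sample clock:

```python
fs_adc    = 2_000_000     # fixed converter rate, Hz (assumed)
fs_out    = 44_100.0      # requested output rate, Hz (assumed)
acc_bits  = 48            # width of the numerical timebase accumulator
increment = round(fs_out / fs_adc * 2**acc_bits)

acc, output_ticks = 0, 0
for _ in range(fs_adc):            # one second of ADC samples
    acc += increment
    if acc >= 2**acc_bits:         # accumulator rollover = one output tick
        acc -= 2**acc_bits
        output_ticks += 1          # here the decimating filter would emit a sample

print("output samples in one second:", output_ticks)   # 44100
# The achieved rate differs from the request only by the quantization of
# `increment`; long-term accuracy and phase noise follow whatever reference
# clock drives fs_adc, which is exactly why the clock reference matters.
```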
Unfortunately, it seems that $2k DAQ devices are not great in that respect, and to get that sort of performance the price tag goes up significantly - unless you're designing your own product, where you can get an awful lot of performance for $500 worth of A/D and signal chain.
As a practical case, the NI USB-6361 DAQ has a variable sampling rate from "no minimum" to 2 MS/s and a small-signal bandwidth of 1.7 MHz.
Unless they explicitly state that the sampling is alias-free, you must assume it's the typical "low effort" option where the ADC rate is the same as the output sampling rate. This comes with serious performance compromises if you run the ADC at any rate other than the maximum - and even then, the small-signal bandwidth is too large for the sampling rate.
It's one of those dubious designs that are really hard to use to their full potential. They are almost universally noisy and EMI-sensitive, because if you are sampling at, say, 100 Hz, the whole bandwidth's worth of junk gets aliased into the 50 Hz output bandwidth. For that particular device, the noise specs are given at full bandwidth, and the overall performance is hardly impressive.
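To put rough numbers on that noise folding - the 1.7 MHz bandwidth and 100 Hz rate come from above, while the flat noise density is a made-up round figure just to show the arithmetic:

```python
import math

analog_bw  = 1.7e6     # small-signal bandwidth, Hz
fs_out     = 100.0     # chosen output sample rate, Hz
noise_dens = 20e-9     # hypothetical flat noise density, V/sqrt(Hz)

# Without an antialiasing filter, noise from the full analog bandwidth
# folds into the 0..fs_out/2 output band.
folded_rms  = noise_dens * math.sqrt(analog_bw)
bandlimited = noise_dens * math.sqrt(fs_out / 2)

print(f"noise with no antialiasing filter: {folded_rms * 1e6:6.2f} uV rms")
print(f"noise if band-limited to 50 Hz:    {bandlimited * 1e6:6.2f} uV rms")
print(f"penalty: {folded_rms / bandlimited:.0f}x")   # ~184x in voltage terms
```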
Also, the analog inputs look like unbuffered MUX inputs. Driving them is not trivial, since they inject short, high-bandwidth charge pulses back onto the signal lines.
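As a rough feel for what "not trivial" means, here is a back-of-envelope settling estimate; the sampling capacitance, acquisition time and resolution are illustrative guesses, not datasheet values:

```python
import math

c_sample  = 10e-12   # sampling capacitance seen at the pin, F (assumed)
t_acquire = 1e-6     # acquisition window, s (assumed)
n_bits    = 16       # desired settling accuracy, bits

# Settling an RC step to within 1/2 LSB takes about (n_bits + 1) * ln(2)
# time constants, which bounds the source impedance driving the pin.
tau_needed = t_acquire / ((n_bits + 1) * math.log(2))
r_max      = tau_needed / c_sample

print(f"required time constant: {tau_needed * 1e9:.1f} ns")   # ~85 ns
print(f"max source resistance:  {r_max:.0f} ohm")             # ~8.5 kohm
```

In practice that tends to mean a buffer or a carefully sized RC in front of each channel, rather than a high-impedance sensor wired straight to the pin.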