Using this website as a reference for differential probes:
https://circuitcellar.com/research-design-hub/high-voltage-differential-probe/
By Andrew Levido, Circuit Cellar
The differential probe is described as attenuating the entire common-mode input range down to a level the differential amplifier can accept. This leaves the signal of interest at millivolt levels, so its difference then has to be re-amplified, referenced to GND, to suit the desired output range.
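To put rough, made-up numbers on it (not the values from the article): suppose the probe must tolerate ±500 V of common mode and the difference amplifier only accepts ±5 V, so the front end divides everything by 100. A 50 mV signal of interest riding on that common mode then arrives at the amplifier as only 0.5 mV of difference, which has to be gained back up by roughly 100, referenced to ground, to fill the output range.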
Now, if the same probe were isolated (isolated output, to be specific), it would not need to attenuate the entire common-mode input range. Since it would float at the level of the signal of interest, you would only need to attenuate the signal of interest itself down to a range the amplifier can accept.
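Again with illustrative numbers only: if the floating front end rides at the signal's common-mode potential and the signal of interest itself spans, say, ±10 V into a 0-5 V ADC input, a modest 4:1 divider plus an offset is all that is needed. There is no 100:1 division of the wanted signal and no sub-millivolt difference to recover afterwards.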
Other than the cost of isolation, is there a performance drawback to the isolation-amplifier approach that I am not seeing, which would favour the differential amplifier?
For context: I need to build a fairly general-purpose analog input to a microcontroller, and for safety I was going to isolate it anyway. The plan was to run the ADC off an isolated supply and communicate with the microcontroller through a digital isolator. It then occurred to me that if the input is isolated anyway, no re-amplification is required as in the differential-probe method. Since I control the ADC and can easily isolate it, I'm not seeing much point in the differential-probe method, where the entire common-mode input range is attenuated and the tiny signal difference is then re-amplified referenced to ground.
Does the differential-probe method exist mainly as a cost issue, or for when one does not have the option of floating the ADC? I'm not seeing any benefit to it for my use case if I'm planning to isolate the ADC anyway.