Mind, I don't know about the particulars of mains-frequency CT design.
I suspect they just don't want to measure it. Consider what ±15% means: it could be inductive or capacitive. Now, either case is plausible -- there's a LOT of wire on there to make up the thousands of turns needed for low-frequency use -- but it beggars belief that the phase could be, not only inductive or capacitive, but that far out, at mains frequency.
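For a rough sense of scale (purely my own back-of-envelope model, not anything from a datasheet): if you read the figure as something like a 15° worst-case phase error, and model the CT's low-frequency roll-off as a single-pole high-pass (magnetizing inductance working against the burden), you can back out the corner frequency such an error would imply at mains frequency:

```c
#include <math.h>
#include <stdio.h>

/* Back-of-envelope check (my own assumption, not a datasheet model):
 * treat the CT's low-frequency roll-off as a single-pole high-pass, so the
 * phase lead at frequency f is atan(fc / f). Solve for the corner fc that
 * a hypothetical 15-degree phase error would imply at 50 Hz.
 */
int main(void)
{
    const double PI = 3.14159265358979;
    double f_mains   = 50.0;   /* Hz */
    double phase_deg = 15.0;   /* hypothetical worst-case phase error */
    double fc = f_mains * tan(phase_deg * PI / 180.0);
    printf("Implied high-pass corner: %.1f Hz\n", fc);   /* about 13 Hz */
    return 0;
}
```

Under the same single-pole assumption, a corner that close to mains frequency would also mean a few percent of amplitude error -- exactly the kind of thing all those turns are there to avoid, hence the skepticism.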
I suspect they just don't want to characterize it for every part manufactured.
There's also the difference between commercial grade and revenue grade CTs.
You can also check the datasheet and see whether that's the phase shift over the full rated bandwidth (say, some Hz to kHz?), whose band edges might be defined by such a phase angle, or by flatness within some dB. This is a more sensible explanation, really. But without a datasheet, who knows what they meant.
Even if all of these meanings were true in the worst case, it's still possible to use such a crummy part. A calibration step would be necessary, perhaps a fairly messy one (there are multiple variables to solve for), and it would have to be repeated for every unit (as the variation between individual components might be too great with respect to the desired tolerances and frequency range). But it's not a big deal to insert an inverse filter network, in DSP, to compensate for the frequency response (amplitude and phase shift) of the sensor. The computing power required to do this isn't even beyond the reach of, say, an ATMEGA microcontroller (i.e., an entry-level Arduino). The calibration should only have to be calculated once, saved into EEPROM or what have you, and that's that.
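As a minimal sketch of what that inverse filter could look like -- everything here is illustrative: the single-pole model of the CT, the corner frequencies, and the sample rate are assumptions of mine, not values from any datasheet -- a first-order shelving section that boosts the low end and undoes the corresponding phase lead is only a few multiply-adds per sample:

```c
#include <math.h>
#include <stdio.h>

/* Illustrative inverse-filter sketch (not from any datasheet):
 * model the CT's low-frequency behaviour as a single-pole high-pass with
 * corner fc_ct (found during calibration), and compensate with a first-order
 * shelf (zero at fc_ct, pole at a lower frequency fl so the DC boost stays
 * bounded). Coefficients come from the bilinear transform; they would be
 * computed once at calibration time and stored in EEPROM.
 */

typedef struct {
    float b0, b1, a1;     /* y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1] */
    float x1, y1;         /* one sample of state */
} Shelf1;

static void shelf1_design(Shelf1 *f, float fs, float fc_ct, float fl)
{
    const float PI = 3.14159265f;
    /* Pre-warped bilinear transform of H(s) = (s + wz) / (s + wp),
     * wz = 2*pi*fc_ct (cancels the CT's pole), wp = 2*pi*fl (bounds DC gain). */
    float wz = tanf(PI * fc_ct / fs);
    float wp = tanf(PI * fl / fs);
    float k  = 1.0f / (1.0f + wp);
    f->b0 = k * (1.0f + wz);
    f->b1 = k * (wz - 1.0f);
    f->a1 = k * (wp - 1.0f);
    f->x1 = f->y1 = 0.0f;
}

static float shelf1_step(Shelf1 *f, float x)
{
    float y = f->b0 * x + f->b1 * f->x1 - f->a1 * f->y1;
    f->x1 = x;
    f->y1 = y;
    return y;
}

int main(void)
{
    Shelf1 comp;
    shelf1_design(&comp, 2000.0f, 13.0f, 1.0f);   /* 2 kS/s, hypothetical corners */
    /* Feed raw CT samples through shelf1_step() before the power calculation. */
    for (int n = 0; n < 8; n++)
        printf("%f\n", shelf1_step(&comp, 1.0f)); /* step response, just to show it runs */
    return 0;
}
```

The per-unit calibration then amounts to measuring each part's actual corner and gain error, computing the coefficients once, and burning them into EEPROM; the per-sample work should be well within an ATMEGA's reach at typical power-metering sample rates.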
Note that frequency response calibration raises the noise floor in a nonuniform manner, since you're adding gain at the frequencies where the sensor rolls off, and the noise there comes up by the same factor. Having a flat analog and digital signal path may be preferable for, say, instrument design or revenue purposes, while calibration (or just winging it) might be fine for a device simply measuring its own power for control purposes.
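To put rough numbers on that (continuing the same hypothetical first-order compensator and corner frequencies as above), the noise gain tracks the correction gain, so the floor comes up by a large factor near the shelf and hardly at all higher up:

```c
#include <math.h>
#include <stdio.h>

/* How much the (hypothetical) inverse filter raises the noise floor, by
 * frequency. For the first-order compensator with zero at fc and pole at fl,
 * the gain -- and hence the noise gain -- is sqrt((f^2 + fc^2) / (f^2 + fl^2)).
 * The corner values are the same assumed numbers as in the sketch above.
 */
int main(void)
{
    double fc = 13.0, fl = 1.0;   /* assumed CT corner and shelf floor, Hz */
    double freqs[] = { 1.0, 5.0, 13.0, 50.0, 150.0, 1000.0 };
    int n = (int)(sizeof freqs / sizeof freqs[0]);
    for (int i = 0; i < n; i++) {
        double f = freqs[i];
        double g = sqrt((f * f + fc * fc) / (f * f + fl * fl));
        printf("%7.1f Hz: noise gain %5.2fx (%+.1f dB)\n", f, g, 20.0 * log10(g));
    }
    return 0;
}
```

With different assumed corners the numbers move around, but the shape of the trade-off is the same.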