According to the USB Battery Charging specification, a USB port can indicate that it is a Dedicated Charging Port (DCP), e.g. a dumb wall charger, by shorting D+ to D-. Version 1.1 of the spec states that such a port must be able to supply up to 1.5 A, while the latest (and final) version, 1.2, raises this figure to 5 A.
This confuses me. Say a device following v1.2 is plugged into a DCP that follows v1.1. With no communication happening besides the shorted D pins, the device would identify the port as being able to supply 5 A and could happily start drawing much more than 1.5 A, which the charger may not be able to supply, or which it might supply anyway and then catch fire.
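To make the scenario concrete, here is a rough firmware-side sketch of what I understand the device to be doing: it probes the D lines, sees them shorted, concludes "this is a DCP", and programs whatever current limit its own spec revision allows. All helper names (drive_dp_vdp_src, read_dm_mv, set_input_current_limit_ma) and the exact threshold are hypothetical stand-ins for whatever the charger IC or PHY actually provides, not anything from the spec itself.

```c
#include <stdbool.h>
#include <stdint.h>

#define VDAT_REF_MV      325   /* D- comparator threshold, roughly per BC 1.2 */
#define I_DCP_BC11_MA   1500   /* max draw from a DCP under BC 1.1 */
#define I_DCP_BC12_MA   5000   /* max draw from a DCP under BC 1.2 */

/* hypothetical hardware hooks provided by the charger IC / USB PHY */
void     drive_dp_vdp_src(bool enable);          /* source ~0.6 V onto D+ */
uint32_t read_dm_mv(void);                       /* sample D- voltage in mV */
void     set_input_current_limit_ma(uint32_t ma);

static bool detect_dcp(void)
{
    /* Drive D+ with a small voltage; on a DCP the D+/D- short pulls D- up too. */
    drive_dp_vdp_src(true);
    bool dm_follows = read_dm_mv() > VDAT_REF_MV;
    drive_dp_vdp_src(false);
    return dm_follows;   /* D- followed D+ -> treat the port as a DCP */
}

void configure_input_limit(void)
{
    if (detect_dcp()) {
        /* This is the crux of my question: a v1.2 device sees nothing but
         * the shorted D lines, so it programs the 5 A limit even though a
         * v1.1 charger only ever promised 1.5 A. */
        set_input_current_limit_ma(I_DCP_BC12_MA);
    } else {
        set_input_current_limit_ma(500);  /* plain standard downstream port */
    }
}
```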
How can this be in a system that is as backwards compatible as USB? Sure, nowadays with Power Delivery, we're back to having to check all parts of a USB charging chain (charger, cable and device) to get optimum performance, but surely not for safety? Am I missing something? Was version 1.1 only ever a draft, not meant for implementation?