First of all, you're doing something VERY right that a lot of IoT designers and users don't: You consider the fact that operation needs to be reliable and latency-bounded. Not everyone does that, and that's why many IoT devices really are bad.
The choice of standard between 802.11 b/g/n won't really influence your latency much. I assume we're bounding latencies to < 10 ms, because everything above that will "just work in 99.5% of cases using good WiFi hardware".
If you're in a latency-bound scenario, you certainly won't
- use TCP (and thus MQTT, which is built on top of it)
- use a device that emulates a slow serial link – if your packets are, say, 4 characters long and the link runs at 9600 baud, you'll spend roughly a millisecond per character, so about 4 ms just getting the data from the µC to the WiFi device (see the sketch after this list)
- use WiFi, since there's no guarantee your station will be able to send within a finite time window, at all (only a likelihood)
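
To put a number on that serial-link point: assuming 8N1 framing (1 start bit + 8 data bits + 1 stop bit, so 10 bits on the wire per payload byte), the transfer time at 9600 baud is easy to estimate:

```python
# Back-of-the-envelope UART transfer time (assumption: 8N1 framing,
# i.e. 10 bits on the wire per payload byte).
payload_bytes = 4        # the 4-character packet from the example above
bits_per_byte = 10       # 1 start bit + 8 data bits + 1 stop bit
baud = 9600

transfer_time_ms = payload_bytes * bits_per_byte / baud * 1000
print(f"{transfer_time_ms:.2f} ms")   # ~4.17 ms before the radio even sees the packet
```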
If you need reliability, on the other hand, you mustn't
- use pure UDP (since there's no guarantee or feedback that the packets reach their destination – see the sketch after this list for the kind of feedback you'd have to add yourself)
- use a pure one-way radio protocol (same reason)
- use an ESP8266, which gets its price advantage from a lack of testing, design and certification for high-reliability operation (and thus, no large electronics manufacturer will use one without doing that testing themselves, in which case ready-made modules from trustworthy manufacturers usually become cheaper)
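
To make the UDP point concrete: "reliability" boils down to some kind of feedback from the receiver. A minimal sketch of what that could look like on top of UDP (sequence numbers, an ACK, a retransmit timeout; framing, names and the numbers here are purely illustrative, not a recommendation):

```python
import socket
import struct

# Minimal sketch of application-level feedback on top of UDP: each datagram
# carries a sequence number, the receiver is expected to echo it back, and the
# sender retransmits on timeout.

def send_with_ack(sock, addr, seq, payload, timeout_s=0.005, retries=3):
    """Send one datagram and wait for an ACK carrying the same sequence number.

    Returns True on confirmed delivery, False once the retry budget is spent.
    """
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(timeout_s)
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(16)
            if len(ack) >= 4 and struct.unpack("!I", ack[:4])[0] == seq:
                return True     # receiver confirmed this sequence number
        except socket.timeout:
            continue            # no feedback yet -> retransmit
    return False                # caller must handle the loss explicitly
```

Note how every retransmission eats into the latency budget – the timeout and retry count have to be derived from the figures you write down below, not picked arbitrarily. That's exactly the tension between the two lists above.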
So, first of all, define your latency requirements and your reliability requirements. You must have a piece of paper that says:
The latency for {one|two}-way communication needs to be < {max latency} in {tolerable percentage}% of cases. The probability of losing a packet must not exceed {tolerable loss probability}%.
Then you can look at the theoretical limits of candidate systems, and after that at the practical limits of the implementations that fit those requirements.
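
For that last step, one way to check a candidate link against the numbers on your piece of paper is to simply measure it. A rough host-side sketch (UDP ping-pong against an echo service; the address, port and thresholds are placeholders you'd replace with your own):

```python
import socket
import struct
import time

def measure_round_trips(addr, count=1000, timeout_s=0.05):
    """Ping-pong `count` datagrams against a UDP echo service at `addr`;
    returns the sorted round-trip times in ms and the number of lost packets."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    rtts, lost = [], 0
    for seq in range(count):
        t0 = time.monotonic()
        sock.sendto(struct.pack("!I", seq), addr)
        try:
            sock.recvfrom(64)
            rtts.append((time.monotonic() - t0) * 1e3)
        except socket.timeout:
            lost += 1
    return sorted(rtts), lost

# Example check against a made-up requirement:
# "round trip < 10 ms in 99.5% of cases, packet loss < 0.5%"
rtts, lost = measure_round_trips(("192.168.0.50", 7))   # address/port are placeholders
if rtts:
    p995 = rtts[min(len(rtts) - 1, int(0.995 * len(rtts)))]
    print(f"99.5th-percentile RTT: {p995:.2f} ms, lost packets: {lost}")
```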