The standard is a sort of "contract" between you and your compiler that defines the meaning of your programs. As programmers, we often have a certain mental model of how the language works, and that mental model is often at odds with the standard. (For example, C programmers often think of a pointer as roughly "an integer that denotes a memory address", and therefore assume that it's safe to perform any arithmetic, conversion, or manipulation on a pointer that one might perform on such an integer. This assumption does not agree with the standard, which actually imposes very strict restrictions on what you can do with pointers.)
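As a sketch of the kind of restriction meant here (the helper name `sum` is mine, not from the text): the standard only guarantees pointer arithmetic that stays within an array, or goes exactly one element past its end. The "pointer as integer" model suggests much more is safe; the contract says otherwise.

```c
#include <assert.h>
#include <stddef.h>

/* Sum an array using only pointer arithmetic the standard guarantees:
 * a pointer may range over the array's elements and its
 * "one past the end" position. */
int sum(const int *first, size_t n) {
    const int *end = first + n;  /* one past the end: valid to form and compare */
    int total = 0;
    for (const int *p = first; p != end; ++p)
        total += *p;
    /* By contrast, even *forming* first - 1 (before the array) or
     * end + 1 (more than one past the end) is undefined behavior,
     * regardless of whether the result is ever dereferenced. */
    return total;
}
```

Note that the undefined cases in the comment would very likely "work" on any common machine if you tried them, which is exactly the trap the text describes.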
So, what's the advantage of following the standard, rather than your own mental model?
Simply put, it's that the standard is correct, and your own mental model is wrong. Your mental model is typically a simplified view of how things work on your own system, in common cases, with all compiler optimizations disabled; compiler vendors do not generally make an effort to conform to it, especially when it comes to optimizations. (If you don't hold up your end of the contract, you can't expect any specific behavior from the compiler: garbage in, garbage out.)
> People seem to not like non-portable solutions, even if they work for me.
It might be better to say, "even if they *seem* to work for me". Unless your compiler specifically documents that a given behavior will work (that is, unless you're following an enriched standard consisting of the standard proper plus your compiler's documentation), you don't know that it really works, or that it's really reliable. For example, signed integer overflow typically wraps around on many systems (so `INT_MAX + 1` typically yields `INT_MIN`), except that compilers "know" that signed integer arithmetic never overflows in a (correct) C program, and often perform very surprising optimizations based on this "knowledge".
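A classic illustration of such an optimization (the function name `always_greater` is mine, chosen for this sketch): because signed overflow is undefined, a compiler is entitled to assume it never happens and simplify accordingly.

```c
#include <limits.h>

/* An optimizing compiler may fold this whole function to "return 1":
 * since signed overflow is undefined behavior, it may assume
 * x + 1 > x holds for every valid execution. */
int always_greater(int x) {
    return x + 1 > x;
}

/* For any x below INT_MAX the result is well-defined and is 1.
 * Calling always_greater(INT_MAX) is undefined behavior: on a
 * wrapping machine without optimization it may return 0, while
 * the optimized version returns 1 -- the "surprising" part. */
```

The point is that the wrapping behavior you observe in a quick test is not a promise; the same code can behave differently at a different optimization level.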