The variable x only exists while the block is being executed. That's how it is defined in the language.
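For example (a minimal sketch; the function name demo and the values are made up), the name x cannot even be referred to once its block has ended:

#include <stdio.h>

void demo(void) {
    {
        int x = 1;          /* x exists only within these braces          */
        printf("%d\n", x);  /* fine: x is alive and in scope here         */
    }
    /* printf("%d\n", x);      error: x is no longer in scope, and the
                               object's lifetime has already ended        */
}

int main(void) {
    demo();
    return 0;
}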
Now the compiler has to provide storage for x while the block is being executed. Suppose it figures out "I need 16 bytes while the block is not being executed, and 24 bytes while it is." It can follow one of two strategies: allocate 16 bytes, allocate 8 more bytes when the block is entered, and release those 8 bytes when leaving the block; or simply allocate 24 bytes for the whole function. Because your code cannot observe the difference, the compiler is free to choose either way. It would then weigh the options: "I can either execute two extra instructions every time I enter the block, or keep eight bytes allocated but unused most of the time." I think the eight unused bytes are the better deal.
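A minimal sketch of that situation (the function name f, the variable names, and the 16/24-byte figures are purely illustrative; real frame layouts depend on the compiler and ABI):

#include <stdio.h>

static long f(int flag) {
    long a = 1, b = 2;        /* needed for the whole function, say 16 bytes  */
    if (flag) {
        long x = 3;           /* needed only while this block executes        */
        b += x;               /* the compiler may bump the stack pointer here */
    }                         /* ...or reserve x's slot at function entry     */
    return a + b;
}

int main(void) {
    printf("%ld\n", f(1));    /* prints 6 */
    return 0;
}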
On the other hand, assume it was not "int x" but "int x[20000]". Now it is two instructions vs. 80,000 or 160,000 bytes (depending on the size of int). The compiler might well decide the other way this time, because the numbers are different.
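A hypothetical sketch of that case (the function name fill_and_sum and the values are made up; whether any given compiler actually grows the stack lazily here is its own business):

#include <stdio.h>

static long fill_and_sum(int wanted) {
    long total = 0;
    if (wanted) {
        int x[20000];                    /* large block-local array,        */
        for (int i = 0; i < 20000; ++i)  /* ~80,000 bytes with a 4-byte int */
            x[i] = i;
        for (int i = 0; i < 20000; ++i)
            total += x[i];
    }
    return total;
}

int main(void) {
    printf("%ld\n", fill_and_sum(0));    /* common path: no big array needed */
    printf("%ld\n", fill_and_sum(1));    /* prints 199990000                 */
    return 0;
}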
Now assume we have:
for (int i = 0; i < 10000; ++i) {
    int x[20000];
    ...
}
It would most likely be best not to allocate x 10,000 times inside the loop, but to reserve it once just before the loop and release it just after, because according to the language x exists almost the whole time anyway: it only ceases to exist for the tiny moment between the end of one iteration's body and the start of the next (while the "++i" and the "i < 10000" run).
But it's completely up to the compiler.
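A sketch of that transformation (the function names as_written and as_compiled are made up; this shows what the "as if" rule permits, not what any particular compiler actually emits):

#include <stdio.h>

/* As written above: x is conceptually (re)created on every iteration. */
static long as_written(void) {
    long total = 0;
    for (int i = 0; i < 10000; ++i) {
        int x[20000];          /* lifetime: a single iteration */
        x[0] = i;
        total += x[0];
    }
    return total;
}

/* Roughly what the compiler is allowed to do instead: reserve the storage
   once, before the loop, and release it afterwards.  Because every element
   is written before it is read, reusing the same storage produces exactly
   the same observable behaviour. */
static long as_compiled(void) {
    long total = 0;
    int x[20000];              /* one reservation for the whole loop */
    for (int i = 0; i < 10000; ++i) {
        x[0] = i;
        total += x[0];
    }
    return total;
}

int main(void) {
    printf("%ld %ld\n", as_written(), as_compiled());   /* same numbers */
    return 0;
}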