I see and work with a lot of software written by a fairly large group of people, and lots of times I see integer type declarations that are wrong. The two examples I see most often: first, declaring a regular signed integer when the value can never be negative; second, declaring a full 32-bit word when a much smaller size would do the trick. I wonder whether the second has to do with the compiler aligning values to the nearest 32-bit word boundary, but I'm not sure that's actually the case most of the time.
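To make the two patterns concrete, here's a rough sketch in C (the names are just made up for illustration, and I'm only using C because it's compact, not because the question is C-specific):

    #include <stdint.h>

    /* Pattern 1: plain signed int for a value that can never be negative. */
    int      retry_count   = 0;   /* the default a lot of people reach for */
    unsigned retry_count_u = 0;   /* what the domain actually calls for    */

    /* Pattern 2: a full 32-bit word for a value with a tiny range. */
    int32_t  day_of_month    = 1; /* only ever holds 1..31                 */
    uint8_t  day_of_month_u8 = 1; /* 8 bits is plenty                      */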
When you declare an integer variable, do you usually pick the size and signedness with the actual range in mind, or do you just reach for whatever the default "int" is?
edit - Voted to reopen, as I don't think the answers adequately deal with languages other than C/C++, and the "duplicates" are all C/C++-based. They fail to address strongly typed languages such as Ada, where bugs due to mismatched types can't slip through: the code either won't compile, or, if the mismatch can't be caught at compile time, it throws an exception at run time. I purposely avoided naming C/C++ specifically, because other languages treat integers of different types and sizes quite differently, even though most of the answers seem to be based on how C/C++ compilers act.
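For contrast, here's a rough C sketch (again with made-up names) of the kind of silent narrowing that a strongly typed language like Ada would not let through, since distinct integer types either refuse to compile when mixed or raise Constraint_Error on an out-of-range conversion:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Made-up example: a 32-bit count squeezed into a 16-bit slot. */
        uint32_t line_count = 70000u;
        uint16_t stored     = (uint16_t) line_count;  /* silently wraps mod 65536 */

        /* Prints 4464, not 70000 -- no compile error, no run-time error.         */
        printf("%u\n", (unsigned) stored);
        return 0;
    }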