As a general engineering hobbyist, I am learning more about the world of microcontrollers every day. One thing I don't quite understand, though, is the significance of the bit width of a microcontroller.
I have been using the ATmega8 for several months, and it seems to work great for my purposes. I know how things like clock speed, memory, number of I/O pins, and types of communication buses differentiate one microcontroller from another. But I don't quite understand the significance of, say, 8-bit vs. 16-bit vs. 32-bit. I do understand that a wider part can hold and operate on larger numbers in a single register, but how does this impact my decision? If I am designing a product, under what hypothetical scenario would I decide that an 8-bit processor simply won't do, and that I need something wider?
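To make my question concrete, here is a minimal sketch of how I picture the difference showing up (assuming avr-gcc targeting the ATmega8; the variable names and values are just for illustration):

```c
#include <stdint.h>

/* On an 8-bit AVR like the ATmega8, registers are 8 bits wide,
   so any wider arithmetic has to be synthesized from byte operations. */

volatile uint8_t  a8 = 200, b8 = 55;
volatile uint32_t a32 = 100000UL, b32 = 23456UL;

int main(void)
{
    uint8_t  s8  = a8 + b8;   /* fits in one register: a single ADD */
    uint32_t s32 = a32 + b32; /* spans four registers: roughly one ADD
                                 plus three ADC (add-with-carry)
                                 instructions, one byte at a time */
    (void)s8;
    (void)s32;
    return 0;
}
```

As I understand it, the second addition costs several instructions on the 8-bit part but would be a single instruction on a 32-bit core. What I don't have a feel for is when that difference starts to matter in a real design.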
Is there any reason to believe that a theoretical 32-bit variant of the ATmega8 (all other things equal) would be superior to the 8-bit version, if such a device were possible?
I may be speaking nonsense, but I guess that's a result of my confusion.