I know the data bus size defines the size of the processor, but can the processor actually process data above that limit?
Would really appreciate some explanation on this.
The external data-bus width doesn't always agree with the processor's internal structure. A well-known example is the old Intel 8088 processor, which was identical to the 16-bit 8086 internally, but had an 8-bit external bus.
Data-bus width is not a real indicator of a processor's power, though a narrower bus may reduce data throughput. The actual power of a processor is determined by the CPU's ALU (Arithmetic and Logic Unit). 8-bit microcontrollers have 8-bit ALUs, which can process data in the range 0..255. That's enough for text processing: the ASCII character table only needs 7 bits. The ALU can do some basic arithmetic, but for larger numbers you'll need software help. If you want to add 100500 + 120760, an 8-bit ALU can't do that directly; not even a 16-bit ALU can. So the compiler will split the numbers, do separate calculations on the parts, and recombine the results afterwards.
Suppose you have a decimal ALU which can process numbers of up to 3 decimal digits. The compiler will split 100500 into 100 and 500, and 120760 into 120 and 760. The CPU can calculate 500 + 760 = 260, with a carry of 1. It adds that carry to 100 + 120, so that the high part is 221. It then recombines the two parts to get the final result: 221260. This way you can do anything. The 3-digit limit was no obstacle to processing 6-digit numbers, and you can write algorithms for processing 10-digit numbers or more. Of course the calculation will take longer than on an ALU which can do 10-digit calculations natively, but it can be done.
Any computer can simulate any other computer.
The humble 8-bit processor can do exactly what a supercomputer can, given the necessary resources, and the time. Lots of time :-).
A concrete example is arbitrary-precision calculators. Most (software) calculators have something like 15 decimal digits of precision; if numbers have more significant digits they round them, and possibly switch to mantissa + exponent form to store and process them. But arbitrary-precision calculators expand on the example calculation I gave earlier, and allow you to multiply
\$ 44402958666307977706468954613 \times 595247981199845571008922762709 \$
for example, two numbers (they're both prime) which would need a far wider data bus than my PC's 64 bits. An extreme example: Mathematica gives you \$\pi\$ to 100000 digits in a tenth of a second. Calculating \$e^{\pi \sqrt{163}}\$ \$^{(1)}\$ to 100000 digits takes about half a second. So, while you would expect working with data wider than the data bus to be taxing, it's often not really a problem. For a PC running at 3 GHz this may not be surprising, but microcontrollers are getting faster as well: an ARM Cortex-M3 may run at speeds greater than 100 MHz, and for the same money you get a 32-bit bus too.
\$^{(1)}\$ About 262537412640768743.99999999999925007259, and it's not a coincidence that it's nearly an integer!
The size of the data bus only determines how much data can be transferred over the bus at any one instant, and has no effect beyond that.
The maximum value that a CPU can handle is unlimited for most CPUs, but the maximum value that can be handled at any one instant is limited (usually) by the width of the ALU, or Arithmetic Logic Unit. And the ALU width is (usually) the same as the width of the CPU's internal registers.
The reason why the max value is unlimited is because CPUs can divide up a value into sections, and process each section independently. We do this when we do math using pencil and paper. We divide up the numbers into digits of 0 to 9. CPUs do this by dividing the value into 8, 16, 32, or 64 bit chunks.
Sometimes CPUs will have a data bus width that is narrower than the ALU width. This is done to reduce the cost of the chip and the memory it is interfacing to. The original IBM PC used the 8088 CPU which was a 16 bit CPU with an 8-bit data bus. Even today there are CPUs that do this. I recently used an ARM 9 CPU that is 32 bits internally and 16 bits on the data bus.
Other CPUs might have an external data bus that is WIDER than the ALU. This increases cost but also improves the CPU-to-RAM bandwidth.
This whole subject goes back to a different question (that nobody has asked), which is: What does it mean to have an 8, 16, 32, or 64 bit CPU? The answer is far from simple! The simple answer to this question is going to be inaccurate and the complex answer is very complex.
No, the "size" of a processor is not really the width of its data bus. For example, the 8088 was an 8-bit-bus version of the 8086, but both were considered "16 bit" processors. Unless something else is specifically stated, the bit size of a processor means the ALU width, which is the width of the word that it can natively perform arithmetic and logical manipulations on. The reason both the 8088 and 8086 were 16 bit processors is because they had 16 bit ALUs internally. That means they could, for example, add two 16 bit numbers as an atomic operation.
Nowadays things get a little more confusing since a "processor" may have multiple cores or other parallelism. It is important to distinguish between bit width and parallelism, whether at the high level like multiple CPU cores, or at the low level like some SIMD (Single Instruction Multiple Data) architectures. Modern x86 processors commonly used in PCs are now 64 bit. A quad core isn't a 256 bit processor, but 4 64-bit processors. Similarly, a SIMD processor that can perform 8 16-bit adds at the same time is not a "128 bit" processor. Note that operating on multiple 16 bit words is not the same as operating on a single 128 bit word that is understood as a whole. One obvious difference is that there is no carry between the individual 16 bit adders, which would be required to call it a 128 bit adder. See https://electronics.stackexchange.com/a/42788/4512 for more discussion on some of these issues.
Yes, a processor can process any size data, but the larger that data is compared to its native size, the more instructions are required to do the wide manipulation in software.
The question is a little vague and my answer was re-written.
The size of the physical data bus is not the limiting factor; rather, it is the size of the addressable space. Memory space can be segmented in a variety of ways. GPUs use 256-, 512- and 1024-bit data buses. Modern CPUs use 64- and 128-bit data buses with dual ports.
Obviously a wider bus makes addressing large amounts of data easier, but it must be packed and unpacked by memory-management chips and controlled so the contents are not wasted and filled with zeroes in unused bits. (MBZ bits = reserved, must be zero.)
The only way to extend the data bus beyond its physical size is to use virtual memory management with a VM-capable architecture, BIOS, OS and drivers. In the early days, 64-bit VM was used on 32-bit CPUs.
Forgive me if my assumptions about your question were wrong. (Still not sure what you want.)