I was theorizing a new kind of 64-bit number format for data storage, but then started wondering whether it could become practical if implemented in hardware. (I believe it would be too slow if only software could interpret it.)
Encoding
Bits 0-7: a (8-bit signed integer)
Bits 8-15: b (8-bit signed integer)
Bits 16-39: c (24-bit signed floating-point: 1b sign, 7b exponent, 16b mantissa)
Bits 40-63: d (24-bit unsigned floating-point: 8b exponent, 16b mantissa)
Formula:
a^b + c·d (a raised to the power of b, plus the product of c and d)
Note that I've asked on Stack Overflow how the two 24-bit floating-point numbers should be encoded and what their minimum values should be.
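Since the exact float encodings are still open (see the note above), here's a rough software model of the decoder. It assumes an IEEE-754-style layout for c and d: implicit leading 1, exponent bias 63 for the 7-bit exponent and 127 for the 8-bit one, and no subnormals, zeros, infinities, or NaNs. The function names and biases are my own placeholders, not part of the proposal.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed (not settled): both custom floats are IEEE-like, with an implicit
 * leading 1, exponent bias 63 for the 7-bit exponent and 127 for the 8-bit
 * one, and no subnormals, zeros, infinities, or NaNs. */

static double decode_float24_signed(uint32_t bits) {
    /* 1-bit sign, 7-bit exponent, 16-bit mantissa */
    int sign     = (bits >> 23) & 0x1;
    int exponent = (bits >> 16) & 0x7F;
    double value = ldexp(1.0 + (bits & 0xFFFF) / 65536.0, exponent - 63);
    return sign ? -value : value;
}

static double decode_float24_unsigned(uint32_t bits) {
    /* 8-bit exponent, 16-bit mantissa, no sign bit */
    int exponent = (bits >> 16) & 0xFF;
    return ldexp(1.0 + (bits & 0xFFFF) / 65536.0, exponent - 127);
}

double decode_word(uint64_t w) {
    int8_t a = (int8_t)(w & 0xFF);                                        /* bits 0-7   */
    int8_t b = (int8_t)((w >> 8) & 0xFF);                                 /* bits 8-15  */
    double c = decode_float24_signed((uint32_t)((w >> 16) & 0xFFFFFF));   /* bits 16-39 */
    double d = decode_float24_unsigned((uint32_t)((w >> 40) & 0xFFFFFF)); /* bits 40-63 */
    return pow((double)a, (double)b) + c * d;                             /* a^b + c*d  */
}

int main(void) {
    printf("%g\n", decode_word(0x0123456789ABCDEFULL));
    return 0;
}
```

(A double can't hold the format's full precision, since a^b alone can need hundreds of bits; an exact decoder would need a much wider intermediate type, but this is enough to show the structure an ALU would implement.)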
Justification
Using this format, Wolfram|Alpha says the smallest positive non-zero number is roughly the ratio of 3 Planck lengths to 1 kilometer, and the largest positive number is vastly larger than the number of atoms in a ball of carbon the size of the known universe.
Problem
Let's, for this question only, ignore the fact that there are likely huge gaps in the representable values.
Decoding is trivial: we can easily construct an ALU that raises one 8-bit number to the power of another and then adds the result to the product of the two 24-bit floating-point numbers. The problem comes in encoding. I have no idea what algorithm might discover the best way to encode 12,345,678,901,234,567,890,123,456,789,012,345,678,901,234,567,890.98765432109876543210987654321, despite the fact that that's a perfectly reasonable number for this format to represent. Still, I know that whatever algorithm is best for this, it would be faster in hardware than in software (duh). My question is: could such an algorithm practically be built onto a silicon wafer, or would it be too complex, require too much memory, etc.?
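To make the encoding problem concrete, here's a deliberately naive brute-force encoder sketch under the same assumed layout as the decoder above: it tries all 65,536 (a, b) pairs, fixes d = 1.0, and rounds the residual x - a^b into c. It throws away everything d could contribute and works only at double precision (so it can't even represent the example value above exactly), but it gives a feel for the search a real encoder would have to do much more cleverly.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Same assumed layout as the decoding sketch above. */
static double decode_float24_signed(uint32_t bits) {
    int sign     = (bits >> 23) & 0x1;
    int exponent = (bits >> 16) & 0x7F;
    double value = ldexp(1.0 + (bits & 0xFFFF) / 65536.0, exponent - 63);
    return sign ? -value : value;
}

/* Round v to the assumed 24-bit signed float: 1-bit sign, 7-bit exponent
 * (bias 63), 16-bit mantissa.  Out-of-range magnitudes are clamped; how
 * zero and tiny values should really be encoded is one of the open
 * questions mentioned above. */
static uint32_t encode_float24_signed(double v) {
    uint32_t sign = v < 0.0 ? 1u : 0u;
    v = fabs(v);
    if (v < ldexp(1.0, -63)) v = ldexp(1.0, -63);                               /* underflow clamp */
    if (v > ldexp(2.0 - 1.0 / 65536.0, 64)) v = ldexp(2.0 - 1.0 / 65536.0, 64); /* overflow clamp  */
    int e;
    double m = frexp(v, &e);                 /* v = m * 2^e, m in [0.5, 1)  */
    int E = e - 1;                           /* v = (2m) * 2^E, 2m in [1, 2) */
    uint32_t man = (uint32_t)llround((2.0 * m - 1.0) * 65536.0);
    if (man == 65536u) { man = 0u; E += 1; } /* mantissa rounded up to 2.0  */
    if (E > 64) { E = 64; man = 0xFFFFu; }   /* re-clamp after round-up     */
    return (sign << 23) | ((uint32_t)(E + 63) << 16) | (man & 0xFFFFu);
}

static const uint32_t D_ONE = 127u << 16;    /* d = 1.0 under the assumed bias */

uint64_t encode_word(double x) {
    double best_err = INFINITY;
    uint64_t best = 0;
    for (int a = -128; a <= 127; a++) {
        for (int b = -128; b <= 127; b++) {
            double base = pow((double)a, (double)b);   /* a^b */
            if (!isfinite(base)) continue;
            uint32_t c_bits = encode_float24_signed(x - base);
            double err = fabs(x - (base + decode_float24_signed(c_bits)));
            if (err < best_err) {
                best_err = err;
                best = (uint64_t)(uint8_t)a
                     | ((uint64_t)(uint8_t)b << 8)
                     | ((uint64_t)c_bits << 16)
                     | ((uint64_t)D_ONE << 40);
            }
        }
    }
    return best;
}

int main(void) {
    printf("0x%016llx\n", (unsigned long long)encode_word(123456.789));
    return 0;
}
```

Even this crude strategy needs tens of thousands of pow-and-compare steps per value, which hints at why a hardware encoder might be expensive.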
Also, I'm very new to this kind of question. If you can help me reword it, I'd be very grateful.