The answer is not as obvious as it might seem, for several reasons.
Well, nearly all modern computers (essentially, since the mid-1960s) use two's complement (a jargon term that became official; a more proper name would be "zero's complement"). When you perform an ordinary addition or subtraction at the assembly level, the result is truncated to the N least significant bits, where N is the word length in use. The carry and overflow flags of this operation are either put into a Flags/PS/etc. register (typical for CISC, but also for SPARC, ARM, POWER) or ignored (MIPS, Alpha, RISC-V, etc.). Adding 30000 to 30000 with a 16-bit word, you'll get -5536 as the result of such truncation. You may treat carry+result as an (N+1)-bit result for an unsigned operation; for a signed one, the "true sign" of the result is (SF xor OF) (x86 names) or (N xor V) (NZVC names).
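This wrap-around can be demonstrated directly in Rust, whose `wrapping_add` performs exactly this truncating addition (a minimal sketch, using `i16` to model a 16-bit word):

```rust
fn main() {
    // 30000 + 30000 doesn't fit in a signed 16-bit word, so the result
    // is truncated to 16 bits: 60000 mod 65536 = 60000, which as a
    // two's-complement value is -5536.
    let sum = 30000i16.wrapping_add(30000);
    assert_eq!(sum, -5536);

    // The same bit pattern reinterpreted as unsigned is simply 60000,
    // i.e. the carry-free (N+1)-bit view mentioned above:
    assert_eq!(sum as u16, 60000);
    println!("{} signed, {} unsigned", sum, sum as u16);
}
```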
But even at the assembly level the result is not so unambiguous: one can use an exception-on-overflow mode or instruction (e.g. MIPS `add` raises an exception on signed overflow, while `addu` doesn't).
Similarly for multiplication: if you want only N bits of the product of two N-bit values, you needn't distinguish signed from unsigned multiplication; you get the same truncated value either way.
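A quick Rust sketch of this property (my own illustration): the low N bits of a product are identical whether the operands are interpreted as signed or unsigned.

```rust
fn main() {
    let a: i16 = -1; // bit pattern 0xFFFF
    let b: i16 = 3;

    // Signed truncated product: -3, bit pattern 0xFFFD.
    let signed = a.wrapping_mul(b);

    // The same bit patterns reinterpreted as unsigned: 65535 * 3 = 196605,
    // truncated to 16 bits, is 65533 = 0xFFFD — the same bits.
    let unsigned = (a as u16).wrapping_mul(b as u16);

    assert_eq!(signed as u16, unsigned);
    println!("{:#06x} == {:#06x}", signed as u16, unsigned);
}
```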
The same approach is adopted by a wide range of programming languages, either as the default mode or via specific operators. Some of the most important ones:
- Java - default operations on +, -, *
- C# - operations on +, -, * in "unchecked" mode
- Go - default operations on +, -, *
- Swift - operations on &+, &-, &*
- C, C++ - operations on +, -, * on unsigned numbers
all of them truncate the result and ignore overflow.
But since this is error-prone, there is a stable trend toward alternatives that provide some checking of the result. They may be defined in such a way that, if overflow happens, the result is not defined at all.
For example, Rust has the following functions for its numeric types:
- checked_add() returns `Option<T>` for a type T; so, if overflow happens, the `Option<>` value is empty (`None`).
- overflowing_add() returns a pair of the N-bit value (truncated if it overflowed) and an overflow flag;
- wrapping_add() returns just the truncated N-bit value.
The first of them (checked_add), as noted, doesn't carry a real value if overflow happened. If you call `unwrap()` on such a result, you'll get a runtime panic.
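All three behaviors can be seen side by side (a small sketch using `i32`):

```rust
fn main() {
    let x = i32::MAX;

    // checked_add: None on overflow, Some(result) otherwise.
    assert_eq!(x.checked_add(1), None);
    assert_eq!(1i32.checked_add(2), Some(3));

    // overflowing_add: (truncated value, overflow flag).
    assert_eq!(x.overflowing_add(1), (i32::MIN, true));

    // wrapping_add: just the truncated value, no flag.
    assert_eq!(x.wrapping_add(1), i32::MIN);

    // x.checked_add(1).unwrap() would panic at runtime here,
    // because the Option is None.
    println!("all overflow modes behaved as expected");
}
```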
Analogs in other languages are:
- Java: Math.addExact() and friends, which raise an exception on overflow.
- C#: operators +, -, * in "checked" mode: the same.
- Swift: operators +, -, * (but functions like Rust's overflowing_add also exist).
- GCC extensions: functions like __builtin_add_overflow that return a boolean overflow indicator.
But the weirdest case is C and C++ signed integer arithmetic. Avoiding overflow there is declared to be the programmer's responsibility: signed overflow is undefined behavior. If the compiler detects an overflow condition, it can either narrow the assumed value sets of the operator's arguments or insert a so-called poison value. The latter is not really a value but a marker that an improper operation was requested. Depending on the compiler mode and various (generally unpredictable) circumstances, this can cause an explicit CPU exception to be generated, change the number of loop iterations around the overflowing operation, and so on.
As for me, I'd strictly vote for making the "checked" mode (exception on overflow) the default, and for allowing the mode to be varied with syntax-level modifiers (pragmas, etc.).
Finally, floating point requires a separate look. I'll assume IEEE 754 as the default. It introduces two kinds of special values: INF and NaN (not-a-number). While NaN is formally an allowed value (it has a defined representation, defined processing in standard operations, etc.), its characteristics are deliberately crafted to differ from normal numbers: for example, `a == a` is false if `a` is NaN.
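These IEEE 754 properties are easy to check in Rust (a quick sketch; `f64` follows IEEE 754 binary64):

```rust
fn main() {
    let a = f64::NAN;
    // NaN compares unequal to everything, including itself:
    assert!(a != a);
    assert!(!(a == a));

    // Floating-point overflow doesn't trap; it yields infinity:
    assert_eq!(f64::MAX * 2.0, f64::INFINITY);

    // And an invalid operation like 0.0 / 0.0 produces NaN,
    // not an exception:
    assert!((0.0f64 / 0.0).is_nan());

    println!("a == a for NaN: {}", a == a);
}
```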
With the spread of languages that provide combined data types, the share of approaches like Rust's `Option<>` will grow, and, correspondingly, the simple truncating mode will decline in popularity. I treat this as a good sign of movement toward program safety.