
Not that it's really a problem for anyone who has faced this syntactic issue before, but I see a wild amount of confusion stemming from the use of the caret (^) as the XOR operation in lieu of the widely accepted mathematical exponentiation operation.

Of course there are a lot of places where the (mis-)use of the caret is explained and corrected, but I haven't come across any definitive sources as to why the caret was given a different meaning.

Was it a matter of convenience? An accident? Obviously the reasoning could be different for the various languages, so information in any regard would be insightful.

Aaron Hall

1 Answer


Although there were older precursors, the influential French mathematician René Descartes is usually credited with introducing superscripted exponents (aᵇ) into mathematical writing, in his work La Géométrie, published in 1637. This is the notation still universally used in mathematics today.

Fortran, which dates to 1954, is the oldest programming language widely used for numerical computations that provides an exponentiation operator; the operation is denoted by a double asterisk **. It should be noted that many computers at that time used 6-bit character encodings that did not provide a caret character ^. The use of ** was subsequently adopted by the creators of various more recent programming languages that offer an exponentiation operation, such as Python.

The first widely adopted character set that contained the caret ^ was the 7-bit ASCII encoding which was first standardized in 1963. The oldest programming language I am aware of that used the caret to denote exponentiation is BASIC, which dates to 1964. Around the same time IBM adopted the EBCDIC character encoding, which also includes the caret ^.

The C language came into existence in 1972. It does not provide an exponentiation operator; instead, it supports exponentiation via library functions such as pow(). Therefore no symbol needed to be set aside for exponentiation in C, nor in later languages in the C family, such as C++ and CUDA.

On the other hand, and uncommon for programming languages up to that time, C provides symbols for bitwise operations. The number of special characters available in 7-bit ASCII was limited, and since there was a "natural affinity" of other operations to certain special characters, e.g. & for AND and ~ for NOT, there were not all that many choices for the symbol for XOR.

I am not aware of a published rationale provided by Ritchie or Kernighan as to why they chose ^ to denote XOR specifically; Ritchie's short history of C is silent on this issue. A look at the specification for the precursor to C, the language B, reveals that it did not have an XOR operator, but already used all special characters other than ^, $, @, #.

[Update] I sent email to Ken Thompson, creator of B and one of the co-creators of C, inquiring about the rationale for choosing ^ as C's XOR operator, and asking permission to share the answer here. His reply (slightly reformatted for readability):

From: Ken Thompson
Sent: Thursday, September 29, 2016 4:50 AM
To: Norbert Juffa
Subject: Re: Rationale behind choice of caret as XOR operator in C?

it was a random choice of the characters left.

if i had it to do over again (which i did) i would use the same operator for xor (^) and bit complement (~).

since ^ is now the better known operator, in go, ^ is xor and also complement.
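Thompson's remark about Go is easy to verify: in Go, binary `^` is XOR and unary `^` is bitwise complement, taking over the role that `~` plays in C. A minimal sketch:

```go
package main

import "fmt"

func main() {
	a, b := 5, 3

	// binary ^ is bitwise XOR, just as in C
	fmt.Println(a ^ b) // 6

	// unary ^ is bitwise complement, the role C's ~ plays
	fmt.Println(^a) // -6
}
```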

The use of ^ for exponentiation in "mathematics" that you refer to is actually usage established at a much later date for typesetting systems such as Knuth's TeX which dates to 1978, command line interfaces for algebra systems such as Mathematica which dates to 1988, and graphing calculators in the early 1990s.

Why did these products adopt the use of ^ for exponentiation? In the case of calculators I suspect the influence of BASIC. Throughout the 1980s it was a very popular first programming language and was also embedded into other software products. The notation therefore would have been familiar to many buyers of the calculators. My memory is vague, but I believe there were even calculators that actually ran simple BASIC interpreters.

njuffa
  • That is a nice history lesson. So the real question would be "Why did TeX (and other software from that period) decide to use the XOR symbol as exponentiation?" ;) – AxelH Sep 19 '16 at 10:53
  • @AxelH Actually, it does not. TeX uses `^` for *superscript*. And it makes sense to me to use an "up" character for superscript and a "down" character (`_`) for subscript. – svick Sep 19 '16 at 13:08
  • @AxelH I agree with svick that the `^` suggests an up arrow, which in turn hints at the traditional position of a superscripted exponent, so it is a bit of a natural fit. I am sure Donald Knuth would respond to an email inquiring about his decision to use `^` for exponentiation in TeX, but given that he is busy working on more important things, such as completing TAOCP, I did not want to take that step (I did consider it). – njuffa Sep 19 '16 at 13:46
  • @svick and njuffa, I agree with both of you, of course; it makes sense. – AxelH Sep 19 '16 at 13:54
  • According to https://en.wikipedia.org/wiki/Caret, Algol 60 used an upwards pointing arrow for exponentiation, and notes that originally ASCII also had an up arrow at the same code point where it currently has the caret symbol, so the two were clearly equated to each other around the same time. – Periata Breatta Sep 23 '16 at 04:32
  • @AxelH from reading this answer it seems the use of the caret as exponentiation in TeX (in 1978 or later) already had predecessors such as BASIC. – Didier L Sep 23 '16 at 17:57
  • Also note that Knuth was already using [the up-arrow notation in mathematics for iterated exponentiation](https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation) in 1976, in which a single up-arrow is equivalent to exponentiation. – Didier L Sep 23 '16 at 18:03
  • Really nice historic answer. I would just like to add a likely reason for C implementing bit operators instead of an exponentiation operator: the goal of C is to be a low-level language that allows the programmer easy access to a large part of the stuff that CPUs support natively. And CPUs always provide bit manipulation instructions, yet I do not know of a single CPU that has special instructions for exponentiation. As such, exponentiation always requires a loop (at least with integer arithmetic), and there is not a single feature in C that hides a loop behind a language construct. – cmaster - reinstate monica Sep 24 '16 at 20:31
  • If anything, the math symbol closest to the caret is '∧' (logical AND), whereas XOR is '⊻'. – Pete Kirkham Sep 27 '16 at 08:34
  • The 1963 ASCII standard did not have a caret character. Code point 94 was an upward pointing arrow. That was changed to the caret in 1965 (https://en.wikipedia.org/wiki/Caret#Circumflex_accent). Computer terminals and teleprinters that rendered code 94 as an upward arrow were still in widespread use throughout much of the 1970s. – Solomon Slow Sep 29 '16 at 16:43
  • @jameslarge Thanks for the additional information. While I have no reason to doubt the Wikipedia version, I have been unable to independently corroborate this history of the caret from primary sources, nor the use of an up-arrow for exponentiation in Algol 60 mentioned in an earlier comment (my only book on Algol 60 does not seem to use it). My involvement with computers started in 1981, too late for personal experience with the up-arrow/caret issue. I am currently considering editing a sentence into my answer that simply points to the relevant comments. – njuffa Sep 29 '16 at 16:55
  • @njuffa Holy crap, you even went out of your way to email Ken Thompson. Here, have a bounty. – Qix - MONICA WAS MISTREATED Sep 29 '16 at 18:40
  • BASIC's exponentiation and C's XOR are not the only interpretations. Let me remind you that for Pascal, Mr. Wirth decided to "mis-use" the caret for pointer operations. – tofro Sep 29 '16 at 19:05
  • @tofro While that is definitely so (I used Pascal myself for many years, so have first-hand knowledge of the caret being used for pointer notation), it seems to broaden the scope beyond what the question asked, which is specifically about the use for exponentiation vs. XOR. – njuffa Sep 29 '16 at 19:16
  • Just picking a bit of a nit here: TeX does not DO exponentiation (as far as I'm aware, anyway, and if it does it's far from common). It displays superscripts, which can be used to show equations involving exponents, but can also be used for other superscripts, e.g. $U^{238}$. – jamesqf Nov 16 '16 at 23:39
  • "calculators that actually ran simple BASIC interpreters" – TI's graphing calculators still run simple versions of BASIC, though I don't know when they first implemented it. – mbrig Jul 27 '17 at 20:07
  • I just looked in the [BCPL Reference Manual](https://www.bell-labs.com/usr/dmr/www/bcpl.html), and it seems that this ancestor of C and B did have an XOR operator, but it was spelt `NEQV` or `≢`. So C didn't (consciously or otherwise) inherit it that way. – Toby Speight Feb 12 '20 at 14:18