
As far as I have been able to find, the first language to use ^ for exponentiation was BASIC, in 1964. Earlier languages, such as Fortran, used other notations, for example ** (although in Fortran's case this was likely influenced by its limited character set compared with later languages).

My question is, why did BASIC choose to use ^ for exponentiation? It is not a case of simply using existing mathematical notation (unlike + and -), since the ^ symbol was not initially used in math to mean exponentiation (e.g. TeX usage is more recent than BASIC).

I am looking for an objective answer backed up with a proper source.


As pointed out in the accepted answer, the original 1964 BASIC used ↑ (up-arrow) for exponentiation (as can be found in the original manual, page 5). ASCII did not even include a ^ until 1965. Later versions of BASIC did, however, use ^ for exponentiation.

Robert Harvey
  • https://en.wikipedia.org/wiki/Caret#Surrogate_symbol_for_superscript_and_exponentiation BASIC is a descendant of ALGOL60 and the Wikipedia article explains why ALGOL60 used it, but I'm not sure if a Wikipedia article would be considered a proper source. – Dan Wilson Oct 16 '18 at 20:35
  • https://try-mts.com/algol-60-language-features/ seems to contradict Wikipedia. I'm not sure which to trust, although the Wikipedia article is missing a citation. – CoffeeTableEspresso Oct 16 '18 at 20:39
  • Your argument that "*the ^ symbol is not really used in math to mean exponentiation*" also seems a bit odd given that the caret was absolutely used to indicate a superscript (exponentiation) before rich text was widely available. It's still used today on Math Stack Exchange for LaTeX markup. – Dan Wilson Oct 16 '18 at 20:41
  • @CoffeeTableEspresso In the page you linked the operator is displayed as up-arrow ↑ not as a caret ^. From a computer history point of view these were largely the same glyph, and different machines differed in how they displayed that character. This is all long before ASCII which included carets but not arrows. Similarly, some languages used ← for assignment, which would typically be rendered as _ underscore in an ASCII environment. – amon Oct 16 '18 at 20:43
  • Again, I have not found any sources before 1964 indicating that the caret was used to indicate superscripts. Usages such as TeX or modern calculators are both more recent than Basic's usage. – CoffeeTableEspresso Oct 16 '18 at 20:44
  • The Wikipedia article also explains that "the caret can signify exponentiation (3^5 for 3⁵), *where the usual superscript is not readily usable*." That *really* ought to be good enough for you, despite the [citation needed] warning on the ALGOL60 reference. – Robert Harvey Oct 16 '18 at 20:45
  • Related: https://softwareengineering.stackexchange.com/a/331392/308851 – Dan Wilson Oct 16 '18 at 20:46
  • @RobertHarvey I won't accept that answer since (a) graphing calculators didn't exist when Basic was invented, so they couldn't have influenced its design, (b) I haven't been able to find any sources indicating that `^` was used to indicate superscripts at any point before 1964, and (c) I believe the link I provided shows that ALGOL 60 actually didn't use the `^` for exponentiation. Even if ALGOL 60 did use the caret for exponentiation, that would just change my question to "why did ALGOL 60 use the caret for exponentiation?". – CoffeeTableEspresso Oct 16 '18 at 20:50
  • @DanWilson that post you linked was actually what got me wondering about Basic. I'd previously wondered about `^` for XOR, but that post raised a few questions about Basic in the answer. – CoffeeTableEspresso Oct 16 '18 at 20:52
  • Well, you could have linked to that in your post. I hope you find a source, but for the record, the tone in your post is rather off-putting. "*It is not a case of simply using...*" and "*I will not accept any subjective answers...*" – Dan Wilson Oct 16 '18 at 20:58
  • @DanWilson I put the thing about not accepting subjective answers so this wouldn't be closed as primarily opinion based. I put the thing about mathematical notation because I frequently get that as an answer, but it doesn't seem to hold up since Basic predates graphing calculators and TeX, two cases of that usage. I did not mean for my post to be off-putting. – CoffeeTableEspresso Oct 16 '18 at 21:02
  • I'm actually curious now where you found that BASIC (1964) was the first language to use caret for exponentiation. [This page](https://en.wikipedia.org/wiki/BASIC) lists an `EXP` function for such a purpose. – Dan Wilson Oct 16 '18 at 21:05
  • Although modern ASCII specifies that codes 0x5E, 0x5F, and 0x60 are caret, underscore, and back-tick, they were not always thus. Code 0x5E was an up arrow before it was a caret. Code 0x5F was a left arrow, and on some display devices that could handle lower-case, 0x60 was a solid block [on those that couldn't handle lower case, it was an at-sign just like 0x40]. – supercat Oct 16 '18 at 21:12
  • @DanWilson: The EXP function computes `e^x`, but the caret/up-arrow operator can be used to compute `x^y` without having to manually write `exp(log(x)*y)`. – supercat Oct 16 '18 at 21:14
  • I finally tracked down a copy of the first edition of the Basic manual: https://www.dartmouth.edu/basicfifty/basicmanual_1964.pdf. It indicates the up arrow was chosen because it suggests a superscript. The caret wasn't included in ASCII until 1965, whereas Basic came out in 1964. The codepoint that is currently the caret used to be the up arrow, as @supercat has said. I guess all that's left is tracking down a copy of the Basic manual from a few years later, although I suspect the answer is "because the caret used the same code point as the up arrow". – CoffeeTableEspresso Oct 16 '18 at 21:22
  • @RobertHarvey I reformulated: I think it's better to say what OP was looking for ("objective answer") than to say what OP was not looking for (previously: "subjective answers will not be accepted"). But to take your words: does it matter? Honestly, I can very well understand that people may not be interested in such purely historical questions. But why such a downvote storm? Did it irritate people? Did its curiosity break a taboo? Personally, I found it worth looking at, because I have always wondered how it is that some see a power in the caret while others see an XOR. – Christophe Oct 16 '18 at 23:25
  • @Christophe: The fact that the original character was an up arrow on some obscure teletype system is mildly interesting. Its relevance to modern-day Software Engineering problems is essentially zero. If you know the history of this site, you also know that a great deal of effort was expended by many people to make it topically relevant. In doing so, we had to discard many categories of questions that didn't make the cut for various reasons. That's why we defend the scope so vociferously now. "Interesting" and "curiosity piquing" are unfortunately not enough to make a question topical. – Robert Harvey Oct 16 '18 at 23:33
  • @RobertHarvey how is my question any more off-topic than something like https://softwareengineering.stackexchange.com/questions/331388/why-was-the-caret-used-for-xor-instead-of-exponentiation/331392 ? – CoffeeTableEspresso Oct 16 '18 at 23:35
  • If you look right below that question, you'll see that it was controversial enough to spark a meta discussion (linked in the comment). – Robert Harvey Oct 16 '18 at 23:36
  • Yes, I'm aware of the meta-discussion. However, the question was ultimately allowed. I was hesitant to ask because I worried about the same happening here, but figured someone would have the answer I was looking for. – CoffeeTableEspresso Oct 16 '18 at 23:39
  • My beef with the question is not that it is subjective (it's demonstrably objective if someone can find a quote from the original decision-makers). My objection is more about *relevance.* Stack Exchange questions and answers must be about more than simply satisfying one person's curiosity about some obscure fact from the distant past, *especially on this site.* Otherwise we're just one more wretched blog filled with noise that's not useful to anyone. – Robert Harvey Oct 16 '18 at 23:42
  • @CoffeeTableEspresso three things: 1) *"As far as I have been able to find..."* has no sources, 2) *"It is not a case of simply using existing mathematical notation..."* presumes some knowledge about the intentions of the language designers, but again no sources, and 3) you failed to list any other relevant research, including the Software Engineering SE post which piqued your curiosity. IMO it's the same as people on Stack Overflow who say "I tried everything but it didn't work" without actually linking to any of the approaches they tried. My $0.02, just explaining the downvote. – Dan Wilson Oct 17 '18 at 01:27

2 Answers


I think the proper answer is the one you already preemptively rejected.

It is not a case of simply using existing mathematical notation (unlike + and -), since the ^ symbol is not really used in math to mean exponentiation.

Why do you say that? Sure, in "formal" mathematical notation, exponentiation is written as a superscript, but ASCII has no concept of superscripts or subscripts, and there is a simple, informal notation (one might even say "for beginners," which is the B in BASIC) which involves using the ^ symbol. It's the simplest, most intuitive way to express the operation, given the constraints of the ASCII character set and the explicit target audience of "beginners" rather than people with a heavy math or computer science background.
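
As a minimal sketch of that notation in a later, caret-era BASIC dialect (QBasic-style syntax is assumed here; details vary between implementations), the caret spelling of 2³ reads almost like the handwritten superscript, whereas the built-in EXP function alone only computes eˣ and would force you to go through logarithms:

    ' Sketch only; a QBasic-style dialect is assumed, not the 1964 original
    PRINT 2 ^ 3            ' the caret as the ASCII stand-in for the superscript in 2^3
    PRINT EXP(3 * LOG(2))  ' the same value using only EXP and LOG: e^(3*ln 2)

Both lines print 8, the second up to floating-point rounding.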

Mason Wheeler
  • You didn't provide any citations, as the OP requested. :P – Robert Harvey Oct 16 '18 at 20:56
  • This doesn't explain why `^` was chosen though. At the time Basic was invented, there were no graphing calculators, TeX, etc. As far as I can tell, Basic was the first usage of `^` for exponentiation, despite the fact that previous languages like Fortran used other operators for exponentiation, like `**`. – CoffeeTableEspresso Oct 16 '18 at 20:57
  • @CoffeeTableEspresso: Graphing calculators are a red herring. Wikipedia just uses that as an example, but it's an apt one. BASIC chose the caret for two reasons: 1. ASCII doesn't have superscripts, and 2. The caret is suggestive of something raised to a power. No, I don't have proof, and no, I'm not going to find it for you, since the people who made this decision died a long time ago. Frankly, I kinda thought this was common knowledge. – Robert Harvey Oct 16 '18 at 21:00
  • @RobertHarvey The original BASIC from 1964 didn't choose a caret but an up arrow, like ALGOL; it was the ASCII standards committee that chose the caret as a general substitute for the up arrow. – Christophe Oct 16 '18 at 22:19
  • @RobertHarvey my whole question was about why Basic used the caret for exponentiation. The fact that Basic didn't actually use the caret for exponentiation seems relevant to answering the question. – CoffeeTableEspresso Oct 16 '18 at 22:49
  • @RobertHarvey I see that John Kemeny died in 1992, but I'm having trouble finding a death date for Thomas Kurtz. Are you sure he's dead? It's not unheard of for people to live into their 90s. – 8bittree Oct 17 '18 at 04:41
  • @8bittree according to Wikipedia, he's still alive actually. https://en.wikipedia.org/wiki/Thomas_E._Kurtz – CoffeeTableEspresso Oct 17 '18 at 16:41

The BASIC article on Wikipedia provides a link to the first user manual, created by the inventors of the language. At that time, October 1964, the power operator was an up arrow ↑ (page 9). It was available on the keyboard used on the system (page 15). It was, however, not a standard character: in the manual, the up arrows are not printed but added as handwritten corrections.

Other languages also used the up-arrow symbol for exponentiation, for example ALGOL, which, together with FORTRAN, was one of the language sources for BASIC.

Around the same period, in 1963, a first version of the ASCII character set was published. There are documented discussions of the standards committee about which characters to include in the new standard character set. This article provides historical references based on paper archives. It shows that the popularity of ALGOL influenced the choices of the ASCII committee (for example, the square brackets). The article also provides three references on the use of the caret as a substitute for the up arrow when it is not used as an accent.

So, in conclusion, the use of the caret was not a choice of the language designers but a consequence of the choices made by the ASCII standards committee about the available characters.

Christophe
  • Interesting article. I find it curious that it says the grave accent was an "opening quote" to appease US people who might object to an accent replacing a back arrow, when I don't think that code point was ever used as a back arrow. The only other thing I've seen it used for was a solid block character [which IMHO was a nice use for it, and would sensibly fold back to the 0x40 `@` character], but the article didn't mention that. – supercat Oct 16 '18 at 21:30
  • Thank you, this is exactly the kind of answer I was looking for. The documentation mentions the up-arrow is a substitute for superscripts (the accepted mathematical notation). The links provided about character sets then explain why Basic uses the caret. – CoffeeTableEspresso Oct 16 '18 at 21:35
  • Note that the same is true for Smalltalk. Smalltalk used the up-arrow for `return` (pass a value "up" the stack) and left-arrow for assignment. However, Smalltalk used its own character encoding, and when ASCII became the norm, the codepoints for those two characters were assigned to `^` and `_` respectively. The Smalltalk community then decided to stick with `^` for `return`, but change to `:=` for assignment (although to this day many implementations still accept `_` for assignment for backwards-compatibility with old source programs). – Jörg W Mittag Oct 17 '18 at 08:14
  • @JörgWMittag Thank you for this interesting complement. The Smalltalk caret appeared to me completely arbitrary and obfuscating, but now I realize how logical it can appear when put into the right context! – Christophe Oct 17 '18 at 10:44