
As a general rule, programming languages are designed for use by humans, and thus for entry on a US-QWERTY keyboard. (This involves a few conceptual leaps; I'm living proof, since I do most of my work on a BE-AZERTY keyboard, but indulge me and join me in this leap of faith.)

Since the letters of the alphabet are taken by reserved words and identifiers, it makes sense to assign each of the punctuation characters to a function in the language whose frequency of use correlates with how easy that punctuation mark is to reach. (There's a story about trigraphs related to this, but it's not relevant to this question.)

There's a story somewhere about selecting = for assignment and == for comparison to economize on memory, but I can't find a suitable link.

Further applying this theory to practice, with C language syntax:

  • ! for 'not'
  • " for string literals
  • # for compiler and pre-processor input

The theory breaks down at $. As far as I know, it doesn't appear in any implementation of the C language. (Also, no $ here.)

Was it reserved for something that wasn't finished?
Was it in use by something that has since fallen out of use?
Or does it have a use in (some) implementation(s) that are not in common use?

Stijn Sanders
    [IBM 2741 terminal](http://softwareengineering.stackexchange.com/a/188458/31260) simply didn't have this symbol easily accessible – gnat Feb 01 '17 at 13:29
    Possible duplicate of [Why do programming languages, especially C, use curly braces and not square ones?](http://softwareengineering.stackexchange.com/questions/188455/why-do-programming-languages-especially-c-use-curly-braces-and-not-square-ones) – gnat Feb 01 '17 at 13:29
  • Originally there were digraphs and trigraphs. Later, when those things went into obsolescence, C++ came in with name mangling (vendor-specific), and some vendors took up @ and $. If some vendors did it, others would not be allowed to use it except for the purpose of interoperating with name-mangling. However, a proper answer should look at: (1) the timeline, (2) technical cause-and-effect, (3) explanation of intentions (motivations) by decision-makers, possibly in a biography, interview or memoir. (I will not judge whether this question is on-topic or not.) – rwong Feb 01 '17 at 13:44
    Unless the original language designers actually *say* why they didn't use `$`, any answer will probably just be anecdotes and speculation. – FrustratedWithFormsDesigner Feb 01 '17 at 19:07

1 Answer


Things are not so clear-cut. GCC actually allows $ in identifier names, and the manual says this is because many traditional implementations do so. I think MSVC does the same. It's actually the ANSI standard which forbids the $, and that standard came quite a bit later than the invention of C. And since C99, even the standard allows other "implementation-defined" characters in identifiers.

The situation is therefore maximally complicated:

  • The standard says "no", but leaves a loophole for individual compilers to answer "yes"
  • Widely used implementations answer "yes", but switch to "no" when you use "ANSI" mode, implying that "yes" would violate the standard.

As far as I know, the reason why $ has such a dubious status is that compiler writers liked to use it in intermediate code as an easy way of ensuring that their secret identifiers never clash with programmer-visible identifiers. (C++ was originally implemented on top of C: the first C++ compilers were actually transcompilers to C, and they needed lots of secret identifiers.)

Kilian Foth