As a general rule, programming languages are designed for use by humans, and thus for entry on a US-QWERTY keyboard. (This involves a few conceptual leaps, and I'm a counterexample myself, since I do most of my work on a BE-AZERTY layout, but indulge me and join me in this leap of faith.)
With the letters of the alphabet used for reserved words and identifiers, it makes sense to assign each punctuation character to a function in the language whose frequency of use correlates with how easy that character is to type. (There's a story about trigraphs related to this, but it isn't relevant to this question.)
There's a story somewhere about selecting `=` for assignment and `==` for comparison for reasons of memory economy, but I can't find a suitable link.
Further applying this theory to practice, with C language syntax:
- `!` for 'not'
- `"` for string literals
- `#` for compiler and pre-processor input
The theory breaks down at `$`. It doesn't appear at all in any implementation of the C language that I know of. (Also, no `$` here.)
Was it reserved for something that wasn't finished?
Was it in use by something that has since fallen out of use?
Or does it have a use in (some) implementation(s) that are not in common use?