I understand how the number of symbols can be increased with things like quadrature phase-shift keying or quadrature amplitude modulation, and so on. For about a year I have thought it would be an interesting design to increase bits-per-symbol using the analogy of chords in music: superimpose many frequencies and then demodulate them back into the individual frequencies. I am asking whether anyone knows of such techniques being used historically and/or in the present, whether there is a clear reason it would be a bad technique, or whether it might be good but simply has not been tried yet.
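To make the idea concrete, here is a rough sketch of what I have in mind, in plain numpy. The sample rate, tone frequencies, and detection threshold are arbitrary illustrative choices, not from any real scheme: each symbol is the sum of whichever tones are "on", and the receiver recovers the bits by measuring the energy at each tone frequency.

```python
import numpy as np

fs = 8000                                   # sample rate in Hz (arbitrary)
symbol_len = 800                            # samples per symbol (100 ms)
tones = np.array([500, 600, 700, 800])      # one tone per bit (arbitrary spacing)
t = np.arange(symbol_len) / fs

def modulate(bits):
    """Sum the tones whose bit is 1 (the 'chord' for this symbol)."""
    sig = np.zeros(symbol_len)
    for bit, f in zip(bits, tones):
        if bit:
            sig += np.sin(2 * np.pi * f * t)
    return sig

def demodulate(sig):
    """Correlate against each tone and threshold the energy."""
    bits = []
    for f in tones:
        ref = np.exp(-2j * np.pi * f * t)
        energy = np.abs(np.dot(sig, ref)) / symbol_len
        bits.append(1 if energy > 0.25 else 0)   # present tone gives ~0.5, absent ~0
    return bits

tx_bits = [1, 0, 1, 1]
rx_bits = demodulate(modulate(tx_bits))
print(tx_bits, rx_bits)   # both should match: 4 bits carried in a single symbol
```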
Update: The only thing I am still missing is more examples where the baseband wave itself is multifrequency (and carries numerical data). Such examples would give a sense of how that technique compares to alternatives like QAM. Touch-tone telephones are one example, although a pretty low-tech one: they combine audio tones (just two of them) rather than individual electromagnetic frequencies.
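For reference, here is a toy version of the touch-tone scheme: each keypad digit is the sum of one low-group and one high-group audio tone, and the receiver identifies the digit by finding the strongest tone in each group. The frequencies are the standard DTMF values; the sample rate, duration, and simple correlation detector (rather than, say, Goertzel filters) are arbitrary choices for illustration.

```python
import numpy as np

fs = 8000
t = np.arange(int(0.1 * fs)) / fs            # 100 ms per digit
low  = [697, 770, 852, 941]                  # row (low-group) tones
high = [1209, 1336, 1477, 1633]              # column (high-group) tones
keypad = [['1', '2', '3', 'A'],
          ['4', '5', '6', 'B'],
          ['7', '8', '9', 'C'],
          ['*', '0', '#', 'D']]

def encode(digit):
    """Return the two-tone 'chord' for one keypad digit."""
    for r, row in enumerate(keypad):
        if digit in row:
            c = row.index(digit)
            return np.sin(2 * np.pi * low[r] * t) + np.sin(2 * np.pi * high[c] * t)
    raise ValueError(digit)

def tone_energy(sig, f):
    """Energy of sig at frequency f via a simple correlation."""
    return np.abs(np.dot(sig, np.exp(-2j * np.pi * f * t)))

def decode(sig):
    r = int(np.argmax([tone_energy(sig, f) for f in low]))
    c = int(np.argmax([tone_energy(sig, f) for f in high]))
    return keypad[r][c]

print(decode(encode('5')))   # expected: '5'
```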
The case of carrier waves run in parallel has been clearly answered with DSL and PEP, and parallelism is clearly used there to increase bits-per-symbol (running multiple connections over the same medium, as in radio or telephone wires, has also been answered). But I am not aware of many examples at the baseband level; in DSL, the examples I have seen (elsewhere, not in this question) use QAM on each individual channel. I would assume QAM is superior at that level, because the market selected for it.
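For comparison, here is a toy version of that per-channel arrangement: several subcarriers in parallel, each carrying its own 4-QAM symbol, combined into one time-domain waveform with an inverse FFT and separated again with a forward FFT. The FFT size, choice of data bins, and constellation mapping are arbitrary illustrative choices, not taken from any DSL standard.

```python
import numpy as np

n_fft = 16                      # FFT size (number of frequency bins)
data_bins = [1, 2, 3, 4]        # subcarriers that carry data (arbitrary)
qam4 = {(0, 0): 1+1j, (0, 1): -1+1j, (1, 1): -1-1j, (1, 0): 1-1j}

def modulate(bit_pairs):
    """Place one 4-QAM point on each data subcarrier, then IFFT to time domain."""
    spectrum = np.zeros(n_fft, dtype=complex)
    for k, pair in zip(data_bins, bit_pairs):
        spectrum[k] = qam4[pair]
    return np.fft.ifft(spectrum)        # one multicarrier "symbol"

def demodulate(waveform):
    """FFT back to frequency domain and slice each subcarrier independently."""
    spectrum = np.fft.fft(waveform)
    out = []
    for k in data_bins:
        point = spectrum[k]
        # nearest-constellation-point decision for 4-QAM
        pair = min(qam4, key=lambda b: abs(qam4[b] - point))
        out.append(pair)
    return out

tx = [(0, 0), (1, 0), (1, 1), (0, 1)]   # 2 bits per subcarrier, 8 bits per symbol
print(demodulate(modulate(tx)) == tx)   # expected: True
```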
I added this update for context on where I was coming from with the question. The question has been answered, but I am interested in techniques used historically and/or in the present and do not mind that being expanded on. Very thankful for the help regardless. Peace.