To multiplex upstream and downstream traffic, typically either frequency-division or time-division multiplexing is used. For example, an ADSL connection may use FDM with 25 kHz to 138 kHz for upstream and 138 kHz to 1104 kHz for downstream. In this case the downstream has a substantially larger band of frequencies available and can therefore reach higher transmission speeds.
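To put numbers on that asymmetry, here is a small back-of-envelope sketch (my own illustration, using the band edges above) comparing the spectrum assigned to each direction:

```python
# Rough back-of-envelope comparison of the ADSL example above
# (my own illustration; band edges assumed from the figures quoted).
upstream_khz = 138 - 25        # upstream band: 25 kHz .. 138 kHz
downstream_khz = 1104 - 138    # downstream band: 138 kHz .. 1104 kHz

print(f"upstream band:   {upstream_khz} kHz")
print(f"downstream band: {downstream_khz} kHz")
print(f"downstream gets ~{downstream_khz / upstream_khz:.1f}x more spectrum")
```

So roughly 8.5 times as much spectrum goes to downstream, and that split is fixed.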
However, while most users need more downstream than upstream most of the time, that is not always the case. Online backup services, for example, need mostly upstream. In such cases it would be desirable to realign the frequency division to favour upstream traffic. Not only is such dynamic allocation of frequencies not possible with ADSL (or at least I have not found anyone offering it), but I cannot even call my ISP and tell them that I rarely need downstream anyway and would gladly sacrifice it for more upstream.
For LTE I could not figure out how the duplexing is actually performed, but the specification allows for higher download rates (300 Mbit/s) than upload rates (75 Mbit/s), rather than a combined connection speed that can be divided as needed.
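For comparison, a quick calculation on the peak rates quoted above (my own arithmetic, not taken from the 3GPP specification text) shows how lopsided and fixed that split is:

```python
# Illustrative arithmetic on the LTE peak rates quoted above.
downlink_mbits = 300
uplink_mbits = 75

total = downlink_mbits + uplink_mbits
print(f"combined peak rate: {total} Mbit/s")
print(f"fixed split: {downlink_mbits / total:.0%} down / {uplink_mbits / total:.0%} up "
      f"({downlink_mbits // uplink_mbits}:1), with no way to shift it toward upstream")
```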
Why is this the case? Is it for marketing or business reasons, or are there actual physical problems with dynamic allocation? If the latter: what are those problems?
(I think Server Fault is the best place to ask because this is about networks, but if the answer to my question has to do with physics or signal processing, it may fit better on one of those sites. I am happy to migrate the question if that is the direction in which I need to look for the answer.)