First, let's clear up some things:
- ‘broadband’, when applied to a data network, is a marketing term that describes a high-speed WAN service, like a cable modem or fiber link.
Its opposite is ‘narrowband’, like a phone-line modem. Quote from here: "According to the FCC, the definition of broadband internet is a minimum of 25 Mbps download and 3 Mbps upload speeds."
'Broadband' and 'narrowband' also have meaning in communications, usually referring to RF spectrum use. This is somewhat related to networking, but the meaning needs to be clarified from context.
- ‘baseband’ only describes a signal that isn’t modulated onto an RF carrier.
'Baseband' makes no statement about its data rate or bandwidth, which can be low (phone line) or insanely high (fiber optic link). Nor does it make any statement about where it’s used: it could be part of a LAN, as a link to a WAN, or some other purpose, such as the input to an RF modulator.
A ‘baseband’ signal nonetheless can use the entire bandwidth of a link, and often does. PCI Express and other serdes standards are baseband coded, yet seek to squeeze the maximum data rate out of the connection media (about 16 Gbit/s per lane these days for PCIe 4.0, over copper).
How fast can it go? The ultimate media throughput limit is set by the Shannon-Hartley Theorem, as follows:
- $C = B \log_2(1 + S/N)$
Where C is the channel capacity in bits per second, B is the channel bandwidth in hertz, and S/N is the ratio of signal power to noise power.
More here: http://wisptools.net/book/bookc1s3.php
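As a quick sanity check, here's the theorem as a few lines of Python. The 40 dB SNR figure is just an illustrative assumption, roughly what a clean cable plant achieves:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley channel capacity in bits per second.

    bandwidth_hz: channel bandwidth B in hertz
    snr_db: signal-to-noise power ratio S/N, expressed in decibels
    """
    snr_linear = 10 ** (snr_db / 10)           # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A single 6 MHz cable channel with ~40 dB SNR (illustrative figure):
c = shannon_capacity(6e6, 40)
print(f"{c / 1e6:.1f} Mbit/s")   # theoretical ceiling, ~79.7 Mbit/s
```

Note that the ~38 Mbit/s a real 256QAM channel delivers (below) sits comfortably under this ~80 Mbit/s Shannon ceiling, as it must.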
- 'full duplex' merely means that the link in use is capable of carrying signal in both directions at the same time.
'Full duplex' makes no statement about data rate. It could be very slow, like a 300 bps analog modem, or insanely fast (hundreds of Gbit/s per link) with an optical link. As long as data flows in both directions at the same time, it's full duplex.
So what's 'broadband', really? A cable TV network is often cited as an example of wired broadband. So let’s dig into that a bit. RG6/U cable, the kind used to connect cable TV to the home, has a usable bandwidth of about 1 GHz and a Shannon-limited throughput in the tens of Gbit/s, far more than any deployed system actually uses.
However, using today's DOCSIS 256QAM technology and 54-900MHz spectrum, we have 141 6MHz channels, each at 38Mbit/s per channel. Result? We get a paltry 5.36 Gbit/s for the entire cable, and that's only in the downstream direction. Upstream is much slower, limited to the band below 42 or 54MHz.
Making matters worse, that 54-900 MHz cable spectrum is shared with linear TV. So the available data bandwidth is much lower than that: probably only 25-50 channels' worth, with the balance for old-school linear programming and pay-per-view (is that even still a thing?). This gives 1-2Gbps, if you're lucky.
What makes cable networking viable at all is that it's a hub-and-spoke Hybrid Fiber/Coax network, divided into 'nodes', each with a separate fiberoptic connection to the main office. Each 'node' serves around 100-200 homes. So you're sharing that 1-2Gbit/s throughput from the main plant with hundreds, if not thousands of people.
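The arithmetic above can be checked in a few lines of Python (the 150-home node size is an assumed midpoint of the 100-200 figure, purely for illustration):

```python
CHANNEL_MBPS = 38                      # 256QAM DOCSIS payload per 6 MHz channel
total_channels = (900 - 54) // 6       # 141 channels fit in the 54-900 MHz band
full_cable_gbps = total_channels * CHANNEL_MBPS / 1000
print(f"{total_channels} channels, {full_cable_gbps:.2f} Gbit/s if every channel carried data")

# Realistically only a fraction of channels carry data; the rest is linear TV.
for data_channels in (25, 50):
    gbps = data_channels * CHANNEL_MBPS / 1000
    per_home = gbps * 1000 / 150       # shared across an assumed 150-home node
    print(f"{data_channels} data channels -> {gbps:.2f} Gbit/s, ~{per_home:.0f} Mbit/s per home")
```

That per-home figure is why a cable node feels fine at 3 AM and miserable at 8 PM: the division only happens when your neighbors are actually transmitting.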
Are you mad at your cable provider yet? You should be. Most people put up with it because, at least in North America, it's often their only broadband choice. It's not like, say, Korea, where Gbit-plus broadband seems to be a birthright.
My question is: how does a baseband signal use the full bandwidth of the medium when more than one sender is transmitting, while a broadband signal gives each sender only a portion of it?
Short answer: when multiple devices use the same physical medium, some form of multiplexing is needed. It could be frequency-domain using carriers, time-domain using a fixed schedule, a contention-resolving scheme like CSMA, clever spread-spectrum encoding like CDMA or frequency hopping, some kind of signal encoding (as I'll describe below for Gbit Ethernet), spatial means like MIMO, or polarization as used in satellite TV.
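As a minimal sketch of one of those schemes, here is fixed-schedule time-division multiplexing: each sender gets a recurring time slot, and a receiver recovers a stream by slot index. The symbol names and framing are invented for illustration only:

```python
def tdm_mux(streams):
    """Interleave one symbol per sender per frame (fixed schedule)."""
    frames = []
    for slot_symbols in zip(*streams):   # one time slot per sender, per frame
        frames.extend(slot_symbols)
    return frames

def tdm_demux(frames, n_senders, sender):
    """A receiver picks out its own recurring time slots by index."""
    return frames[sender::n_senders]

line = tdm_mux([["a1", "a2"], ["b1", "b2"], ["c1", "c2"]])
print(line)                   # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2']
print(tdm_demux(line, 3, 1))  # ['b1', 'b2']
```

Each sender still transmits at the full baseband rate during its slot; what's divided is time, not spectrum.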
Within a network like the campus Ethernet LAN you describe, each client gets its own 1-to-1 Ethernet connection to a switch. The switch, in turn, is internally provisioned with enough throughput to route packets between its ports without blocking. This routing does add latency, which is a major focus of high-performance switch and network design.
However, if multiple users on a switch are contending for the same resource (say, an upstream connection to a WAN) this will result in reduced throughput, which is dealt with at the higher levels of the protocol stack. Meanwhile, the switch will do its best to schedule and send packets to the upstream device, and throttle the multiple downstream users if need be.
Some switches overcome congestion by using one fast upstream port to service multiple slower downstream ports; good network design takes this into account.
What about WAN networks like cable TV? Cable networks carrying TV and data are 'broadband' in the sense that they carry a wide bandwidth of channels to the user, but more importantly, provide a high-speed data link to the user. Remember, 'broadband' is a marketing term, not a technical one.
However, cable networks are not inherently 2-way, let alone symmetric: only a small portion of that bandwidth (the spectrum below 42 or 54MHz) is allocated to upstream traffic.
Above the level of cable networks and other 'broadband' service provider WANs, there exists the backbone (carrier) level networks who handle all these kinds of data (video, voice, files, messages, etc.) aggregated onto a high-performance, low-latency network. Typically these networks use a protocol called Asynchronous Transfer Mode, or ATM.
Unlike Ethernet, ATM uses short, fixed-length 53-byte cells with managed point-to-point latency and throughput to ensure the quality of service needed for each data type. More about ATM here: https://www.techopedia.com/definition/5339/asynchronous-transfer-mode-atm
So like that LAN switch, an ATM network can suffer congestion. ATM deals with this by giving priority to latency-intolerant traffic over data that can wait. Where it differs from Ethernet is that the ATM network uses much smaller data units sent at much higher rates, so it can respond more precisely to demand than Ethernet.
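One way to see why small cells respond more precisely: at a given line rate, a latency-critical cell never waits behind more than one in-flight unit's serialization time, and an ATM cell serializes far faster than a full-size Ethernet frame. A rough calculation, assuming a 155 Mbit/s OC-3-class link:

```python
def serialization_us(frame_bytes, link_mbps):
    """Time to clock one data unit onto the wire, in microseconds."""
    return frame_bytes * 8 / link_mbps

# A 53-byte ATM cell vs. a 1500-byte Ethernet frame at 155 Mbit/s:
print(f"ATM cell:       {serialization_us(53, 155):6.2f} us")
print(f"Ethernet frame: {serialization_us(1500, 155):6.2f} us")
```

A voice cell that arrives just after a big transfer starts waits a couple of microseconds on ATM, versus tens of microseconds per queued frame on Ethernet.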
If you're a data 'whale' like Google or Facebook, your connection to the world is at this highest backbone level, using ATM or a similar protocol.
You may also be asking: how can Ethernet wiring achieve full duplex on a single set of wires, without an RF carrier for each direction? The short answer: Gbit link partners use an encoding and receiving technique that blocks their own transmitted signal so they can receive at the same time. The signal on the line is the electrical sum of TX and RX; each partner subtracts what it knows it sent, leaving behind what its partner sent, which it then receives and decodes.
This is not unlike an analog telephone, which uses a hybrid circuit (directional coupler) to block side tone, allowing two-way conversation on a single wire. So, too, with Ethernet, which implements the hybrid as well as echo cancellation using DSP. (Phone line modems do this too.)
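The subtraction trick can be shown with toy numbers. This is a minimal sketch assuming idealized five-level symbols and a noiseless line; real PHYs do the same thing with adaptive DSP echo cancellers:

```python
import random

# Each partner drives a PAM-like symbol onto the shared pair at the same time.
tx_a = [random.choice([-2, -1, 0, 1, 2]) for _ in range(8)]
tx_b = [random.choice([-2, -1, 0, 1, 2]) for _ in range(8)]

# The wire carries the electrical sum of both transmitters.
line = [a + b for a, b in zip(tx_a, tx_b)]

# Partner A knows exactly what it sent, so it subtracts its own symbols
# (the "echo") and recovers what B sent -- and vice versa.
rx_at_a = [v - a for v, a in zip(line, tx_a)]
rx_at_b = [v - b for v, b in zip(line, tx_b)]

assert rx_at_a == tx_b and rx_at_b == tx_a
```

The hard part in real hardware isn't the subtraction, it's knowing exactly what your own signal looks like after the line's frequency response and reflections have distorted it, which is what the adaptive filters solve.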
More here: How does "bidirectional" transmission on gigabit ethernet work?
In both cases - Ethernet and telephone - the signals are unmodulated baseband (no RF carrier), sent at the symbol rate using only the bandwidth needed to represent them. Gbit Ethernet uses 125 MHz of bandwidth to send data at 250 Mbit/s per pair, × 4 pairs = 1000 Mbit/s, using an encoding called PAM-5.
More here: Understanding data transmission rates over copper wire
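The 1000BASE-T throughput arithmetic works out as follows (a sketch; the 2-data-bits-per-symbol figure reflects that the fifth PAM-5 level is spent on coding overhead rather than payload):

```python
SYMBOL_RATE = 125e6     # 125 Mbaud per wire pair
BITS_PER_SYMBOL = 2     # PAM-5 carries 2 data bits/symbol; 5th level aids coding
PAIRS = 4               # all four pairs used, both directions simultaneously

per_pair = SYMBOL_RATE * BITS_PER_SYMBOL         # 250 Mbit/s per pair
total = per_pair * PAIRS                         # 1000 Mbit/s
print(f"{per_pair/1e6:.0f} Mbit/s per pair x {PAIRS} pairs = {total/1e6:.0f} Mbit/s")
```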