Participants in I²S can be in master or slave mode. The master has to provide the clock and the slave has to accept the clock.
I would like to know under what considerations this decision is made.
Usually, the decision is simply driven by the capabilities of the components you're trying to put together: not all components are able to act as master.
More specifically: in an I²S system, you most often find three kinds of components: DACs, ADCs, and MCUs (or SoCs). Most DACs I have seen cannot act as masters. MCUs are usually versatile and can act as either. ADCs, very often, can also act as either.
The reason is the following: if you pair a SoC with a DAC (a very common case), the SoC will typically have a sophisticated clocking system available (fully configurable PLLs and so on), which makes it a good fit to be the master. The DAC therefore doesn't need to be able to act as the master. For applications pairing a SoC with an ADC, you'll also want the SoC to be the master, for the same reason. However, in some other applications you may want ADCs connected directly to DACs (no SoC in between). This is why, typically, ADCs can also act as masters (though in that case, the clocking options are usually less flexible than with a SoC).
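To make the clocking argument concrete, here's a back-of-the-envelope sketch in Python. It assumes the widespread MCLK = 256 × Fs convention (an assumption, not something every codec uses) and shows why the two standard sample-rate families demand a flexible clock source:

```python
# Master clock requirements for the two common audio sample-rate
# families, assuming the widespread MCLK = 256 * Fs convention.
MCLK_RATIO = 256

for fs in (44_100, 48_000):  # Hz: the "CD" family and the "48 k" family
    mclk = MCLK_RATIO * fs
    print(f"Fs = {fs} Hz -> MCLK = {mclk} Hz ({mclk / 1e6} MHz)")

# 44.1 kHz needs 11.2896 MHz and 48 kHz needs 12.288 MHz. Their ratio,
# 147:160, is awkward, so a master that must serve both families needs
# a configurable PLL (or two oscillators) - exactly what SoCs tend to
# provide and simple DACs tend to lack.
```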
So, to answer your question: in a typical application that uses a SoC, the SoC will be the master. If you don't have an I²S-capable SoC (you just have ADC + DAC), the ADC will almost certainly have to be the master.
Note: of course, ADCs, DACs, and SoCs are not the only kinds of devices you can find in an I²S system, but the same rationale can often be applied to the others. For example: S/PDIF transmitters are often slave-only, while receivers can usually act as either master or slave.
For the best audio quality, select as master the device that will provide the most jitter-free MCLK, BCLK, and LRCLK. Ideally, use an external oscillator for MCLK. An MCU may not be able to derive the required clock rates without jitter [1], for example if the MCU is also handling USB. If the codec has unusual audio-interface modes, it is often easier to configure the MCU to support them.
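As an illustration of the clock-accuracy point (a sketch with assumed numbers, not a claim about any particular MCU): suppose the MCU runs from a USB-friendly 48 MHz clock and can only derive MCLK by integer division. The 44.1 kHz family is then far out of reach:

```python
# How close can integer division of a 48 MHz core clock get to the
# 11.2896 MHz MCLK needed for 44.1 kHz audio (assuming MCLK = 256 * Fs)?
SRC = 48_000_000           # assumed MCU clock, Hz (USB-friendly)
TARGET = 256 * 44_100      # 11_289_600 Hz

best_div = round(SRC / TARGET)             # nearest integer divider
actual = SRC / best_div
error_ppm = (actual - TARGET) / TARGET * 1e6
print(f"divider {best_div}: {actual:.0f} Hz, error {error_ppm:+.0f} ppm")

# The nearest divider (4) yields 12 MHz, an error of roughly +6.3 % -
# hopeless for audio, where clock error budgets are measured in ppm.
# A fractional-N PLL or a dedicated external audio oscillator is
# needed to hit 11.2896 MHz cleanly, which is why an external MCLK
# oscillator is often the better choice.
```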
I believe that in I²S there are transmitters and receivers, and the master is whichever one is generating the clock. In a scenario with multiple transmitters and receivers, it gets a bit unclear. Here is what the I²S specification found here says:
... the transmitter as the master, has to generate the bit clock, word-select signal and data. In complex systems however, there may be several transmitters and receivers, which makes it difficult to define the master. In such systems, there is usually a system master controlling digital audio data-flow between the various ICs. Transmitters then, have to generate data under the control of an external clock, and so act as a slave.