
There are four address spaces in PCI Express:

  • Memory Mapped
  • I/O mapped
  • Configuration Space
  • Message

Can anyone please explain the significance of each address space and its purpose, in brief?

As per my understanding, all these spaces are allocated in RAM (i.e. the processor's memory). Configuration space is the space allocated for a common set of registers (present in all PCIe devices). Is this space common between all PCIe devices? And how is it useful for PCIe functional operation?

This space contains the BARs (base address registers). Are these registers used to specify the addresses available in the PCIe endpoint?

I am new to PCIe and trying to learn it. I am referring to the Base Specification, but I think it's written for readers with some prior knowledge of PCI and PCIe.

Also, please point me to some free online references useful for speeding up the understanding of the Base Specification. I understand that whenever any PCIe device is attached to the root complex, it will be assigned some memory region.

ronex dicapriyo

1 Answer


It's been a while since this was asked, but I hate orphaned questions :)

First, let's over-simplify a modern x86 platform and pretend it has 32 bits of address space from 0x00000000 to 0xFFFFFFFF. We'll ignore all the special / reserved areas, TOLUD (top of lower usable DRAM, Intel parlance) holes, etc. We'll call this the system memory map.

Second, PCI Express extends PCI. From a software point of view, they are very, very similar.

I'll jump to your 3rd one -- configuration space -- first. Any addresses that point to configuration space are allocated from the system memory map. A PCI device had a 256 byte configuration space -- this is extended to 4KB for PCI express. This 4KB space consumes memory addresses from the system memory map, but the actual values / bits / contents are generally implemented in registers on the peripheral device. For instance, when you read the Vendor ID or Device ID, the target peripheral device will return the data even though the memory address being used is from the system memory map.
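
If you want to poke at this yourself, here's a rough sketch (not production code, and the bus/device numbers are placeholders) of the legacy x86 mechanism for reaching config space through I/O ports 0xCF8/0xCFC. PCIe systems additionally expose a memory-mapped window (ECAM) for the full 4KB, but the idea is the same: the read is completed by the device, not by RAM.

```c
#include <stdint.h>
#include <sys/io.h>   /* outl/inl; on Linux this needs iopl(3) and root */

/* Legacy PCI configuration read via CONFIG_ADDRESS (0xCF8) / CONFIG_DATA (0xCFC).
 * bus/dev/func/offset select the target register; offset must be DWORD-aligned. */
static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset)
{
    uint32_t addr = (1u << 31)              /* enable bit */
                  | ((uint32_t)bus  << 16)
                  | ((uint32_t)dev  << 11)
                  | ((uint32_t)func <<  8)
                  | (offset & 0xFC);
    outl(addr, 0xCF8);
    return inl(0xCFC);
}

/* Example: DWORD 0 holds Device ID (bits 31:16) and Vendor ID (bits 15:0).
 * uint32_t id = pci_cfg_read32(0, 3, 0, 0x00);  -- bus 0, device 3 is a placeholder */
```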

You stated these are "allocated into RAM" -- not true, the actual bits / stateful elements are in the peripheral device. However, they are mapped into the system memory map. Next, you asked if it was a common set of registers across all PCIe devices -- yes and no. The way PCI config space works, there is a pointer at the end of each section that indicates if there is more "stuff" to be read. There's a bare minimum that all PCIe devices have to implement, and then the more advanced devices can implement more. As for how useful it is for functional operation, well, it's mandatory and heavily utilized. :)
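
To make that "pointer to more stuff" idea concrete, here's a sketch of walking the standard capability list: config offset 0x34 holds the first pointer, and each capability starts with an ID byte followed by a next pointer. `cfg_read8` is an assumed helper that reads one config byte for the device in question.

```c
#include <stdint.h>

/* Assumed helper: read one byte of config space for the chosen device. */
extern uint8_t cfg_read8(uint8_t offset);

/* Walk the standard PCI capability list.
 * Each entry: byte 0 = capability ID, byte 1 = pointer to the next entry (0 = end). */
static void dump_capabilities(void)
{
    uint8_t ptr = cfg_read8(0x34) & 0xFC;    /* Capabilities Pointer register */
    while (ptr != 0) {
        uint8_t id   = cfg_read8(ptr);
        uint8_t next = cfg_read8(ptr + 1);
        /* e.g. 0x05 = MSI, 0x10 = PCI Express capability, 0x11 = MSI-X */
        (void)id;
        ptr = next & 0xFC;
    }
}
```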

Now, your question about BARs (base address registers) is a good place to segue into memory space and I/O space. Being somewhat x86-centric, the specification lets a device indicate a BAR's size in addition to its type. This allows a device to request a regular memory-mapped BAR, or an I/O space BAR, which eats into the 4K of I/O space an x86 machine has. You'll notice that on PowerPC machines, I/O space BARs are worthless.
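
For the curious, the size/type discovery works roughly like this sketch (the `cfg_read32`/`cfg_write32` helpers are assumed): software writes all ones to the BAR and reads back which address bits the device lets stick; the low bits also encode whether it's a memory or I/O BAR.

```c
#include <stdint.h>

/* Assumed helpers: 32-bit config space read/write for the chosen device. */
extern uint32_t cfg_read32(uint8_t offset);
extern void     cfg_write32(uint8_t offset, uint32_t value);

/* Probe one 32-bit BAR at config offset 'bar' (0x10, 0x14, ...). */
static void probe_bar(uint8_t bar)
{
    uint32_t orig = cfg_read32(bar);
    int is_io     = orig & 0x1;              /* bit 0: 1 = I/O BAR, 0 = memory BAR */

    cfg_write32(bar, 0xFFFFFFFF);
    uint32_t readback = cfg_read32(bar);
    cfg_write32(bar, orig);                  /* restore the original value */

    uint32_t mask = is_io ? ~0x3u : ~0xFu;   /* strip type/flag bits */
    uint32_t size = ~(readback & mask) + 1;  /* e.g. readback 0xFFF00000 -> 1 MB */
    (void)size;
}
```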

A BAR is basically the device's way to tell the host how much memory it needs, and of what type (discussed above). If I ask for say 1MB of memory-mapped space, the BIOS may assign me address 0x10000000 to 0x10100000. This is not consuming physical RAM, just address space (do you see now why 32-bit systems run into issues with expansion cards like high-end GPUs that have GB of RAM?). Now a memory write / read to say 0x10000004 will be sent to the PCI Express device, and that may be a byte-wide register that connects to LEDs. So if I write 0xFF to physical memory address 0x10000004, that will turn on 8 LEDs. This is the basic premise of memory-mapped I/O.
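
In code, the LED example boils down to an ordinary store through a pointer. A minimal sketch, assuming the hypothetical 0x10000000 assignment above and that the BAR region has already been mapped into our address space (e.g. with ioremap() in a Linux driver):

```c
#include <stdint.h>

/* Hypothetical: 'bar0' is the virtual address the 1 MB memory BAR was mapped to. */
void leds_all_on(volatile uint8_t *bar0)
{
    bar0[4] = 0xFF;   /* this store becomes a PCIe Memory Write to device offset 0x04 */
}
```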

I/O space behaves similarly, except it operates in a separate address space, the x86 I/O space. Address 0x3F8 (COM1) exists both in I/O space and in memory space, and the two are different things.
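
Contrast the LED example above with an I/O space access, which uses the x86 IN/OUT instructions rather than ordinary loads and stores. On Linux, after ioperm() or iopl(), that looks something like:

```c
#include <sys/io.h>   /* outb/inb; Linux/x86, needs ioperm() or iopl() first */

void send_byte_to_com1(unsigned char c)
{
    outb(c, 0x3F8);   /* I/O-space write: OUT instruction, not a memory store */
}
```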

As for your last question: messages refer to a new type of interrupt mechanism, message signaled interrupts, or MSI for short. Legacy PCI devices had four interrupt pins, INTA, INTB, INTC, and INTD. These were generally swizzled among slots such that INTA went to INTA on Slot 0, then INTB on Slot 1, then INTC on Slot 2, INTD on Slot 3, and then back to INTA on Slot 4. The reason for this is that most PCI devices implemented only INTA, so by swizzling, with say three devices, each would end up with its own interrupt signal to the interrupt controller. MSI is simply a way of signaling interrupts using the PCI Express protocol layer, and the PCIe root complex (the host) takes care of interrupting the CPU.
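
For a feel of what MSI looks like to software, here's a rough sketch of the basic 32-bit MSI capability layout in config space (capability ID 0x05; real devices may implement the 64-bit address and per-vector masking variants): the OS programs an address and data value, sets the enable bit, and the device then signals interrupts by writing that data to that address.

```c
#include <stdint.h>

/* Simplified layout of the basic (32-bit address, no masking) MSI capability. */
struct msi_cap32 {
    uint8_t  cap_id;        /* 0x05 = MSI */
    uint8_t  next_ptr;      /* offset of the next capability in the list */
    uint16_t msg_control;   /* bit 0: MSI enable; bits 6:4: # of vectors enabled */
    uint32_t msg_address;   /* address the device writes to signal an interrupt */
    uint16_t msg_data;      /* value written to that address */
};

/* The OS fills msg_address/msg_data with values the interrupt controller understands,
 * then sets the enable bit; the device raises interrupts with plain memory writes. */
```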

This answer might be too late to help you, but maybe it will help some future Googler / Binger.

Finally, I recommend reading this book from Intel to get a good, detailed introduction to PCIe before you go any further. Another reference would be Linux Device Drivers, an online ebook available from LWN.

Krunal Desai
  • The post was quite helpful. I am very new to PCIe. For the enumeration process to happen (configuration space allocation and mapping), do we require driver support, or can it be initiated by the OS? – kamlendra Mar 26 '16 at 06:14
  • Thanks, glad it was helpful! Generally, on x86 platforms, the BIOS software does some amount of memory allocation based on the configuration space information it parses from the PCI devices. Modern OSes generally accept this memory map as-is, AFAIK, though they too will go through and enumerate devices to load the appropriate drivers. I remember seeing some interesting low-level stuff in Linux that could let you potentially change what the BIOS had assigned. – Krunal Desai Mar 26 '16 at 08:35
  • Note that *only* memory marked as pre-fetchable can transfer more than a single DWORD per transaction; all other spaces can transfer *only* a single DWORD per transaction. The size of a burst is limited to MAX_PAYLOAD_SIZE (discovered during enumeration). – Peter Smith Apr 22 '16 at 07:35
  • Hello. I am new to PCI and would like a bit of clarification on your answer. You stated that the 256-byte / 4-KB configuration space is mapped into system memory. From my self-tutoring, I thought access to configuration space is handled through a PCI controller which is statically mapped into system memory. This controller provides a few registers (for device/function identification, offset into the address space, result address) that serve as a small interface into the configuration space. So in effect, only about 5-10 bytes are statically reserved for the PCI controller. Is this right? – Cerezo Oct 04 '19 at 22:26
  • So, when the host (CPU) writes a value 0xFF to a register at address 0x10000004, the PCI root complex will pick up this write (maybe it is always watching for accesses to any address from 0x10000000 to 0x10100000) and will write it to address 0x04 at the endpoint (PCIe device)? Is this understanding correct? – AlphaGoku Nov 25 '19 at 04:46
  • The last point about messages is potentially somewhat misleading. The term "message" in PCIe is a bit overloaded - there are both "message signalled interrupts" (MSI) and "messages". MSI uses some address space associated with the interrupt controller to enable devices to trigger interrupts via memory write requests. Messages, on the other hand, are not really exposed to software directly, and are used to implement things like legacy interrupts. – alex.forencich Dec 08 '22 at 21:21
  • Also, PCI/PCIe device enumeration and BAR assignment takes place completely using standard registers in config space. This requires no support from device drivers as it takes place before the drivers are loaded. The devices must be enumerated and bus numbers assigned before the OS can see all of the devices and figure out which drivers to attach. In most cases, the initial enumeration and bus number assignment will be done in BIOS before the OS is loaded. – alex.forencich Dec 08 '22 at 21:23
  • Access to config space is technically system-dependent and can be implemented in a few different ways. One of them is to use an "indirect" technique where the bus, device, function, and register numbers are set in config registers somewhere, along with additional operations to trigger config reads or writes. But I think modern systems typically implement config space access differently, mapping the whole config space into a contiguous region of system address space, where each 4 KB block of address space corresponds to a different PCIe bus/device/function. – alex.forencich Dec 08 '22 at 21:30
  • @AlphaGoku I think it's mostly the case, but some PCIe cores (like the Synopsys DW PCIe core I'm looking at now) have an internal address translation unit (iATU) for incoming and outgoing transactions, so I think it can undergo another stage of address translation. So in EP mode, 0x10000004 -> 0x4 or 0x10000004 -> 0x2004. I'm not 100% sure and still reading the document. – Chan Kim Jan 19 '23 at 05:12
  • Cheers for this post, it helped clarify all of these concepts for me! – kop48 May 16 '23 at 05:37