PCIe 4.0 GPU/Motherboard Lanes

I have just upgraded to a B550 motherboard and popped in a Ryzen 3600, which is apparently able to do PCIe 4.0 lanes with the GPU. I am pretty sure it's working, but I was wondering what a more knowledgeable user would say about it.

So I was reading on the internet and found

"PCI Express slots on the motherboard can be wider then the number of lanes connected. For example a motherboard can have x8 slot with only x1 lane connected.

On the other hand, you can insert a card that uses only, for example, 4 lanes into an x16 slot on the motherboard, and they will negotiate to use only those x4 lanes.

How to check from the running system how many lanes are used by the inserted PCIe cards?"

So I did

sudo lspci -vv | grep -E 'PCI bridge|LnkCap'

And I got this below:


00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge (prog-if 00 [Normal decode])
		LnkCap:	Port #1, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge (prog-if 00 [Normal decode])
		LnkCap:	Port #0, Speed 16GT/s, Width x8, ASPM L1, Exit Latency L1 <32us
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge (prog-if 00 [Normal decode])
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L1, Exit Latency L1 <64us
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] (prog-if 00 [Normal decode])
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] (prog-if 00 [Normal decode])
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <8us
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
02:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset Switch Upstream Port (prog-if 00 [Normal decode])
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
03:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
		LnkCap:	Port #8, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
		LnkCap:	Port #9, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
		LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCap:	Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s unlimited, L1 <64us
		LnkCap2: Supported Link Speeds: 2.5GT/s, Crosslink- Retimer- 2Retimers- DRS-
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <4us
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <512ns, L1 <4us
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
		LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
		LnkCap:	Port #0, Speed 16GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us

Does this mean all the lanes are connected properly to the GPU?

You're thinking about this too much; you're looking for an issue based on a misunderstanding.

That statement simply means that a PCIe slot can be x8 physically in size but only be wired for x1 for data. The second example means you can take a PCIe x4 device and plug it into an x16 PCIe slot and it will work normally at x4. PCIe devices will work in basically any PCIe slot as long as you can physically connect the device to the slot in question, and it will negotiate the proper link based on the available lanes. That statement doesn't tell you anything about PCIe 4.0, as it's merely talking about the number of lanes, not the generation of those lanes.
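
If you ever want to see what a card actually negotiated, rather than what a port is capable of, look at the LnkSta line instead of LnkCap. Roughly like this (the 0a:00.0 address is just a placeholder, substitute whatever lspci reports for your GPU):

# first find the GPU's bus address
lspci | grep -i vga
# then dump its link capability vs. current link status (0a:00.0 is a placeholder)
sudo lspci -vv -s 0a:00.0 | grep -E 'LnkCap:|LnkSta:'

LnkCap is the maximum the port or device supports; LnkSta is the width and speed the link is actually running at right now.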

If you're not experiencing a problem, I see no reason to look for one :upside_down_face:

If your GPU uses PCIe 4.0, your CPU uses PCIe 4.0, and your motherboard uses PCIe 4.0, then it'll use PCIe 4.0.
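
The generation is right there in the speed numbers of your output, since link speed maps onto PCIe generation. A quick reference (sudo is needed for the full capability dump):

# rough speed-to-generation mapping:
#   2.5 GT/s = PCIe 1.x,  5 GT/s = PCIe 2.0,  8 GT/s = PCIe 3.0,  16 GT/s = PCIe 4.0
sudo lspci -vv | grep 'LnkSta:'

So an LnkSta line reporting Speed 16GT/s, Width x16 on the GPU means a PCIe 4.0 x16 link.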

I was pretty sure there was no problem, and it seems to be working right, but what you say makes sense. I just want to make sure I am understanding it correctly.

That command doesn’t even run for me. :thinking:


That command and statement aren't about the PCIe generation, which is what you're concerned with here.

PCIe has a generation, which denotes the overall speed and feature set, and then there is the lane count, which can be anywhere from x1 to x16 for most devices.
Lanes are the number of available data paths.

PCIe also comes in a number of different slot configurations. The primary ones you'll encounter are M.2, standard PCIe x1/x4/x8/x16, and Mini PCIe; Thunderbolt also uses PCIe lanes.

In the case of standard PCIe slots, they can be physically x1, x4, x8, or x16 in size (there is a different physical size for each) but have only 1, 4, 8, or the entire 16 data lanes wired, depending on how the board is configured. PCIe allows a device to negotiate to find out how many lanes are available. For example, a GPU wired for x16 will work just fine in an x1-wired slot (though performance will be reduced).

M.2, Mini PCIe, and Thunderbolt all use between x1 and x4 lanes.

The generation is determined by the device first and the platform second; the link runs at the highest generation both sides support. If a PCIe 3.0 device is in a PCIe 4.0 motherboard/CPU combo, it'll run at PCIe 3.0. If you do the opposite and put a PCIe 4.0 device on a PCIe 3.0 platform, the platform determines the speed and the PCIe 4.0 device will run at PCIe 3.0. If they all match, it'll run at PCIe 4.0 in your case, assuming the GPU is PCIe 4.0. The only time this wouldn't hold is if the GPU or motherboard had enough damage to still function but not properly, which most people won't encounter, though overclockers can run into it at times.
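
If you do want to double-check whether anything negotiated below its maximum, sysfs exposes it directly. A minimal sketch, assuming the GPU sits at 0000:0a:00.0 (substitute the address lspci shows for your card):

# current_* is what the link negotiated, max_* is what the device is capable of
for f in current_link_speed max_link_speed current_link_width max_link_width; do
    echo -n "$f: "; cat /sys/bus/pci/devices/0000:0a:00.0/$f
done

On a healthy Gen4 x16 link you'd expect the speeds to read 16 GT/s and the widths to read 16. One caveat: many GPUs drop the link speed at idle to save power, so the current speed may only climb to 16 GT/s while the card is under load.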

I wouldn’t even be concerned with it :+1:

Idk if this clarifies things any further :sweat_smile: