All of the boards can support multiple NVMe drives. However, you won't get full bandwidth to all of the devices concurrently. In most cases that is probably fine, but if you are doing something like putting all your drives in a RAID array, you will likely see a bandwidth hit. Whether that translates into a real-world performance impact depends on your use case.
AM5 boards that support 4 NVMe drives almost always fall into one of two layouts: either one drive is connected directly to the CPU's lanes and the rest are routed through the chipset's four-lane uplink, or two drives get full CPU bandwidth and the other two go through the chipset.
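To see why the chipset layout matters, here's a back-of-envelope calculation. The per-lane throughput numbers are approximations after encoding overhead (real figures vary slightly by platform), and the "three drives behind one x4 uplink" split is just an illustrative assumption:

```python
# Approximate usable PCIe throughput per lane, in GB/s (assumed values
# after encoding overhead; actual numbers differ slightly).
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bw(gen: int, lanes: int) -> float:
    """Theoretical throughput of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# Hypothetical layout: one Gen4 x4 drive on CPU lanes, three more
# Gen4 x4 drives sharing the chipset's Gen4 x4 uplink.
cpu_drive = link_bw(4, 4)              # full bandwidth, ~7.9 GB/s
chipset_uplink = link_bw(4, 4)         # shared ceiling for the other three
per_chipset_drive = chipset_uplink / 3 # ~2.6 GB/s each if all are busy

print(f"CPU-attached drive:       {cpu_drive:.1f} GB/s")
print(f"Chipset uplink (shared):  {chipset_uplink:.1f} GB/s")
print(f"Per drive when all busy:  {per_chipset_drive:.1f} GB/s")
```

Each chipset-attached drive still negotiates a full x4 link to the chipset; the bottleneck only shows up when several of them move data through the shared uplink at once, which is exactly the RAID scenario mentioned above.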
From what I've seen in the wild, it's a trade-off: more NVMe slots versus more PCIe slots (or slot speed) versus USB ports. I learned this the hard way while building my media server with a board that shared some of its lanes between one of the slots and an NVMe socket. That said, an NVMe M.2 slot is just a PCIe slot in a different form factor. You can get adapters to convert one into a PCIe slot. I did this to add one more card to my media server and it's working fine.
My MSI X870E Ace Max has 5 NVMe slots. That said, its bottom PCIe slot shares bandwidth with the 5th NVMe slot. You will often see something like the following in the manual or specs:
** M.2_1 & PCI_E3 share the bandwidth. M.2_1 will run at x2 speed when installing device in the PCI_E3 slot. You can switch M.2_1 slot to x4 in the BIOS, but this will disable the PCI_E3 slot.
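You can check whether a drive actually fell back to x2 by looking at the negotiated link width. On a real system you'd run `sudo lspci -vv -s <bus:dev.fn> | grep -E 'LnkCap|LnkSta'` against your NVMe controller; the snippet below parses a hypothetical sample of that output so it's self-contained:

```shell
# Hypothetical sample of the 'LnkSta' line lspci prints for an NVMe
# controller whose slot has downgraded to x2 (as the manual describes).
sample='LnkSta: Speed 16GT/s, Width x2 (downgraded)'

# Extract the negotiated width; 'x2' here means the M.2_1 slot is
# sharing lanes, 'x4' would mean it has the full link.
width=$(echo "$sample" | grep -o 'Width x[0-9]*' | cut -d' ' -f2)
echo "negotiated width: $width"
```

Comparing `LnkSta` (negotiated) against `LnkCap` (maximum) is the quickest way to confirm the BIOS lane-sharing behavior described in the manual.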
This varies a lot from board to board and between manufacturers. If you need a lot of lanes, you're better off going with a server-grade motherboard or HEDT. My Proxmox server has a lot of everything, but it's a Threadripper 3950x.