Changes between Version 17 and Version 18 of PCI


Timestamp: 09/12/2023 08:53:03 PM
Author: Tim Harvey
Comment: add malibu details

See also:
 * [wiki:malibu/PCIe]
 * [wiki:venice/PCIe]
 * [wiki:newport/PCIe]
     
Gateworks PCI support:
||= Product Family =||= Capabilities =||
|| Malibu  GW8901 (CN9130) || PCIe Gen3 MPS=512 ||
|| ||
|| Venice  GW74xx (IMX8MP) || PCIe Gen2 ^^^4^^^ MPS=128 ||
|| Venice  GW71xx/GW72xx/GW73xx (IMX8MM) || PCIe Gen2 ^^^3^^^ MPS=128 ||
|| ||
|| Newport GW64xx   || PCIe Gen3 MPS=128 ||
|| Newport GW61xx/GW62xx/GW63xx || PCIe Gen2 ^^^2^^^ MPS=128 ||
|| ||
|| Ventana          || PCIe Gen1 ^^^1^^^ ||
     
[=#throughput]
= PCI Throughput
There are several factors that can affect PCIe performance. The most obvious factor is how many lanes (pairs of TX/RX SERDES channels) you have: 1x, 2x, 4x, etc., which are pure multipliers of the rates that can be achieved over a single lane. The next most obvious factor is what generation of PCIe your host controller (root complex, or RC) and device (endpoint, or EP) support: Gen1, Gen2, Gen3, etc., which determines the transfer rate and data transfer overhead. Digging deeper, the Transaction Layer Packet (TLP) overhead is less obvious, as RCs and EPs have varying max payload sizes. Digging even deeper, you may run into limits that stem from the implementation of the host controller and SoC resources.

MiniPCIe connectors provide a single lane (1x), whereas M.2 sockets can allow additional lanes depending on the socket.
     
 * !MaxPayload under !DevCtl indicates what the device is configured for (128B payloads here)

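These fields can be inspected with lspci; a minimal example follows. The bus address `01:00.0` is a placeholder, take yours from plain `lspci` output:

```shell
# Show negotiated link speed/width (LnkCap/LnkSta) and payload
# configuration (MaxPayload under DevCap/DevCtl) for one device.
# "01:00.0" is a placeholder address - pick yours from `lspci`.
lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:|MaxPayload'
```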
Taking into account lane encoding and 128B payloads (Venice IMX8M, Newport CN803x, Ventana IMX6), the theoretical max per lane would be:
||= Gen =||= Transfer rate (MT/s) =||= Encoding =||= TLP rate =||
|| 1 || 2500 || 8B/10B 80% = 250MB/s || 86% 215MB/s ||
|| 2 || 5000 || 8B/10B 80% = 500MB/s || 86% 430MB/s ||
|| 3 || 8000 || 128B/130B 98.46% = 984.6MB/s || 86% 846.8MB/s ||

Taking into account lane encoding and 512B payloads (Malibu CN913x), the theoretical max per lane would be:
||= Gen =||= Transfer rate (MT/s) =||= Encoding =||= TLP rate =||
|| 1 || 2500 || 8B/10B 80% = 250MB/s || 96% 240MB/s ||
|| 2 || 5000 || 8B/10B 80% = 500MB/s || 96% 480MB/s ||
|| 3 || 8000 || 128B/130B 98.46% = 984.6MB/s || 96% 945.2MB/s ||
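The figures above can be reproduced with a short calculation. A sketch follows; the ~20-byte TLP overhead (header + sequence number + LCRC + framing) is an assumption that yields roughly the 86%/96% efficiencies quoted, and actual overhead varies with TLP type:

```python
# Sketch of the theoretical per-lane PCIe throughput math used above.
# Assumption: ~20 bytes of TLP overhead (12-16B header + 2B sequence
# number + 4B LCRC + framing); real overhead varies by TLP type.
TLP_OVERHEAD = 20  # bytes (assumed)

# generation -> (transfer rate in MT/s, line-encoding efficiency)
GEN = {1: (2500, 8 / 10), 2: (5000, 8 / 10), 3: (8000, 128 / 130)}

def per_lane_mbytes(gen, max_payload):
    """Return (raw MB/s, TLP MB/s) for a single lane."""
    rate_mt, encoding = GEN[gen]
    raw = rate_mt / 8 * encoding  # 1 bit/transfer/lane -> MB/s after encoding
    eff = max_payload / (max_payload + TLP_OVERHEAD)
    return raw, raw * eff

for gen in (1, 2, 3):
    for mps in (128, 512):
        raw, tlp = per_lane_mbytes(gen, mps)
        print(f"Gen{gen} x1 MPS={mps}: raw {raw:.1f} MB/s, TLP ~{tlp:.0f} MB/s")
```

Multiply the per-lane result by the link width (x1, x2, x4, ...) for multi-lane links.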

References: