Changes between Version 2 and Version 3 of ventana/PCIe


Timestamp:
10/30/2017 02:42:47 PM
Author:
Bobby Jones
Comment:

Updated wiki from cache

  • ventana/PCIe

    v2  v3
    13  13  [=#resources]
    14  14  == Resource Limits ==
    15      The i.MX6 CPU has an internal address translation unit (iATU) that connects the i.MX6 PCI host controller to the memory bus.  This iATU window size '''imposes a resource limit which can ultimately limit the number of PCI devices you can have on the bus'''. The iATU window is 16MB which can technically be broken up in a variety of ways but by default is used as:
        15  The i.MX6 CPU has an internal address translation unit (iATU) that connects the i.MX6 PCI host controller to the memory bus.  This iATU window size '''imposes a resource limit which can ultimately limit the number of PCI devices you can have on the bus'''. The iATU window is 16MB which can technically be broken up in a variety of ways but later kernels (v4.x) by default use it as:
    16  16   * 512KB config space
    17  17   * 64KB io space
    18  18   * 15MB mem space available for devices
    19  19
        20  These ranges are defined in the device tree file {{{imx6qdl.dtsi}}} under the {{{pcie}}} node. The last entry on each line dictates the size of the range.
        21  {{{#!c
        22  pcie: pcie@0x01000000 {
        23             compatible = "fsl,imx6q-pcie", "snps,dw-pcie";
        24             ...
        25             ranges = <0x81000000 0 0          0x01f80000 0 0x00010000   /* downstream I/O (64KB) */
        26                       0x82000000 0 0x01000000 0x01000000 0 0x00f00000>; /* non-prefetchable memory (15MB) */
        27             ...
        28  }
        29  }}}
        30
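The size cells in the {{{ranges}}} property are plain byte counts. As a quick sanity check (a sketch, not part of any BSP; the {{{human}}} helper is hypothetical), the two size cells above decode to the 64KB io space and 15MB mem space listed earlier — note the 0x00010000 I/O window is 64KB, not 1MB:

```python
# Decode the size cells from the pcie "ranges" property shown above.
# The last cell of each entry is the window size in bytes.

def human(size_bytes):
    """Render a byte count as MB or KB (sizes here are power-of-two multiples)."""
    if size_bytes % (1024 * 1024) == 0:
        return f"{size_bytes // (1024 * 1024)}MB"
    return f"{size_bytes // 1024}KB"

io_size  = 0x00010000   # downstream I/O window
mem_size = 0x00f00000   # non-prefetchable memory window

print(human(io_size))   # 64KB
print(human(mem_size))  # 15MB
```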
    20  31  PCI devices can request one or more io regions and one or more mem regions; however, when devices are behind a bridge (which they will be on a GW52xx, GW53xx, and GW54xx) the various resource requests must go through a PCI bridge, which imposes a 1MB granularity for mem regions.  On the GW52xx, GW53xx, and GW54xx, each PCIe socket is behind a bridge and thus has this 1MB granularity. The upstream port on a PCIe switch takes a mem resource itself, which ends up leaving 14 more 1MB windows available.
    21  32
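The arithmetic behind the 14 remaining windows can be sketched as follows (the numbers are taken from the text above; this is an illustration, not any driver's actual allocation logic):

```python
# 15MB of mem space is carved into 1MB windows by the bridge granularity;
# the PCIe switch's upstream port consumes one window itself.
MEM_SPACE_MB = 15           # mem space available for devices (see above)
BRIDGE_GRANULARITY_MB = 1   # bridges allocate mem regions in 1MB chunks
SWITCH_UPSTREAM_WINDOWS = 1

total_windows = MEM_SPACE_MB // BRIDGE_GRANULARITY_MB
available = total_windows - SWITCH_UPSTREAM_WINDOWS
print(available)  # 14
```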
    22      The outcome is complex and is likely best explained with a series of examples of what is possible. The following examples use various hardware combinations of:
        33  The outcome is complex and is likely best explained with a series of examples of what is possible. '''Remember that results may vary depending on BSP, kernel version, and specific radio models'''. The following examples use various hardware combinations of:
    23  34   * Baseboards:
    24  35    * GW54xx - 2 mem windows used by baseboard (1 for PCIe switch, 1 for eth1 GigE)
     
    39  50  setenv extra 'coherent_pool=4M'
    40  51  }}}
    41         * Depending on the card(s) and mode(s) you're using, this value can change, so 4M is a very safe bet (considering it's currently set to 256k by default). To verify that the kernel got this new setting, just do a 'cat /proc/cmdline' and you should see the 'coherent_pool=4M' sitting there.
        52     * Depending on the card(s) and mode(s) you're using, this value can change, so 4M is a very safe bet (considering it's currently set to 256k by default). To verify that the kernel got this new setting, just do a {{{cat /proc/cmdline}}} and you should see the {{{coherent_pool=4M}}} sitting there.
    42  53
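If you'd rather check the setting programmatically than eyeball {{{cat /proc/cmdline}}}, a minimal sketch (the {{{coherent_pool_size}}} helper is hypothetical; on a live system the string would come from reading {{{/proc/cmdline}}}):

```python
# Pick the coherent_pool= value out of a kernel command line string.
def coherent_pool_size(cmdline):
    for token in cmdline.split():
        if token.startswith("coherent_pool="):
            return token.split("=", 1)[1]
    return None  # not set; kernel default (currently 256k) applies

cmdline = "console=ttymxc1,115200 root=/dev/mmcblk0p1 coherent_pool=4M"
print(coherent_pool_size(cmdline))  # 4M
```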
    43      Various Examples:
    44       * GW54xx + 6x SR71e (slots fully loaded - 6 radios) (with 6 unused 1MB resource windows remaining)
    45       * GW54xx + 6x WLE300 (slots fully loaded - 6 radios)
    46       * GW54xx + 6x WLE900 (slots fully loaded - 6 radios) (with 'coherent_pool=4M' kernel cmdline argument)
    47       * GW54xx + GW16081 + 12x SR71e (slots fully loaded - 12 radios)
    48       * GW54xx + GW16081 + 5x WLE300 + 1x SR71e (6 radios)
    49       * GW54xx + GW16082 + 5x WLE300 + 4x DNMA H-5 (slots fully loaded - 9 radios)
    50       * GW54xx + GW16082 + 5x SR71e + 4x DNMA H-5 (slots fully loaded - 9 radios)
    51       * GW54xx + GW16081 + GW16082 + 9x SR71e + 4x DNMA H-5 (13 radios)
        54  '''Disclaimer: Correctness of configurations cannot be easily predicted and you should always verify compatibility yourself.'''
    52  55
    53  56  Other configurations are possible if, for example, someone wants to spread some PCIe devices across a couple of GW16081 mezzanines to allow many cellular radios (which use USB, not PCI). The basic rules can be summarized as follows:
    54       * i.MX6 has 14 available memory resources
    55       * Most Atheros radios seem to require 1 (e.g. SR71e, Option GTM671WFS), but some (e.g. WLE300 ath9k) require 2
        57   * Most Atheros radios seem to require 1 (e.g. SR71e, Option GTM671WFS), but some (e.g. WLE300 ath9k) require 2 or more
    56  58   * Each PCIe switch requires 1 (e.g. GW54xx/GW53xx/GW52xx has one on-board; add another if you have a GW16081 mezz)
    57  59   * 2nd onboard eth1 GigE requires 1
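The rules above can be turned into a rough budget check. This is a sketch under the assumptions listed above (per-device window counts are the estimates from the rules; the helper and table names are hypothetical, and real results vary by BSP and kernel):

```python
# Rough mem-window budget check based on the rules above:
# 14 windows total; each PCIe switch and the 2nd GigE take one;
# most Atheros radios take 1 window, WLE300 (ath9k) takes 2 or more.
TOTAL_WINDOWS = 14

WINDOWS_PER_DEVICE = {
    "pcie_switch": 1,   # on-board on GW52xx/GW53xx/GW54xx; +1 per GW16081 mezz
    "eth1_gige": 1,     # 2nd on-board GigE (GW54xx)
    "sr71e": 1,
    "wle300": 2,        # ath9k, needs 2 or more
}

def windows_used(devices):
    """Sum the 1MB mem windows a device mix needs (counts are estimates)."""
    return sum(WINDOWS_PER_DEVICE[name] * count for name, count in devices.items())

# GW54xx (on-board switch + eth1) fully loaded with 6x SR71e:
gw54xx_sr71e = {"pcie_switch": 1, "eth1_gige": 1, "sr71e": 6}
used = windows_used(gw54xx_sr71e)
print(TOTAL_WINDOWS - used)  # 6 -- matching the "6 unused windows" example above
```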
     
    59  61
    60  62  Notes:
    61       * The above examples refer to the PCIe host controller driver used in the OpenWrt (3.10+) kernel.  The 3.0.35 kernel used for our Yocto and Android BSPs reserves a 14MB mem resource window, which leaves one less region, affecting the examples above.
        63   * The above examples refer to the PCIe host controller driver used in the OpenWrt (3.10+) kernel. Ubuntu with a 4.13.0 kernel was also used for testing. Beware that the 3.0.35 kernel used for our Yocto and Android BSPs reserves a 14MB mem resource window, which leaves one less region, affecting the examples above.
        64
        65  === Identifying Allocation Failures ===
        66  Failures to allocate PCI resources manifest in a few different ways depending on BSP, kernel version, and model number. A failure to map devices behind the PCI switch is the most typical failure case:
        67   * Newer kernels (v4.x) attempt to map all requested devices behind a PCI switch at once, and if there are insufficient resources then none of these devices will enumerate. This failure is easily recognizable by kernel prints of the form:
        68  {{{
        69  [    0.366474] pci 0000:01:00.0: BAR 8: failed to assign [mem size 0x01300000]
        70  [    0.366501] pci 0000:01:00.0: BAR 0: failed to assign [mem size 0x00020000]
        71  [    0.366527] pci 0000:01:00.1: BAR 0: failed to assign [mem size 0x00020000]
        72  [    0.366574] pci 0000:02:01.0: BAR 8: failed to assign [mem size 0x00300000]
        73  [    0.366599] pci 0000:02:04.0: BAR 8: failed to assign [mem size 0x00300000]
        74  [    0.366624] pci 0000:02:05.0: BAR 8: failed to assign [mem size 0x00300000]
        75  [    0.366649] pci 0000:02:06.0: BAR 8: failed to assign [mem size 0x00300000]
        76  [    0.366674] pci 0000:02:07.0: BAR 8: failed to assign [mem size 0x00300000]
        77  [    0.366699] pci 0000:02:08.0: BAR 8: failed to assign [mem size 0x00100000]
        78  [    0.366723] pci 0000:02:09.0: BAR 8: failed to assign [mem size 0x00300000]
        79  }}}
        80   * Previous kernels will inconsistently map devices while resources last. In some cases the PCI devices may even appear to be mapped correctly judging by the kernel prints and the output of lspci; however, attempting to utilize them results in functional behavior from only a fraction of the devices. The best way to identify this case is to check your system log for PCI driver prints. Using the ath10k_pci driver as an example:
        81  {{{
        82  [   14.518701] ath10k_pci: probe of 0000:09:00.0 failed with error -5
        83  }}}
        84    This will be printed for each failed device.
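Both failure signatures can be picked out of a captured boot log with a small sketch (a hypothetical parser, not a shipped tool; the sample lines are the ones shown above):

```python
import re

# Match the two failure signatures described above:
#  - v4.x kernels: "pci BB:DD.F: BAR n: failed to assign [mem size 0x...]"
#  - older kernels: "<driver>: probe of BB:DD.F failed with error -N"
BAR_FAIL   = re.compile(r"pci (\S+): BAR \d+: failed to assign")
PROBE_FAIL = re.compile(r"probe of (\S+) failed with error")

def failed_devices(log_lines):
    """Return the sorted set of PCI addresses with allocation/probe failures."""
    devices = set()
    for line in log_lines:
        for pattern in (BAR_FAIL, PROBE_FAIL):
            match = pattern.search(line)
            if match:
                devices.add(match.group(1))
    return sorted(devices)

log = [
    "[    0.366474] pci 0000:01:00.0: BAR 8: failed to assign [mem size 0x01300000]",
    "[   14.518701] ath10k_pci: probe of 0000:09:00.0 failed with error -5",
]
print(failed_devices(log))  # ['0000:01:00.0', '0000:09:00.0']
```

On a live system the input would typically come from {{{dmesg}}} output or the system log.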
    62  85
    63  86  === Memory Calculation Example ===
     
    151  174
    152  175  PCIe reset signals are typically controlled by the kernel and software; however, they can at times be controlled via a GPIO, with more details [wiki:gpio#GPIOmapping here]
         176