Greetings,
We have some nodes (Dell R415s running RHEL 6.8) with Connect-IB cards in a PCIe 2.0 x16 slot (the only one available) and can't seem to get more than 45 Gbit/s using ib_send_bw. I have two of the nodes connected directly with a new FDR cable and the SM running on one of them. I have updated the BIOS, the OFED and the HCA firmware on both nodes, but I still can't get the full FDR bandwidth. The Connect-IB product page (http://www.mellanox.com/page/products_dyn?product_family=142&mtag=connect_ib) states the following:
"Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions."
Since a PCIe 2.0 x16 slot can carry roughly 64 Gbit/s in one direction (after 8b/10b encoding), shouldn't I be able to achieve full FDR (~54ish Gbit/s), as the product page implies? Or am I wrong, and is there some extra overhead that reduces the usable bandwidth for PCIe 2.0 x16 vs. PCIe 3.0 x16?
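My back-of-the-envelope math, assuming roughly 24 bytes of TLP overhead on every 128-byte payload (the exact overhead figure is my guess), goes like this:
-------------------------------------------
# Rough ceiling for a PCIe 2.0 x16 slot. The 24-byte per-TLP overhead
# (framing + sequence number + header + LCRC) is an assumption on my part.
lanes = 16
gt_per_lane = 5.0                 # PCIe 2.0: 5 GT/s per lane
encoding = 8.0 / 10.0             # 8b/10b line encoding
raw_gbit = lanes * gt_per_lane * encoding     # ~64 Gbit/s of data-layer bandwidth
max_payload = 128.0               # bytes, Max Payload Size reported by mlnx_tune
tlp_overhead = 24.0               # bytes per TLP (assumed)
efficiency = max_payload / (max_payload + tlp_overhead)
print("raw data rate  : %.1f Gbit/s" % raw_gbit)
print("TLP efficiency : %.1f%%" % (100 * efficiency))
print("usable (approx): %.1f Gbit/s" % (raw_gbit * efficiency))   # ~54 Gbit/s
-------------------------------------------
If that math is roughly right, the slot tops out right around the FDR data rate even before flow-control and completion traffic are counted, so maybe 45 Gbit/s is not that far off, but I would like confirmation that I am reading this correctly.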
I have gone through the Performance Tuning for Mellanox Adapters guide and there isn't much more I can try based on it. The latest BIOS has nowhere near the number of settings the guide suggests tweaking. I have also tried mlnx_tune and get one warning:
----------------------------------------------------------
Connect-IB Device Status on PCI 01:00.0
FW version 10.16.1200
OK: PCI Width x16
Warning: PCI Speed 5GT/s >>> PCI width status is below PCI capabilities. Check PCI configuration in BIOS. <--------------
PCI Max Payload Size 128
PCI Max Read Request 512
Local CPUs list [0, 1, 2, 3, 4, 5]
----------------------------------------------------------
But this is probably expected, since I am using a PCIe 2.0 x16 slot (PCIe 2.0 tops out at 5 GT/s per lane), right?
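For what it's worth, the negotiated link can also be double-checked straight from PCI config space; here is a quick sketch (it assumes the default PCI domain 0000 in front of the 01:00.0 address shown by mlnx_tune, and needs root to read past the first 64 bytes of config space):
-------------------------------------------
# Decode Link Capabilities / Link Status from the HCA's PCIe capability
# to confirm the negotiated speed and width match what the slot can do.
import struct

SPEEDS = {1: "2.5 GT/s", 2: "5 GT/s", 3: "8 GT/s"}

with open("/sys/bus/pci/devices/0000:01:00.0/config", "rb") as f:
    cfg = bytearray(f.read(256))

ptr = cfg[0x34]                          # capabilities list pointer
while ptr:
    cap_id, nxt = cfg[ptr], cfg[ptr + 1]
    if cap_id == 0x10:                   # PCI Express capability
        link_cap = struct.unpack_from("<I", bytes(cfg), ptr + 0x0C)[0]
        link_sta = struct.unpack_from("<H", bytes(cfg), ptr + 0x12)[0]
        print("capable   : %s x%d" % (SPEEDS.get(link_cap & 0xF, "?"),
                                      (link_cap >> 4) & 0x3F))
        print("negotiated: %s x%d" % (SPEEDS.get(link_sta & 0xF, "?"),
                                      (link_sta >> 4) & 0x3F))
        break
    ptr = nxt
-------------------------------------------
I would expect that to report 5 GT/s x16 negotiated on these nodes, matching the mlnx_tune output above.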
Here is the output of ibv_devinfo:
-------------------------------------------
hca_id: mlx5_0
        transport:              InfiniBand (0)
        fw_ver:                 10.16.1200
        node_guid:              f452:1403:002e:eb40
        sys_image_guid:         f452:1403:002e:eb40
        vendor_id:              0x02c9
        vendor_part_id:         4113
        hw_ver:                 0x0
        board_id:               MT_1220110019
        phys_port_cnt:          1
        Device ports:
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        4096 (5)
                        active_mtu:     4096 (5)
                        sm_lid:         1
                        port_lid:       2
                        port_lmc:       0x00
                        link_layer:     InfiniBand
-------------------------------------------
and iblinkinfo:
-------------------------------------------
CA: A HCA-1:
0xf4521403002ee9f0 1 1[ ] ==( 4X 14.0625 Gbps Active/ LinkUp)==> 2 1[ ] "B" ( )
CA: B HCA-1:
0xf4521403002eeb40 2 1[ ] ==( 4X 14.0625 Gbps Active/ LinkUp)==> 1 1[ ] "A" ( )
-------------------------------------------
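As far as I can tell, the IB link itself came up at the full FDR rate, so the fabric side should not be the limit; 4X at 14.0625 Gb/s per lane with 64b/66b encoding works out to:
-------------------------------------------
# Data rate of the 4X FDR link reported by iblinkinfo above.
lanes = 4
signal_rate = 14.0625            # Gb/s per lane (FDR)
encoding = 64.0 / 66.0           # FDR uses 64b/66b encoding
print("%.1f Gbit/s" % (lanes * signal_rate * encoding))   # ~54.5 Gbit/s
-------------------------------------------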
Can anyone tell me whether this is the best I can expect, or is there something else I can change to get closer to full FDR bandwidth with these HCAs?
Thanks in advance!
Eric