Re: Configuration settings for IPoIB performance

Pure CentOS 6.5 x86_64:

 

1. echo "connected" > /sys/class/net/ib0/mode

2. ifconfig ib0 mtu 65520

 

No other tuning! No external OFED install!
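For reference, both settings can be made persistent across reboots with the stock CentOS 6 network scripts. A minimal sketch, assuming the rdma package's ifup-ib script (which honors CONNECTED_MODE); addresses are placeholders for your fabric:

# /etc/sysconfig/network-scripts/ifcfg-ib0
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.10.11.2
NETMASK=255.255.255.0
CONNECTED_MODE=yes   # same effect as echo "connected" > /sys/class/net/ib0/mode
MTU=65520            # same effect as ifconfig ib0 mtu 65520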

 

service rdma start
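To bring the RDMA stack up automatically at boot instead of starting it by hand (assuming the stock SysV init script shipped in the rdma package):

chkconfig rdma on     # enable the service at boot
service rdma status   # confirm the stack is loaded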

 

iperf -P 1 -c 172.10.11.3

------------------------------------------------------------
Client connecting to 172.10.11.3, TCP port 5001
TCP window size:  630 KByte (default)
------------------------------------------------------------
[  3] local 172.10.11.2 port 43968 connected with 172.10.11.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  11.6 GBytes  10.0 Gbits/sec
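For completeness: the run above assumes the default iperf listener was already started on the other node, nothing unusual on that side:

iperf -s   # on 172.10.11.3, listens on the default TCP port 5001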

 

netperf -H 172.10.11.3

MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.10.11.3 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

87380  16384  16384    10.00    9954.79
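netperf reports in units of 10^6 bits/sec, so 9954.79 is ~9.95 Gbit/s, in line with the iperf number. As with iperf, this assumes the stock listener was running on the remote node:

netserver   # on 172.10.11.3, default control port 12865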

 

UPDATE:

 

As you mentioned multiple threads, here it goes:

 

iperf -c 172.20.20.3 -P 4

------------------------------------------------------------
Client connecting to 172.20.20.3, TCP port 5001
TCP window size:  645 KByte (default)
------------------------------------------------------------
[  5] local 172.20.20.2 port 53514 connected with 172.20.20.3 port 5001
[  3] local 172.20.20.2 port 53513 connected with 172.20.20.3 port 5001
[  4] local 172.20.20.2 port 53512 connected with 172.20.20.3 port 5001
[  6] local 172.20.20.2 port 53515 connected with 172.20.20.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  6.11 GBytes  5.25 Gbits/sec
[  3]  0.0-10.0 sec  5.42 GBytes  4.66 Gbits/sec
[  4]  0.0-10.0 sec  6.70 GBytes  5.75 Gbits/sec
[  6]  0.0-10.0 sec  6.55 GBytes  5.63 Gbits/sec
[SUM]  0.0-10.0 sec  24.8 GBytes  21.3 Gbits/sec

 

~21 Gbit/s seems to be the ceiling on this hardware (E5-2620, 128 GB) without further tuning, no matter the number of threads (2+).

Past that point, more threads actually lower the total bandwidth [SUM]: for example, 64 threads max out at only ~17 Gbit/s.
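One candidate for that further tuning (a sketch only, not something verified here): keep the iperf threads on the NUMA node the HCA hangs off, since cross-socket scheduling on a dual-socket E5-2620 box can eat into IPoIB throughput. The device name mlx4_0 and node 0 below are assumptions; check your own topology first.

cat /sys/class/infiniband/mlx4_0/device/numa_node   # which NUMA node the HCA sits on (assumes an mlx4 HCA)
numactl --cpunodebind=0 --membind=0 iperf -c 172.20.20.3 -P 4   # pin the client to that node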

