Channel: Mellanox Interconnect Community: Message List

Re: x3 Card, Dual port. Does port 1 IB port 2 ETH work now


I'm also disappointed by that... :(

 

I think the cause is SR-IOV support and VMware ESXi's compact hypervisor kernel environment. I think Mellanox will add SR-IOV and iSER support to the IB driver in late 2014...

 

But I don't expect they will support EN (Ethernet) mode and its driver... :(

 

 


Re: Linux VM communication on IPoIB Network


I found some information indicating that this problem comes from the IPoIB feature set.

ESXi doesn't support the eIPoIB interface currently. I'll wait for it to be supported... :)

Does ConnectX®-3 EN card support LR4 QSFP+ transceivers?


Does the ConnectX®-3 EN card support LR4 QSFP+ transceivers, and will it support ER4?

Is it possible to switch ConnectX-3 VPI into Ethernet mode in Solaris 11.1?


I see there are guides on how to do this in Windows and Linux, but nothing for Solaris 11.1. Is it possible/supported?

IPoIB not working with MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]


Good afternoon,

I am trying to use IPoIB with an InfiniBand card (MT26428). The idea is to run a small cluster with MPI support over InfiniBand, using the SLURM scheduler.

 

I was using SGE as the scheduler, and for that it was necessary to use IPoIB. I guess it is the same with SLURM.

 

My problem is that I cannot use IPoIB with my cards. I install the drivers using:

 

mlnxofedinstall --all -n /root/myConfig.cfg

 

and the contents of myConfig.cfg are:

IPADDR_ib0=10.1.2.101
NETMASK_ib0=255.255.255.0
NETWORK_ib0=10.1.2.0
BROADCAST_ib0=10.1.2.255
ONBOOT_ib0=1

 

Apparently the install is successful; after a reboot I get:

root@node01:~# ifconfig ib0

ib0       Link encap:UNSPEC  HWaddr A0-00-01-00-FE-80-00-00-00-00-00-00-00-00-00-00 
          inet addr:10.1.2.101  Bcast:10.1.2.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:4092  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1024
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

 

I do the same on a second machine; this machine has IP 10.1.2.102.

 

But I cannot ping the second machine from the first one (and the other way around doesn't work either, of course).

 

I don't understand what I am missing in my configuration; is it maybe something related to routing?

root@node01:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.1.1.1        0.0.0.0         UG    100    0        0 eth0
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.1.2.0        0.0.0.0         255.255.255.0   U     0      0        0 ib0

 

I am not an expert in routing; I have tried to add the current machine as the gateway for the ib0 network:

route add -net 10.1.2.0 gw 10.1.2.101 netmask 255.255.255.0  dev  ib0

resulting in:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.1.1.1        0.0.0.0         UG    100    0        0 eth0
10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.1.2.0        10.1.2.101      255.255.255.0   UG    0      0        0 ib0
10.1.2.0        0.0.0.0         255.255.255.0   U     0      0        0 ib0

 

But this does not help.

 

I would be really grateful for any help, thanks in advance,

Best regards,

 

Andrea

Where is driver for Oracle Ent Linux 5 update 9?


Is there a compatible version for OEL 5U9?

Re: IPoIB not working with MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]


Hi!

You configured a class C subnet,

but your IP addresses are not in the same subnet range:

* 10.1.1.xxx vs. 10.1.2.xxx

And...

the 1st interface is eth0, but the 2nd interface is ib0.

(Is that Ethernet mode plus IPoIB mode?)

If you want to configure IPoIB, all the interfaces must be ib0, ib1, ...
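
For example, on a RHEL-style distro each node would get something like this in /etc/sysconfig/network-scripts/ifcfg-ib0 (just a sketch using the addresses from your post; the file location and syntax differ on other distros):

# ifcfg-ib0 - static IPoIB address for the first IB port
DEVICE=ib0
TYPE=InfiniBand
BOOTPROTO=static
IPADDR=10.1.2.101
NETMASK=255.255.255.0
ONBOOT=yes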

 

May good luck be with you... :)

Re: ESXi 5.5 ib_ipoib questions


Hi!

I think that derives from Mellanox's marketing policy, Intel's SR-IOV support, and VMware ESXi's limitations.

VMware moved from ESX to ESXi - that's the only distribution now.

 

That's a good choice for reducing patching, maintenance time, and security concerns.

 

But there are some problems porting Linux OFED to ESXi environments.

 

I tested the vSphere 4 OFED on ESX 4.x and ESXi 4.x.

The results were very similar,

but there were some differences...

 

Some commands on ESXi can't show status the way they do on ESX.

 

I think it's because ESXi doesn't have a regular Linux kernel like ESX's service console.

That causes problems when porting a Linux-based OFED to ESXi.

 

ESXi is a hypervisor.

 

That's different from a regular Linux kernel.

 

I also had problems with the SRP target under very, very high I/O load.

 

Mellanox added a memory tracking function in vSphere OFED 1.8.2 for ESXi 5.

They did a good job.

The SRP target on ESXi 5 was very stable, unlike on ESX and ESXi 4.x.

 

Well done, Mellanox.

 

But InfiniBand has been a good player on Linux, not on the hypervisor.

 

If VMware decides that RDMA is needed for vSphere, they will work with Mellanox and launch native RDMA support in their hypervisor...


Re: Please let me know 40Gb Ethernet card can split to 4 x 10Gb ?


Hi Ophir,

 

I had inquired previously about such functionality for ESXi 5 and referenced this Mellanox link - http://ir.mellanox.com/releasedetail.cfm?ReleaseID=601497

 

There are also other documents, based on ESXi 4, that describe the same functionality and also include screenshots - http://www.mellanox.com/related-docs/prod_software/IB_OFED_for_VI_3_5_and%20vSphere_4_installation_guide_1_30.pdf

 

I have not yet been able to get this functionality to work in ESXi 5. Do you know if it works? Thank you.

 

Matt

Re: IPoIB not working with MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]


Hi, thanks for your answer,

It appears strange to me, because with ordinary Ethernet IPv4 you can have two different interfaces on separate networks (that's what I'm already doing for the Ethernet networks on the master node). I thought it should be possible to define a third network using IPoIB; isn't it?

 

EDIT:

in fact I think the problem is more related to hardware recognition by the kernel, as I noticed the Link encap:UNSPEC in the ifconfig output. But I think the correct modules are loaded; or do you see something missing:

 

root@node02:~# lsmod | grep ib_
ib_ucm                 22539  0
ib_srp                 42367  0
scsi_transport_srp     20226  1 ib_srp
ib_ipoib              122897  0
ib_umad                22133  0
ib_cm                  42799  4 ib_ucm,ib_srp,ib_ipoib,rdma_cm
ib_sa                  33766  6 mlx4_ib,ib_srp,ib_ipoib,rdma_ucm,rdma_cm,ib_cm
ib_mad                 51572  4 mlx4_ib,ib_umad,ib_cm,ib_sa
ib_uverbs              60698  2 ib_ucm,rdma_ucm
ib_core               101271  13 ib_ucm,mlx5_ib,mlx4_ib,ib_srp,ib_ipoib,ib_umad,rdma_ucm,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad,ib_uverbs
ib_addr                18748  3 rdma_cm,ib_uverbs,ib_core
compat                 13709  19 ib_ucm,mlx5_ib,mlx5_core,mlx4_en,mlx4_ib,mlx4_core,ib_srp,scsi_transport_srp,ib_ipoib,ib_umad,rdma_ucm,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad,ib_uverbs,ib_core,ib_addr

 

 

EDIT 2:

actually, I hadn't started opensm; now I have, and IPoIB is working. But I get a warning during driver installation telling me that I should use lspci/setpci to raise the PCI MaxReadReq to 4096 for better performance. I do that, but after a reboot the value is back to what it was. Is there a way to fix this without adding it to an init.d script?
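
(In case it helps others: by "starting opensm" I mean something like the following on one node of the fabric - assuming the opensmd init script installed by MLNX OFED; the service name may differ by distro/version:)

/etc/init.d/opensmd start
ibstat | grep -i state      # port should now report State: Active / Physical state: LinkUp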

Setting MaxReadReq to 4096 bytes permanently


Hello,

When installing the Mellanox drivers for InfiniBand, the installer complains that the MaxReadReq of my PCI card is too low and that I need to set it to 4096. In the current session I do:

# IB_iface=$(lspci | grep Mellanox | awk '{print $1}')

# /usr/bin/setpci -s $IB_iface 68.W=4096

and it works.

 

BUT: after a reboot I have to do it again. How can I set it to 4096 permanently? Is there a way to set this value directly in the hardware?

 

I could fix it by adding the above two lines to an init script, but I guess there is a cleaner solution?
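
(For reference, the init-script workaround I mean would be something like this in /etc/rc.local - just a sketch, assuming a single Mellanox adapter in the box:)

# re-apply MaxReadReq to the Mellanox adapter on every boot
IB_iface=$(lspci | grep Mellanox | awk '{print $1}')
[ -n "$IB_iface" ] && /usr/bin/setpci -s $IB_iface 68.W=4096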

 

Thanks in advance

regards

 

Andrea

sRB-20210G factory default state and documentation?


Hello everyone,

 

I recently got my hands on a pair of Voltaire sRB-20210G from ebay and tried to configure them. In the process of doing so, I did a factory reset to get rid of any configurations that might interfere with my intended setup.

 

Now I'm stuck, since the switch apparently is unable to set the IP address of the board's management adapter. How do I proceed to regain access to the sRB?

 

Maybe even more helpful would be a hint as to where I might find documentation for it? Searching the web and Mellanox's page brought up a lot of references to the user guide, but not the user guide itself.

 

Any help is much appreciated!

Re: Please let me know 40Gb Ethernet card can split to 4 x 10Gb ?


Hi Matt, I'm not sure I understand what you wish to get from ESXi 5?

 

Thanks,

Ophir.

Re: IPoIB not working with MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]


Hi!

 

I thought you were in a vSphere ESXi environment...

 

If you are working in a Linux environment, use the eIPoIB protocol.

 

IPoIB supports IP applications, but there are some limitations.

 

But the new eIPoIB protocol can support DHCP, promiscuous mode, etc...

 

Good luck~!

Re: Where is driver for Oracle Ent Linux 5 update 9?


Re: Oracle enterprise linux 6.5 driver problem

How to clear the target connection via add_target


Hello experts,

I mistakenly created a target connection via add_target, as follows.

 

(ex.)

# echo "id_ext=0002c90300xxxxxx,ioc_guid=0002c90300xxxxxx,dgid=fe80000000000000000xxxxxx,pkey=ffff,service_id=000xxxxxx" > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target

 

Please show me how to clear this setup.

 

Thanks in advance,

Re: Need help installing OFED 2.1 on Centos 6.5


Hi there,

 

Did you increase the MaxReadReq size too?

Configuration settings for IPoIB performance


Hello,

 

I'm seeing very poor IPoIB performance with a QDR switch on RHEL 6.5, as measured with iperf.

[  3]  0.0-10.0 sec  3.31 GBytes  2.84 Gbits/sec

 

I implemented the changes recommended in the performance tuning guide; still no improvement.

Some of the suggestions I read online were to change the MTU size to the maximum of 65520 and to run iperf with multiple threads.

 

1. How do I change the IPoIB MTU size to the maximum? Is it the setting in the rdma.conf file, i.e. including the line IPOIB_MTU=65520?

2. Running ibdiagnet -r, I see the line rate at 10Gbps instead of 40. Is there a setting to change the line rate?

0xff12601bffff0000:0x0000000000000001 | 0xc003 | 0xffff | 0x00000b1b | =2048 | =10Gbps  | 45

0xff12601bffff0000:0x00000001ff4bebeb | 0xc002 | 0xffff | 0x00000b1b | =2048 | =10Gbps  | 1

-I---------------------------------------------------

-I- IPoIB Subnets Check

-I---------------------------------------------------

-I- Subnet: IPv4 PKey:0x7fff QKey:0x00000b1b MTU:2048Byte rate:10Gbps SL:0x00

-W- Suboptimal rate for group. Lowest member rate:40Gbps > group-rate:10Gbps

 

Thanks

Re: Configuration settings for IPoIB performance


10 Gbps is normal. All QDR interfaces use four lanes.

Therefore 10 x 4 equals 40 Gbps.

 

If you want to configure MTU=65520, you must enable CM (connected) mode.
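
For example, on Linux something like this (a rough sketch; the interface name ib0 is assumed, and the persistent way to configure it differs per distro and OFED version):

echo connected > /sys/class/net/ib0/mode    # switch IPoIB from datagram to connected mode
ifconfig ib0 mtu 65520                      # the large IPoIB MTU is only allowed in connected mode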

I'm not a Linux admin, but a virtualization one.

 

You can find more information in Mellanox's OFED 2.1 for Linux user guide.

 

good luck...:)
