Mellanox Interconnect Community: Message List

Re: igmp mlag vlan config right?

  • You will need to enable IGMP snooping globally. IGMP snooping has a global admin state and a per-VLAN admin state; both need to be enabled for IGMP snooping to take effect on a specific VLAN.

       switch(config)# ip igmp snooping 

 

  • Enable IGMP snooping and the IGMP querier on VLAN 1

        switch(config) # vlan 1 ip igmp snooping

        switch(config) # vlan 1 ip igmp snooping querier

 

  • The IPL port channel is an mrouter port by default, so you will need to configure the MPOs (MLAG port channels) as mrouter ports as well

        switch(config) # vlan 1 ip igmp snooping mrouter interface mlag-port-channel 1 
        switch(config) # vlan 1 ip igmp snooping mrouter interface mlag-port-channel 2

        switch(config) # vlan 1 ip igmp snooping mrouter interface mlag-port-channel 3

        switch(config) # vlan 1 ip igmp snooping mrouter interface mlag-port-channel 4

 

  • To enable fast-leave processing on a specific interface

       switch(config) # interface mlag-port-channel 1 ip igmp snooping fast-leave

 

  • To create a static group

       ip igmp snooping static-group <IP address> interface <type> <number> [source <source-IP>]

       switch(config)# ip igmp snooping static-group 232.43.211.234 interface mlag-port-channel 1 source 192.168.1.1 192.168.1.2 192.168.1.3 192.168.1.4

 

  • Must the querier IP address be set on VLAN 1?

       It is the source address for the IGMP queries, so it is always recommended to have an address set on VLAN 1.

 

  • Is the configuration the same on both SX1012 switches?

         Yes

 

Please also refer to the Mellanox Onyx User Manual for more details about configuring IGMP snooping on Mellanox switches.
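
To verify the resulting state on each switch, the show commands below are a quick check (a sketch; the exact command set and output format are per the Onyx UM, and the mrouter ports appear in the per-VLAN snooping output):

       switch (config) # show ip igmp snooping

       switch (config) # show ip igmp snooping groups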


Mellanox firmware upgrade problem


Hello, I need your help to solve a problem I ran into at a customer site.

I was not able to perform the Mellanox firmware upgrade on VMware ESXi 6.5 using the following commands.

 

./mlxup --online

./mlxup -i /tools/***.bin

 

./mlxfwmanager

 

The current firmware version is 14.16.1006; the version on the web page is 14.22.1002.

(Attached screenshot: 1.jpg)
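
Before flashing, it is worth confirming that the tool detects the adapter and that the image PSID matches (a sketch; mlxfwmanager's query mode lists the current vs. available firmware per device):

./mlxfwmanager --query

If the PSID reported there does not match the PSID of the .bin downloaded from the web page, the tools will normally refuse to burn it.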


Neo, error 'Device Management Discovery'


We use Mellanox NEO version 2.1.0-5 with an open license, and I have a question.

In Events we periodically get the following error on most of the Mellanox switches:

Job for 'Device Management Discovery' failed. Error response: 'Command failed'

Please help me resolve this error.

Re: Neo, error 'Device Management Discovery'


Hi,

This error indicates an issue with the discovery of the switch.

Which switches are you using?

In general, the UFM agent (installed by default on Mellanox switches) is responsible for the initial discovery.

NEO uses multicast communication with all the switches in the fabric, and the UFM agent is responsible for acknowledging.

If you would like to add a switch to NEO manually, you should configure SNMP and LLDP on the switches.

 

Thanks,

Samer

Re: Neo, error 'Device Management Discovery'


Hi, Samer!

I use Mellanox switches: MSN2700B and MSN2410.

The switches are located in a different network than the NEO system, so I think that is why multicast does not work.

The switches were added to NEO manually, and the communication status is OK. In Settings -> Device Access I configured HTTP, SSH, and SNMP.

All of these are accessible from NEO.

I turned off Agent Discovery and LLDP Discovery in Settings because I was getting errors like 'Device Management Discovery'.

Re: Neo, error 'Device Management Discovery'


Hi Nikolay,

 

The IP Discovery provider can operate in two modes:

1. Auto-Discovery – automatic discovery of devices found within a specified range of IP addresses using the Mellanox UFM agent. In this mode, the Mellanox NEO controller discovers all Mellanox Onyx switches by sending multicast messages. Every Mellanox Onyx switch responds to the controller with its IP address, and this information is stored in the controller repository.

Note: Mellanox NEO auto-discovery requires multicast traffic to be enabled on the managed switches.

2. Manual IP scan – manual discovery of devices of one or more types found within a specified range of IP addresses, run according to the following algorithm:

• NEO checks for connectivity with a ping.

• If the device is alive, NEO scans it and classifies it according to its type, using the following protocols in order:

1. SNMP classification (SNMP v2, SNMP v3 using global credentials)

2. SSH connectivity with Linux credentials

3. WinRM with Windows classification

In case you cannot use multicast traffic (a requirement for auto-discovery), adding the switches manually is the only option.
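
If you want to confirm by hand that a switch would pass the manual-scan classification, you can mimic the first steps from the NEO host (a sketch using standard net-snmp and OpenSSH tools; 192.0.2.10 and the 'public' community string are placeholders):

ping -c 3 192.0.2.10

snmpget -v2c -c public 192.0.2.10 1.3.6.1.2.1.1.1.0     # sysDescr.0, the usual SNMP classification probe

ssh admin@192.0.2.10 "show version"                     # SSH reachability with the switch credentials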

 

Thanks,

Samer

Re: Neo, error 'Device Management Discovery'


OK. As I said, I added the switches manually, but I still see errors on the Events page.

How can I eliminate these errors?


Re: Neo, error 'Device Management Discovery'


What is the exact error that you see in the event logs?

Can you send the output?

Is auto-discovery off?

Re: Neo, error 'Device Management Discovery'


(Attached: AutoDiscoveryNeo.jpg, EventsError.jpg)

Auto-discovery is off. You can see the error in the attached pictures.

Re: Neo, error 'Device Management Discovery'


Can you send a picture of the tasks/jobs window?


Setup Mellanox MSX1012B in HA environment.


Hello Community,

 

I am new to Mellanox switches. I am trying to configure two MSX1012B units in an HA environment. These switches will sit behind two Juniper firewalls serving the server farm.

I followed the configuration guide, but I am a little confused about whether I need to configure an IPL and MLAG to meet my requirement. Below is the diagram of what I want to achieve; Switch-A and Switch-B are the MSX1012B units.

Looking forward to your suggestions. (Attached diagram: IDC-for-community - Page 1.png)

 

Thank you!
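
For reference, my current understanding from the configuration guide is that an MLAG pair needs an IPL between the two switches plus MLAG itself, with the firewall/server-facing ports configured as MLAG port channels, roughly like the per-switch outline below (a sketch only; the VLAN, addresses, port numbers, and VIP name are placeholders, and the authoritative sequence is in the Mellanox MLAG configuration guide). Please correct me if this is the wrong direction:

        switch (config) # lacp

        switch (config) # protocol mlag

        switch (config) # interface port-channel 1

        switch (config) # interface ethernet 1/11 channel-group 1 mode active

        switch (config) # vlan 4000

        switch (config) # interface vlan 4000

        switch (config interface vlan 4000) # ip address 10.10.10.1 255.255.255.252

        switch (config) # interface port-channel 1 ipl 1

        switch (config) # interface vlan 4000 ipl 1 peer-address 10.10.10.2

        switch (config) # mlag-vip MY-VIP ip 192.168.10.100 /24 force

        switch (config) # no mlag shutdown

        switch (config) # interface mlag-port-channel 10

(With 10.10.10.2/.1 swapped correspondingly on the second switch; the mlag-vip command is identical on both.)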

 

 

Re: Neo, error 'Device Management Discovery'


For ETH discovery to work properly, you must configure LLDP on all managed devices such as the MSN2700B and MSN2410:

1) Configure LLDP on the switches.

2) Turn on LLDP Discovery.

3) Restart the NEO service by running the following command:

/opt/neo/neoservice restart

4) Monitor whether the same issue occurs.
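
For step 1, the switch side is typically just the global LLDP command (a sketch; per-interface LLDP admin state and the exact show commands vary by Onyx release):

switch (config) # lldp

switch (config) # show lldp interfaces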

Re: Neo, error 'Device Management Discovery'


Done, but the same issue occurred.

The following was already configured on all the switches:

##
## LLDP configuration
##
lldp


Re: Neo, error 'Device Management Discovery'


Hi,

I saw that you opened support case #474466 through the IBS account.

We will continue debugging through the support case.

 

Thanks,

Samer

Re: Firmware for MHJH29?


Hello Romain -

   Good day to you...

Could you get the board_id with "ibv_devinfo",

and the part number with:

> lspci | grep Mell       NOTE: note down the bus:dev.func of the device

> lspci -s bus:dev.func -xxxvvv

and look in the output for the read-only fields:

                        [PN] Part number:
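
For example (a sketch; 0a:00.0 is a placeholder bus:dev.func):

> ibv_devinfo | grep board_id

> lspci | grep Mell

> lspci -s 0a:00.0 -xxxvvv | grep 'Part number'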

 

If you could update this thread with this information it would be very helpful.

thanks - steve

Re: Problem with symbol error counter


Usually, symbol errors are caused by some physical condition and in many cases are fixed by a) reseating BOTH ends of the cable or b) replacing the cable. If you are using an OEM solution, after trying to reseat the cables you might contact the hardware vendor to check whether your equipment is under warranty, or open a case with them.

To reset the fabric counters, use the 'ibdiagnet -pc' command; the same command is also the one to use to collect information about the fabric. ibqueryerrors, despite being shipped in Mellanox OFED, shouldn't be used, as it is no longer under development. ibdiagnet is a Swiss Army knife.
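
In practice the cycle looks like this (a sketch; ibdiagnet ships with MLNX_OFED):

ibdiagnet -pc        # clear (reset) all port counters in the fabric

ibdiagnet            # later, re-scan the fabric and collect the counters for comparison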

missing ifup-ib in latest release?


Hi, I have some old cluster nodes that were working fine under previous versions of CentOS 7 (I think it was CentOS 7.3 before the update), but after a recent update to CentOS 7.5 I can't seem to get the interface to come up. I reinstalled the latest MLNX_OFED drivers (MLNX_OFED_LINUX-4.3-3.0.2.1-rhel7.5-x86_64), which installed properly. I see the card in lspci, and the kernel modules seem to be loaded as well. However, I can't seem to bring up the interface. Doing an ifup I get this:

 

ifup ib0

ERROR     : [/etc/sysconfig/network-scripts/ifup-eth] Device ib0 does not seem to be present, delaying initialization.

 

It seemed weird to me that it was trying to use the ifup-eth code instead of the ifup-ib code to bring up the interface. When I looked for this file, I didn't see it on the system with the MLNX_OFED software installed. If I don't install MLNX_OFED and just leave the CentOS drivers installed, the card comes up fine. I also notice that this file comes from the CentOS rdma-core package:

 

# rpm -qf /etc/sysconfig/network-scripts/ifup-ib

rdma-core-15-7.el7_5.x86_64

 

When I look at the machine with MLNX_OFED installed, I don't see an rdma-core package...

 

 

# rpm -qa | grep rdma

librdmacm-41mlnx1-OFED.4.2.0.1.3.43302.x86_64

librdmacm-utils-41mlnx1-OFED.4.2.0.1.3.43302.x86_64

librdmacm-devel-41mlnx1-OFED.4.2.0.1.3.43302.x86_64

 

So I'm wondering if I am missing something here. With previous versions I didn't have any issues getting it installed and working. Does anyone have advice on what I should look at next to figure this out? Thanks.
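
For reference, here is roughly how I have been checking the module and interface state (a sketch; ib_ipoib is the standard IPoIB kernel module that creates ib0, and the script path is the stock CentOS 7 one):

lsmod | grep ib_ipoib || modprobe ib_ipoib       # the IPoIB module must be loaded for ib0 to exist

ip link show ib0                                 # confirm the kernel actually created the interface

ls -l /etc/sysconfig/network-scripts/ifup-ib     # present only if some package ships it

rpm -qf /etc/sysconfig/network-scripts/ifup-ib   # rdma-core on stock CentOS 7.5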

Re: ConnectX-5 EN vRouter Offload


Hi Marc,

 

Do you mean that the product brief has an over-promising mistake? Contrail cannot use OVS.

 

Best regards,
