Channel: Mellanox Interconnect Community: Message List

Testing MCX455A bandwidth between Dell servers


I am testing MCX455A bandwidth between Dell servers using the commands:

Server: ib_write_bw -d mlx5_0 -i 1 -a -F
and

Client: ib_write_bw -d mlx5_0 -i 1 -a -F <server address>

The results look fine until # bytes = 65536.

Then I get the message:

mlx5: usb1 : got completion with errors

00000000 00000000 00000000 00000000

00000000 00000000 00000000 00000000

00000000 00000000 00000000 00000000

00000000 00008813 08000029 40807dd3

Problems with warm up

 

This test used to work to completion and I don't think I've changed the configuration.  I have the same problem with multiple cards (I have 13).


ibv_post_send is slow in ping-pong


I tried to measure how long each ibv_post_send (IB_WR_RDMA_WRITE) call takes in the default rping program.

I used clock_gettime to measure it, and the results show that each ibv_post_send call takes around 170-180 nanoseconds. I expected it to be faster. Does anyone have ideas on how to tune this? What factors could affect it? Many thanks in advance.
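
For reference, here is a minimal sketch of how I take the measurement (illustrative only, not the actual rping code; it assumes an already-connected RC QP, a registered MR, and a remote address/rkey exchanged out of band, and all names are made up for the example):

/* Build with: gcc time_post.c -libverbs */
#include <stdint.h>
#include <time.h>
#include <infiniband/verbs.h>

/* Returns the time spent inside ibv_post_send(), in nanoseconds,
 * or -1 if the post itself failed. */
static long time_one_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                                void *laddr, uint32_t len,
                                uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)laddr,
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr = NULL;
    struct timespec t0, t1;

    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    clock_gettime(CLOCK_MONOTONIC, &t0);      /* time only the post call */
    int rc = ibv_post_send(qp, &wr, &bad_wr);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (rc)
        return -1;
    return (t1.tv_sec - t0.tv_sec) * 1000000000L
         + (t1.tv_nsec - t0.tv_nsec);
}

Note that clock_gettime itself can cost tens of nanoseconds per call, and cache-cold posts tend to be slower than steady-state ones, so pinning the thread to one core and averaging over many posts usually gives more stable numbers.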

Re: vSphere 6.0 PFC configuration for Ethernet iSER with Ethernet Driver 1.9.10.5

Re: Mellanox eSwitchd issue on Openstack Kilo?


Hi Martijn,

Sorry for the late reply; here is the eswitchd.log from the compute node.

I think mlnx-agent.log will also be helpful, so I have added it as well.

----------------------------

1. eswitchd.log

----------------------------

2016-09-01 16:24:37,419 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:39,419 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:39,420 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:39,420 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:41,420 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:41,420 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:41,420 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:43,421 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:43,421 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:43,421 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:43,421 DEBUG eswitchd [-] Resync devices

2016-09-01 16:24:45,421 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:45,422 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:45,422 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:47,422 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:47,422 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:47,422 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

2016-09-01 16:24:49,423 DEBUG eswitchd [-] Handling message - {u'action': u'get_vnics', u'fabric': u'*'}

2016-09-01 16:24:49,423 DEBUG eswitchd [-] fabrics =['default']

2016-09-01 16:24:49,423 DEBUG eswitchd [-] vnics are {u'fa:16:3e:a2:5b:b4': {'mac': u'fa:16:3e:a2:5b:b4', 'device_id': u'c510b038-ef87-4030-a4f0-4f996b181855'}}

--------------------------------------------

2. mlnx-agent.log

--------------------------------------------

2016-09-01 16:29:04.885 19895 DEBUG oslo_messaging._drivers.amqp [-] unpacked context: {u'read_deleted': u'no', u'project_name': u'service', u'user_id': u'a76a50c916be47d5bc42aa900a3d2f52', u'roles': [u'_member_', u'admin'], u'tenant_id': u'fb93c4aa4484455eac338a3989feedca', u'auth_token': u'***', u'timestamp': u'2016-09-01 07:29:04.829415', u'is_admin': True, u'user': u'a76a50c916be47d5bc42aa900a3d2f52', u'request_id': u'req-0a3f1dc5-5bd7-405a-abe5-822bef58fd0a', u'tenant_name': u'service', u'project_id': u'fb93c4aa4484455eac338a3989feedca', u'user_name': u'neutron', u'tenant': u'fb93c4aa4484455eac338a3989feedca'} unpack_context /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:209

2016-09-01 16:29:04.887 19895 DEBUG neutron.agent.securitygroups_rpc [req-0a3f1dc5-5bd7-405a-abe5-822bef58fd0a ] Security group member updated on remote: [u'988dc170-b1de-4614-895b-1a423ec8faf4'] security_groups_member_updated /usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py:150

2016-09-01 16:29:04.888 19895 INFO neutron.agent.securitygroups_rpc [req-0a3f1dc5-5bd7-405a-abe5-822bef58fd0a ] Security group member updated [u'988dc170-b1de-4614-895b-1a423ec8faf4']

2016-09-01 16:29:05.011 19895 DEBUG oslo_messaging._drivers.amqp [-] unpacked context: {u'read_deleted': u'no', u'project_name': u'service', u'user_id': u'a76a50c916be47d5bc42aa900a3d2f52', u'roles': [u'_member_', u'admin'], u'tenant_id': u'fb93c4aa4484455eac338a3989feedca', u'auth_token': u'***', u'timestamp': u'2016-09-01 07:29:04.960239', u'is_admin': True, u'user': u'a76a50c916be47d5bc42aa900a3d2f52', u'request_id': u'req-19755025-f52a-4a5d-bdad-9b09a93759ef', u'tenant_name': u'service', u'project_id': u'fb93c4aa4484455eac338a3989feedca', u'user_name': u'neutron', u'tenant': u'fb93c4aa4484455eac338a3989feedca'} unpack_context /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:209

2016-09-01 16:29:05.012 19895 DEBUG neutron.agent.securitygroups_rpc [req-19755025-f52a-4a5d-bdad-9b09a93759ef ] Security group member updated on remote: [u'988dc170-b1de-4614-895b-1a423ec8faf4'] security_groups_member_updated /usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py:150

2016-09-01 16:29:05.013 19895 INFO neutron.agent.securitygroups_rpc [req-19755025-f52a-4a5d-bdad-9b09a93759ef ] Security group member updated [u'988dc170-b1de-4614-895b-1a423ec8faf4']

2016-09-01 16:29:05.479 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:06.430 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [-] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:06.433 19895 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is 369d8286fe75477d83d700f4d991b7b9. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-09-01 16:29:07.480 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:07.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Starting to process devices in:{'current': set([u'fa:16:3e:15:72:31']), 'removed': set([]), 'added': set([u'fa:16:3e:15:72:31']), 'updated': set([])} run /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/mlnx_eswitch_neutron_agent.py:374

2016-09-01 16:29:07.483 19895 DEBUG oslo_messaging._drivers.amqpdriver [req-63368208-c269-4901-839e-d4a9697faa33 ] MSG_ID is cd37e22ce92d43d6995ca428f4eefe0e _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:311

2016-09-01 16:29:07.483 19895 DEBUG oslo_messaging._drivers.amqp [req-63368208-c269-4901-839e-d4a9697faa33 ] UNIQUE_ID is 4a6c7c1ce95f48dbb91e47fc945503cb. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-09-01 16:29:07.557 19895 INFO networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Adding or updating port with mac fa:16:3e:15:72:31

2016-09-01 16:29:07.558 19895 INFO networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Port fa:16:3e:15:72:31 updated

2016-09-01 16:29:07.558 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Device details {u'profile': {}, u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': u'c00fd123-c176-492c-a7f4-97d41db325ce', u'segmentation_id': 2, u'device_owner': u'compute:nova', u'physical_network': u'default', u'mac_address': u'fa:16:3e:15:72:31', u'device': u'fa:16:3e:15:72:31', u'port_security_enabled': True, u'port_id': u'f8cad055-09ea-4d72-99be-2a992af3843c', u'fixed_ips': [{u'subnet_id': u'eaec5013-b11d-48e8-9bc9-9bc6c78d8286', u'ip_address': u'10.35.6.32'}], u'network_type': u'vlan'} treat_devices_added_or_updated /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/mlnx_eswitch_neutron_agent.py:307

2016-09-01 16:29:07.558 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:07.560 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Connecting port f8cad055-09ea-4d72-99be-2a992af3843c port_up /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/mlnx_eswitch_neutron_agent.py:93

2016-09-01 16:29:07.560 19895 INFO networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Binding Segmentation ID 2 to eSwitch for vNIC mac_address fa:16:3e:15:72:31

2016-09-01 16:29:07.561 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] Set Vlan  2 on Port fa:16:3e:15:72:31 on Fabric default set_port_vlan_id /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:93

2016-09-01 16:29:07.610 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] Port Up for fa:16:3e:15:72:31 on fabric default port_up /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:112

2016-09-01 16:29:07.611 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.mlnx_eswitch_neutron_agent [req-63368208-c269-4901-839e-d4a9697faa33 ] Setting status for fa:16:3e:15:72:31 to UP treat_devices_added_or_updated /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/mlnx_eswitch_neutron_agent.py:316

2016-09-01 16:29:07.612 19895 DEBUG oslo_messaging._drivers.amqpdriver [req-63368208-c269-4901-839e-d4a9697faa33 ] MSG_ID is 6411955127c94755a19ba2b04d04f278 _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:311

2016-09-01 16:29:07.612 19895 DEBUG oslo_messaging._drivers.amqp [req-63368208-c269-4901-839e-d4a9697faa33 ] UNIQUE_ID is 7738464db5c2477284cacd25000320e4. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-09-01 16:29:09.480 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:11.480 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:13.480 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:15.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:17.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:19.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:21.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:23.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:25.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:27.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:29.481 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:31.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:33.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:35.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:36.430 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [-] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:36.433 19895 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is d5c58c1faa8c400cb9c731da1034906e. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:264

2016-09-01 16:29:37.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:39.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

2016-09-01 16:29:41.482 19895 DEBUG networking_mlnx.plugins.ml2.drivers.mlnx.agent.utils [req-63368208-c269-4901-839e-d4a9697faa33 ] get_attached_vnics get_attached_vnics /usr/lib/python2.7/site-packages/networking_mlnx/plugins/ml2/drivers/mlnx/agent/utils.py:82

 

Thank you and best regards,

Muneyoshi

Re: Qos options and Vlarb table


Hi all

 

I think I now understand this topic:
IB QoS is really quite different from its Ethernet analogue.
IB uses two tables and weights the number of entries processed from each, which makes it possible to configure traffic prioritization much more flexibly for the stated goals.

For me, the best explanation was given in this link - InfiniBand QoS with Lustre ko2iblnd.
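
For reference, the two tables in question are the SL-to-VL mapping table and the VL arbitration (high/low) table. A heavily simplified opensm.conf fragment in the spirit of the linked article might look like the following (the option names are OpenSM's QoS settings, but the values are made up for this example and should be checked against your OpenSM version and the linked article):

# Illustrative OpenSM QoS fragment (opensm.conf); values are examples only.
qos_max_vls 2
qos_sl2vl 0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0
qos_vlarb_high 1:192
qos_vlarb_low 0:64

Roughly, qos_sl2vl maps each of the 16 SLs onto a VL, and each qos_vlarb_* entry is a VL:weight pair that controls how much data that VL may send each time its arbitration entry is serviced - which is where the extra flexibility compared with plain Ethernet priorities comes from.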

QP Context size


I have read documentation saying that ICM (InfiniHost Context Memory) is required by the HCA to store Queue Pair (QP) context, Completion Queue (CQ) context, and Address Translation Table entries.

I am wondering: is the QP context size the same for every queue pair?

Or does it vary - for example, can the number of WQEs (allowed outstanding requests) affect this size?

Many thanks for your time.

Re: QP Context size

Re: Testing MCX455A bandwidth between Dell servers


Are you sure you used the '-a' flag on both the client and the server?


Re: Testing MCX455A bandwidth between Dell servers


Can you please try putting the -a flag as the last flag on the server's command line?

For the client, it should be the last flag before the server's IP address.
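
For example, something along these lines (illustrative, reusing the device name from the original post):

Server: ib_write_bw -d mlx5_0 -i 1 -F -a

Client: ib_write_bw -d mlx5_0 -i 1 -F -a <server address>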

Re: Testing MCX455A bandwidth between Dell servers


Thank you.  That fixed it.  Can you explain why?

Slow 40G->10G Performance when Traffic Flows over MLAG IPL


Hi,

 

I have a pair of SX1410s configured with MLAG. When a 40G port sends traffic to a 10G port located on the partner switch, the performance is terrible - in this case around 50-60 Mbps. When receiving, the 40G-attached host receives from the 10G host at nearly line rate (9.9 Gb/s).

When the same 40G port sends traffic to a 10G port located on the same switch, the performance is very good, nearly line rate (9.9 Gb/s).

There is no other traffic currently on the switches. I am using iperf to test, but I can reproduce the issue with other applications. Flow control is enabled pretty much everywhere, so maybe that has a bearing. I cannot reproduce the problem when both hosts are 40G, in any combination of tests - that works absolutely fine.

 

Any ideas?

 

Regards,

Barry

rdma_create_event_channel: No such device


Software: CentOS 7.2 with the drivers, libraries, and utilities provided in its repositories.

Hardware: ConnectX-2 HCAs.

 

The basic network connections appear good; IPoIB and the infiniband-diags utilities do not fail.

But when trying to make an RDMA connection, I get this error:

rdma_create_event_channel: No such device

The same error also occurs when testing with the rping utility.
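
Here is a minimal snippet that isolates the failing call (my own test, linked with -lrdmacm; it is not taken from rping):

#include <stdio.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    /* rdma_create_event_channel() returns NULL and sets errno; ENODEV
     * ("No such device") means librdmacm found no usable RDMA device,
     * which can happen when the user-space provider library is missing
     * even though the kernel drivers are loaded. */
    struct rdma_event_channel *ch = rdma_create_event_channel();
    if (!ch) {
        perror("rdma_create_event_channel");
        return 1;
    }
    rdma_destroy_event_channel(ch);
    printf("event channel created OK\n");
    return 0;
}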

 

Are there compatibility issues between the CentOS 7-provided software and ConnectX-2 hardware?

 

Gary

Re: rdma_create_event_channel: No such device


CentOS does install the kernel drivers by default: mlx4_ib.ko, mlx4_en.ko, mlx4_core.ko.

But it does not install the user-space library: libmlx4-rdmav2.so

$ yum install libmlx4
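
One way to sanity-check it afterwards, assuming the standard CentOS packages (ibv_devinfo is in libibverbs-utils):

$ yum install libibverbs-utils
$ ibv_devinfo    # the ConnectX-2 device and its ports should now be listed
$ rping -s       # then rping -c -a <server IP> from the other host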

Does ConnectX-3 support header split?


I have a system whose performance critically depends on the NIC's ability to separate protocol headers and payload into separate buffers.

This is a proprietary protocol on top of Ethernet, not using TCP/IP.

 

The payload must be stored in system memory on a page (4K) boundary.

 

I am attempting to discern whether the ConnectX-3 NIC has the ability to support this behavior.

 

Our software system is built on top of FreeBSD and we have made suitable driver modifications for a few NICs from another vendor.

 

If we can do this with the CX3 NIC, it will open up new hardware options for us and our customers to use with our software.

 

I understand we probably need to modify the existing FreeBSD driver to operate the way we want; what I need to know is whether the NIC has the capabilities, and if so, how to program this behavior.

Re: 'State: Initializing' but works


Re: 'State: Initializing' but works


Sorry, I have been too busy to work on this issue.

I still have not been able to solve it, but please close this post.

Does the MCX3141 only support Ubuntu 12.04, or does it also support 14.04, 15.04, 15.10, and other newer versions? One more question: in which Ubuntu versions does the MCX3141 support SR-IOV for RoCEv2?


Does the MCX3141 only support Ubuntu 12.04, or does it also support 14.04, 15.04, 15.10, and other newer versions? One more question: in which Ubuntu versions does the MCX3141 support SR-IOV for RoCEv2?

Re: Testing MCX455A bandwidth between Dell servers


The --all / -a flag runs traffic at every message size from 2^1 to 2^23 bytes.

When this flag is not used, the default message size is 64KB.
That means that if the server side does not have '-a' set, it will prepare its resources for 64KB messages, and when the client tries to send 128KB messages, it will fail.

Re: Does the MCX3141 only support Ubuntu 12.04, or does it also support 14.04, 15.04, 15.10, and other newer versions? One more question: in which Ubuntu versions does the MCX3141 support SR-IOV for RoCEv2?


I tested the MCX3141 (ConnectX-3 Pro) on Ubuntu 15.04 and found that the basic network card function does not work; I used the ping command to test, and it failed. The same issue occurs on Ubuntu 14.04.

Win2016 Eth ConnectX-3 cable unplugged


Hi

We tried to connect our servers, which have ConnectX-3 cards, to a Cisco Nexus 9372 using a Mellanox MC2206130-002-A3 cable.

But both the Cisco switch and Windows show the cable as unplugged.

Before the Nexus, these adapters worked fine in IB mode with Mellanox unmanaged switches.

We also tried connecting two Cisco Nexus 9372 switches to each other with the MC2206130-002-A3 cable - the link came up.

So I think the problem is on the server/driver side.

The OS is Windows 2016 TP5 with the latest drivers, version 5.19.11822.0.

The port protocol is Eth, full duplex on both sides.

 

What could be wrong?
