Dell ConnectX®-4 VPI adapter card EDR IB (100Gb/s) and 100GbE

Quicklinx: D7MPWS00 Mfr#: MCX455A-ECAT
Discontinued

Description

ConnectX-4 adapter cards with Virtual Protocol Interconnect (VPI), supporting EDR 100 Gb/s InfiniBand and 100 Gb/s Ethernet connectivity, provide the highest performance and most flexible solution for high-performance, web 2.0, data analytics, database, and storage platforms.

With the exponential growth of data being shared and stored by applications and social networks, the need for high-speed, high-performance compute and storage data centers is skyrocketing.

ConnectX-4 provides exceptionally high performance for the most demanding data centers, public and private clouds, Web 2.0 and big data applications, as well as High-Performance Computing (HPC) and storage systems, enabling today's corporations to meet the demands of the data explosion.

  • Coherent Accelerator Processor Interface (CAPI)

    ConnectX-4 with CAPI enabled provides better performance for Power- and OpenPower-based platforms. Such platforms benefit from better interaction between the Power CPU and the ConnectX-4 adapter, lower latency, more efficient storage access, and better Return on Investment (ROI), as more applications and virtual machines run on the platform.
  • I/O virtualization

    ConnectX-4 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 gives data center administrators better server utilization while reducing cost, power, and cabling complexity, allowing more virtual machines and more tenants on the same hardware.
  • Overlay networks

    In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-4 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic. With ConnectX-4, data center operators can achieve native performance in the new network architecture. (A simplified sketch of the VXLAN header layout appears after this list.)
  • HPC environments

    ConnectX-4 delivers high bandwidth, low latency, and high computation efficiency for High-Performance Computing clusters. Collective communication is a communication pattern in HPC in which all members of a group of processes participate and share data. CORE-Direct (Collective Offload Resource Engine) provides advanced capabilities for implementing MPI and SHMEM collective operations. It enhances collective communication scalability and minimizes the CPU overhead for such operations, while providing asynchronous and high-performance collective communication capabilities. It also enhances application scalability by reducing the exposure of collective communication to the effects of system noise (the adverse effect of system activity on running jobs). ConnectX-4 enhances the CORE-Direct capabilities by removing the restriction on the data length for which data reductions are supported. (A generic MPI collective example appears after this list.)
  • RDMA and RoCE

    ConnectX-4, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging Data Center Bridging (DCB) capabilities as well as ConnectX-4 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks. (A minimal RDMA verbs sketch appears after this list.)
  • Mellanox PeerDirect

    PeerDirect communication provides high efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
  • Storage acceleration

    Storage applications will see improved performance with the higher bandwidth EDR delivers. Moreover, standard block and file access protocols can leverage RoCE and InfiniBand RDMA for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.
  • Distributed RAID

    ConnectX-4 delivers advanced Erasure Coding offloading capability, enabling distributed RAID (Redundant Array of Inexpensive Disks), a data storage technology that combines multiple disk drive components into a logical unit for the purposes of data redundancy and performance improvement. The ConnectX-4 family's Reed-Solomon capability introduces redundant block calculations, which, together with RDMA, achieve high-performance and reliable storage access. (A simplified parity-coding sketch appears after this list.)
  • Signature handover

    ConnectX-4 supports hardware checking of T10 Data Integrity Field / Protection Information (T10-DIF/PI), reducing CPU overhead and accelerating delivery of data to the application. Signature handover is handled by the adapter on ingress and/or egress packets, reducing the load on the CPU at the initiator and/or target machines. (A sketch of the T10-PI field layout appears after this list.)
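
To make the overlay-network offload above more concrete, the following is a minimal C sketch of the VXLAN header layout defined in RFC 7348, which is the format the adapter's offload engines encapsulate and de-capsulate in hardware. It is illustrative only, not driver or firmware code; the struct and helper names are invented for this example.

    /* Minimal sketch of the VXLAN encapsulation header (RFC 7348) that the
     * adapter's offload engines build and strip in hardware. Field layout
     * only; names are invented for this illustration. */
    #include <stdint.h>
    #include <arpa/inet.h>   /* htonl */

    #define VXLAN_UDP_PORT 4789          /* IANA-assigned destination port */
    #define VXLAN_FLAG_VNI 0x08000000u   /* "I" bit: VNI field is valid */

    struct vxlan_hdr {
        uint32_t flags_reserved;  /* 8 flag bits + 24 reserved bits */
        uint32_t vni_reserved;    /* 24-bit VXLAN Network Identifier + 8 reserved bits */
    };

    /* Fill a VXLAN header for tenant network `vni`. The tenant VM's original
     * Ethernet frame follows this header; an outer Ethernet/IP/UDP header
     * (UDP destination port 4789) precedes it. */
    static void vxlan_hdr_init(struct vxlan_hdr *h, uint32_t vni)
    {
        h->flags_reserved = htonl(VXLAN_FLAG_VNI);
        h->vni_reserved   = htonl((vni & 0x00FFFFFFu) << 8);
    }

Because the offload engines understand this layout, the traditional stateless offloads can still be applied to the inner, encapsulated traffic rather than falling back to the host CPU.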
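
The collective operations mentioned in the HPC item are standard MPI collectives. The sketch below is plain MPI code with no vendor-specific calls; on CORE-Direct-capable hardware with an MPI library built to use it, a reduction like this can progress on the adapter rather than on the host CPU.

    /* Generic MPI collective: every rank contributes one value and all ranks
     * receive the sum. Standard MPI, not a Mellanox-specific API. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* With collective offload, the reduction can progress on the HCA,
         * keeping the CPU free and reducing sensitivity to system noise. */
        double local = (double)rank, global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks = %.0f\n", global);

        MPI_Finalize();
        return 0;
    }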
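
For the RDMA and RoCE item, the minimal libibverbs sketch below shows the first steps any RDMA application takes on an adapter such as this one: open the device, allocate a protection domain, and register memory that the HCA may then read and write directly, bypassing the kernel on the data path. Error handling is trimmed, and later steps (queue pairs, posting work requests) are only noted in comments.

    /* Minimal RDMA resource setup with libibverbs; error handling trimmed. */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(list[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the HCA can DMA to and from it directly. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        /* A real application would now create completion queues and queue
         * pairs, exchange the rkey and buffer address with the peer out of
         * band, and post RDMA read/write or send/receive work requests. */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }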
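
The distributed RAID item refers to Reed-Solomon erasure coding offload. The sketch below is a deliberately simplified single-parity (XOR) code, shown only to illustrate the redundant-block calculation involved; the adapter's Reed-Solomon offload generalizes this to multiple redundant blocks computed over a Galois field and performs the work in hardware.

    /* Simplified single-parity erasure code (RAID-5-style XOR), for
     * illustration only; real Reed-Solomon coding uses GF(2^8) arithmetic
     * and supports more than one redundant block. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Compute one parity block over k data blocks of len bytes each. */
    static void parity_encode(const uint8_t *const data[], size_t k,
                              size_t len, uint8_t *parity)
    {
        memset(parity, 0, len);
        for (size_t i = 0; i < k; i++)
            for (size_t j = 0; j < len; j++)
                parity[j] ^= data[i][j];
    }

    /* Rebuild a single lost data block by XOR-ing the parity block with
     * the surviving data blocks. */
    static void parity_rebuild(const uint8_t *const survivors[], size_t count,
                               const uint8_t *parity, size_t len, uint8_t *out)
    {
        memcpy(out, parity, len);
        for (size_t i = 0; i < count; i++)
            for (size_t j = 0; j < len; j++)
                out[j] ^= survivors[i][j];
    }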
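
For the signature handover item, each protected logical block carries 8 bytes of T10 Protection Information, and the guard tag uses the CRC-16 defined for T10-DIF (polynomial 0x8BB7). The sketch below shows that layout and checksum for reference; it is not adapter firmware, and the names are invented for this illustration.

    /* T10-DIF/PI: 8 bytes appended to each protected logical block
     * (big-endian on the wire). */
    #include <stddef.h>
    #include <stdint.h>

    struct t10_pi_tuple {
        uint16_t guard_tag;   /* CRC-16 over the block's data */
        uint16_t app_tag;     /* application-defined tag */
        uint32_t ref_tag;     /* typically the low 32 bits of the block's LBA */
    };

    /* CRC-16/T10-DIF: polynomial 0x8BB7, initial value 0, no reflection. */
    static uint16_t crc_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }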

Specifications

Summary

Product Description: NVIDIA ConnectX-4 VPI MCX455A-ECAT - network adapter - PCIe 3.0 x16
Device Type: Network adapter
Form Factor: Plug-in card
Interface (Bus) Type: PCI Express 3.0 x16
PCI Specification Revision: PCIe 1.1, PCIe 2.0, PCIe 3.0
Data Link Protocol: InfiniBand, 100GbE
Data Transfer Rate: 100 Gbps
Network / Transport Protocol: TCP/IP, UDP/IP, IPoIB
Compliant Standards: IEEE 802.1Q, IEEE 802.1p, IEEE 802.3ad (LACP), IEEE 802.3ae, IEEE 802.3ap, IEEE 802.3az, IEEE 802.3ba, IEEE 802.1AX, IEEE 802.1Qbb, IEEE 802.1Qaz, IEEE 802.1Qau, IBTA 1.3, IEEE 802.1Qbg, IEEE 1588v2


Detailed Specification

General

Device Type: Network adapter
Form Factor: Plug-in card
Interface (Bus) Type: PCI Express 3.0 x16
PCI Specification Revision: PCIe 1.1, PCIe 2.0, PCIe 3.0

Networking

Connectivity Technology: Wired
Data Link Protocol: InfiniBand, 100 Gigabit Ethernet
Data Transfer Rate: 100 Gbps
Network / Transport Protocol: TCP/IP, UDP/IP, IPoIB
Features: VLAN support, VPI, QoS, Jumbo Frames support
Compliant Standards: IEEE 802.1Q, IEEE 802.1p, IEEE 802.3ad (LACP), IEEE 802.3ae, IEEE 802.3ap, IEEE 802.3az, IEEE 802.3ba, IEEE 802.1AX, IEEE 802.1Qbb, IEEE 802.1Qaz, IEEE 802.1Qau, IBTA 1.3, IEEE 802.1Qbg, IEEE 1588v2

Expansion / Connectivity

Interfaces: 1 x InfiniBand - QSFP

Miscellaneous

Compliant Standards: CISPR 22 Class A, EN55024, EN55022 Class A, ICES-003 Class A, RoHS, FCC CFR47 Part 15, UL 60950-1, EN 60950-1, IEC 60068-2-32, IEC 60068-2-29, IEC 60068-2-64, KC, VCCI Class A, AS/NZS CISPR 22, RCM, IEC 60950-1, CAN/CSA-C22.2 No. 60950-1

Environmental Parameters

Min Operating Temperature: 0 °C
Max Operating Temperature: 55 °C