The NVIDIA® ConnectX®-7 family of Remote Direct Memory Access (RDMA) network adapters supports InfiniBand and Ethernet protocols at speeds of up to 400Gb/s. It enables a wide range of smart, scalable, and feature-rich networking solutions that address everything from traditional enterprise needs to the world's most demanding AI, scientific computing, and hyperscale cloud data center workloads.
Accelerated Networking and Security
ConnectX-7 provides a broad set of software-defined, hardware-accelerated networking, storage, and security capabilities that enable organizations to modernize and secure their IT infrastructures. Moreover, ConnectX-7 empowers agile and high-performance solutions from edge to core data centers to clouds, all while enhancing network security and reducing the total cost of ownership.
Accelerate Data-Driven Scientific Computing
ConnectX-7 provides ultra-low latency, extreme throughput, and innovative NVIDIA In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for today's scientific computing workloads.
InfiniBand Interface
> InfiniBand Trade Association Spec 1.5 compliant
> RDMA, send/receive semantics (see the verbs sketch after this list)
> 16 million input/output (IO) channels
> 256 to 4Kbyte maximum transmission unit (MTU), 2Gbyte messages
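As a hedged illustration of the send/receive semantics listed above (not part of the product specification), the C sketch below uses the standard Linux rdma-core (libibverbs) API to open the first available RDMA device and create the protection domain, memory region, completion queue, and reliable-connected queue pair that RDMA traffic is posted against. Connection establishment and work-request posting are omitted for brevity.

```c
/* Minimal libibverbs resource-setup sketch, assuming a Linux host with
 * rdma-core installed and at least one RDMA-capable device present.
 * Build: gcc rdma_setup.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { perror("ibv_open_device"); return 1; }

    /* Protection domain: scopes all other verbs resources. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { perror("ibv_alloc_pd"); return 1; }

    /* Register one page of memory so the adapter may DMA into/out of it. */
    size_t len = 4096;
    char *buf = calloc(1, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    /* Completion queue shared by send and receive work requests. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    if (!cq) { perror("ibv_create_cq"); return 1; }

    /* Reliable-connected queue pair: the endpoint that send/receive
     * work requests are posted to. */
    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.send_cq = cq;
    attr.recv_cq = cq;
    attr.qp_type = IBV_QPT_RC;
    attr.cap.max_send_wr = 16;
    attr.cap.max_recv_wr = 16;
    attr.cap.max_send_sge = 1;
    attr.cap.max_recv_sge = 1;
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);
    if (!qp) { perror("ibv_create_qp"); return 1; }

    printf("%s: PD, MR, CQ and RC QP created\n", ibv_get_device_name(devs[0]));

    /* Teardown in reverse order of creation. */
    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

The same verbs objects are used whether the port runs InfiniBand or RoCE; only the addressing and connection setup differ.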
Ethernet Interface
> Up to 4 network ports supporting NRZ, PAM4 (50G and 100G), in various configurations
> Up to 400Gb/s total bandwidth
> RDMA over Converged Ethernet (RoCE) (see the connection sketch after this list)
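For RoCE, connections are typically set up with librdmacm using ordinary IP addressing on the Ethernet port. The sketch below is a minimal, hedged example of the first step only; the peer address 192.0.2.10 and port 7471 are placeholders, and route resolution, QP creation, and rdma_connect() would follow in a full client.

```c
/* Hedged librdmacm sketch: resolve a placeholder peer address, the usual
 * first step of a RoCE connection. Build: gcc roce_resolve.c -lrdmacm
 */
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    if (!ch) { perror("rdma_create_event_channel"); return 1; }

    struct rdma_cm_id *id;
    if (rdma_create_id(ch, &id, NULL, RDMA_PS_TCP)) {
        perror("rdma_create_id");
        return 1;
    }

    /* Placeholder peer: RoCE uses regular IP addressing on the Ethernet port. */
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(7471);
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);

    if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst, 2000)) {
        perror("rdma_resolve_addr");
        return 1;
    }

    /* Wait for ADDR_RESOLVED (or ADDR_ERROR); route resolution and
     * rdma_connect() would follow in a complete client. */
    struct rdma_cm_event *ev;
    if (rdma_get_cm_event(ch, &ev) == 0) {
        printf("event: %s\n", rdma_event_str(ev->event));
        rdma_ack_cm_event(ev);
    }

    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}
```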
Enhanced InfiniBand Networking
> Hardware-based reliable transport
> Extended Reliable Connected (XRC)
> Dynamically Connected Transport (DCT)
> GPUDirect® RDMA
> GPUDirect Storage
> Adaptive routing support
> Enhanced atomic operations
> Advanced memory mapping, allowing user mode registration (UMR)
> On-demand paging (ODP), including registration-free RDMA memory access
> Enhanced congestion control
> Burst buffer offload
> Single root IO virtualization (SR-IOV)
> Optimized for HPC software libraries including:
> NVIDIA HPC-X®, UCX®, UCC, NCCL, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS
> Collective operations offloads (see the MPI sketch after this list)
> Support for NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)
> Rendezvous protocol offload
> In-network on-board memory
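As a hedged illustration of the collective operations offloads mentioned above, the sketch below is an ordinary MPI_Allreduce. Nothing in the code is ConnectX-specific: when the MPI stack (for example, HPC-X or Open MPI with HCOLL/UCC) and the fabric support it, such collectives can be offloaded to the network via SHARP without changing the application.

```c
/* Hedged MPI sketch: a plain sum-allreduce across all ranks.
 * Build and run: mpicc allreduce.c -o allreduce && mpirun -np 4 ./allreduce
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its rank number; the reduction sums them. */
    int local = rank;
    int global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, global);

    MPI_Finalize();
    return 0;
}
```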