SC’13 – Mellanox® Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that NVIDIA® GPUDirect™ RDMA technology, which is supported on NVIDIA Tesla® K40 and K20 series GPU accelerators, is now supported on Mellanox’s Connect-IB™ InfiniBand adapters. The combined solution of Mellanox’s Connect-IB FDR InfiniBand adapters, NVIDIA GPUDirect RDMA technology and Tesla GPU accelerators provides industry-leading application performance and efficiency for GPU-accelerated high-performance clusters.
With full support in the MVAPICH2-2.0b Message Passing Interface (MPI) release from The Ohio State University, the following features and capabilities are enabled:
- Multi-rail capabilities for NVIDIA GPUDirect RDMA with MVAPICH2
- 67 percent reduction in small-message latency and a 10 percent reduction in large-message latency
- 5X bandwidth improvement for small messages with Connect-IB
- Support for RDMA over InfiniBand and RDMA over Converged Ethernet (RoCE)
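To illustrate how these capabilities are typically switched on at run time, the sketch below shows a hypothetical MVAPICH2 job launch. The MV2_USE_CUDA and MV2_USE_GPUDIRECT run-time parameters are documented in the MVAPICH2 user guide; the host names and application binary are placeholders, and the exact parameter set should be confirmed against the MVAPICH2 version in use.

```shell
# Launch a 2-process CUDA-aware MVAPICH2 job with GPUDirect RDMA enabled.
# MV2_USE_CUDA=1     allows MPI calls to operate directly on GPU device buffers
# MV2_USE_GPUDIRECT=1 enables the GPUDirect RDMA path between GPU and HCA
# node1, node2 and ./gpu_app are illustrative placeholders.
mpirun_rsh -np 2 node1 node2 \
    MV2_USE_CUDA=1 \
    MV2_USE_GPUDIRECT=1 \
    ./gpu_app
```

With these settings, an application can pass cudaMalloc’d pointers straight to MPI send/receive calls, and small-message transfers move over the InfiniBand fabric without staging through host memory.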
“Using enhanced MVAPICH2-2.0b with NVIDIA GPUDirect RDMA-based designs, end-users will now see a significant reduction in latency for small messages and an increase in bandwidth for large messages,” said Professor Dhableswar K. (DK) Panda of The Ohio State University. “The MVAPICH2-2.0b design with NVIDIA GPUDirect RDMA support is able to deliver excellent performance for K40 GPUs using Connect-IB FDR adapters.”
“We see increased adoption of FDR InfiniBand and NVIDIA GPUDirect RDMA technology by leading commercial partners, government agencies, as well as academia and research institutions,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Mellanox’s FDR InfiniBand solutions with NVIDIA GPUDirect RDMA are providing the highest level of application performance, scalability and efficiency for GPU-based clusters.”

“With 12GB of ultra-fast GDDR5 memory and support for PCIe Gen 3 interconnect technology, the new Tesla K40 accelerators are ideal for ultra-large scale scientific and commercial workloads,” said Ian Buck, vice president of Accelerated Computing at NVIDIA. “When coupled with NVIDIA GPUDirect RDMA technology, Mellanox InfiniBand solutions unlock new levels of performance for HPC customers by enabling direct memory access from the GPU across the InfiniBand fabric.”