"The leadership scale science and data analytics problems we are working to solve today and in the near future require very high bandwidth linking compute nodes, storage, and analytics systems into a single problem solving environment," said Arthur Bland, OLCF Project Director, Oak Ridge National Laboratory. "With HDR InfiniBand technology, we will have an open solution that allows us to link all of our systems at very high bandwidth.""Data movement throughout the system is a critical aspect of current and future systems. Open network technology will be a key consideration as we plan the next generation of large-scale systems, including ones that will achieve Exascale performance," said Bronis de Supinski, chief technology officer in Livermore Computing. "HDR InfiniBand solutions represent an important development in this technology space." "We are excited to see Mellanox continue leadership in high speed interconnects," said Parks Fields, SSI team lead HPC-design at the Los Alamos National Laboratory. "HDR InfiniBand will provide us with the performance capabilities needed for our applications." Supporting Industry Quotes: "High-speed storage for HPC solutions are critical for maximizing performance benefits of today's HPC, machine learning, media production and Big Data," said Kurt Kuckein, director of product management, DDN Storage. "DDN and Mellanox HDR 200Gb/s technology will enable absolute unmatched performance in high performing storage solutions for our end-customers that demand the ultimate in performance for their real-time workloads." "Whether it's high performance computing, big data or cloud, Mellanox and Dell EMC HPC Systems customers will benefit from the extreme performance, scalability and first to market speed advantage of our joint end-to-end solutions," said Jim Ganthier, senior vice president, Validated Solutions Organization and HPC, Dell EMC. "Our collaborative innovation with Mellanox helps customers accelerate time to insights and results, utilizing an open standards-based approach and enabling their next discoveries." "Fabrics are key to high performance clusters," said Scott Misage, vice president, HPC Solutions and Apollo Pursuits, Hewlett Packard Enterprise. "Mellanox 200Gb HDR products will help our joint customers take full advantage of the scalability of HPE's purpose-built Apollo HPC solutions, maximizing overall application efficiency for their High Performance Computing workloads."
"Mellanox is not only an innovator for networking solutions but an advocate for improving data center ROI," said Mr. Qiu Long, president of the Huawei Server Product Line. "With the introduction of this new 200Gb/s HDR solution, high performance computing and many other demanding applications can forge ahead.""Mellanox is advancing the bandwidth, latency, and programmability of fabrics with 200Gb HDR InfiniBand solutions for the OpenPOWER ecosystem, and we are looking forward to integrating HDR InfiniBand into the OpenPOWER technology portfolio," said Brad McCredie, vice president and IBM Fellow, Systems and Technology Group CTO, IBM Systems. "The OpenPOWER ecosystem incorporates the best of new technologies through collaborative innovation, and we're excited to see how ConnectX-6 and Quantum will push performance to the next level." "Mellanox has taken a quantum leap forward in data center networking with InfiniBand solutions that now provide world-class performance of 200 million messages per second," said Mr. Leijun Hu, VP of Inspur Group. "In addition, the new Mellanox Quantum 200Gb/s HDR InfiniBand switches now represent the world's fastest, most flexible switch with an extremely low latency of 90ns." "Demanding HPC workloads, such as Artificial Intelligence, requires extremely high bandwidth for enormous amounts of data crunching. HDR InfiniBand will be an increasingly important technology for the modern datacenter. Mellanox intelligent interconnect solutions are the foundation for many of our market leading HPC solutions, from big to small; we're excited to deliver the advantages of HDR to a broader set of HPC clients running exceptionally challenging workloads," said Scott Tease, Executive Director, High Performance Computing, Lenovo Data Center Group. "ConnectX-6 significantly improves bandwidth to NVIDIA ® GPUs resulting in better scale out solutions for HPC, deep learning, and data center applications," said Dr. Ian Buck, Vice President of the Accelerated Computing Group at NVIDIA. "With integrated support for NVIDIA GPUDirect ™ technology, Mellanox interconnect and NVIDIA's high performance Tesla ® GPUs will enable direct data transfers across clusters of GPUs, essential to addressing complex and computationally intensive challenges in very diverse markets." "Our customers are constantly looking towards the next cutting edge infrastructure that gives them the competitive advantage," said Ken Claffey, VP and GM, Seagate Cloud Systems and Silicon Group. "Seagate couldn't be more excited to embrace Mellanox's HDR 200Gb/s capabilities that will deliver unmatched storage platforms for network-intense applications like media streaming and compute clustering."
"We are thrilled to see Mellanox's newest solutions that literally double data speeds from the previous generation," said Mr. Chaoqun Sha, SVP of Technology at Sugon. "These new solutions are not only ideal for both InfiniBand and the Ethernet standards-based protocols, but also give customers the flexibility to take advantage of Mellanox's innovative Multi-Host technology."The ConnectX-6 adapters include single/dual-port 200Gb/s Virtual Protocol Interconnect ® ports options, which double the data speed when compared to the previous generation. It also supports both the InfiniBand and the Ethernet standard protocols, and provides flexibility to connect with any CPU architecture - x86, GPU, POWER, ARM, FPGA and more. With unprecedented world-class performance at 200 million messages per second, ultra-low latency of 0.6usec, and in-network computing engines such as MPI-Direct, RDMA, GPU-Direct, SR-IOV, data encryption as well as the innovative Mellanox Multi-Host ® technology, ConnectX-6 will enable the most efficient compute and storage platforms in the industry. The Quantum 200Gb/s HDR InfiniBand switch is the world's fastest switch supporting 40-ports of 200Gb/s InfiniBand or 80-ports of 100Gb/s InfiniBand connectivity for a total of 16Tb/s of switching capacity, and with an extremely low latency of 90ns. Mellanox Quantum advances the support of in-network computing technology, delivers optimized and flexible routing engines, and is the most scalable switch IC available. Mellanox Quantum IC will be the building block for multiple switch systems - from 40-ports of 200Gb/s or 80-ports of 100Gb/s for Top-of-Rack solutions - to 800-ports of 200Gb/s and 1600-ports of 100Gb/s modular switch systems. To complete the end-to-end 200Gb/s InfiniBand infrastructure, Mellanox LinkX solutions will offer a family of 200Gb/s copper and Silicon Photonics fiber cables. Visit Mellanox Technologies at SC16 (November 14-17, 2016) Visit Mellanox Technologies at SC16 (booth #2631) to learn more on the new 200G HDR InfiniBand solutions and to see the full suite of Mellanox's end-to-end high-performance InfiniBand and Ethernet solutions.
For more information on Mellanox's event and speaking activities at SC16, please visit: Mellanox at SC16.

Supporting Resources:
- Learn more about Mellanox products and solutions at: www.mellanox.com
- Follow Mellanox on Twitter, Facebook, Google+, LinkedIn, and YouTube
- Join the Mellanox Community