SC12--Mellanox® Technologies, Ltd. (NASDAQ: MLNX) (TASE: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that the U.S. Department of Energy's Brookhaven National Laboratory has deployed Mellanox FDR 56Gb/s InfiniBand with RDMA to build a cost-effective and scalable 100Gb/s network for compute and storage connectivity.

Key research currently conducted at Brookhaven National Laboratory includes systems biology to advance the fundamental knowledge underlying biological approaches to producing biofuels and sequestering carbon in terrestrial ecosystems, advanced energy systems research, and nuclear/high-energy physics experiments exploring the most fundamental questions about the nature of the universe.

"Researchers at Brookhaven National Laboratory rely on data-intensive applications that require high-throughput access to data storage systems," said Dantong Yu, research engineer at Brookhaven National Laboratory. "Scientists often need to read and write data at aggregate speeds of 10Gb/s, 100Gb/s and beyond, which is equivalent to fetching a full-length HD movie in less than a second. The efficiency and scalability of Mellanox InfiniBand solutions with RDMA should help us eliminate bottlenecks on the interconnect between servers and storage, while also controlling processing cost and latency. Faster access to data enables us to move our research forward more quickly."

One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical and environmental sciences, as well as in energy technologies and national security. Brookhaven National Laboratory constructed a storage area network (SAN) testbed utilizing the iSCSI Extensions for RDMA (iSER) protocol over Mellanox InfiniBand-based storage interconnects with RDMA.
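The "HD movie in less than a second" comparison in the quote above can be sanity-checked with simple arithmetic. The movie size below is an illustrative assumption (the release does not specify one); the link speed is the 100Gb/s figure quoted.

```python
# Sanity-check the quoted throughput claim: at an aggregate 100 Gb/s,
# how long does transferring a full-length HD movie take?
link_gbps = 100                       # aggregate network speed, gigabits per second
movie_gb = 8                          # assumed HD movie size in gigabytes (illustrative)

bytes_per_sec = link_gbps * 1e9 / 8   # 100 Gb/s = 12.5 GB/s
seconds = movie_gb * 1e9 / bytes_per_sec

print(f"Transfer time: {seconds:.2f} s")
```

An 8 GB movie moves in well under a second at that rate, consistent with the claim; even a larger Blu-ray-sized file of ~25 GB would take about two seconds.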
This storage solution scales to allow a large number of cluster/cloud hosts unrestricted access to virtualized storage, and enables gateway hosts, such as FTP and web servers, to move data between clients and storage at extremely high speed. Combined with its front-end network interface, the upgraded SAN will eliminate bottlenecks and deliver 100Gb/s end-to-end data transfer throughput to support applications that constantly need to move large amounts of data within and across Brookhaven's data centers.
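On Linux, an iSER-based SAN path like the one described above is typically built from the kernel LIO target and the open-iscsi initiator. The following is a minimal configuration sketch under that assumption; the block device, IP address, and IQNs are placeholders, and the exact `targetcli` paths vary by distribution.

```shell
# --- Target side (LIO/targetcli): export a block device over iSER ---
targetcli /backstores/block create name=disk0 dev=/dev/sdb
targetcli /iscsi create iqn.2012-11.gov.bnl:storage.disk0
targetcli /iscsi/iqn.2012-11.gov.bnl:storage.disk0/tpg1/luns \
    create /backstores/block/disk0
# Enable the iSER transport on the portal (RDMA-capable NIC required)
targetcli /iscsi/iqn.2012-11.gov.bnl:storage.disk0/tpg1/portals/0.0.0.0:3260 \
    enable_iser boolean=true

# --- Initiator side (open-iscsi): discover, switch transport to iSER, log in ---
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2012-11.gov.bnl:storage.disk0 -p 192.0.2.10 \
    -o update -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2012-11.gov.bnl:storage.disk0 -p 192.0.2.10 --login
```

The key step is the `iface.transport_name` update: the same iSCSI target and LUN layout is reused unchanged, while the data path switches from TCP to RDMA, which is what lets iSER remove the server-side copy and interrupt overhead the release refers to.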
"National research labs, such as Brookhaven National Laboratory, require extremely fast data access for their applications in order to conduct their research more effectively," said Gilad Shainer, vice president of market development at Mellanox. "Mellanox InfiniBand and RDMA solutions provide the most efficient and scalable interconnect infrastructure to enable Brookhaven National Laboratory to increase their application performance and achieve their research goals."

Visit Mellanox Technologies & Brookhaven National Laboratory at SC12 (November 12-15, 2012): stop by Mellanox Technologies at booth #1531 on Wednesday, November 14th at 11:45am to see Brookhaven National Laboratory's live demonstration of its high-speed data transfer network.

Supporting Resources:
- Learn more about Mellanox’s complete FDR 56Gb/s InfiniBand solution
- Follow Mellanox on Twitter and Facebook