Mellanox InfiniBand Solutions Enable New Levels Of Research At Brookhaven National Laboratory

SC12--Mellanox® Technologies, Ltd. (NASDAQ: MLNX) (TASE: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that the U.S. Department of Energy’s Brookhaven National Laboratory has deployed Mellanox FDR 56Gb/s InfiniBand with RDMA to build a cost-effective, scalable 100Gb/s network for compute and storage connectivity. Key research currently under way at Brookhaven National Laboratory includes systems biology to advance the fundamental knowledge underlying biological approaches to producing biofuels and to sequestering carbon in terrestrial ecosystems, advanced energy systems research, and nuclear and high-energy physics experiments that explore the most fundamental questions about the nature of the universe.

“Researchers at Brookhaven National Laboratory rely on data-intensive applications that require high-throughput access to data storage systems,” said Dantong Yu, research engineer at Brookhaven National Laboratory. “Scientists often need to read and write data at aggregate speeds of 10Gb/s, 100Gb/s and beyond; at 100Gb/s, that is equivalent to fetching a full-length HD movie in less than a second. The efficiency and scalability of Mellanox InfiniBand solutions with RDMA should help us eliminate bottlenecks in the interconnect between servers and storage while keeping processing cost and latency under control. Faster access to data enables us to move our research forward more quickly.”
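
RDMA lets the network adapter move data directly between registered application buffers on communicating hosts, bypassing intermediate kernel copies; that is the source of the efficiency and latency savings Yu describes. As a rough illustration only (not Brookhaven’s code), the following minimal libibverbs sketch opens the first available InfiniBand device and registers a buffer so the adapter can access it directly; the buffer size, access flags, and error handling are illustrative.

```c
/* Minimal RDMA sketch using libibverbs: open the first RDMA-capable
 * device, allocate a protection domain, and register a buffer so the
 * adapter can read/write it without staging copies through the kernel.
 * Build: gcc rdma_reg.c -o rdma_reg -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }

    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { fprintf(stderr, "ibv_alloc_pd failed\n"); return 1; }

    size_t len = 1 << 20;               /* 1 MiB example buffer */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    /* The lkey/rkey pair is what a peer would use to target this
     * buffer in RDMA read/write operations. */
    printf("registered %zu bytes on %s (lkey=0x%x rkey=0x%x)\n",
           len, ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```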

One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical and environmental sciences, as well as in energy technologies and national security.

Brookhaven National Laboratory constructed a storage area network (SAN) testbed using the iSCSI Extensions for RDMA (iSER) protocol over Mellanox InfiniBand storage interconnects. The solution scales to give a large number of cluster and cloud hosts unrestricted access to virtualized storage, and it enables gateway hosts, such as FTP and web servers, to move data between clients and storage at extremely high speed. Combined with its front-end network interface, the upgraded SAN will eliminate bottlenecks and deliver 100Gb/s end-to-end data transfer throughput to support applications that constantly need to move large amounts of data within and across Brookhaven’s data centers.
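
Because iSER surfaces remote storage as ordinary SCSI block devices, applications need no RDMA-specific code; the RDMA transport operates beneath the initiator. One simple way to sanity-check end-to-end throughput on such a setup is to time large direct reads against an attached LUN. The sketch below is a hypothetical check for a Linux host, with /dev/sdb standing in for whatever device name the iSER initiator assigns.

```c
/* Hypothetical throughput check for an iSER-attached block device:
 * read it sequentially with large O_DIRECT requests and report MiB/s.
 * The device path is a placeholder, not a name from Brookhaven's setup.
 * Build: gcc iser_read.c -o iser_read */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/sdb";      /* placeholder iSER LUN */
    const size_t block = 4 << 20;      /* 4 MiB per request */
    const int iters = 256;             /* 1 GiB total */

    int fd = open(dev, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires an aligned buffer. */
    void *buf;
    if (posix_memalign(&buf, 4096, block)) { perror("posix_memalign"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t total = 0;
    for (int i = 0; i < iters; i++) {
        ssize_t n = read(fd, buf, block);
        if (n <= 0) { perror("read"); break; }
        total += (size_t)n;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("read %.1f MiB in %.3f s (%.1f MiB/s)\n",
           total / 1048576.0, secs, total / 1048576.0 / secs);

    close(fd);
    free(buf);
    return 0;
}
```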
