BRISBANE, Australia, June 4, 2013 (GLOBE NEWSWIRE) -- SGI (Nasdaq:SGI), the trusted leader in technical computing, today announced that Translational Research Institute (TRI) has selected SGI to provide a big data HPC solution powered by SGI® UV™ 2000 shared memory platform, SGI® Rackable® clusters and SGI® InfiniteStorage™ to accelerate results at its new state-of-the-art research centre.
This new facility, which brings together four leading medical research institutes, will focus on advanced treatments and therapies for common and serious diseases such as cancers, diabetes, inflammatory diseases, HIV, malaria, bone and joint diseases and obesity. The Institute is destined to be the largest biomedical research institute in the Southern Hemisphere.
Researchers will now have access to the necessary technology and facilities in one location, which will boost productivity by increasing the rate at which work is processed and scientific results are achieved. SGI's compute and data storage solution provides more than 2,200 SGI Rackable compute cores, an SGI UV 2000 with 256 cores and 4TB of shared memory, and more than one petabyte of SGI InfiniteStorage high-performance storage. Up to three petabytes of historical and inactive data will be stored on tape and remain accessible via SGI's DMF system.

The combination of SGI Rackable scale-out clusters, UV 2000 shared memory for large in-memory application requirements and high-performance storage offered the flexibility of compute, storage platforms and software that TRI needed to tackle its big data problems. The solution will manage the massive amounts of data that high-resolution gene sequencers, microscopes and associated laboratory equipment generate, assisting in increasing productivity and accelerating the time to discovery of new treatments, subsequent commercialisation and significant patents.

TRI Chief Operating Officer Dr. Kate Johnston highlights the rapid advances seen in research and development over the last decade, signalling that scientific research is fast becoming an exercise in handling enormous data sets.