INTEL DEVELOPER FORUM, SAN JOSE, Calif. — Sandia National Laboratories announced today at the Intel Developer Forum that it will demonstrate a next-generation, Intel-based high-performance computing (HPC) cluster, the first such cluster to use PCI Express visualization systems.
The demonstration will run PCI Express Ethernet and graphics adapters supplied by ATI on servers from Linux Networx and Celestica. It will also make use of the InfiniBand architecture, a new input/output standard network technology that promises improved HPC cluster performance at lower cost.
The InfiniBand cluster demonstration comprises 16 computers, containing a combination of Intel® Xeon™ and Intel® Itanium™ 2 processors, connected through InfiniBand Host Channel Adapters within a 10 Gbps InfiniBand fabric. The cluster runs the Linux operating system. “We look forward to working with Intel and others to evaluate the advantages of PCI Express and the InfiniBand architecture’s low-latency, high-bandwidth interconnect technology,” said Matt Leininger, a computational scientist at Sandia National Laboratories.
“Open interconnect standards like the InfiniBand architecture and PCI Express for Intel-based systems provide outstanding performance for world-class HPC clusters,” said Jim Pappas, director of initiative marketing for Intel’s Enterprise Platform Group. “Sandia National Laboratories has a long history of developing some of the most powerful clusters on the planet. We look forward to working closely with them in testing their InfiniBand cluster.”
The InfiniBand architecture simplifies and speeds server-to-server connections and links to other server-related systems, such as remote storage and networking devices. Its easier connectivity, reduced latency, improved bandwidth and enhanced interoperability increase the performance, reliability and scalability of Intel-based servers to meet the growth needs of emerging e-Business data centers.
Sandia, said Leininger, will soon begin testing the performance and scalability of a larger, current-generation cluster. The 128-node machine, expected to rank among the world’s top systems, is being built by Linux Networx and will run on 256 Intel Xeon processors at 3.0 GHz, connected by InfiniBand adapters supplied by Mellanox. The cluster will eventually be housed at Sandia National Laboratories in Livermore, Calif.
The cluster, scheduled to be delivered to Sandia this week, will initially be used for InfiniBand software stack validation and hardware testing, and ultimately will be available for the lab’s internal research and development.