N8 HPC currently operates one facility, Polaris, an SGI high-performance computing cluster with 332 compute nodes. Each node has two of Intel's latest 8-core Sandy Bridge processors and a peak capacity of 320 GigaFLOPS. Using all of the nodes of Polaris together, a peak performance of 110 TeraFLOPS is possible. A fast InfiniBand interconnect joining the nodes together helps to unlock this peak performance.
To give an indication of the compute power available: at 110 TFLOPS, Polaris is roughly equivalent to half a million iPads, and about an eighth of the power of the national supercomputer, HECToR.
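The headline figures can be sanity-checked from the node specification. A minimal sketch, assuming Sandy Bridge's 8 double-precision FLOPs per cycle per core with AVX (this per-cycle figure is an assumption about the architecture, not stated above):

```python
# Sanity-check Polaris's peak-performance figures.
# Assumption: Sandy Bridge sustains 8 double-precision FLOPs per
# cycle per core with AVX (4-wide add + 4-wide multiply per cycle).
FLOPS_PER_CYCLE = 8
CLOCK_HZ = 2.6e9          # E5-2670 base clock
CORES_PER_NODE = 2 * 8    # two 8-core sockets per node
NODES = 332

node_peak = CORES_PER_NODE * CLOCK_HZ * FLOPS_PER_CYCLE
system_peak = NODES * node_peak

print(f"Per-node peak: {node_peak / 1e9:.1f} GFLOPS")    # ~332.8 GFLOPS
print(f"System peak:   {system_peak / 1e12:.1f} TFLOPS")  # ~110.5 TFLOPS
```

At base clock this gives slightly more than the quoted 320 GFLOPS per node, and the system total lands close to the quoted 110 TFLOPS peak.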
- 316 nodes (5,056 cores) with 4 GByte of RAM per core (each node has 8 DDR3 DIMMs of 8 GByte each, i.e. 64 GBytes of memory per node). These are known as "thin nodes".
- 16 nodes (256 cores) with 16 GByte of RAM per core (each node has 16 DDR3 DIMMs of 16 GByte each, i.e. 256 GBytes of memory per node). These are known as "fat nodes".
- Each node comprises two Intel 2.6 GHz Sandy Bridge E5-2670 processors.
- Each processor has 8 cores, giving a total of 5,312 cores across the system.
- Each processor has a 115 Watt TDP, and the Sandy Bridge architecture supports "turbo" mode.
- Each node has a single 500 GB SATA HDD
- There are 4 nodes in each Steelhead chassis: 79 chassis hold the 4 GB/core thin nodes and 4 more hold the 16 GB/core fat nodes.
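As a quick consistency check, the chassis, node, core, and memory counts quoted above all agree with one another:

```python
# Cross-check the node and memory inventory quoted above.
CORES_PER_NODE = 16  # two 8-core sockets

thin_nodes, thin_gb_per_node = 316, 64    # "thin" nodes: 4 GB/core
fat_nodes, fat_gb_per_node = 16, 256      # "fat" nodes: 16 GB/core

total_nodes = thin_nodes + fat_nodes
total_cores = total_nodes * CORES_PER_NODE
total_ram_gb = thin_nodes * thin_gb_per_node + fat_nodes * fat_gb_per_node

# 83 Steelhead chassis (79 thin + 4 fat) at 4 nodes each
assert total_nodes == (79 + 4) * 4

print(total_nodes, total_cores, total_ram_gb)  # 332 5312 24320
```

So the machine holds 332 nodes, 5,312 cores, and about 24 TBytes of RAM in total.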
The compute nodes are fully connected by a Mellanox QDR InfiniBand high-performance interconnect with 2:1 blocking (the uplink bandwidth between switch levels is half the aggregate node bandwidth, a common cost/performance trade-off).
File System & Storage
- 174 TBytes Lustre v2 parallel file system served by 2 Object Storage Servers (OSSes). This is mounted as /nobackup and has no quota control. It is not backed up, and files are automatically expired after 90 days.
- 109 TBytes NFS file system where user $HOME directories are mounted. This is backed up.
- The login nodes run RHEL 6.
- The compute nodes run CentOS 6.
- No cross-compilation is required: CentOS 6 is binary-compatible with RHEL 6, so code built on the login nodes runs unmodified on the compute nodes.