- SGI High Performance Computing cluster with a total of 332 compute nodes
- 316 nodes (5,056 cores) with 4 GB of RAM per core (each node has 8 DDR3 DIMMs of 8 GB each, i.e. 64 GB of memory per node). These are known as "thin nodes".
- 16 nodes (256 cores) with 16 GB of RAM per core (each node has 16 DDR3 DIMMs of 16 GB each, i.e. 256 GB of memory per node). These are known as "fat nodes".
- Each node comprises two Intel 2.6 GHz Sandy Bridge E5-2670 processors
- Each processor has 8 cores, giving a total core count of 5,312 cores
- Each processor has a TDP of 115 W, and the Sandy Bridge architecture supports "turbo" mode
- Each node has a single 500 GB SATA HDD
- There are 4 nodes in each Steelhead chassis: 79 chassis hold the 4 GB/core (thin) nodes and 4 more hold the 16 GB/core (fat) nodes
- Using all of the nodes of Polaris together gives a peak performance of 110 TFLOPS. Reaching this peak is helped by the fast InfiniBand interconnect joining the nodes together
- The compute nodes are fully connected by Mellanox QDR InfiniBand high performance interconnects with 2:1 blocking
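The node, core, and peak-performance figures above are internally consistent, which can be checked with a few lines of arithmetic. A minimal sketch (the figure of 8 double-precision FLOPs per cycle for Sandy Bridge with AVX is an assumption not stated in the text):

```python
# Sanity-check the published Polaris figures.
THIN_NODES, FAT_NODES = 316, 16
CORES_PER_NODE = 2 * 8        # two E5-2670 processors, 8 cores each
CLOCK_GHZ = 2.6
FLOPS_PER_CYCLE = 8           # assumed: AVX double precision on Sandy Bridge

total_nodes = THIN_NODES + FAT_NODES               # 332
total_cores = total_nodes * CORES_PER_NODE         # 5,312
thin_cores = THIN_NODES * CORES_PER_NODE           # 5,056
peak_tflops = total_cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0

print(total_nodes, total_cores, thin_cores, round(peak_tflops, 1))
# → 332 5312 5056 110.5
```

The result (~110.5 TFLOPS) matches the quoted 110 TFLOPS peak, so the core count and clock rate account for the headline figure.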
File System & Storage
- 174 TBytes Lustre v2 parallel file system with 2 OSSes. This is mounted as /nobackup and has no quota control. It is not backed up and files are automatically expired after 90 days.
- 109 TBytes NFS filesystem where user $HOME is mounted. This is backed up.
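Because /nobackup is not backed up and files there expire after 90 days, users need a way to spot files at risk. A minimal sketch of such a check, assuming a per-user directory layout like /nobackup/$USER (the path is an assumption, not documented above):

```python
# List regular files not modified in the last 90 days under a directory.
# The default path /nobackup/<user> is an assumed layout for illustration.
import os
import time
from pathlib import Path

def stale_files(root, days=90):
    """Return paths under `root` whose mtime is older than `days` days."""
    cutoff = time.time() - days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            if path.stat().st_mtime < cutoff:
                stale.append(path)
    return stale

if __name__ == "__main__":
    root = Path("/nobackup") / os.environ.get("USER", "")
    for path in stale_files(root):
        print(path)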
- The login nodes are running RHEL 6
- The compute nodes are running CentOS 6.
- Since the login and compute nodes share the same processor architecture, no cross-compilation is required.
- A list of installed software is at http://n8hpc.org.uk/software-az/ along with how to request other applications
- Asking for help
- Skills training via online and eLearning courses