N8 HPC currently operates two facilities: the Farr machine and Polaris.

Polaris Details

Compute nodes

  • SGI High Performance Computing cluster with a total of 332 compute nodes
  • 316 nodes (5,056 cores) with 4 GBytes of RAM per core (each node has 8 DDR3 DIMMs of 8 GBytes each, i.e. 64 GBytes of memory per node). These are known as “thin nodes”.
  • 16 nodes (256 cores) with 16 GBytes of RAM per core (each node has 16 DDR3 DIMMs of 16 GBytes each, i.e. 256 GBytes of memory per node). These are known as “fat nodes”.
  • Each node comprises two Intel Xeon E5-2670 (Sandy Bridge, 2.6 GHz) processors
  • Each processor has 8 cores, giving 16 cores per node and a total of 5,312 cores across the cluster
  • Each processor has a 115 Watt TDP, and the Sandy Bridge architecture supports “turbo” mode (Intel Turbo Boost)
  • Each node has a single 500 GB SATA HDD
  • There are 4 nodes in each Steelhead chassis: 79 chassis hold the 4 GB/core thin nodes and a further 4 hold the 16 GB/core fat nodes (83 chassis in total)
  • By using all of the nodes of Polaris together, a theoretical peak performance of around 110 TeraFLOPS (floating-point operations per second) is possible; a worked estimate of this figure is given below. Unlocking this performance is helped by the fast InfiniBand interconnect joining the nodes together
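
The quoted peak can be reproduced from the specifications above: each Sandy Bridge core can issue 8 double-precision floating-point operations per cycle with AVX, so the figure is simply cores x clock x 8. A short illustrative calculation (Python):

    cores = 5312             # 332 nodes x 2 sockets x 8 cores
    clock_hz = 2.6e9         # E5-2670 base clock; turbo mode raises this further
    flops_per_cycle = 8      # double precision with AVX (4-wide add + 4-wide multiply)

    peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
    print(f"Theoretical peak: {peak_tflops:.1f} TFLOPS")   # ~110.5 TFLOPS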

Connectivity

  • The compute nodes are fully connected by a Mellanox QDR InfiniBand high-performance interconnect with a 2:1 blocking factor

File System & Storage

  • A 174 TByte Lustre v2 parallel file system served by 2 OSSes (object storage servers). This is mounted as /nobackup and has no quota control. It is not backed up, and files are automatically expired after 90 days (a sketch for spotting files approaching expiry follows this list).
  • A 109 TByte NFS filesystem which holds user $HOME directories. This is backed up.
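
Because /nobackup is not backed up and files expire after 90 days, it can be useful to spot data approaching that limit before it disappears. The sketch below (Python, illustrative only: it assumes expiry is judged on modification time, which is not stated above, and the 80-day warning threshold is arbitrary) walks a directory and prints files that have not been modified for more than 80 days:

    import os
    import sys
    import time

    AGE_LIMIT_DAYS = 80          # warn before the 90-day expiry stated above
    root = sys.argv[1] if len(sys.argv) > 1 else "/nobackup"

    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                age_days = (now - os.path.getmtime(path)) / 86400
            except OSError:
                continue         # file vanished or is unreadable; skip it
            if age_days > AGE_LIMIT_DAYS:
                print(f"{age_days:5.0f} days old: {path}")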

Operating Systems

  • The login nodes are running RHEL6
  • The compute nodes are running CentOS6.
  • No cross-compilation is required, as the two distributions are binary compatible.

Farr Machine Details

  • The Farr machine, HERC1, is an SGI UV 2000 shared-memory computer
  • It has 256 cores and a total of 4 TBytes of memory, configured as 32 Intel Xeon E5-4650L CPUs (8 cores, 2.6 GHz, AVX capable) and 256 x 16 GByte DIMMs (1600 MHz, DDR3)
  • The machine consists of 32 NUMA (non-uniform memory access) nodes, each with 8 cores (a single CPU) and 128 GBytes of memory
  • One of the 32 NUMA nodes acts as a login node; the other 31 act as compute nodes and are accessed through a batch queue
  • The Farr machine runs the PBS scheduler, rather than the SGE scheduler used on Polaris (a minimal submission sketch follows)
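
To illustrate the difference in practice, the sketch below (Python, purely illustrative: the job name, resource request and executable are placeholders, and the exact PBS directive syntax accepted on the Farr machine is not confirmed here) writes a minimal PBS job script and submits it with qsub. Polaris uses SGE, which also provides a qsub command but expects SGE-style directives in the script.

    import subprocess

    # Minimal PBS job script. Directive syntax varies between PBS versions,
    # so treat the resource request below as a placeholder.
    job_lines = [
        "#!/bin/bash",
        "#PBS -N example_job",          # job name
        "#PBS -l walltime=01:00:00",    # one hour wall-clock limit
        "#PBS -l ncpus=8",              # placeholder resource request
        'cd "$PBS_O_WORKDIR"',          # start in the submission directory
        "./my_program    # hypothetical executable",
    ]

    with open("example_job.pbs", "w") as handle:
        handle.write("\n".join(job_lines) + "\n")

    # Both PBS (Farr machine) and SGE (Polaris) provide a qsub command, but
    # the directives inside the script differ between the two schedulers.
    subprocess.run(["qsub", "example_job.pbs"], check=True)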

Disk storage is provided by a single array comprising:

  • A 4 TByte /home filesystem (holding $HOME directories, shared between all users)
  • A 40 TByte /scratch filesystem (shared between all users)

Further Information