NHR-NORD@Göttingen

“Emmy” was procured in 2018 as an HLRN-IV system in conjunction with our partners at the Zuse Institute Berlin (ZIB). Together with the Berlin “Lise” system, it is the successor to the HLRN-III system, which was previously hosted in Hanover and Berlin. With “Emmy”, the GWDG and the University of Göttingen became one of eight NHR centers, formally making “Emmy” an NHR system. Consequently, the NHR Scientific Board decides on the admission of projects for 50% of the runtime available on “Emmy”.

In 2022, “Emmy” was supplemented by the powerful GPU system “Grete”, which the NHR center uses to improve support for a wide range of applications, particularly in the field of artificial intelligence (AI).

See the Quickstart Guide for everything you need to get access to the system and start working on it. If you use our systems for your research, please also refer to the corresponding usage guidelines.

System Overview

21 Racks

The nodes of phase 1 occupy 10 racks in an air-cooled area, while the 11 racks of phase 2 are operated with direct hot-water cooling.

1,423 Compute Nodes

449 compute nodes belong to phase 1 and are equipped with Intel Xeon Gold 6148 SKL-SP CPUs. The other 974 compute nodes of phase 2 are equipped with Xeon Platinum 9242 CLX-AP CPUs.

111,464 CPU Cores

These are distributed over phase 1 and phase 2 with 17,960 and 93,504 cores, respectively.
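
These totals follow directly from the node and core counts: a quick sanity check, assuming 40 cores per phase-1 node (2 sockets x 20 cores) and 96 cores per phase-2 node (2 sockets x 48 cores), as listed under Node Architectures below.

```python
# Sanity check of the per-phase core totals quoted above.
phase1_cores = 449 * 40   # 449 phase-1 nodes x 40 cores each
phase2_cores = 974 * 96   # 974 phase-2 nodes x 96 cores each
print(phase1_cores)                 # 17960
print(phase2_cores)                 # 93504
print(phase1_cores + phase2_cores)  # 111464
```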

100 Gbit/s Interconnect

A 100 Gbit/s Intel Omni-Path network with a fat-tree topology offers low latency and high bandwidth; two 896-port director switches combine the two phases of Emmy into a single fabric.

5.95 PetaFlop/s

This result was achieved in the LINPACK benchmark, putting Emmy in 133rd place on the TOP500 list and 242nd place on the Green500 list (both as of November 2023).
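
For rough context, a theoretical double-precision peak can be estimated from the core counts and base clocks above, assuming 32 FLOPs per cycle per core (two AVX-512 FMA units on both Skylake-SP and Cascade Lake-AP). This is a back-of-the-envelope sketch, not an official Rpeak figure, and it ignores AVX-512 frequency reduction.

```python
# Illustrative peak estimate; 32 DP FLOP/cycle/core assumes two AVX-512
# FMA units (8 doubles x 2 ops per FMA). Using base clocks overestimates
# sustained AVX-512 frequency, so real efficiency is somewhat higher.
FLOP_PER_CYCLE = 32
peak_flops = (17_960 * 2.4e9 + 93_504 * 2.3e9) * FLOP_PER_CYCLE
print(f"peak ~ {peak_flops / 1e15:.2f} PFlop/s")           # ~8.26 PFlop/s
print(f"LINPACK efficiency ~ {5.95e15 / peak_flops:.0%}")  # ~72%
```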

478 TB RAM

A total of 478 TB of memory is available across all 1,423 nodes.

8.4 PiB Storage

A total of 8.4 PiB of storage capacity is available on the global parallel file systems: 340 TiB for the GPFS-based Home file system and 8.1 PiB for the Lustre-based Work file system.

6.4 PiB Tape Storage

To archive their results, users have a total of 6.4 PiB of tape storage with a 120 TiB hard disk cache at their disposal.

Node Architectures

| Name   | Nodes | CPU & GPU | CPU cores per node | Memory [GB] | Partition |
|--------|-------|-----------|--------------------|-------------|-----------|
| gcn1   | 432   | 2 x Xeon Gold 6148 (Skylake, 2 x 20 cores, 2.4 GHz, 2 threads/core) | 40 | 187 | medium40:test |
| gfn1   | 16    | 2 x Xeon Gold 6148 (Skylake, 2 x 20 cores, 2.4 GHz, 2 threads/core) | 40 | 754 | large40 |
| ggpu01 | 1     | 2 x Xeon Gold 6148 (Skylake, 2 x 20 cores, 2.4 GHz, 2 threads/core), 4 x Tesla V100-PCIE-32GB (5120 CUDA cores, 640 tensor cores, 32 GB each) | 40 | 754 | gpu |
| ggpu0x | 2     | 2 x Xeon Gold 6248 (Cascade Lake, 2 x 20 cores, 2.5 GHz, 2 threads/core), 4 x Tesla V100S-PCIE-32GB (5120 CUDA cores, 640 tensor cores, 32 GB each) | 40 | 754 | gpu |
| gcn2   | 1004  | 2 x Xeon Platinum 9242 (Cascade Lake, 2 x 48 cores, 2.3 GHz, 2 threads/core) | 96 | 376 | standard96:test |
| gfn2   | 16    | 2 x Xeon Platinum 9242 (Cascade Lake, 2 x 48 cores, 2.3 GHz, 2 threads/core) | 96 | 754 | large96 |
| gsn2   | 2     | 2 x Xeon Platinum 9242 (Cascade Lake, 2 x 48 cores, 2.3 GHz, 2 threads/core) | 96 | 1510 | huge96 |
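
To illustrate how the table is read in practice, the sketch below encodes the CPU partitions as a small data structure and picks the partition with the least per-node memory that still satisfies a job's per-node core and memory requirements. The limits are copied from the table above; the `pick_partition` helper itself is hypothetical, not a GWDG tool.

```python
# Minimal sketch: choose an Emmy CPU partition from per-node requirements.
# Core and memory limits are taken from the Node Architectures table.
PARTITIONS = {
    "medium40":   {"cores": 40, "mem_gb": 187},
    "large40":    {"cores": 40, "mem_gb": 754},
    "standard96": {"cores": 96, "mem_gb": 376},
    "large96":    {"cores": 96, "mem_gb": 754},
    "huge96":     {"cores": 96, "mem_gb": 1510},
}

def pick_partition(cores: int, mem_gb: int) -> str:
    """Return the fitting partition with the smallest per-node memory."""
    fitting = [
        (spec["mem_gb"], name)
        for name, spec in PARTITIONS.items()
        if spec["cores"] >= cores and spec["mem_gb"] >= mem_gb
    ]
    if not fitting:
        raise ValueError("request does not fit on any single Emmy node")
    return min(fitting)[1]

print(pick_partition(40, 100))   # medium40
print(pick_partition(96, 500))   # large96
print(pick_partition(96, 1200))  # huge96
```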