The operation of local HPC resources, in particular the Scientific Compute Cluster (SCC) at the GWDG, is achieved by transparently integrating different systems into a joint operating concept for the basic supply of the Max Planck Institutes and the university. This includes uniform software management, a shared batch management environment, cross-system monitoring and accounting, and cross-system file systems. Integrating different system generations and special-purpose systems (e.g. GPU clusters) in this way creates synergies. Users find a uniform environment on all HPC systems, while individual application environments are still supported. The result, however, is a highly heterogeneous cluster, which requires good knowledge of the architectural differences and carefully tuned run scripts.
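The shared batch environment means the same job scripts run across the subclusters, as long as the requested partition matches the target architecture. As a minimal sketch, assuming the batch system is Slurm and using the `medium` CPU partition listed in the node table (the module name and program name are placeholders):

```shell
#!/bin/bash
#SBATCH --partition=medium        # CPU partition (see the node architecture table)
#SBATCH --nodes=2                 # number of compute nodes
#SBATCH --ntasks-per-node=24      # MPI ranks per node
#SBATCH --time=02:00:00           # wall-clock limit
#SBATCH --output=job-%j.out       # output file, %j expands to the job ID

# Load the MPI environment (module name is an assumption)
module load openmpi

# Launch the MPI program on all allocated tasks
srun ./my_mpi_app
```

Because node types differ in core count and memory, pinning the per-node task count to the target architecture (here 24 tasks for a dual-socket 12-core node) avoids oversubscription when the same script is reused on another partition.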

The extensive documentation, the FAQ, and a first-steps guide are available online. If you use our systems for your research, please also follow the acknowledgement guidelines.

System Overview

7 Racks

Four racks at the Faßberg site are cold-water cooled. The two GPU nodes at the MDC are air cooled, and one CPU rack at the MDC is warm-water cooled.

410 Compute Nodes

The SCC cluster contains a mixture of Cascade Lake Xeon Platinum 9242, Broadwell Xeon E5-2650 v4, Haswell Xeon E5-4620 v3, and Cascade Lake Xeon Gold 6252 CPUs.

18,376 CPU Cores

Distributed over all compute and GPU nodes.

100 Gbit/s & 56 Gbit/s Interconnect

The interconnect of the Faßberg system runs at 56 Gbit/s (FDR InfiniBand), while the MDC system runs at 100 Gbit/s (Omni-Path).

99 TB RAM

Across all 410 nodes, 99 TB of main memory are available.

5.2 PiB Storage

The BeeGFS storage of the MDC system consists of 2 PiB HDD and 100 TiB SSD; the Faßberg system adds 130 TiB HDD. The StorNext home file system is around 3 PiB in size.

22+ PiB Tape Storage

Backup storage is provided by Quantum Scalar tape libraries. To ensure reliable backups, these are located at two different sites.


Node Architectures

CPU details are given as sockets × cores per socket × threads per core.

| Name | Number of nodes | CPU & GPU | CPU cores per socket | Memory [GB] | Partition |
|------|-----------------|-----------|----------------------|-------------|-----------|
| amp | 95 | 2 × Xeon Platinum 9242 (Cascade Lake, 2 × 48 × 2, 3.8 GHz) | 48 | 384 | medium |
| amp | 1 | 2 × Xeon Platinum 9242 (Cascade Lake, 2 × 48 × 2, 3.8 GHz) | 48 | 384 | gailing |
| dmp | 68 | 2 × Xeon E5-2650 v4 (Broadwell, 2 × 12 × 1, 2.2 GHz) | 12 | 128 | medium |
| dmp | 4 | 2 × Xeon E5-2650 v4 (Broadwell, 2 × 12 × 1, 2.2 GHz) | 12 | 128 | int |
| dmp | 10 | 2 × Xeon E5-2650 v4 (Broadwell, 2 × 12 × 1, 2.2 GHz) | 12 | 128 | medium-upsw |
| dfa | 15 | 2 × Xeon E5-2650 v4 (Broadwell, 2 × 12 × 1) | 12 | 512 | fat |
| dsu | 5 | 4 × Xeon E5-4620 v3 (Haswell, 4 × 10 × 1) | 10 | 1536 | fat, fat+ |
| gwde | 1 | 4 × Xeon E7-4809 v3 (Haswell, 4 × 8 × 1) | 8 | 2048 | fat, fat+ |
| dge | 7 | 2 × Xeon E5-2650 v4 (Broadwell, 2 × 12 × 1); 2 × GTX 1080 (2560 CUDA cores, 8 GB) | 12 | 128 | gpu |
| dge | 8 | 2 × Xeon E5-2650 v4 (Broadwell, 2 × 12 × 1); 4 × GTX 980 (2048 CUDA cores, 4 GB) | 12 | 128 | gpu |
| dge | 30 | 2 × Xeon E5-2650 v4 (Broadwell, 2 × 10 × 2, HTT) ; 4 × GTX 980 (2048 CUDA cores, 4 GB) | 10 | 64 | gpu-hub |
| gwdo | 20 | 1 × Xeon E3-1270 v2 (Ivy Bridge, 1 × 4 × 2, HTT); 1 × GTX 770 (1536 CUDA cores, 2 GB) | 4 | 32 | gpu-hub |
| dte | 10 | 2 × Xeon E5-2650 v4 (Broadwell, 2 × 12 × 1); 2 × Tesla K40m (2880 CUDA cores, 12 GB) | 12 | 128 | gpu |
| agt | 2 | 2 × Xeon Gold 6252 (Cascade Lake, 2 × 24 × 2); 8 × Tesla V100/32 (5120 CUDA cores, 640 tensor cores, 32 GB) | 24 | 384 | gpu |
| agq | 14 | 2 × Xeon Gold 6242 (Cascade Lake, 2 × 16 × 2); 4 × Quadro RTX 5000 (3072 CUDA cores, 384 tensor cores, 48 RT cores, 16 GB) | 16 | 192 | gpu |
| em | 32 | 2 × Xeon E5-2640 v3 (Haswell, 2 × 8 × 1) | 8 | 128 | em |
| sa | 32 | 2 × Xeon E5-2680 v3 (Haswell, 2 × 12 × 1) | 12 | 256 | sa |
| hh | 7 | 2 × Epyc 7742 (Rome, 2 × 64 × 2, SMT) | 64 | 1024 | hh |
| sgiz | 13 | 2 × Xeon Gold 6130 (Skylake, 2 × 16 × 1) | 16 | 96 | sgiz |
| gwdd | 8 | 2 × Xeon E5-2650 v3 (Haswell, 2 × 10 × 1, 2.3 GHz) | 10 | 64 | – |
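Jobs targeting the GPU nodes have to request GPUs explicitly in addition to the partition. A minimal sketch, assuming Slurm with GRES-based GPU scheduling (the exact GRES syntax accepted on the cluster is an assumption, as is the script name):

```shell
# Interactive shell with one GPU on the gpu partition
srun --partition=gpu --gres=gpu:1 --pty bash

# Batch job requesting all 8 V100 GPUs of an agt node
sbatch --partition=gpu --nodes=1 --gres=gpu:8 train.sh
```

Requesting GPUs via `--gres` rather than relying on the partition alone matters because partitions such as `gpu` mix node types (GTX 1080, GTX 980, K40m, V100, RTX 5000) with very different core counts and GPU memory.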