HLRN Systems
| Key figure | Details |
| --- | --- |
| 21 Racks | The nodes of phase 1 are placed in 10 racks in an air-cooled area; the 11 racks of phase 2 are operated with direct hot-water cooling. |
| 1 423 Compute Nodes | 449 compute nodes belong to phase 1 and are equipped with Intel Xeon Gold 6148 (SKL-SP) CPUs. The other 974 compute nodes of phase 2 are equipped with Xeon Platinum 9242 (CLX-AP) CPUs. |
| 111 464 CPU Cores | These are distributed over phase 1 and phase 2 with 17 960 and 93 504 cores, respectively (see the sketch below the table). |
| 100 Gbit/s Interconnect | A 100 Gbit/s Intel Omni-Path network with a fat-tree topology offers low latency and high bandwidth, and combines the two phases of Emmy into a single fabric via two 896-port director switches. |
| 5.95 PetaFlop/s | This result was achieved in the LINPACK benchmark, putting Emmy in 47th place on the November 2020 TOP500 list. |
| 478 TB RAM | A total of 478 TB of main memory is available across all 1 423 nodes. |
| 8.4 PiB Storage | A total of 8.4 PiB of storage capacity is available on global parallel file systems, divided into 340 TiB for the GPFS-based Home and 8.1 PiB for the Lustre-based Work file system. |
| 6.4 PiB Tape Storage | To archive their results, users have a total of 6.4 PiB of tape storage with a 120 TiB hard-disk cache at their disposal. |
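The node, core, and storage figures above are internally consistent. The following minimal Python sketch reproduces them from the official per-socket core counts (20 for the Xeon Gold 6148, 48 for the Xeon Platinum 9242), assuming the standard dual-socket node configuration, which is not stated explicitly above:

```python
# Sanity check of the Emmy key figures listed above.
# Assumption: dual-socket nodes (typical for both platforms).
# Per-socket core counts are the official Intel specifications:
#   Xeon Gold 6148 (SKL-SP)     -> 20 cores
#   Xeon Platinum 9242 (CLX-AP) -> 48 cores

phases = {
    # name: (nodes, sockets per node, cores per socket)
    "phase 1 (Xeon Gold 6148)":     (449, 2, 20),
    "phase 2 (Xeon Platinum 9242)": (974, 2, 48),
}

total_nodes = total_cores = 0
for name, (nodes, sockets, cores) in phases.items():
    phase_cores = nodes * sockets * cores
    total_nodes += nodes
    total_cores += phase_cores
    print(f"{name}: {nodes} x {sockets} x {cores} = {phase_cores} cores")

print(f"total: {total_nodes} nodes, {total_cores} cores")
# -> phase 1: 17960 cores, phase 2: 93504 cores
# -> total: 1423 nodes, 111464 cores

# Global parallel storage: 340 TiB (Home, GPFS) + 8.1 PiB (Work, Lustre)
total_pib = 340 / 1024 + 8.1
print(f"global storage: {total_pib:.2f} PiB")  # -> 8.43 PiB, rounded to 8.4
```

Both totals match the published figures, which supports the dual-socket assumption.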