SCC


The operation of local HPC resources, and especially of the Scientific Compute Cluster (SCC) at the GWDG, is achieved by transparently integrating different systems into a joint operating concept for the basic supply of the Max Planck Institutes and the university. This includes uniform software management, a shared batch management environment, cross-system monitoring and accounting, and cross-system file systems. Synergies are thus achieved through the integration of different system generations and special-purpose systems (e.g. GPU clusters). Users find a uniform environment on all HPC systems, while individual application environments are supported at the same time.

The SCC is distributed across the headquarters at Faßberg and the "Fernmeldezentrale" (old telecommunications centre):

 

This sketch shows the structure of the cluster and a schematic representation of the available resources. The spatial separation has a direct influence on the use of the system, especially the use of storage. The cluster comprises compute nodes with different properties, summarised in the following table:

 
Host | CPU | Cores per CPU | Memory per Node [GB] | GPU | Partition
gwdd 001-168 | 2x Ivy Bridge Xeon E5-2670 v2 | 10 | 64 | - | medium
gwdd 169-176 | 2x Haswell Xeon E5-2650 v3 | 10 | 64 | - | int
dmp 001-079 | 2x Broadwell Xeon E5-2650 v4 | 12 | 128 | - | medium
gwda 023-048 | 4x Abu Dhabi Opteron 6378 | 16 | 256 | - | fat
dfa 001-015 | 2x Broadwell Xeon E5-2650 v4 | 12 | 512 | - | fat
dsu 001-005 | 4x Haswell Xeon E5-4620 v3 | 10 | 1536 | - | fat+
gwde 001-001 | 4x Haswell Xeon E7-4809 v3 | 8 | 2048 | - | fat+
dge 001-007 | 2x Broadwell Xeon E5-2650 v4 | 12 | 128 | 2x 1080 | gpu
dge 008-015 | 2x Broadwell Xeon E5-2650 v4 | 12 | 128 | 4x 980 | gpu
dte 001-010 | 2x Broadwell Xeon E5-2650 v4 | 12 | 128 | 2x Tesla K40m | gpu
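
Jobs are assigned to these partitions via Slurm. The following minimal sketch shows how one of the partitions from the table could be requested in a batch script; the program name is a placeholder and the resource values are only illustrative:

    #!/bin/bash
    #SBATCH --partition=medium        # one of: medium, int, fat, fat+, gpu
    #SBATCH --ntasks=20               # illustrative value
    #SBATCH --time=01:00:00           # illustrative wall-clock limit

    srun ./my_program                 # placeholder for your application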

Storage Structure of the SCC


The SCC has various storage options. Each user has their own $HOME directory, which is accessible from anywhere within the SCC and is backed up. This is the general Unix/Linux file service operated by the GWDG; however, this file system is comparatively slow. The personal storage space can be increased upon request: please send an informal e-mail to support@gwdg.de asking for a quota increase of your $HOME directory.
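
Before requesting more space, it can be useful to check the current size of the home directory with standard tools, for example:

    # Show the total size of your home directory (can be slow on this file system)
    du -sh "$HOME"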

The local storage of a compute node can be accessed under /local. It consists of very fast SSD storage. A directory /local/jobs/ is created automatically; this path is also exported as the environment variable $TMP_LOCAL. The local storage has an automatic file deletion feature to ensure a clean setup for every new batch job.
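
A minimal sketch of a batch script that uses the node-local SSD via $TMP_LOCAL; the input file and program name are placeholders:

    #!/bin/bash
    #SBATCH --partition=medium
    #SBATCH --time=02:00:00

    # $TMP_LOCAL points to the job directory on the fast local SSD of the node.
    cd "$TMP_LOCAL"

    # Stage input data, run the (placeholder) program, and copy the results back
    # before the job ends, because local files are cleaned up automatically.
    cp "$HOME/input.dat" .
    ./my_program input.dat > result.out
    cp result.out "$HOME/"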

The frontend and compute nodes in the old "Fernmeldezentrale" (old telecommunications centre) (gwda, gwdd, gwde, gwdu101, gwdu102) have access to the shared file system /scratch, while the compute nodes at the Faßberg site (dfa, dge, dmp, dsu, dte, gwdu103) have access to the shared file system /scratch2. Both /scratch and /scratch2 are parallel BeeGFS file systems with high bandwidth and are usually the best choice for temporary data during program execution on the batch system. Neither of these file systems automatically deletes data or creates a backup.

To back up important results, the scc_backup mechanism copies data to the home directory (${HOME}/scc_backup) at night. For this, the data to be copied must be placed in /scratch/${USER}/scc_backup or /scratch2/${USER}/scc_backup, respectively. To ensure that a job runs on nodes with access to /scratch or /scratch2, use the Slurm option -C scratch or -C scratch2, respectively.

Since /scratch and /scratch2 are shared among all users, we strongly recommend storing data that is not immediately needed elsewhere, for example in the archive, which can be reached via the environment variable $AHOME. When using the archive, always compress folders (as tar files) before moving them. If the free space on /scratch or /scratch2 runs out, you will be asked to remove files from these systems.
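
As an illustration, the following sketch combines the points above: the job requests nodes with access to /scratch, works in a personal directory there, and places its results in the scc_backup folder. Program and file names are placeholders.

    #!/bin/bash
    #SBATCH --partition=medium
    #SBATCH -C scratch                      # request nodes with access to /scratch
    #SBATCH --time=04:00:00

    # Work in a personal directory on the parallel BeeGFS file system.
    WORKDIR=/scratch/${USER}/run_${SLURM_JOB_ID}
    mkdir -p "$WORKDIR"
    cd "$WORKDIR"

    ./my_program > output.log               # placeholder for your application

    # Results placed here are copied to ${HOME}/scc_backup during the night.
    mkdir -p /scratch/${USER}/scc_backup
    cp output.log /scratch/${USER}/scc_backup/

Data that is no longer needed on /scratch could be compressed and moved to the archive, for example:

    tar czf project.tar.gz project/         # compress the folder first
    mv project.tar.gz "$AHOME"/             # then move the tar file to the archive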

Transfer Data

 

To copy data to or from the $HOME directory, or to and from /scratch and /scratch2, several options are available, depending on where you want to transfer the data from.

The transfer node transfer-scc.gwdg.de is available within the GÖNET. This node has access to the $HOME directory as well as to /scratch.

The transfer node transfer.gwdg.de is available worldwide, but it only has access to the $HOME directories. There is no dedicated transfer node for /scratch2; instead, you have to use gwdu103.gwdg.de. More information and command-line examples can be found in the documentation.
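
For example, data could be transferred with scp or rsync via these nodes. The user name and paths below are placeholders, and the example assumes that /scratch is mounted under the same path on the transfer node:

    # From within the GÖNET: copy a file to /scratch via transfer-scc.gwdg.de
    scp results.tar.gz YOUR_USERNAME@transfer-scc.gwdg.de:/scratch/YOUR_USERNAME/

    # From anywhere: synchronise a local directory into the home directory
    rsync -avz ./mydata/ YOUR_USERNAME@transfer.gwdg.de:~/mydata/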

Services

Information and help on the usage of the HPC services for the university and MPG (SCC and GöHPC systems).

HPC on Campus

HPC-Systems hosted by GWDG on the Göttingen campus.

The GWDG, as the joint data center of the University of Göttingen and the Max Planck Society, hosts the Scientific Compute Cluster (SCC) to ensure access to a local HPC system for all scientists. In this role, the GWDG also hosts institute-owned systems within the scope of the GöHPC cooperation in an integrated hosting concept. In addition, the GWDG hosts two external systems: the Göttingen HLRN-IV system, which is already in operation, and, from the end of 2020 on, the Göttingen site of the HPC initiative of the German Aerospace Center (DLR).

Research and Science

Research, teaching and consultation on HPC on the Göttingen campus.

In addition to IT operations, one of the GWDG's main areas of activity is research and science. This is underlined by the various projects and the chairs of Prof. Dr. Ramin Yahyapour and Prof. Dr. Philipp Wieder. The HPC team is also committed to the promotion of young scientists by supporting teaching and by supervising master's and doctoral theses.

The GWDG, and in particular the HPC team, is in close contact with the researchers who work on the GWDG's HPC systems. This results in various methodological and application-oriented synergies and projects, which are consolidated under the GöHPC label.

Contact Us

 

Write an E-Mail to: hpc@gwdg.de

Or chat with us on Rocket.Chat