To use our compute cluster, you need a full GWDG account, which most employees of the University of Göttingen and the Max Planck Institutes already have. By default, this account is not activated for the compute resources. To have it activated, or if you are unsure whether you have a full GWDG account, please send an informal email to email@example.com
Like all our services, the usage of our HPC resources is accounted in a fictitious currency, the so-called Work Units ("Arbeitseinheiten", AE). For the current pricing, see the Dienstleistungskatalog.
Once you gain access, you can log in to the frontend nodes, e.g.
gwdu103.gwdg.de. These nodes are accessible via ssh from the GÖNET. If you are coming from the internet, the preferred way to reach the GÖNET is a VPN connection. Alternatively, you can first log in to
login.gwdg.de and from there reach the frontends.
ssh <GWDG username>@login.gwdg.de
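If you connect through login.gwdg.de regularly, the two-hop login can be automated with OpenSSH's ProxyJump option. The host alias gwdg-frontend below is a hypothetical name chosen for this sketch; substitute your own username.

```
# Hypothetical entry for ~/.ssh/config; replace <GWDG username> with your account name.
Host gwdg-frontend
    HostName gwdu103.gwdg.de
    User <GWDG username>
    ProxyJump <GWDG username>@login.gwdg.de
```

Afterwards, ssh gwdg-frontend reaches the frontend via login.gwdg.de in a single step.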
The frontends are meant for editing, compiling, and interacting with the batch system. Please do not use them for testing for more than a few minutes: all users share the resources on the frontends and would be impaired in their daily work if you overuse them. gwdu101 is an AMD-based system, while gwdu102 and gwdu103 are Intel-based. If your software takes advantage of CPU-specific features, it is recommended to compile on the same CPU architecture that you target for running your jobs.
Preparing your environment
HPC systems provide software for many different users, often including several versions of the same software (e.g. compilers). To prevent dependency clashes and similar problems, the software is provided in so-called "modules", which each user can load as needed.
To see all available modules, use
module avail. Once you know which modules you need, you can load them with
module load <module name>. Necessary environment variables, e.g.
PATH, are set by the module. With
module show <module name> you can see further details of the module, e.g. which environment variables it sets.
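A typical module workflow might look like the following sketch; the module name gcc/9.2.0 is only a placeholder, since the versions actually available are the ones listed by module avail.

```
# List all available modules.
module avail

# Inspect a module before loading it (placeholder name).
module show gcc/9.2.0

# Load the module; this adjusts PATH and related variables.
module load gcc/9.2.0

# List the modules currently loaded in this shell.
module list
```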
All information about the workload manager Slurm can be found in our documentation.