First Steps



To use our compute cluster you need a full GWDG account, which most employees of the University of Göttingen and the Max Planck Institutes already have. This account is, by default, not activated for the use of the compute resources. To get it activated, or if you are unsure whether you have a full GWDG account, please refer to our official documentation.

As with all services, the usage of our HPC resources is accounted in a fictitious currency, so-called Work Units ("Arbeitseinheiten", AE). For the current pricing, see the Dienstleistungskatalog.

Logging in

Once you gain access, you can log in to the frontend nodes. These nodes are accessible via ssh from the GÖNET. If you come from the internet, the preferred way to gain access to the GÖNET is to use a VPN connection. Alternatively, you can first log in to a GWDG login server; from there you can then reach the frontends.

ssh <GWDG username>
ssh gwdu<101|102|103>
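If you regularly take the two-hop route via a login server, a ProxyJump entry in ~/.ssh/config saves typing. This is only a sketch: <login host> and <GWDG username> are placeholders you must substitute with the actual GWDG login server and your account name.

```
# ~/.ssh/config sketch — <login host> is a placeholder for the GWDG login server
Host gwdu101 gwdu102 gwdu103
    User <GWDG username>
    ProxyJump <GWDG username>@<login host>
```

With this in place, a plain `ssh gwdu101` from the internet tunnels transparently through the login server.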

The frontends are meant for editing, compiling, and interacting with the batch system. Please don't use them for testing for more than a few minutes, since all users share the resources on the frontends and would be impaired in their daily work if you overuse them. gwdu101 is an AMD-based system, while gwdu102 and gwdu103 are Intel-based. If your software takes advantage of special CPU-dependent features, it is recommended to compile on the same CPU architecture that you target for running your jobs.
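To check which architecture the node you are on has before compiling, you can query the CPU vendor string. This is a small sketch that assumes a Linux x86_64 node, as on the frontends:

```shell
# Read the CPU vendor from /proc/cpuinfo (x86 Linux only)
vendor=$(grep -m1 'vendor_id' /proc/cpuinfo | awk '{print $3}')

# Map the vendor string to the matching frontend(s)
case "$vendor" in
  GenuineIntel) echo "Intel CPU: matches gwdu102/gwdu103" ;;
  AuthenticAMD) echo "AMD CPU: matches gwdu101" ;;
  *)            echo "Unknown or non-x86 CPU: $vendor" ;;
esac
```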

Preparing your environment

HPC systems provide software for many different users. Often they even provide several versions of the same software (e.g. compilers). To prevent dependency clashes, the software is provided in so-called "modules", which users can load according to their needs.

To see all available modules, use module avail. Once you know which modules you need, you can load them with module load <module name>. Necessary environment variables, e.g. LD_LIBRARY_PATH and PATH, are set by the module. With module show <module name> you can see further details of the module, e.g. which environment variables it sets. A complete guide to our module system and the installation of new modules can be found here.
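A typical module session might look like the following. The module names and versions below are illustrative assumptions, not guaranteed to match what is installed on the cluster; use module avail to see the real choices.

```shell
# List everything that is available
module avail

# Load a compiler and an MPI library (names/versions are examples only)
module load intel/19.0
module load openmpi

# Inspect what a module changes (PATH, LD_LIBRARY_PATH, ...)
module show openmpi

# See what is currently loaded, and unload everything when done
module list
module purge
```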

All information about the workload manager Slurm can be found in our documentation.
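As a first taste of what a batch job looks like, here is a minimal Slurm script sketch. The partition name and resource limits are assumptions; check the Slurm documentation linked above for the actual values on the SCC.

```shell
#!/bin/bash
#SBATCH --job-name=hello          # job name shown in the queue
#SBATCH --partition=medium        # partition name is an assumption — check the docs
#SBATCH --ntasks=1                # a single task
#SBATCH --time=00:10:00           # wall-clock limit
#SBATCH --output=hello-%j.out     # %j expands to the job ID

# Load required modules first, then run the actual workload
module purge
echo "Hello from $(hostname)"
```

You would submit such a script with sbatch and monitor it with squeue -u $USER.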


Information and help on using the HPC services for the university and the MPG (SCC and GöHPC systems).

HPC on Campus

HPC systems hosted by the GWDG on the Göttingen campus.

The GWDG, as the joint data center of the University of Göttingen and the Max Planck Society, hosts the Scientific Compute Cluster (SCC) to ensure access to a local HPC system for all scientists. In this role, the GWDG also hosts institute-owned systems in the scope of the GöHPC cooperation in an integrated hosting concept. In addition, the GWDG hosts two external systems: the Göttingen HLRN-IV system already today and, from the end of 2020 on, the Göttingen site of the HPC initiative of the German Aerospace Center (DLR).

Research and Science

Research, teaching and consultation on HPC on the Göttingen campus.

In addition to IT operations, one of the GWDG's main areas of activity is research and science. This is underlined by the various projects and the chairs of Prof. Dr. Ramin Yahyapour and Prof. Dr. Philipp Wieder. The HPC team is also committed to the promotion of young scientists by supporting teaching and supervising master's and doctoral theses.

The GWDG and in particular the HPC team are in close contact with the researchers who work on the HPC systems of the GWDG. This results in various methodological and application science synergies and projects, which are consolidated under the label GöHPC.

Contact Us


Write an E-Mail to:

Or chat with us on Rocket.Chat