
Continuous Integration (CI)

Please note

Currently, we do not enforce a job timeout on the runner side. The runners are built to best support your scientific projects, and it would be a pity if the generally available offer had to be limited due to misuse of the platform.
So let us all use the resources responsibly!

The Helmholtz GitLab comes with Continuous Integration enabled by default. Several SaaS runners are provided for your projects.

Getting familiar with GitLab CI

To get started and become familiar with GitLab CI, we strongly suggest having a look at any of these resources.

Linux shared runners

General Purpose

By default, without specifying a tag, all runners are configured with these properties:

| Executor | Operating System | privileged (run Docker-in-Docker) |
| -------- | ---------------- | ---------------------------------- |
| Docker   | Linux            | False                               |

Multiple runners are available to execute your jobs. If you don’t provide a tag, your job will run on any of the machines listed below; a minimal untagged job is sketched after the table. Except for hifis-runner-manager-1 and gitlab-runner-manager.hemera, all concurrent jobs share the available resources.

| Name | Base OS | # CPU cores | Memory | Hard disk | # Concurrent Jobs | Executor | Tags | privileged |
| ---- | ------- | ----------- | ------ | --------- | ----------------- | -------- | ---- | ---------- |
| hifis-runner-manager-1 | Flatcar Linux | 2 | 4096 MiB | 40 GiB | 20 | docker+machine | hifis, docker, dind | True |
| shared-hzdr-1 | Ubuntu | 2 | 8192 MiB | 80 GiB | 2 | docker | webterminal | False |
| shared-hzdr-2 | Ubuntu | 2 | 8192 MiB | 80 GiB | 2 | docker | webterminal | False |
| gitlab-runner-manager.hemera | Flatcar Linux | 6 | 16 GiB | 40 GiB | 1 | docker+machine | performance, dind, docker | True |
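
If you omit the tags keyword entirely, the job is simply scheduled on one of the general purpose runners above. A minimal sketch (the image and script are illustrative placeholders, not prescribed by this setup):

test:untagged:
  image: python:3.11      # any public Docker image works here; python:3.11 is just an example
  stage: test
  script:
    - python --version    # without tags, this job runs on any of the shared runners listed above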

If you require additional isolation of your jobs, hifis-runner-manager-1 is the way to go. On this runner, all your jobs run on fresh VMs in autoscaling mode, and each instance is used for only one job. This guarantees that jobs cannot affect each other.

What does privileged refer to?

Usually, Docker containers are run with privileged set to false. For security reasons, it is generally a bad idea to use the privileged mode: a container in privileged mode is granted extended privileges and has nearly the same access to the host as processes running outside containers on the host. Nevertheless, this privileged mode is a prerequisite for running a Docker-in-Docker setup. In our setup this is not a security issue, because every job is executed in a fresh VM that is thrown away after the associated job has exited. Find more information in the Docker blog.
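
As a sketch of what such a Docker-in-Docker job might look like, the following uses the dind tag from the table above; the image, service, and build command are illustrative, and depending on the runner configuration you may additionally need to set variables such as DOCKER_HOST or DOCKER_TLS_CERTDIR:

build:image:
  image: docker:latest
  services:
    - docker:dind           # the Docker daemon runs as a service next to the job
  stage: build
  script:
    - docker info           # verify the job can talk to the Docker daemon
    - docker build -t my-image:latest .   # placeholder build command
  tags:
    - dind                  # schedule on a privileged runner that allows Docker-in-Docker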

What is autoscaling?

In this mode, jobs are executed on machines created on demand. Autoscaling reduces queue times to spin up CI/CD jobs and provides an isolated VM for each job. As a nice secondary effect, this isolation also maximizes security, because jobs are not able to interfere with each other.

Specific Runners

Please note

The specific runners are a limited but generally available resource. If you do not require their additional features, please do not use them; doing so would only slow down your pipelines, since the runners described above are available in a highly scalable, low-latency way.
Help us to ensure that the resources remain accessible without restrictions.

For certain projects it is useful to test on different hardware or to make use of accelerators, e.g. GPUs. This is why we enable you to run your jobs on a set of special-purpose runners covering four different CPU types across three architectures and two GPU vendors. The table below gives you information about the available hardware and the tags to use.

| Name | Base OS | Tags | # CPU cores | Memory | Accelerator | CPU architecture | CPU type |
| ---- | ------- | ---- | ----------- | ------ | ----------- | ---------------- | -------- |
| ci-intel-cuda | Ubuntu | intel, x86_64, cuda, p5000 | 8 | 20 GiB | Quadro P5000 | x86_64 | Intel(R) Xeon(R) Silver 4110 |
| ci-amd-rocm | Ubuntu | amd, rocm, rx-vega, amd-epyc-7351 | 8 | 12 GiB | Radeon RX Vega 64 & Radeon R9 FURY | x86_64 | AMD EPYC 7351 |
| ci-arm64 | Ubuntu | aarch64, arm64, cavium-thunderx-88xx | 12 | 12 GiB | - | aarch64 | Cavium ThunderX 88XX |
| ci-ppc64le-cuda | Ubuntu | ppc64le, p100, cuda, power8nvl | 32 | 50 GiB | 2 × Tesla P100 SXM2 | ppc64le | Power8NVL |

Example

Below you find an example of a .gitlab-ci.yml file using some of the runners mentioned above. By adding tags to your job definition, you let GitLab know on which runner it should schedule your job.

test:
  image: nvidia/cuda
  stage: test
  script:
    - nvidia-smi
  tags:
    - cuda  # Job will run either on ci-intel-cuda or ci-ppc64le-cuda

test:cuda:intel:
  image: nvidia/cuda
  stage: test
  script:
    - nvidia-smi
  tags:
    - cuda
    - x86_64  # Job will run on ci-intel-cuda
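
In the same way, the other tags from the table can be combined to pin a job to a specific machine. As a hedged sketch, the following job would target the ARM runner; the image is only an example of a multi-arch image:

test:arm64:
  image: ubuntu:22.04
  stage: test
  script:
    - uname -m   # expected to print aarch64 on ci-arm64
  tags:
    - aarch64    # Job will run on ci-arm64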

Mac shared runners (Experimental)

At the moment, an experimental Mac runner is available for testing. If your project requires access, please write us an e-mail. We will do our best to help you set it up and understand the limitations compared to the Linux runners.

Windows shared runners

Currently, we do not offer shared Windows runners. As soon as we also support the Windows platform, we will let you know.

Feel free to register your own runners for Windows.
