HIFIS Cloud Portal - Developer Manual

Document History

Current Version: 1.0

Current Version Date: 2022-03-01


This document contains all information necessary to participate in the development of the HIFIS Cloud Portal.

Development Setup

All code related to the Cloud Portal can be found in GitLab at

The easiest way to participate in the development is to use the provided container-based environment, which you can find in the tools directory of the main repository. The environment can be set up using docker-compose or podman-compose. The environments always use a base portal-dev container that has all necessary tools available to build, test, and run the different parts of the Cloud Portal, plus a mongodb database container. Additionally, there can be further containers for Selenium testing or Helmholtz Cloud Agent (HCA) development. The different configurations are:

  • docker-compose.yaml: default environment with one portal-dev and one mongodb container.
  • docker-compose-with-selenium.yaml: same as default with an additional chrome container to run web application tests.
  • docker-compose-with-hca.yaml: same as default with an additional rabbitmq and an hca container to test the HCA integration.
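
As an illustration, the environments listed above could be started like this; the `tools` directory comes from the text above, while the `portal-dev` service name used with `exec` is an assumption:

```shell
# Start the default environment (portal-dev + mongodb) in the background
cd tools
docker-compose up -d

# Or pick one of the variant configurations, e.g. with Selenium:
docker-compose -f docker-compose-with-selenium.yaml up -d

# Open a shell in the dev container (service name is an assumption)
docker-compose exec portal-dev bash

# podman-compose accepts the same sub-commands
podman-compose up -d
```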

If you cannot or do not want to use the containers, you can also set up your own environment. You need at least the following software:

  • OpenJDK 11
  • Maven >= 3.6
  • Node >= 12.x (LTS)
  • MongoDB >= 4
  • Python 3

You can find more details in the Dockerfile.
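
A quick way to verify a manually installed toolchain is a small version check; the `version_ge` helper below is a sketch that relies on GNU `sort -V`, and the parsing of the tool output is an assumption that may need adjustment:

```shell
# true if $1 >= $2 when compared as dotted version numbers
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare installed versions against the minimums listed above
version_ge "$(mvn -version 2>/dev/null | awk 'NR==1 {print $3}')" "3.6" \
  && echo "Maven OK" || echo "Maven missing or too old"
version_ge "$(node --version 2>/dev/null | tr -d v)" "12.0.0" \
  && echo "Node OK" || echo "Node missing or too old"
```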


The general system architecture is described in the architecture part. The source code should be documented sufficiently, without describing boilerplate code (e.g. getters and setters).

Development Infrastructure


We are using git as VCS in the project. All source code is maintained in the GitLab repository from HZDR. The repositories for Helmholtz Cloud access layer components are located in the group, and for the portal specifically in

You should fork the main repository and develop in branches. Code changes are only accepted as merge requests from your fork to the main branch.
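
The fork-based workflow can be sketched with plain git commands; the URLs below are placeholders, not the real repository addresses:

```shell
# Clone your fork (placeholder URL -- use the real GitLab address)
git clone git@gitlab.example.org:your-user/cloud-portal.git
cd cloud-portal

# Track the main repository as "upstream"
git remote add upstream git@gitlab.example.org:cloud-portal/cloud-portal.git

# Develop on a feature branch, never directly on main
git switch -c feature/my-change
git commit -am "Describe the change"
git push -u origin feature/my-change
# Then open a merge request against the upstream main branch in GitLab
```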


In order to always have a compilable and executable code base, we use the CI features of GitLab. For each MR, a pipeline with a set of stages runs. The stages make sure that the code compiles, run the given unit tests, and package everything into containers, which are hosted in the integrated GitLab container repository. In the last step, a deployment to a Kubernetes test cluster at DESY is triggered. The MR is deployed as a fully functional Cloud Portal instance with its own database that can be used for further testing and reviews. Only after the pipeline has run successfully and the code has been reviewed may the MR be merged into the main branch.
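
The pipeline stages roughly correspond to the following local commands; the Maven goals and image names here are assumptions, not the actual CI configuration:

```shell
# Stages 1+2: compile the code and run the unit tests
mvn verify

# Stage 3: package everything into containers (image name is a placeholder)
docker build -t registry.example.org/cloud-portal/portal:mr-1234 .
docker push registry.example.org/cloud-portal/portal:mr-1234

# Stage 4: the deployment to the DESY test cluster is triggered by CI itself
```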

CD Infrastructure

The main deployment of the Cloud Portal is split into two environments, integration and production, both hosted on a Kubernetes cluster at DESY and managed by FluxCD.

The integration testbed can be found at (restricted access) and is automatically redeployed after each push to the main branch. On each push, a pipeline builds containers with the latest tag and pushes them to the GitLab container repository. In the last step of the pipeline, a script triggers a redeployment, which then pulls the new containers.

The production deployment can be found at This deployment uses tagged releases which are created from the main branch. The deployment is not automatic and instead has to be adapted in Flux whenever a new tagged release is created.

Clusters and Configurations

Both the cluster for the MR deployments and the cluster for the integration testbed and production are Rancher-managed Kubernetes clusters. To get access to these clusters you first have to ask for access to Rancher (

The two clusters are:

  • guest-k8s: MR deployments
  • kube-cluster1: Integration and Production
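
Once Rancher access has been granted, a kubeconfig for the respective cluster can be downloaded from the Rancher UI and used with kubectl; the file path and namespace below are assumptions:

```shell
# Point kubectl at the downloaded kubeconfig (path is an assumption)
export KUBECONFIG="$HOME/.kube/guest-k8s.yaml"

# Inspect the available contexts and the pods of an MR deployment
kubectl config get-contexts
kubectl get pods --namespace my-mr-namespace
```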

The MR deployments are created using manual Helm releases from a pipeline running on a repository at the DESY GitLab ( It is necessary to run this pipeline directly at DESY and not at HZDR, since the Kubernetes API is not accessible from outside DESY. The MR deployments use dynamic DNS mapping to make them easily accessible for reviews; the deployments are therefore automatically accessible from outside DESY.

The integration testbed and the production deployment are managed using FluxCD. The configuration repository is available at the DESY GitLab ( It is a private repository and access has to be granted manually. Redeployment of the integration testbed is again triggered from the HZDR repository via the hifiscp-deployment-triggers repository at DESY. The production deployment can only be changed from the Flux repository.
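
With the Flux CLI, a change in the configuration repository can be applied without waiting for the next sync interval; the source and kustomization names here are assumptions:

```shell
# Fetch the latest commit of the config repository (name is an assumption)
flux reconcile source git hifiscp-config

# Apply the updated manifests to the cluster
flux reconcile kustomization cloud-portal --with-source
```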

Data import

In the future the service catalogue information will be fetched regularly from Plony, but at the moment it is still stored in the main repository ( MongoDB provides functionality to easily export all data in JSON format and to import this data again. This is used to store a backup of the data in GitLab. Currently, this is also the place to change, via MRs, any service-related information that will be shown in the Cloud Portal. For the integration testbed this data is automatically imported whenever an MR is merged into main. To make the data available in production, a pipeline has to be started manually in hifiscp-deployment-triggers. This can be done directly from the repository ( by setting the variable CP_BUILD_STAGE to production.
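
The export/import round trip could look as follows; the database and collection names are assumptions:

```shell
# Export the service catalogue to a JSON array (names are assumptions)
mongoexport --db=portal --collection=services --jsonArray --out=services.json

# Re-import the backup, dropping the existing collection first
mongoimport --db=portal --collection=services --jsonArray --drop --file=services.json
```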