Service Information

Kubernetes as a Service uses a shared architecture: virtual clusters (control planes) are deployed on the host cluster and operate in isolation from each other. To access the Kubernetes API of a virtual cluster, a kubeconfig is provided. It can be added either to `~/.kube/config` or to a desktop Kubernetes management dashboard such as Lens or Headlamp.
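
For reference, the provided kubeconfig follows the standard client configuration format. A minimal sketch is shown below; the cluster name, server address, and credential values are placeholders, not anything issued by the service:

    # Sketch of a kubeconfig for a virtual cluster; all names and the
    # server URL are placeholders
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: <base64-encoded CA>
        server: https://vcluster.example.com:443   # placeholder endpoint
      name: my-vcluster
    contexts:
    - context:
        cluster: my-vcluster
        user: my-vcluster-user
      name: my-vcluster
    current-context: my-vcluster
    users:
    - name: my-vcluster-user
      user:
        client-certificate-data: <base64-encoded certificate>
        client-key-data: <base64-encoded key>

Once saved, point kubectl at the file via the `KUBECONFIG` environment variable or the `--kubeconfig` flag, or import it into the dashboard of your choice.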


Virtual Cluster Limitations and Available Functionality

  • Kubernetes version: 1.32.8
  • Each virtual cluster has its own control plane, etcd, and CoreDNS, separate from the host cluster.
  • Every pod in the cluster runs in a user namespace and is mapped to a unique, non‑privileged UID and GID on the host cluster. This allows containers to run as UID 0 (root) inside the pod without holding root privileges on the host.
  • All pods in the cluster are automatically assigned an fsGroup and a supplemental group with a fixed ID for that cluster.
  • Containers in pods must define resource requests/limits. By default, every container in the cluster is assigned the following values (see the example after this list for declaring them explicitly):
    limits:
      memory: 100Mi
    requests:
      ephemeral-storage: 200Mi
      memory: 100Mi
      cpu: 100m
  • Node resources and usage displayed in dashboards do not reflect reality: these values are synthetically generated and should be ignored.
  • Pod placement on nodes is handled by the host cluster scheduler.
  • CPU and memory usage for pods is reported by the host cluster's metrics server.
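
Because default requests and limits are injected, it is usually better to declare resources explicitly per container. A minimal sketch, using a hypothetical pod and image (names are illustrative; the values override the defaults listed above):

    apiVersion: v1
    kind: Pod
    metadata:
      name: resource-demo            # hypothetical name
    spec:
      containers:
      - name: app
        image: nginx:1.27            # illustrative image
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
            ephemeral-storage: 200Mi
          limits:
            memory: 512Mi
            ephemeral-storage: 400Mi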

External Access and Backups

  • For external access to services, a LoadBalancer IP is provided.
  • It is recommended to use an Ingress Controller and bind services that require external access to it (see the Ingress sketch after this list).
  • The control plane of virtual clusters (persistent volumes for etcd and cluster configuration) is backed up from the host cluster side.
  • Additionally, we can back up the persistent volumes of your pods.
  • Persistent volumes inside virtual clusters can also be backed up using k8up (see the Schedule sketch after this list).
  • If a database is used, backups can be performed via the corresponding database operator, e.g., the Percona Operator.
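
As a sketch of routing external traffic through an ingress controller, assuming an NGINX-class controller is installed and a hypothetical `web` Service listens on port 80 (the hostname and names are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web                      # hypothetical name
    spec:
      ingressClassName: nginx        # assumes an NGINX ingress controller
      rules:
      - host: app.example.com        # placeholder; DNS points at the LoadBalancer IP
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web            # hypothetical backend Service
                port:
                  number: 80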
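
For k8up, a minimal Schedule sketch for nightly persistent volume backups, assuming k8up is installed in the virtual cluster; the S3 endpoint, bucket, and secret names are placeholders:

    apiVersion: k8up.io/v1
    kind: Schedule
    metadata:
      name: pvc-backup                       # hypothetical name
    spec:
      backend:
        repoPasswordSecretRef:               # restic repository password
          name: backup-repo                  # placeholder secret
          key: password
        s3:
          endpoint: https://s3.example.com   # placeholder endpoint
          bucket: my-backups                 # placeholder bucket
          accessKeyIDSecretRef:
            name: backup-credentials         # placeholder secret
            key: username
          secretAccessKeySecretRef:
            name: backup-credentials
            key: password
      backup:
        schedule: '0 2 * * *'                # nightly at 02:00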