
Getting Started

vNode is a multi-tenancy container runtime that leverages Linux user namespaces and seccomp filters to provide strong isolation between workloads. It is designed for Kubernetes environments and can securely run privileged workloads such as docker-in-docker or kubernetes-in-kubernetes. It enforces that every container executed by vNode runs as non-root within a sandbox.

To learn more about vNode, please refer to the vNode architecture documentation. To get started, follow the steps below.

info

vNode uses Linux kernel features to provide extra isolation. It does not build on any virtualization technology such as KVM or Hyper-V and therefore executes containers at native bare-metal speed.

Benefits of using vNode

  • Rootless: Every container is started and executed as non-root within a vNode. No virtualization is needed; everything stays bare-metal, with almost zero overhead.
  • Isolated: vNode uses Linux user namespaces and seccomp filters to improve isolation between workloads.
  • Secure: With the improved container isolation, even privileged Kubernetes features such as hostPID, hostNetwork or hostPath can be used. Containers such as docker-in-docker or kubernetes-in-kubernetes can be securely isolated.
  • Compatible: vNode is compatible with all major Kubernetes offerings such as EKS, GKE, AKS, etc.

Before you begin

To deploy vNode, you need a Kubernetes cluster (see the supported distributions below) as well as a platform host and access key.

Set your environment variables

# vNode version to deploy
RELEASE=v0.0.1

# Platform host
PLATFORM_HOST=https://platform-your-domain.com

# Platform access key
PLATFORM_ACCESS_KEY=your-access-key

Install vNode

Kind

Create a KinD cluster via:

kind create cluster

Deploy vNode runtime via:

# Deploy the helm chart
helm upgrade --install vnode-runtime vnode-runtime -n vnode-runtime --version $RELEASE \
--repo https://charts.loft.sh --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"

# Optional, only set this if your platform is using a self-signed certificate
# --set "config.platform.insecure=true"
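
After the chart is installed, you can verify that the runtime is ready. This is a quick sanity check, assuming the chart deploys its components into the vnode-runtime namespace (as in the command above) and registers a RuntimeClass named vnode (the name used in the examples later on this page):

# Check that the vnode-runtime pods are running
kubectl get pods -n vnode-runtime

# Check that the RuntimeClass was registered
kubectl get runtimeclass vnode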

EKS

Create an EKS cluster via:

# EKS settings
EKS_CLUSTER_NAME=vnode-runtime-test
EKS_REGION=eu-west-1
EKS_NUM_NODES=1
EKS_VERSION=1.30
EKS_MACHINE_TYPE=t3.xlarge
EKS_AMI_FAMILY=AmazonLinux2023 # Use AmazonLinux2023 for vNode; the default Amazon Linux AMI ships a kernel that is too old

# Create the EKS cluster
eksctl create cluster \
--name $EKS_CLUSTER_NAME \
--version $EKS_VERSION \
--region $EKS_REGION \
--node-type $EKS_MACHINE_TYPE \
--node-ami-family $EKS_AMI_FAMILY \
--nodes $EKS_NUM_NODES \
--managed

Deploy vNode runtime via:

# Deploy the helm chart
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime --version $RELEASE \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"

# Optional, only set this if your platform is using a self-signed certificate
# --set "config.platform.insecure=true"

GKE

Create a GKE cluster via:

# GKE settings
GKE_CLUSTER_NAME=vnode-runtime-test
GKE_ZONE=europe-west2-a
GKE_MACHINE_TYPE=n4-standard-4
GKE_NUM_NODES=1

# Create the GKE cluster
gcloud container clusters create $GKE_CLUSTER_NAME \
--zone $GKE_ZONE \
--machine-type $GKE_MACHINE_TYPE \
--num-nodes $GKE_NUM_NODES \
--release-channel "regular"

# (Optional) Fetch cluster credentials for kubectl
gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_ZONE

Deploy vNode runtime via:

# Deploy the helm chart
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime --version $RELEASE \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"

# Optional, only set this if your platform is using a self-signed certificate
# --set "config.platform.insecure=true"

AKS

Create an AKS cluster via:

AKS_RESOURCE_GROUP=my-resource-group
AKS_CLUSTER_NAME=my-cluster
AKS_NODE_COUNT=1
AKS_MACHINE_TYPE=Standard_D4s_v6
AKS_LOCATION=westeurope
# Azure Linux v3.0 or newer is required because vNode needs at least kernel version 6.1, which is currently only available with Kubernetes 1.32.0 and above
AKS_KUBERNETES_VERSION=1.32.0

# Create the AKS cluster
az aks create --yes \
--resource-group $AKS_RESOURCE_GROUP \
--name $AKS_CLUSTER_NAME \
--node-count $AKS_NODE_COUNT \
--node-vm-size $AKS_MACHINE_TYPE \
--location $AKS_LOCATION \
--kubernetes-version $AKS_KUBERNETES_VERSION \
--os-sku AzureLinux \
--generate-ssh-keys

# Get the credentials for the AKS cluster
az aks get-credentials --overwrite-existing --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME

Deploy vNode runtime via:

# Deploy the helm chart
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime --version $RELEASE \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"

# Optional, only set this if your platform is using a self-signed certificate
# --set "config.platform.insecure=true"

Other Distros (K3s etc.)

Deploy vNode runtime via:

# Deploy the helm chart
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime --version $RELEASE \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY"

# Optional, only set this if your platform is using a self-signed certificate
# --set "config.platform.insecure=true"

Note: This does not work for Docker Desktop or OrbStack Kubernetes because they use cri-dockerd, which is not supported; vNode currently only supports containerd. OpenShift with CRI-O is also not supported right now.

Unsupported Distros

Kubernetes clusters with the following properties are currently not supported (you can check your nodes as shown after this list):

  • Clusters running a kernel version below 6.1
  • Clusters using a container runtime other than containerd
  • Docker Desktop (uses cri-dockerd)
  • Orbstack (uses cri-dockerd)
  • OpenShift (uses cri-o)
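
You can check both requirements against your nodes with kubectl; the wide node listing includes each node's kernel version and container runtime:

# The KERNEL-VERSION column must show 6.1 or newer, and CONTAINER-RUNTIME must show containerd
kubectl get nodes -o wide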

If your Kubernetes cluster uses non-standard paths for containerd, the kubelet or CNI, you can set the following options to point vNode at the correct paths:

config:
  # The root directory of containerd. Typically this is /var/lib/containerd
  containerdRoot: ""
  # The state directory of containerd. Typically this is /run/containerd
  containerdState: ""
  # The config path for containerd. Typically this is /etc/containerd/config.toml
  containerdConfig: ""
  # The directory the shims are copied to. Typically this is /usr/local/bin
  containerdShimDir: ""
  # The root path for the kubelet. Typically this is /var/lib/kubelet
  kubeletRoot: ""
  # The root path for the kubelet pod logs. Typically this is /var/log
  kubeletLogRoot: ""
  # The directory where the CNI configuration is stored. Typically this is /etc/cni/net.d
  cniConfDir: ""
  # The directory where the CNI binaries are stored. Typically this is /opt/cni/bin
  cniBinDir: ""
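
These options live under the chart's config section, so, assuming the chart accepts them via --set in the same way it accepts the platform settings, they can be passed at install time. The paths below are illustrative placeholders only:

# Example: install with custom containerd paths (illustrative values)
helm upgrade vnode-runtime vnode-runtime -n vnode-runtime --version $RELEASE \
--repo https://charts.loft.sh --install --create-namespace \
--set "config.platform.host=$PLATFORM_HOST" \
--set "config.platform.accessKey=$PLATFORM_ACCESS_KEY" \
--set "config.containerdRoot=/var/lib/containerd" \
--set "config.containerdState=/run/containerd"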

Use vNode

With vCluster

vCluster and vNode work well together: you can use vNode to run privileged workloads in a vCluster and add an additional layer of security. To use vNode as the runtime for a vCluster's workloads, set the following configuration:

sync:
  toHost:
    pods:
      runtimeClassName: vnode
info

This will require vCluster version v0.23 or newer
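
For example, assuming the snippet above is saved as vcluster.yaml, you could pass it as a values file when creating a virtual cluster with the vCluster CLI:

# Create a vCluster whose synced pods use the vNode runtime class
vcluster create my-vcluster -f vcluster.yaml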

With Nvidia GPU Operator

vNode is compatible with the Nvidia GPU Operator and can be used to run GPU workloads. The only requirement is that CDI is enabled in the GPU Operator.

This can either be done when installing the GPU Operator:

helm upgrade gpu-operator nvidia/gpu-operator --install \
-n gpu-operator --create-namespace \
--set cdi.enabled=true # Enable CDI

Or with an existing GPU Operator installation:

kubectl patch clusterpolicies.nvidia.com/cluster-policy --type='json' \
-p='[{"op": "replace", "path": "/spec/cdi/enabled", "value":true}]'
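
You can verify the setting by reading back the same field that the patch above modifies:

# Should print "true" once CDI is enabled
kubectl get clusterpolicies.nvidia.com/cluster-policy -o jsonpath='{.spec.cdi.enabled}'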

For more information on how to enable CDI, please refer to the Nvidia GPU Operator documentation.

Example using vNode

Create a new privileged workload pod that uses the vNode runtime:

# Create a new privileged pod that uses shared host pid
echo "apiVersion: v1
kind: Pod
metadata:
name: bad-boy
spec:
runtimeClassName: vnode # This is the runtime class name for vNode
hostPID: true
terminationGracePeriodSeconds: 1
containers:
- image: ubuntu:jammy
name: bad-boy
command: ['tail', '-f', '/dev/null']
securityContext:
privileged: true" | kubectl apply -f -

# Wait for the pod to start
kubectl wait --for=condition=ready pod bad-boy

# Get a shell into the bad-boy
kubectl exec -it bad-boy -- bash

Within the privileged pod, you can now list processes. The listing shows only the processes of the current container and its shim, but not those of any other host containers or pods:

# ps -ef --forest
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:20 ? 00:00:00 /var/lib/vnode/bin/vnode-init
root 53 1 0 10:20 ? 00:00:00 /var/lib/vnode/bin/vnode-containerd-shim-runc-v
65535 75 53 0 10:20 ? 00:00:00 \_ /pause
root 185 53 0 10:20 ? 00:00:00 \_ tail -f /dev/null
root 248 53 0 10:20 pts/0 00:00:00 \_ bash
root 256 248 0 10:20 pts/0 00:00:00 \_ ps -ef --forest

If you run the same container without the vNode runtime, the process can see all other host processes:

# ps -ef --forest
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:19 ? 00:00:00 /sbin/init
root 88 1 0 10:19 ? 00:00:00 /lib/systemd/systemd-journald
root 308 1 0 10:19 ? 00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535 389 308 0 10:19 ? 00:00:00 \_ /pause
root 663 308 2 10:19 ? 00:00:04 \_ etcd --advertise-client-urls=https://192.16
root 309 1 0 10:19 ? 00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535 404 309 0 10:19 ? 00:00:00 \_ /pause
root 540 309 0 10:19 ? 00:00:00 \_ kube-scheduler --authentication-kubeconfig=
root 318 1 0 10:19 ? 00:00:00 /usr/local/bin/containerd-shim-runc-v2 -namespa
65535 411 318 0 10:19 ? 00:00:00 \_ /pause
...
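
When you are done experimenting, remove the test pod:

# Delete the example pod
kubectl delete pod bad-boy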