Controlling Replicas With Kubernetes Node Labels and LINSTOR Auxiliary Properties
By using Kubernetes node labels and LINSTOR® auxiliary properties, you can better control the placement of your replicas within your cluster. This is useful when you need to avoid placing two replicas within a single failure domain (such as a rack or data center).
Assume that you have a six-node Kubernetes cluster with LINSTOR configured for persistent storage by using the LINSTOR Operator, and you have a LINSTOR storage pool named lvm-thin configured across all nodes.
# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
kube-0   Ready    control-plane   8d    v1.28.15
kube-1   Ready    <none>          8d    v1.28.15
kube-2   Ready    <none>          8d    v1.28.15
kube-3   Ready    <none>          8d    v1.28.15
kube-4   Ready    <none>          8d    v1.28.15
kube-5   Ready    <none>          8d    v1.28.15
LINSTOR ==> node list
╭───────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞═══════════════════════════════════════════════════════════╡
┊ kube-0 ┊ SATELLITE ┊ 172.16.145.82:3366 (PLAIN) ┊ Online ┊
┊ kube-1 ┊ SATELLITE ┊ 172.16.126.70:3366 (PLAIN) ┊ Online ┊
┊ kube-2 ┊ SATELLITE ┊ 172.16.79.139:3366 (PLAIN) ┊ Online ┊
┊ kube-3 ┊ SATELLITE ┊ 172.16.89.199:3366 (PLAIN) ┊ Online ┊
┊ kube-4 ┊ SATELLITE ┊ 172.16.186.9:3366 (PLAIN) ┊ Online ┊
┊ kube-5 ┊ SATELLITE ┊ 172.16.241.198:3366 (PLAIN) ┊ Online ┊
╰───────────────────────────────────────────────────────────╯
LINSTOR ==> storage-pool list
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
8<------------------------------------------------------------snip--------------------------------------------------------------------------------8<
┊ lvm-thin ┊ kube-0 ┊ LVM_THIN ┊ drbdpool/thinpool ┊ 9.59 GiB ┊ 9.59 GiB ┊ True ┊ Ok ┊ kube-0;lvm-thin ┊
┊ lvm-thin ┊ kube-1 ┊ LVM_THIN ┊ drbdpool/thinpool ┊ 9.59 GiB ┊ 9.59 GiB ┊ True ┊ Ok ┊ kube-1;lvm-thin ┊
┊ lvm-thin ┊ kube-2 ┊ LVM_THIN ┊ drbdpool/thinpool ┊ 9.59 GiB ┊ 9.59 GiB ┊ True ┊ Ok ┊ kube-2;lvm-thin ┊
┊ lvm-thin ┊ kube-3 ┊ LVM_THIN ┊ drbdpool/thinpool ┊ 9.59 GiB ┊ 9.59 GiB ┊ True ┊ Ok ┊ kube-3;lvm-thin ┊
┊ lvm-thin ┊ kube-4 ┊ LVM_THIN ┊ drbdpool/thinpool ┊ 9.59 GiB ┊ 9.59 GiB ┊ True ┊ Ok ┊ kube-4;lvm-thin ┊
┊ lvm-thin ┊ kube-5 ┊ LVM_THIN ┊ drbdpool/thinpool ┊ 9.59 GiB ┊ 9.59 GiB ┊ True ┊ Ok ┊ kube-5;lvm-thin ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Also assume you have your six nodes evenly distributed across three separate racks within your data center, or across three separate availability zones (AZ) within a cloud region.
These examples assume kube-0 and kube-1 are in one rack or AZ, kube-2 and kube-3 are in another, and kube-4 and kube-5 are in yet another.
LINSTOR, by default, is not aware of this distribution and therefore might place both replicas of a two replica LINSTOR volume within the same rack or AZ. This would leave your data inaccessible during a rack or AZ outage. Alternatively, you might want to keep replicas within a single rack or AZ to isolate LINSTOR data replication, or to keep replication latency to an absolute minimum.
In either situation, you will first need to add Kubernetes labels to each node. The LINSTOR Operator will automatically monitor for a handful of select node labels and apply them as auxiliary properties on the LINSTOR node objects. These node labels are:
kubernetes.io/hostname (applied as Aux/topology/kubernetes.io/hostname)
topology.kubernetes.io/zone (applied as Aux/topology/topology.kubernetes.io/zone)
topology.kubernetes.io/region (applied as Aux/topology/topology.kubernetes.io/region)
Using the previous assumptions, you will add the following node labels to your Kubernetes nodes, by using the zone key with the values a, b, and c to differentiate your racks or availability zones.
# kubectl label nodes kube-{0,1} topology.kubernetes.io/zone=a
node/kube-0 labeled
node/kube-1 labeled
# kubectl label nodes kube-{2,3} topology.kubernetes.io/zone=b
node/kube-2 labeled
node/kube-3 labeled
# kubectl label nodes kube-{4,5} topology.kubernetes.io/zone=c
node/kube-4 labeled
node/kube-5 labeled
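Before moving on, you can confirm that the labels were applied by adding a zone label column to the node listing:
# kubectl get nodes -L topology.kubernetes.io/zone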
The output of a node list-properties command will show the Kubernetes node labels applied as auxiliary properties on each of the relevant LINSTOR node objects.
LINSTOR ==> node list-properties kube-0
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Key ┊ Value ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ Aux/linbit.com/configured-interfaces ┊ ["default-ipv4"] ┊
┊ Aux/linbit.com/last-applied ┊ ["Aux/linbit.com/configured-interfaces","Aux/topology/kubernetes.io/hostname","Aux/topology/linbit.com/hostname","Aux/topology/topology.kubernetes.io/zone"] ┊
┊ Aux/topology/kubernetes.io/hostname ┊ kube-0 ┊
┊ Aux/topology/linbit.com/hostname ┊ kube-0 ┊
┊ Aux/topology/topology.kubernetes.io/zone ┊ a ┊
┊ CurStltConnName ┊ default-ipv4 ┊
┊ NodeUname ┊ kube-0 ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
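The LINSTOR Operator keeps these auxiliary properties in sync for you. If you ever need to set an auxiliary property on a LINSTOR node yourself, for example in a cluster that is not managed by the operator, a rough sketch of the LINSTOR client command looks like the following (the site key and its value here are only illustrative):
LINSTOR ==> node set-property --aux kube-0 site a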
You can then configure LINSTOR storageClasses to avoid placing replicas within a single failure domain by using the LINSTOR storageClass parameter replicasOnDifferent, naming the zone key.
cat << EOF > linstor-sc-on-diff.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-on-diff-r2"
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: "lvm-thin"
  replicasOnDifferent: "topology.kubernetes.io/zone"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "linstor-csi-lvm-thin-on-diff-r3"
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "3"
  storagePool: "lvm-thin"
  replicasOnDifferent: "topology.kubernetes.io/zone"
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
kubectl apply -f linstor-sc-on-diff.yaml
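A quick check confirms that both storage classes exist before you create any PVCs:
# kubectl get storageclass linstor-csi-lvm-thin-on-diff-r2 linstor-csi-lvm-thin-on-diff-r3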
Creating persistent volume claims (PVCs) by using the storageClasses created earlier will result in replicas being distributed where the zone key has different values.
cat << EOF > pvcs-on-diff.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim-diff-zone-0
spec:
  storageClassName: linstor-csi-lvm-thin-on-diff-r2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim-diff-zone-1
spec:
  storageClassName: linstor-csi-lvm-thin-on-diff-r3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
EOF
kubectl apply -f pvcs-on-diff.yaml
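Because neither storage class sets a volumeBindingMode, the default Immediate mode applies, so LINSTOR provisions and places the replicas as soon as the PVCs are created. You can confirm that both claims are bound:
# kubectl get pvc demo-vol-claim-diff-zone-0 demo-vol-claim-diff-zone-1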
Within LINSTOR, you will see that each replica of the LINSTOR resources is in a different zone.
LINSTOR ==> resource list
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Layers ┊ Usage ┊ Conns ┊ State ┊ CreatedOn ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvc-751e78da-c5f0-466d-8682-4b1feccc448e ┊ kube-1 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ UpToDate ┊ 2025-04-15 23:27:07 ┊
┊ pvc-751e78da-c5f0-466d-8682-4b1feccc448e ┊ kube-3 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ UpToDate ┊ 2025-04-15 23:27:06 ┊
┊ pvc-751e78da-c5f0-466d-8682-4b1feccc448e ┊ kube-4 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ UpToDate ┊ 2025-04-15 23:27:07 ┊
┊ pvc-77911dcb-1c55-4650-b378-2232fa2dd466 ┊ kube-1 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ UpToDate ┊ 2025-04-15 23:26:52 ┊
┊ pvc-77911dcb-1c55-4650-b378-2232fa2dd466 ┊ kube-2 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ UpToDate ┊ 2025-04-15 23:26:52 ┊
┊ pvc-77911dcb-1c55-4650-b378-2232fa2dd466 ┊ kube-4 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ UpToDate ┊ 2025-04-15 23:26:52 ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
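You can cross-reference the placement with the node labels that you applied earlier. For example, the first resource above was placed on kube-1, kube-3, and kube-4, and listing those nodes with a zone label column shows that they sit in zones a, b, and c, respectively:
# kubectl get nodes kube-1 kube-3 kube-4 -L topology.kubernetes.io/zone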
You can also configure LINSTOR storageClasses to place replicas within the same zone by using the LINSTOR storageClass parameter replicasOnSame, and specifying the key and value pair.
cat << EOF > linstor-sc-on-same.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-csi-lvm-thin-on-same-a-r2
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: lvm-thin
  replicasOnSame: "topology.kubernetes.io/zone=a"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-csi-lvm-thin-on-same-b-r2
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: lvm-thin
  replicasOnSame: "topology.kubernetes.io/zone=b"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-csi-lvm-thin-on-same-c-r2
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: lvm-thin
  replicasOnSame: "topology.kubernetes.io/zone=c"
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
kubectl apply -f linstor-sc-on-same.yaml
Creating PVCs by using the storageClasses created earlier will result in replicas being placed only where the zone key has the specified value.
cat << EOF > pvcs-on-same.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim-zone-a
spec:
  storageClassName: linstor-csi-lvm-thin-on-same-a-r2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim-zone-b
spec:
  storageClassName: linstor-csi-lvm-thin-on-same-b-r2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim-zone-c
spec:
  storageClassName: linstor-csi-lvm-thin-on-same-c-r2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
EOF
kubectl apply -f pvcs-on-same.yaml
Within LINSTOR, you will see that the replicas of each LINSTOR resource are in the same zone.
LINSTOR ==> resource list
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ Conns ┊ State ┊ CreatedOn ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvc-0ef85bf7-2a9a-4e6f-9d7b-a473518c6cee ┊ kube-2 ┊ 7001 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-03-24 22:17:55 ┊
┊ pvc-0ef85bf7-2a9a-4e6f-9d7b-a473518c6cee ┊ kube-3 ┊ 7001 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-03-24 22:17:52 ┊
┊ pvc-0fc56b3d-b249-4e6f-a225-41224cb367f9 ┊ kube-0 ┊ 7000 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-03-24 22:17:52 ┊
┊ pvc-0fc56b3d-b249-4e6f-a225-41224cb367f9 ┊ kube-1 ┊ 7000 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-03-24 22:17:53 ┊
┊ pvc-35144a76-d15f-4709-9911-b6c951e87cc1 ┊ kube-4 ┊ 7002 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-03-24 22:17:54 ┊
┊ pvc-35144a76-d15f-4709-9911-b6c951e87cc1 ┊ kube-5 ┊ 7002 ┊ Unused ┊ Ok ┊ UpToDate ┊ 2023-03-24 22:17:56 ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
As mentioned earlier, the LINSTOR Operator only monitors for a handful of node labels. Originally, the operator would import all node labels, but this could lead to undesired behavior. However, it is still possible to import select node labels if needed. This is done by setting a property in a LinstorSatelliteConfiguration resource. For example, say you have a large cluster where you have not labeled the nodes specifically with region or zone, but instead labeled them with availzone.
You can instruct LINSTOR to import that node label by using the following YAML configuration:
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: avail-zones
spec:
  properties:
    - name: Aux/availzone
      valueFrom:
        nodeFieldRef: metadata.labels['kubernetes.io/availzone']
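Assuming you save the configuration above to a file, for example linstor-satellite-config-availzone.yaml (the file name, the availzone label key, and the zone-1 value below are illustrative only), applying it and labeling a pair of nodes might look like this:
# kubectl apply -f linstor-satellite-config-availzone.yaml
# kubectl label nodes kube-0 kube-1 kubernetes.io/availzone=zone-1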
For more information on importing non-standard labels into LINSTOR, refer to the documentation here: https://piraeus.io/docs/stable/reference/linstorsatelliteconfiguration/#specproperties.
Written by: MDK - 3/24/23
Updated by: DJV - 6/10/25