In my previous post, I talked about how I used Longhorn as a persistent storage solution for stateful workloads in Kubernetes. Back then, it worked fine—but this time, I wanted to dig deeper. I’ve run benchmarks to compare Longhorn, Local Persistent Volumes (PV), and OpenEBS.
Let’s dive into performance, complexity, and memory usage—especially in small clusters.
Longhorn: Reliable but Heavy
Longhorn works, but it’s not lightweight. In my 3-node cluster, it uses about 1.5 GB of memory across its components (a quick way to check this yourself is shown right after the list):
- Instance Manager
- CSI Plugin (Attacher, Provisioner)
- Longhorn Manager
- UI
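If you want to verify the footprint in your own cluster, here’s a rough check. It assumes metrics-server is installed and Longhorn runs in the default longhorn-system namespace:
# Per-pod memory for the Longhorn components
kubectl top pods -n longhorn-system

# Rough total across the namespace (MiB, approximate)
kubectl top pods -n longhorn-system --no-headers \
  | awk '{sum += $3} END {print sum " Mi (approx.)"}'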
That’s fine for production, but in my small, resource-limited environment, it feels like overkill—especially because I don’t use snapshots or replication. My stateful apps (like PostgreSQL) handle replication themselves. Why spend resources on redundant layers?
TL;DR: For apps that manage their own data integrity, Longhorn might be doing too much.
Benchmark Setup: Longhorn vs. Local PV
Let’s keep this simple and compare IOPS, bandwidth, and latency using FIO.
1. Longhorn Setup
StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"
  dataLocality: "best-effort"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-single-replica
  resources:
    requests:
      storage: 1Gi
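A quick sanity check after applying both manifests (the file names are just placeholders for wherever you saved them): because of WaitForFirstConsumer, the claim should stay Pending until the FIO pod below actually mounts it.
kubectl apply -f longhorn-sc.yaml -f longhorn-pvc.yaml

# Expect STATUS: Pending until a pod that uses this claim is scheduled
kubectl get pvc longhorn-pvc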
2. Local PV Setup
It’s more hands-on: you prepare the backing directory (or disk) and define the PersistentVolume yourself.
Create the backing directory on the target node:
sudo mkdir -p /mnt/disks/localdisk1
sudo chmod 777 /mnt/disks/localdisk1
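I’m just using a directory on the node’s root disk here, but the same setup works with a dedicated disk. A sketch, where /dev/sdb is only an example device name; double-check yours before formatting:
# Optional: back the Local PV with a dedicated disk instead of a plain directory
sudo mkfs.ext4 /dev/sdb                      # /dev/sdb is a placeholder device
sudo mkdir -p /mnt/disks/localdisk1
sudo mount /dev/sdb /mnt/disks/localdisk1
# Make the mount survive reboots
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb) /mnt/disks/localdisk1 ext4 defaults 0 2" | sudo tee -a /etc/fstab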
PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/localdisk1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s3
  persistentVolumeReclaimPolicy: Delete
  volumeMode: Filesystem
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
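Apply both and check the binding (again, file names are placeholders). With a statically created PV and no WaitForFirstConsumer class involved, the claim should bind right away.
kubectl apply -f local-pv.yaml -f local-pvc.yaml

# local-pvc should show STATUS: Bound with VOLUME: local-pv
kubectl get pv local-pv
kubectl get pvc local-pvc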
FIO Benchmark Pod
The same pod spec is reused for all three volumes; only the claimName (and, for the Local PV, the node affinity) changes.
apiVersion: v1
kind: Pod
metadata:
  name: fio-longhorn
spec:
  restartPolicy: Never
  containers:
    - name: fio
      image: ljishen/fio
      command: ["fio"]
      args:
        - --name=pg-test
        - --filename=/data/testfile
        - --size=200M
        - --bs=8k            # 8 KiB blocks, matching PostgreSQL's default page size
        - --rw=randrw        # mixed random reads and writes
        - --rwmixread=70     # 70% reads, 30% writes
        - --ioengine=libaio
        - --iodepth=16       # 16 outstanding async I/Os
        - --runtime=60
        - --numjobs=1
        - --time_based       # run for the full 60s regardless of file size
        - --group_reporting
      resources:
        requests:
          cpu: "1"
          memory: "256Mi"
        limits:
          cpu: "2"
          memory: "512Mi"
      volumeMounts:
        - mountPath: /data
          name: testvol
  volumes:
    - name: testvol
      persistentVolumeClaim:
        claimName: longhorn-pvc # or local-pvc / openebs-pvc
Note: When using local-pvc, be aware that it is tied to a specific node. To ensure your pod runs on the correct node, you’ll need to add node affinity to match the node where the local volume exists:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s3
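Roughly, each run then boils down to applying the pod, waiting for the 60-second job to finish, and reading FIO’s summary from the logs (the file name is just a placeholder for wherever you saved the manifest):
kubectl apply -f fio-longhorn.yaml

# Follow the run; FIO prints IOPS, bandwidth, and latency when the 60s workload ends
kubectl logs -f fio-longhorn

# Clean up before pointing claimName at the next PVC
kubectl delete pod fio-longhorn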
Benchmark Results: Longhorn vs Local PV
Metric | Longhorn PV | Local PV |
---|---|---|
Read IOPS | 811 | 7757 |
Read Bandwidth | 6.3 MiB/s | 60.6 MiB/s |
Read Latency (avg) | 14,189 µs | 1467 µs |
Write IOPS | 346 | 3328 |
Write Bandwidth | 2.7 MiB/s | 26.0 MiB/s |
Write Latency (avg) | 12,913 µs | 1377 µs |
CPU Usage (sys) | 4.71% | 26.25% |
Volume Backend | Longhorn (user-space) | Kernel block device |
Takeaway: Local PVs dominate in raw performance, but they’re painful to manage by hand: there’s no dynamic provisioning, and node affinity, cleanup, and failure recovery are all on you.
Enter OpenEBS: Lightweight + Automation
I didn’t want to keep scripting local PVs for every new workload. That’s where OpenEBS comes in—a container-native storage solution that supports hostPath-based volumes while keeping things simple.
I disabled replication (I don’t need it) and installed OpenEBS via Helm:
helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs --namespace openebs openebs/openebs \
--set engines.replicated.mayastor.enabled=false --create-namespace
Then created the required directory once on each node:
sudo mkdir -p /var/openebs/local
That’s it—OpenEBS handles provisioning, metrics, and lifecycle.
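Before creating a PVC, it’s worth a quick check that the chart came up and that the hostpath StorageClass exists (openebs and openebs-hostpath are the chart defaults at the time of writing; adjust if you changed them):
# All OpenEBS pods should reach Running
kubectl get pods -n openebs

# This is the StorageClass the PVC below references
kubectl get storageclass openebs-hostpath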
OpenEBS PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openebs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-hostpath
  resources:
    requests:
      storage: 1Gi
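To benchmark it, I reused the same FIO pod: point claimName at openebs-pvc and give the pod a new name. The node-affinity block isn’t needed here, since the hostpath StorageClass typically uses WaitForFirstConsumer and creates the volume on whichever node the pod lands on. File names below are placeholders:
kubectl apply -f openebs-pvc.yaml

# Same FIO pod spec as before, with claimName: openebs-pvc and a new pod name
kubectl apply -f fio-openebs.yaml
kubectl logs -f fio-openebs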
Final Benchmark: All Three Compared
Metric | Longhorn PV | Local PV | OpenEBS PV |
---|---|---|---|
Read IOPS | 811 | 7757 | 7401 |
Read Bandwidth | 6.3 MiB/s | 60.6 MiB/s | 57.8 MiB/s |
Read Latency (avg) | 14,189 µs | 1467 µs | 1539 µs |
Write IOPS | 346 | 3328 | 3177 |
Write Bandwidth | 2.7 MiB/s | 26.0 MiB/s | 24.8 MiB/s |
Write Latency (avg) | 12,913 µs | 1377 µs | 1440 µs |
CPU Usage (sys) | 4.71% | 26.25% | 26.05% |
Volume Backend | User-space (Longhorn) | Kernel block device | Kernel block device |
Conclusion
- Local PV is the fastest, but painful to manage.
- Longhorn is fully featured and resilient—but too heavy for small clusters and overkill for apps that handle their own replication.
- OpenEBS is the sweet spot: solid performance, simple setup, lightweight, and automated.
While we could use something like local-path (e.g., in K3s/Rancher), OpenEBS strikes the best balance between speed, simplicity, and flexibility. In the next section, I’ll dive deeper into its key features.