Kubernetes pods are designed to be ephemeral. They start, run, and die. That's great for stateless apps, but the moment you need to persist data (databases, user uploads, config files), you need something that outlives the pod. By default, anything written inside a container is gone the moment that container stops. Delete the pod, lose the data.
This is the companion article to Episode 2 of the Kubernetes on Raspberry Pi series. By the end you'll have NFS-backed persistent storage working in your cluster, with data that survives pod deletion and restarts.
All YAML configs referenced here are in the `kubernetes-series` GitHub repo under `video-02-persistent-storage/`.
## How Kubernetes Models Storage
Kubernetes splits storage into two concerns: the actual storage resource, and the request for that storage. A PersistentVolume (PV) is the actual storage, like a hard drive that exists independently of any pod. In this series, our PVs live on an NFS server running on a Raspberry Pi 5 NAS.
A PersistentVolumeClaim (PVC) is a request for storage. A pod says "I need 1GB of ReadWriteMany storage" and Kubernetes finds a PV that matches and binds them together. Once bound, the pod mounts the PVC like any other volume.
The separation exists for a reason: cluster admins create PVs, developers create PVCs. Developers don't need to know the NFS server IP or mount paths. They just request storage by size and access mode.
Kubernetes defines three access modes. ReadWriteOnce (RWO) allows a single node to mount the volume read-write. ReadWriteMany (RWX) allows multiple nodes to mount it read-write simultaneously; NFS supports this natively, which is a big part of why it's used here. ReadOnlyMany (ROX) allows multiple nodes to mount it read-only.
## NFS Server Setup
This guide assumes you have NFS already running on your NAS. Verify it's reachable from your workstation:
```bash
showmount -e 10.51.50.58
```
You should see your export path listed. If not, check that the NFS service is running and your cluster nodes are in the allowed subnet.
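If `showmount` looks right but you want more certainty, you can mount the export by hand from a cluster node before involving Kubernetes at all. This is a sketch using the server IP and export path from this setup; the temporary mount point is arbitrary:

```bash
# On any cluster node: mount the export, confirm it's writable, clean up.
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 10.51.50.58:/mnt/raid/share/nginx /mnt/nfs-test
touch /mnt/nfs-test/.write-test && rm /mnt/nfs-test/.write-test
sudo umount /mnt/nfs-test
```

If the `touch` fails with a permission error, the pod will hit the same problem later, so it's cheaper to debug here.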
Create the directory for nginx on the NAS and set ownership to match the NFS export settings:
```bash
ssh pi@10.51.50.58
mkdir -p /mnt/raid/share/nginx
chown 2000:2000 /mnt/raid/share/nginx
exit
```
The `chown 2000:2000` matches the `all_squash` + `anonuid=2000` options in `/etc/exports`: `all_squash` maps every NFS client user to the anonymous account, and `anonuid=2000` pins that account to UID 2000.
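For reference, the matching export line on the NAS would look something like the following. The options and the client subnet (`10.51.50.0/24`) are illustrative assumptions; your existing `/etc/exports` is the source of truth:

```
/mnt/raid/share/nginx 10.51.50.0/24(rw,sync,all_squash,anonuid=2000,anongid=2000)
```

After editing `/etc/exports`, re-export with `sudo exportfs -ra` for changes to take effect.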
## Create the PersistentVolume
The PV describes the actual storage resource: where it lives and what access modes it supports.
```yaml
# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.51.50.58
    path: /mnt/raid/share/nginx
  persistentVolumeReclaimPolicy: Retain
```
The `persistentVolumeReclaimPolicy: Retain` setting is important. When the PVC is deleted, Kubernetes keeps the data rather than wiping it. Apply it and confirm it shows as `Available`:
```bash
kubectl apply -f pv.yaml
kubectl get pv
# STATUS should show: Available
```
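`kubectl describe` gives more detail than `get`, and is a good sanity check that Kubernetes recorded the NFS source you intended (output abbreviated to the relevant fields):

```bash
kubectl describe pv nginx-pv
# Among the output, look for:
#   Reclaim Policy:  Retain
#   Source:
#     Type:    NFS
#     Server:  10.51.50.58
#     Path:    /mnt/raid/share/nginx
```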
## Create the PersistentVolumeClaim
The PVC expresses what storage a workload needs. Notice there's no NFS server IP here, just requirements. Kubernetes finds a matching PV and binds them:
```yaml
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```
```bash
kubectl apply -f pvc.yaml
kubectl get pvc
# STATUS should show: Bound
```
Once the PVC shows Bound, Kubernetes has matched it to the PV we created.
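You can see the same binding from the PV side. Once claimed, the PV is reserved for that specific PVC:

```bash
kubectl get pv
# STATUS flips from Available to Bound, and the CLAIM column shows
# which PVC took it (namespace/name, e.g. default/nginx-pvc).
```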
## Update the Nginx Deployment
Now we can mount the PVC into a pod. The Deployment needs two additions compared to a basic deployment: a `volumes` entry that references the PVC, and a `volumeMounts` entry that puts it somewhere inside the container.
```yaml
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: html-storage
          persistentVolumeClaim:
            claimName: nginx-pvc
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: html-storage
              mountPath: /usr/share/nginx/html
          ports:
            - containerPort: 80
```
```bash
kubectl apply -f nginx-deployment.yaml
kubectl get pods -w
```
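Once the pod is Running, you can confirm the NFS mount is actually in place inside the container. Reading `/proc/mounts` avoids depending on which tools the image ships; `kubectl exec deploy/nginx` picks a pod from the Deployment so you don't need the pod name:

```bash
kubectl exec deploy/nginx -- grep nfs /proc/mounts
# Expect a line like (nfs or nfs4 depending on your setup):
#   10.51.50.58:/mnt/raid/share/nginx /usr/share/nginx/html nfs4 ...
```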
## Prove It Works
Write custom content to the persistent volume and then delete the pod:
```bash
# Replace <nginx-pod> with the actual pod name from kubectl get pods
kubectl exec -it <nginx-pod> -- bash
echo "<h1>Persistent Storage!</h1>" > /usr/share/nginx/html/index.html
exit
kubectl delete pod <nginx-pod>
kubectl get pods -w
```
Once the new pod is running, the custom page should still be there. The data lives on NFS, so it makes no difference that the Deployment replaced the pod with a fresh one. You can also verify directly on the NAS:
```bash
ssh pi@10.51.50.58
cat /mnt/raid/share/nginx/index.html
```
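If you'd rather check through nginx itself, a temporary port-forward avoids needing a Service. This is a quick sketch (the local port 8080 is arbitrary):

```bash
kubectl port-forward deploy/nginx 8080:80 &
sleep 2
curl http://localhost:8080/
# Should return the custom page: <h1>Persistent Storage!</h1>
kill %1
```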
## What's Next
Manual PV creation works but doesn't scale. In Episode 3 we deploy Gitea, a self-hosted Git server, using the same PV/PVC pattern, and start building toward the dynamic provisioning we'll add later in the series.