Cloud File Services for Persistent Volumes
11 Feb 2022

Article summary

Pods in Kubernetes are ephemeral, and any locally stored data is lost when a Pod is recreated. To retain data, you will need to mount a persistent volume into the container at runtime. This is accomplished with a resource called a "Persistent Volume," which must be backed by network-accessible storage. The currently supported solution for persistent volumes within the Expedient Enterprise Cloud is an NFS share powered by Cloud File Services, which allows for the creation of both dedicated and shared persistent volumes. Instructions for configuring an appropriate share and the associated Kubernetes resources are below.

Create a view within Cloud File Services

Each volume you create requires a new view (share) within the file services platform. Create the view using the NFS protocol and assign it a quota that matches the size of the volume you intend to create. Complete instructions for creating a new file services view are available at How to Create NFS and SMB Views.
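Before creating the Kubernetes resources, it can be helpful to confirm the new view is reachable from the cluster network. A minimal check from any Linux host with NFS client tools installed might look like the following; the server IP and view name are placeholders matching the example manifests below.

# Mount the view over NFSv3 to a temporary mount point (placeholder IP and view name)
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs -o nfsvers=3,proto=tcp 10.184.103.2:/View-Name /mnt/nfs-test

# Confirm the share is writable, then unmount
sudo touch /mnt/nfs-test/write-test && sudo rm /mnt/nfs-test/write-test
sudo umount /mnt/nfs-test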

Configuring your cluster

To mount your new share to a Pod, you will need to create a persistent volume, an associated persistent volume claim, and then add the claim to the Pod spec of the deployment that will mount the volume. Example manifest files are below.

Complete documentation for persistent volumes is available at https://kubernetes.io/docs/concepts/storage/persistent-volumes.
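
Once the example manifests below have been saved to files (the file names here are only examples), they can be applied to the cluster with kubectl:

kubectl apply -f persistent-volume.yaml
kubectl apply -f persistent-volume-claim.yaml
kubectl apply -f deployment.yaml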

Persistent Volume

This resource defines the mount information for your NFS share. To use it within your cluster, update the metadata.name and metadata.labels fields to something descriptive, set spec.capacity.storage to the amount of storage you intend to assign to the Pod, choose the access mode (ReadWriteMany or ReadWriteOnce) appropriate to your environment, and configure the nfs section with the IP address of your file services instance and the path set to the name you provided when creating the view.

Persistent Volume
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
  labels:
    app: demo
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - noatime
    - nfsvers=3
    - proto=tcp
    - rsize=1048576
    - wsize=1048576
    - timeo=10000
    - hard
    - intr
    - nolock
    - nordirplus
  nfs:
    path: /View-Name
    server: 10.184.103.2
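
After applying the persistent volume manifest, the volume should show a status of Available (or Bound once a claim attaches to it). Using the example name above, one way to confirm is:

kubectl get pv efs-pv
kubectl describe pv efs-pv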

Persistent Volume Claim

Once your volume has been created, you will need to create a persistent volume claim. This additional resource lets you change the backing storage for a Pod without modifying its underlying container specification. It also allows administrators to create several volumes while developers request a volume with specific capabilities via a claim, without needing to know the details of the underlying volume configurations. Set up the volume claim to request the same amount of storage and the same access modes you specified on your volume. If you want to tie this claim to a specific volume, use the "selector" to match the claim with any volumes bearing the same labels. If you don't wish to tie these together, you can remove the "selector" section of the config file.

Persistent Volume Claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      app: demo
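
The claim should bind to the matching volume shortly after creation. Before referencing it from a deployment, confirm it reports a STATUS of Bound, for example:

kubectl get pvc efs-pvc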

Example Deployment Spec

Below is an example deployment containing a Pod that will mount the persistent volume using the claim created above. The significant part here is the "volumes" portion of the file, specifically the "claimName", which should be set to match the name of the persistent volume claim created above.

Example Deployment Spec
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx  # example image; replace with your application's image
          volumeMounts:
            - name: demo-mount
              mountPath: /var/demo
      volumes:
        - name: demo-mount
          persistentVolumeClaim:
            claimName: efs-pvc
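
After the deployment is applied, the mount can be verified from inside the running Pod. A quick check, assuming the example names above (the actual Pod name will differ in your cluster), might be:

kubectl rollout status deployment/efs-demo
kubectl get pods -l app=demo
kubectl exec -it <pod-name> -- df -h /var/demo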

