Set up volumes using Azure Storage without account access keys in Azure Kubernetes Service

An account access key is a static token, and static tokens are always a security concern: if a key falls into the wrong hands, unauthorized access can occur. There is even an Azure Policy to help users prevent this type of issue.
This article shows how to mount different types of Azure storage without using account access keys.

Options#

Choose among the following options:

  • Use NFS to access Azure fileshare
  • Use BlobFuse/NFS to access Azure block blob container
  • Use Azure Disk

Before you begin#

WARNING

Manual deployment of CSI drivers from GitHub is not supported by Microsoft, and you will not receive support in that case.
To get support from Microsoft, make sure you use the managed drivers instead of deploying them yourself with kubectl or a Helm chart.
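You can confirm up front that the managed drivers are enabled. A quick sketch, assuming the `${aks}` and `${rG}` variables used throughout this article hold your cluster and resource group names:

```shell
# Check which managed CSI drivers are enabled on the cluster
az aks show -n ${aks} -g ${rG} -o table \
--query "storageProfile.{file:fileCsiDriver.enabled,blob:blobCsiDriver.enabled,disk:diskCsiDriver.enabled}"
```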

Use NFS to access Azure fileshare#

In this section, we will use NFS to access Azure fileshare.

  1. Prepare Azure storage account

To use the NFS protocol with an Azure fileshare, you need to create a Premium (FileStorage) storage account:

sa=$(tr -dc a-z < /dev/urandom | head -c 16)

# Secure transfer must be disabled to use NFS in Azure fileshare
az storage account create -n ${sa} -g ${rG} \
--kind FileStorage -o none \
--sku Premium_LRS --default-action Deny \
--allow-shared-key-access false  \
--https-only false

saId=$(az storage account show \
-n ${sa} -g ${rG} --query id -o tsv)
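Before continuing, you can confirm that shared-key access is actually off:

```shell
# Should print "false" - account access keys are rejected
az storage account show -n ${sa} -g ${rG} \
--query allowSharedKeyAccess -o tsv
```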

To create the Azure fileshare with the NFS protocol:

fileshare=aks-fileshare

az rest --method PUT -o none \
--url https://management.azure.com${saId}/fileServices/default/shares/${fileshare}?api-version=2023-05-01 \
--body "{'properties':{'enabledProtocols':'NFS'}}"
  2. Grant access from AKS to the storage account

You can use two different methods:

Method 1: Disable public access to the storage account and use a private link
Connections arriving over a private endpoint are exempt from the storage account's network rules, so we can create a private endpoint and then disable public access.
To create the private endpoint, see also: Creating a private endpoint.
To disable public access:

az storage account update -n ${sa} -g ${rG} \
--public-network-access Disabled -o none
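If you prefer the CLI over the portal for Method 1, a minimal sketch of the private endpoint creation; the `${vnet}` and `${subnet}` variables are placeholders for your virtual network and a subnet reachable from the AKS nodes, and the private DNS zone (privatelink.file.core.windows.net) still has to be configured separately:

```shell
# Create a private endpoint targeting the "file" sub-resource
# of the storage account (vnet/subnet names are placeholders)
az network private-endpoint create -g ${rG} -o none \
-n pe-${sa} --vnet-name ${vnet} --subnet ${subnet} \
--private-connection-resource-id ${saId} \
--group-id file --connection-name pe-conn-${sa}
```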

Method 2: Add the AKS subnet to the list of allowed virtual networks in the storage account
You can allow only specific subnets to access the storage account. See also: Grant access from a virtual network.

NOTE

If AKS uses the dynamic IP allocation feature, add the node subnets to the list of allowed subnets.
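A sketch of Method 2 in CLI form; `${vnet}` and `${nodeSubnet}` are placeholders for your cluster's virtual network and node subnet, and the subnet needs the Microsoft.Storage service endpoint before the rule takes effect:

```shell
# Enable the service endpoint on the node subnet (placeholder names)
az network vnet subnet update -g ${rG} -o none \
--vnet-name ${vnet} -n ${nodeSubnet} \
--service-endpoints Microsoft.Storage

subnetId=$(az network vnet subnet show -g ${rG} \
--vnet-name ${vnet} -n ${nodeSubnet} --query id -o tsv)

# Allow that subnet through the storage account firewall
az storage account network-rule add -g ${rG} -o none \
--account-name ${sa} --subnet ${subnetId}
```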

  3. Preparing Kubernetes storage resources
# Randomize volumeHandle ID
volUniqId=${sa}#${fileshare}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-premium-nfs
provisioner: file.csi.azure.com
parameters:
  protocol: nfs
  skuName: Premium_LRS
reclaimPolicy: Delete
mountOptions:
  - nconnect=4  # remove this option on Azure Linux nodes, which do not support nconnect
  - noresvport
  - actimeo=30
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
  name: pv-${sa}-${fileshare}-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-premium-nfs
  mountOptions:
    - nconnect=4  # remove this option on Azure Linux nodes, which do not support nconnect
    - noresvport
    - actimeo=30
  csi:
    driver: file.csi.azure.com
    volumeHandle: ${volUniqId}
    volumeAttributes:
      resourceGroup: ${rG}
      storageAccount: ${sa}
      shareName: ${fileshare}
      protocol: nfs
---
apiVersion: v1
kind: Namespace
metadata:
  name: fileshare
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-${sa}-${fileshare}-nfs
  namespace: fileshare
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-premium-nfs
  volumeName: pv-${sa}-${fileshare}-nfs
  resources:
    requests:
      storage: 5Gi
EOF
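Before mounting, it's worth checking that the claim bound to the static PV:

```shell
# Both should report STATUS "Bound"
kubectl get pv pv-${sa}-${fileshare}-nfs
kubectl get pvc pvc-${sa}-${fileshare}-nfs -n fileshare
```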
  4. Mount the fileshare into a Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-mount-1
  namespace: fileshare
spec:
  containers:
  - name: demo
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
    volumeMounts:
      - mountPath: /mnt/azure
        name: volume
        readOnly: false
  volumes:
   - name: volume
     persistentVolumeClaim:
       claimName: pvc-${sa}-${fileshare}-nfs
EOF
  5. Write a message to a file and check that it works after the Pod starts
kubectl exec nfs-mount-1 -n fileshare -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
kubectl logs nfs-mount-1 -n fileshare

Output:

cat: can't open '/mnt/azure/text': No such file or directory
cat: can't open '/mnt/azure/text': No such file or directory
cat: can't open '/mnt/azure/text': No such file or directory
hello!
  6. Check whether other Pods can read the same file
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-mount-2
  namespace: fileshare
spec:
  containers:
  - name: demo
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
    volumeMounts:
      - mountPath: /mnt/azure
        name: volume
        readOnly: false
  volumes:
   - name: volume
     persistentVolumeClaim:
       claimName: pvc-${sa}-${fileshare}-nfs
EOF
sleep 20; kubectl logs nfs-mount-2 -n fileshare

Output:

pod/nfs-mount-2 created
hello!
hello!
hello!
hello!

Since the other Pod can read the message “hello!”, the write operation succeeded and the volume is indeed shared across Pods.

Use Azure block blob container#

In this section, we will use either BlobFuse or NFS to access Azure block blob container.
Before proceeding, ensure that the blob CSI driver is installed, as it is not installed by default:

az aks show -n ${aks} -g ${rG} -o tsv \
--query storageProfile.blobCsiDriver.enabled

If not, enable the driver with the following command:

az aks update -n ${aks} -g ${rG} \
--enable-blob-driver -o none

Prepare Azure storage account#

If you are using the BlobFuse protocol, you can create a Standard storage account:

sa=$(tr -dc a-z < /dev/urandom | head -c 16)

az storage account create -n ${sa} -g ${rG} \
--kind StorageV2 -o none \
--sku Standard_LRS \
--allow-shared-key-access false

saId=$(az storage account show \
-n ${sa} -g ${rG} --query id -o tsv)

If you are using the NFS protocol, you need to create an Azure Data Lake Storage (ADLS) account with NFS enabled instead of a normal storage account:

sa=$(tr -dc a-z < /dev/urandom | head -c 16)

az storage account create -n ${sa} -g ${rG} \
--kind StorageV2 --sku Standard_LRS \
--enable-hierarchical-namespace -o none \
--allow-shared-key-access false \
--enable-nfs-v3 --default-action Deny

saId=$(az storage account show \
-n ${sa} -g ${rG} --query id -o tsv)

To create Azure block blob container:

container=aks-container

az rest --method PUT -o none \
--url https://management.azure.com${saId}/blobServices/default/containers/${container}?api-version=2023-05-01 \
--body "{}"
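Since account keys are disabled, the data-plane `az storage` commands won't work with key auth; you can still list the containers through the management plane to confirm the creation:

```shell
# List container names via ARM (no account key involved)
az rest --method GET \
--url https://management.azure.com${saId}/blobServices/default/containers?api-version=2023-05-01 \
--query "value[].name" -o tsv
```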

Use BlobFuse to access Azure block blob container#

In this section, we will use BlobFuse to access Azure block blob container.

  1. Get object/client ID of AKS kubelet identity
aksIdentityType=$(az aks show -n ${aks} -g ${rG} \
--query identity.type -o tsv)

if [[ "$aksIdentityType" == "SystemAssigned" ]] || \
[[ "$aksIdentityType" == "UserAssigned" ]]
then
echo 'Your AKS is using "Managed Identity" as security principal.'
aksKubeletClientId=$(az aks show -n ${aks} -g ${rG} -o tsv \
--query identityProfile.kubeletidentity.clientId)
aksKubeletObjectId=$(az aks show -n ${aks} -g ${rG} -o tsv \
--query identityProfile.kubeletidentity.objectId)
fi
if [[ "$aksIdentityType" == "" ]]
then
echo 'Your AKS is using "Service Principal" as security principal.'
aksKubeletClientId=$(az aks show -n ${aks} -g ${rG} \
--query servicePrincipalProfile.clientId -o tsv)
aksKubeletObjectId=$(az ad sp show \
--id ${aksKubeletClientId} --query id -o tsv)
fi

echo ${aksKubeletClientId}
echo ${aksKubeletObjectId}
NOTE

If you are using a “Service Principal”, make sure you have the application password (client secret); otherwise you cannot proceed.

WARNING

If AKS uses a “Service Principal” as its security principal, the Service Principal itself is already a static, cluster-wide token, even though it is not an account access key.
If your AKS uses a “Service Principal”, consider converting the cluster to use a “Managed Identity” instead.
The “Service Principal” scenario is still demonstrated below. It may help you bypass the Azure policy, but keep in mind that this is not recommended.

  2. Grant permission to the AKS kubelet identity for accessing the storage account
az role assignment create --role "Storage Blob Data Contributor" \
--assignee-object-id ${aksKubeletObjectId} -o none \
--scope ${saId}/blobServices/default/containers/${container} \
--assignee-principal-type ServicePrincipal
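You can verify that the assignment landed on the expected scope:

```shell
# Confirm the kubelet identity holds the data-plane role on the container
az role assignment list -o table \
--scope ${saId}/blobServices/default/containers/${container} \
--query "[].{principal:principalName,role:roleDefinitionName}"
```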
  3. Disable public access to the storage account and create a private link (optional)

If you want additional protection at the network level, you can have AKS connect through an Azure private endpoint.
To create private link, see also: Creating a private endpoint.

To disable public access:

az storage account update -n ${sa} -g ${rG} \
--public-network-access Disabled -o none
  4. Preparing Kubernetes storage resources
# Randomize volumeHandle ID
volUniqId=${sa}#${container}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
TIP

The official documentation still contains a statement telling you not to use the characters # and / in volumeHandle.
That statement is obsolete. Check out: Confusing error log regarding the character # in volumeHandle - kubernetes-sigs/blob-csi-driver.

If the AKS is using “Managed Identity” as security principal, use the following manifest:

cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azureblob-fuse
provisioner: blob.csi.azure.com
parameters:
  skuName: Standard_LRS
reclaimPolicy: Delete
mountOptions:
  - '-o allow_other'
  - '--file-cache-timeout-in-seconds=120'
  - '--use-attr-cache=true'
  - '--cancel-list-on-mount-seconds=10'
  - '-o attr_timeout=120'
  - '-o entry_timeout=120'
  - '-o negative_timeout=120'
  - '--log-level=LOG_WARNING'
  - '--cache-size-mb=1000'
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: blob.csi.azure.com
  name: pv-${sa}-${container}-blobfuse
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-fuse
  mountOptions:
    - -o allow_other
    - --file-cache-timeout-in-seconds=120
  csi:
    driver: blob.csi.azure.com
    volumeHandle: ${volUniqId}
    volumeAttributes:
      resourceGroup: ${rG}
      storageAccount: ${sa}
      containerName: ${container}
      AzureStorageAuthType: msi
      AzureStorageIdentityClientID: ${aksKubeletClientId}
---
apiVersion: v1
kind: Namespace
metadata:
  name: blobfuse
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-${sa}-${container}-blobfuse
  namespace: blobfuse
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-fuse
  volumeName: pv-${sa}-${container}-blobfuse
  resources:
    requests:
      storage: 5Gi
EOF

If the AKS is using “Service Principal” as security principal, use the following manifest:
a. Store the Service Principal credentials in a Kubernetes Secret

tenantId=$(az ad sp show --id ${aksKubeletClientId} \
--query appOwnerOrganizationId -o tsv)
k8sSecret=azure-storage-account-${sa}-secret
spSecret=<SP_secret_here>

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: ${k8sSecret}
type: Opaque
data:
  azurestoragespnclientid: $(printf '%s' ${aksKubeletClientId} | base64 -w0)
  azurestoragespntenantid: $(printf '%s' ${tenantId} | base64 -w0)
  azurestoragespnclientsecret: $(printf '%s' ${spSecret} | base64 -w0)
EOF

unset spSecret
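One detail that is easy to get wrong here: `echo` appends a trailing newline to whatever it prints, and that newline survives base64 encoding and ends up inside the decoded secret values, which can break the Service Principal authentication. `printf '%s'` avoids this; the difference can be checked locally without any cluster:

```shell
# "echo" adds a newline before encoding; printf '%s' does not
echo foo | base64            # Zm9vCg==  ("foo\n")
printf '%s' foo | base64     # Zm9v      ("foo")
```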

b. Deploying Kubernetes storage resources

cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azureblob-fuse
provisioner: blob.csi.azure.com
parameters:
  skuName: Standard_LRS
reclaimPolicy: Delete
mountOptions:
  - '-o allow_other'
  - '--file-cache-timeout-in-seconds=120'
  - '--use-attr-cache=true'
  - '--cancel-list-on-mount-seconds=10'
  - '-o attr_timeout=120'
  - '-o entry_timeout=120'
  - '-o negative_timeout=120'
  - '--log-level=LOG_WARNING'
  - '--cache-size-mb=1000'
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: blob.csi.azure.com
  name: pv-${sa}-${container}-blobfuse
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-fuse
  mountOptions:
    - -o allow_other
    - --file-cache-timeout-in-seconds=120
  csi:
    driver: blob.csi.azure.com
    volumeHandle: ${volUniqId}
    volumeAttributes:
      resourceGroup: ${rG}
      storageAccount: ${sa}
      containerName: ${container}
      AzureStorageAuthType: spn
    nodeStageSecretRef:
      name: ${k8sSecret}
      namespace: default
---
apiVersion: v1
kind: Namespace
metadata:
  name: blobfuse
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-${sa}-${container}-blobfuse
  namespace: blobfuse
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-fuse
  volumeName: pv-${sa}-${container}-blobfuse
  resources:
    requests:
      storage: 5Gi
EOF
  5. Mount the block storage into a Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: blobfuse-mount-1
  namespace: blobfuse
spec:
  containers:
  - name: demo
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
    volumeMounts:
      - mountPath: /mnt/azure
        name: volume
        readOnly: false
  volumes:
   - name: volume
     persistentVolumeClaim:
       claimName: pvc-${sa}-${container}-blobfuse 
EOF
  6. Write a message to a file and check that it works after the Pod starts
kubectl exec blobfuse-mount-1 -n blobfuse -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
kubectl logs blobfuse-mount-1 -n blobfuse

Output:

cat: can't open '/mnt/azure/text': No such file or directory
cat: can't open '/mnt/azure/text': No such file or directory
hello!
  7. Check whether other Pods can read the same file
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: blobfuse-mount-2
  namespace: blobfuse
spec:
  containers:
  - name: demo
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
    volumeMounts:
      - mountPath: /mnt/azure
        name: volume
        readOnly: false
  volumes:
   - name: volume
     persistentVolumeClaim:
       claimName: pvc-${sa}-${container}-blobfuse
EOF
sleep 20; kubectl logs blobfuse-mount-2 -n blobfuse

Output:

pod/blobfuse-mount-2 created
hello!
hello!
hello!
hello!

Since the other Pod can read the message “hello!”, the write operation succeeded.

Use NFS to access Azure block blob container#

In this section, we will use NFS to access Azure block blob container.

CAUTION

It is not possible to mount the same path using both the Blobfuse2 and NFS protocols concurrently, as this may cause unpredictable issues.
Before proceeding with this section, remember to unmount the related volumes if you followed the steps in the section “Use BlobFuse to access Azure block blob container”.

See also: Un-Supported Scenarios in azure-storage-fuse.
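If you followed the BlobFuse section above, the demo Pods from it can be removed before continuing, which releases the BlobFuse mounts:

```shell
# Remove the BlobFuse demo Pods so the container is no longer
# mounted via FUSE before switching to NFS
kubectl delete pod blobfuse-mount-1 blobfuse-mount-2 \
-n blobfuse --ignore-not-found
```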

  1. Grant access from AKS to storage account

You can use two different methods:

Method 1: Disable public access to the storage account and use a private link
Connections arriving over a private endpoint are exempt from the storage account's network rules. To create the private endpoint, see also: Creating a private endpoint.
To disable public access:

az storage account update -n ${sa} -g ${rG} \
--public-network-access Disabled -o none

Method 2: Add the AKS subnet to the list of allowed virtual networks in the storage account
You can allow only specific subnets to access the storage account. See also: Grant access from a virtual network.

NOTE

If AKS uses the dynamic IP allocation feature, add the node subnets to the list of allowed subnets.

  2. Preparing Kubernetes storage resources
# Randomize volumeHandle ID
volUniqId=${sa}#${container}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azureblob-nfs
provisioner: blob.csi.azure.com
parameters:
  protocol: nfs
  skuName: Standard_LRS
reclaimPolicy: Delete
mountOptions:
  - nconnect=4  # remove this option on Azure Linux nodes, which do not support nconnect
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: blob.csi.azure.com
  name: pv-${sa}-${container}-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-nfs
  mountOptions:
    - nconnect=4  # remove this option on Azure Linux nodes, which do not support nconnect
  csi:
    driver: blob.csi.azure.com
    volumeHandle: ${volUniqId}
    volumeAttributes:
      resourceGroup: ${rG}
      storageAccount: ${sa}
      containerName: ${container}
      protocol: nfs
---
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-${sa}-${container}-nfs
  namespace: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-nfs
  volumeName: pv-${sa}-${container}-nfs
  resources:
    requests:
      storage: 5Gi
EOF
  3. Mount the block storage into a Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-mount-1
  namespace: nfs
spec:
  containers:
  - name: demo
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
    volumeMounts:
      - mountPath: /mnt/azure
        name: volume
        readOnly: false
  volumes:
   - name: volume
     persistentVolumeClaim:
       claimName: pvc-${sa}-${container}-nfs
EOF
  4. Write a message to a file and check that it works after the Pod starts
kubectl exec nfs-mount-1 -n nfs -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
kubectl logs nfs-mount-1 -n nfs

Output:

cat: can't open '/mnt/azure/text': No such file or directory
hello!
  5. Check whether other Pods can read the same file
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-mount-2
  namespace: nfs
spec:
  containers:
  - name: demo
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
    volumeMounts:
      - mountPath: /mnt/azure
        name: volume
        readOnly: false
  volumes:
   - name: volume
     persistentVolumeClaim:
       claimName: pvc-${sa}-${container}-nfs
EOF
sleep 20; kubectl logs nfs-mount-2 -n nfs

Output:

pod/nfs-mount-2 created
hello!
hello!
hello!
hello!

Since the other Pod can read the message “hello!”, the write operation succeeded.

Use Azure Disk (ReadWriteOnce)#

In this section, we will use Azure Disk as the remote storage.
Normally, when mounting an Azure disk to an AKS node, only the “ReadWriteOnce” mode is supported, because a disk can be attached to only one node at a time. This article demonstrates that mode.

NOTE

Technically, you can use “ReadWriteMany” when the shared disk feature is enabled. However, I do not recommend it: there is no official static-provisioning demo for mounting an existing shared disk, and I have had no successful attempt in my environment.
For the official dynamic shared-disk provisioning example, see also: kubernetes-sigs/azuredisk-csi-driver - Shared disk.

Use Azure Disk with CSI driver#

In this section, we will use Azure Disk CSI driver to mount Azure Disk.

  1. Preparing Azure Disk
    Since we are using “ReadWriteOnce”, we will create a 10 GB Standard HDD disk for the demonstration.
disk=aks-disk-$(tr -dc a-z0-9 < /dev/urandom | head -c 4)

az disk create -n ${disk} -g ${rG} \
--size-gb 10 --sku Standard_LRS -o none

diskId=$(az disk show \
-n ${disk} -g ${rG} --query id -o tsv)
  2. Grant permission to mount the disk to AKS nodes

a. Get AKS security principal object ID

aksIdentityType=$(az aks show -n ${aks} -g ${rG} \
--query identity.type -o tsv)

if [[ "$aksIdentityType" == "SystemAssigned" ]]
then
aksIdentityId=$(az aks show -n ${aks} -g ${rG} \
--query identity.principalId -o tsv)
fi
if [[ "$aksIdentityType" == "UserAssigned" ]]
then
aksIdentityId=$(az aks show -n ${aks} -g ${rG} \
--query identity.userAssignedIdentities.*.principalId -o tsv)
fi
if [[ "$aksIdentityType" == "" ]]
then
aksSPclientId=$(az aks show -n ${aks} -g ${rG} \
--query servicePrincipalProfile.clientId -o tsv)
aksIdentityId=$(az ad sp show \
--id ${aksSPclientId} --query id -o tsv)
fi

echo ${aksIdentityId}

b. Grant permission

az role assignment create --role "Virtual Machine Contributor" \
--assignee-object-id ${aksIdentityId} -o none \
--scope ${diskId} --assignee-principal-type ServicePrincipal
NOTE

If you later see an error message like the one below:

"The client '00000000-0000-0000-0000-000000000000' with object id '00000000-0000-0000-0000-000000000000' 
does not have authorization to perform action 'Microsoft.Compute/disks/read' over scope 
'/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rG/providers/Microsoft.Compute/disks/aks-disk-xxxx' 
or the scope is invalid. If access was recently granted, please refresh your credentials."  

Wait about 5 minutes and try again; it can take a few minutes for the role assignment to propagate.
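Rather than waiting blindly, you can poll until the assignment becomes visible; a small sketch:

```shell
# Poll until the role assignment on the disk shows up
# (propagation can take a few minutes)
until az role assignment list --scope ${diskId} \
--query "[?principalId=='${aksIdentityId}']" -o tsv | grep -q .
do
echo "waiting for role assignment to propagate..."; sleep 15
done
```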

  3. Preparing Kubernetes storage resources
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: disk
provisioner: disk.csi.azure.com
parameters:
  skuName: Standard_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: disk.csi.azure.com
  name: pv-${disk}
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: disk
  csi:
    driver: disk.csi.azure.com
    volumeHandle: ${diskId}
    volumeAttributes:
      fsType: ext4
---
apiVersion: v1
kind: Namespace
metadata:
  name: disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-${disk}
  namespace: disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: disk
  volumeName: pv-${disk}
  resources:
    requests:
      storage: 5Gi
EOF
  4. Mount the disk into a Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: disk-mount-1
  namespace: disk
spec:
  containers:
  - name: demo
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
    volumeMounts:
      - mountPath: /mnt/azure
        name: volume
        readOnly: false
  volumes:
   - name: volume
     persistentVolumeClaim:
       claimName: pvc-${disk}
EOF
  5. Write a message to a file and check that it works after the Pod starts
kubectl exec disk-mount-1 -n disk -- sh -c 'touch /mnt/azure/text; echo hello\! > /mnt/azure/text'
kubectl logs disk-mount-1 -n disk

Output:

cat: can't open '/mnt/azure/text': No such file or directory
cat: can't open '/mnt/azure/text': No such file or directory
cat: can't open '/mnt/azure/text': No such file or directory
hello!
  6. Check whether other Pods can read the same file
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: disk-mount-2
  namespace: disk
spec:
  containers:
  - name: demo
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
    volumeMounts:
      - mountPath: /mnt/azure
        name: volume
        readOnly: false
  volumes:
   - name: volume
     persistentVolumeClaim:
       claimName: pvc-${disk}
  nodeSelector:
    kubernetes.io/hostname: $(kubectl get po disk-mount-1 -n disk -o jsonpath='{.spec.nodeName}')
EOF
sleep 20; kubectl logs disk-mount-2 -n disk
NOTE

Make sure that the second Pod is scheduled to the same node as the first one; otherwise, it will fail with a “Multi-Attach error”.

Output:

pod/disk-mount-2 created
hello!
hello!
hello!
hello!

Since the other Pod can read the message “hello!”, the write operation succeeded.

Use Azure Disk with Azure Container Storage (non-CSI)#

Since the official documentation is already clear and very detailed, I won't go into detail here; there is no need to repeat that work. Note that this is a non-CSI-driver solution.
For a full introduction to this feature, see also: What is Azure Container Storage.

Epilogue#

The reason I wrote this is that the official documentation is very confusing and does not explain these scenarios in detail. I had to spend hours reading the documents and thinking about feasibility.
As I was writing this, I had to check multiple pages, including the source code, to confirm what exactly I was doing, as some information was undocumented or poorly documented. It's really sad to use Azure products this way.

Set up volumes using Azure Storage without account access keys in Azure Kubernetes Service
https://blog.joeyc.dev/posts/aks-non-token-storage/
Author
Joey Chen
Published at
2025-02-27