# Lab 2: File storage with Kubernetes
This lab demonstrates the use of cloud-based file storage with Kubernetes. It uses IBM Cloud File Storage: persistent, fast, and flexible network-attached, NFS-based file storage with capacities ranging from 25 GB to 12,000 GB and up to 48,000 IOPS. IBM Cloud File Storage makes data available across all worker nodes within a single availability zone.
The following topics are covered in this exercise:
- Claim a classic file storage volume.
- Use the `Guestbook` application to view the images.
- Claim back the storage resources and clean up.
## Prereqs
Follow the [prereqs](../Lab0/README.md) if you haven't already.

IKS comes with storage class definitions for file storage. This lab uses the storage class `ibmc-file-silver`. Note that the default class, `ibmc-file-gold`, is allocated if a storage class is not explicitly defined.
```
kubectl describe storageclass ibmc-file-silver
```
Expected output:
```
$ kubectl describe storageclass ibmc-file-silver
...
```

File silver has 4 IOPS per GB and a maximum capacity of 12 TB.
## Claim a file storage volume
IBM Cloud File Storage provides fast access to your data for a cluster running in a single availability zone. For higher availability, use a storage option that is designed for [geographically distributed data](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#persistent_storage_overview).
To create a `PersistentVolumeClaim`, we must first get the availability zone of our cluster. Run the following commands to find out which region and availability zone the worker nodes of your cluster are running in:
```
ibmcloud ks cluster get --cluster [CLUSTER NAME]
ibmcloud ks workers -c [CLUSTER NAME]
```
Example output:
```
$ ibmcloud ks workers -c zaccone-guestbook2
OK
ID                                                       Public IP        Private IP     Flavor               State    Status   Zone    Version
kube-bun9o7vw0klq4okmgou0-zacconegues-default-0000010a   169.55.112.195   10.148.14.60   b3c.4x16.encrypted   normal   Ready    wdc04   1.18.10_1532
```
Using the zone from the output above, review the yaml for the file storage `PersistentVolumeClaim`.
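The lab provides the actual manifest to use; purely as a minimal sketch (the claim name, size, and label values below are illustrative assumptions, not the lab's file), a classic file storage claim pins the storage class and, in multizone setups, carries the region and zone you just looked up as labels:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: guestbook-filestorage    # hypothetical name, for illustration only
  labels:
    billingType: hourly          # classic file storage billing option
    region: us-east              # example values -- use the region and zone
    zone: wdc04                  #   reported for your own worker nodes
spec:
  accessModes:
    - ReadWriteMany              # shared read-write across pods
  resources:
    requests:
      storage: 24Gi
  storageClassName: ibmc-file-silver
```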
Run the commands `ls -al; ls -al images; df -ah` to view the volume and files. Review the mount for the new volume. Note that the images folder is empty.
Note that the volume and the data are available on all the pods running the Guestbook application.
IBM Cloud File Storage is an NFS-based file storage offering that is available across all worker nodes within a single availability zone. If you are running a cluster with multiple nodes (within a single AZ), you can run the following commands to prove that your data is available across different nodes:
```
kubectl get pods -o wide
kubectl get nodes
```
Expected output:
```
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
...
```

| Storage Type | Persisted at which level | Example Uses |
| - | - | - |
| Container local storage | Container | Ephemeral state |
| Secondary Storage ([EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)) | Pod | Checkpoint a long computation process |
| Primary Storage ([HostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) | Node | Running cAdvisor in a container |
| IBM Cloud File Storage (NFS) | Availability Zone | Applications running in a single availability zone |
<br>
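As a quick illustration of the pod-scoped row in the table above, here is a minimal `emptyDir` sketch; the pod name and image are illustrative, not part of the lab:

```
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo             # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "date > /scratch/started && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}                 # lives as long as the pod, surviving container restarts
```

Data in `/scratch` survives a container restart but is lost when the pod is deleted, which is exactly the difference between the Pod and Availability Zone rows.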
Data is available to all nodes within the availability zone where the file storage exists, but the `accessModes` parameter on the `PersistentVolumeClaim` determines whether multiple pods are able to mount the volume specified by a PVC. The possible values for this parameter are listed below (see the sketch after this list for `ReadWriteMany` in action):
- **ReadWriteMany**: The PVC can be mounted by multiple pods. All pods can read from and write to the volume.
- **ReadOnlyMany**: The PVC can be mounted by multiple pods. All pods have read-only access.
- **ReadWriteOnce**: The PVC can be mounted as read-write by a single node only. Pods on that node can read from and write to the volume.
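To make `ReadWriteMany` concrete, here is a minimal sketch of a two-replica Deployment whose pods all mount the same claim; the Deployment name and the claim name are hypothetical, not part of the lab:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rwx-demo                 # hypothetical name, for illustration only
spec:
  replicas: 2                    # both pods mount the same volume read-write
  selector:
    matchLabels:
      app: rwx-demo
  template:
    metadata:
      labels:
        app: rwx-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "hostname >> /data/visitors && sleep 3600"]
        volumeMounts:
        - name: shared
          mountPath: /data       # every replica sees the same files
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: guestbook-filestorage   # hypothetical claim name
```

With `ReadWriteOnce` instead, the second replica could only start if it were scheduled onto the same node as the first.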
## Optional exercises
Another way to see that the data is persisted at the availability zone level is to:
- Back up data (a sketch of one approach follows this list).
- Delete pods to confirm that doing so does not impact the data used by the application.
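For the backup bullet, one possible approach is a one-off Job that archives the volume's contents. This is a sketch only: the Job and claim names are hypothetical, and a real backup would mount a second PVC or copy the archive off-cluster instead of the `emptyDir` used here:

```
apiVersion: batch/v1
kind: Job
metadata:
  name: volume-backup            # hypothetical name, for illustration only
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: backup
        image: busybox
        command: ["sh", "-c", "tar czf /backup/images.tgz -C /data ."]
        volumeMounts:
        - name: data
          mountPath: /data       # the file storage volume to back up
        - name: backup
          mountPath: /backup     # replace with a durable target in practice
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: guestbook-filestorage   # hypothetical claim name
      - name: backup
        emptyDir: {}
```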