# Lab 2: File storage with Kubernetes
This lab demonstrates the use of cloud-based file storage with Kubernetes. It uses IBM Cloud File Storage, which is persistent, fast, and flexible network-attached, NFS-based file storage with capacity ranging from 25 GB to 12,000 GB and up to 48,000 IOPS. IBM Cloud File Storage makes data available to all worker nodes within a single availability zone.
The following topics are covered in this exercise:
- Claim a classic file storage volume.
- Use the `Guestbook` application to view the images.
- Claim back the storage resources and clean up.
## Prereqs
Follow the [prereqs](../Lab0/README.md) if you haven't already.
## Review file storage classes
Review the [storage classes](https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_storageclass_reference) for file storage. In addition to the standard set of storage classes, [custom storage classes](https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_custom_storageclass) can be defined to meet the storage requirements.
IKS comes with storage class definitions for file storage. This lab uses the storage class `ibmc-file-silver`. Note that the default class `ibmc-file-gold` is allocated if a storage class is not explicitly defined.
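If you want to see every storage class defined in the cluster first (optional, not required by the lab), you can list them and then describe the silver class used here:

```
kubectl get storageclasses
```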
```
kubectl describe storageclass ibmc-file-silver
```
Expected output:
```
$ kubectl describe storageclass ibmc-file-silver
```

File silver provides 4 IOPS per GB, with a maximum capacity of 12 TB; a 100 GB volume, for example, comes with 400 IOPS.
## Claim a file storage volume
IBM Cloud File Storage provides fast access to your data for a cluster running in a single availability zone. For higher availability, use a storage option that is designed for [geographically distributed data](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#persistent_storage_overview).
Review the YAML for the file storage `PersistentVolumeClaim`. When we create this `PersistentVolumeClaim`, the volume is automatically provisioned in an availability zone where the worker nodes are located.
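As a rough sketch of what such a claim looks like (the name and size below are illustrative placeholders, not necessarily the values used by this lab's YAML):

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: guestbook-filestore        # illustrative name
spec:
  accessModes:
    - ReadWriteMany                # let multiple pods mount the same volume
  resources:
    requests:
      storage: 20Gi                # illustrative size
  storageClassName: ibmc-file-silver
```

Applying a claim like this with `kubectl apply -f <file>` triggers dynamic provisioning of the file share, and `kubectl get pvc` shows the claim move from `Pending` to `Bound` once the volume is ready.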
Run the commands `ls -al; ls -al images; df -ah` to view the volume and files. Review the mount for the new volume. Note that the images folder is empty.
Note that the volume and the data are available to all the pods running the Guestbook application.
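If you are not already attached to a pod's shell at this point, one way to run those commands from your workstation is with `kubectl exec`. This is only a sketch: the pod name is a placeholder, so substitute one of the Guestbook pod names from `kubectl get pods`.

```
kubectl get pods
kubectl exec -it <guestbook-pod-name> -- sh -c "ls -al; ls -al images; df -ah"
```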
IBM Cloud File Storage is an NFS-based file storage that is available across all worker nodes within a single availability zone. If you are running a cluster with multiple nodes (within a single AZ), you can run the following commands to prove that your data is available across different nodes:
```
kubectl get pods -o wide
kubectl get nodes
```
Expected output:
```
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
```

| Storage Type | Persisted at which level | Example Uses |
| - | - | - |
| Container local storage | Container | Ephemeral state |
| Secondary Storage ([EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)) | Pod | Checkpoint a long computation process |
| Primary Storage ([HostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) | Node | Running cAdvisor in a container |
| IBM Cloud File Storage (NFS) | Availability Zone | Applications running in a single availability zone |
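To make the pod-level row concrete, here is a minimal sketch (not part of the lab; names are illustrative) of a pod using an `emptyDir` volume. Everything written to it disappears when the pod is deleted, unlike the NFS volume used above:

```
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "date > /scratch/started && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {}                 # contents live only as long as the pod
```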
Data is available to all nodes within the availability zone where the file storage exists, but the `accessModes` field on the `PersistentVolumeClaim` determines whether multiple pods are able to mount a volume specified by a PVC (a quick way to inspect this field is shown after the list). The possible values are:
- **ReadWriteMany**: The PVC can be mounted by multiple pods. All pods can read from and write to the volume.
- **ReadOnlyMany**: The PVC can be mounted by multiple pods. All pods have read-only access.
- **ReadWriteOnce**: The PVC can be mounted by one pod only. This pod can read from and write to the volume.
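As an example, you can check which access modes a claim requests with a `jsonpath` query (the claim name is a placeholder; the Guestbook claim in this lab presumably requests `ReadWriteMany`, since the same volume is mounted by all the Guestbook pods):

```
kubectl get pvc <pvc-name> -o jsonpath='{.spec.accessModes}'
```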
## [Optional exercises]
Another way to see that the data is persisted at the availability zone level is to:

- Back up data (a simple way to do this is sketched below).
- Delete pods to confirm that it does not impact the data used by the application.
- Delete the Kubernetes cluster.
- Create a new cluster and reuse the volume.
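For the backup step, one simple approach is to copy the uploaded images off a running pod with `kubectl cp`. This is only a sketch: the pod name is a placeholder, and it assumes the images sit in the `images` directory seen earlier in the pod's working directory.

```
kubectl cp <guestbook-pod-name>:images ./images-backup
```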