Commit ef9ed30
Merge pull request #5 from jzaccone/lab1changes: enhance lab1
2 parents 7ec9635 + eda8ea6
1 file changed: workshop/Lab1/README.md (63 additions, 9 deletions)
@@ -22,7 +22,7 @@ From the cloud shell prompt, run the following commands to get the guestbook app
 cd $HOME
 git clone --branch fs https://github.com/IBM/guestbook-nodejs.git
 git clone --branch storage https://github.com/rojanjose/guestbook-config.git
-cd $HOME/guestbook-config/storage
+cd $HOME/guestbook-config/storage/lab1
 ```
 
 Let's start with reserving the Persistent volume from the primary storage.
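The PersistentVolume manifest itself sits above this hunk and is not reproduced in the diff. For orientation only, a hostPath-backed PV generally looks like the sketch below; the name, size, and path are assumptions, not values from the lab:

```
# Illustrative sketch only: the name, size, and path are assumed values,
# not taken from the lab's actual manifest.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: guestbook-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/guestbook
EOF
```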
@@ -74,7 +74,7 @@ spec:
 Create PVC:
 
 ```
-kubectl create -f guestbook-local-pvc.yaml
+kubectl create -f pvc-hostpath.yaml
 persistentvolumeclaim/guestbook-local-pvc created
 ❯ kubectl get pvc
 NAME                  STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
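The renamed `pvc-hostpath.yaml` is not reproduced in this hunk either. A minimal claim consistent with the `guestbook-local-pvc` name in the output above might look like this; the access mode and size are assumptions:

```
# Sketch only: matches the claim name seen in the output above.
# The access mode and requested size are assumed, not from the lab.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: guestbook-local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```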
@@ -115,7 +115,6 @@ module.exports = function(Entry) {
 Run the commands listed below to build the guestbook image and push it to the Docker Hub registry:
 
 ```
-cd $HOME/guestbook-nodejs/src
 docker build -t $DOCKERUSER/guestbook-nodejs:storage .
 docker login -u $DOCKERUSER
 docker push $DOCKERUSER/guestbook-nodejs:storage
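These commands assume `$DOCKERUSER` already holds your Docker Hub username; the lab sets it earlier, outside this diff. If you are following along in a fresh shell, something like this would be needed first (the username is a placeholder):

```
export DOCKERUSER=your-dockerhub-id    # placeholder, use your own ID
docker images | grep guestbook-nodejs  # optional: confirm the tag built
```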
@@ -180,7 +179,7 @@ service/guestbook created
 Find the URL for the guestbook application by joining the worker node external IP and service node port.
 
 ```
-HOSTNAME=`ibmcloud ks workers --cluster $CLUSTERNAME | grep Ready | head -n 1 | awk '{print $2}'`
+HOSTNAME=`kubectl get nodes -o wide | tail -n 1 | awk '{print $7}'`
 SERVICEPORT=`kubectl get svc guestbook -o=jsonpath='{.spec.ports[0].nodePort}'`
 echo "http://$HOSTNAME:$SERVICEPORT"
 ```
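As a quick sanity check before opening a browser (an aside, not part of this commit), you can confirm the NodePort answers:

```
# Expect 200 if the guestbook front end is up.
curl -s -o /dev/null -w "%{http_code}\n" "http://$HOSTNAME:$SERVICEPORT"
```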
@@ -189,12 +188,10 @@ Open the URL in a browser and create guest book entries.
 
 ![Guestbook entries](images/lab1-guestbook-entries.png)
 
-Log into the pod:
+Next, inspect the data. To do this, run a bash process inside the application container using `kubectl exec`. Reference the pod name from the previous `kubectl get pods` command. Once inside the container, use the subsequent commands to inspect the data.
 
 ```
-kubectl exec -it guestbook-v1-6f55cb54c5-jb89d bash
-
-kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
+kubectl exec -it [POD NAME] -- bash
 
 root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# ls -al
 total 256
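A hedged convenience, not in the lab text: rather than copying the pod name by hand, you can capture it first, assuming the guestbook pods carry an `app=guestbook` label:

```
# Assumption: the deployment labels its pods app=guestbook.
POD=$(kubectl get pods -l app=guestbook -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- bash
```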
@@ -246,7 +243,57 @@ tmpfs 7.9G 0 7.9G 0% /sys/firmware
 
 ```
 
-Kill the pod to see the impact of deleting the pod on data.
+While still inside the container, create a file on the container file system. This file will not persist when we kill the container. Then run `/sbin/killall5` to terminate the container.
+
+```
+root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# touch dontdeletemeplease
+root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# ls dontdeletemeplease
+dontdeletemeplease
+root@guestbook-v1-66798779d6-fqh2j:/home/node/app# /sbin/killall5
+root@guestbook-v1-66798779d6-fqh2j:/home/node/app# command terminated with exit code 137
+```
+
+The `killall5` command will kick you out of the container (which is no longer running), but the pod is still running. Verify this with `kubectl get pods`. Note the **0/1** status indicating the application container is no longer running. (Exit code 137 above means the process was killed: 128 plus signal 9, SIGKILL.)
+```
+kubectl get pods
+NAME                            READY   STATUS             RESTARTS   AGE
+guestbook-v1-66798779d6-fqh2j   0/1     CrashLoopBackOff   2          16m
+```
+
+After a few seconds, the Pod will restart the container:
+```
+kubectl get pods
+NAME                            READY   STATUS    RESTARTS   AGE
+guestbook-v1-66798779d6-fqh2j   1/1     Running   3          16m
+```
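An aside that is not part of the commit: you can watch this restart cycle as it happens with the watch flag:

```
kubectl get pods -w   # streams status changes; Ctrl-C to stop
```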
+
+Run a bash process inside the container to inspect your data again:
+
+```
+kubectl exec -it [POD NAME] -- bash
+
+root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# cat data/cache.txt
+Hello Kubernetes!
+Hola Kubernetes!
+Zdravstvuyte Kubernetes!
+Nǐn hǎo Kubernetes!
+Goedendag Kubernetes!
+
+root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# cat logs/debug.txt
+Received message: Hello Kubernetes!
+Received message: Hola Kubernetes!
+Received message: Zdravstvuyte Kubernetes!
+Received message: Nǐn hǎo Kubernetes!
+Received message: Goedendag Kubernetes!
+
+root@guestbook-v1-6f55cb54c5-jb89d:/home/node/app# ls dontdeletemeplease
+ls: dontdeletemeplease: No such file or directory
+```
+Notice how the storage from the primary (`hostPath`) and secondary (`emptyDir`) storage types persisted beyond the lifecycle of the container, but the `dontdeletemeplease` file did not.
+
+Next, we'll kill the pod to see the impact of deleting the pod on data.
 
 ```
 kubectl get pods
@@ -297,6 +344,13 @@ root@guestbook-v1-5cbc445dc9-sx58j:/home/node/app#
 
 This shows that the storage type `emptyDir` loses data on a pod restart, whereas `hostPath` data lives until the worker node or cluster is deleted.
 
+| Storage Type | Persisted at which level | Example Uses |
+| - | - | - |
+| Container local storage | Container | Ephemeral state |
+| Secondary Storage ([EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)) | Pod | Checkpoint a long computation process |
+| Primary Storage ([HostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) | Node | Running cAdvisor in a container |
+
+Kubernetes clusters normally have multiple worker nodes, with replicas of a single application running across different worker nodes. In this case, only applications running on the same worker node will share data persisted with IKS Primary Storage (HostPath). More suitable solutions are available for cross-worker-node, cross-availability-zone, and cross-region storage.
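To make the table concrete (an illustrative sketch, not part of the lab; the pod name, image, and mount paths are assumptions), the pod-scoped and node-scoped rows correspond to volume declarations like these in a pod spec:

```
# Sketch only: validated client-side, nothing is created.
# "storage-demo", "busybox", and the mount paths are assumed values;
# only the claim name comes from earlier in the lab.
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo
spec:
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: scratch      # pod-scoped: gone when the pod goes
          mountPath: /logs
        - name: node-data    # node-scoped via the hostPath-backed claim
          mountPath: /data
  volumes:
    - name: scratch
      emptyDir: {}
    - name: node-data
      persistentVolumeClaim:
        claimName: guestbook-local-pvc
EOF
```

Files written anywhere else in the container filesystem (like `dontdeletemeplease`) fall under the container-local row: they vanish with the container itself.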
 
 ## Clean up
 