Commit 278e29e

Update docs
1 parent 500dfbd commit 278e29e

File tree

docs/Usage.md
docs/advanced-setups.md
templates/cluster-template-dhcp.yaml

3 files changed: +249 -7 lines changed

docs/Usage.md

Lines changed: 7 additions & 6 deletions
@@ -161,12 +161,13 @@ For templates using `CNI`s you're required to create `ConfigMaps` to make `ClusterResourceSets` work.
 
 We provide the following templates:
 
-| Flavor         | Template File                                  | CRS File                      |
-|----------------| ----------------------------------------------- |-------------------------------|
-| cilium         | templates/cluster-template-cilium.yaml         | templates/crs/cni/cilium.yaml |
-| calico         | templates/cluster-template-calico.yaml         | templates/crs/cni/calico.yaml |
-| multiple-vlans | templates/cluster-template-multiple-vlans.yaml | -                             |
-| default        | templates/cluster-template.yaml                | -                             |
+| Flavor         | Template File                                  | CRS File                      |
+|----------------|------------------------------------------------|-------------------------------|
+| dhcp           | templates/cluster-template-dhcp.yaml           | -                             |
+| cilium         | templates/cluster-template-cilium.yaml         | templates/crs/cni/cilium.yaml |
+| calico         | templates/cluster-template-calico.yaml         | templates/crs/cni/calico.yaml |
+| multiple-vlans | templates/cluster-template-multiple-vlans.yaml | -                             |
+| default        | templates/cluster-template.yaml                | -                             |
 
 For more information about advanced clusters please check our [advanced setups docs](advanced-setups.md).
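For example, to use the `cilium` flavor from the table above, the CNI manifest from the CRS File column must first be wrapped in a `ConfigMap` for the `ClusterResourceSet` to consume. A minimal sketch of that workflow (the cluster name `my-cluster` and the ConfigMap name are illustrative placeholders, not taken from this commit):

```bash
# Create the ConfigMap consumed by the flavor's ClusterResourceSet
# (cni-my-cluster-crs-0 is an assumed placeholder name; check the
# generated manifest for the name the ClusterResourceSet references)
kubectl create configmap cni-my-cluster-crs-0 --from-file=data=templates/crs/cni/cilium.yaml

# Generate and apply a cluster using the cilium flavor
clusterctl generate cluster my-cluster \
  --infrastructure proxmox \
  --kubernetes-version v1.28.5 \
  --flavor cilium > cluster.yaml
kubectl apply -f cluster.yaml
```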

docs/advanced-setups.md

Lines changed: 44 additions & 1 deletion
@@ -2,6 +2,49 @@
 
 To get started with CAPMOX please refer to the [Getting Started](Usage.md#quick-start) section.
 
+## DHCP
+
+If you want to use DHCP to assign IP addresses to the machines, you can use `flavor=dhcp` when generating a new cluster.
+
+First, define the required environment variables:
+```bash
+# The node that hosts the VM template to be used to provision VMs
+export PROXMOX_SOURCENODE="stg-ceph01"
+# The template VM ID used for cloning VMs
+export TEMPLATE_VMID=164
+# The Proxmox VE nodes used for VM deployments
+export ALLOWED_NODES="[stg-ceph01,stg-ceph02,stg-ceph03,stg-ceph04,stg-ceph05]"
+# The SSH authorized keys used to SSH into the machines
+export VM_SSH_KEYS="ssh-ed25519 ..., ssh-ed25519 ..."
+# The IP that kube-vip is going to use as the control plane endpoint
+export CONTROL_PLANE_ENDPOINT_IP="10.10.10.4"
+# The DNS nameservers for the machines' network-config
+export DNS_SERVERS="[10.4.1.1]"
+# The device used for the boot disk
+export BOOT_VOLUME_DEVICE=scsi0
+# The size of the boot disk in GB
+export BOOT_VOLUME_SIZE=100
+# The number of sockets for the VMs
+export NUM_SOCKETS=2
+# The number of cores for the VMs
+export NUM_CORES=4
+# The memory size for the VMs in MiB
+export MEMORY_MIB=16384
+# The network bridge device for Proxmox VE VMs
+export BRIDGE=vmbr0
+```
+
+#### Generate a Cluster
+
+```bash
+clusterctl generate cluster test-dhcp \
+  --infrastructure proxmox \
+  --kubernetes-version v1.28.5 \
+  --control-plane-machine-count=1 \
+  --worker-machine-count=2 \
+  --flavor=dhcp > cluster.yaml
+```
+
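Once `cluster.yaml` has been generated, the remaining steps follow the standard Cluster API workflow. A minimal sketch, assuming a management cluster with CAPMOX already installed:

```bash
# Apply the generated manifests to the management cluster
kubectl apply -f cluster.yaml

# Watch the cluster come up; with DHCP, machine addresses show up
# once the VMs obtain their leases
clusterctl describe cluster test-dhcp
kubectl get proxmoxmachines
```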
 ## Multiple NICs
 
 If you want to create VMs with multiple network devices,
 
@@ -71,7 +114,7 @@ clusterctl generate cluster test-dual-stack \
   --flavor=dual-stack > cluster.yaml
 ```
 
-#### Node over-/ underprovisioning
+## Node over-/ underprovisioning
 
 By default, our scheduler only allows allocating as much memory to guests as the host has. This might not be desirable behaviour in all cases. For example, one might explicitly want to overprovision the host's memory, or to reserve a bit of the host's memory for the host itself.
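To actually permit over- or underprovisioning, the scheduler's memory accounting can be adjusted on the `ProxmoxCluster`. A minimal sketch, assuming a `spec.schedulerHints.memoryAdjustment` percentage field (an assumption; verify the field name and semantics against your CRD version):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
metadata:
  name: test-dhcp
spec:
  controlPlaneEndpoint:
    host: 10.10.10.4
    port: 6443
  schedulerHints:
    # Assumed semantics: 300 allows guests to be allocated up to 3x the
    # host's memory; values below 100 reserve part of it for the host
    memoryAdjustment: 300
```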

templates/cluster-template-dhcp.yaml

Lines changed: 198 additions & 0 deletions
@@ -0,0 +1,198 @@
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "${CLUSTER_NAME}"
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: ProxmoxCluster
    name: "${CLUSTER_NAME}"
  controlPlaneRef:
    kind: KubeadmControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    name: "${CLUSTER_NAME}-control-plane"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
metadata:
  name: "${CLUSTER_NAME}"
spec:
  controlPlaneEndpoint:
    host: ${CONTROL_PLANE_ENDPOINT_IP}
    port: 6443
  ipv4Config:
    dhcp: true
  dnsServers: ${DNS_SERVERS}
  allowedNodes: ${ALLOWED_NODES:=[]}
---
kind: KubeadmControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  replicas: ${CONTROL_PLANE_MACHINE_COUNT}
  machineTemplate:
    infrastructureRef:
      kind: ProxmoxMachineTemplate
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
      name: "${CLUSTER_NAME}-control-plane"
  kubeadmConfigSpec:
    users:
      - name: root
        sshAuthorizedKeys: [${VM_SSH_KEYS}]
    files:
      - content: |
          apiVersion: v1
          kind: Pod
          metadata:
            creationTimestamp: null
            name: kube-vip
            namespace: kube-system
          spec:
            containers:
              - args:
                  - manager
                env:
                  - name: cp_enable
                    value: "true"
                  - name: vip_interface
                    value: ${VIP_NETWORK_INTERFACE=""}
                  - name: address
                    value: ${CONTROL_PLANE_ENDPOINT_IP}
                  - name: port
                    value: "6443"
                  - name: vip_arp
                    value: "true"
                  - name: vip_leaderelection
                    value: "true"
                  - name: vip_leaseduration
                    value: "15"
                  - name: vip_renewdeadline
                    value: "10"
                  - name: vip_retryperiod
                    value: "2"
                image: ghcr.io/kube-vip/kube-vip:v0.5.11
                imagePullPolicy: IfNotPresent
                name: kube-vip
                resources: {}
                securityContext:
                  capabilities:
                    add:
                      - NET_ADMIN
                      - NET_RAW
                volumeMounts:
                  - mountPath: /etc/kubernetes/admin.conf
                    name: kubeconfig
            hostAliases:
              - hostnames:
                  - kubernetes
                ip: 127.0.0.1
            hostNetwork: true
            volumes:
              - hostPath:
                  path: /etc/kubernetes/admin.conf
                  type: FileOrCreate
                name: kubeconfig
          status: {}
        owner: root:root
        path: /etc/kubernetes/manifests/kube-vip.yaml
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          provider-id: "proxmox://'{{ ds.meta_data.instance_id }}'"
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          provider-id: "proxmox://'{{ ds.meta_data.instance_id }}'"
  version: "${KUBERNETES_VERSION}"
---
kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  template:
    spec:
      sourceNode: "${PROXMOX_SOURCENODE}"
      templateID: ${TEMPLATE_VMID}
      format: "qcow2"
      full: true
      numSockets: ${NUM_SOCKETS:=2}
      numCores: ${NUM_CORES:=4}
      memoryMiB: ${MEMORY_MIB:=16384}
      disks:
        bootVolume:
          disk: ${BOOT_VOLUME_DEVICE}
          sizeGb: ${BOOT_VOLUME_SIZE:=100}
      network:
        default:
          bridge: ${BRIDGE}
          model: virtio
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: "${CLUSTER_NAME}-workers"
spec:
  clusterName: "${CLUSTER_NAME}"
  replicas: ${WORKER_MACHINE_COUNT}
  selector:
    matchLabels:
  template:
    metadata:
      labels:
        node-role.kubernetes.io/node: ""
    spec:
      clusterName: "${CLUSTER_NAME}"
      version: "${KUBERNETES_VERSION}"
      bootstrap:
        configRef:
          name: "${CLUSTER_NAME}-worker"
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
      infrastructureRef:
        name: "${CLUSTER_NAME}-worker"
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
        kind: ProxmoxMachineTemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxMachineTemplate
metadata:
  name: "${CLUSTER_NAME}-worker"
spec:
  template:
    spec:
      sourceNode: "${PROXMOX_SOURCENODE}"
      templateID: ${TEMPLATE_VMID}
      format: "qcow2"
      full: true
      numSockets: ${NUM_SOCKETS:=2}
      numCores: ${NUM_CORES:=4}
      memoryMiB: ${MEMORY_MIB:=16384}
      disks:
        bootVolume:
          disk: ${BOOT_VOLUME_DEVICE}
          sizeGb: ${BOOT_VOLUME_SIZE:=100}
      network:
        default:
          bridge: ${BRIDGE}
          model: virtio
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: "${CLUSTER_NAME}-worker"
spec:
  template:
    spec:
      users:
        - name: root
          sshAuthorizedKeys: [${VM_SSH_KEYS}]
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            provider-id: "proxmox://'{{ ds.meta_data.instance_id }}'"
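Because this template relies on `clusterctl`'s variable substitution (defaults such as `${NUM_SOCKETS:=2}` apply when a variable is unset), it can also be rendered directly from the file. A short sketch using `clusterctl generate yaml`:

```bash
# Render the raw template with the current environment; unset variables
# fall back to their ${VAR:=default} values, the rest must be exported
# (in addition to the variables from the DHCP section above)
export CLUSTER_NAME=test-dhcp
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=2
export KUBERNETES_VERSION=v1.28.5
clusterctl generate yaml --from templates/cluster-template-dhcp.yaml > cluster.yaml
```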
