
Commit cd81593

feat: upgrade Kubernetes API to version 1.34 (#45)
* feat: upgrade Kubernetes API to version 1.34
* fix: the issues while upgrading to 1.34
* feat: add a script for manual cleanups
1 parent 2d02761 commit cd81593

11 files changed: +527 additions, -57 deletions


.terraform-docs.yaml

Lines changed: 3 additions & 1 deletion
@@ -77,7 +77,9 @@ content: |-
 {{ .Resources }}
 {{ .Inputs }}
 {{ .Outputs }}
-
+
+For detailed documentation about the structure and contents of each output, refer to the [Module Outputs](./docs/GUIDES.md#module-outputs)
+section in the guides.
 output:
   file: "README.md"
   mode: replace

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ The instruments used to develop and maintain this Module are the following:

 | Id | Tool |
 |-----|----------------------|
-| I0 | Docker |
+| I0 | Docker |
 | I1 | Make |
 | I2 | Terraform |
 | I3 | AWS CLI version 2 |

README.md

Lines changed: 51 additions & 48 deletions
Large diffs are not rendered by default.

autoscaler.tf

Lines changed: 1 addition & 1 deletion
@@ -84,7 +84,7 @@ spec:
   role: "${module.kubernetes.eks_managed_node_groups["main"].iam_role_name}"
   subnetSelectorTerms:
     - tags:
-        karpenter.sh/discovery: "${try(var.cluster_autoscaler_subnet_selector, module.kubernetes.cluster_name)}"
+        karpenter.sh/discovery: "${coalesce(var.cluster_autoscaler_subnet_selector, module.kubernetes.cluster_name)}"
   securityGroupSelectorTerms:
     - tags:
         karpenter.sh/discovery: "${module.kubernetes.cluster_name}"
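This swap matters when `var.cluster_autoscaler_subnet_selector` defaults to `null`: `try()` only falls back when its first argument raises an error, so a null value passes through unchanged, while `coalesce()` returns its first non-null argument. A minimal sketch of the difference (the `null` default and the local names are illustrative, not taken from the module):

```hcl
variable "cluster_autoscaler_subnet_selector" {
  type    = string
  default = null # assumption for illustration
}

locals {
  cluster_name = "demo-cluster" # stand-in for module.kubernetes.cluster_name

  # try() returns null here, because a null variable value is not an error.
  selector_with_try = try(var.cluster_autoscaler_subnet_selector, local.cluster_name)

  # coalesce() skips null values and falls back to the cluster name.
  selector_with_coalesce = coalesce(var.cluster_autoscaler_subnet_selector, local.cluster_name)
}
```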

docs/GUIDES.md

Lines changed: 156 additions & 0 deletions
@@ -271,3 +271,159 @@ Prometheus Operator monitors at cluster level for any PrometheusRule object.
The module generates when the module is instantiated for the first time a random password and random user for Grafana root user creds.
These creds are stored in the AWS Secrets Manager. The actual path to the secret is stored in the output variable [cluster_ssm_params_paths](../outputs.tf).

## Module Outputs

This section documents all outputs exposed by the module, providing detailed information about the structure and contents of each output.

### `cluster`

The main EKS cluster configuration output. Contains comprehensive information about the Kubernetes cluster, including:

- **`access_entries`**: Map of access entries for cluster authentication. Each entry contains:
  - `access_entry_arn`: ARN of the access entry
  - `cluster_name`: Name of the EKS cluster
  - `created_at` / `modified_at`: Timestamps
  - `id`: Unique identifier
  - `kubernetes_groups`: Set of Kubernetes groups
  - `principal_arn`: ARN of the IAM principal
  - `tags` / `tags_all`: Resource tags
  - `type`: Access entry type (e.g., "STANDARD")
  - `user_name`: Assumed role user name

- **`access_policy_associations`**: Map of access policy associations, including:
  - `access_scope`: List of access scopes (cluster/namespace level)
  - `associated_at` / `modified_at`: Timestamps
  - `policy_arn`: ARN of the associated policy
  - `principal_arn`: ARN of the IAM principal

- **`cloudwatch_log_group_arn`** / **`cloudwatch_log_group_name`**: CloudWatch log group details for cluster logs

- **`cluster_addons`**: Map of installed EKS addons (e.g., `coredns`, `vpc-cni`, `kube-proxy`, `eks-pod-identity-agent`, `snapshot-controller`). Each addon contains:
  - `addon_name`: Name of the addon
  - `addon_version`: Version string
  - `arn`: ARN of the addon
  - `created_at` / `modified_at`: Timestamps
  - `configuration_values`: YAML configuration
  - `preserve`: Whether addon is preserved on delete
  - `resolve_conflicts_on_create` / `resolve_conflicts_on_update`: Conflict resolution strategy
  - `service_account_role_arn`: IAM role ARN for the addon

- **`cluster_arn`**: ARN of the EKS cluster
- **`cluster_certificate_authority_data`**: Base64-encoded certificate authority data
- **`cluster_dualstack_oidc_issuer_url`** / **`cluster_oidc_issuer_url`**: OIDC issuer URLs
- **`cluster_endpoint`**: Kubernetes API server endpoint URL
- **`cluster_iam_role_arn`** / **`cluster_iam_role_name`** / **`cluster_iam_role_unique_id`**: Cluster IAM role details
- **`cluster_ip_family`**: IP family (e.g., "ipv4")
- **`cluster_name`**: Name of the cluster
- **`cluster_platform_version`**: EKS platform version
- **`cluster_primary_security_group_id`**: Primary security group ID
- **`cluster_security_group_arn`** / **`cluster_security_group_id`**: Cluster security group details
- **`cluster_service_cidr`**: Service CIDR block
- **`cluster_status`**: Current cluster status (e.g., "ACTIVE")
- **`cluster_version`**: Kubernetes version (e.g., "1.34")
- **`cluster_tls_certificate_sha1_fingerprint`**: TLS certificate fingerprint

- **`eks_managed_node_groups`**: Map of EKS managed node groups. Each group contains:
  - `iam_role_arn` / `iam_role_name` / `iam_role_unique_id`: Node group IAM role details
  - `node_group_arn`: ARN of the node group
  - `node_group_autoscaling_group_names`: List of Auto Scaling group names
  - `node_group_id`: Unique identifier
  - `node_group_labels`: Labels applied to nodes
  - `node_group_resources`: Resource details including Auto Scaling groups
  - `node_group_status`: Current status (e.g., "ACTIVE")
  - `node_group_taints`: Set of taints applied to nodes
  - `platform`: Platform type (e.g., "linux")
  - `launch_template_*`: Launch template details (if used)

- **`eks_managed_node_groups_autoscaling_group_names`**: List of all Auto Scaling group names

- **`fargate_profiles`**: Map of Fargate profiles (if configured)

- **`kms_key_arn`** / **`kms_key_id`** / **`kms_key_policy`**: KMS encryption key details

- **`node_security_group_arn`** / **`node_security_group_id`**: Node security group details

- **`oidc_provider`** / **`oidc_provider_arn`**: OIDC provider details

- **`self_managed_node_groups`** / **`self_managed_node_groups_autoscaling_group_names`**: Self-managed node groups (if configured)
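One common way to consume the `cluster` output is to wire the Kubernetes provider of the calling configuration; a minimal sketch, assuming a module instance named `platform` (the instance name is illustrative, not part of this module):

```hcl
provider "kubernetes" {
  host                   = module.platform.cluster.cluster_endpoint
  cluster_ca_certificate = base64decode(module.platform.cluster.cluster_certificate_authority_data)

  # Token-based authentication via the AWS CLI; assumes the CLI is available locally.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.platform.cluster.cluster_name]
  }
}
```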
### `cluster_network`

Network configuration output containing both internal and external network details.

- **`internal`**: Internal network configuration (when using module-managed VPC):
  - **`network`**: List containing VPC network objects with:
    - **Subnet information**:
      - `public_subnets` / `public_subnet_arns` / `public_subnets_cidr_blocks`: Public subnet details
      - `private_subnets` / `private_subnet_arns` / `private_subnets_cidr_blocks`: Private subnet details
      - `intra_subnets` / `intra_subnet_arns` / `intra_subnets_cidr_blocks`: Intra-VPC subnet details
      - `database_subnets` / `database_subnet_arns` / `database_subnets_cidr_blocks`: Database subnet details
      - Subnet objects with full details (IDs, ARNs, availability zones, CIDR blocks, tags, etc.)
    - **Route tables**: IDs and association IDs for public, private, intra, and database subnets
    - **NAT Gateways**: IDs, interface IDs, Elastic IP allocation IDs, and public IPs
    - **Internet Gateway**: ID and ARN
    - **VPC details**: ID, ARN, CIDR block, DNS settings, owner ID, main route table ID
    - **Availability zones**: List of AZs used
  - **`vpc_endpoints`**: List of VPC endpoint configurations with security group details

- **`external`**: External network configuration (when using existing VPC):
  - `vpc_id`: VPC ID
  - `node_subnet_ids`: Subnet IDs for worker nodes
  - `control_plane_subnet_ids`: Subnet IDs for control plane
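As a small illustration, these fields can be re-exported for use by sibling stacks; the sketch below assumes the cluster uses an existing (external) VPC and a module instance named `platform` (the instance name is illustrative):

```hcl
# Expose the VPC the cluster is attached to, for use by sibling stacks.
output "cluster_vpc_id" {
  value = module.platform.cluster_network.external.vpc_id
}

# Subnets intended for worker nodes, e.g. to place additional workloads alongside them.
output "cluster_node_subnet_ids" {
  value = module.platform.cluster_network.external.node_subnet_ids
}
```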
### `cluster_storage_classes`

Map of storage class configurations. Contains:

- **`default`**: List of default storage classes (e.g., `standard`, `golden`, `platinum`). Each storage class includes:
  - `id`: Storage class name
  - `allow_volume_expansion`: Whether volume expansion is allowed
  - `reclaim_policy`: Policy (e.g., "Retain")
  - `volume_binding_mode`: Binding mode (e.g., "WaitForFirstConsumer")
  - `storage_provisioner`: Provisioner (e.g., "ebs.csi.aws.com")
  - `parameters`: Map of storage class parameters (e.g., `type`, `encrypted`, `csi.storage.k8s.io/fstype`)
  - `metadata`: List containing Kubernetes metadata (annotations, labels, name, etc.)

- **`additional`**: List of additional custom storage classes (if configured)
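For illustration, a PersistentVolumeClaim can request one of the default classes by name; this sketch uses the Terraform `kubernetes` provider, and the claim name, namespace, and size are illustrative:

```hcl
resource "kubernetes_persistent_volume_claim_v1" "data" {
  metadata {
    name      = "example-data"
    namespace = "default"
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "standard" # one of the default classes exposed by the module

    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
}
```

With `WaitForFirstConsumer` binding, the claim stays `Pending` until a pod that uses it is scheduled.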
### `cluster_autoscaler_resources`

Autoscaler resource names for use by cluster users. Contains:

- **`default`**: Default autoscaler resources:
  - `ec2_node_class`: Name of the default EC2NodeClass resource
  - `node_pool`: Name of the default NodePool resource

These can be referenced in Kubernetes manifests to use the default autoscaler configuration.
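For example, a workload can be pinned to the default Karpenter NodePool through a node selector; a minimal sketch, with a module instance named `platform` and illustrative workload names:

```hcl
resource "kubernetes_deployment_v1" "worker" {
  metadata {
    name = "example-worker"
  }

  spec {
    replicas = 1

    selector {
      match_labels = { app = "example-worker" }
    }

    template {
      metadata {
        labels = { app = "example-worker" }
      }

      spec {
        # Schedule the pods onto nodes provisioned by the module's default NodePool.
        node_selector = {
          "karpenter.sh/nodepool" = module.platform.cluster_autoscaler_resources.default.node_pool
        }

        container {
          name  = "app"
          image = "public.ecr.aws/docker/library/busybox:stable"
          args  = ["sleep", "infinity"]
        }
      }
    }
  }
}
```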
### `cluster_ssm_params_paths`

SSM Parameter Store paths for sensitive values stored by the module. Contains:

- **`prometheus_stack`**: Prometheus/Grafana stack credentials:
  - `grafana_root_username`: SSM parameter path for Grafana admin username
  - `grafana_root_password`: SSM parameter path for Grafana admin password

Use these paths to retrieve credentials from AWS Systems Manager Parameter Store.
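For example, the Grafana admin credentials can be read back in Terraform with the `aws_ssm_parameter` data source (the module instance name `platform` is illustrative):

```hcl
data "aws_ssm_parameter" "grafana_root_username" {
  name = module.platform.cluster_ssm_params_paths.prometheus_stack.grafana_root_username
}

data "aws_ssm_parameter" "grafana_root_password" {
  name            = module.platform.cluster_ssm_params_paths.prometheus_stack.grafana_root_password
  with_decryption = true
}

# data.aws_ssm_parameter.grafana_root_password.value is treated as sensitive by Terraform.
```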
### Sensitive Outputs

The following outputs are marked as sensitive and contain Helm chart values or deployment configurations:

- **`cluster_autoscaler`**: Complete autoscaler (Karpenter) Helm chart values and deployment configuration
- **`cluster_descheduler`**: Descheduler Helm chart values and deployment configuration
- **`cluster_ingresses`**: Ingress controller configurations:
  - `private`: Private ingress controller details:
    - `values`: Helm chart values
    - `hostname`: Load balancer hostname
  - `public`: Public ingress controller details:
    - `values`: Helm chart values
    - `hostname`: Load balancer hostname
- **`cluster_logging`**: Logging stack configuration:
  - `storage`: Grafana Loki storage configuration (Helm values)
  - `collector`: Promtail collector configuration (Helm values)
- **`cluster_monitoring`**: Prometheus/Grafana monitoring stack Helm chart values and configuration
- **`cluster_node_rebooter`**: Node rebooter/patcher Helm chart values and configuration

**Note**: These sensitive outputs contain complete Helm chart configurations and should be handled carefully. They are primarily useful for debugging or when you need to reference specific deployment details in other Terraform configurations.
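Beyond debugging, one practical use is pointing DNS at the provisioned ingress load balancers; a hedged sketch, where the module instance name `platform`, the hosted zone, and the record name are all illustrative:

```hcl
data "aws_route53_zone" "main" {
  name = "example.com."
}

resource "aws_route53_record" "public_ingress" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "apps.example.com"
  type    = "CNAME"
  ttl     = 300

  # Hostname of the public ingress load balancer exposed by the module.
  records = [module.platform.cluster_ingresses.public.hostname]
}
```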

internal_network.tf

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ locals {

 module "internal_network" {
   source  = "terraform-aws-modules/vpc/aws"
-  version = "5.19.0"
+  version = "5.20.0"

   create_vpc = var.cluster_network_type == "internal"
   name       = local.network_prefix_name

lib/Makefiles/Terraform.mk

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ terraform-sec:
 > @docker run --rm -it -v "${PWD}:/src" aquasec/tfsec:v0.62.0 /src

 # It generates the README.md file. It depends on the rules in the specification of the rule.
-terraform-docs: clean
+terraform-docs:
 > @echo "[info] Formatting"
 > @terraform fmt -recursive
 > @echo "[info] Generating README.md."
