
Commit 54698d1

Merge pull request #2461 from kiranram/kiramram-feature-cloudwatch-metric-streams-firehose-terraform
New serverless pattern for cloudwatch metric streams to kinesis firehose
2 parents f186dee + 7495da7 commit 54698d1

File tree

6 files changed: +404, -0 lines changed
Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
# Amazon CloudWatch Metrics streaming using Amazon Data Firehose with Terraform

This pattern demonstrates how to create an Amazon CloudWatch Metric Stream that delivers metrics to Amazon Data Firehose, which saves them to Amazon S3. It also demonstrates metric selection, streaming only chosen metrics for specific AWS services from CloudWatch to Amazon Data Firehose.

Learn more about this pattern at Serverless Land Patterns: https://serverlessland.com/patterns/cloudwatch-metric-streams-firehose-terraform

Important: this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the [AWS Pricing page](https://aws.amazon.com/pricing/) for details. You are responsible for any AWS costs incurred. No warranty is implied in this example.
## Requirements
* [Create an AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) if you do not already have one and log in. The IAM user that you use must have sufficient permissions to make necessary AWS service calls and manage AWS resources.
* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) installed and configured
* [Git installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* [Terraform](https://www.terraform.io/) installed
## Deployment Instructions
1. Create a new directory, navigate to that directory in a terminal and clone the GitHub repository:
```
git clone https://github.com/aws-samples/serverless-patterns
```
2. Change directory to the pattern directory:
```
cd serverless-patterns/cloudwatch-metric-streams-firehose-terraform
```
3. Run the following Terraform commands to deploy to your AWS account in the desired region (default is eu-west-2); a worked example follows this list:
```
terraform init
terraform validate
terraform plan -var region=<YOUR_REGION>
terraform apply -var region=<YOUR_REGION>
```
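For instance, to deploy to US East (N. Virginia) rather than the default, pass a concrete region name (us-east-1 here is only an example; the region variable itself is defined in the pattern's variables file, which is not expanded in this view):
```
terraform init
terraform plan -var region=us-east-1
terraform apply -var region=us-east-1
```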
## How it works
When AWS services are provisioned, the metrics listed in the IaC are captured and streamed to Amazon Data Firehose. The destination in this case is an S3 bucket, where the metrics are saved. The code is configured for eu-west-2, but it can be deployed to any desired region via the CLI as shown above. The example code includes the AWS/EC2 and AWS/RDS namespaces with a couple of metrics in each; these can easily be changed, or new namespaces and/or metrics appended, as required.

![pattern](Images/pattern.png)
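Each object that Firehose delivers to the bucket contains newline-delimited JSON metric records. As an illustration of the shape of a record in CloudWatch's json output format (all values below are placeholders, not output from this stack):
```
{"metric_stream_name":"test_streams","account_id":"123456789012","region":"eu-west-2","namespace":"AWS/EC2","metric_name":"CPUUtilization","dimensions":{"InstanceId":"i-0123456789abcdef0"},"timestamp":1718000000000,"value":{"max":3.05,"min":3.05,"sum":3.05,"count":1.0},"unit":"Percent"}
```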
## Testing
After deployment, launch an EC2 instance in the same region. After a few minutes the metrics data will appear in the S3 bucket. Each delivered file is in GZIP format and contains metrics saved as JSON objects.
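One way to check from the command line, assuming the bucket name that main.tf constructs (test-streams-&lt;account-id&gt;-&lt;region&gt;) with your own account ID, region and object key substituted:
```
# List delivered objects; Firehose writes them under date-based prefixes
aws s3 ls s3://test-streams-123456789012-eu-west-2/ --recursive

# Download one object and view the JSON metric records inside the GZIP file
aws s3 cp s3://test-streams-123456789012-eu-west-2/<OBJECT_KEY> metrics.gz
gunzip -c metrics.gz | head
```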
## Cleanup
1. Delete the stack:
```
terraform destroy -var region=<YOUR_REGION>
```
----
Copyright 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: MIT-0
Lines changed: 96 additions & 0 deletions
@@ -0,0 +1,96 @@
{
  "title": "CloudWatch Metric Streams to Amazon Data Firehose",
  "description": "Create a CloudWatch Metric Stream using Amazon Data Firehose and save the metrics in Amazon S3",
  "language": "",
  "level": "300",
  "framework": "Terraform",
  "introBox": {
    "headline": "How it works",
    "text": [
      "This pattern sets up an Amazon CloudWatch Metric Stream and associates it with Amazon Data Firehose. Through this setup you can continuously stream metrics to a destination of your choice with near-real-time delivery and low latency. Various destinations are supported, including Amazon Simple Storage Service (S3) and several third-party destinations such as Datadog, New Relic, Splunk and Sumo Logic; in this pattern we use S3. This setup also provides the capability to stream all CloudWatch metrics, or to use filters to stream only specified metrics. Each metric stream can include up to 1000 filters that include or exclude namespaces or specific metrics. Note that a single metric stream can either include or exclude metrics, but not both. If new metrics matching the filters in place are added, an existing metric stream will automatically include them.",
      "Traditionally, AWS customers relied on polling CloudWatch metrics using APIs, which underpinned all sorts of monitoring, alerting and cost management tools. Since the introduction of metric streams, customers have been able to create low-latency, scalable streams of metrics and filter them at a namespace level, for example to include or exclude entire namespaces. Where filtering at a more granular level is required, Metric Name Filtering in metric streams comes into play, providing more precise filtering capabilities.",
      "One useful feature of metric streams is that they allow you to create metric name filters on metrics which do not yet exist in your AWS account. For example, you can define metrics for the AWS/EC2 namespace if you know that an application will produce metrics for this namespace, even though that application may not yet be deployed in the account. In this case those metrics will not exist in your AWS account until the service is provisioned.",
      "This pattern also creates the required roles and policies for the services, following the principle of least privilege. The roles and policies can be expanded if additional services come into play."
    ]
  },
  "gitHub": {
    "template": {
      "repoURL": "https://github.com/aws-samples/serverless-patterns/tree/main/cloudwatch-metric-streams-firehose-terraform",
      "templateURL": "serverless-patterns/cloudwatch-metric-streams-firehose-terraform",
      "projectFolder": "cloudwatch-metric-streams-firehose-terraform",
      "templateFile": "main.tf"
    }
  },
  "resources": {
    "bullets": [
      {
        "text": "Use metric streams to continually stream CloudWatch metrics",
        "link": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html"
      },
      {
        "text": "Amazon Data Firehose - Streaming Data Pipeline",
        "link": "https://aws.amazon.com/firehose/"
      },
      {
        "text": "Amazon S3 - Cloud Object Storage",
        "link": "https://aws.amazon.com/s3/"
      }
    ]
  },
  "deploy": {
    "text": [
      "terraform init",
      "terraform plan",
      "terraform apply"
    ]
  },
  "testing": {
    "text": [
      "In the same account and region, launch an EC2 instance. You should be able to see metrics arrive in the S3 bucket within a few minutes."
    ]
  },
  "cleanup": {
    "text": [
      "terraform destroy"
    ]
  },
  "authors": [
    {
      "name": "Kiran Ramamurthy",
      "image": "n/a",
      "bio": "I am a Senior Partner Solutions Architect for Enterprise Transformation. I work predominantly with partners and specialize in migrations and modernization.",
      "linkedin": "kiran-ramamurthy-a96341b",
      "twitter": "n/a"
    }
  ],
  "patternArch": {
    "icon1": {
      "x": 20,
      "y": 50,
      "service": "cloudwatch",
      "label": "Amazon CloudWatch"
    },
    "icon2": {
      "x": 50,
      "y": 50,
      "service": "kinesis-firehose",
      "label": "Amazon Data Firehose"
    },
    "line1": {
      "from": "icon1",
      "to": "icon2",
      "label": "Metrics"
    },
    "icon3": {
      "x": 80,
      "y": 50,
      "service": "s3",
      "label": "Amazon S3"
    },
    "line2": {
      "from": "icon2",
      "to": "icon3",
      "label": "Metrics"
    }
  }
}
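The include/exclude behaviour described in the metadata above maps directly onto the aws_cloudwatch_metric_stream Terraform resource used by this pattern. As a sketch (not part of this commit), a stream that forwards every metric except one namespace would swap the include_filter blocks seen in main.tf below for an exclude_filter:
```
# Hypothetical variant: stream ALL metrics except the AWS/Lambda namespace.
# A single stream may use include_filter blocks or exclude_filter blocks, never both.
resource "aws_cloudwatch_metric_stream" "exclude_example" {
  name          = "exclude-example"
  role_arn      = aws_iam_role.metric_stream_to_firehose.arn
  firehose_arn  = aws_kinesis_firehose_delivery_stream.metrics.arn
  output_format = "json"

  exclude_filter {
    namespace = "AWS/Lambda"
  }
}
```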
Lines changed: 66 additions & 0 deletions
@@ -0,0 +1,66 @@
{
  "title": "CloudWatch Metric Streams to Amazon Data Firehose",
  "description": "Create a CloudWatch Metric Stream using Amazon Data Firehose and save the metrics in Amazon S3",
  "language": "",
  "level": "300",
  "framework": "Terraform",
  "introBox": {
    "headline": "How it works",
    "text": [
      "This pattern sets up an Amazon CloudWatch Metric Stream and associates it with Amazon Data Firehose. Through this setup you can continuously stream metrics to a destination of your choice with near-real-time delivery and low latency. Various destinations are supported, including Amazon Simple Storage Service (S3) and several third-party destinations such as Datadog, New Relic, Splunk and Sumo Logic; in this pattern we use S3. This setup also provides the capability to stream all CloudWatch metrics, or to use filters to stream only specified metrics. Each metric stream can include up to 1000 filters that include or exclude namespaces or specific metrics. Note that a single metric stream can either include or exclude metrics, but not both. If new metrics matching the filters in place are added, an existing metric stream will automatically include them.",
      "Traditionally, AWS customers relied on polling CloudWatch metrics using APIs, which underpinned all sorts of monitoring, alerting and cost management tools. Since the introduction of metric streams, customers have been able to create low-latency, scalable streams of metrics and filter them at a namespace level, for example to include or exclude entire namespaces. Where filtering at a more granular level is required, Metric Name Filtering in metric streams comes into play, providing more precise filtering capabilities.",
      "One useful feature of metric streams is that they allow you to create metric name filters on metrics which do not yet exist in your AWS account. For example, you can define metrics for the AWS/EC2 namespace if you know that an application will produce metrics for this namespace, even though that application may not yet be deployed in the account. In this case those metrics will not exist in your AWS account until the service is provisioned.",
      "This pattern also creates the required roles and policies for the services, following the principle of least privilege. The roles and policies can be expanded if additional services come into play."
    ]
  },
  "gitHub": {
    "template": {
      "repoURL": "https://github.com/aws-samples/serverless-patterns/tree/main/cloudwatch-metric-streams-firehose-terraform",
      "templateURL": "serverless-patterns/cloudwatch-metric-streams-firehose-terraform",
      "projectFolder": "cloudwatch-metric-streams-firehose-terraform",
      "templateFile": "main.tf"
    }
  },
  "resources": {
    "bullets": [
      {
        "text": "Use metric streams to continually stream CloudWatch metrics",
        "link": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html"
      },
      {
        "text": "Amazon Data Firehose - Streaming Data Pipeline",
        "link": "https://aws.amazon.com/firehose/"
      },
      {
        "text": "Amazon S3 - Cloud Object Storage",
        "link": "https://aws.amazon.com/s3/"
      }
    ]
  },
  "deploy": {
    "text": [
      "terraform init",
      "terraform plan",
      "terraform apply"
    ]
  },
  "testing": {
    "text": [
      "In the same account and region, launch an EC2 instance. You should be able to see metrics arrive in the S3 bucket within a few minutes."
    ]
  },
  "cleanup": {
    "text": [
      "terraform destroy"
    ]
  },
  "authors": [
    {
      "name": "Kiran Ramamurthy",
      "image": "n/a",
      "bio": "I am a Senior Partner Solutions Architect for Enterprise Transformation. I work predominantly with partners and specialize in migrations and modernization.",
      "linkedin": "kiran-ramamurthy-a96341b",
      "twitter": "n/a"
    }
  ]
}
Images/pattern.png: 314 KB (binary image, not rendered in this view)
Lines changed: 152 additions & 0 deletions
@@ -0,0 +1,152 @@
provider "aws" {
  region = var.region

  default_tags {
    tags = {
      metrics-test = "aws-metric-streams"
    }
  }
}

data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_caller_identity" "current" {}

# Define the role that allows Firehose to deliver metrics to S3
resource "aws_iam_role" "firehose_to_s3" {
  name_prefix        = "test_streams"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# Define a policy with permissions to write to S3
resource "aws_iam_role_policy" "firehose_to_s3" {
  name_prefix = "test_streams"
  role        = aws_iam_role.firehose_to_s3.id
  policy      = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "${aws_s3_bucket.metric_stream.arn}",
        "${aws_s3_bucket.metric_stream.arn}/*"
      ]
    }
  ]
}
EOF
}

# Define the role that CloudWatch Metric Streams assumes to reach Firehose
resource "aws_iam_role" "metric_stream_to_firehose" {
  name_prefix        = "test_streams"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "streams.metrics.cloudwatch.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# Allow the metric stream to put records into the Firehose delivery stream
resource "aws_iam_role_policy" "metric_stream_to_firehose" {
  name_prefix = "test_streams"
  role        = aws_iam_role.metric_stream_to_firehose.id
  policy      = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "${aws_kinesis_firehose_delivery_stream.metrics.arn}"
    }
  ]
}
EOF
}

# Create the S3 bucket to hold the metrics
resource "aws_s3_bucket" "metric_stream" {
  bucket = "test-streams-${data.aws_caller_identity.current.account_id}-${var.region}"

  tags = var.tags

  # 'true' allows Terraform to delete this bucket even if it is not empty.
  force_destroy = true
}

# Create the Amazon Data Firehose delivery stream
resource "aws_kinesis_firehose_delivery_stream" "metrics" {
  name        = "test_streams"
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn   = aws_iam_role.firehose_to_s3.arn
    bucket_arn = aws_s3_bucket.metric_stream.arn

    compression_format = var.s3_compression_format
  }
}

# Create the metric stream for the desired services
resource "aws_cloudwatch_metric_stream" "metric-stream" {
  name          = "test_streams"
  role_arn      = aws_iam_role.metric_stream_to_firehose.arn
  firehose_arn  = aws_kinesis_firehose_delivery_stream.metrics.arn
  output_format = var.output_format

  # There can be an exclude_filter block instead, but it is
  # mutually exclusive with include_filter: a stream can use
  # one of them at a time, not both.

  include_filter {
    namespace    = "AWS/EC2"
    metric_names = ["CPUUtilization", "NetworkOut"]
  }

  include_filter {
    namespace    = "AWS/RDS"
    metric_names = ["CPUUtilization", "DatabaseConnections"]
  }

  tags = var.tags
}
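main.tf references var.region, var.tags, var.s3_compression_format and var.output_format, which are defined in the pattern's variables file (one of the six changed files, not expanded in this view). A compatible sketch of those definitions, with names taken from main.tf and defaults assumed from the README (eu-west-2 default region, GZIP objects in S3, JSON records):
```
# Hypothetical sketch of the variables file; the actual file in the commit is not shown here.
variable "region" {
  type        = string
  description = "AWS Region to deploy to"
  default     = "eu-west-2"
}

variable "tags" {
  type        = map(string)
  description = "Tags applied to the created resources"
  default     = {}
}

variable "s3_compression_format" {
  type        = string
  description = "Compression format Firehose applies to objects written to S3"
  default     = "GZIP"
}

variable "output_format" {
  type        = string
  description = "Metric stream output format, e.g. json"
  default     = "json"
}
```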
