This repository was archived by the owner on Dec 14, 2022. It is now read-only.
**CHANGELOG.md** (4 additions, 1 deletion)

```diff
@@ -9,6 +9,9 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
 - Now the Application-Id shown in Traffic-Monitor column: Subject resolves to the Application-Name [#69](https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk/issues/69)
 - Now it is possible to perform a Full-Text search (search for a part of value) on the Subject-Column in Traffic-Monitor [#70](https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk/issues/70)
 
+### Changed
+- ILM policies optimized to reduce the required disk space [#73](https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk/issues/73)
+
 ### Fixed
 - Indices are rolled over too often when an Index-Template is changed [#72](https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk/issues/72)
 
@@ -18,7 +21,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
 - Index-Rollover error when using regional indices [#66](https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk/issues/66)
 
 ### Changed
-- ILM policies optimized for the ideal index sizes [#68](https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk/issues/68)
+- ILM policies optimized for the ideal index sizes and number of shards [#68](https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk/issues/68)
 
 ### Added
 - Initial version of Update instructions. See [UPDATE.md](UPDATE.md)
```
**README.md** (24 additions, 15 deletions)

```diff
@@ -735,11 +735,11 @@ The configuration is defined here per data type (e.g. Summary, Details, Audit, .
 
 | Data-Type | Description | Hot (Size/Days) | Warm | Cold | Delete | Total |
 | :--- |:--- | :--- | :--- | :--- | :--- | :--- |
-|**Traffic-Summary**| Main index for traffic-monitor overview and primary dashboard | 30GB / 15 days | 15 days | 30 days | 10 days | 70 days |
-|**Traffic-Details**| Details in Traffic-Monitor for Policy, Headers and Payload reference | 30GB / 15 days | 7 days | 10 days | 5 days | 37 days |
-|**Traffic-Trace**| Trace-Messages belonging to an API-Request shown in Traffic-Monitor | 30GB / 60 days | 7 days | 10 days | 5 days | 82 days |
-|**General-Trace**| General trace messages, like Start- & Stop-Messages | 30GB / 60 days | 7 days | 10 days | 5 days | 82 days |
-|**Gateway-Monitoring**| System status information (CPU, HDD, etc.) from Event-Files | 30GB / 60 days | 15 days | 15 days | 15 days | 105 days |
+|**Traffic-Summary**| Main index for traffic-monitor overview and primary dashboard | 30GB / 15 days | 5 days | 10 days | 0 days | 30 days |
+|**Traffic-Details**| Details in Traffic-Monitor for Policy, Headers and Payload reference | 30GB / 15 days | 5 days | 10 days | 0 days | 30 days |
+|**Traffic-Trace**| Trace-Messages belonging to an API-Request shown in Traffic-Monitor | 30GB / 60 days | 5 days | 10 days | 0 days | 75 days |
+|**General-Trace**| General trace messages, like Start- & Stop-Messages | 30GB / 60 days | 5 days | 10 days | 0 days | 75 days |
+|**Gateway-Monitoring**| System status information (CPU, HDD, etc.) from Event-Files | 30GB / 60 days | 30 days | 15 days | 0 days | 105 days |
 |**Domain-Audit**| Domain Audit-Information as configured in Admin-Node-Manager | 10GB / 270 days | 270 days | 720 days | 15 days | >3 years |
 
 Please note:
```
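To make the Hot/Warm/Cold/Delete columns in the table above more concrete, here is a minimal sketch of an ILM policy that mirrors the new Traffic-Summary row. It is illustrative only: the policy name, host and exact phase timings are assumptions, not the configuration shipped with the solution.

```bash
# Illustrative ILM policy sketch – policy name, host and timings are assumptions.
# Hot: roll over at 30GB or after 15 days. ILM measures min_age from rollover,
# so ~5 days warm and ~10 days cold lead to deletion ~15 days after rollover.
curl -s -X PUT "http://localhost:9200/_ilm/policy/traffic-summary-example" \
  -H "Content-Type: application/json" -d '
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_size": "30gb", "max_age": "15d" } } },
      "warm":   { "min_age": "0d",  "actions": { "set_priority": { "priority": 50 } } },
      "cold":   { "min_age": "5d",  "actions": { "set_priority": { "priority": 0 } } },
      "delete": { "min_age": "15d", "actions": { "delete": {} } }
    }
  }
}'
```

Together with the up to 15 days an index spends in the hot phase, this adds up to roughly the 30 days of total retention listed for Traffic-Summary.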
```diff
@@ -780,16 +780,23 @@ Please note:
 #### Retention period
 
 The second important aspect for sizing is the retention period, which defines how long data should be available. Accordingly, disk space must be made available.
-The Traffic-Summary, Traffic-Details and Trace-Messages indices play a particularly important role here. The solution is delivered with default values, which you can read [here](#lifecycle-management). Based on these default values, which result in approx. 60 days of retention, the following disk space is required.
-
-| Volume per day | Stored documents | Total Disk-Space | Comment |
-| :--- | :--- | :--- | :--- |
-| up to 1 Mio (~15 TPS) | 60 Mio. | 30 GB | 2 Elasticsearch nodes, each with 15 GB |
-| up to 5 Mio (~60 TPS) | 300 Mio. | 60 GB | 2 Elasticsearch nodes, each with 30 GB |
-| up to 10 Mio (~120 TPS) | 600 Mio. | 160 GB | 2 Elasticsearch nodes, each with 80 GB |
-| up to 25 Mio (~300 TPS) | 1.500 Bil. | 500 GB | 3 Elasticsearch nodes, each with 200 GB |
-| up to 50 Mio (~600 TPS) | 3.000 Bil. | 1 TB | 4 Elasticsearch nodes, each with 250 GB |
-
+In particular, the Traffic-Summary and Traffic-Details indices become huge and therefore play a particularly important role here. The solution is delivered with default values, which you can read [here](#lifecycle-management). Based on these default values, which result in approx. 30 days of retention, the following disk space is required.
+
+| Volume per day | Total Disk-Space | Comment |
+| :--- | :--- | :--- |
+| up to 1 Mio (~15 TPS) | 60 GB | 2 Elasticsearch nodes, each with 50 GB |
+| up to 5 Mio (~60 TPS) | 250 GB | 2 Elasticsearch nodes, each with 150 GB |
+| up to 10 Mio (~120 TPS) | 500 GB | 2 Elasticsearch nodes, each with 250 GB |
+| up to 25 Mio (~300 TPS) | 1 TB | 3 Elasticsearch nodes, each with 500 GB |
+| up to 50 Mio (~600 TPS) | 2 TB | 4 Elasticsearch nodes, each with 500 GB |
+
+If the required storage space is unexpectedly higher, you can do the following:
+- Add an additional Elasticsearch cluster node at a later time.
+  - Elasticsearch will then start balancing the cluster by moving shards to this new node.
+  - This additional node will of course also improve the overall performance of the cluster.
+- Increase the disk space of an existing node.
+  - If the cluster state is green, you can stop a node, allocate more disk space, and then start it again.
+  - The available disk space is used automatically by allocating shards.
 
 ### Test infrastructure
 
```
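The added list above recommends either adding an Elasticsearch node or enlarging an existing node's disk. Before and after such an operation it is useful to check the cluster state and the per-node shard and disk allocation. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 without authentication; host, port and credentials will differ in a real deployment:

```bash
# The cluster state should be "green" (all primary and replica shards assigned)
# before stopping a node to allocate more disk space to it.
curl -s "http://localhost:9200/_cluster/health?pretty"

# Shows shard count and disk usage per node; after adding a new node you can
# watch Elasticsearch rebalance shards onto it with repeated calls.
curl -s "http://localhost:9200/_cat/allocation?v"
```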
```diff
@@ -801,6 +808,8 @@ The following test infrastructure was used to determine the [maximum capacity or
 | 4x | AWS EC2 t2.xlarge instance | 4 vCPUS | 16GB | 30GB | Logstash, API-Builder, Memcached | 7.10.0 | Logstash instances started as needed for the test. Logstash, API-Builder and Memcache always run together |
 | 5x | AWS EC2 t2.xlarge instance | 4 vCPUS | 16GB | 80GB | Elasticsearch | 7.10.0 | Elasticsearch instances started as needed. Kibana running on the first node |
 
+There is no specific reason that EC2 t2.xlarge instances were used for the test setup. The deciding factor was simply the number of CPU cores and 16 GB RAM.
```