There are two important aspects for sizing the platform.

The number of concurrent transactions per second (TPS) that the entire platform must handle. The platform must be designed so that the events generated by these transactions can be ingested in real time, and it is the sustained load that matters here. As a general rule, plan for more capacity than the sustained load so that the platform can quickly catch up after a downtime or maintenance window.
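
To make the catch-up consideration concrete, here is a small back-of-the-envelope sketch. All numbers in it (sustained TPS, downtime, available capacity) are assumed for illustration and are not measurements from this solution:

```python
# Illustrative catch-up calculation with assumed numbers (not measured
# values from this solution): how long does it take to work off the
# backlog that accumulates during a downtime?

sustained_tps = 300          # assumed sustained transactions per second
downtime_minutes = 60        # assumed outage or maintenance window
platform_capacity_tps = 450  # assumed maximum ingest capacity (~50% headroom)

backlog_transactions = sustained_tps * downtime_minutes * 60
spare_tps = platform_capacity_tps - sustained_tps  # capacity left while live traffic continues
catchup_minutes = backlog_transactions / spare_tps / 60

print(f"Backlog after downtime: {backlog_transactions:,} transactions")
print(f"Catch-up time at {platform_capacity_tps} TPS capacity: {catchup_minutes:.0f} minutes")
```

With these assumed figures, a one-hour outage leaves a backlog that takes about two hours to work off while live traffic continues, which illustrates why headroom matters.
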
The following table shows what a single component, such as Logstash, Filebeat, etc., can process in terms of TPS with INFO trace messages enabled while staying real-time. Please treat these values as the absolute maximum; they leave no margin for downtimes or maintenance of the platform. More is not possible per component, so additional capacity must be planned for production. The tests were performed on a number of [AWS EC2 instances](#test-infrastructure) with the default parameters for this solution. To reliably determine the limiting component, all other components were sized adequately and only the component under test was configured as stated in the table.

| Component | Max. TPS | EC2 instance type | Sizing | Comment |
| :--- | :--- | :--- | :--- | :--- |
| Filebeat | >300 | t2.xlarge | Standard | Test was limited by the TPS the Mock-Service was able to handle. Filebeat can very likely handle much more volume. |
| Logstash | 530 | t2.xlarge | 6GB RAM for Logstash | Includes API-Builder & Memcache running on the same machine alongside Logstash. Processed approx. 3500 events per second. CPU is ultimately the limiting factor. A production setup should have two Logstash nodes for high availability, which provides sufficient capacity for most requirements. |
| 2 Elasticsearch nodes | 480 | t2.xlarge | 8GB RAM for each node | Starting with a two-node cluster, as this should be the minimum for a production setup. Kibana running on the first node. |
| 3 Elasticsearch nodes | 740 | t2.xlarge | 8GB RAM for each node | Data becomes searchable with a slight delay, but ingestion generally does not fall behind real-time up to the max. TPS. |
| 4 Elasticsearch nodes | 1010 | t2.xlarge | 8GB RAM for each node ||
Please note:
- Logstash, API-Builder, Filebeat (for monitoring only) and Kibana are load-balanced across all available Elasticsearch nodes. An external load balancer is not required, as this is handled internally by each of the Elasticsearch clients.
- Do not size the Elasticsearch cluster nodes too large. The servers should not have more than 32GB of memory, because beyond that the JVM's memory management negates the advantage. It is better to add another server. See the [Test-Infrastructure](#test-infrastructure) for reference.
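
As a rough reading aid, the Elasticsearch figures above (480, 740 and 1010 TPS for 2, 3 and 4 nodes) scale roughly linearly at about 260-270 TPS per additional node. The sketch below extrapolates from these measurements to estimate a node count for a given sustained load; the linear-scaling assumption and the 50% headroom factor are assumptions for illustration, not guarantees:

```python
# Rough extrapolation of the measured figures from the table above:
# 2 nodes ~ 480 TPS, 3 nodes ~ 740 TPS, 4 nodes ~ 1010 TPS.
# Linear scaling and the headroom factor are assumptions for illustration.
import math

MEASURED = {2: 480, 3: 740, 4: 1010}                  # nodes -> max. TPS from the table
TPS_PER_EXTRA_NODE = (MEASURED[4] - MEASURED[2]) / 2  # ~265 TPS per additional node
HEADROOM = 0.5                                        # plan ~50% spare capacity for catch-up

def estimate_es_nodes(sustained_tps: float) -> int:
    """Estimate the Elasticsearch node count for a sustained TPS target."""
    required = sustained_tps * (1 + HEADROOM)
    if required <= MEASURED[2]:
        return 2                                      # two nodes are the production minimum anyway
    extra = math.ceil((required - MEASURED[2]) / TPS_PER_EXTRA_NODE)
    return 2 + extra

for tps in (200, 400, 600):
    print(f"{tps} TPS sustained -> ~{estimate_es_nodes(tps)} Elasticsearch nodes")
```

Keep in mind that the Logstash/API-Builder tier has its own limit of roughly 530 TPS per instance (see the table above), so it has to be scaled alongside Elasticsearch.
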
#### Retention period
The second important aspect for sizing is the retention period, which defines how long data should be available. Disk space must be provided accordingly.
The Traffic-Summary, Traffic-Details and Trace-Messages indices play a particularly important role here. The solution is delivered with default values, which you can read about [here](#lifecycle-management). Based on these default values, which result in approx. 60 days of retention, the following disk space is required.

| Volume per day | Stored documents | Total Disk-Space | Comment |
| :--- | :--- | :--- | :--- |
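
To illustrate how such a disk-space figure is derived, the following sketch multiplies documents per day by an average document size, the retention period and the number of copies. The document counts, average sizes and replica factor below are assumed placeholders for illustration, not the measured values of this solution:

```python
# Illustrative disk-space estimate for a ~60-day retention period.
# Document counts, average document sizes and the replica factor are
# assumed placeholders - substitute your own measurements.

RETENTION_DAYS = 60
COPIES = 2  # assumed: one primary + one replica per shard

indices = {
    # index: (documents per day, assumed average document size in bytes)
    "traffic-summary": (10_000_000, 1_000),
    "traffic-details": (10_000_000, 3_000),
    "trace-messages":  (30_000_000, 500),
}

total_bytes = 0
for name, (docs_per_day, avg_bytes) in indices.items():
    size = docs_per_day * avg_bytes * RETENTION_DAYS * COPIES
    total_bytes += size
    print(f"{name:16s} ~{size / 1e12:.1f} TB")

print(f"{'Total':16s} ~{total_bytes / 1e12:.1f} TB")
```

Changing the trace level or the retention defaults described under [lifecycle management](#lifecycle-management) changes these numbers significantly, so re-run the estimate with your own traffic figures.
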
### Test infrastructure
The following test infrastructure was used to determine the [maximum capacity or throughput](#transactions-per-second). The information is presented here so that you can derive your own sizing from it.