This repository was archived by the owner on Dec 14, 2022. It is now read-only.

Commit 7eec28e

Author: Chris Wiechmann
Message: [skip ci] Small fixes / enhancements
Parent: e325438

1 file changed: README.md (7 additions, 5 deletions)
```diff
@@ -715,24 +715,24 @@ There are two important aspects for sizing the platform.
 
 The number of concurrent transactions per second (TPS) that the entire platform must handle. The platform must therefore be designed so that the events that occur on the basis of the transactions can be processed (Ingested) in real time. It is important to consider the permanent load. As a general rule, more capacity should be planned in order to also quickly enable catch-up operation after a downtime or maintenance.
 
-The following table explains what a single component, such as Logstash, Filebeat, ... can process in terms of TPS with INFO Trace-Messages enabled to stay real-time. Please understand these values as the absolute maximum, which do not give any margin upwards for downtimes or maintenance of the platform. More is not possible per component, so obviously more capacity must be planned for in production. The tests were performed on AWS EC2 instances with the default parameters for this solution. In order to be able to reliably determine the limiting component, all other components were adequate sized and only the component under test was as stated in the table.
+The following table explains what a single component, such as Logstash, Filebeat, ... can process in terms of TPS with INFO Trace-Messages enabled to stay real-time. Please understand these values as the absolute maximum, which do not give any margin upwards for downtimes or maintenance of the platform. More is not possible per component, so obviously more capacity must be planned for production. The tests were performed on a number of [AWS EC2 instances](#test-infrastructure) with the default parameters for this solution. In order to be able to reliably determine the limiting component, all other components were adequate sized and only the component under test was as stated in the table.
 
 | Component | Max. TPS | Host-Machine | Config | Comment |
 | :--- | :--- | :--- | :--- | :--- |
 | Filebeat | >300 | t2.xlarge | Standard | Test was limited by the TPS the Mock-Service was able to handle. Filebeat can very likely handle much more volume.|
-| Logstash | 530 | t2.xlarge | 6GB RAM for Logstash | Includes API-Builder & Memcache on the same machine running along with Logstash. Has processed ap. 3500 events per second. CPU is finally the limiting factor.|
+| Logstash | 530 | t2.xlarge | 6GB RAM for Logstash | Includes API-Builder & Memcache on the same machine running along with Logstash. Has processed ap. 3500 events per second. CPU is finally the limiting factor. A production setup should have two Logstash nodes for high availability, which provides sufficient capacity for most requirements.|
 | 2 Elasticsearch nodes | 480 | t2.xlarge | 8GB RAM for each node | Starting with a Two-Node cluster as this should be the mininum for a production setup. Kibana running on the first node.|
 | 3 Elasticsearch nodes | 740 | t2.xlarge | 8GB RAM for each node | Data is searchable with a slight delay, but ingesting is not falling behind real-time in general up to the max. TPS.|
 | 4 Elasticsearch nodes | 1010 | t2.xlarge | 8GB RAM for each node | |
 
 Please note:
-- Logstash, API-Builder, Filebeat (for monitoring only) and Kibana are load balanced across all available Elasticsearch nodes. An external Load-Balancer is not required as this is handled internally by each the Elasticsearch clients.
-- The solution scales up to 5 Elasticsearch nodes as indicies are stored with 5 shards. More will require custom configuration, please create an issue if you have this requirement.
+- Logstash, API-Builder, Filebeat (for monitoring only) and Kibana are load balanced across all available Elasticsearch nodes. An external Load-Balancer is not required as this is handled internally by each of the Elasticsearch clients.
+- do not size the Elasticsearch Cluster-Node too large. The servers should not have more than 32GB memory, because after that the memory management kills the advantage again. It is better to add another server. See the [Test-Infrastructure](#test-infrastructure) for reference.
 
 #### Rentention period
 
 The second important aspect for sizing is the rentention period, which defines how long data should be available. Accordingly, disk space must be made available.
-The traffic summary, traffic details and trace messages play a particularly important role here. The solution is delivered with default values which you can read here. Based on the these default values which result in ap. 60 days the following disk space is required.
+The Traffic-Summary, Traffic-Details and Trace-Messages indicies play a particularly important role here. The solution is delivered with default values which you can read [here](#lifecycle-management). Based on the these default values which result in ap. 60 days the following disk space is required.
 
 | Volume per day | Stored documents | Total Disk-Space | Comment |
 | :--- | :--- | :--- | :--- |
```
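The retention hunk above fixes the throughput figures and the roughly 60-day default retention, but leaves the disk-space arithmetic implicit. The sketch below only illustrates how such an estimate can be derived: `estimate_storage` is an illustrative helper, the events-per-transaction factor is loosely derived from the Logstash row above (about 3500 events per second at 530 TPS), and `avg_doc_kb` is a placeholder assumption rather than a value measured for this solution; replica shards and traffic peaks are not considered.

```python
# Rough disk-space estimate for a constant 24/7 load over the retention window.
# Assumptions (not measured values of this solution): ~7 events per transaction
# (from ~3500 events/s at 530 TPS in the table above) and an average indexed
# document size of 2 KB. Replica shards are not included.

def estimate_storage(tps, retention_days=60, events_per_txn=7, avg_doc_kb=2.0):
    """Return (stored documents, disk space in GB) for the retention window."""
    docs_per_day = tps * 86_400 * events_per_txn        # 86,400 seconds per day
    total_docs = docs_per_day * retention_days
    disk_gb = total_docs * avg_doc_kb / (1024 * 1024)   # KB -> GB
    return total_docs, disk_gb

docs, gb = estimate_storage(tps=300)
print(f"~{docs:,} documents, ~{gb:,.0f} GB of primary data over 60 days")
```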
```diff
@@ -745,6 +745,8 @@ The traffic summary, traffic details and trace messages play a particularly impo
 
 ### Test infrastructure
 
+The following test infrastructure was used to determine the [maximum capacity or throughput](#transactions-per-second). The information is presented here so that you can derive your own sizing from it.
+
 | Count | Node/Instance |CPUS | RAM |Disc | Component | Version | Comment |
 | :---: | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
 | 6x | AWS EC2 t2.xlarge instance | 4 vCPUS | 16GB | 30GB | API-Management | 7.7-July| 6 API-Gateways Classical deployment, Simulate Traffic based on Test-Case |
```
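As a companion to the measured figures referenced above, the following sketch shows one way to turn the per-cluster maximums from the TPS table (480 / 740 / 1010 TPS for 2 / 3 / 4 t2.xlarge Elasticsearch nodes) into a node count for a target load. `elasticsearch_nodes_for` is a hypothetical helper, and the 30% headroom factor is an assumed safety margin for catch-up after downtime or maintenance, not a value prescribed by the solution.

```python
# Pick the smallest measured Elasticsearch cluster size that covers a target
# sustained TPS. Figures are the measured maximums from the table above
# (t2.xlarge, 8GB RAM per node); the 30% headroom is an assumed safety margin.

MEASURED_MAX_TPS = {2: 480, 3: 740, 4: 1010}   # nodes -> max. real-time TPS

def elasticsearch_nodes_for(target_tps: int, headroom: float = 0.3) -> int:
    required = target_tps * (1 + headroom)
    for nodes, max_tps in sorted(MEASURED_MAX_TPS.items()):
        if max_tps >= required:
            return nodes
    raise ValueError("Target exceeds the measured range; plan a larger cluster or run your own tests.")

print(elasticsearch_nodes_for(400))   # -> 3 (400 TPS + 30% headroom = 520 > 480)
```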
