No, as it is not a 7.x version and not tested with this solution at all.

### Can I run Filebeat as a native process?

Yes, you can run Filebeat natively instead of as a Docker container if you prefer. This is supported as long as you use the provided filebeat.yml with the correct configuration. However, the solution has been tested with Filebeat 7.9, 7.10, 7.11 & 7.12. If you see performance degradation with an 'old' Filebeat, the first advice would be to upgrade the Filebeat version.

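For reference, a minimal sketch of starting a native Filebeat against the provided configuration; the installation path below is an illustrative assumption, not taken from this repository:

```bash
# Validate the configuration first, then run in the foreground for a test:
# -e logs to stderr, -c points to the repository's filebeat.yml.
sudo ./filebeat test config -c /etc/filebeat/filebeat.yml
sudo ./filebeat -e -c /etc/filebeat/filebeat.yml
```
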
### What happens if Logstash is down for a while?

In case Logstash is stopped, Filebeat can no longer send events to Logstash. However, Filebeat remembers the position of the last sent events in each file and resumes at that position when Logstash is available again. Of course, you have to make sure the files are retained long enough. For instance, the OpenTraffic event logs are configured by default to 1GB, which is sufficient for around 30 minutes at 300 TPS. If necessary, you should increase the disk space.

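As a back-of-the-envelope check of that 30-minute figure (the average event size is an implied assumption, not a measured value):

```bash
# 300 TPS for 30 minutes = 300 * 1800 = 540,000 events; 1 GB spread across
# 540,000 events implies an average OpenTraffic event size of roughly 2 KB.
echo $(( 1024 * 1024 * 1024 / (300 * 30 * 60) ))   # ~1988 bytes per event
```
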
### What happens if Filebeat is down for a while?

Filebeat stores the current position in a file (the registry), recording which events have already been passed on. If Filebeat is shut down cleanly and then restarted, it will pick up where it left off.
Tests have shown that a few events may still be lost.

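To inspect the registry on a native installation, something like the following can help; the path is the default data directory of a 7.x deb/rpm install and is an assumption for other setups:

```bash
# The registry lives under Filebeat's data path; for deb/rpm installs this
# defaults to /var/lib/filebeat (verify path.data in your environment).
ls -l /var/lib/filebeat/registry/filebeat/
```
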
### Can I run multiple Logstash instances?

Yes, Logstash is stateless (besides what is stored in Memcache), hence you can run multiple Logstash instances.

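A minimal filebeat.yml sketch for distributing events across several Logstash instances; the hostnames are placeholders, while `hosts` and `loadbalance` are standard Filebeat output options:

```bash
# Print a reference excerpt for filebeat.yml (do not blindly append it, as the
# provided configuration already contains an output.logstash section).
cat <<'EOF'
output.logstash:
  hosts: ["logstash-1:5044", "logstash-2:5044"]
  loadbalance: true
EOF
```
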
### Can I run multiple API-Builder instances?

Yes, just provide the same configuration for each API-Builder Docker container. It's recommended to run the API-Builder process alongside Logstash & Memcache, as these components work closely together. A low latency between Logstash & Memcache is especially crucial.

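With docker-compose this could look as follows; the service name is hypothetical, check the docker-compose.yml of your installation for the real one:

```bash
# Start a second API-Builder container next to the existing one.
docker-compose up -d --scale apibuilder4elastic=2
```
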
### Are there any limits in terms of TPS?

The solution was tested up to a maximum of 1,000 TPS and was able to process the events that occurred in real time. You can read more details in the [Infrastructure Sizing](#size-your-infrastructure) section.
The solution is designed to scale to 5 Elasticsearch nodes, as the indices are configured with 5 shards. This means that Elasticsearch distributes each index evenly across the available nodes.
More Elasticsearch nodes can still have an impact, as there are a number of indices (e.g. per region, per type), which allows Elasticsearch to balance the cluster even better.
So in principle, there is no limit. Of course, components like Logstash and the API-Builder then need to scale as well.

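You can verify how the shards are distributed across the nodes with Elasticsearch's `_cat` API; the index pattern is an assumption, adjust it to the indices created in your environment:

```bash
# Lists every shard (primary and replica) with the node it is assigned to.
curl -s "http://localhost:9200/_cat/shards/apigw-*?v"
```
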
Yes. Trace-Messages you see in the Traffic-Monitor for an API-Request are retrieved from Elasticsearch.

No, but this file is used by docker-compose only. If you would like to avoid storing sensitive data, the recommended approach is to deploy the solution in a Kubernetes environment and store passwords in the Secure-Vault. With that, sensitive data is injected into the containers from a secure source.

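As an illustration, credentials could be kept in a plain Kubernetes Secret and injected as environment variables; the secret and key names here are made up for the example:

```bash
# Create the secret once; containers then reference it via envFrom/secretKeyRef.
kubectl create secret generic elk-credentials \
  --from-literal=ELASTICSEARCH_USERNAME=elastic \
  --from-literal=ELASTICSEARCH_PASSWORD='change-me'
```
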
### Why do Kibana dashboards show different volumes than the Traffic Monitor?

If the Traffic-Monitor dashboard shows significantly more transactions than the Kibana dashboards, for example for the last 10 minutes, the ingestion rate is very likely not high enough. In other words, the ELK stack cannot process the data fast enough. If the behavior does not recover, you may need to add more Logstash and/or Elasticsearch nodes.

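One way to confirm this is Logstash's monitoring API, which is enabled by default; host and port below are the Logstash defaults:

```bash
# Shows counters for received/filtered/emitted events plus queue back-pressure.
curl -s "http://localhost:9600/_node/stats/events?pretty"
```
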
### Can I increase the disk volume on an Elasticsearch machine?

Yes, all indices are configured to have one replica, and therefore it's safe to stop a machine, increase the disk space and restart it. Please make sure the cluster state is green before stopping an instance. The increased volume will be automatically detected by Elasticsearch and used to assign more shards to it. It's recommended to assign the same disk space to all Elasticsearch cluster nodes.

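Checking the cluster state before stopping a node, assuming an unsecured cluster on the default port (add credentials and TLS options if security is enabled):

```bash
# "status" : "green" means all primary and replica shards are assigned.
curl -s "http://localhost:9200/_cluster/health?pretty"
```
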
### During catch-up, what should be the total event rate for Filebeat?

Tests show that Filebeat can send more than 3,000 events per second to Logstash instances. Of course, this number also depends on the event type; trace messages, for example, can be processed faster than OpenTraffic event logs. You can find an example [here](#imgs/stack-monitoring/stack-monitoring-beats-instances.png).

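To observe the rate on a running Filebeat, its local HTTP endpoint can be used; it only exists if `http.enabled: true` is set in filebeat.yml, and the port below is the Filebeat default:

```bash
# libbeat.output.events.acked, sampled over time, gives events per second.
curl -s "http://localhost:5066/stats?pretty"
```
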
### Can I use AWS Elasticsearch service?
### Filebeat is reporting errors?

When Filebeat reports errors like: `Harvester could not be started on new file: /var/log/opentraffic/group-2_instance-1_traffic.log, Err: error setting up harvester: Harvester setup failed. Unexpected file opening error: file info is not identical with opened file. Aborting harvesting and retrying file later again`, the registry might be corrupt for some reason. When this happens, Filebeat briefly stops sending events, which may make it difficult to stay real-time at very high volumes.

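One possible remediation, as a hedged sketch rather than an official fix: stop Filebeat, move the registry aside and restart. Note that Filebeat then re-reads the files from scratch, which can produce duplicate events; weigh that against the error. The registry path assumes a native deb/rpm installation:

```bash
systemctl stop filebeat
# Keep the corrupt registry for later analysis instead of deleting it.
mv /var/lib/filebeat/registry /var/lib/filebeat/registry.corrupt.$(date +%F)
systemctl start filebeat
```
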
### Why do only Administrators see JMS-Traffic?

JMS requests are not controlled by the API-Manager; therefore there is no association with an organization, and the result cannot be restricted accordingly. If you want, you can disable user authorization completely by setting the parameter `enableUserAuthorization` to false. See here: https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk#customize-user-authorization. Alternatively, you can use the parameter `UNRESTRICTED_PERMISSIONS` to configure which users should see the entire traffic.

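In a docker-compose setup, these parameters would typically go into the .env file; the value for `UNRESTRICTED_PERMISSIONS` is left as a placeholder since its exact format is described in the linked documentation:

```bash
# .env excerpt (illustrative): either switch user authorization off entirely...
enableUserAuthorization=false
# ...or keep it enabled and grant selected users full visibility instead:
# UNRESTRICTED_PERMISSIONS=<see-linked-documentation-for-the-format>
```
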
### Is EMT-Mode supported?

Yes, the solution can be used when the API-Management platform is deployed in a Docker orchestration platform in [EMT mode](https://docs.axway.com/bundle/axway-open-docs/page/docs/apim_installation/apigw_containers/container_intro/index.html). With that, it is for instance possible to see traffic in the Traffic-Monitor from containers (PODs) that have already been removed again. However, there is a limitation here that the server name is not displayed correctly. [Learn more](https://github.com/Axway-API-Management-Plus/apigateway-openlogging-elk/issues/114#issuecomment-864941677) on this limitation.