Ensure cluster is in a green state before stopping a pod #134
base: master
Conversation
The timeout is set to 8h before releasing the hook and forcing the ES node to shut down.
What happens if I'm deleting the deployment?
Good question, I didn't test that. In any case, I think you can force a delete to bypass the hooks.
👍 Any plans on merging this?
Works for me.
pires left a comment:
Please rebase master.
preStop:
  httpGet:
    path: /_cluster/health?wait_for_status=green&timeout=28800s
    port: 9300
shouldn't this be port: 9200?
Yes.
Without terminationGracePeriodSeconds: 28800, the preStop hook will be killed after the default 30s grace period, even if it is still waiting for the Elasticsearch API call to time out.
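For illustration, a minimal sketch of how the two settings could fit together (the container name es-data is hypothetical, and port 9200 follows the review comment above):

```yaml
# Sketch only, untested: pairs the 8h health-check hook with a matching
# termination grace period so the kubelet does not kill the pod after 30s.
spec:
  terminationGracePeriodSeconds: 28800   # must cover the hook's 8h timeout
  containers:
  - name: es-data                        # hypothetical container name
    lifecycle:
      preStop:
        httpGet:
          path: /_cluster/health?wait_for_status=green&timeout=28800s
          port: 9200                     # HTTP API port, not transport (9300)
```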
@deimosfr what about actively deallocating shards off that node as part of the lifecycle hook (e.g. setting exclude._ip and waiting for the node to become empty)?
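Untested, but a rough sketch of such a drain hook, assuming POD_IP is injected via the downward API, curl is available in the image, and the HTTP API listens on localhost:9200:

```yaml
lifecycle:
  preStop:
    exec:
      command:
      - /bin/sh
      - -c
      - |
        # Exclude this node's IP from shard allocation so ES drains it.
        curl -s -XPUT -H 'Content-Type: application/json' \
          localhost:9200/_cluster/settings \
          -d '{"transient":{"cluster.routing.allocation.exclude._ip":"'"$POD_IP"'"}}'
        # Wait until _cat/shards no longer reports any shard on this IP.
        while curl -s localhost:9200/_cat/shards | grep -q "$POD_IP"; do
          sleep 5
        done
```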
With validation webhooks, it may be possible, but it's a far-fetched thing to do here. Maybe an operator feature request?
Regarding the deallocation of shards in the preStop hook, does anyone have a working example? It would be a nice feature to have. Could something like https://github.com/kayrus/elk-kubernetes/blob/master/docker/elasticsearch/pre-stop-hook.sh be used?
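For completeness, the POD_IP variable assumed in the drain sketch above could be injected through the Kubernetes downward API:

```yaml
env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP   # exposes the pod's IP to the hook script
```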
It is not working in my case: the data pods scaled from 3 to 1 without waiting for the status to be "green".
@psalaberria002 @zhujinhe 1. Deallocate all shards before proceeding with the next one with