# Add safety checks to site production environments & Lock instances after site.yml #844
base: main
Changes from all commits: f7efa6c, ca47578, ceaba17, 36a10e7, ccde8b4, 675d3ba
**`adhoc/lock_unlock_instances.yml`** (new file, 11 lines):

```yaml
---
- hosts: "{{ target_hosts | default('all') }}"
```
> **Collaborator:** I don't think `target_hosts` is necessary, this can just be […]. I know you suggested being able to "tweak" this for rebuild groups (presumably when running this from site, rather than as an ad-hoc), but TBH with the way that's passed via […]
```yaml
  gather_facts: no
  become: no
  tasks:
    - name: Lock/Unlock instances
      openstack.cloud.server_action:
        action: "{{ server_action | default('lock') }}"
```
> **Collaborator:** the fact the parameter is `action`, not `state`, seems a bit crackers but that's not on you, so the var name makes sense, although I'm maybe tempted by […]
```yaml
        server: "{{ inventory_hostname }}"
      delegate_to: localhost
```
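Assuming the playbook sits at `adhoc/lock_unlock_instances.yml` (the path used by the `site.yml` import in this PR), an operator could unlock everything again after a maintenance window by overriding the play's defaulted vars. Shown with `echo` so the command is only printed; drop the `echo` to run it for real:

```shell
# Hypothetical ad-hoc invocation (relative path and extra-vars assumed from the diff):
# override the defaults to unlock rather than lock.
echo ansible-playbook adhoc/lock_unlock_instances.yml \
  -e server_action=unlock -e target_hosts=all
```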
**`safe-env.yml`** (new file, 22 lines):

```yaml
---
- hosts: localhost
  gather_facts: no
  become: no
  vars:
    protected_environments:
```
> **Collaborator:** In common inventory please, so it can be overridden. And in keeping with naming for other similar things I'd: […]
```yaml
      - prd
  tasks:
    - name: Confirm continuing if using production environment
      ansible.builtin.pause:
        prompt: |
          *************************************
          * WARNING: PROTECTED ENVIRONMENT!   *
          *************************************
          Current environment: {{ appliances_environment_name }}
          Do you really want to continue (yes/no)?
      register: env_confirm_safe
      when:
        - appliances_environment_name in protected_environments
        - not (prd_continue | default(false) | bool)
```
> **Collaborator:** I said in the ticket I didn't like […]
```yaml
      failed_when: not (env_confirm_safe.user_input | bool)
```
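The `when`/`failed_when` pair above is effectively a three-input gate: protected environment, `prd_continue` override, and the operator's answer. A throwaway shell sketch of that decision (variable values hypothetical, and a simplified stand-in for Ansible's `bool` filter semantics):

```shell
# Sketch only -- the real logic lives in the play above.
environment=prd            # appliances_environment_name
protected=prd              # protected_environments (single entry here)
prd_continue=false         # override that skips the prompt entirely
user_input=yes             # answer given at the pause prompt

if [ "$environment" = "$protected" ] && [ "$prd_continue" != "true" ]; then
  # prompt fired: only an affirmative answer lets the run continue
  case "$user_input" in
    yes|true|1|on) echo proceed ;;
    *)             echo abort; exit 1 ;;
  esac
else
  echo proceed   # unprotected env, or override set: no prompt at all
fi
```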
**`site.yml`** (modified):

```yaml
---
- ansible.builtin.import_playbook: safe-env.yml

- name: Lock all instances
  vars:
```
> **Collaborator:** See above.
```yaml
    server_action: lock
    target_hosts: all
  ansible.builtin.import_playbook: adhoc/lock_unlock_instances.yml

- name: Run pre.yml hook
  vars:
    # hostvars not available here, so have to recalculate environment root:
```
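With this wiring, an unattended run (e.g. CI) could pre-answer the safe-env check through the `prd_continue` override read by its `when` clause. A sketch of a hypothetical extra-vars file (filename assumed), passed with `ansible-playbook site.yml -e @ci-overrides.yml`:

```yaml
# ci-overrides.yml (hypothetical filename): skip the protected-environment
# prompt non-interactively; prd_continue is the override var from safe-env.yml
prd_continue: true
```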
> **Collaborator:** Nah, the "highest" group we should ever use is `cluster`. That is all instances controlled by the appliance. Hosts in `all` but not in `cluster` are ones we have maybe added into an inventory but don't want to control. E.g. external NFS, external Pulp, ...
>
> TBF we should document that somewhere!
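The `all` vs `cluster` distinction described above could look like this in a YAML inventory (hostnames hypothetical, sketch only):

```yaml
# Hosts under `cluster` are managed by the appliance; hosts in `all` but
# outside `cluster` (e.g. external NFS/Pulp) are inventoried for reference only.
all:
  children:
    cluster:
      hosts:
        login-0:
        compute-0:
  hosts:
    external-nfs:
    external-pulp:
```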