11 changes: 11 additions & 0 deletions ansible/adhoc/lock_unlock_instances.yml
@@ -0,0 +1,11 @@
---

- hosts: "{{ target_hosts | default('all') }}"
Collaborator:
Nah, the "highest" group we should ever use is cluster. That is all instances controlled by the appliance. Hosts in all but not in cluster are ones we have maybe added to an inventory but don't want to control, e.g. external NFS, external Pulp, ...

TBF we should document that somewhere!
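
A minimal YAML inventory sketch of that convention, with hypothetical host and group names:

    all:
      children:
        cluster:          # everything the appliance controls
          children:
            login:
              hosts:
                login-0:
            compute:
              hosts:
                compute-0:
        external:         # in 'all' but not 'cluster': in inventory, never touched
          hosts:
            nfs-server: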

Collaborator:

I don't think target_hosts is necessary; this can just be hosts: cluster. If you want to run on only some hosts as an ad-hoc you can just use ansible-playbook --limit ..., which is what we do for e.g. the rebuild adhoc.

I know you suggested being able to "tweak" this for rebuild groups (presumably when running this from site, rather than as an ad-hoc), but TBH with the way that's passed via vars: at the moment you can't override it from inventory anyway, so I wouldn't bother.
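
A sketch of the playbook under that suggestion (not the final implementation; the task body is unchanged from this diff, with the truthy values also switched to false per the lint failures below):

    - hosts: cluster
      gather_facts: false
      become: false
      tasks:
        - name: Lock/Unlock instances
          openstack.cloud.server_action:
            action: "{{ server_action | default('lock') }}"
            server: "{{ inventory_hostname }}"
          delegate_to: localhost

Ad-hoc runs would then be restricted with e.g. ansible-playbook ansible/adhoc/lock_unlock_instances.yml --limit compute.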

  gather_facts: no

Check failure on line 4 in ansible/adhoc/lock_unlock_instances.yml (GitHub Actions / Lint): yaml[truthy]: Truthy value should be one of [false, true]
  become: no

Check failure on line 5 in ansible/adhoc/lock_unlock_instances.yml (GitHub Actions / Lint): yaml[truthy]: Truthy value should be one of [false, true]
  tasks:
    - name: Lock/Unlock instances
      openstack.cloud.server_action:
        action: "{{ server_action | default('lock') }}"
Collaborator:

The fact that the parameter is "action", not "state", seems a bit crackers, but that's not on you, so the var name makes sense, although I'm maybe tempted by appliances_server_action (see below). For this case I think having the default defined here does make sense TBH - I really can't see a way that having this set in inventory to provide differences per site or per instance would make sense.
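
If that rename were adopted, the task line would presumably become just:

    action: "{{ appliances_server_action | default('lock') }}"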

        server: "{{ inventory_hostname }}"
      delegate_to: localhost

Check failure on line 11 in ansible/adhoc/lock_unlock_instances.yml (GitHub Actions / Lint): yaml[new-line-at-end-of-file]: No new line character at the end of file
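
As the playbook stands in this diff, an ad-hoc unlock of the compute nodes would look something like:

    ansible-playbook ansible/adhoc/lock_unlock_instances.yml -e server_action=unlock -e target_hosts=compute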
6 changes: 6 additions & 0 deletions ansible/adhoc/rebuild-via-slurm.yml
@@ -8,6 +8,12 @@

# See docs/slurm-controlled-rebuild.md.

- name: Unlock compute instances for rebuild
  vars:
    server_action: unlock
    target_hosts: compute
  ansible.builtin.import_playbook: lock_unlock_instances.yml

- hosts: login
  run_once: true
  gather_facts: false
22 changes: 22 additions & 0 deletions ansible/safe-env.yml
@@ -0,0 +1,22 @@
---
- hosts: localhost
  gather_facts: no

Check failure on line 3 in ansible/safe-env.yml (GitHub Actions / Lint): yaml[truthy]: Truthy value should be one of [false, true]
  become: no

Check failure on line 4 in ansible/safe-env.yml (GitHub Actions / Lint): yaml[truthy]: Truthy value should be one of [false, true]
  vars:
    protected_environments:
Collaborator:

In common inventory please, so it can be overridden. And in keeping with naming for other similar things I'd:

  1. Define it in environments/common/group_vars/all/defaults.yml
  2. Call it appliances_protected_environments (note the plural, I don't like it TBH but it's what we've got)
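
Following that suggestion, the definition would become something like:

    # environments/common/group_vars/all/defaults.yml
    appliances_protected_environments:
      - prd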

      - prd
  tasks:
    - name: Confirm continuing if using production environment
      ansible.builtin.pause:
        prompt: |
          *************************************
          *  WARNING: PROTECTED ENVIRONMENT!  *
          *************************************
          Current environment: {{ appliances_environment_name }}
          Do you really want to continue (yes/no)?
      register: env_confirm_safe
      when:
        - appliances_environment_name in protected_environments
        - not (prd_continue | default(false) | bool)
Collaborator:

I said in the ticket I didn't like prd_continue. There are two options here:

  1. A better name. Not sure but initial ideas: appliances_protected_environment_continue or appliances_protected_environment_autoapprove (a la TF) or something
  2. Maybe the logic copes with appliances_protected_environment being falsy, and in that case it always continues? Then you can just set it from extravars or whatever without needing a 2nd var at all. (Note for extravars you need a | bool, as that lets people do -e foo=no and it works "as expected".)
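
Option 1 might look like the following (the variable name is only the reviewer's suggestion, not settled):

    when:
      - appliances_environment_name in appliances_protected_environments
      - not (appliances_protected_environment_autoapprove | default(false) | bool)

Option 2 would instead add the list itself as the first condition, e.g. "- appliances_protected_environments", so that a falsy (empty) value skips the prompt entirely.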

      failed_when: not (env_confirm_safe.user_input | bool)

Check failure on line 22 in ansible/safe-env.yml (GitHub Actions / Lint): yaml[new-line-at-end-of-file]: No new line character at the end of file
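
As the playbook stands in this diff, the prompt can be bypassed with the existing variable, e.g.:

    ansible-playbook ansible/site.yml -e prd_continue=true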
9 changes: 9 additions & 0 deletions ansible/site.yml
@@ -1,4 +1,13 @@
---

- ansible.builtin.import_playbook: safe-env.yml

- name: Lock all instances
  vars:
Collaborator:

See above.

    server_action: lock
    target_hosts: all
  ansible.builtin.import_playbook: adhoc/lock_unlock_instances.yml

- name: Run pre.yml hook
  vars:
    # hostvars not available here, so have to recalculate environment root: