@@ -116,13 +116,13 @@ configuring the baremetal-compute inventory.
      disk: 480
      vcpus: 256
      extra_specs:
-       "resources:CUSTOM_<YOUR_BAREMETAL_RESOURCE_CLASS>": 1
-       "resources:VCPU": 0
-       "resources:MEMORY_MB": 0
-       "resources:DISK_GB": 0
+       "resources:CUSTOM_<YOUR_BAREMETAL_RESOURCE_CLASS>": 1
+       "resources:VCPU": 0
+       "resources:MEMORY_MB": 0
+       "resources:DISK_GB": 0
 
-Enabling conntrack
-==================
+Enabling conntrack (ML2/OVS only)
+=================================
 
 Conntrack_helper will be required when UEFI booting on a cloud with ML2/OVS
 and using the iptables firewall_driver, otherwise TFTP traffic is dropped due
@@ -173,52 +173,81 @@ CLI
 Baremetal inventory
 ===================
 
-To begin enrolling nodes you will need to define them in the hosts file.
-
-.. code-block:: ini
-
-   [r1]
-   hv1 ipmi_address=10.1.28.16
-   hv2 ipmi_address=10.1.28.17
-   …
-
-   [baremetal-compute:children]
-   r1
-
-The baremetal nodes will also require some extra variables to be defined
-in the group_vars for your rack, these should include the BMC credentials
-and the Ironic driver you wish to use.
+The baremetal inventory is constructed from three different group types.
+The first is Kayobe's default baremetal compute group, [baremetal-compute],
+which contains all baremetal nodes, including both tenant and hypervisor
+nodes. This group acts as a parent for all baremetal nodes, and any
+configuration shared between all baremetal nodes is defined here.
+
+We will need to create a Kayobe group_vars file for the baremetal-compute
+group containing the variables we want to define for the group, for example
+in ``inventory/group_vars/baremetal-compute/ironic-vars``. The
+ironic_driver_info dict contains all variables to be templated into the
+driver_info property in Ironic, including the BMC address, username,
+password, IPA configuration and so on. We also define ironic_driver here,
+as all nodes currently use the Redfish driver.
 
 .. code-block:: yaml
 
    ironic_driver: redfish
 
    ironic_driver_info:
-     redfish_system_id: "{{ ironic_redfish_system_id }}"
-     redfish_address: "{{ ironic_redfish_address }}"
-     redfish_username: "{{ ironic_redfish_username }}"
-     redfish_password: "{{ ironic_redfish_password }}"
-     redfish_verify_ca: "{{ ironic_redfish_verify_ca }}"
-     ipmi_address: "{{ ipmi_address }}"
+     redfish_system_id: "{{ ironic_redfish_system_id }}"
+     redfish_address: "{{ ironic_redfish_address }}"
+     redfish_username: "{{ ironic_redfish_username }}"
+     redfish_password: "{{ ironic_redfish_password }}"
+     redfish_verify_ca: "{{ ironic_redfish_verify_ca }}"
+     ipmi_address: "{{ ipmi_address }}"
 
    ironic_properties:
-     capabilities: "{{ ironic_capabilities }}"
+     capabilities: "{{ ironic_capabilities }}"
 
-   ironic_resource_class: "example_resouce_class"
-   ironic_redfish_system_id: "/redfish/v1/Systems/System.Embedded.1"
-   ironic_redfish_verify_ca: "{{ inspector_rule_var_redfish_verify_ca }}"
    ironic_redfish_address: "{{ ipmi_address }}"
    ironic_redfish_username: "{{ inspector_redfish_username }}"
    ironic_redfish_password: "{{ inspector_redfish_password }}"
    ironic_capabilities: "boot_option:local,boot_mode:uefi"
 
-The typical layout for baremetal nodes are separated by racks, for instance
-in rack 1 we have the following configuration set up where the BMC addresses
-are defined for all nodes, and Redfish information such as username, passwords
-and the system ID are defined for the rack as a whole.
+The second group type is the hardware type that a baremetal node belongs to.
+These variables also live in the inventory, in
+``inventory/group_vars/baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>``.
+
+Variables specific to the hardware type include the resource class, which is
+used to associate the hardware type with the Nova flavor we defined earlier
+in the OpenStack config.
+
+.. code-block:: yaml
+
+   ironic_resource_class: "example_resource_class"
+   ironic_redfish_system_id: "example_system_id"
+   ironic_redfish_verify_ca: "{{ inspector_rule_var_redfish_verify_ca }}"
+
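+The resource class set here has to line up with the custom resource used in
+the flavor extra_specs shown earlier: Nova derives the Placement resource
+name by upper-casing the node's resource class and prefixing it with
+``CUSTOM_``. As a sketch of the pairing, using the example class above:
+
+.. code-block:: yaml
+
+   # Ironic node resource class (hardware type group_vars)
+   ironic_resource_class: "example_resource_class"
+
+   # Matching Nova flavor extra spec (openstack-config flavor definition)
+   "resources:CUSTOM_EXAMPLE_RESOURCE_CLASS": 1
+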
+The third group type is the rack where the node is installed. This is the
+group where the rack-specific networking configuration is defined and where
+the BMC address is entered as a host variable for each baremetal node.
+Nodes can now be entered directly into the hosts file as part of this group.
+
+.. code-block:: ini
+
+   [rack1]
+   hv001 ipmi_address=10.1.28.16
+   hv002 ipmi_address=10.1.28.17
+   …
+
+This rack group contains the baremetal hosts, but it also needs to be a
+child of the baremetal-compute and baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>
+groups so that the variables defined in those groups apply to the rack.
+
+.. code-block:: ini
 
-You can add more racks to the deployment by replicating the rack 1 example and
-adding that as an entry to the baremetal-compute group.
+   [baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>:children]
+   rack1
+   …
+
+   [baremetal-compute:children]
+   rack1
+   …
 
 Node enrollment
 ===============
@@ -230,85 +259,194 @@ invoking the Kayobe command
 
    (kayobe) $ kayobe baremetal compute register
 
-Following registration, the baremetal nodes can be inspected and made
-available for provisioning by Nova via the Kayobe commands
+All nodes that were not previously defined in Ironic should now be enrolled
+by this playbook, and should be in the ``manageable`` state if Ironic was
+able to reach the BMC of the node. We will need to inspect the baremetal
+nodes to gather information about their hardware to prepare for deployment.
+Kayobe provides an inspection workflow, which can be run using:
 
 .. code-block:: console
 
    (kayobe) $ kayobe baremetal compute inspect
+
+Inspection requires PXE booting the nodes into IPA. If the nodes were able to
+PXE boot successfully they will return to the ``manageable`` state. If an
+error occurred during PXE booting, the nodes will be left in the
+``inspect failed`` state, and any issues preventing them from booting or
+returning introspection data will need to be resolved before continuing. If
+the nodes inspected successfully, they can be cleaned and made available to
+Nova by running the provide workflow.
+
+.. code-block:: console
+
    (kayobe) $ kayobe baremetal compute provide
 
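+If you want to confirm where each node has ended up in the Ironic state
+machine before moving on, the provision states can be listed with the
+OpenStack CLI (assuming admin credentials are loaded in the environment):
+
+.. code-block:: console
+
+   (kayobe) $ openstack baremetal node list --fields name provision_state
+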
 Baremetal hypervisors
 =====================
 
-To deploy baremetal hypervisor nodes it will be neccessary to split out
-the nodes you wish to use as hypervisors and add it to the Kayobe compute
-group to ensure the hypervisor is configured as a compute node during
-host configure.
+Nodes that will not be dedicated as baremetal tenant nodes can be converted
+into hypervisors as required. StackHPC Kayobe configuration provides a
+workflow that provisions these nodes as baremetal instances and converts
+them into hypervisors. To begin the conversion we will need to define a
+child group of the rack group which will contain the baremetal nodes
+dedicated to compute hosts.
 
 .. code-block:: ini
 
-   [r1]
-   hv1 ipmi_address=10.1.28.16
+   [rack1]
+   hv001 ipmi_address=10.1.28.16
+   hv002 ipmi_address=10.1.28.17
+   …
 
-   [r1-hyp]
-   hv2 ipmi_address=10.1.28.17
+   [rack1-compute]
+   hv003 ipmi_address=10.1.28.18
+   hv004 ipmi_address=10.1.28.19
+   …
 
-   [r1:children]
-   r1-hyp
+   [rack1:children]
+   rack1-compute
 
-   [compute:children]
-   r1-hyp
+   [compute:children]
+   rack1-compute
 
-   [baremetal-compute:children]
-   r1
+The rack1-compute group shown above is also made a child of the Kayobe
+compute group so that Kayobe runs the compute Kolla workflows on these
+nodes during service deployment.
 
-The hypervisor nodes will also need to define hypervisor specific variables
-such as the image to be used, network to provision on and the availability zone.
-These can be defined under group_vars.
+You will also need to set up the Kayobe network configuration for the rack1
+group. In networks.yml you should define an admin network for the rack,
+using the correct CIDR for the rack being deployed. The configuration
+should resemble the following:
 
 .. code-block:: yaml
 
-   hypervisor_image: "37825714-27da-48e0-8887-d609349e703b"
-   key_name: "testing"
-   availability_zone: "nova"
-   baremetal_flavor: "baremetal-A"
-   baremetal_network: "rack-net"
-   auth:
-     auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
-     username: "{{ lookup('env', 'OS_USERNAME') }}"
-     password: "{{ lookup('env', 'OS_PASSWORD') }}"
-     project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"
+   physical_rack1_admin_oc_net_cidr: "172.16.208.128/27"
+   physical_rack1_admin_oc_net_gateway: "172.16.208.129"
+   physical_rack1_admin_oc_net_defroute: true
 
-To begin deploying these nodes as instances you will need to run the Ansible
-playbook deploy-baremetal-instance.yml.
+You will also need to configure a Neutron network for the racks to deploy
+instances on; we can configure this in openstack-config as before. We will
+need to define this network and associate a subnet with it for each rack we
+want to enroll in Ironic.
+
+.. code-block:: yaml
+
+   openstack_network_rack:
+     name: "rack-net"
+     project: "admin"
+     provider_network_type: "vlan"
+     provider_physical_network: "provider"
+     provider_segmentation_id: 450
+     shared: false
+     external: false
+     subnets:
+       - "{{ openstack_subnet_rack1 }}"
+
+   openstack_subnet_rack1:
+     name: "rack1-subnet"
+     project: "admin"
+     cidr: "172.16.208.128/27"
+     enable_dhcp: false
+     gateway_ip: "172.16.208.129"
+     allocation_pool_start: "172.16.208.130"
+     allocation_pool_end: "172.16.208.130"
+
+The subnet configuration largely resembles the Kayobe network configuration;
+however, we do not enable DHCP, and only a minimal allocation pool is
+defined, as we will be associating Neutron ports with our hypervisor
+instances by IP address to ensure they match up properly.
+
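+If you want to sanity-check the resulting Neutron resources once the
+openstack-config changes have been applied, something like the following can
+be used:
+
+.. code-block:: console
+
+   (kayobe) $ openstack network show rack-net
+   (kayobe) $ openstack subnet show rack1-subnet
+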
+Now we should ensure that the network interfaces are properly configured
+for the rack1-compute group. The interfaces should include the Kayobe admin
+network for rack1 and the Kayobe internal API network, and should be
+defined in the group_vars.
+
+.. code-block:: yaml
+
+   network_interfaces:
+     - "internal_net"
+     - "physical_rack1_admin_oc_net"
+
+   admin_oc_net_name: "physical_rack1_admin_oc_net"
+
+   physical_rack1_admin_oc_net_bridge_ports:
+     - eth0
+   physical_rack1_admin_oc_net_interface: br0
+
+   internal_net_interface: "br0.{{ internal_net_vlan }}"
+
+We should also ensure some variables are configured properly for our group,
+such as the hypervisor image. These variables can be defined anywhere in
+group_vars; we can place them in the ironic-vars file we used earlier for
+baremetal node registration.
+
+.. code-block:: yaml
+
+   hypervisor_image: "<image_uuid>"
+   key_name: "<key_name>"
+   availability_zone: "nova"
+   baremetal_flavor: "<ironic_flavor_name>"
+   baremetal_network: "rack-net"
+   auth:
+     auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
+     username: "{{ lookup('env', 'OS_USERNAME') }}"
+     password: "{{ lookup('env', 'OS_PASSWORD') }}"
+     project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"
+
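+Note that the auth lookups above assume the relevant OpenStack credentials
+are already present in the environment, for example by sourcing an openrc
+file before running the playbook (the path here is only illustrative):
+
+.. code-block:: console
+
+   (kayobe) $ source ~/openrc-admin
+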
+With these variables defined, we can now deploy the baremetal nodes as
+instances by invoking the deploy-baremetal-hypervisor Ansible playbook.
 
 .. code-block:: console
 
-   (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/deploy-baremetal-instance.yml
+   (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/deploy-baremetal-hypervisor.yml
+
+This playbook will update the Kayobe network allocations with the admin
+network addresses associated with that rack for each baremetal server. For
+example, in the case of rack 1 this will appear in network-allocations.yml
+as:
+
+.. code-block:: yaml
+
+   physical_rack1_admin_oc_net_ips:
+     hv003: 172.16.208.133
+     hv004: 172.16.208.134
+
+Once the network allocations have been updated, the playbook will create a
+Neutron port configured with the baremetal node's admin network address.
+The baremetal hypervisors will then be imaged and deployed with that
+Neutron port attached. You should ensure that each hypervisor node is
+associated with the correct baremetal instance; you can do this by running
+a baremetal node show on any given hypervisor node and comparing the server
+UUID to the metadata on the Nova instance.
 
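+For example, one way to spot-check a single hypervisor from rack 1 (the node
+name follows the examples above, and the instance UUID placeholder is
+illustrative) is:
+
+.. code-block:: console
+
+   (kayobe) $ openstack baremetal node show hv003 --fields instance_uuid
+   (kayobe) $ openstack server show <instance_uuid>
+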
-This playbook will update network allocations with the new baremetal hypervisor
-IP addresses, create a Neutron port corresponding to the address and deploy
-an image on the baremetal instance.
+Once the nodes are deployed, we can use Kayobe to configure them as compute
+hosts. Running kayobe overcloud host configure on these nodes will ensure
+that networking, packages and various other host configuration are set up:
+
+.. code-block:: console
+
+   (kayobe) $ kayobe overcloud host configure --limit baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>
+
+Following host configuration, we can begin deploying OpenStack services to
+the baremetal hypervisors by invoking kayobe overcloud service deploy. Nova
+services will be deployed to the baremetal hosts.
+
+.. code-block:: console
 
-When the playbook has finished and the rack is successfully imaged, they can be
-configured with ``kayobe overcloud host configure`` and kolla compute services
-can be deployed with ``kayobe overcloud service deploy``.
+   (kayobe) $ kayobe overcloud service deploy --kolla-limit baremetal-<YOUR_BAREMETAL_HARDWARE_TYPE>
 
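+Once the deployment has finished, you can confirm that the new hypervisors
+have registered themselves with Nova (assuming admin credentials are loaded
+in the environment):
+
+.. code-block:: console
+
+   (kayobe) $ openstack compute service list --service nova-compute
+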
 Un-enrolling hypervisors
 ========================
 
-To convert baremetal hypervisors into regular baremetal compute instances you will need
-to drain the hypervisor of all running compute instances, you should first invoke the
-nova-compute-disable playbook to ensure all Nova services on the baremetal node are disabled
-and compute instances will not be allocated to this node.
+To convert baremetal hypervisors back into regular baremetal compute
+instances you will need to drain the hypervisor of all running compute
+instances. You should first invoke the nova-compute-disable playbook to
+ensure all Nova services on the baremetal node are disabled and compute
+instances will not be allocated to this node.
 
 .. code-block:: console
 
    (kayobe) $ kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/nova-compute-disable.yml
 
-Now the Nova services are disabled you should also ensure any existing compute instances
-are moved elsewhere by invoking the nova-compute-drain playbook
+Now that the Nova services are disabled, you should also ensure any existing
+compute instances are moved elsewhere by invoking the nova-compute-drain
+playbook:
 
 .. code-block:: console
 