Hi,
On Friday, 27 November 2020 14:57:27 CET, Oliver Wenz wrote:
So this looks like an issue while sending the notification to Nova. You should now check the Nova logs to see why it returned 422.
The relevant nova-compute log on the node where I try to start the instance looks like this:
Nov 27 13:33:24 bc1blade15 nova-compute[4240]: 2020-11-27 13:33:24.870 4240 INFO nova.compute.claims [req-4125d882-0753-4a0a-844c-5a948681ffa1 920e739127a14018a55fb4422b0885e7 0f14905dab5546e0adec2b56c0f6be88 - default default] [instance: 20a23a18-0d13-4aba-b0be-37e243b21336] Claim successful on node bc1blade15.openstack.local
Nov 27 13:33:26 bc1blade15 nova-compute[4240]: 2020-11-27 13:33:26.151 4240 INFO nova.virt.libvirt.driver [req-4125d882-0753-4a0a-844c-5a948681ffa1 920e739127a14018a55fb4422b0885e7 0f14905dab5546e0adec2b56c0f6be88 - default default] [instance: 20a23a18-0d13-4aba-b0be-37e243b21336] Creating image
Nov 27 13:33:31 bc1blade15 nova-compute[4240]: 2020-11-27 13:33:31.463 4240 INFO os_vif [req-4125d882-0753-4a0a-844c-5a948681ffa1 920e739127a14018a55fb4422b0885e7 0f14905dab5546e0adec2b56c0f6be88 - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:57:d5:f4,bridge_name='brq497a58ad-e5',has_traffic_filtering=True,id=c2e13a92-86bc-4c8e-ad74-1cad0a6bcffc,network=Network(497a58ad-e57c-4bf1-a1c7-dad58f4795ec),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tapc2e13a92-86')
Nov 27 13:33:33 bc1blade15 nova-compute[4240]: 2020-11-27 13:33:33.961 4240 INFO nova.compute.manager [req-633ff74b-b16d-4063-9f54-5ee618ea2bb9 - - - - -] [instance: 20a23a18-0d13-4aba-b0be-37e243b21336] VM Started (Lifecycle Event)
Nov 27 13:33:34 bc1blade15 nova-compute[4240]: 2020-11-27 13:33:34.104 4240 INFO nova.compute.manager [req-633ff74b-b16d-4063-9f54-5ee618ea2bb9 - - - - -] [instance: 20a23a18-0d13-4aba-b0be-37e243b21336] VM Paused (Lifecycle Event)
Nov 27 13:33:34 bc1blade15 nova-compute[4240]: 2020-11-27 13:33:34.388 4240 INFO nova.compute.manager [req-633ff74b-b16d-4063-9f54-5ee618ea2bb9 - - - - -] [instance: 20a23a18-0d13-4aba-b0be-37e243b21336] During sync_power_state the instance has a pending task (spawning).
Again, the corresponding neutron-server log:
Nov 27 13:39:02 infra1-neutron-server-container-cbf4b105 neutron-server[87]: 2020-11-27 13:39:02.106 87 WARNING neutron.notifiers.nova [-] Nova event: {'server_uuid': '20a23a18-0d13-4aba-b0be-37e243b21336', 'name': 'network-vif-deleted', 'tag': 'c2e13a92-86bc-4c8e-ad74-1cad0a6bcffc', 'status': 'failed', 'code': 422} returned with failed status
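For context on that warning, the event it reports is a notification Neutron POSTs to Nova's os-server-external-events API; the 'code': 422 in the log is the status Nova returned. A minimal sketch of such a payload, with the values copied from the log above (this is an illustration of the request body shape, not Neutron's actual notifier code):

```python
import json

# Event payload as Neutron's nova notifier would send it for the
# warning above (values taken from the neutron-server log).
event = {
    "events": [{
        "server_uuid": "20a23a18-0d13-4aba-b0be-37e243b21336",
        "name": "network-vif-deleted",
        "tag": "c2e13a92-86bc-4c8e-ad74-1cad0a6bcffc",
    }]
}

# Neutron POSTs this JSON to nova-api's os-server-external-events
# endpoint; the 422 in the log is Nova's reply, so the reason for
# the failure should be visible on the nova-api side.
body = json.dumps(event)
```

Since the 422 is Nova's response, searching the nova-api logs for the server_uuid or the event name around that timestamp should show why the event was rejected.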
I also searched all the logs of the Nova-related services on the management node for the instance ID but didn't find it. So the nova-compute log seems to indicate that the instance can connect to the bridge successfully?!
Neutron sends this notification to nova-api, so you should check the nova-api logs. The whole workflow for spawning a VM is more or less as follows:
1. nova-compute asks Neutron for a port,
2. Neutron creates the port and binds it with some mechanism driver, so it gets a vif_type, e.g. "ovs", "linuxbridge", or some other,
3. Nova, based on those vif details, plugs the port into the proper bridge on the host and pauses the instance until Neutron has finished its job,
4. the Neutron L2 agent (linuxbridge or ovs) starts provisioning the port and reports to neutron-server when it is done,
5. if there are no remaining provisioning blocks for that port in the Neutron DB (there can also be one from the DHCP agent), Neutron sends a notification to nova-api that the port is ready,
6. Nova unpauses the VM.
In your case it seems that at step 5 Nova reports some error, and that is what you should check, IMO.

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
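The provisioning-block idea in steps 4-5 of that workflow can be sketched as follows (a simplified model for illustration; the class and entity names here are hypothetical, not Neutron's real API):

```python
# Simplified model of per-port provisioning blocks: the port is only
# reported ready to nova-api once every responsible agent (L2 agent,
# DHCP agent, ...) has cleared its block.

class PortProvisioning:
    def __init__(self, entities):
        # Blocks that must all be cleared before the port is ready.
        self.blocks = set(entities)
        self.nova_notified = False

    def provisioning_complete(self, entity):
        """Called when one agent finishes provisioning the port."""
        self.blocks.discard(entity)
        if not self.blocks and not self.nova_notified:
            # All blocks cleared: notify nova-api that the port is
            # ready, so Nova can unpause the VM (step 6).
            self.nova_notified = True
            return "network-vif-plugged"
        return None


port = PortProvisioning({"L2", "DHCP"})
print(port.provisioning_complete("DHCP"))  # None: L2 agent not done yet
print(port.provisioning_complete("L2"))    # network-vif-plugged
```

If either agent never clears its block, no notification is sent and the instance stays paused, which is why checking both the L2 agent and DHCP agent logs can be useful in addition to nova-api.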