[Openstack-operators] attaching network cards to VMs taking a very long time

Matt Riedemann mriedemos at gmail.com
Wed May 16 21:09:42 UTC 2018


On 5/16/2018 10:30 AM, Radu Popescu | eMAG, Technology wrote:
> but I can see nova attaching the interface after a huge amount of time.

What specifically are you looking for in the logs when you see this?

Are you passing pre-created ports to nova for the attach, or are you passing a 
network ID so nova creates the port for you during the attach call?
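For reference, the two variants look roughly like this from the CLI (a dry-run sketch: the server name and UUIDs below are placeholders, and I'm assuming the openstackclient `server add port` / `server add network` commands, which are the OSC equivalents of `nova interface-attach --port-id` / `--net-id`):

```shell
SERVER="my-server"
PORT_ID="11111111-2222-3333-4444-555555555555"
NET_ID="66666666-7777-8888-9999-000000000000"

# Dry run: print the commands instead of calling a real cloud.
run() { echo "$@"; }

# Variant 1: pre-created port -- the neutron work happened up front,
# nova only has to bind and plug the existing port during the attach.
run openstack server add port "$SERVER" "$PORT_ID"

# Variant 2: network ID -- nova creates the port in neutron as part
# of the attach call, so port creation time is inside the attach.
run openstack server add network "$SERVER" "$NET_ID"
```

Knowing which variant you use narrows down where the time is going.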

This is where the ComputeManager calls the driver to plug the vif on the 
host:

https://github.com/openstack/nova/blob/stable/ocata/nova/compute/manager.py#L5187

Assuming you're using the libvirt driver, the host vif plug happens here:

https://github.com/openstack/nova/blob/stable/ocata/nova/virt/libvirt/driver.py#L1463

And the guest is updated here:

https://github.com/openstack/nova/blob/stable/ocata/nova/virt/libvirt/driver.py#L1472

vif_plugging_is_fatal and vif_plugging_timeout don't come into play here 
because we're attaching an interface to an existing server - or are you 
talking about during the initial creation of the guest, i.e. this code 
in the driver?

https://github.com/openstack/nova/blob/stable/ocata/nova/virt/libvirt/driver.py#L5257
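For reference, those two options live in the [DEFAULT] section of nova.conf on the compute node (defaults shown; again, they only apply to the initial boot path, not to attaching an interface to a running server):

```ini
[DEFAULT]
# If true, instance boot fails when the network-vif-plugged event
# does not arrive from neutron in time; if false, boot proceeds anyway.
vif_plugging_is_fatal = true
# Seconds to wait for the event before giving up (0 = wait forever).
vif_plugging_timeout = 300
```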

Are you seeing this in the logs for the given port?

https://github.com/openstack/nova/blob/stable/ocata/nova/compute/manager.py#L6875

If not, it could mean that neutron-server never sent the event to nova, 
so nova-compute timed out waiting for the vif plug callback event that 
tells us the port is ready and the server can be moved to ACTIVE status.
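A quick way to check is to grep the nova-compute log for the port's external event name (nova uses "network-vif-plugged-<port_id>"). A sketch, run here against illustrative sample lines rather than real nova output, whose exact format differs:

```shell
PORT_ID="11111111-2222-3333-4444-555555555555"
LOG="nova-compute.sample.log"

# Illustrative sample log -- real nova-compute lines are formatted
# differently; only the event-name substring matters for the grep.
cat > "$LOG" <<'EOF'
2018-05-16 10:30:01 INFO nova.compute.manager [req-aaa] Received event network-vif-plugged-11111111-2222-3333-4444-555555555555
EOF

# Did the vif-plugged event for this port arrive at nova-compute?
grep -c "network-vif-plugged-$PORT_ID" "$LOG"

# If nova instead gave up waiting, you'd expect a timeout warning;
# grep loosely, since the exact wording varies by release.
grep -c "Timeout waiting" "$LOG" || true
```

A hit on the first grep means the event arrived; a hit on the second (and not the first) points at neutron never sending it.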

The neutron-server logs should show when external events are sent to nova 
for a given port; you will probably need to trace the request IDs and 
compare the nova-compute and neutron-server logs for a given server 
create request.
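One way to do that comparison is to pull the timestamp of the "event sent" line out of the neutron-server log and the "event received" line out of the nova-compute log and diff them. A toy sketch over illustrative lines (the real log formats and messages differ; only the timestamp-prefix assumption matters here):

```python
from datetime import datetime

# Illustrative excerpts -- real nova/neutron log lines look different,
# but both start with a "YYYY-MM-DD HH:MM:SS" timestamp.
neutron_log = [
    "2018-05-16 10:30:05 INFO neutron.notifiers.nova sending event network-vif-plugged for port 1111",
]
nova_log = [
    "2018-05-16 10:30:07 INFO nova.compute.manager received event network-vif-plugged-1111",
]

def event_time(lines, needle):
    """Timestamp of the first line mentioning needle, or None."""
    for line in lines:
        if needle in line:
            return datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
    return None

port = "1111"
sent = event_time(neutron_log, "network-vif-plugged for port " + port)
received = event_time(nova_log, "network-vif-plugged-" + port)

if sent and received:
    print("event delay:", (received - sent).total_seconds(), "s")
elif sent:
    print("neutron sent the event but nova never logged receiving it")
else:
    print("neutron never sent the event for port", port)
```

A large delay points at neutron being slow to notify; "sent but never received" points at the nova notification path; "never sent" points at the neutron agent/plugin side.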

-- 

Thanks,

Matt
