Hi, zhangcf,

The mailing list openstack@lists.openstack.org is deprecated, so I am adding this mail to the current list, openstack-discuss@lists.openstack.org.

I'm not sure I fully understand your problem, but I will ask some questions inline.

zhangcf9988@aliyun.com <zhangcf9988@aliyun.com> wrote on Tue, Apr 7, 2020 at 11:11 PM:
Hello, contributors to the OpenStack community:
    I've got a problem and I'd appreciate some advice. While running the OpenStack Pike release we encountered the following problem:
Deployment environment:
    OpenStack Pike was installed from yum packages on CentOS 7.7.1908; the nova version was 9.1.3.
Nova compute node:
    A Dell R730 server with three graphics cards of the same model, Nvidia 1050 Ti. The IOMMU was enabled and graphics card passthrough was configured.
    But we ran into a problem:
    Two PCI expansion (riser) boards are installed in the Dell R730; two of the Nvidia 1050 Ti cards sit on the first board and one sits on the second.
    Each Nvidia 1050 Ti card consists of two chips:
        Video chip (Product_id: 1c82, Vendor_id: 10de)
        Sound chip (Product_id: 0fb9, Vendor_id: 10de)
    When we create a VM, we want both the video chip and the sound chip of one card to be passed through to it. In our tests, however, the VM receives either two video chips or two sound chips rather than the matching video and sound chip of a single physical card. As a result, none of the three installed cards can be used normally: the first VM occupies video chips belonging to two different physical cards, so when the second VM is created libvirt detects that a chip of its assigned card is already in use, and creating the second VM always fails.
    I also tested removing one of the two Nvidia cards from the first riser board, so that the first and second boards each carried a single Nvidia card. In that configuration a newly created VM obtained a video chip and a sound chip together, i.e. one complete physical Nvidia card.
    We would like to be able to install several graphics cards of the same model. We tried editing the nova pci_devices table in the database to correct the wrong card assignment for the first VM, but the second VM still fails on creation, and the manually edited records then have to be cleaned up again. At the moment we see two possible solutions: modify the nova source code, or use graphics cards of different models. We would like to know whether this problem can be solved by modifying the source code.
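
    For reference, the assignments nova tracks can also be inspected directly in the nova database; the query below is only an illustration based on the standard nova schema, not the exact statements we used when editing the pci_devices table:

        USE nova;
        -- show which instance currently holds each Nvidia function
        SELECT address, product_id, vendor_id, status, instance_uuid
        FROM pci_devices
        WHERE vendor_id = '10de' AND deleted = 0;
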
    Thank you very much.

The following is the configuration of nova on the control node:
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 172.16.160.206
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:mqpass123@baseserver
available_filters=nova.scheduler.filters.all_filters
available_filters=nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
enabled_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:novapass123@baseserver/nova_api

[cinder]
os_region_name = RegionOne

[database]
connection = mysql+pymysql://nova:novapass123@baseserver/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
[keystone_authtoken]
memcached_servers = baseserver:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = novapass123

[neutron]
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = netpass123
service_metadata_proxy = true
metadata_proxy_shared_secret = 20190909

[oslo_concurrency]
lock_path=/var/lib/nova/tmp

[pci]
alias = { "name": "nvidia1050ti", "product_id": "1c82", "vendor_id": "10de", "device_type": "type-PCI" }
alias = { "name": "nvidia1050ti", "product_id": "0fb9", "vendor_id": "10de", "device_type": "type-PCI" }

Have you tried giving the sound card a different alias? If they share the same alias, nova probably just treats them as identical devices.
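
For example, something along these lines (the alias names nvidia1050ti_gpu and nvidia1050ti_audio are only illustrative, and I have not tested this on Pike):

[pci]
alias = { "name": "nvidia1050ti_gpu", "product_id": "1c82", "vendor_id": "10de", "device_type": "type-PCI" }
alias = { "name": "nvidia1050ti_audio", "product_id": "0fb9", "vendor_id": "10de", "device_type": "type-PCI" }

If you rename the aliases here, the alias entries in the compute node's nova.conf would need the same names.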
 

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password = novapass123

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip




#The following is the configuration for nova-compute:
/etc/nova/nova.conf:

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:mqpass123@baseserver
my_ip = 172.16.160.204
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
instance_name_template = instance-%(uuid)s
remove_unused_base_images=false

[api]
auth_strategy = keystone

[keystone_authtoken]
memcached_servers = baseserver:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = novapass123

[vnc]
enabled = True
#vncserver_listen = 0.0.0.0
#vncserver_proxyclient_address = $my_ip

vncserver_listen = 172.16.160.204
vncserver_proxyclient_address = 172.16.160.204
keymap=en-us

[glance]

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password = novapass123

[libvirt]
virt_type = kvm
#virt_type = qemu
use_usb_tablet=true

[filter_scheduler]
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateCoreFilter,AggregateDiskFilter,DifferentHostFilter,SameHostFilter, PciPassthroughFilter
available_filters = nova.scheduler.filters.all_filters

[PCI] 
passthrough_whitelist = [{ "vendor_id": "10de", "product_id": "1c82" }, {"vendor_id": "10de", "product_id": "0fb9"}]
alias = { "name": "nvidia1050ti",  "product_id": "1c82",  "vendor_id": "10de",  "device_type": "type-PCI"}
alias = { "name": "nvidia1050ti",  "product_id": "0fb9",  "vendor_id": "10de",  "device_type": "type-PCI"}

You didn't add the sound card chip to the whitelist.
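
In case it is useful, the whitelist can also match on PCI addresses, which makes it explicit that both functions of every card are exposed to nova. A rough sketch (the bus/slot numbers below are placeholders rather than the real addresses on your host, and if I remember correctly the address field accepts a "*" wildcard for the function part):

[pci]
passthrough_whitelist = { "address": "0000:04:00.*" }
passthrough_whitelist = { "address": "0000:05:00.*" }
passthrough_whitelist = { "address": "0000:82:00.*" }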
 


[neutron]
metadata_proxy_shared_secret = 20190909
service_metadata_proxy = true
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = netpass123


flavor: 
openstack flavor create --public --ram 10240 --disk 50 --vcpus 4  nvidia1050ti --id 1 \
 --property os_type=windows \
 --property hw_firmware_type=uefi \
 --property hw_machine_type=q35 \
 --property img_hide_hypervisor_id=true \
 --property os_secure_boot=required \
 --property hw_cpu_cores=4 \
 --property hw_cpu_sockets=1 \
 --property hw_cpu_threads=2 \
 --property pci_passthrough:alias='nvidia1050ti:2' 

If you have separate aliases for the sound and graphics chips, then you can request one of each explicitly.
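
For example, with the illustrative alias names from above, one video function and one audio function per instance could be requested like this (a sketch, not something I have verified on your deployment):

openstack flavor set nvidia1050ti \
 --property "pci_passthrough:alias"="nvidia1050ti_gpu:1,nvidia1050ti_audio:1"

The extra spec accepts a comma-separated list of alias:count pairs, so nova should then allocate exactly one device of each type rather than two devices that share the same alias.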
 



image:
openstack image create win10base1903UEFI --file win10base1903Q35V1.qcow2 --container-format bare --disk-format qcow2 --public \
 --property os_type=windows \
 --property hw_firmware_type=uefi \
 --property hw_machine_type=q35 \
 --property img_hide_hypervisor_id=true \
 --property os_secure_boot=required \
 --property hw_cpu_cores=4 \
 --property hw_cpu_sockets=1 \
 --property hw_cpu_threads=2 \
 --property pci_passthrough:alias='nvidia1050ti:2' 


#The following is the error reported by openstack-nova-compute:

2020-03-31 10:57:04.555 2643 INFO os_vif [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:51:25:34,bridge_name='brq8d505aa1-51',has_traffic_filtering=True,id=1a673f99-8a71-41be-a87e-40f3308fcfc3,network=Network(8d505aa1-5146-49db-8814-7dcef91bd1c1),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tap1a673f99-8a')
2020-03-31 10:57:04.723 2643 ERROR nova.virt.libvirt.guest [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] Error launching a defined domain with XML: <domain type='kvm'>
  <name>instance-7e5d283f-5ed5-4ebe-b145-d348e94addca</name>
  <uuid>7e5d283f-5ed5-4ebe-b145-d348e94addca</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="16.1.7-1.el7"/>
      <nova:name>test1</nova:name>
      <nova:creationTime>2020-03-31 02:57:04</nova:creationTime>
      <nova:flavor name="nvidia1050ti_50G">
        <nova:memory>10240</nova:memory>
        <nova:disk>50</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>4</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="68711849a60849d7807d71968ba6b275">demo</nova:user>
        <nova:project uuid="fa79259f6d2a442c86d4cd5e0e6c788c">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="24bb256e-9cea-4dcd-96e0-e6bd81156052"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>10485760</memory>
  <currentMemory unit='KiB'>10485760</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <shares>4096</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>RDO</entry>
      <entry name='product'>OpenStack Compute</entry>
      <entry name='version'>16.1.7-1.el7</entry>
      <entry name='serial'>30535dfb-90e3-401e-bbf7-f2c43120fb65</entry>
      <entry name='uuid'>7e5d283f-5ed5-4ebe-b145-d348e94addca</entry>
      <entry name='family'>Virtual Machine</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.6.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/instance-7e5d283f-5ed5-4ebe-b145-d348e94addca_VARS.fd</nvram>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='123456789ab'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/7e5d283f-5ed5-4ebe-b145-d348e94addca/disk'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:51:25:34'/>
      <source bridge='brq8d505aa1-51'/>
      <target dev='tap1a673f99-8a'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <log file='/var/lib/nova/instances/7e5d283f-5ed5-4ebe-b145-d348e94addca/console.log' append='off'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <log file='/var/lib/nova/instances/7e5d283f-5ed5-4ebe-b145-d348e94addca/console.log' append='off'/>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='172.16.160.204' keymap='en-us'>
      <listen type='address' address='172.16.160.204'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <stats period='10'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>
: libvirtError: Requested operation is not valid: PCI device 0000:05:00.0 is in use by driver QEMU, domain instance-2de376e6-eb86-49aa-b35f-a8852143969d
2020-03-31 10:57:04.724 2643 ERROR nova.virt.libvirt.driver [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Failed to start libvirt guest: libvirtError: Requested operation is not valid: PCI device 0000:05:00.0 is in use by driver QEMU, domain instance-2de376e6-eb86-49aa-b35f-a8852143969d
2020-03-31 10:57:04.726 2643 INFO os_vif [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:51:25:34,bridge_name='brq8d505aa1-51',has_traffic_filtering=True,id=1a673f99-8a71-41be-a87e-40f3308fcfc3,network=Network(8d505aa1-5146-49db-8814-7dcef91bd1c1),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tap1a673f99-8a')
2020-03-31 10:57:04.741 2643 INFO nova.virt.libvirt.driver [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Deleting instance files /var/lib/nova/instances/7e5d283f-5ed5-4ebe-b145-d348e94addca_del
2020-03-31 10:57:04.743 2643 INFO nova.virt.libvirt.driver [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Deletion of /var/lib/nova/instances/7e5d283f-5ed5-4ebe-b145-d348e94addca_del complete
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Instance failed to spawn: libvirtError: Requested operation is not valid: PCI device 0000:05:00.0 is in use by driver QEMU, domain instance-2de376e6-eb86-49aa-b35f-a8852143969d
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Traceback (most recent call last):
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2203, in _build_resources
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     yield resources
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2018, in _build_and_run_instance
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     block_device_info=block_device_info)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2903, in spawn
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     destroy_disks_on_failure=True)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5453, in _create_domain_and_network
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     destroy_disks_on_failure)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     self.force_reraise()
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     six.reraise(self.type_, self.value, self.tb)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5423, in _create_domain_and_network
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     post_xml_callback=post_xml_callback)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5341, in _create_domain
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     guest.launch(pause=pause)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 144, in launch
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     self._encoded_xml, errors='ignore')
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     self.force_reraise()
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     six.reraise(self.type_, self.value, self.tb)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 139, in launch
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     return self._domain.createWithFlags(flags)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     rv = execute(f, *args, **kwargs)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     six.reraise(c, e, tb)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     rv = meth(*args, **kwargs)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] libvirtError: Requested operation is not valid: PCI device 0000:05:00.0 is in use by driver QEMU, domain instance-2de376e6-eb86-49aa-b35f-a8852143969d
2020-03-31 10:57:04.823 2643 ERROR nova.compute.manager [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] 
2020-03-31 10:57:04.887 2643 INFO nova.compute.manager [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Terminating instance
2020-03-31 10:57:04.896 2643 INFO nova.virt.libvirt.driver [-] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Instance destroyed successfully.
2020-03-31 10:57:04.898 2643 INFO os_vif [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:51:25:34,bridge_name='brq8d505aa1-51',has_traffic_filtering=True,id=1a673f99-8a71-41be-a87e-40f3308fcfc3,network=Network(8d505aa1-5146-49db-8814-7dcef91bd1c1),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tap1a673f99-8a')


2020-03-31 10:57:04.939 2643 INFO nova.virt.libvirt.driver [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Deletion of /var/lib/nova/instances/7e5d283f-5ed5-4ebe-b145-d348e94addca_del complete
2020-03-31 10:57:05.025 2643 INFO nova.compute.manager [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Took 0.13 seconds to destroy the instance on the hypervisor.
2020-03-31 10:57:05.869 2643 INFO nova.compute.manager [req-2e175998-52b0-44fc-9bc9-a84d1766fe70 68711849a60849d7807d71968ba6b275 fa79259f6d2a442c86d4cd5e0e6c788c - default default] [instance: 7e5d283f-5ed5-4ebe-b145-d348e94addca] Took 0.62 seconds to deallocate network for instance.

2020-03-31 10:57:12.019 2643 INFO nova.compute.resource_tracker [req-5a0cf06d-ee8e-4839-a5a5-ee5269299d20 - - - - -] Final resource view: name=dellserver phys_ram=261922MB used_ram=10752MB phys_disk=399GB used_disk=50GB total_vcpus=48 used_vcpus=4 pci_stats=[PciDevicePool(count=0,numa_node=0,product_id='1c82',tags={dev_type='type-PCI'},vendor_id='10de'), PciDevicePool(count=2,numa_node=0,product_id='0fb9',tags={dev_type='type-PCI'},vendor_id='10de'), PciDevicePool(count=1,numa_node=1,product_id='1c82',tags={dev_type='type-PCI'},vendor_id='10de'), PciDevicePool(count=1,numa_node=1,product_id='0fb9',tags={dev_type='type-PCI'},vendor_id='10de')]