Openstack attaching same pci device to 2 different vms?
Sean Mooney
smooney at redhat.com
Fri Apr 5 10:12:08 UTC 2019
On Fri, 2019-04-05 at 00:52 +0000, Manuel Sopena Ballesteros wrote:
> Dear Openstack community,
>
> I recently set up pci-passthrough against nvme drives to directly attach the disks to the vm as a block device.
>
> I created 2 vms on the same physical host and I can see the disks (nvme0n1) on both of them:
>
> [centos at test-nvme-small ~]$ lsblk
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> vda 253:0 0 10G 0 disk
> └─vda1 253:1 0 10G 0 part /
> nvme0n1 259:0 0 1.8T 0 disk
>
> [centos at test-nvme-small-2 ~]$ lsblk
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> vda 253:0 0 10G 0 disk
> └─vda1 253:1 0 10G 0 part /
> nvme0n1 259:0 0 1.8T 0 disk
>
> So I wanted to check which nvme device was attached to which vm. I thought one option would be to dump the vm xml file
> from virsh, which can be found below (I am only including the disks section but happy to attach
> the whole file if needed):
>
>
> Device/disk configuration for instance-00000094
>
> <disk type='file' device='disk'>
> <driver name='qemu' type='qcow2' cache='none'/>
> <source file='/var/lib/nova/instances/fd95cc45-1501-4693-8643-944be2ff4625/disk'/>
> <backingStore type='file' index='1'>
> <format type='raw'/>
> <source file='/var/lib/nova/instances/_base/4cc6eebe175e35178cb81853818a1eb103cea937'/>
> <backingStore/>
> </backingStore>
> <target dev='vda' bus='virtio'/>
> <alias name='virtio-disk0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
> </disk>
>
>
>
> Device/disk configuration for instance-00000093
>
> <disk type='file' device='disk'>
> <driver name='qemu' type='qcow2' cache='none'/>
> <source file='/var/lib/nova/instances/7e6a055c-1b4e-458c-89cd-cb8c1d10e939/disk'/>
> <backingStore type='file' index='1'>
> <format type='raw'/>
> <source file='/var/lib/nova/instances/_base/4cc6eebe175e35178cb81853818a1eb103cea937'/>
> <backingStore/>
> </backingStore>
> <target dev='vda' bus='virtio'/>
> <alias name='virtio-disk0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
> </disk>
can you provide the full XMLs?
these disks are not the NVMe disks; this is the info for the root disk, which is a qcow2 file on your host.
when you pass PCI devices through from the host they will show up in the libvirt XML
as a hostdev element.
it will look something like this:
<devices>
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
    </source>
  </hostdev>
</devices>
in this case the address in the source element will be the host PCI address of the NVMe device.
you may also have other fields, like a second guest address element, which will sit outside the source element.
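for example, something roughly like this on the compute host should let you tie a guest to its NVMe device
(a sketch only; 0000:06:02.0 is just the placeholder address from the snippet above, substitute the
address from your own hostdev element):

# pull the hostdev source address out of the guest definition
virsh dumpxml instance-00000094 | grep -A3 '<hostdev'

# confirm what sits at that host pci address and which driver owns it
# (it should show up as an NVMe controller bound to vfio-pci while passed through)
lspci -ks 0000:06:02.0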
>
>
> Based on this output, and if I am not mistaken, I am under the impression that the host pci device attached to both vms
> is 0000:00:04.0
no, this is the PCI address at which the virtio disk (your root disk) is presented to the guest.
>
>
> Questions:
>
> Is there an Openstack way to check which pci devices are attached to which vms instead of having to dump the vm xml
> file?
technically it is stored in the nova DB in the pci_devices table, but I would much prefer operators to look at the XMLs
first before touching the DB.
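if you do go that route, something roughly along these lines from the controller would show the allocations
(a sketch only; it assumes the default database name of nova, uses the instance uuids from your disk paths above,
and is strictly a read-only query):

mysql -u root -p nova -e "select address, status, instance_uuid from pci_devices \
    where instance_uuid in ('fd95cc45-1501-4693-8643-944be2ff4625', '7e6a055c-1b4e-458c-89cd-cb8c1d10e939');"

the address column there is the host PCI address nova has allocated to each instance, so you should see a
different address per vm.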
> Am I right assuming that the same disk is attached to both vms?
no, luckily that is not what is happening. if it was, all manner of things would be broken :)
> If yes, how can that happen?
>
> Thank you very much