Hi Laurent
We are running OpenStack Victoria.
The system is deployed with kolla-built containers and kolla-ansible.
The mismatch is visible in libvirt, and it's reliably reproducible using our test automation.
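The check our automation performs can be sketched roughly as follows. This is only a sketch: the `check_attachment` helper is hypothetical, and it relies on the assumption that a Cinder-backed disk's volume UUID appears in the libvirt domain XML (normally in the disk's <serial> element), so its absence means the disk was never plugged in.

```shell
#!/bin/sh
# Sketch: compare Nova's view of an attachment with the libvirt domain XML.
# check_attachment NOVA_OUTPUT LIBVIRT_XML VOLUME_UUID
check_attachment() {
  nova_view=$1; libvirt_xml=$2; vol_id=$3
  # Nova/DB view: the volume UUID shows up in the volumes_attached field.
  case "$nova_view" in
    *"$vol_id"*) echo "nova: attached" ;;
    *)           echo "nova: not attached" ;;
  esac
  # Libvirt view: a Cinder-backed disk normally carries the volume UUID in
  # the domain XML, so its absence indicates a DB/libvirt mismatch.
  case "$libvirt_xml" in
    *"$vol_id"*) echo "libvirt: attached" ;;
    *)           echo "libvirt: missing (DB/libvirt mismatch)" ;;
  esac
}

# In a live run you would feed it real command output, e.g. (on the compute):
#   check_attachment \
#     "$(openstack server show <server-uuid> -f value -c volumes_attached)" \
#     "$(virsh dumpxml instance-00001edc)" \
#     f3e45efc-35eb-4f95-8e71-f50b5cb69028
```

In our failed runs the first case prints "attached" while the second prints "missing", which is exactly the state shown in the outputs quoted below.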
Pozdrawiam / Best regards,
Aleksander Wojtal, Junior Software Engineer
TietoEvry, BU Telco Cloud Infra & Applications
From: Laurent Dumont <laurentfdumont@gmail.com>
Sent: Monday, July 26, 2021 11:38 PM
To: Aleksander Wojtal <aleksander.wojtal@tietoevry.com>
Cc: openstack-discuss@lists.openstack.org; Balázs Gibizer <balazs.gibizer@ericsson.com>; Piotr Mossakowski <Piotr.Mossakowski@tietoevry.com>
Subject: Re: [openstack-nova] Attached volume is not available in the VM
We'll probably need a few more details.
OpenStack is usually pretty good about failing and reverting when one step of an operation chain fails. That said, this is anecdotal, but most of the mismatches I've seen between the VM and the DB are around Cinder. Which kind of makes sense, since volume attach + detach are probably the most common operations, with port operations maybe next.
On Mon, Jul 26, 2021 at 12:24 PM Aleksander Wojtal <aleksander.wojtal@tietoevry.com> wrote:
Hello,
During our testing we reboot the compute host while a volume-attach task is in progress. The result is as follows.
The volume is marked as attached to the VM (also in the DB):
ceeinfra@lcm1:~> openstack volume list
+--------------------------------------+-----------------------------------------------------------------------------+-----------+------+--------------------------------------------------------------------------------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+-----------------------------------------------------------------------------+-----------+------+--------------------------------------------------------------------------------------------------------------+
| c52d406c-5587-445d-9da1-436ffbbe3541 | vol_neXt-488 | available | 10 | |
| f3e45efc-35eb-4f95-8e71-f50b5cb69028 | RebootOfComputeHostWhilePerformingVolumeOperations-Volume-0618_19_45_12_669 | in-use | 20 | Attached to neXt-377_VM1--RebootOfComputeHostWhilePerformingVolumeOperations--0618_19_44_16_775 on /dev/vdb |
| 1bd81392-e2c9-4725-93c8-baa484924f21 | vol_neXt-488 | available | 10 | |
| 919db17c-0729-409c-bf59-74539edcea47 | vol_neXt-488 | available | 10 | |
+--------------------------------------+-----------------------------------------------------------------------------+-----------+------+--------------------------------------------------------------------------------------------------------------+
The same is visible from the VM's perspective:
ceeinfra@lcm1:~> openstack server show neXt-377_VM1--RebootOfComputeHostWhilePerformingVolumeOperations--0618_19_44_16_775
+-------------------------------------+-------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+-------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | compute-0-11.k2.ericsson.se |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-0-11.k2.ericsson.se |
| OS-EXT-SRV-ATTR:instance_name | instance-00001edc |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2021-06-18T14:14:58.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | NovaController-Network-0618_19_44_16_852=10.0.0.5 |
| config_drive | True |
| created | 2021-06-18T14:14:19Z |
| flavor | m1.small (2) |
| hostId | 5078bfd4fabd8b2e7c9f4554b3ac08d536ceebb5f393db4d38c8ff60 |
| id | 95ed49ed-b3f8-46ae-80a0-544b939c50b7 |
| image | JCAT_Common_CirrOS_i386 (b71f3fbd-59f1-4c29-a296-5065fe0406f7) |
| key_name | None |
| name | neXt-377_VM1--RebootOfComputeHostWhilePerformingVolumeOperations--0618_19_44_16_775 |
| progress | 0 |
| project_id | b63725c8ebaf43ebabbed497f41eb71c |
| properties | ha-policy='managed-on-host' |
| scheduler_hints | {} |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2021-06-18T14:22:07Z |
| user_id | f490f07b813748698e56ca7641d46a72 |
| volumes_attached | id='f3e45efc-35eb-4f95-8e71-f50b5cb69028' |
+-------------------------------------+-------------------------------------------------------------------------------------+
From the libvirt perspective, however, the volume is not attached to the VM:
compute-0-11:/var/log # virsh dumpxml instance-00001edc
<domain type='kvm' id='1'>
<name>instance-00001edc</name>
<uuid>95ed49ed-b3f8-46ae-80a0-544b939c50b7</uuid>
(…)
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='directsync'/>
<source file='/var/lib/nova/instances/95ed49ed-b3f8-46ae-80a0-544b939c50b7/disk' index='2'/>
<backingStore type='file' index='3'>
<format type='raw'/>
<source file='/var/lib/nova/instances/_base/af82cd8216adb002916bb9e422a3fbb637022ec6'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw' cache='directsync'/>
<source file='/var/lib/nova/instances/95ed49ed-b3f8-46ae-80a0-544b939c50b7/disk.config' index='1'/>
<backingStore/>
<target dev='hdd' bus='ide'/>
<readonly/>
<alias name='ide0-1-1'/>
<address type='drive' controller='0' bus='1' target='0' unit='1'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
(…)</domain>
The system does not provide any indication of the problem. There should be some kind of indication to the user that the volume attachment was not completed.
Pozdrawiam / Best regards,
Aleksander Wojtal, Junior Software Engineer
TietoEVRY
BU Telco Cloud Infra & Applications
email aleksander.wojtal@tietoevry.com
al. Piastów 30, 71-064 Szczecin, tietoevry.com
Please note: The information contained in this message may be legally privileged,
confidential and protected from disclosure. If you received this in error, please notify
the sender immediately and delete the message from your computer. Thank you.