On Tue, 2024-05-07 at 15:00 +0700, Nguyễn Hữu Khôi wrote:
Hello.
Is it OK if we set force=true when disconnecting a volume? I see this code:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/iscsi...
Not without careful consideration: https://github.com/openstack/nova/commit/db455548a12beac1153ce04eca5e728d7b7... Based on the commit message we need to consider whether there is possible data loss and what the behaviour of os-brick is. I also need to get back to https://review.opendev.org/c/openstack/nova/+/916322, but there is a race today if we have 2 VMs with the same multiattach volume on the same host, so we need to be careful in that edge case too. We should not actually call disconnect in that case, but the reason I bring it up is that there are potentially multiple VMs that could be flushing data to the device, so in general force is only safe if os-brick is ensuring we flush internally.
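For illustration, here is a minimal sketch of what a forced disconnect through os-brick could look like, with a guard for the multiattach edge case described above. The root helper, the connection_properties/device_info arguments and the volume_shared_on_host flag are placeholders, not Nova's actual code path:

from os_brick.initiator import connector

ROOT_HELPER = 'sudo'  # assumption: plain sudo as the privilege-escalation helper

iscsi = connector.InitiatorConnector.factory(
    'iscsi', ROOT_HELPER, use_multipath=True)

def disconnect(connection_properties, device_info, volume_shared_on_host):
    # If another VM on this host still uses the same multiattach volume,
    # skip the disconnect entirely instead of forcing it.
    if volume_shared_on_host:
        return
    # force=True makes os-brick continue tearing the device down even if the
    # flush fails, which risks data loss unless os-brick has already flushed
    # the device internally.
    iscsi.disconnect_volume(connection_properties, device_info,
                            force=True, ignore_errors=True)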
Nguyen Huu Khoi
On Mon, May 6, 2024 at 9:38 PM Nguyễn Hữu Khôi <nguyenhuukhoinw@gmail.com> wrote:
Hello.
I am using OpenStack Yoga with a Dell PowerStore 5000T as the backend.
This is my report:
https://bugs.launchpad.net/os-brick/+bug/2063345
This bug also occurs on Victoria with NetApp as the backend:
https://bugs.launchpad.net/os-brick/+bug/1992289
It looks like it happens with iSCSI multipath only. I don't see any reports with Ceph.
I tried to reproduce the problem, but it happens randomly when I resize an instance (the instance moves to another compute node, i.e. a cold migration).
My log shows:
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Command: multipath -f 3600a09803831486e695d536269665144
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Exit code: 1
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Stdout: ''
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Stderr: 'Oct 04 10:49:32 | 3600a09803831486e695d536269665144: map in use\n'
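As a rough illustration (not os-brick code) of what handling that error implies: "map in use" means some holder still has the device open, so the flush has to be retried or skipped rather than forced. The WWID below is the one from the log:

import subprocess
import time

def flush_multipath_map(wwid, attempts=5, interval=2):
    for _ in range(attempts):
        result = subprocess.run(['multipath', '-f', wwid],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True
        # "map in use" means a holder (e.g. a VM still attached on this host)
        # keeps the device open; wait and retry instead of forcing removal.
        if 'map in use' in result.stderr:
            time.sleep(interval)
            continue
        raise RuntimeError(result.stderr)
    return False

flush_multipath_map('3600a09803831486e695d536269665144')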
Nguyen Huu Khoi
On Mon, May 6, 2024 at 9:22 PM Eugen Block <eblock@nde.ag> wrote:
Hi,
I think you'll need to be more specific here. Which OpenStack version are you running? Just recently I have been shuffling around instances with live and cold migration (on Victoria, so quite old); some of them did fail, but I didn't have to reset any state or shelve/unshelve instances. I can't recall if the instances were in ERROR state, though. I didn't need to check, because after fixing whatever broke the migration I just issued another (live-)migrate command and then it worked. We use only Ceph as the backend for Glance, Nova and Cinder; which backend(s) do you use? Maybe that has an impact as well, I don't know.
Regards, Eugen
Quoting Nguyễn Hữu Khôi <nguyenhuukhoinw@gmail.com>:
Hello.
I would like to know why we set the instance state to ERROR when resizing/migrating fails. I have a problem with resizing/migrating instances: I must reset the state and then shelve and unshelve the instance to bring it back.
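For reference, a minimal sketch of that recovery flow with openstacksdk (assuming reset_server_state is available in your SDK version; the cloud and server names are placeholders):

import openstack

conn = openstack.connect(cloud='mycloud')         # placeholder cloud name
server = conn.compute.find_server('my-instance')  # placeholder server name

# Clear the ERROR state left by the failed resize/migration...
conn.compute.reset_server_state(server, 'active')

# ...then shelve and unshelve so the instance comes back with a fresh
# volume attachment. Depending on shelved_offload_time, the instance may
# stop at SHELVED instead of SHELVED_OFFLOADED.
conn.compute.shelve_server(server)
conn.compute.wait_for_server(server, status='SHELVED_OFFLOADED', wait=600)
conn.compute.unshelve_server(server)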
Thank you for your time.
Nguyen Huu Khoi