Hello,

You’d need to use the `multipath` and `dmsetup` commands to figure out what that multipath device is still mapped to. It might also help to see your /etc/multipath.conf configuration.

A couple of things regarding multipath/SCSI:

* Make sure you are safe against this: https://security.openstack.org/ossa/OSSA-2023-003.html
* Make sure you have a correct multipath.conf. Many years ago I had issues with the blacklist section there.
* I have a vague memory that, several years ago, a one-byte error in my multipath.conf caused problems; I wonder if it was the `failback` config option.

Best regards
Tobias
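P.S. A sketch of what I would run to see what the map consists of and what is layered on top of it; the WWID below is just taken from the log further down the thread, substitute your own:

    multipath -ll 3600a09803831486e695d536269665144   # paths and state of the map
    dmsetup table 3600a09803831486e695d536269665144   # device-mapper table behind the map
    dmsetup ls --tree                                  # what is stacked on which dm device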
On 7 May 2024, at 15:51, Nguyễn Hữu Khôi <nguyenhuukhoinw@gmail.com> wrote:
Hello. I have already set the LVM filter as described in
https://docs.openstack.org/cinder/pike/install/cinder-storage-install-ubuntu...
I checked and don't see any wrong mappings. I also set skip_kpartx to "yes".
I am still looking for a solution, or at least a way to keep my instances from being set to ERROR state when resizing/migrating fails.
This problem happens only with iSCSI.
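For reference, skip_kpartx usually lives in the defaults (or a device-specific) section of /etc/multipath.conf; a minimal sketch, with values that are illustrative rather than the exact configuration in use here:

    defaults {
        user_friendly_names no
        skip_kpartx yes
    }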
On Tue, May 7, 2024, 8:12 PM Takashi Kajinami <kajinamit@oss.nttdata.com> wrote:
The failure reminds me of the "well-known" problem caused by LVM. I'd suggest that you check:
- whether the instances which fail to be resized/migrated have LVM deployed inside their volumes
- whether those LVM volumes on instance disks are being detected by the host operating system
- whether you have configured an appropriate LVM filter on your compute nodes, to prevent the host LVM system from scanning the devices backing instance volumes (a sketch follows below). https://docs.openstack.org/cinder/latest/install/cinder-storage-install-rdo....
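A rough sketch of such a filter in /etc/lvm/lvm.conf, following the pattern in the install guide linked above; /dev/sda is only an assumption for where the host's own LVM lives, adjust it to your layout:

    devices {
        # scan only the local disk carrying the host's own LVM,
        # reject everything else (iSCSI/multipath devices backing instance volumes)
        filter = [ "a/sda/", "r/.*/" ]
    }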
On 5/6/24 23:38, Nguyễn Hữu Khôi wrote:
Hello.
I am using OpenStack Yoga with a Dell PowerStore 5000T as the backend.
This is my report:
https://bugs.launchpad.net/os-brick/+bug/2063345
This bug also shows up with Victoria and a NetApp backend:
https://bugs.launchpad.net/os-brick/+bug/1992289
It looks like it happens with iSCSI multipath only. I don't see any reports with Ceph.
I tried to reproduce the problem, but it occurs randomly when I resize an instance (the instance moves to another compute node, i.e. a cold migration).
My log shows:
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Command: multipath -f 3600a09803831486e695d536269665144
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Exit code: 1
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Stdout: ''
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Stderr: 'Oct 04 10:49:32 | 3600a09803831486e695d536269665144: map in use\n
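The "map in use" error means something still holds the map open. A sketch of how one might check the open count and holders, reusing the WWID from the log above:

    dmsetup info -c 3600a09803831486e695d536269665144   # an "Open" count > 0 means the map is still in use
    ls -l /sys/block/dm-*/holders                        # devices (e.g. LVM LVs) stacked on top of each dm device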
Nguyen Huu Khoi
On Mon, May 6, 2024 at 9:22 PM Eugen Block <eblock@nde.ag> wrote:
Hi,
I think you'll need to be more specific here. Which OpenStack version are you running?

Just recently I have been shuffling around instances with live and cold migration (in version Victoria, so quite old). Some of them did fail, but I didn't have to reset any state or shelve/unshelve instances. I can't recall if the instances were in ERROR state, though; I didn't need to check, because after fixing whatever broke the migration I just issued another (live-)migrate command and then it worked.

We use only Ceph as the backend for Glance, Nova and Cinder. Which backend(s) do you use? Maybe that has an impact as well, I don't know.
Regards, Eugen
Quoting Nguyễn Hữu Khôi <nguyenhuukhoinw@gmail.com>:
> Hello.
>
> I would like to know why we set the instance state to ERROR when
> resizing/migrating fails. I have a problem with resizing/migrating
> instances: I must reset the state, then shelve and unshelve the instances
> to make them come back.
>
> Thank you for your time.
>
> Nguyen Huu Khoi