[openstack][nova] about instance-state when resizing failed
Hello.

I would like to know why the instance state is set to ERROR when resizing/migrating fails. I have a problem with resizing/migrating instances: I have to reset the state and then shelve and unshelve the instance to bring it back.

Thank you for your time.

Nguyen Huu Khoi
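P.S. In case it is unclear what I mean by "reset state then shelve and unshelve", the recovery sequence looks roughly like this. This is only a sketch: <uuid> is a placeholder for the instance ID and admin credentials are assumed.

    # clear the ERROR state left by the failed resize/migration
    openstack server set --state active <uuid>
    # shelve and unshelve so the volume attachments are rebuilt
    openstack server shelve <uuid>
    openstack server unshelve <uuid>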
Hi,

I think you'll need to be more specific here. Which OpenStack version are you running? Just recently I have been shuffling instances around with live and cold migration (on Victoria, so quite old); some of them did fail, but I didn't have to reset any state or shelve/unshelve instances. I can't recall whether the instances were in ERROR state, though. I didn't need to check, because after fixing whatever broke the migration I just issued another (live-)migrate command and then it worked.

We use only Ceph as the backend for Glance, Nova and Cinder. Which backend(s) do you use? Maybe that has an impact as well, I don't know.

Regards,
Eugen
Hello.

I am using OpenStack Yoga with a Dell PowerStore 5000T as the backend. This is my report:

https://bugs.launchpad.net/os-brick/+bug/2063345

This bug also occurs with Victoria and a NetApp backend:

https://bugs.launchpad.net/os-brick/+bug/1992289

It looks like it happens with iSCSI multipath only; I don't see any reports with Ceph. I have tried to reproduce the problem, but it happens randomly when I resize an instance (the instance is cold-migrated to another compute node).

My log tells me:

2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Command: multipath -f 3600a09803831486e695d536269665144
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Exit code: 1
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Stdout: ''
2022-10-04 10:50:12.238 7 ERROR oslo_messaging.rpc.server Stderr: 'Oct 04 10:49:32 | 3600a09803831486e695d536269665144: map in use\n'

Nguyen Huu Khoi
Hello.

Is it OK if we set force=true when disconnecting the volume? I see this code:

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/iscsi...

Nguyen Huu Khoi
On Tue, 2024-05-07 at 15:00 +0700, Nguyễn Hữu Khôi wrote:
> Hello.
> Is it OK if we set force=true when disconnecting the volume? I see this code:
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/iscsi...

Not without careful consideration:

https://github.com/openstack/nova/commit/db455548a12beac1153ce04eca5e728d7b7...

Based on the commit message, we need to consider whether there could be data loss and what the behaviour of os-brick is.

I also need to get back to https://review.opendev.org/c/openstack/nova/+/916322, but there is a race today if we have two VMs using the same multi-attach volume on the same host, and we need to be careful in that edge case too. We should not actually call disconnect in that case, but the reason I bring it up is that there are potentially multiple VMs that could be flushing data to the device, so in general force is only safe if os-brick ensures we flush internally.
Hello @Sean Mooney <smooney@redhat.com>,

Thank you for your response. In my case, instances have only one volume. As I understand it, when we resize/migrate an instance it is effectively rebuilt: the instance is shut down and destroyed on the source host, so I think it won't lose data. This problem happens more and more when I resize/migrate instances to move them to other hosts, and it makes me not want to resize or migrate instances at all.

Is it OK if we use force=true only when we destroy instances for resizing or migrating?

Nguyen Huu Khoi
The failure reminds me of the "well-known" problem caused by LVM. I'd suggest that you check:

- whether the instances which fail to be resized/migrated have LVM deployed in their volumes
- whether these LVM volumes on instance disks are detected by the host operating system
- whether you have configured an appropriate LVM filter on your compute nodes to prevent the host LVM system from detecting LVM volumes on devices backing instance volumes (a minimal example is sketched below):
  https://docs.openstack.org/cinder/latest/install/cinder-storage-install-rdo....
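As an illustration only (not the exact snippet from the linked docs), an LVM filter on a compute node could look roughly like this. It assumes the host OS disk is /dev/sda; adjust the device names to your environment:

    # /etc/lvm/lvm.conf on the compute node (device name is hypothetical)
    devices {
        # scan only the host's own OS disk and reject everything else,
        # including iSCSI/multipath devices backing instance volumes
        filter = [ "a|^/dev/sda|", "r|.*|" ]
    }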
Hello.

I do configure the filter as described in
https://docs.openstack.org/cinder/pike/install/cinder-storage-install-ubuntu...
I checked and don't see any wrong mapping. I also set skip_kpartx to "yes".

I am still looking for a solution, or at least a way to keep my instance from being set to ERROR state when resizing/migrating fails. This problem happens only with iSCSI.
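For reference, this is roughly where I set it. This is only a sketch of the relevant multipath.conf section; the rest of my configuration (and the user_friendly_names value shown here) is an assumption, not my exact file:

    # /etc/multipath.conf (partial sketch)
    defaults {
        user_friendly_names no
        # do not create partition mappings (partN-mpath-*) on top of the maps
        skip_kpartx yes
    }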
Hello,

You'd need to use the `multipath` and `dmsetup` commands to figure out what that multipath device is still mapped to. It might also help to see your /etc/multipath.conf configuration.

A couple of things regarding multipath/SCSI:

* Make sure you are safe against this: https://security.openstack.org/ossa/OSSA-2023-003.html
* Check that you have a correct multipath.conf; I had issues many years ago with the blocklist statement there.
* I have a vague memory that several years ago a one-byte error in my multipath.conf caused an issue; I wonder if it was the `failback` config option.

Best regards
Tobias
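P.S. A rough sketch of what I mean, using the WWID from your log (the exact output and the holders will differ in your environment):

    # show the multipath map and its paths
    multipath -ll 3600a09803831486e695d536269665144
    # check the device-mapper open count and what still holds the map
    dmsetup info -c 3600a09803831486e695d536269665144
    dmsetup ls --tree
    # a non-zero open count (LVM, kpartx partitions, a mount, another VM)
    # is what makes "multipath -f" fail with "map in use"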
Hello.

I will check again; if it happens again I will let you know. Thank you for advising me.
On 5/7/24 22:51, Nguyễn Hữu Khôi wrote:
> Hello. I do filter as
> https://docs.openstack.org/cinder/pike/install/cinder-storage-install-ubuntu...

To make sure that we have no misunderstanding between us: did you configure the filter not only on the storage node (or probably the controller node, if you don't have dedicated nodes for cinder-volume) BUT ALSO on all compute nodes? I'm asking because the current documentation is not specific about this requirement, but the filter should be configured on all nodes which may attach instance volumes.

> I check and dont see any wrong mapping. I also set skip_kpartx "yes".

So you mean you don't see any LVM volumes from guest instances when you run lvs/vgs/pvs on your compute nodes?
Hello.

I do configure the filter on both the storage and compute nodes, because there is a warning section about it in the docs. Yes, I set it up like this:

https://serverfault.com/questions/965942/centos-multipath-dev-mapper-dont-ma...

So no partN-mpath-xxxx entries are displayed when I use dmsetup info -C.
On Tue, 2024-05-07 at 23:58 +0900, Takashi Kajinami wrote:
> On 5/7/24 22:51, Nguyễn Hữu Khôi wrote:
>> Hello. I do filter as
>> https://docs.openstack.org/cinder/pike/install/cinder-storage-install-ubuntu...
> To make sure that we have no misunderstanding between us: did you configure the filter not only on the storage node (or probably the controller node, if you don't have dedicated nodes for cinder-volume) BUT ALSO on all compute nodes?

Right, this is important, because iSCSI volumes are host-mounted on the compute nodes, and if you do not have the filter configured the host can activate the file systems, which causes the device to be considered busy when disconnecting.

TripleO has some docs on this here:
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features...
but it applies to any installer. It was implemented in
https://github.com/openstack-archive/tripleo-ansible/tree/stable/zed/tripleo...
TripleO just invokes lvmconfig to generate a file and then copies it to /etc/lvm/lvm.conf.

In general you want to ensure on your compute nodes that LVM filtering is enabled and that you only allow the specific devices used by the OS, not any iSCSI devices.
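As a rough illustration (this is not the exact TripleO output, and the device path is hypothetical), you can check which filter LVM is actually applying on a compute node and what a restrictive global_filter might look like:

    # show the filter settings LVM is currently using on the compute node
    lvmconfig --type full devices/global_filter
    lvmconfig --type full devices/filter

    # example /etc/lvm/lvm.conf entry that only accepts the host OS disk:
    # devices {
    #     global_filter = [ "a|^/dev/sda2$|", "r|.*|" ]
    # }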
Hello @Sean Mooney <smooney@redhat.com>,

After doing some review, I see that the instances which have this problem run not only Ubuntu (with LVM) but also Rocky (without LVM).

I am planning to upgrade the node OS and move to OpenStack Zed, and I will update here. But could we avoid setting instances to ERROR when this task fails?

Nguyen Huu Khoi
Hello.

I saw that the "map in use" error still happened with Ubuntu 22.04, but the instances no longer enter the ERROR state and could be live-migrated to other hosts.

Nguyen Huu Khoi
participants (5)
- Eugen Block
- Nguyễn Hữu Khôi
- smooney@redhat.com
- Takashi Kajinami
- Tobias Urdin