[Openstack] Live Migration + Ceph + ConfigDrive

Tyler Wilson kupo at linuxdigital.net
Fri Jul 10 20:52:39 UTC 2015


Hey All,

I was able to deploy a Kilo cluster with Ceph to test this out, and am
getting the following error:

2015-07-10 20:50:35.765 138421 DEBUG nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] Starting monitoring of
live migration _live_migration
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5685
2015-07-10 20:50:35.767 138421 DEBUG nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] Operation thread is still
running _live_migration_monitor
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5537
2015-07-10 20:50:35.767 138421 DEBUG nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] Migration not running yet
_live_migration_monitor
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5568
2015-07-10 20:50:35.777 138421 ERROR nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] Live Migration failure:
Cannot access storage file
'/var/lib/nova/instances/86888dfb-43fa-496c-bc36-be03aa1b8c1b/disk.config'
(as uid:107, gid:107): No such file or directory
2015-07-10 20:50:35.777 138421 DEBUG nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] Migration operation thread
notification thread_finished
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5676
2015-07-10 20:50:36.268 138421 DEBUG nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] VM running on src,
migration failed _live_migration_monitor
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5543
2015-07-10 20:50:36.269 138421 DEBUG nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] Fixed incorrect job type
to be 4 _live_migration_monitor
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5563
2015-07-10 20:50:36.269 138421 ERROR nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] Migration operation has
aborted
2015-07-10 20:50:36.396 138421 DEBUG nova.virt.libvirt.driver [-]
[instance: 86888dfb-43fa-496c-bc36-be03aa1b8c1b] Live migration monitoring
is all done _live_migration
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:5696

The disk.config file is owned by the qemu user:

-rw-r--r--. 1 qemu qemu 67108864 Jul 10 20:49
/var/lib/nova/instances/86888dfb-43fa-496c-bc36-be03aa1b8c1b/disk.config

Any ideas what could fix this?
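For reference, since the error says libvirt cannot access the file "(as
uid:107, gid:107): No such file or directory", the interesting place to look
is the destination host rather than the source. A rough sketch of the check
(the check_config_drive helper is just for illustration, not part of nova):

```shell
#!/bin/sh
# Sketch: verify whether the config drive file for the failing instance
# exists and is readable on this host. Run it on the migration
# destination, where libvirt reported the "No such file" error.
check_config_drive() {
    # Hypothetical helper: report whether the given file is readable.
    if [ -r "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

UUID=86888dfb-43fa-496c-bc36-be03aa1b8c1b
check_config_drive "/var/lib/nova/instances/$UUID/disk.config"
```

If this reports "missing" on the destination, the config drive only exists
as a local file on the source host, which matches the symptoms discussed
below.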

On Thu, May 7, 2015 at 5:29 PM, Pádraig Brady <P at draigbrady.com> wrote:

> On 07/05/15 18:28, Tyler Wilson wrote:
> > Hello All,
> >
> > Thank you for the replies! Will this patch be usable in Juno?
> >
> > On Thu, May 7, 2015 at 3:18 AM, Pádraig Brady <P at draigbrady.com <mailto:
> P at draigbrady.com>> wrote:
> >
> >     On 07/05/15 09:50, Sebastien Han wrote:
> >     > Actually the issue is that the config drive is stored as a file on the
> fs under /var/lib/nova/instances/$uuid/config.drive
> >     > AFAIR the other problem is that the format of that file is not
> supported by libvirt for live migration.
> >     >
> >     > I think you have to apply this patch:
> https://review.openstack.org/#/c/123073/
> >
> >     Yes that's the correct one.
> >
> >     > I’ve heard from sileht (in cc) that this might work in Kilo (using
> the vfat format for the config drive during live migration).
> >
> >     Right, by setting vfat format, this function will work in Kilo
> >     due to https://github.com/openstack/nova/commit/4e665112
>
> Not directly, as the backport to RHOSP 6 (Juno based) has:
>
>   Conflicts:
>          nova/tests/virt/libvirt/test_libvirt.py
>          nova/virt/libvirt/driver.py
>
> Though the adjustments are simple enough.
>
> cheers,
> Pádraig.
>
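For anyone following along, the vfat setting mentioned in the quoted reply
would presumably look like this in nova.conf (option name and placement per
Juno/Kilo-era nova; verify against your release's configuration reference):

```ini
[DEFAULT]
# Store the config drive as a vfat image rather than an iso9660 file,
# so the disk is in a format libvirt can handle during live migration
# (see https://review.openstack.org/#/c/123073/).
config_drive_format = vfat
```

Instances already booted with an iso9660 config drive would still carry the
old format, so this would only help instances launched after the change.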
