smooney at redhat.com
Wed Aug 4 12:02:10 UTC 2021
On Wed, 2021-08-04 at 08:45 +0000, Jiri Stransky wrote:
> On Tue, Aug 3, 2021 at 5:04 PM Sean Mooney <smooney at redhat.com> wrote:
> > On Tue, 2021-08-03 at 16:49 +0000, Jiri Stransky wrote:
> > > Hello folks,
> > >
> > > On Thu, Jul 29, 2021 at 6:24 PM Sean Mooney <smooney at redhat.com> wrote:
> > > >
> > > > On Thu, 2021-07-29 at 15:28 +0000, Jiri Stransky wrote:
> > > > > OS Migrate might be worth a look, it's an Ansible collection. It does
> > > > > cold migration that does not require admin privileges (is runnable by
> > > > > tenants who are sufficiently technically savvy).
> > > > >
> > > >
> > > > default nova policy requires admin rights for cold migration
> > > > https://github.com/openstack/nova/blob/master/nova/policies/migrate_server.py#L25-L36
> > > >
> > > > os-migrate is not using the nova migration apis; it is migrating the data and vms behind nova's back.
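
[Editorial note for readers following the policy link above: the migrate rule is admin-only by default, and a deployment can relax it with an override in nova's policy file. This is an illustrative sketch only; the policy name below matches the linked file, but the right rule string depends on your Nova release and security posture, so verify before applying.]

```yaml
# /etc/nova/policy.yaml -- illustrative override only; check the policy
# name and default rule against your Nova release before use.
"os_compute_api:os-migrate-server:migrate": "rule:admin_or_owner"
```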
> > >
> > > I used the term "cold migration" in the generic sense (migration of
> > > VMs while they're stopped), not in the particular "Nova cold
> > > migration" sense. While we're not using any migration-specific APIs
> > > (not sure if these would work *between* clouds, incl. copying Cinder
> > > volumes), we do still interact with OpenStack only through the normal
> > > REST APIs, we do not touch the backing MariaDB database directly. (So
> > > i do not see our approach as migrating "behind Nova's back".)
> > do you run any process on the host, or interact with any files on the host, or with libvirt/qemu in any way, with
> > the data migrator? if you are just using snapshots and grabbing data from glance, or doing data copies with tools in the
> > vms, i retract my statement, but otherwise any interaction with the nova instance directly outside of the nova api would be
> > out of band and behind nova's back.
> We do not run any process on the compute hosts. OS Migrate workload
> migration must work without admin access, which also means no access
> to the compute hosts.
> There are tenant-owned VMs which facilitate direct data copying
> between the clouds, which we call "conversion hosts", and they do one
> thing on top of a pure copy: sparsification using virt-sparsify before
> the copying starts. This significantly speeds up copying of "empty
> space", and is only done on partitions where virt-sparsify can
> recognize a filesystem that can be sparsified. (We should probably
> have an option to disable this behavior, in case the user would be
> interested in a pure copy including e.g. leftovers of deleted files
> which are considered inaccessible empty space by the filesystem.)
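
[Editorial note: the sparsification speed-up described above can be illustrated with plain sparse files. This sketch is not OS Migrate code; it only shows the underlying mechanism. virt-sparsify turns unused filesystem blocks into holes, and holes consume no allocated blocks, so a hole-aware copy transfers far less data than the image's apparent size.]

```python
import os
import tempfile

# Two files with the same apparent size: one dense (every byte written),
# one sparse (a single byte written past a hole). The allocated size is
# what a hole-aware copy actually has to move.
with tempfile.TemporaryDirectory() as d:
    dense = os.path.join(d, "dense.img")
    sparse = os.path.join(d, "sparse.img")
    size = 8 * 1024 * 1024  # 8 MiB apparent size for both files

    # Dense file: all blocks allocated.
    with open(dense, "wb") as f:
        f.write(b"\0" * size)

    # Sparse file: seek past the end and write one byte; the gap is a hole.
    with open(sparse, "wb") as f:
        f.seek(size - 1)
        f.write(b"\0")

    for path in (dense, sparse):
        st = os.stat(path)
        print(path, "apparent:", st.st_size, "allocated:", st.st_blocks * 512)
```

On a typical Linux filesystem the sparse file reports the same apparent size but only a few KiB of allocated blocks, which is the saving sparsification buys before the cross-cloud copy starts.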
ah ok, that is much more supportable in a production cloud then.
thanks for explaining. i thought the data mover vm was basically connecting
to the host to copy the qcows directly; i'm glad to hear it is not.
> Have a good day,
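
[Editorial note: the tenant-level pattern discussed in this thread (snapshot through the Nova API, then pull the image data out through Glance, no admin or compute-host access) can be sketched with openstacksdk. This is an untested illustration, not OS Migrate code: it assumes a recent openstacksdk, a clouds.yaml entry named "source", and a placeholder server name "myvm", and it needs live cloud credentials to run.]

```python
import openstack

# Connect with ordinary tenant credentials; no admin role required.
conn = openstack.connect(cloud="source")

# Stop the server so the snapshot captures a quiesced disk.
server = conn.compute.find_server("myvm")  # placeholder name
conn.compute.stop_server(server)
conn.compute.wait_for_server(server, status="SHUTOFF")

# Create a Glance image from the stopped server (a Nova "snapshot").
image = conn.compute.create_server_image(server, name="myvm-export", wait=True)

# Download the image data; this is the artifact that crosses clouds.
conn.image.download_image(image, output="myvm-export.qcow2")
```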