[Openstack-operators] Normal user operated Live Migration
alopgeek at gmail.com
Mon Aug 4 16:53:06 UTC 2014
Going out on a limb here.
I think the intent of live migration was to let operators perform scheduled maintenance on a compute node, not really something a regular user would be exposed to.
On Aug 4, 2014, at 12:00 AM, Alvise Dorigo <alvise.dorigo at pd.infn.it> wrote:
> I have found that a normal user cannot live-migrate his/her virtual instances from one compute node (whether failed or not) to another:
> ERROR: Live migration of instance b2c60d5a-831e-4fa6-856c-ddf3d8d287ce to host compute-01.cloud.pd.infn.it failed (HTTP 400) (Request-ID: req-d32ebc3f-d74a-41a3-821e-149009ea2cbb)
> In the controller node I see this in the conductor.log:
> 2014-08-04 08:56:35.988 2346 ERROR nova.openstack.common.rpc.common [req-d32ebc3f-d74a-41a3-821e-149009ea2cbb ca9b92d86e184def8e4d651ced8f67eb a6c9f4d7e973430db7f9615fe2a2bfec]
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
>     **args)
>   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
>     result = getattr(proxyobj, method)(ctxt, **kwargs)
>   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py", line 439, in inner
>     return catch_client_exception(exceptions, func, *args, **kwargs)
>   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py", line 420, in catch_client_exception
>     return func(*args, **kwargs)
>   File "/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 645, in migrate_server
>     block_migration, disk_over_commit)
>   File "/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 747, in _live_migrate
>     ex, request_spec, self.db)
>   File "/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 719, in _live_migrate
>     block_migration, disk_over_commit)
>   File "/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py", line 205, in execute
>     return task.execute()
>   File "/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py", line 59, in execute
>     self._check_host_is_up(self.source)
>   File "/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py", line 90, in _check_host_is_up
>     service = db.service_get_by_compute_host(self.context, host)
>   File "/usr/lib/python2.6/site-packages/nova/db/api.py", line 151, in service_get_by_compute_host
>     return IMPL.service_get_by_compute_host(context, host)
>   File "/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py", line 107, in wrapper
>     nova.context.require_admin_context(args)
>   File "/usr/lib/python2.6/site-packages/nova/context.py", line 195, in require_admin_context
>     raise exception.AdminRequired()
> AdminRequired: User does not have admin privileges
> This seems to be closely related to other admin-only restrictions (such as "nova host-list", or the compute node missing from the output of "nova show <server>").
> I tried to live-migrate a VM as the admin user and it worked fine. But of course the admin doesn't see regular users' VMs.
> I'm wondering what live migration is for if it cannot be used by regular users ... If a compute node fails, there seems to be no way to recover your work seamlessly.
> Even after modifying /etc/nova/policy.json, the situation hasn't changed.
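(For reference, this is the kind of policy rule change one would try; the exact key name varies by release, and this fragment assumes the Havana-era admin_actions extension. An empty rule string means "allow any user". As the poster observes, it does not help here, because the AdminRequired error is raised by a hard-coded check in the DB layer, not by a policy check.)

```json
{
    "compute_extension:admin_actions:migrateLive": ""
}
```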
> After a quick search on Google, I found this review: https://review.openstack.org/#/c/26972/. I tried to implement the fix (el_context.elevate()), but, for example, the command "nova host-list" still fails with a privilege problem.
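(For context, here is a minimal self-contained sketch of the failure mode and of what the linked fix does. The class and function names below only mimic Nova's; this is not Nova's actual code. The point is that the admin check is a hard-coded decorator in the DB API layer, which is why policy.json edits have no effect, and why the fix works by elevating the request context before the DB call.)

```python
class AdminRequired(Exception):
    """Raised when a non-admin context reaches an admin-only DB call."""


class RequestContext:
    """Toy stand-in for nova.context.RequestContext (illustrative only)."""

    def __init__(self, user, is_admin=False):
        self.user = user
        self.is_admin = is_admin

    def elevated(self):
        # Return an admin-capable copy of this context, analogous to
        # RequestContext.elevated() in Nova.
        return RequestContext(self.user, is_admin=True)


def require_admin_context(func):
    # Decorator analogous to nova.context.require_admin_context: it rejects
    # non-admin contexts outright instead of consulting policy.json.
    def wrapper(context, *args, **kwargs):
        if not context.is_admin:
            raise AdminRequired()
        return func(context, *args, **kwargs)
    return wrapper


@require_admin_context
def service_get_by_compute_host(context, host):
    # Stand-in for the DB API call seen in the traceback above.
    return {"host": host, "binary": "nova-compute"}


user_ctx = RequestContext("regular-user")

try:
    service_get_by_compute_host(user_ctx, "compute-01")
except AdminRequired:
    pass  # a plain user context is refused, as in the traceback

# The approach in the linked review: elevate the context before the call.
service = service_get_by_compute_host(user_ctx.elevated(), "compute-01")
```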
> Is there a way for a regular user to perform the important task of VM migration?
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org