[Openstack] SSH permissions (was: Re: Error during live migration)
Guilherme Russi
luisguilherme.cr at gmail.com
Fri Aug 23 02:05:05 UTC 2013
Hey guys, is this problem with my cloud a bug?
Regards.
Guilherme.
2013/8/21 Guilherme Russi <luisguilherme.cr at gmail.com>
> I have another question: how do I launch a VM through the terminal under a
> desired tenant and on a desired network? When I run nova boot --flavor x
> --image x --key_name x --security_groups x vm-name it starts under the admin
> tenant on my ext_network (the one that contains my floating IPs), but I want
> the VM to get an internal_network IP and then set up a floating IP afterwards.
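>
> (I suspect something along these lines might do it -- just a guess on my
> part, assuming the internal network's UUID can be taken from quantum
> net-list and the tenant is selected through the usual OS_* variables:
>
> export OS_TENANT_NAME=<tenant>        # hypothetical tenant name
> export OS_USERNAME=<user-in-tenant>   # plus OS_PASSWORD and OS_AUTH_URL as usual
> NET_ID=$(quantum net-list | awk '/ internal_network / {print $2}')
> nova boot --flavor x --image x --key_name x --security_groups x \
>     --nic net-id=$NET_ID vm-name
> nova add-floating-ip vm-name <floating-ip>   # attach the floating IP afterwards
>
> but I'm not sure that's the recommended way.)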
>
> Thank you.
>
>
> 2013/8/21 Guilherme Russi <luisguilherme.cr at gmail.com>
>
>> My nova version is 1:2013.1-0ubuntu2.1~cloud0. Do you need any other
>> information?
>>
>> Regards.
>>
>> Guilherme.
>>
>>
>> 2013/8/20 Razique Mahroua <razique.mahroua at gmail.com>
>>
>>> Maybe a bug…
>>> Can you give us the version you are using?
>>>
>>> Razique Mahroua - Nuage & Co
>>> razique.mahroua at gmail.com
>>> Tel : +33 9 72 37 94 15
>>>
>>>
>>> On 20 August 2013 at 19:04, Guilherme Russi <luisguilherme.cr at gmail.com>
>>> wrote:
>>>
>>> Hello again guys, thank you all for your support. Here is what I've done:
>>> root@hemera:/var/lib/nova# nova live-migration d7d84b8d-937f-4061-8e1b-865aa6b78dba tiresias
>>>
>>> And here is what came next (just the prompt back, with no output):
>>> root@hemera:/var/lib/nova#
>>>
>>> I'm trying to run the command from my Controller Node; the VM is first
>>> created on the caos Compute Node and must migrate to tiresias.
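>>>
>>> (If it helps, I believe the host an instance is currently on can be checked
>>> as admin with something like:
>>>
>>> nova show d7d84b8d-937f-4061-8e1b-865aa6b78dba | grep -i host
>>>
>>> which should print the OS-EXT-SRV-ATTR:host field.)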
>>>
>>> Here is what the log is showing now:
>>>
>>> 2013-08-20 14:02:31.333 12667 AUDIT nova.compute.resource_tracker [-]
>>> Auditing locally available compute resources
>>> 2013-08-20 14:02:31.423 12667 ERROR nova.virt.libvirt.driver [-]
>>> Getting disk size of instance-00000084: [Errno 2] No such file or
>>> directory:
>>> '/var/lib/nova/instances/c9e1c5ed-a108-4196-bfbc-24495e2e71bd/disk'
>>> 2013-08-20 14:02:31.426 12667 ERROR nova.virt.libvirt.driver [-]
>>> Getting disk size of instance-00000077: [Errno 2] No such file or
>>> directory:
>>> '/var/lib/nova/instances/483f98e3-8ef5-43e2-8c3a-def55abdabcd/disk'
>>> 2013-08-20 14:02:31.428 12667 ERROR nova.virt.libvirt.driver [-]
>>> Getting disk size of instance-000000bd: [Errno 2] No such file or
>>> directory:
>>> '/var/lib/nova/instances/66abd40e-fb19-4cbe-a248-61d968fd84b7/disk'
>>> 2013-08-20 14:02:31.430 12667 ERROR nova.virt.libvirt.driver [-]
>>> Getting disk size of instance-0000007d: [Errno 2] No such file or
>>> directory:
>>> '/var/lib/nova/instances/72ec37a3-b209-4729-b628-005fdcea5a3c/disk'
>>> 2013-08-20 14:02:31.531 12667 AUDIT nova.compute.resource_tracker [-]
>>> Free ram (MB): 6842
>>> 2013-08-20 14:02:31.532 12667 AUDIT nova.compute.resource_tracker [-]
>>> Free disk (GB): 93
>>> 2013-08-20 14:02:31.532 12667 AUDIT nova.compute.resource_tracker [-]
>>> Free VCPUS: 3
>>> 2013-08-20 14:02:31.608 12667 INFO nova.compute.resource_tracker [-]
>>> Compute_service record updated for caos:caos
>>> 2013-08-20 14:03:50.911 12667 ERROR nova.virt.libvirt.driver [-]
>>> [instance: d7d84b8d-937f-4061-8e1b-865aa6b78dba] Live Migration failure:
>>> not all arguments converted during string formatting
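>>>
>>> (Could those "No such file or directory: .../disk" lines mean that
>>> /var/lib/nova/instances is not seen the same way on both compute nodes?
>>> From what I've read, plain live migration expects the instances directory
>>> to be on shared storage; otherwise block migration has to be requested
>>> explicitly, for example:
>>>
>>> nova live-migration --block-migrate d7d84b8d-937f-4061-8e1b-865aa6b78dba tiresias
>>>
>>> I'm not sure whether that applies to my setup, though.)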
>>>
>>>
>>> Regards.
>>>
>>> Guilherme.
>>>
>>>
>>>
>>> 2013/8/20 Joe Breu <joseph.breu at rackspace.com>
>>>
>>>> You need to use the hostname of the compute node, not the IP address,
>>>> for the migration. In your case this would be either caos or tiresias.
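>>>>
>>>> For example, something along the lines of:
>>>>
>>>> nova live-migration f18b6d2b-f685-4476-a7c0-e735db5113cf tiresias
>>>>
>>>> assuming tiresias is the node you want the instance to land on and its
>>>> hostname resolves from the controller.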
>>>>
>>>>
>>>> On Aug 20, 2013, at 8:06 AM, Guilherme Russi wrote:
>>>>
>>>> Hello Razique, I'm trying to run this command: nova live-migration
>>>> f18b6d2b-f685-4476-a7c0-e735db5113cf 192.168.3.3
>>>>
>>>> Where f18b6d2b-f685-4476-a7c0-e735db5113cf is the ID of my new VM
>>>> and 192.168.3.3 is the destination compute node.
>>>> I'm executing this command from my controller node and I have all the IPs
>>>> and machine names in my /etc/hosts, but I'm getting this error:
>>>>
>>>> ERROR: Compute service of 192.168.3.3 is unavailable at this time.
>>>> (HTTP 400) (Request-ID: req-117a0eea-14e8-4db2-a3d0-cbb14e438f85)
>>>>
>>>> It's almost there (I guess), hope you all can help me :)
>>>>
>>>> And Julie, thank you for saying that, now I'm only trying to migrate
>>>> through the terminal.
>>>>
>>>> Regards.
>>>>
>>>> Guilherme.
>>>>
>>>>
>>>> 2013/8/20 Razique Mahroua <razique.mahroua at gmail.com>
>>>>
>>>>> It's getting really weird…
>>>>> Now, can you run a live migration from the command line?
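>>>>>
>>>>> (If it helps, I think nova hypervisor-list run as admin shows the exact
>>>>> hostnames the scheduler knows about -- if I remember right.)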
>>>>>
>>>>> On 20 August 2013 at 00:20, Guilherme Russi <luisguilherme.cr at gmail.com>
>>>>> wrote:
>>>>>
>>>>> Yes:
>>>>>
>>>>> nova-manage service list
>>>>> Binary            Host      Zone      Status   State  Updated_At
>>>>> nova-cert         hemera    internal  enabled  :-)    2013-08-19 22:20:14
>>>>> nova-scheduler    hemera    internal  enabled  :-)    2013-08-19 22:20:14
>>>>> nova-consoleauth  hemera    internal  enabled  :-)    2013-08-19 22:20:17
>>>>> nova-conductor    hemera    internal  enabled  :-)    2013-08-19 22:20:14
>>>>> nova-compute      caos      nova      enabled  :-)    2013-08-19 22:20:17
>>>>> nova-compute      tiresias  nova      enabled  :-)    2013-08-19 22:20:17
>>>>>
>>>>>
>>>>>
>>>>> 2013/8/19 Razique Mahroua <razique.mahroua at gmail.com>
>>>>>
>>>>>> Can you run nova-manage service list?
>>>>>>
>>>>>> Razique Mahroua - Nuage & Co
>>>>>> razique.mahroua at gmail.com
>>>>>> Tel : +33 9 72 37 94 15
>>>>>>
>>>>>>
>>>>>> On 19 August 2013 at 22:27, Guilherme Russi <luisguilherme.cr at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>> Hello again Razique, I was trying to migrate through the terminal
>>>>>> and this is what I get on my controller node:
>>>>>>
>>>>>> nova live-migration f18b6d2b-f685-4476-a7c0-e735db5113cf 192.168.3.3
>>>>>> ERROR: Compute service of 192.168.3.3 is unavailable at this time.
>>>>>> (HTTP 400) (Request-ID: req-454960fb-6652-46ed-9080-6d23b384cbb3)
>>>>>>
>>>>>> Haven't found anything yet. Any ideas?
>>>>>>
>>>>>> Regards.
>>>>>>
>>>>>> Guilherme.
>>>>>>
>>>>>>
>>>>>> 2013/8/19 Guilherme Russi <luisguilherme.cr at gmail.com>
>>>>>>
>>>>>>> I'm migrating it through the dashboard with the admin user. Is that
>>>>>>> wrong?
>>>>>>>
>>>>>>> Thank you.
>>>>>>>
>>>>>>> Guilherme.
>>>>>>>
>>>>>>>
>>>>>>> 2013/8/19 Razique Mahroua <razique.mahroua at gmail.com>
>>>>>>>
>>>>>>>> Quick question: how do you migrate the instance? And does it
>>>>>>>> belong to the tenant?
>>>>>>>>
>>>>>>>>
>>>>>>>> On 19 August 2013 at 15:36, Guilherme Russi <
>>>>>>>> luisguilherme.cr at gmail.com> wrote:
>>>>>>>>
>>>>>>>> The VM looks to be there:
>>>>>>>>
>>>>>>>> du -h instances/
>>>>>>>> 4,0K instances/keys
>>>>>>>> 1,9G instances/_base
>>>>>>>> 0 instances/locks
>>>>>>>> 310M instances/edca1460-9c97-475e-997b-3266b235e797
>>>>>>>> 260M instances/0fe59b28-04a3-41bb-9926-b7d139a70548
>>>>>>>> 323M instances/aeda1c56-72ba-40a7-9857-01a622540505
>>>>>>>> 7,0M instances/f4af662d-ae55-4346-82cf-a997192706b5
>>>>>>>> 264M instances/483f98e3-8ef5-43e2-8c3a-def55abdabcd
>>>>>>>> 268M instances/eeb81e32-32ce-49fa-8249-afc572f26233
>>>>>>>> 268M instances/c0224f0d-d7e8-4621-8c1a-af68740a3267
>>>>>>>> 188M instances/72ec37a3-b209-4729-b628-005fdcea5a3c
>>>>>>>> 12M instances/2c7c123b-456a-4c70-a991-80efee541324
>>>>>>>> 11M instances/c9e1c5ed-a108-4196-bfbc-24495e2e71bd
>>>>>>>> 6,5M instances/96d8f2de-a72d-40b5-b083-1f5e511f91ce
>>>>>>>> 531M instances/1b35a4a6-e158-48d3-a804-44b05ec171d0
>>>>>>>> 529M instances/1a2b6146-6ef0-4d76-a603-e4715b4dfb62
>>>>>>>> 514M instances/f6cb9f81-9281-419f-9e20-adaa6a07cea3
>>>>>>>> 2,7G instances/75e479cf-37a4-4511-b2a5-d4f3027702d4_resize
>>>>>>>> 2,4G instances/75e479cf-37a4-4511-b2a5-d4f3027702d4
>>>>>>>> 11G instances/
>>>>>>>>
>>>>>>>>
>>>>>>>> It's still running, with no error message from the dashboard (it is
>>>>>>>> showing Status: Resize/Migrate - Task: Resizing or Migrating -
>>>>>>>> Power State: Running). The logs are not showing that permission
>>>>>>>> error anymore. Should I wait to see if it migrates, or will it return
>>>>>>>> another error status? One thing is true: the machine is really slow now :P
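>>>>>>>>
>>>>>>>> (From what I've read, a migration started from the dashboard is a cold
>>>>>>>> migrate, so once it reaches the confirmation state it still has to be
>>>>>>>> confirmed before it settles, e.g.:
>>>>>>>>
>>>>>>>> nova resize-confirm <instance-id>
>>>>>>>>
>>>>>>>> where <instance-id> is a placeholder -- is that right?)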
>>>>>>>>
>>>>>>>> Regards.
>>>>>>>>
>>>>>>>> Guilherme.
>>>>>>>>
>>>>>>>>
>>>>>>>> 2013/8/19 Razique Mahroua <razique.mahroua at gmail.com>
>>>>>>>>
>>>>>>>>> Looks like the file doesn't exist.
>>>>>>>>> Can you spawn a new instance and try to migrate it again?
>>>>>>>>> thanks
>>>>>>>>>
>>>>>>>>> Razique Mahroua - Nuage & Co
>>>>>>>>> razique.mahroua at gmail.com
>>>>>>>>> Tel : +33 9 72 37 94 15
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 19 August 2013 at 13:41, Guilherme Russi <
>>>>>>>>> luisguilherme.cr at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> Hello Peter,
>>>>>>>>>
>>>>>>>>> Thank you for your help too :)
>>>>>>>>> I've changed the permissions and now I'm getting another error
>>>>>>>>> when I try to migrate:
>>>>>>>>>
>>>>>>>>> cat /var/log/nova/nova-compute.log | grep -E ERROR
>>>>>>>>> 2013-08-19 08:17:19.990 12667 ERROR nova.manager [-] Error during
>>>>>>>>> ComputeManager._run_image_cache_manager_pass: Unexpected error while
>>>>>>>>> running command.
>>>>>>>>> 2013-08-19 08:23:40.309 12667 ERROR nova.network.quantumv2 [-]
>>>>>>>>> _get_auth_token() failed
>>>>>>>>> 2013-08-19 08:24:27.013 12667 ERROR nova.virt.libvirt.driver [-]
>>>>>>>>> Getting disk size of instance-000000bc: [Errno 2] No such file or
>>>>>>>>> directory:
>>>>>>>>> '/var/lib/nova/instances/75e479cf-37a4-4511-b2a5-d4f3027702d4/disk'
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> cat /var/log/nova/nova-compute.log | grep -E TRACE
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager Traceback (most
>>>>>>>>> recent call last):
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/manager.py", line 241, in
>>>>>>>>> periodic_tasks
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager task(self,
>>>>>>>>> context)
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4020, in
>>>>>>>>> _run_image_cache_manager_pass
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager
>>>>>>>>> self.driver.manage_image_cache(context, filtered_instances)
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3401,
>>>>>>>>> in manage_image_cache
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager
>>>>>>>>> self.image_cache_manager.verify_base_images(context, all_instances)
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagecache.py", line
>>>>>>>>> 588, in verify_base_images
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager
>>>>>>>>> self._handle_base_image(img, base_file)
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagecache.py", line
>>>>>>>>> 553, in _handle_base_image
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager
>>>>>>>>> virtutils.chown(base_file, os.getuid())
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 449, in
>>>>>>>>> chown
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager
>>>>>>>>> execute('chown', owner, path, run_as_root=True)
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 47, in
>>>>>>>>> execute
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager return
>>>>>>>>> utils.execute(*args, **kwargs)
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/utils.py", line 239, in execute
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager cmd='
>>>>>>>>> '.join(cmd))
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager
>>>>>>>>> ProcessExecutionError: Unexpected error while running command.
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager Command: sudo
>>>>>>>>> nova-rootwrap /etc/nova/rootwrap.conf chown 123
>>>>>>>>> /var/lib/nova/instances/_base/b0d90eb60f048f6de16c3efbc0eb4c16cbac77a6
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager Exit code: 1
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager Stdout: ''
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager Stderr:
>>>>>>>>> 'Traceback (most recent call last):\n File "/usr/bin/nova-rootwrap", line
>>>>>>>>> 108, in <module>\n os.getlogin(),
>>>>>>>>> pwd.getpwuid(os.getuid())[0],\nOSError: [Errno 22] Invalid argument\n'
>>>>>>>>> 2013-08-19 08:17:19.990 12667 TRACE nova.manager
>>>>>>>>> 2013-08-19 08:23:40.309 12667 TRACE nova.network.quantumv2
>>>>>>>>> Traceback (most recent call last):
>>>>>>>>> 2013-08-19 08:23:40.309 12667 TRACE nova.network.quantumv2 File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/__init__.py", line
>>>>>>>>> 40, in _get_auth_token
>>>>>>>>> 2013-08-19 08:23:40.309 12667 TRACE nova.network.quantumv2
>>>>>>>>> httpclient.authenticate()
>>>>>>>>> 2013-08-19 08:23:40.309 12667 TRACE nova.network.quantumv2 File
>>>>>>>>> "/usr/lib/python2.7/dist-packages/quantumclient/client.py", line 198, in
>>>>>>>>> authenticate
>>>>>>>>> 2013-08-19 08:23:40.309 12667 TRACE nova.network.quantumv2
>>>>>>>>> raise exceptions.Unauthorized(message=body)
>>>>>>>>> 2013-08-19 08:23:40.309 12667 TRACE nova.network.quantumv2
>>>>>>>>> Unauthorized: Request Timeout
>>>>>>>>> 2013-08-19 08:23:40.309 12667 TRACE nova.network.quantumv2
>>>>>>>>>
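>>>>>>>>> The _get_auth_token() failure with "Request Timeout" looks to me more
>>>>>>>>> like the compute node timing out while reaching the auth service than
>>>>>>>>> a credentials problem, so I'll double-check connectivity from caos by
>>>>>>>>> hand, roughly like this (the controller address is a placeholder):
>>>>>>>>>
>>>>>>>>> curl http://<controller-ip>:5000/v2.0/   # is keystone reachable?
>>>>>>>>> curl http://<controller-ip>:9696/        # is quantum-server reachable?
>>>>>>>>>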
>>>>>>>>> Any ideas?
>>>>>>>>>
>>>>>>>>> Regards.
>>>>>>>>>
>>>>>>>>> Guilherme.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2013/8/19 Peter Funk <pf at artcom-gmbh.de>
>>>>>>>>>
>>>>>>>>>> Guilherme Russi wrote on 16.08.2013 at 09:53:
>>>>>>>>>> > Here are the permissions on my directory:
>>>>>>>>>> >
>>>>>>>>>> > drwxrwxrwx 2 nova nova 76 Ago 15 14:54 .ssh
>>>>>>>>>> >
>>>>>>>>>> > I ran chmod a+xwr on the folder on all machines; I did it
>>>>>>>>>> > because I couldn't find the reason I was not allowed to
>>>>>>>>>> > migrate. Could that be the cause?
>>>>>>>>>>
>>>>>>>>>> For SSH to work, the permissions of $HOME/.ssh should look like
>>>>>>>>>> this:
>>>>>>>>>> drwx------ 2 nova nova 76 Ago 15 14:54 .ssh
>>>>>>>>>>
>>>>>>>>>> Use:
>>>>>>>>>> chmod -R go-rwx .ssh
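>>>>>>>>>>
>>>>>>>>>> For reference, a typical safe layout would be (just a sketch,
>>>>>>>>>> assuming the usual key file names, run as the nova user on every
>>>>>>>>>> compute node):
>>>>>>>>>>
>>>>>>>>>> chmod 700 $HOME/.ssh
>>>>>>>>>> chmod 600 $HOME/.ssh/id_rsa $HOME/.ssh/authorized_keys
>>>>>>>>>> chmod 644 $HOME/.ssh/id_rsa.pub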
>>>>>>>>>>
>>>>>>>>>> Regards, Peter Funk
>>>>>>>>>> --
>>>>>>>>>> Peter Funk, home: ✉Oldenburger Str.86, D-27777 Ganderkesee
>>>>>>>>>> mobile:+49-179-640-8878 phone:+49-421-20419-0 <
>>>>>>>>>> http://www.artcom-gmbh.de/>
>>>>>>>>>> office: ArtCom GmbH, ✉Haferwende 2, D-28357 Bremen, Germany
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>