[Openstack] [ceph-users] Openstack with Ceph, boot from volume
Josh Durgin
josh.durgin at inktank.com
Thu May 30 21:24:19 UTC 2013
On 05/30/2013 02:18 PM, Martin Mailand wrote:
> Hi Josh,
>
> that's working.
>
> I have two more things.
> 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
> update your configuration to the new path. What is the new path?
cinder.volume.drivers.rbd.RBDDriver
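
For example, a minimal rbd section in cinder.conf might look like this
(the pool, user, and secret uuid below are examples; adjust them to
your setup):

  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=volumes
  rbd_user=cinder
  rbd_secret_uuid=<uuid of your libvirt secret>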
> 2. I have show_image_direct_url=True in glance-api.conf, but the
> volumes are not clones of the original images in the images pool.
Set glance_api_version=2 in cinder.conf. The default was changed in
Grizzly.
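
That is, in the [DEFAULT] section of cinder.conf:

  glance_api_version=2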
> That's what I did:
>
> root@controller:~/vm_images# !1228
> glance add name="Precise Server" is_public=true container_format=ovf
> disk_format=raw < ./precise-server-cloudimg-amd64-disk1.raw
> Added new image with ID: 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8
> root@controller:~/vm_images# rbd -p images -l ls
> NAME                                       SIZE   PARENT FMT PROT LOCK
> 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8       2048M         2
> 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap  2048M         2   yes
> root@controller:~/vm_images# cinder create --image-id
> 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 --display-name volcli1 10
> +---------------------+--------------------------------------+
> |       Property      |                Value                 |
> +---------------------+--------------------------------------+
> |     attachments     |                  []                  |
> |  availability_zone  |                 nova                 |
> |       bootable      |                false                 |
> |      created_at     |      2013-05-30T21:08:16.506094      |
> | display_description |                 None                 |
> |     display_name    |               volcli1                |
> |          id         | 34838911-6613-4140-93e0-e1565054a2d3 |
> |       image_id      | 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 |
> |       metadata      |                  {}                  |
> |         size        |                  10                  |
> |     snapshot_id     |                 None                 |
> |     source_volid    |                 None                 |
> |        status       |               creating               |
> |     volume_type     |                 None                 |
> +---------------------+--------------------------------------+
> root@controller:~/vm_images# cinder list
> +--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
> |                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
> +--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
> | 34838911-6613-4140-93e0-e1565054a2d3 | downloading |   volcli1    |  10  |     None    |  false   |             |
> +--------------------------------------+-------------+--------------+------+-------------+----------+-------------+
> root@controller:~/vm_images# rbd -p volumes -l ls
> NAME                                         SIZE    PARENT FMT PROT LOCK
> volume-34838911-6613-4140-93e0-e1565054a2d3  10240M         2
>
> root@controller:~/vm_images#
>
> -martin
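
Note that the volume above has nothing in the PARENT column, which
means it was copied from the image rather than cloned. Once
glance_api_version=2 is in place, a quick way to check a new volume
is (illustrative, using your ids):

  rbd info volumes/volume-34838911-6613-4140-93e0-e1565054a2d3

which should then report a parent: line pointing at
images/6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap.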
>
> On 30.05.2013 22:56, Josh Durgin wrote:
>> On 05/30/2013 01:50 PM, Martin Mailand wrote:
>>> Hi Josh,
>>>
>>> I found the problem: nova-compute tries to connect to the publicurl
>>> (xxx.xxx.240.10) of the keystone endpoints, and this IP is not
>>> reachable from the management network.
>>> I thought the internalurl was the one used for internal communication
>>> between the OpenStack components, and the publicurl was the IP for
>>> "customers" of the cluster?
>>> Am I wrong here?
>>
>> I'd expect that too, but it's determined in nova by the
>> cinder_catalog_info option, which defaults to volume:cinder:publicURL.
>>
>> You can also override it explicitly with
>> cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
>> in your nova.conf.
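>>
>> Alternatively, you could point nova at the internal endpoint from the
>> keystone catalog instead of the public one:
>>
>>   cinder_catalog_info=volume:cinder:internalURL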
>>
>> Josh
>>
>>> -martin
>>>
>>> On 30.05.2013 22:22, Martin Mailand wrote:
>>>> Hi Josh,
>>>>
>>>> On 30.05.2013 21:17, Josh Durgin wrote:
>>>>> It's trying to talk to the cinder api, and failing to connect at all.
>>>>> Perhaps there's a firewall preventing that on the compute host, or
>>>>> it's trying to use the wrong endpoint for cinder (check the keystone
>>>>> service and endpoint tables for the volume service).
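>>>>>
>>>>> For example, something like:
>>>>>
>>>>>   keystone endpoint-list | grep 8776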
>>>>
>>>> the keystone endpoint looks like this:
>>>>
>>>> id:          dd21ed74a9ac4744b2ea498609f0a86e
>>>> region:      RegionOne
>>>> publicurl:   http://xxx.xxx.240.10:8776/v1/$(tenant_id)s
>>>> internalurl: http://192.168.192.2:8776/v1/$(tenant_id)s
>>>> adminurl:    http://192.168.192.2:8776/v1/$(tenant_id)s
>>>> service_id:  5ad684c5a0154c13b54283b01744181b
>>>>
>>>> where 192.168.192.2 is the IP from the controller node.
>>>>
>>>> And from the compute node, telnet 192.168.192.2 8776 works.
>>>>
>>>> -martin