[Openstack] Fwd: How should I spawn an instance using ceph

Tom Walsh expresswebsys at gmail.com
Wed Apr 6 15:31:34 UTC 2016


Yang,

> what do you mean by "usernames for CephX don't match"

I meant that the Ceph documentation page I linked doesn't use the same
usernames that the OpenStack documentation uses (cinder versus
volumes). Since you are using your own usernames, the point is moot.

Raw is the correct format. It is required to allow copy-on-write (CoW)
volumes, which is what you are trying to do.
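
If you ever need to convert an image to raw first, qemu-img does it; a
sketch, assuming a qcow2 source (the filenames are placeholders):

qemu-img convert -f qcow2 -O raw centos.qcow2 centos.raw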

Your configuration looks correct, and I don't see anything that would
indicate a problem. There are a few configuration directives in yours
that I don't have in mine:

cinder.conf:

volume_backend_name = rbd

glance-api.conf:

stores = glance.store.rbd.Store

I have:
stores = rbd

(I am not sure which is correct.)

My next suggestion is to turn on debug in the configs of both services
and watch the logs carefully while you spawn a new instance from an
image. You should see log entries that indicate where the problem is.
That is how I realized that my images store didn't have the correct
permissions for the volume user to access it.
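
In both glance-api.conf and cinder.conf that is just the following
(restart the services afterwards):

[DEFAULT]
debug = True

and then tail the logs while spawning, e.g. (log paths vary by distro):

tail -f /var/log/cinder/volume.log /var/log/glance/api.log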

I hope that helps.


On Mon, Apr 4, 2016 at 9:22 AM, yang sheng <forsaks.30 at gmail.com> wrote:
>
> Hi Tom
>
> thanks for your reply!
>
> what do you mean by "usernames for CephX don't match" ?
>
> I checked my configuration files and updated ceph user permission.
>
> In my deployment, I defined 2 pools in my ceph cluster, imagesliberty and
> volumesliberty. I have now granted both users (glanceliberty and
> cinderliberty) full access to both pools.
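>
> (For reference, caps like those can be set with something like
> "ceph auth caps client.cinderliberty mon 'allow r' osd 'allow *'",
> and similarly for glanceliberty.)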
>
> $ ceph auth list
>
> client.cinderliberty
> key: AQA4v7tW823HLhAAmqf/rbxCbQgyfrfFJMTxDQ==
> caps: [mon] allow r
> caps: [osd] allow *
> client.glanceliberty
> key: AQBHv7tWY5ofNxAAIueTUXRUs2lJWkfjiJkLKw==
> caps: [mon] allow r
> caps: [osd] allow *
>
> When I run the "glance image-show" command, I can see my image URL, and
> the format is raw (I read somewhere that the format has to be raw):
>
> $ glance image-show 595ee912-993c-4878-a833-7bdffda1f692
> +------------------+----------------------------------------------------------------------------------+
> | Property         | Value                                                                            |
> +------------------+----------------------------------------------------------------------------------+
> | checksum         | 1ee004d7fd75fd518ab5c8dba589ba73                                                 |
> | container_format | bare                                                                             |
> | created_at       | 2016-03-29T19:44:39Z                                                             |
> | direct_url       | rbd://2e906379-f211-4329-8faf-a8e7600b8418/imagesliberty/595ee912-993c-4878-a833-7bdffda1f692/snap |
> | disk_format      | raw                                                                              |
> | id               | 595ee912-993c-4878-a833-7bdffda1f692                                             |
> | min_disk         | 0                                                                                |
> | min_ram          | 0                                                                                |
> | name             | centosraw                                                                        |
> | owner            | 0f861e423bc248f3896dc17b5bc3f140                                                 |
> | protected        | False                                                                            |
> | size             | 10737418240                                                                      |
> | status           | active                                                                           |
> | tags             | []                                                                               |
> | updated_at       | 2016-03-29T19:52:05Z                                                             |
> | virtual_size     | None                                                                             |
> | visibility       | public                                                                           |
> +------------------+----------------------------------------------------------------------------------+
>
> But cinder still has to download and re-upload the image. I am just
> wondering whether there is anything I missed or misconfigured?
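>
> (My understanding is that a successful CoW clone would show up as a
> parent pointer on the volume, e.g. "rbd info
> volumesliberty/volume-<uuid>" listing a parent in imagesliberty.)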
>
>
> Here is my glance-api.conf:
>
> [database]
>
> connection = mysql://glanceliberty:b7828017cd0e939c3625@vsusnjhhdiosdbwvip/glanceliberty
>
> [keystone_authtoken]
>
> auth_uri = http://vsusnjhhdiosconvip:5000
> auth_url = http://vsusnjhhdiosconvip:35357
> auth_plugin = password
> project_domain_id = default
> user_domain_id = default
> project_name = service
> username = glanceliberty
> password = 91f0bffdb95a11432eeb
>
> [paste_deploy]
>
> flavor = keystone
>
>
> [DEFAULT]
>
> notification_driver = noop
> verbose = True
> registry_host=vsusnjhhdiosconvip
> show_image_direct_url = True
>
> [glance_store]
> stores = glance.store.rbd.Store
> default_store = rbd
> rbd_store_pool = imagesliberty
> rbd_store_user = glanceliberty
> rbd_store_ceph_conf = /etc/ceph/ceph_glance.conf
> rbd_store_chunk_size = 8
>
> [oslo_messaging_rabbit]
> rabbit_hosts=psusnjhhdlc7ioscon001:5672,psusnjhhdlc7ioscon002:5672
> rabbit_retry_interval=1
> rabbit_retry_backoff=2
> rabbit_max_retries=0
> rabbit_durable_queues=true
> rabbit_ha_queues=true
> rabbit_userid = osliberty
> rabbit_password = 8854da21c3881e45a269
>
> and my cinder.conf file:
>
> [database]
>
> connection = mysql://cinderliberty:a679ac3149ead0562135@vsusnjhhdiosdbwvip/cinderliberty
>
> [DEFAULT]
>
> rpc_backend = rabbit
> auth_strategy = keystone
> my_ip = 192.168.2.12
> verbose = True
>
> [paste_deploy]
>
> flavor = keystone
>
> [oslo_messaging_rabbit]
> rabbit_hosts=psusnjhhdlc7ioscon001:5672,psusnjhhdlc7ioscon002:5672
> rabbit_retry_interval=1
> rabbit_retry_backoff=2
> rabbit_max_retries=0
> rabbit_durable_queues=true
> rabbit_ha_queues=true
> rabbit_userid = osliberty
> rabbit_password = 8854da21c3881e45a269
>
> [keystone_authtoken]
>
> auth_uri = http://vsusnjhhdiosconvip:5000
> auth_url = http://vsusnjhhdiosconvip:35357
> auth_plugin = password
> project_domain_id = default
> user_domain_id = default
> project_name = service
> username = cinderliberty
> password = fb11f7fc97c40a51616b
>
> [oslo_concurrency]
>
> lock_path = /var/lib/cinder/tmp
>
> [rbd]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> rbd_pool = volumesliberty
> volume_backend_name = rbd
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot = false
> rbd_max_clone_depth = 5
> rbd_store_chunk_size = 4
> rados_connect_timeout = -1
> glance_api_version = 2
> rbd_user = cinderliberty
> rbd_secret_uuid = 5279328b-0f31-4a69-99bc-75ad2637a946
>
>
>
>
>
>
> On Sat, Mar 26, 2016 at 1:48 PM, Tom Walsh
> <expresswebsys+openstack at gmail.com> wrote:
>>
>> Yang,
>>
>> If your Glance images are stored in the same Ceph cluster as your
>> Cinder volumes, then you should be able to do copy-on-write instance
>> cloning. Basically, the Glance image is passed to Cinder as a snapshot
>> pointer and you boot from that. Boot times with this method are very
>> fast, typically less than 30 seconds.
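>>
>> A minimal sketch of booting that way with the nova CLI (the flavor,
>> volume size, and names are placeholders):
>>
>> nova boot --flavor m1.small \
>>   --block-device source=image,id=<image-uuid>,dest=volume,size=10,bootindex=0 \
>>   cow-test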
>>
>> There are a few things you need to make sure you have in place though.
>>
>> 1) If you are using Ceph authentication (CephX) then you must make
>> sure that your pool users have the correct permissions to access the
>> other pools. In our setup we allow rwx on the images pool from the
>> cinder user "client.volumes".
>>
>> 2) You must tell Glance, in glance-api.conf, to provide a snapshot
>> URL instead of copying the entire image to the volume:
>>
>> show_image_direct_url = True
>>
>> There is one minor gotcha with this method though. Once you create an
>> instance volume from an image, you can no longer remove that image
>> from Glance for the lifetime of any Cinder volume based on it. We set
>> our images to be "protected" for that reason.
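>>
>> e.g. something like this, with the unified CLI (the UUID is a
>> placeholder):
>>
>> openstack image set --protected <image-uuid>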
>>
>> This page provided by Ceph covers this information as well, but some
>> of it doesn't work right in Liberty, and the values it uses don't
>> match up with other RBD guides provided by OpenStack for getting Ceph
>> working (the CephX usernames don't match), so don't just copy and
>> paste from it:
>> http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>>
>> Hope that helps.
>>
>> Tom Walsh
>> ExpressHosting
>> https://expresshosting.net/
>>
>> On Fri, Mar 25, 2016 at 12:30 PM, yang sheng <forsaks.30 at gmail.com> wrote:
>> > Hi All
>> >
>> > I am new to openstack. I just deployed OpenStack Liberty using ceph as
>> > the cinder and glance backend.
>> >
>> > I have some images (raw format, about 10G) in glance (stored in ceph).
>> >
>> > I tried 2 different methods to spawn the instance.
>> >
>> > Because my images are huge, if I spawn an instance by creating a
>> > volume from an image and booting from that volume
>> > (http://docs.openstack.org/user-guide/cli_nova_launch_instance_from_volume.html),
>> > cinder-volume will download the entire image from glance (also in
>> > ceph). The instance goes into error status after about 3 minutes (an
>> > internal time-out mechanism?), reporting a block device mapping
>> > problem, even though cinder is still creating the volume (it takes
>> > about 10 minutes for cinder to download the image, upload it, and
>> > create the volume).
>> >
>> > So I came up with another method: I create a volume from glance first
>> > (also about 10 minutes). When I want to spawn a new instance, I just
>> > clone that volume (which is instant), then boot from the clone
>> > directly.
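>> >
>> > The clone step is something like this, assuming the cinder v2 client
>> > (the id and size are placeholders):
>> >
>> > cinder create --source-volid <base-volume-id> --name instance-vol 10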
>> >
>> > My glance and cinder are using the same ceph cluster (different pools).
>> >
>> > I downloaded some images from OpenStack
>> > (http://docs.openstack.org/image-guide/obtain-images.html). Since they
>> > are not that large (most are no more than 1 GB), the first method is
>> > fine for them.
>> >
>> > Just wondering why cinder-volume has to download the image. Is there
>> > any way to bypass this process? Or can ceph handle this internally?
>> >
>> > Thanks for any advice!
>> >
>> >
>> >
>> >
>
>
>
>
>



