<div dir="ltr"><div class="gmail_quote"><br><div dir="ltr">Hi Tom<div><br></div><div>thanks for your reply! </div><div><br></div><div>what do you mean by "<span style="font-size:12.8px">usernames for CephX don't match" ?</span></div><div><span style="font-size:12.8px"><br></span></div><div>I checked my configuration files and updated ceph user permission. <br></div><div><br></div><div><div>In my deployment, I defined 2 pools in my ceph cluster, imagesliberty and volumesliberty. Now I granted full access for both users(glanceliberty and cinderliberty) to both pools.</div><div><br></div><div>$ ceph auth list</div><div><br></div><div><div>client.cinderliberty</div><div><span style="white-space:pre-wrap">    </span>key: AQA4v7tW823HLhAAmqf/rbxCbQgyfrfFJMTxDQ==</div><div><span style="white-space:pre-wrap">    </span>caps: [mon] allow r</div><div><span style="white-space:pre-wrap">      </span>caps: [osd] allow *</div><div>client.glanceliberty</div><div><span style="white-space:pre-wrap">   </span>key: AQBHv7tWY5ofNxAAIueTUXRUs2lJWkfjiJkLKw==</div><div><span style="white-space:pre-wrap">    </span>caps: [mon] allow r</div><div><span style="white-space:pre-wrap">      </span>caps: [osd] allow *</div></div></div><div><br></div><div>When I ran "glance image-show" command, I can see my image url and the format is raw (I read somewhere, saying the format has to be raw):</div><div><br></div><div>$ glance image-show 595ee912-993c-4878-a833-7bdffda1f692</div><div><div>+------------------+----------------------------------------------------------------------------------+</div><div>| Property         | Value                                                                            |</div><div>+------------------+----------------------------------------------------------------------------------+</div><div>| checksum         | 1ee004d7fd75fd518ab5c8dba589ba73                                                 |</div><div>| container_format | bare                                                                             |</div><div>| created_at       | 2016-03-29T19:44:39Z                                                             |</div><div>| direct_url       | rbd://2e906379-f211-4329-8faf-                                                   |</div><div>|                  | a8e7600b8418/imagesliberty/595ee912-993c-4878-a833-7bdffda1f692/snap             |</div><div>| disk_format      | raw                                                                              |</div><div>| id               | 595ee912-993c-4878-a833-7bdffda1f692                                             |</div><div>| min_disk         | 0                                                                                |</div><div>| min_ram          | 0                                                                                |</div><div>| name             | centosraw                                                                        |</div><div>| owner            | 0f861e423bc248f3896dc17b5bc3f140                                                 |</div><div>| protected        | False                                                                            |</div><div>| size             | 10737418240                                                                      |</div><div>| status           | active                                                                           |</div><div>| tags             | []                                                                               |</div><div>| updated_at       | 
When I run "glance image-show", I can see the image's direct URL, and
the format is raw (I read that the format has to be raw for cloning to
work):

$ glance image-show 595ee912-993c-4878-a833-7bdffda1f692
+------------------+----------------------------------------------------------------------+
| Property         | Value                                                                |
+------------------+----------------------------------------------------------------------+
| checksum         | 1ee004d7fd75fd518ab5c8dba589ba73                                     |
| container_format | bare                                                                 |
| created_at       | 2016-03-29T19:44:39Z                                                 |
| direct_url       | rbd://2e906379-f211-4329-8faf-                                       |
|                  | a8e7600b8418/imagesliberty/595ee912-993c-4878-a833-7bdffda1f692/snap |
| disk_format      | raw                                                                  |
| id               | 595ee912-993c-4878-a833-7bdffda1f692                                 |
| min_disk         | 0                                                                    |
| min_ram          | 0                                                                    |
| name             | centosraw                                                            |
| owner            | 0f861e423bc248f3896dc17b5bc3f140                                     |
| protected        | False                                                                |
| size             | 10737418240                                                          |
| status           | active                                                               |
| tags             | []                                                                   |
| updated_at       | 2016-03-29T19:52:05Z                                                 |
| virtual_size     | None                                                                 |
| visibility       | public                                                               |
+------------------+----------------------------------------------------------------------+

But Cinder still downloads and re-uploads the image when creating a
volume. Is there anything I missed or misconfigured?
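
One check I still plan to run, to confirm the image has the protected
snapshot that cloning needs and that the Cinder credentials can read it
(a sketch with my user and pool names; conf/keyring paths may need
adjusting to match my two ceph.conf files):

$ rbd --id glanceliberty -p imagesliberty snap ls 595ee912-993c-4878-a833-7bdffda1f692
$ rbd --id cinderliberty -p imagesliberty info 595ee912-993c-4878-a833-7bdffda1f692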
Here is my glance-api.conf:

[database]
connection = mysql://glanceliberty:b7828017cd0e939c3625@vsusnjhhdiosdbwvip/glanceliberty

[keystone_authtoken]
auth_uri = http://vsusnjhhdiosconvip:5000
auth_url = http://vsusnjhhdiosconvip:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glanceliberty
password = 91f0bffdb95a11432eeb

[paste_deploy]
flavor = keystone

[DEFAULT]
notification_driver = noop
verbose = True
registry_host = vsusnjhhdiosconvip
show_image_direct_url = True

[glance_store]
stores = glance.store.rbd.Store
default_store = rbd
rbd_store_pool = imagesliberty
rbd_store_user = glanceliberty
rbd_store_ceph_conf = /etc/ceph/ceph_glance.conf
rbd_store_chunk_size = 8

[oslo_messaging_rabbit]
rabbit_hosts = psusnjhhdlc7ioscon001:5672,psusnjhhdlc7ioscon002:5672
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_durable_queues = true
rabbit_ha_queues = true
rabbit_userid = osliberty
rabbit_password = 8854da21c3881e45a269

And my cinder.conf:

[database]
connection = mysql://cinderliberty:a679ac3149ead0562135@vsusnjhhdiosdbwvip/cinderliberty

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.2.12
verbose = True

[paste_deploy]
flavor = keystone

[oslo_messaging_rabbit]
rabbit_hosts = psusnjhhdlc7ioscon001:5672,psusnjhhdlc7ioscon002:5672
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_durable_queues = true
rabbit_ha_queues = true
rabbit_userid = osliberty
rabbit_password = 8854da21c3881e45a269

[keystone_authtoken]
auth_uri = http://vsusnjhhdiosconvip:5000
auth_url = http://vsusnjhhdiosconvip:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinderliberty
password = fb11f7fc97c40a51616b

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumesliberty
volume_backend_name = rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinderliberty
rbd_secret_uuid = 5279328b-0f31-4a69-99bc-75ad2637a946


On Sat, Mar 26, 2016 at 1:48 PM, Tom Walsh
<expresswebsys+openstack@gmail.com> wrote:

Yang,

If your Glance images are stored in the same Ceph cluster as your
Cinder volumes, then you should be able to do copy-on-write instance
cloning. Basically, your Glance image is passed to Cinder as a
snapshot pointer and you boot from that. Boot times with this method
are very fast, typically less than 30 seconds.

There are a few things you need to make sure you have in place, though.

1) If you are using Ceph authentication (CephX), then you must make
sure that your pool users have the correct permissions to access the
other pools. In our setup we allow rwx on the images pool from the
cinder user "client.volumes" (see the sketch after this list).

2) You must tell Glance to provide a snapshot URL instead of copying
the entire image to the volume, via glance-api.conf:

show_image_direct_url = True
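
As a sketch, the caps we grant look something like this (the pool and
user names are from our setup; yours will differ):

$ ceph auth caps client.volumes mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=images'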

There is one minor gotcha with this method, though. Once you create an
instance volume from an image, you can no longer remove that image
from Glance for the lifetime of the Cinder volume that is based on it.
We set our images to be "protected" for that reason.
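
For example, with the unified CLI (the image ID is a placeholder):

$ openstack image set --protected <image-id>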

This page provided by Ceph covers this information as well, but some
of it doesn't work right in Liberty, and the values it uses don't
match up with other RBD guides provided by OpenStack for getting Ceph
working (the CephX usernames don't match), so don't just copy and
paste from it:
http://docs.ceph.com/docs/master/rbd/rbd-openstack/

Hope that helps.

Tom Walsh
ExpressHosting
https://expresshosting.net/

On Fri, Mar 25, 2016 at 12:30 PM, yang sheng <forsaks.30@gmail.com> wrote:
> Hi All,
>
> I am new to OpenStack. I just deployed OpenStack Liberty using Ceph as
> the backend for Cinder and Glance.
>
> I have some images (raw format, about 10 GB each) in Glance, stored in
> Ceph.
>
> I tried two different methods to spawn an instance.
>
> Because my images are huge, when I spawn an instance by creating a
> volume from an image and booting from that volume
> (http://docs.openstack.org/user-guide/cli_nova_launch_instance_from_volume.html),
> cinder-volume downloads the entire image from Glance (also in Ceph).
> The instance goes into error status after about 3 minutes (an internal
> time-out mechanism?) with a block device mapping problem, while Cinder
> is still creating the volume (it takes about 10 minutes for Cinder to
> download and re-upload the image and create the volume).
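>
> The command was along these lines (the flavor and IDs here are
> placeholders):
>
>   $ nova boot --flavor m1.medium \
>       --block-device source=image,id=<image-id>,dest=volume,size=10,shutdown=remove,bootindex=0 \
>       myinstance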
>
> So I came up with another method: I create a volume from the Glance
> image first (also 10 minutes). Then, whenever I want to spawn a new
> instance, I just clone that volume (which is instant) and boot from
> the clone directly.
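>
> A sketch of that workflow (names, sizes, and IDs are placeholders):
>
>   $ cinder create --image-id <image-id> --name base-vol 10
>   $ cinder create --source-volid <base-vol-id> --name clone-01 10
>   $ nova boot --flavor m1.medium --boot-volume <clone-01-id> myinstance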
>
> My Glance and Cinder are using the same Ceph cluster (different pools).
>
> I downloaded some images from OpenStack
> (http://docs.openstack.org/image-guide/obtain-images.html). Since they
> are not that large (most are no more than 1 GB), the first method works
> fine for them.
>
> Just wondering why cinder-volume has to download the image. Is there
> any way to bypass this process? Or can Ceph handle this internally?
>
> Thanks for any advice!