Hi John,

I found a solution to the problem I asked you for help with, so you don't need to look into it any further.
I will also comment on the bug I logged (https://bugs.launchpad.net/cinder/+bug/1280367), but I don't think I can close it myself.

The solution for HA Cinder is the "host" parameter in /etc/cinder/cinder.conf on the machines that run the cinder-volume service.

If you set this parameter to the same string on all the hosts running the cinder-volume service, then every host is considered the parent of any volume in the Ceph cluster, so any host can create a volume from an already existing volume.
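For example, a minimal sketch of what I mean (the value below is just an illustrative string I made up; the only requirement is that it is identical on every cinder-volume node):

# /etc/cinder/cinder.conf on every node running cinder-volume
[DEFAULT]
# arbitrary shared identifier; must be the same string on all cinder-volume hosts
host = cinder-cluster-1

After changing it you of course have to restart the cinder-volume services for the new value to take effect.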
Problem fixed. :) I will log a documentation bug against both Ceph and OpenStack, because HA for Cinder depends on this parameter.


Thanks a lot,
Gabriel


On Wednesday, February 12, 2014 7:07 PM, John Griffith <john.griffith@solidfire.com> wrote:
On Wed, Feb 12, 2014 at 10:57 AM, Staicu Gabriel
<gabriel_staicu@yahoo.com> wrote:
> Thanks for the answer, John.
> This is really tricky and I will try to explain why:
> You were right. The snap volume itself was created from a volume on opstck01
> when it was up.
> This is the offending snap. Here is the proof:
> root@opstck10:~# cinder snapshot-show 30093123-0da2-4864-b8e6-87e023e842a4
> +--------------------------------------------+--------------------------------------+
> |                  Property                  |                Value                 |
> +--------------------------------------------+--------------------------------------+
> |                 created_at                 |      2014-02-12T13:20:41.000000      |
> |            display_description             |                                      |
> |                display_name                |          cirros-0.3.1-snap           |
> |                     id                     | 30093123-0da2-4864-b8e6-87e023e842a4 |
> |                  metadata                  |                  {}                  |
> |  os-extended-snapshot-attributes:progress  |                 100%                 |
> | os-extended-snapshot-attributes:project_id |   8c25ff44225f4e78ab3f526d99c1b7e1   |
> |                    size                    |                  1                   |
> |                   status                   |              available               |
> |                 volume_id                  | 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 |
> +--------------------------------------------+--------------------------------------+
>
> root@opstck10:~# cinder show 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9
> +--------------------------------+--------------------------------------+
> |            Property            |                Value                 |
> +--------------------------------+--------------------------------------+
> |          attachments           |                  []                  |
> |       availability_zone        |                 nova                 |
> |            bootable            |                false                 |
> |           created_at           |      2014-02-11T15:33:58.000000      |
> |      display_description       |                                      |
> |          display_name          |             cirros-0.3.1             |
> |               id               | 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 |
> |            metadata            |       {u'readonly': u'False'}        |
> |     os-vol-host-attr:host      |               opstck01               |
> | os-vol-mig-status-attr:migstat |                 None                 |
> | os-vol-mig-status-attr:name_id |                 None                 |
> |  os-vol-tenant-attr:tenant_id  |   8c25ff44225f4e78ab3f526d99c1b7e1   |
> |              size              |                  1                   |
> |          snapshot_id           |                 None                 |
> |          source_volid          |                 None                 |
> |             status             |              available               |
> |          volume_type           |                 None                 |
> +--------------------------------+--------------------------------------+
>
> And now follows the interesting part. I am using Ceph as a backend for
> Cinder and I have multiple cinder-volume agents for HA reasons. The volumes
> themselves are available, but the agents can fail. How can I overcome the
> limitation that a volume has to be created on the same agent the snapshot
> itself was created on?
>
> Thanks a lot,
> Gabriel
>
>
> On Wednesday, February 12, 2014 6:20 PM, John Griffith
> <john.griffith@solidfire.com> wrote:
> On Wed, Feb 12, 2014 at 3:24 AM, Staicu Gabriel
> <gabriel_staicu@yahoo.com> wrote:
>>
>> Hi,
>>
>> I have a setup with OpenStack Havana on Ubuntu Precise with multiple
>> cinder-scheduler and cinder-volume services.
>> root@opstck10:~# cinder service-list
>> +------------------+----------+------+---------+-------+----------------------------+
>> |      Binary      |   Host   | Zone |  Status | State |         Updated_at         |
>> +------------------+----------+------+---------+-------+----------------------------+
>> | cinder-scheduler | opstck08 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>> | cinder-scheduler | opstck09 | nova | enabled |   up  | 2014-02-12T10:08:29.000000 |
>> | cinder-scheduler | opstck10 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>> |  cinder-volume   | opstck01 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
>> |  cinder-volume   | opstck04 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
>> |  cinder-volume   | opstck05 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
>> |  cinder-volume   | opstck08 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>> |  cinder-volume   | opstck09 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>> |  cinder-volume   | opstck10 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
>> +------------------+----------+------+---------+-------+----------------------------+
>>
>> When I try to create a new instance from a volume snapshot, it keeps
>> choosing opstck01, on which cinder-volume is down, to create the volume.
>> Has anyone encountered the same problem?
>> Thanks
>>
 target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br clear="none">>> Post to    : <a shape="rect" ymailto="mailto:openstack@lists.openstack.org" href="mailto:openstack@lists.openstack.org">openstack@lists.openstack.org</a><br clear="none">>> Unsubscribe :<br clear="none">>> <a shape="rect" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br clear="none">>><br clear="none">><br clear="none">> Which node is the parent volume and snapshot on?  In the case of<br clear="none">> create from source-volume and create from snapshot these need to be<br clear="none">> created on the same node as the source or snapshot.  There's currently<br clear="none">> a bug ([1]), where we don't check/enforce the type settings to make<br clear="none">> sure these match up. 
 That's in progress right now and will be<br clear="none">> backported.<br clear="none">><br clear="none">> [1]: <a shape="rect" href="https://bugs.launchpad.net/cinder/+bug/1276787" target="_blank">https://bugs.launchpad.net/cinder/+bug/1276787</a><br clear="none">><br clear="none">><br clear="none">> Thanks,<br clear="none">><br clear="none">> John<br clear="none">><br clear="none">><br clear="none">><br clear="none">Glad we answered the question... sad that you've hit another little<br clear="none">problem (bug) in how we do things here.<br clear="none"><br clear="none">Since you're HA we should be able to discern that the volume is<br clear="none">available via multiple hosts.  I'll have to look at that more closely,<br clear="none">I believe this is might be a known issue.  We need to open a bug on<br clear="none">this and try and get it sorted out.<br clear="none"><br clear="none">Would you be willing/able
 to provide some details on how you did your<br clear="none">HA config?  Maybe log a bug in launchpad with the details and I can<br clear="none">try to look at fixing things up here shortly?<div class="yqt0734962444" id="yqtfd16525"><br clear="none"><br clear="none">Thanks,<br clear="none">John<br clear="none"></div><br><br></div>  </div> </div>  </div> </div></body></html>