[Openstack] openstack havana cinder chooses wrong host to create new volume

Staicu Gabriel gabriel_staicu at yahoo.com
Wed Feb 12 17:57:48 UTC 2014


Thanks for the answer, John.
This is really tricky and I will try to explain why.
You were right: the snapshot was indeed created from a volume on opstck01, back when that node was still up.
This is the offending snapshot. Here is the proof:
root@opstck10:~# cinder snapshot-show 30093123-0da2-4864-b8e6-87e023e842a4
+--------------------------------------------+--------------------------------------+
|                  Property                  |                Value                 |
+--------------------------------------------+--------------------------------------+
|                 created_at                 |      2014-02-12T13:20:41.000000      |
|            display_description             |                                      |
|                display_name                |          cirros-0.3.1-snap           |
|                     id                     | 30093123-0da2-4864-b8e6-87e023e842a4 |
|                  metadata                  |                  {}                  |
|  os-extended-snapshot-attributes:progress  |                 100%                 |
| os-extended-snapshot-attributes:project_id |   8c25ff44225f4e78ab3f526d99c1b7e1   |
|                    size                    |                  1                   |
|                   status                   |              available               |
|                 volume_id                  | 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 |
+--------------------------------------------+--------------------------------------+


root@opstck10:~# cinder show 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|           created_at           |      2014-02-11T15:33:58.000000      |
|      display_description       |                                      |
|          display_name          |             cirros-0.3.1             |
|               id               | 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 |
|            metadata            |       {u'readonly': u'False'}        |
|     os-vol-host-attr:host      |              opstck01               |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   8c25ff44225f4e78ab3f526d99c1b7e1   |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |              available               |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+


And now comes the interesting part. I am using Ceph as the backend for Cinder and I run multiple cinder-volume agents for HA reasons. The volumes themselves remain available (they live in Ceph); it is only the agents that can fail. How can I overcome the limitation that a new volume must be created by the same agent that created the snapshot?


Thanks a lot,
Gabriel
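
One approach that is often used for this kind of setup with a shared Ceph
backend (a sketch only, not something confirmed in this thread): make every
cinder-volume agent report the same host value in cinder.conf, so that any
live agent can pick up requests pinned to that host. The label rbd:volumes
is purely illustrative; the rbd_* options shown are the standard Havana RBD
driver settings.

# /etc/cinder/cinder.conf on every node running cinder-volume (sketch)
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>
# All agents announce themselves under one logical host, so requests
# pinned to this host can be handled by whichever agent happens to be up.
host = rbd:volumes

Note that volumes created before such a change keep the old per-node value
in os-vol-host-attr:host (as shown above for 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9),
so their host entries in the Cinder database would also have to be updated
to the shared name before create-from-snapshot stops targeting opstck01.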





On Wednesday, February 12, 2014 6:20 PM, John Griffith <john.griffith at solidfire.com> wrote:
 
On Wed, Feb 12, 2014 at 3:24 AM, Staicu Gabriel
<gabriel_staicu at yahoo.com> wrote:
>
>
> Hi,
>
> I have a setup with OpenStack Havana on Ubuntu Precise with multiple
> cinder-scheduler and cinder-volume services.
> root@opstck10:~# cinder service-list
> +------------------+----------+------+---------+-------+----------------------------+
> |      Binary      |   Host   | Zone |  Status | State |         Updated_at         |
> +------------------+----------+------+---------+-------+----------------------------+
> | cinder-scheduler | opstck08 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
> | cinder-scheduler | opstck09 | nova | enabled |   up  | 2014-02-12T10:08:29.000000 |
> | cinder-scheduler | opstck10 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
> |  cinder-volume   | opstck01 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
> |  cinder-volume   | opstck04 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
> |  cinder-volume   | opstck05 | nova | enabled |  down | 2014-02-12T09:39:09.000000 |
> |  cinder-volume   | opstck08 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
> |  cinder-volume   | opstck09 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
> |  cinder-volume   | opstck10 | nova | enabled |   up  | 2014-02-12T10:08:28.000000 |
> +------------------+----------+------+---------+-------+----------------------------+
>
> When I try to create a new instance from a volume snapshot, the scheduler
> keeps choosing opstck01 to create the volume, even though cinder-volume is
> down on that host.
> Did anyone encounter the same problem?
> Thanks
>
>
>
>

Which node are the parent volume and snapshot on?  In the case of
create from source-volume and create from snapshot, the new volume
needs to be created on the same node as the source or snapshot.
There's currently a bug ([1]) where we don't check/enforce the type
settings to make sure these match up.  A fix is in progress right now
and will be backported.

[1]: https://bugs.launchpad.net/cinder/+bug/1276787


Thanks,
John
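
For readers hitting the same issue, the pinning described above can be
traced from the CLI with the IDs quoted earlier in this thread (the grep
filters are only there for brevity; the full outputs appear above):

root@opstck10:~# cinder snapshot-show 30093123-0da2-4864-b8e6-87e023e842a4 | grep volume_id
root@opstck10:~# cinder show 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 | grep os-vol-host-attr:host
root@opstck10:~# cinder service-list

The second command shows the host the new volume will be pinned to
(opstck01 here); as long as that host's cinder-volume service is down the
create cannot complete, which is why sharing one logical host across all
agents, as sketched earlier, is attractive for a Ceph-backed deployment.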