[Openstack] openstack havana cinder chooses wrong host to create new volume
Staicu Gabriel
gabriel_staicu at yahoo.com
Fri Feb 28 10:17:37 UTC 2014
Hi John,
I found a solution to the problem I asked for your help with, so you no longer need to look into it.
I will also comment on the bug that I logged (https://bugs.launchpad.net/cinder/+bug/1280367), though I don't think I can close it myself.
The solution for HA cinder is the <host> parameter in /etc/cinder/cinder.conf on the machines that run the cinder-volume service.
If you set this parameter to the same string on all the hosts that run the cinder-volume service, then all of those hosts will be the parent of any volume in the ceph cluster, so any host can create a volume from an already existing volume.
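For reference, here is a minimal sketch of the change, assuming an arbitrary shared name "rbd-ha" (the value is just an example; any string works as long as it is identical on every node running cinder-volume):

    # /etc/cinder/cinder.conf on every cinder-volume node
    [DEFAULT]
    # All cinder-volume services report under this one host name,
    # so any of them can act on the volumes and snapshots it owns.
    host = rbd-ha

After restarting the cinder-volume services, cinder service-list should show all the cinder-volume entries under that single Host, and new volumes will carry it in os-vol-host-attr:host, so any surviving agent can create a volume from an existing volume or snapshot.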
Problem fixed... :) I will log a documentation bug against both ceph and openstack, because HA for cinder depends on this parameter.
Thanks a lot,
Gabriel
On Wednesday, February 12, 2014 7:07 PM, John Griffith <john.griffith at solidfire.com> wrote:
On Wed, Feb 12, 2014 at 10:57 AM, Staicu Gabriel
<gabriel_staicu at yahoo.com> wrote:
> Thanks for the answer, John.
> This is really tricky, and I will try to explain why:
> You were right. The snapshot itself was created from a volume on opstck01
> when that host was up.
> This is the offending snapshot. Here is the proof:
> root at opstck10:~# cinder snapshot-show 30093123-0da2-4864-b8e6-87e023e842a4
> +--------------------------------------------+--------------------------------------+
> | Property                                   | Value                                |
> +--------------------------------------------+--------------------------------------+
> | created_at                                 | 2014-02-12T13:20:41.000000           |
> | display_description                        |                                      |
> | display_name                               | cirros-0.3.1-snap                    |
> | id                                         | 30093123-0da2-4864-b8e6-87e023e842a4 |
> | metadata                                   | {}                                   |
> | os-extended-snapshot-attributes:progress   | 100%                                 |
> | os-extended-snapshot-attributes:project_id | 8c25ff44225f4e78ab3f526d99c1b7e1     |
> | size                                       | 1                                    |
> | status                                     | available                            |
> | volume_id                                  | 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 |
> +--------------------------------------------+--------------------------------------+
>
> root at opstck10:~# cinder show 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9
> +--------------------------------+--------------------------------------+
> | Property | Value |
> +--------------------------------+--------------------------------------+
> | attachments | [] |
> | availability_zone | nova |
> | bootable | false |
> | created_at | 2014-02-11T15:33:58.000000 |
> | display_description | |
> | display_name | cirros-0.3.1 |
> | id | 2f5be8c9-941e-49cb-8eb8-f6def3ca8af9 |
> | metadata | {u'readonly': u'False'} |
> | os-vol-host-attr:host | opstck01 |
> | os-vol-mig-status-attr:migstat | None |
> | os-vol-mig-status-attr:name_id | None |
> | os-vol-tenant-attr:tenant_id | 8c25ff44225f4e78ab3f526d99c1b7e1 |
> | size | 1 |
> | snapshot_id | None |
> | source_volid | None |
> | status | available |
> | volume_type | None |
> +--------------------------------+--------------------------------------+
>
> And now comes the interesting part. I am using ceph as a backend for
> cinder, and I have multiple cinder-volume agents for HA reasons. The volumes
> themselves are available; the agents can fail. How can I overcome the
> limitation that a volume must be created by the same agent that created
> the snapshot?
>
> Thanks a lot,
> Gabriel
>
>
>
> On Wednesday, February 12, 2014 6:20 PM, John Griffith
> <john.griffith at solidfire.com> wrote:
> On Wed, Feb 12, 2014 at 3:24 AM, Staicu Gabriel
> <gabriel_staicu at yahoo.com> wrote:
>>
>>
>> Hi,
>>
>> I have a setup with Openstack Havana on Ubuntu Precise with multiple
>> cinder-scheduler and cinder-volume services.
>> root at opstck10:~# cinder service-list
>>
>> +------------------+----------+------+---------+-------+----------------------------+
>> | Binary           | Host     | Zone | Status  | State | Updated_at                 |
>> +------------------+----------+------+---------+-------+----------------------------+
>> | cinder-scheduler | opstck08 | nova | enabled | up    | 2014-02-12T10:08:28.000000 |
>> | cinder-scheduler | opstck09 | nova | enabled | up    | 2014-02-12T10:08:29.000000 |
>> | cinder-scheduler | opstck10 | nova | enabled | up    | 2014-02-12T10:08:28.000000 |
>> | cinder-volume    | opstck01 | nova | enabled | down  | 2014-02-12T09:39:09.000000 |
>> | cinder-volume    | opstck04 | nova | enabled | down  | 2014-02-12T09:39:09.000000 |
>> | cinder-volume    | opstck05 | nova | enabled | down  | 2014-02-12T09:39:09.000000 |
>> | cinder-volume    | opstck08 | nova | enabled | up    | 2014-02-12T10:08:28.000000 |
>> | cinder-volume    | opstck09 | nova | enabled | up    | 2014-02-12T10:08:28.000000 |
>> | cinder-volume    | opstck10 | nova | enabled | up    | 2014-02-12T10:08:28.000000 |
>> +------------------+----------+------+---------+-------+----------------------------+
>>
>> When I try to create a new instance from a volume snapshot, the scheduler keeps
>> choosing opstck01 for the creation of the volume, even though cinder-volume on
>> that host is down.
>> Did anyone encounter the same problem?
>> Thanks
>>
>>
>>
>
> Which node are the parent volume and snapshot on? In the case of
> create from source-volume and create from snapshot, the new volume needs
> to be created on the same node as the source volume or snapshot. There's
> currently a bug ([1]), where we don't check/enforce the type settings to
> make sure these match up. That's in progress right now and will be
> backported.
>
> [1]: https://bugs.launchpad.net/cinder/+bug/1276787
>
>
> Thanks,
>
> John
>
>
>
Glad we answered the question... sad that you've hit another little
problem (bug) in how we do things here.
Since you're running HA, we should be able to discern that the volume is
available via multiple hosts. I'll have to look at that more closely;
I believe this might be a known issue. We need to open a bug on
this and try to get it sorted out.
Would you be willing/able to provide some details on how you did your
HA config? Maybe log a bug in launchpad with the details and I can
try to look at fixing things up here shortly?
Thanks,
John