[openstack-dev] [Cinder] volume / host coupling
Jordan Pittier
jordan.pittier at scality.com
Thu Jan 8 17:29:02 UTC 2015
Arne,
>I imagine this has an
>impact on things using the services table, such as “cinder-manage” (what
>does your “cinder-manage service list” output look like? :-)
It has indeed. I have 3 cinder-volume services, but only one line of output in
“cinder-manage service list”. But it's a minor inconvenience to me.
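For the record (the host name and timestamp below are made up), the three
services collapse into a single row, roughly like this:

  Binary           Host             Zone  Status   State  Updated At
  cinder-volume    cinder-cluster   nova  enabled  :-)    2015-01-08 17:20:05
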
Duncan,
>There are races, e.g. do snapshot and delete at the same time, backup and
>delete at the same time, etc. The race windows are pretty tight on Ceph but
>they are there. It is worse on some other backends.
Okay, I've never run into those yet! Fingers crossed :p
Thanks, and sorry if I hijacked this thread a little.
Jordan
On Thu, Jan 8, 2015 at 5:30 PM, Arne Wiebalck <Arne.Wiebalck at cern.ch> wrote:
> Hi Jordan,
>
> As Duncan pointed out, there may be issues if you have multiple backends
> and indistinguishable nodes (which you could probably avoid by separating
> the hosts per backend and using different “host” flags for each set).
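>
> (As a rough sketch, with made-up names: the nodes serving one backend would
> all share one “host” value and the nodes serving the other backend a
> different one, e.g.
>
>   # cinder.conf on the nodes in front of the first backend
>   [DEFAULT]
>   host = cinder-ceph-a
>
>   # cinder.conf on the nodes in front of the second backend
>   [DEFAULT]
>   host = cinder-ceph-b
>
> so that the two sets remain distinguishable.)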
>
> But there can be issues even with only one backend: the “host” flag will
> enter the ‘services’ table and render the host column more or less useless.
> I imagine this has an impact on things using the services table, such as
> “cinder-manage” (what does your “cinder-manage service list” output look
> like? :-), and it may make it harder to tell if the individual services are
> doing OK, or to control them.
>
> I haven’t run Cinder with identical “host” flags in production, but I
> imagine there may be other areas that are not happy with indistinguishable
> hosts.
>
> Arne
>
>
> On 08 Jan 2015, at 16:50, Jordan Pittier <jordan.pittier at scality.com>
> wrote:
>
> Hi,
> >Some people apparently use the ‘host’ option in cinder.conf to make the
> >hosts indistinguishable, but this creates problems in other places.
> I use shared storage mounted on several cinder-volume nodes, with the "host"
> flag set the same everywhere. I've never run into problems so far. Could you
> elaborate on "this creates problems in other places", please?
>
> Thanks !
> Jordan
>
> On Thu, Jan 8, 2015 at 3:40 PM, Arne Wiebalck <Arne.Wiebalck at cern.ch>
> wrote:
>
>> Hmm. I'm not sure how widespread installations with multiple Ceph backends
>> are where the Cinder hosts have access to only one of the backends (which
>> is what you assume, right?). But, yes, if the volume type names are also
>> the same (is that also needed for this to be a problem?), this will be an
>> issue ...
>>
>> So, how about providing the information the scheduler does not have by
>> introducing an
>> additional tag to identify ‘equivalent’ backends, similar to the way some
>> people already
>> use the ‘host’ option?
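>>
>> (Purely hypothetical syntax, just to illustrate the idea; “equivalence_group”
>> is not an existing option:
>>
>>   [ceph-1]
>>   volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>   equivalence_group = cluster-east    # hypothetical tag, not a real option
>>
>> Backends sharing the same tag would be treated as interchangeable by the
>> scheduler.)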
>>
>> Thanks!
>> Arne
>>
>>
>> On 08 Jan 2015, at 15:11, Duncan Thomas <duncan.thomas at gmail.com> wrote:
>>
>> The problem is that the scheduler doesn't currently have enough info to
>> know which backends are 'equivalent' and which aren't: e.g. if you have
>> two Ceph clusters as Cinder backends, they are indistinguishable from
>> each other.
>>
>> On 8 January 2015 at 12:14, Arne Wiebalck <Arne.Wiebalck at cern.ch> wrote:
>>
>>> Hi,
>>>
>>> The fact that volume requests (in particular deletions) are coupled to
>>> specific Cinder hosts is not ideal from an operational perspective:
>>> if the node has meanwhile disappeared, e.g. been retired, the deletion
>>> gets stuck and can only be unblocked by changing the database. Some
>>> people apparently use the ‘host’ option in cinder.conf to make the hosts
>>> indistinguishable, but this creates problems in other places.
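>>>
>>> (The database change I mean is hand-editing the volume's host reference,
>>> roughly along these lines, sketch only, the host names are just examples:
>>>
>>>   UPDATE volumes SET host = 'surviving-node' WHERE host = 'retired-node';
>>>
>>> which is clearly not something one wants to rely on operationally.)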
>>>
>>> From what I see, even for backends that would support it (such as Ceph),
>>> Cinder currently does not provide the means to ensure that any of the
>>> hosts capable of performing a volume operation would be assigned the
>>> request in case the original/desired one is no longer available, right?
>>>
>>> If that is correct, how about changing the scheduling of delete
>>> operations to use the same logic as create operations, that is, pick any
>>> of the available hosts rather than the one which created the volume in
>>> the first place (for backends where that is possible, of course)?
>>>
>>> Thanks!
>>> Arne
>>>
>>> —
>>> Arne Wiebalck
>>> CERN IT
>>
>>
>>
>> --
>> Duncan Thomas