[Openstack-operators] Potential deprecation of cinder.cross_az_attach option in nova
sean at seanhamilton.co.uk
Fri Sep 25 15:55:18 UTC 2015
We use cinder.cross_az_attach=False as a way of separating failure domains.
Each AZ is effectively a set of racks with compute and storage in them;
this way we keep compute and storage close to each other, and in the
event of an issue in one AZ it won't affect the others (mostly).
Without this setting a user could attach a volume from AZ1 to a nova
compute node in AZ2, and if AZ1 went down the instance would be affected.
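For reference, a minimal sketch of the config involved (option group and
names as in recent releases; check the docs for your release, and the AZ
name here is just an example):

```ini
# nova.conf on the compute nodes
[cinder]
# Refuse to attach a volume unless its AZ matches the instance's AZ
cross_az_attach = False
```

```ini
# cinder.conf on the cinder-volume hosts for a given set of racks
[DEFAULT]
# Pin this cinder-volume service to the matching availability zone
storage_availability_zone = AZ1
```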
On 24 September 2015 at 16:46, Matt Riedemann <mriedem at linux.vnet.ibm.com>
> On 9/23/2015 6:27 PM, Sam Morrison wrote:
>> We very much rely on this, and I see it is already merged! Great, another
>> patch I have to manage locally.
>> I don’t understand what the confusion is. We have multiple availability
>> zones in nova and each zone has a corresponding cinder-volume service(s) in
>> the same availability zone.
>> We don’t want people attaching a volume from one zone to another; the
>> network won’t allow it, as the zones are in different network domains and
>> different data centres.
>> I will reply to the mailing list post on the -dev list, but it seems
>> it’s too late.
>> On 24 Sep 2015, at 6:49 am, Matt Riedemann <mriedem at linux.vnet.ibm.com>
>>> I wanted to bring this to the attention of the operators mailing list in
>>> case someone is relying on the cinder.cross_az_attach option.
>>> There is a -dev thread that started this discussion. That led to a
>>> change proposed to deprecate the cinder.cross_az_attach option in nova.
>>> This is for deprecation in mitaka and removal in N. If this affects
>>> you, please speak up in the mailing list or in the review.
>>>  https://review.openstack.org/#/c/226977/
>>> Matt Riedemann
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
> The revert is approved. Having done that, this is a mess of a feature, at
> least in the boot-from-volume case where source != volume. The details on
> that are in the -dev thread, but I'd appreciate it if operators that are
> using this would weigh in there on how they are handling the BFV case with
> cinder.cross_az_attach=False. My main issue is the amount of API policy
> being defined in config options, and that when BFV fails to create the
> volume, the failure happens in the compute layer, where the user ends up
> with a NoValidHost error. I want to figure out how we can fail fast with a
> 400 response from the nova API if we know the volume create is going to fail.
> Matt Riedemann
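On the fail-fast idea: the kind of API-layer check being described could look
roughly like the sketch below. This is purely illustrative pseudrefactoring,
not nova code; the names (check_az_attach, ApiError400) are made up.

```python
# Illustrative sketch only: validate AZ compatibility up front at the API
# layer, instead of letting the request reach the compute layer and fail
# with NoValidHost. Names here are hypothetical, not from nova's codebase.

class ApiError400(Exception):
    """Stand-in for an HTTP 400 Bad Request response."""


def check_az_attach(instance_az, volume_az, cross_az_attach):
    """Reject the request early if the volume's AZ can't be used.

    instance_az:     AZ the instance will be scheduled to
    volume_az:       AZ of the volume being attached or created
    cross_az_attach: value of the cross_az_attach option
    """
    if not cross_az_attach and instance_az != volume_az:
        raise ApiError400(
            "Volume in AZ %r cannot be attached to an instance in AZ %r "
            "when cross_az_attach is disabled" % (volume_az, instance_az))
```

The point is simply that the AZ mismatch is knowable from the request and the
config before scheduling, so the user can get an immediate 400 instead of a
late NoValidHost.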