[Openstack-operators] [nova] Cinder cross_az_attach=False changes/fixes

Matt Riedemann mriedemos at gmail.com
Sun Jul 15 14:18:51 UTC 2018


Just an update on an old thread, but I've been working on the 
cross_az_attach=False issues again this past week and I think I have a 
couple of decent fixes.

On 5/31/2017 6:08 PM, Matt Riedemann wrote:
> This is a request for any operators out there that configure nova to set:
> 
> [cinder]
> cross_az_attach=False
> 
> To check out these two bug fixes:
> 
> 1. https://review.openstack.org/#/c/366724/
> 
> This is a case where nova is creating the volume during boot from volume 
> and providing an AZ to cinder during the volume create request. Today we 
> just pass the instance.availability_zone which is None if the instance 
> was created without an AZ set. It's unclear to me if that causes the 
> volume creation to fail (someone in IRC was showing the volume going 
> into ERROR state while Nova was waiting for it to be available), but I 
> think it will cause the later attach to fail here [1] because the 
> instance AZ (defaults to None) and volume AZ (defaults to nova) may not 
> match. I'm still looking for more details on the actual failure in that 
> one though.
> 
> The proposed fix in this case is to pass the AZ associated with any 
> host aggregate that the instance is in.

This was indirectly fixed by change 
https://review.openstack.org/#/c/446053/ in Pike where we now set the 
instance.availability_zone in conductor after we get a selected host 
from the scheduler (we get the AZ for the host and set that on the 
instance before sending the instance to compute to build it).
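The conductor-side flow described above can be sketched roughly like this. This is a minimal illustration with made-up names ("aggregates" as a list of dicts, a dict-shaped instance), not nova's actual data model or the code from review 446053:

```python
# Sketch of the Pike change: after the scheduler picks a host,
# conductor resolves that host's AZ from host-aggregate metadata and
# sets it on the instance before sending the build request, so the
# instance no longer carries availability_zone=None to compute.
def az_for_host(host, aggregates, default_az="nova"):
    # In nova, an AZ is host-aggregate metadata; a host that is in no
    # AZ-tagged aggregate falls back to the default AZ.
    for agg in aggregates:
        if host in agg["hosts"] and "availability_zone" in agg["metadata"]:
            return agg["metadata"]["availability_zone"]
    return default_az

def set_instance_az(instance, host, aggregates):
    instance["availability_zone"] = az_for_host(host, aggregates)
    return instance
```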

While investigating this on master, I found a new bug where we do an 
up-call to the API DB which fails in a split MQ setup, and I have a fix 
here:

https://review.openstack.org/#/c/582342/

> 
> 2. https://review.openstack.org/#/c/469675/
> 
> This is similar, but rather than checking the AZ when we're on the 
> compute and the instance has a host, we're in the API and doing a boot 
> from volume where an existing volume is provided during server create. 
> By default, the volume's AZ is going to be 'nova'. The code doing the 
> check here is getting the AZ for the instance, and since the instance 
> isn't on a host yet, it's not in any aggregate, so the only AZ we can 
> get is from the server create request itself. If an AZ isn't provided 
> during the server create request, then we're comparing 
> instance.availability_zone (None) to volume['availability_zone'] 
> ("nova") and that results in a 400.
> 
> My proposed fix is in the case of BFV checks from the API, we default 
> the AZ if one wasn't requested when comparing against the volume. By 
> default this is going to compare "nova" for nova and "nova" for cinder, 
> since CONF.default_availability_zone is "nova" by default in both projects.
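The quoted failure and the proposed defaulting can be sketched as follows. These helpers are illustrations only, not nova's actual check; CONF_DEFAULT_AZ stands in for CONF.default_availability_zone ("nova" by default in both nova and cinder):

```python
# Sketch of the API-side AZ comparison for boot from volume, before
# and after defaulting the instance AZ.
CONF_DEFAULT_AZ = "nova"  # stand-in for CONF.default_availability_zone

def check_az_before_fix(instance_az, volume_az):
    # instance_az is None when the user did not request an AZ, while
    # the volume defaults to "nova", so the strict comparison rejects
    # an ordinary boot-from-volume request with a 400.
    if instance_az != volume_az:
        raise ValueError("400: instance and volume AZ mismatch")

def check_az_after_fix(instance_az, volume_az):
    # Default the instance AZ before comparing, so the common case
    # ("nova" vs "nova") passes.
    if (instance_az or CONF_DEFAULT_AZ) != volume_az:
        raise ValueError("400: instance and volume AZ mismatch")
```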

I've refined this fix a bit to be more flexible:

https://review.openstack.org/#/c/469675/

So now if doing boot from volume and we're checking 
cross_az_attach=False in the API and the user didn't explicitly request 
an AZ for the instance, we do a few checks:

1. If [DEFAULT]/default_schedule_zone is set (None is the default), we 
use that to compare against the volume AZ.

2. If the volume AZ is equal to the [DEFAULT]/default_availability_zone 
(nova by default in both nova and cinder), we're OK - no issues.

3. If the volume AZ is not equal to [DEFAULT]/default_availability_zone, 
it means either the volume was created with a specific AZ or cinder's 
default AZ is configured differently from nova's. In that case, I take 
the volume AZ and put it into the instance RequestSpec so that during 
scheduling, the nova scheduler picks a host in the same AZ as the 
volume. If that AZ doesn't exist in nova, scheduling fails with 
NoValidHost, but that shouldn't really happen: why would one set 
cross_az_attach=False without mirrored AZs in both cinder and nova?
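The three checks above can be sketched like this. Parameter names mirror the config options, request_spec is a plain dict standing in for nova's RequestSpec, and this is an illustration of the logic rather than the patch itself:

```python
# Sketch of the refined cross_az_attach=False API checks for a
# boot-from-volume request where the user did not request an AZ.
def resolve_bfv_az(volume_az, default_schedule_zone,
                   default_availability_zone, request_spec):
    if default_schedule_zone is not None:
        # 1. A configured default schedule zone is compared against
        # the volume's AZ.
        if volume_az != default_schedule_zone:
            raise ValueError("400: volume AZ != default_schedule_zone")
    elif volume_az == default_availability_zone:
        # 2. "nova" == "nova" in a stock deployment: nothing to do.
        pass
    else:
        # 3. The volume has a specific AZ (or cinder's default differs
        # from nova's): pin scheduling to the volume's AZ; if no host
        # is in that AZ, scheduling fails with NoValidHost.
        request_spec["availability_zone"] = volume_az
    return request_spec
```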

> 
> -- 
> 
> I'm requesting help from any operators that are setting 
> cross_az_attach=False because I have to imagine your users have run into 
> this and you're patching around it somehow, so I'd like input on how you 
> or your users are dealing with this.
> 
> I'm also trying to recreate these in upstream CI [2] which I was already 
> able to do with the 2nd bug.

The devstack patch at [2] has recreated both issues above, and I'm 
adding the fixes to it as dependencies to show the problems are resolved.

> 
> Having said all of this, I really hate cross_az_attach as it's 
> config-driven API behavior which is not interoperable across clouds. 
> Long-term I'd really love to deprecate this option but we need a 
> replacement first, and I'm hoping placement with compute/volume resource 
> providers in a shared aggregate can maybe make that happen.
> 
> [1] 
> https://github.com/openstack/nova/blob/f278784ccb06e16ee12a42a585c5615abe65edfe/nova/virt/block_device.py#L368 
> 
> [2] https://review.openstack.org/#/c/467674/


-- 

Thanks,

Matt


