[openstack-dev] [nova] Latest and greatest on trying to get n-sch to require placement

Sylvain Bauza sbauza at redhat.com
Thu Jan 26 13:50:13 UTC 2017



On 26/01/2017 05:42, Matt Riedemann wrote:
> This is my public hand off to Sylvain for the work done tonight.
> 

Thanks Matt for your help yesterday; it was awesome to have you pitch in
even though you're personally away.


> Starting with the multinode grenade failure in the nova patch to
> integrate placement with the filter scheduler:
> 
> https://review.openstack.org/#/c/417961/
> 
> The test_schedule_to_all_nodes tempest test was failing in there because
> that test explicitly forces hosts using AZs to build two instances.
> Because we didn't have nova.conf on the Newton subnode in the multinode
> grenade job configured to talk to placement, there was no resource
> provider for that Newton subnode when we started running smoke tests
> after the upgrade to Ocata, so that test failed since the request to the
> subnode had a NoValidHost (because no resource provider was checking in
> from the Newton node).
> 

That's where I think the current implementation is weird: if you force
the scheduler to return a destination (without even calling the
filters), merely verifying that the corresponding service is up, then
why does it need to get the full list of computes beforehand?

As far as placement is concerned, if you just *force* the scheduler to
return a destination, why should we verify that the resources are
available? FWIW, we now have entirely different semantics replacing the
"force_hosts" behavior that I dislike: it's called
RequestSpec.requested_destination, and it actually runs the filters,
but only against that destination. No outright bypass of the filters
like force_hosts does.
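To make the contrast concrete, here is a toy sketch of the two semantics. The class and attribute names below are simplified stand-ins, not the real nova objects: force_hosts bypasses the filters entirely, while requested_destination narrows the candidates but still runs every filter against the one host.

```python
class Host:
    """Toy stand-in for a scheduler host state."""
    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb

class RamFilter:
    """Toy filter: host must have enough free RAM for the request."""
    def host_passes(self, host, spec):
        return host.free_ram_mb >= spec.ram_mb

class RequestSpec:
    """Toy stand-in for nova's RequestSpec object."""
    def __init__(self, ram_mb, force_hosts=(), requested_destination=None):
        self.ram_mb = ram_mb
        self.force_hosts = list(force_hosts)
        self.requested_destination = requested_destination

def select_destinations(spec, all_hosts, filters):
    if spec.force_hosts:
        # force_hosts semantics: bypass the filters entirely, just pick
        # the named hosts from the known (up) hosts.
        return [h for h in all_hosts if h.name in spec.force_hosts]
    if spec.requested_destination:
        # requested_destination semantics: narrow the candidate set to
        # the requested host, but still run every filter against it.
        candidates = [h for h in all_hosts
                      if h.name == spec.requested_destination]
    else:
        candidates = all_hosts
    for f in filters:
        candidates = [h for h in candidates if f.host_passes(h, spec)]
    return candidates
```

With this sketch, forcing a host that lacks the resources still "succeeds", while requesting it as a destination correctly yields no valid host.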

> Grenade is not topology aware so it doesn't know anything about the
> subnode. When the subnode is stacked, it does so via a post-stack hook
> script that devstack-gate writes into the grenade run, so after stacking
> the primary Newton node, it then uses Ansible to ssh into the subnode
> and stack Newton there too:
> 
> https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L629
> 
> 
> http://logs.openstack.org/61/417961/26/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/15545e4/logs/grenade.sh.txt.gz#_2017-01-26_00_26_59_296
> 
> 
> And placement was optional in Newton so, you know, problems.
> 

That's where I think we have another problem, bigger than the corner
case you mentioned above: when upgrading from Newton to Ocata, we said
that all Newton computes have to be upgraded to the latest point
release. Great. But we failed to identify that it also requires
*modifying* their nova.conf so they are able to call the placement API.
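For illustration, the missing piece on each Newton compute would be something like the following [placement] section in nova.conf. The values here are indicative placeholders only; the exact option names and credentials depend on the release and deployment, so check the install guide for your version:

```ini
[placement]
# Indicative values only -- adjust to your deployment.
os_region_name = RegionOne
auth_type = password
auth_url = http://controller/identity
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
```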

That looks to me like more than just a rolling upgrade mechanism. In
theory, a rolling upgrade process accepts that N-1 versioned computes
can talk to N versioned services. That shouldn't require a
configuration change (apart from the upgrade_levels flag) on the
computes, right?

http://docs.openstack.org/developer/nova/upgrade.html


> Some options came to mind:
> 
> 1. Change the test to not be a smoke test which would exclude it from
> running during grenade. QA would barf on this.
> 
> 2. Hack some kind of pre-upgrade callback from d-g into grenade just for
> configuring placement on the compute subnode. This would probably
> require adding a script to devstack just so d-g has something to call so
> we could keep branch logic out of d-g, like what we did for the
> discover_hosts stuff for cells v2. This is more complicated than what I
> wanted to deal with tonight with limited time on my hands.
> 
> 3. Change the nova filter scheduler patch to fallback to get all compute
> nodes if there are no resource providers. We've already talked about
> this a few times already in other threads and I consider it a safety net
> we'd like to avoid if all else fails. If we did this, we could
> potentially restrict it to just the forced-host case...
> 
> 4. Setup the Newton subnode in the grenade run to configure placement,
> which I think we can do from d-g using the features yaml file. That's
> what I opted to go with and the patch is here:
> 
> https://review.openstack.org/#/c/425524/
> 
> I've made the nova patch dependent on that *and* the other grenade patch
> to install and configure placement on the primary node when upgrading
> from Newton to Ocata.
> 
> -- 
> 
> That's where we're at right now. If #4 fails, I think we are stuck with
> adding a workaround for #3 into Ocata and then remove that in Pike when
> we know/expect computes to be running placement (they would be in our
> grenade runs from ocata->pike at least).
> 


Given the two problems I stated above, I'm now in favor of a #3
approach that would:

 - modify the scheduler so that it's acceptable for placement to return
nothing when hosts are forced

 - modify the scheduler so that, if the placement API returns an empty
list, it falls back to getting the list of all computes
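A rough sketch of that fallback, with illustrative stand-in classes rather than the real nova scheduler or placement client code:

```python
# Hypothetical sketch of option #3: fall back to the full compute list
# when placement returns no resource providers. All names here are
# illustrative stand-ins, not the real nova code.

class FakePlacementClient:
    """Stand-in for the placement API client."""
    def __init__(self, providers):
        self.providers = providers

    def get_resource_providers(self, request_spec):
        return self.providers

class FakeHostManager:
    """Stand-in for the scheduler's host manager."""
    def __init__(self, hosts):
        self.hosts = hosts

    def get_all_host_states(self):
        return list(self.hosts)

    def get_host_states_for(self, providers):
        return [h for h in self.hosts if h in providers]

def get_candidate_computes(placement, host_manager, request_spec):
    providers = placement.get_resource_providers(request_spec)
    if not providers:
        # Safety net: Newton computes not yet talking to placement have
        # no resource provider, so use the full compute list instead of
        # failing with NoValidHost.
        return host_manager.get_all_host_states()
    return host_manager.get_host_states_for(providers)
```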


That still leaves the problem where some computes are upgraded to Ocata
but others aren't: in that case, we would return only a subset of
what's in the cloud, which is terribly suboptimal.


Thoughts? Another option would be to check the compute service versions
to determine the state of the cloud, but we turned that option down
previously.
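For completeness, the turned-down idea would look roughly like this: only use the fallback while some compute is old enough that it may simply not be registered in placement yet. The version constant and helper names are hypothetical, not actual nova code:

```python
# Hypothetical sketch of gating the fallback on compute service
# versions. OCATA_SERVICE_VERSION and both helpers are illustrative.

OCATA_SERVICE_VERSION = 16  # pretend version that implies placement support

def all_computes_report_to_placement(service_versions):
    """service_versions: iterable of ints, one per nova-compute service."""
    versions = list(service_versions)
    return bool(versions) and min(versions) >= OCATA_SERVICE_VERSION

def should_fall_back(providers, service_versions):
    # Fall back to the full compute list only when placement returned
    # nothing AND some compute may predate placement reporting.
    return (not providers
            and not all_computes_report_to_placement(service_versions))
```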

-Sylvain


