[openstack-dev] [TC] Stein Goal Selection

Sean McGinnis sean.mcginnis at gmx.com
Tue Jun 5 12:26:27 UTC 2018


On Mon, Jun 04, 2018 at 06:44:15PM -0500, Matt Riedemann wrote:
> On 6/4/2018 5:13 PM, Sean McGinnis wrote:
> > Yes, that's exactly what I meant by the NOOP. I'm not sure what Cinder
> > would check here. We don't have anything like verifying that placement has
> > been set up or that cell0 has been configured. Maybe once we have the
> > facility in place we would find some things worth checking, but at present
> > I don't know what those would be.
> 
> Here is an example from the Cinder Queens upgrade release notes:
> 
> "RBD/Ceph backends should adjust max_over_subscription_ratio to take into
> account that the driver is no longer reporting volume’s physical usage but
> it’s provisioned size."
> 
> > I'm assuming you could check whether rbd is configured as a storage backend
> > and, if so, whether max_over_subscription_ratio is set. If it isn't, is that
> > fatal? Does the operator need to configure it before upgrading to Rocky? Or
> > is it something they should consider but don't necessarily have to do - if
> > so, there is a 'WARNING' status for those types of things.
> 
> Things that are good candidates for automating are anything that would stop
> the cinder-volume service from starting, or things that require data
> migrations before you can roll forward. In nova we've had blocking DB schema
> migrations for stuff like this which basically mean "you haven't run the
> online data migrations CLI yet so we're not letting you go any further until
> your homework is done".
> 

Thanks. I suppose we could probably find some things to at least WARN on. Maybe
that would be useful.
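As a rough illustration of the kind of WARN-level check being discussed, here
is a minimal sketch in Python. The result codes follow the nova-status
convention (0 = success, 1 = warning, 2 = failure); the function and
configuration names are hypothetical, not actual Cinder code.

```python
# Hypothetical pre-upgrade check, loosely modeled on the
# "nova-status upgrade check" pattern. Names are illustrative only.
from enum import IntEnum


class Code(IntEnum):
    SUCCESS = 0   # nothing to do
    WARNING = 1   # operator should review, but the upgrade can proceed
    FAILURE = 2   # must be fixed before upgrading


def check_rbd_over_subscription(backends):
    """Warn if an RBD backend has no explicit max_over_subscription_ratio.

    In Queens the RBD driver stopped reporting physical usage in favor of
    provisioned size, so the default ratio may no longer match operator
    expectations.
    """
    for name, opts in backends.items():
        if opts.get('volume_driver', '').endswith('RBDDriver'):
            if 'max_over_subscription_ratio' not in opts:
                return (Code.WARNING,
                        'Backend %s uses RBD but does not set '
                        'max_over_subscription_ratio; review the Queens '
                        'release notes before upgrading.' % name)
    return (Code.SUCCESS, None)


if __name__ == '__main__':
    conf = {'ceph1': {
        'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver'}}
    code, msg = check_rbd_over_subscription(conf)
    print(code, msg)
```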

As far as a series goal goes, even if each project doesn't come up with a
comprehensive set of checks, this would be a known thing deployers could use
and potentially build some additional tooling around. That could be a good win
for the overall ease-of-upgrade story.
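For example, deployer tooling could gate an upgrade on the check's exit code.
This is only a sketch: it assumes a `cinder-status upgrade check` command
following the nova-status exit-code convention, and uses a stub function in
place of the real command so the snippet is self-contained.

```shell
#!/bin/sh
# Sketch of deployer tooling that gates an upgrade on a pre-upgrade check.
# "cinder-status upgrade check" is assumed to follow the nova-status
# exit-code convention: 0 = success, 1 = warning, 2 = failure.
# A stub stands in for the real command so this snippet is runnable.
cinder_status() {
    return 1  # stub: pretend the check reported a warning
}

cinder_status upgrade check
rc=$?
case "$rc" in
    0) echo "pre-upgrade checks passed, proceeding with upgrade" ;;
    1) echo "warnings found, review release notes before upgrading" ;;
    *) echo "pre-upgrade check failed, aborting upgrade" >&2
       exit "$rc" ;;
esac
```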

> Like I said, it's not black and white, but chances are good there are things
> that fall into these categories.
> 
> -- 
> 
> Thanks,
> 
> Matt
