[placement][ptg] Enabling other projects to continue with placement or get started

Matt Riedemann mriedemos at gmail.com
Fri Apr 26 14:21:27 UTC 2019


On 4/26/2019 8:50 AM, Jay Bryant wrote:
> The team has talked about this a little bit in our team meetings.  We 
> had previously talked to the placement team about how it could benefit 
> Cinder, and I think we had reached the conclusion that there wasn't 
> really any benefit that Cinder could get from placement.
> 
> I think, however, the open item is whether Placement can benefit from 
> Cinder if we were to make volume and storage backend information 
> available to Placement.  If so, we would need to understand the work 
> involved.
> 
> It might be worth planning some cross project time at the PTG just to 
> sync up on where things are at.  Let me know if you are interested in 
> doing this.

Modeling AZ affinity in a central location (placement) between compute 
nodes and volumes would likely benefit the wonky 
[cinder]/cross_az_attach and related config options in cinder. We have a 
class of bugs in nova when that is enforced (cross_az_attach=False); it 
may be useful for HPC and edge workloads, but it isn't tested or 
supported very well at this time.

Granted, it might be as simple as reporting volumes (or their backend 
pool) as a resource provider and then putting that provider and the 
compute node provider in a resource provider aggregate (sort of like how 
we model [but don't yet use] shared DISK_GB resources). My thinking is 
that if you had that modeled, nova is configured with 
cross_az_attach=False, and a server is created with some pre-existing 
volumes, then the nova scheduler translates that into a request to 
placement for compute nodes only in the aggregate with whatever storage 
backend is providing those volume resources (essentially the same AZ).
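
If it helps to visualize it, here is a very rough, untested sketch of 
what that modeling could look like against the placement HTTP API. The 
endpoint, token, UUIDs, generations, and the DISK_GB inventory below are 
all placeholders, and error/conflict handling is omitted:

# Sketch only: report a Cinder backend pool as a resource provider, put
# it in the same aggregate as the compute node, then ask placement for
# allocation candidates restricted to that aggregate.
import uuid

import requests

PLACEMENT_URL = "http://placement.example.com"   # placeholder endpoint
HEADERS = {
    "X-Auth-Token": "ADMIN_TOKEN",               # placeholder token
    "OpenStack-API-Version": "placement 1.21",   # member_of on /allocation_candidates
}

compute_rp_uuid = "COMPUTE-NODE-RP-UUID"         # existing compute node provider
az_aggregate = str(uuid.uuid4())                 # aggregate standing in for the AZ

# 1. Create a resource provider for the volume backend pool.
backend_rp_uuid = str(uuid.uuid4())
requests.post(
    f"{PLACEMENT_URL}/resource_providers",
    headers=HEADERS,
    json={"name": "cinder-backend-pool-1", "uuid": backend_rp_uuid},
)

# 2. Report its storage inventory (DISK_GB here just for illustration).
requests.put(
    f"{PLACEMENT_URL}/resource_providers/{backend_rp_uuid}/inventories",
    headers=HEADERS,
    json={
        "resource_provider_generation": 0,
        "inventories": {"DISK_GB": {"total": 10000}},
    },
)

# 3. Put the backend pool and the compute node in the same aggregate,
#    mirroring the AZ relationship (generation values are illustrative).
for rp_uuid, generation in ((backend_rp_uuid, 1), (compute_rp_uuid, 0)):
    requests.put(
        f"{PLACEMENT_URL}/resource_providers/{rp_uuid}/aggregates",
        headers=HEADERS,
        json={
            "aggregates": [az_aggregate],
            "resource_provider_generation": generation,
        },
    )

# 4. What the scheduler-side query could look like with
#    cross_az_attach=False: compute resources only from providers in the
#    volume's aggregate.
resp = requests.get(
    f"{PLACEMENT_URL}/allocation_candidates",
    headers=HEADERS,
    params={
        "resources": "VCPU:2,MEMORY_MB:4096",
        "member_of": az_aggregate,
    },
)
print(resp.json().get("provider_summaries", {}))
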
But this is probably low priority and arguably re-inventing an already 
somewhat broken wheel. We would have to think about how doing this with 
placement would be superior to what we have today.

-- 

Thanks,

Matt
