[openstack-dev] [Openstack-operators] [nova] Nova-scheduler filter for domain-level isolation

Matt Riedemann mriedemos at gmail.com
Mon Oct 2 22:01:18 UTC 2017


On 9/20/2017 5:17 AM, Georgios Kaklamanos wrote:
> Hello,
> 
> Use case: we have to deploy instances that belong to different
> domains onto different compute hosts.
> 
> Does anyone else have the same use case? If so, how did you
> implement it?
> 
> [The rest of the mail is a more detailed explanation of the question:
> what we have tried, and possible solutions that we thought of -- but
> have not yet implemented.]

First, thanks for starting this discussion upstream rather than just 
assuming you have to use an out-of-tree filter.

> 
> * Details
> 
> In our OpenStack deployment (Mitaka), we have to support 3 different
> domains (besides the default). We need a way to separate the compute
> hosts into three groups (aggregates), so that VMs that belong to
> users of domain A start in group A, etc. Initially we assume that
> each compute host will belong to only one group, but that might
> change.
> 
> We have looked at the nova filter scheduler [1] and at the
> AggregateMultiTenancyIsolation filter [2], which does what we want,
> but works at the project level (as demonstrated here [3]). Given
> that one of our domains will have at least 200 projects, we'd
> prefer to leave this as a last resort.
> 
> Modifying the above filter to make the check based on the domain
> isn't possible. The object that the filter receives, and that
> contains the information, is the RequestSpec object [4], and its
> fields [5] don't include a domain_id attribute.
> 
> * Possible solutions that we've thought of:
> 
> 1. Write our own filter: modify a filter to include a call to
>     keystone, where it would send the project_id and get back its
>     domain. But this feels more like a hack than a proper solution,
>     and it might require storing admin credentials on the node where
>     the filters run (the controller?), which we'd like to avoid.

Isn't the domain_id in the RequestContext somewhere? That's the thing 
that holds the user token, so I'd expect it to have information about 
the domain that the project is in.

https://github.com/openstack/oslo.context/blob/2.19.0/oslo_context/context.py#L180-L182
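
If it is, a domain-aware filter could key off a new aggregate metadata 
key. A minimal sketch, assuming the filter can reach the token's 
domain_id via the spec's context; 'filter_domain_id' is a hypothetical 
metadata key, and the class below is a stand-in for nova's 
BaseHostFilter machinery, not the real thing:

```python
# Hypothetical domain-based isolation filter, mirroring the shape of
# AggregateMultiTenancyIsolation but keyed on the domain instead of
# the project. Simplified: no nova imports, plain dicts for metadata.

class DomainIsolationFilter:
    def host_passes(self, host_aggregates_metadata, context_domain_id):
        """Pass a host only if every aggregate it belongs to either
        sets no 'filter_domain_id' or lists the requesting domain."""
        for metadata in host_aggregates_metadata:
            domains = metadata.get("filter_domain_id")
            if domains is not None and context_domain_id not in domains.split(","):
                return False
        return True

f = DomainIsolationFilter()
print(f.host_passes([{"filter_domain_id": "domain-a"}], "domain-a"))  # True
print(f.host_passes([{"filter_domain_id": "domain-a"}], "domain-b"))  # False
print(f.host_passes([{}], "domain-b"))                                # True
```

The operator-facing work then reduces to setting one metadata key per 
aggregate (one per domain), rather than maintaining 200 project IDs.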

> 
> 2. Make the separation at another level (project/flavor/image):
>     besides the isolation per project, we could also isolate the
>     hosts by providing different images / flavors to the different
>     users. There are filters available for that (image_props_filter
>     [6], aggregate_instance_extra_specs [7]). But again, due to the
>     high number of projects, this would not scale well.

Agree that this sounds complicated and therefore will be error-prone.
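
For completeness, the flavor-based variant boils down to comparing 
scoped flavor extra specs against aggregate metadata. A simplified 
sketch (the real AggregateInstanceExtraSpecs filter also handles 
operators and unscoped keys), which shows why you'd need a dedicated 
flavor per domain and per-user flavor access control on top:

```python
# Simplified version of the matching AggregateInstanceExtraSpecs does:
# every flavor extra spec scoped with 'aggregate_instance_extra_specs:'
# must equal the corresponding key in the host aggregate's metadata.

def host_passes(aggregate_metadata, flavor_extra_specs):
    scope = "aggregate_instance_extra_specs:"
    for key, value in flavor_extra_specs.items():
        if not key.startswith(scope):
            continue  # unscoped specs ignored in this sketch
        if aggregate_metadata.get(key[len(scope):]) != value:
            return False
    return True

# A hypothetical 'domain' metadata key on the aggregate, matched by a
# dedicated per-domain flavor:
meta = {"domain": "engineering"}
print(host_passes(meta, {"aggregate_instance_extra_specs:domain": "engineering"}))  # True
print(host_passes(meta, {"aggregate_instance_extra_specs:domain": "research"}))     # False
```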

> 
> 3. Modify the RequestSpec object: finally, we could include a
>     domain_id field in the object, then modify the
>     AggregateMultiTenancyIsolation filter to work on that. Of course
>     this would be the most elegant solution. However (besides not
>     knowing how to do that), we don't know what implications it
>     would have, or how to package / deploy it.

Shouldn't have to do this if the request context has the domain in it. 
However, the request context isn't persisted but the request spec is, so 
if you needed the request spec later for other operations, like 
migrating the instance, then you might want to persist the domain. But 
then you probably get into other issues, like: can a user/project move 
to another domain in Keystone? If so, what do you do about your host 
aggregate policy in Nova, since Nova isn't going to be tracking that 
Keystone change? Maybe there is a policy rule in Keystone that lets you 
disable updating a user/project's domain once it's set?
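
If you do end up resolving the domain from Keystone at schedule time 
(option 1), caching the project-to-domain mapping would at least bound 
the extra calls per request. A sketch with the Keystone lookup injected 
so it stays self-contained -- in real code that would be something on 
the order of a keystoneclient projects.get(project_id) call made with 
service credentials:

```python
import functools

def make_domain_resolver(lookup):
    """Wrap a project-id -> domain-id lookup with an LRU cache.

    Caveat, per the discussion above: if a project is ever moved to
    another domain in Keystone, a long-lived cache (or a domain
    persisted in the RequestSpec) goes stale.
    """
    @functools.lru_cache(maxsize=1024)
    def resolve(project_id):
        return lookup(project_id)
    return resolve

# Stand-in for the Keystone call, so the sketch runs offline:
calls = []
def fake_lookup(project_id):
    calls.append(project_id)
    return "domain-a"

resolve = make_domain_resolver(fake_lookup)
print(resolve("proj-1"))  # domain-a
print(resolve("proj-1"))  # domain-a (cached; the lookup ran only once)
print(len(calls))         # 1
```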

> 
> 
> Does anyone have the same use case? How would you go about solving
> it?
> 
> It's interesting, since we thought this would be a common use case,
> but as far as I've searched, I only found one request for this
> functionality, in a mailing-list thread from 2013 [8], which didn't
> seem to progress.
> 
> Thank you for your time,
> George
> 
> 
> [1]:https://docs.openstack.org/nova/latest/user/filter-scheduler.html
> [2]:https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_multitenancy_isolation.py
> [3]:https://www.brad-x.com/2016/01/01/dedicate-compute-hosts-to-projects/
> [4]:http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/request-spec-object-mitaka.html#
> [5]:https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L46
> [6]:https://github.com/openstack/nova/blob/master/nova/scheduler/filters/image_props_filter.py
> [7]:https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_instance_extra_specs.py
> [8]:https://lists.launchpad.net/openstack/msg23275.html
> 
> 
> 
> 
> 


-- 

Thanks,

Matt


