[nova] Strict isolation of group of hosts for image and flavor, modifying command 'nova-manage placement sync_aggregates'

Kamde, Vrushali Vrushali.Kamde at nttdata.com
Thu Jun 6 09:32:09 UTC 2019


Working on implementation of 'Support filtering of allocation_candidates by forbidden aggregates' spec.

I need discussion particularly on point [1], where traits need to be synced along with aggregates to placement.

On master, the 'nova-manage placement sync_aggregates' command syncs the nova host aggregates to placement.

I am modifying this command to also sync the aggregate's trait metadata to placement.
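As a rough illustration of the sync (not nova's actual code; the helper name and metadata layout follow the 'trait:<NAME>=required' convention from the spec and are otherwise hypothetical):

```python
# Sketch: extract placement trait names from nova aggregate metadata.
# Assumes keys of the form 'trait:<NAME>' with value 'required', per the
# forbidden-aggregates spec; everything else here is illustrative only.

TRAIT_PREFIX = 'trait:'

def traits_from_metadata(metadata):
    """Return the set of trait names marked 'required' in aggregate metadata."""
    return {
        key[len(TRAIT_PREFIX):]
        for key, value in metadata.items()
        if key.startswith(TRAIT_PREFIX) and value == 'required'
    }

agg_metadata = {
    'trait:STORAGE_DISK_SSD': 'required',
    'availability_zone': 'az1',  # non-trait keys are ignored
}
print(traits_from_metadata(agg_metadata))  # {'STORAGE_DISK_SSD'}
```

The resulting set would then be applied to each resource provider associated with the aggregate.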

Below are the aggregate REST APIs and whether they currently get synced:

1. 'POST' /os-aggregates/{aggregate_id}/action (add host) -- synced to the placement service
2. 'POST' /os-aggregates/{aggregate_id}/action (remove host) -- synced to the placement service
3. 'POST' /os-aggregates/{aggregate_id}/action (set metadata) -- not synced to the placement service
4. 'POST' /os-aggregates/{aggregate_id}/action (unset metadata) -- not synced to the placement service

I have added code to sync traits for the below APIs and I don't see any issues there:

1. 'POST' /os-aggregates/{aggregate_id}/action (add host)
2. 'POST' /os-aggregates/{aggregate_id}/action (set metadata)

But there is an issue with removing traits for the below APIs:

1. 'POST' /os-aggregates/{aggregate_id}/action (remove host)
2. 'POST' /os-aggregates/{aggregate_id}/action (unset metadata)

Ideally, for the above two APIs, we should remove the traits set in the aggregate metadata from the resource providers associated with the aggregate, but that could cause a problem in the following scenario:

For example:

1. Create two aggregates 'agg1' and 'agg2' by using:

'POST'-- /os-aggregates(Create aggregate)

2. Associate both of the above aggregates with host 'RP1' by using:

'POST'-- /os-aggregates/{aggregate_id}/action(Add host)

3. Set metadata (trait:STORAGE_DISK_SSD='required') on aggregate agg1 by using:

'POST'-- /os-aggregates/{aggregate_id}/action(set metadata)

4. Set metadata (trait:STORAGE_DISK_SSD='required', trait:HW_CPU_X86_SGX='required') on aggregate agg2 by using:

'POST'-- /os-aggregates/{aggregate_id}/action(set metadata)

The traits set on 'RP1' are then STORAGE_DISK_SSD and HW_CPU_X86_SGX.

Note: Here trait 'STORAGE_DISK_SSD' is set on both agg1 and agg2.

Now, if we remove host 'RP1' from 'agg1', the trait 'STORAGE_DISK_SSD' set on 'RP1' should also be removed; but since 'RP1' is also assigned to 'agg2', removing the 'STORAGE_DISK_SSD' trait from 'RP1' is not correct.
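The scenario above can be expressed as simple set arithmetic (illustrative only, not nova code): a trait from the removed aggregate may only be dropped from the provider if no other aggregate the host still belongs to also requires it.

```python
# Illustration of why blindly removing agg1's traits from RP1 is wrong:
# a trait is only safe to remove if no remaining aggregate still requires it.

def traits_safe_to_remove(removed_agg_traits, remaining_agg_traits_list):
    """Traits from the removed aggregate not required by any remaining one."""
    still_required = set().union(*remaining_agg_traits_list) \
        if remaining_agg_traits_list else set()
    return removed_agg_traits - still_required

agg1 = {'STORAGE_DISK_SSD'}
agg2 = {'STORAGE_DISK_SSD', 'HW_CPU_X86_SGX'}

# Removing RP1 from agg1 while it is still in agg2: nothing can be removed,
# because agg2 still requires STORAGE_DISK_SSD.
print(traits_safe_to_remove(agg1, [agg2]))  # set()
```

Doing this correctly would require looking up every other aggregate the host belongs to at removal time, which is part of what makes the removal path awkward.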

I have discussed the trait-syncing issues with Eric on IRC [2]; he suggested a few approaches:

- Leave all traits alone. If they need to be removed, it would have to be manually via a separate step.

- Support a new option so the caller can dictate whether the operation should remove the traits. (This is all-or-none.)

- Define a "namespace" - a trait substring - and remove only traits in that namespace.

If I'm not wrong, the last two approaches would require REST API changes.
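For the namespace approach, the idea could be sketched as follows (a hypothetical illustration; the 'CUSTOM_AGG_' prefix is an assumed example, not anything defined by the spec):

```python
# Sketch of the "namespace" approach: on remove-host/unset-metadata, only
# traits matching an operator-chosen substring (here an assumed prefix
# 'CUSTOM_AGG_') would be removed; all other traits are left alone.

def traits_in_namespace(traits, namespace):
    """Return the subset of traits belonging to the given namespace."""
    return {t for t in traits if t.startswith(namespace)}

rp_traits = {'CUSTOM_AGG_WINDOWS', 'STORAGE_DISK_SSD', 'HW_CPU_X86_SGX'}
print(traits_in_namespace(rp_traits, 'CUSTOM_AGG_'))  # {'CUSTOM_AGG_WINDOWS'}
```

This would keep standard traits untouched, at the cost of the operator having to agree on a naming convention up front.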

I need your feedback on whether traits should be deleted from the resource provider for the below two cases:

1. 'POST' /os-aggregates/{aggregate_id}/action (remove host)
2. 'POST' /os-aggregates/{aggregate_id}/action (unset metadata)

[1]: https://review.opendev.org/#/c/609960/8/specs/train/approved/placement-req-filter-forbidden-aggregates.rst@203
[2]: http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-05-30.log.html

Thanks & Regards,
Vrushali Kamde
