[nova][ptg] Allow compute nodes to use DISK_GB from shared storage RP by using aggregate relationship

Balázs Gibizer balazs.gibizer at est.tech
Thu Nov 14 07:45:01 UTC 2019



On Thu, Nov 14, 2019 at 02:58, "Patil, Tushar" 
<Tushar.Patil at nttdata.com> wrote:
> On 11/13/2019 8:34 AM, Sylvain Bauza wrote:
>>>  Me too. To be clear, I don't think operators would modify the 
>>> above but
>>>  if so, they would need reshapes.
> 
>>  Maybe not, but this is the kind of detail that should be in the 
>> spec and
>>  functional tests to make sure it's solid since this is a big
>>  architectural change in nova.
> 
> It depends on how the aggregates are created on the nova and 
> placement side.
> 
> A) From the placement point of view, an operator can create a new 
> aggregate and add the shared storage RP to it (tagging the RP with 
> the MISC_SHARES_VIA_AGGREGATE trait). The newly created aggregate 
> UUID would then be set in the ``sharing_disk_aggregate`` config 
> option on the compute node side. This aggregate UUID wouldn't be 
> present in nova, so it's not possible to add a host to the nova 
> aggregate unless a new aggregate is created on the nova side.
> 
> B) If nova aggregates are synced to the placement service and say 
> below is the picture:
> 
> Nova:
> 
> Agg1 - metadata (pinned=True)
>  - host1
>  - host2
> 
> Now, the operator adds a new shared storage RP to Agg1 on the 
> placement side and then sets the Agg1 UUID in 
> ``sharing_disk_aggregate`` on the compute nodes along with 
> ``using_shared_disk_provider``=True. This would add the compute node 
> RP to Agg1 in placement without any issues, but when you want to 
> reverse the configuration (``using_shared_disk_provider``=False), it 
> is not that straightforward to remove the host from the 
> placement/nova aggregate, because there would be other traits set on 
> the compute RPs which could cause those functions to stop working.
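The placement-side setup in scenario A can be sketched as a minimal in-memory model. This is an illustration only: a real deployment would go through the placement API (e.g. via osc-placement), and ``sharing_disk_aggregate`` is the option name proposed in the spec, not an existing one; the RP names are made up for the sketch.

```python
import uuid

# Hypothetical in-memory model of the placement data in scenario A.
rp_traits = {}       # resource provider name -> set of traits
rp_aggregates = {}   # resource provider name -> set of aggregate UUIDs

# Operator creates a new placement aggregate (just a UUID in placement).
sharing_agg = str(uuid.uuid4())

# The shared storage RP gets the MISC_SHARES_VIA_AGGREGATE trait and
# is added to the new aggregate.
rp_traits["shared_storage_rp"] = {"MISC_SHARES_VIA_AGGREGATE"}
rp_aggregates["shared_storage_rp"] = {sharing_agg}

# Each compute node sets the aggregate UUID in its (proposed)
# ``sharing_disk_aggregate`` config option, and its compute RP joins
# the same placement aggregate.
sharing_disk_aggregate = sharing_agg  # nova.conf value on the compute node
rp_aggregates["compute_rp"] = {sharing_disk_aggregate}

# The compute RP and the sharing RP now share an aggregate, which is
# what allows the compute node to consume DISK_GB from the sharing RP.
shares_disk = bool(
    rp_aggregates["compute_rp"] & rp_aggregates["shared_storage_rp"]
)

# Note: sharing_agg exists only in placement; there is no corresponding
# nova host aggregate unless the operator creates one separately.
nova_aggregates = {}  # nova-side aggregates: empty in scenario A
```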

For me, from the sharing disk provider feature's perspective, the 
placement aggregate that is needed for the sharing to work is 
independent of any kind of nova host aggregate (whether synced to 
placement or not). The placement aggregate is a must for the feature. 
If, on top of that, the operator wants to create a nova host aggregate 
as well and sync it to placement, then in the end there will be two 
independent placement aggregates: one expressing the sharing 
relationship and one expressing a host aggregate from nova. These two 
aggregates will not be the same, as the first one will have the 
sharing provider in it while the second one won't.
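That end state can be sketched as a hypothetical model of aggregate membership (not real placement API calls; the RP names are illustrative):

```python
import uuid

# Hypothetical end state: two independent placement aggregates with
# overlapping host membership.
sharing_agg = str(uuid.uuid4())    # created by the operator in placement
nova_host_agg = str(uuid.uuid4())  # nova host aggregate mirrored to placement

# Membership of each placement aggregate.
members = {
    sharing_agg: {"host1_rp", "host2_rp", "shared_storage_rp"},
    nova_host_agg: {"host1_rp", "host2_rp"},
}

# The two aggregates differ exactly by the sharing provider: the
# sharing aggregate contains it, the mirrored host aggregate doesn't.
difference = members[sharing_agg] - members[nova_host_agg]
```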

gibi

> 
> We had the same kind of discussion [1] when implementing forbidden 
> aggregates, where we wanted to sync the traits set on the aggregates, 
> but it was later concluded that the operator will do it manually.
> 
> I will include the details Matt has pointed out in this email in my 
> next patchset.
> 
> [1] : 
> http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006950.html
> 
> Regards,
> tpatil
> 
> 
> 
> ________________________________________
> From: Matt Riedemann <mriedemos at gmail.com>
> Sent: Wednesday, November 13, 2019 11:41 PM
> To: openstack-discuss at lists.openstack.org
> Subject: Re: [nova][ptg] Allow compute nodes to use DISK_GB from 
> shared storage RP by using aggregate relationship
> 
> On 11/13/2019 8:34 AM, Sylvain Bauza wrote:
>>  Me too. To be clear, I don't think operators would modify the above 
>> but
>>  if so, they would need reshapes.
> 
> Maybe not, but this is the kind of detail that should be in the spec 
> and
> functional tests to make sure it's solid since this is a big
> architectural change in nova.
> 
> --
> 
> Thanks,
> 
> Matt
> 
> 




