[nova] Proper way to regenerate request_specs of existing instances?
Hi!

I have a Rocky deployment and I want to enable AggregateInstanceExtraSpecsFilter on it. There is one slight problem I'm trying to solve in a proper way: fixing the request_specs of instances that are already running. After enabling the filter, I want to add the necessary metadata keys to flavors, but this won't be propagated into the request_specs of running instances, and that will cause issues later on (like the scheduler selecting wrong destination hosts for a migration, for example).

A few years ago I encountered a similar problem on Mitaka: that deployment already had the filter enabled, but some flavors were misconfigured and lacked the metadata keys. I ended up writing a crude Python script which connected directly to the Nova database, searched for bad request_specs, and manually appended the necessary extra_specs keys to the request_specs JSON blob.

Now, my question is: has anyone encountered a similar scenario before? Is there a cleaner method for regenerating instance request_specs, or do I have to modify the JSON blobs manually by writing directly to the database?

--
Regards,
Patryk Jakuszew
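[Editor's note: the wrong-destination problem above can be illustrated with a standalone sketch of how AggregateInstanceExtraSpecsFilter conceptually matches scoped flavor extra_specs against host aggregate metadata. This is a simplified model, not the in-tree filter code, and the `cpu_gen` key and its values are made-up examples; the real filter also supports operators such as `<in>` and unscoped keys.]

```python
# Simplified model of AggregateInstanceExtraSpecsFilter: a host passes only
# if every scoped extra_specs key on the flavor matches the metadata of an
# aggregate the host belongs to. Key names below are illustrative.
SCOPE = "aggregate_instance_extra_specs:"

def host_passes(flavor_extra_specs, host_aggregate_metadata):
    """Return True if the host's aggregate metadata satisfies the flavor."""
    for key, wanted in flavor_extra_specs.items():
        if not key.startswith(SCOPE):
            continue  # unscoped keys are handled elsewhere in this sketch
        meta_key = key[len(SCOPE):]
        if host_aggregate_metadata.get(meta_key) != wanted:
            return False
    return True

new_flavor = {"aggregate_instance_extra_specs:cpu_gen": "icelake"}
stale_flavor = {}  # a stale request_spec still carrying no cpu_gen key

print(host_passes(new_flavor, {"cpu_gen": "sandybridge"}))  # False
print(host_passes(new_flavor, {"cpu_gen": "icelake"}))      # True
print(host_passes(stale_flavor, {"cpu_gen": "sandybridge"}))  # True: stale
# spec matches any host, which is exactly the wrong-destination problem.
```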
On Tue, Jun 1, 2021 at 2:17 PM Patryk Jakuszew <patryk.jakuszew@gmail.com> wrote:
Hi!
I have a Rocky deployment and I want to enable AggregateInstanceExtraSpecsFilter on it. There is one slight problem I'm trying to solve in a proper way: fixing the request_specs of instances that are already running.
After enabling the filter, I want to add the necessary metadata keys to flavors, but this won't be propagated into the request_specs of running instances, and that will cause issues later on (like the scheduler selecting wrong destination hosts for a migration, for example).
A few years ago I encountered a similar problem on Mitaka: that deployment already had the filter enabled, but some flavors were misconfigured and lacked the metadata keys. I ended up writing a crude Python script which connected directly to the Nova database, searched for bad request_specs, and manually appended the necessary extra_specs keys to the request_specs JSON blob.
Now, my question is: has anyone encountered a similar scenario before? Is there a cleaner method for regenerating instance request_specs, or do I have to modify the JSON blobs manually by writing directly to the database?
Nova looks at the RequestSpec records to know what the user asked for when creating the instance, and since those values can be modified when, for example, you move an instance, modifying the RequestSpec directly is not supported. In general this question comes up about AZs: some operators want to modify the AZ value of a specific RequestSpec, but then the users of the related instance would not understand why it is suddenly in another AZ once the host lands in a different one. As you said, if you really want to modify the RequestSpec object, then write a Python script that uses the objects class: get the RequestSpec object directly and then persist it again.

-Sylvain
--
Regards, Patryk Jakuszew
On Tue, 1 Jun 2021 at 14:35, Sylvain Bauza <sbauza@redhat.com> wrote:
In general this question comes up about AZs: some operators want to modify the AZ value of a specific RequestSpec, but then the users of the related instance would not understand why it is suddenly in another AZ once the host lands in a different one.
To be more specific: we do have AZs already, but we also want to add AggregateInstanceExtraSpecsFilter in order to prepare for a scenario with multiple CPU generations in each AZ.
As you said, if you really want to modify the RequestSpec object, then write a Python script that uses the objects class: get the RequestSpec object directly and then persist it again.
Alright, I will try that again, but using the Nova objects class as you suggest. Thanks for the answer!

--
Regards,
Patryk
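[Editor's note: the "crude script" approach described earlier amounts to editing the serialized RequestSpec blob stored in the database. The sketch below operates on a sample blob in the oslo.versionedobjects JSON form; the exact field layout in a real Rocky `request_specs.spec` column is an assumption here, so inspect a real row before relying on it. Inside a Nova environment, the cleaner route suggested above would be to load the object with `nova.objects.request_spec.RequestSpec.get_by_instance_uuid()`, modify its flavor, and call `save()`, rather than writing JSON by hand.]

```python
import json

# Sample serialized RequestSpec in oslo.versionedobjects JSON form.
# This layout is an assumption for illustration only.
blob = json.dumps({
    "nova_object.name": "RequestSpec",
    "nova_object.data": {
        "flavor": {
            "nova_object.name": "Flavor",
            "nova_object.data": {"name": "m1.large", "extra_specs": {}},
        },
    },
})

def add_extra_specs(spec_json, new_specs):
    """Return the blob with new_specs merged into the embedded flavor."""
    spec = json.loads(spec_json)
    flavor = spec["nova_object.data"]["flavor"]["nova_object.data"]
    flavor.setdefault("extra_specs", {}).update(new_specs)
    return json.dumps(spec)

fixed = add_extra_specs(
    blob, {"aggregate_instance_extra_specs:cpu_gen": "icelake"})
print(json.loads(fixed)["nova_object.data"]["flavor"]
      ["nova_object.data"]["extra_specs"])
```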
On Tue, 2021-06-01 at 18:06 +0200, Patryk Jakuszew wrote:
On Tue, 1 Jun 2021 at 14:35, Sylvain Bauza <sbauza@redhat.com> wrote:
In general this question comes up about AZs: some operators want to modify the AZ value of a specific RequestSpec, but then the users of the related instance would not understand why it is suddenly in another AZ once the host lands in a different one.
To be more specific: we do have AZs already, but we also want to add AggregateInstanceExtraSpecsFilter in order to prepare for a scenario with multiple CPU generations in each AZ.
The supported way to do that would be to resize the instance; Nova currently does not support updating the embedded flavor any other way. That said, this is yet another use case for a recreate API that would allow updating the embedded flavor and image metadata. Nova expects flavors to be effectively immutable once an instance starts to use them, and the same is true of image properties, so partly by design this has not been easy to support in Nova, because it was a usage model we have declared out of scope. The solution that is viable today is rebuild or resize, but a recreate API is really what you need.
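[Editor's note: as a concrete illustration of the resize path described above, assuming a flavor that already carries the new extra_specs keys; the flavor and server names here are hypothetical.]

```shell
# Resizing regenerates the embedded flavor in the instance's request_spec.
# "m1.large.icelake" and "my-server" are hypothetical names.
openstack server resize --flavor m1.large.icelake my-server

# Once the server reaches VERIFY_RESIZE, confirm (or revert) the resize.
openstack server resize --confirm my-server
```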
As you said, if you really want to modify the RequestSpec object, then write a Python script that uses the objects class: get the RequestSpec object directly and then persist it again.
Alright, I will try that again, but using the Nova objects class as you suggest.
This has come up often enough that we __might__ (I'm stressing might, since I'm not sure we really want to do this) consider adding a nova-manage command to do this, e.g. nova-manage instance flavor-regenerate <instance uuid> and nova-manage instance image-regenerate <instance uuid>. Those commands would just recreate the embedded flavor and image metadata without moving the VM or otherwise restarting it; you would then have to hard reboot or migrate it separately. I'm not convinced this is a capability we should provide to operators in-tree via nova-manage, however. With my downstream hat on, I'm not sure how supportable it would be: for example, like nova reset-state, it would be very easy to render VMs unbootable in their current location if a tenant did a hard reboot, causing all kinds of strange issues that are hard to debug and fix.
Thanks for the answer!
-- Regards, Patryk
On Tue, 1 Jun 2021 at 23:14, Sean Mooney <smooney@redhat.com> wrote:
This has come up often enough that we __might__ (I'm stressing might, since I'm not sure we really want to do this) consider adding a nova-manage command to do this.
e.g. nova-manage instance flavor-regenerate <instance uuid> and nova-manage instance image-regenerate <instance uuid>
Those commands would just recreate the embedded flavor and image metadata without moving the VM or otherwise restarting it; you would then have to hard reboot or migrate it separately.
I'm not convinced this is a capability we should provide to operators in-tree via nova-manage, however.
With my downstream hat on, I'm not sure how supportable it would be: for example, like nova reset-state, it would be very easy to render VMs unbootable in their current location if a tenant did a hard reboot, causing all kinds of strange issues that are hard to debug and fix.
I have the same thoughts - initially I wanted to figure out whether such a feature could be added to the nova-manage toolset, but I'm not sure it would be a welcome contribution due to the risks it creates. *Maybe* it would help to add some warnings around it and an obligatory '--yes-i-really-really-mean-it' switch, but still - it may cause undesired long-term consequences if used improperly.

On the other hand, other projects do have options that one can consider similar in nature ('cinder-manage volume update_host' comes to mind), and I think nova-manage is considered a low-level utility that shouldn't be used in day-to-day operations anyway...