[openstack][octavia] amphora cannot live migrate
Hello everyone.

I deploy amphora instances on shared storage, but when I live migrate one to another host I encounter this error:

Error: Failed to live migrate instance to host "AUTO_SCHEDULE". Details <https://cloudhn.fpt.net/admin/instances/#message_details>
compute12 is not on shared storage: Shared storage live-migration requires either shared storage or boot-from-volume with no local disks

Would there be a solution for this?

Thank you.
Nguyen Huu Khoi
This is not an issue with Octavia; it is more a question of how Nova is used. By default, amphora instances are deployed by image boot and use ephemeral disks stored on the compute nodes. Unless your compute nodes have shared storage for ephemeral disks (NFS, Ceph, and so on), you have to enable block migration to live migrate the instance (see the --block-migration option in the openstack CLI).

However, AFAIK the general recommendation is to use amphora failover (which internally recreates the amphora instance) instead of live-migrating it, as described in the guide[1]. You might want to try that method instead.

[1] https://docs.openstack.org/octavia/latest/admin/guides/operator-maintenance.... from-a-host
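For completeness, the failover path above can be driven from the CLI. A minimal sketch, assuming the python-octaviaclient plugin is installed; the amphora ID is a placeholder:

```shell
# List the amphorae backing your load balancers.
openstack loadbalancer amphora list

# Check which compute host one runs on (compute_id field).
openstack loadbalancer amphora show <amphora-id>

# Recreate that amphora on another host instead of live-migrating it.
openstack loadbalancer amphora failover <amphora-id>
```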
Hi, I am using shared storage for the Octavia instances. We can move Octavia instances by failover, but if there are hundreds of load balancers then it becomes a big problem. I am planning to dedicate specific compute nodes to Octavia and use the active-standby topology.

Nguyen Huu Khoi
On 11/29/23 17:34, Nguyễn Hữu Khôi wrote:
Hi, I am using share storage for Octavia instance.
Do you mind elaborating on the "shared storage" here? Are you using volume-backed amphorae, or shared storage on your compute nodes?

If you have shared storage among some compute nodes, I'd suggest checking the source and destination compute nodes to see whether the two nodes are actually using the same shared storage for instance data (usually located at /var/lib/nova/instances).

If you are using volume-backed amphorae, then I would check the libvirt XML of the amphora instance and also the Nova flavor used, to find any local disk device attached (e.g. if the flavor has swap, it may attach an additional local device).
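As an illustration of those two checks (the marker-file trick is my own suggestion, not an official procedure, and the libvirt domain name is a placeholder):

```shell
# On the source compute node: drop a marker in the instance store.
touch /var/lib/nova/instances/_shared_check

# On the destination compute node: if the marker is visible here,
# both nodes really do mount the same shared storage.
ls /var/lib/nova/instances/_shared_check

# For the volume-backed case: look for file-backed (local) disks in the
# libvirt XML, e.g. a swap disk or a config drive.
virsh dumpxml instance-0000abcd | grep -B1 -A4 '<disk'
```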
Hello.

I mean I use volume boot for the Octavia instances. I checked, and no local disk is used by any instance on my compute nodes. However, I did not know why I could only do live migration with block migration. I read this from Red Hat:

  If the instance uses config-drive, block migration is required to live-migrate the instance.

This command checks it:

  openstack server show <instance name/id> -c config_drive

My Octavia instance returns "True".

Nguyen Huu Khoi
Ah, yes, config-drive. That's a good point.

When config-drive is used, Nova creates a local disk file which stores instance metadata and attaches that disk to the instance. The cloud-init service in the instance detects the disk and reads metadata from it rather than from the metadata API. When an instance with a config drive is migrated, the drive file has to be transferred to the destination compute node as well, and that's why block migration is required.

AFAIK Octavia depends heavily on instance metadata. There is an option to use ordinary user data instead of a config drive, but that option is deprecated because of known problems caused by the size limit on user data[1], and now a config drive is required for all amphora instances.

So, as a result, live migration of amphora instances needs block migration, even if volume boot is used, unless you have shared NFS storage on your compute nodes.

[1] https://review.opendev.org/c/openstack/octavia/+/855441
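Putting the thread together, the rule Nova is applying can be sketched as a small predicate. This is my own summary of the error message and the config-drive point, not Nova's actual code:

```python
def shared_storage_live_migration_ok(instance_store_is_shared: bool,
                                     boot_from_volume: bool,
                                     has_config_drive: bool,
                                     flavor_has_swap_or_ephemeral: bool) -> bool:
    """True if a plain (non-block) live migration can work.

    A config drive and flavor swap/ephemeral disks are local files under
    /var/lib/nova/instances, so they count as local disks.
    """
    if instance_store_is_shared:
        # Local files are already visible on the destination node.
        return True
    has_local_disks = ((not boot_from_volume)
                       or has_config_drive
                       or flavor_has_swap_or_ephemeral)
    # "requires either shared storage or boot-from-volume with no local disks"
    return not has_local_disks

# A volume-booted amphora still needs block migration, because Octavia
# forces a config drive:
print(shared_storage_live_migration_ok(False, True, True, False))  # False
```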
Thanks for your explanation. :) I get it now.
Sorry to top post, but just a few general comments.

First, we support live migration with or without a config drive, so that should not be a problem.

Second, passing --block-migration or similar is considered bad practice. We added "auto" as a value in Mitaka:
https://docs.openstack.org/nova/latest/reference/api-microversion-history.ht...

So if you want to trigger a migration of the amphora instance regardless of how the storage is provisioned, you should use microversion 2.25 or higher and use the auto mode:

  openstack --os-compute-api-version 2.25 server migrate --live-migration <uuid>

Do not use --shared-migration or --block-migration; both can and should be considered legacy. You should also ignore --disk-overcommit and --no-disk-overcommit; those were removed in Mitaka too.

Whether or not live migration of a load balancer is advisable in general is a separate matter, but just from a Nova perspective, a live migration via the openstack CLI should look like this:

  openstack --os-compute-api-version 2.30 server migrate --live-migration [--host <hostname>] [--wait] <uuid>

and a cold migrate should look like this:

  openstack --os-compute-api-version 2.56 server migrate [--host <host>] [--wait] <uuid>

where [] marks optional sections and <> marks required parameters.
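To illustrate the API change behind this advice: the request body Nova accepts for the os-migrateLive server action changed at microversion 2.25. The sketch below is based on the microversion history linked above; it builds the JSON body only, and is not a client you should copy verbatim:

```python
def live_migrate_body(microversion, host=None):
    """Build the os-migrateLive action body for a given compute microversion.

    microversion is a (major, minor) tuple, e.g. (2, 25).
    """
    if microversion >= (2, 25):
        # 2.25+: block_migration accepts "auto" and disk_over_commit is gone,
        # so Nova decides between shared and block migration itself.
        return {"os-migrateLive": {"host": host, "block_migration": "auto"}}
    # Legacy (< 2.25): the caller had to guess both flags, which is exactly
    # what admins frequently got wrong.
    return {"os-migrateLive": {"host": host,
                               "block_migration": False,
                               "disk_over_commit": False}}

print(live_migrate_body((2, 30)))
```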
Hi @Sean Mooney <smooney@redhat.com>. Thank you for the comments.

Could you help me understand why it is bad practice to use --shared-migration or --block-migration?

Nguyen Huu Khoi
On Wed, 2023-11-29 at 21:19 +0700, Nguyễn Hữu Khôi wrote:
Hi @Sean Mooney <smooney@redhat.com>. Thank you for comments.
Could you help me to understand why it is a bad practice if we use --shared-migration or --block-migration?
Generally, admins frequently get this wrong, and Nova can determine the correct setting automatically.
Nguyen Huu Khoi
On Wed, Nov 29, 2023 at 9:06 PM <smooney@redhat.com> wrote:
sorry to top post but just a few general comments
first, we support live migration with or without config drive, so that should not be a problem
second, passing --block-migration or similar is considered bad practice
we added auto as a value in mitaka
https://docs.openstack.org/nova/latest/reference/api-microversion-history.ht...
so if you want to trigger a migration of the amphora instance regardless of how the storage is provisioned, you should use microversion 2.25 or higher and use the auto mode.
openstack --os-compute-api 2.25 server migrate --live-migration <uuid>
do not use --shared-migration or --block-migration; both can and should be considered legacy. you should also ignore --disk-overcommit and --no-disk-overcommit; those were removed in mitaka too.
whether or not live migration of a load balancer is advisable in general is a separate matter, but just from a nova perspective a live migrate via the openstack CLI should look like this
openstack --os-compute-api 2.30 server migrate --live-migration [--host <hostname>] [--wait] <uuid>
and a cold migrate should look like this
openstack --os-compute-api 2.56 server migrate [--host <host>] [--wait] <uuid>
where [] denotes optional sections and <> denotes required parameters.
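The recommended invocations above can be captured in a tiny helper that only builds (and prints) the command line, which makes the microversion choices easy to script; the `openstack server migrate` syntax is taken verbatim from the examples above, while the function names are illustrative:

```shell
#!/bin/sh
# Build the recommended migration command lines. Microversion >= 2.25 lets
# nova pick block vs shared migration automatically; 2.30 adds --host to
# live migration; 2.56 adds --host to cold migration.
build_live_migrate() {
    uuid="$1"; host="$2"
    if [ -n "$host" ]; then
        echo "openstack --os-compute-api 2.30 server migrate --live-migration --host ${host} ${uuid}"
    else
        echo "openstack --os-compute-api 2.25 server migrate --live-migration ${uuid}"
    fi
}

build_cold_migrate() {
    uuid="$1"
    echo "openstack --os-compute-api 2.56 server migrate ${uuid}"
}
```

Note that neither helper ever emits --block-migration or --shared-migration: with these microversions nova determines the disk handling itself.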
On Wed, Nov 29, 2023, 7:25 PM Takashi Kajinami < kajinamit@oss.nttdata.com> wrote:
Ah, yes. config-drive. That's a good point.
When config-drive is used, nova creates a local disk file which stores instance metadata and attaches the disk to the instance. The cloud-init service in the instance detects the disk and reads metadata from it rather than from the metadata API. When an instance with a config drive is migrated, the drive file must also be transferred to the destination compute node, and that's why block migration is required.
AFAIK Octavia is heavily dependent on instance metadata. There is an option to use the usual user data instead of config drive, but that option is deprecated because of known problems caused by the size limit on user data[1], and now config drive is required for all amphora instances.
So as a result, live migration of amphora instances needs block migration, even if volume boot is used, unless you have shared storage (such as NFS) in your compute nodes.
[1] https://review.opendev.org/c/openstack/octavia/+/855441
On 11/29/23 21:01, Nguyễn Hữu Khôi wrote:
Hello.
I mean I use volume boot for the Octavia instances. I checked, and no local disk is used by any instance on my compute node.
However, I don't know why I can only do live migration with block migration.
I read this from Red Hat:
If the instance uses config-drive, block migration is required to live-migrate the instance
This command checks it:
openstack server show <instance name/id> -c config_drive
My Octavia instance returns "True".
Nguyen Huu Khoi
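The config_drive check above can be wrapped in a small helper for scripting; the `openstack server show <id> -c config_drive -f value` form is standard openstackclient output filtering, while the helper itself (and its yes/no answers, which reflect the Red Hat guidance quoted above) is just a sketch:

```shell
#!/bin/sh
# Sketch: decide, per the Red Hat guidance quoted above, whether block
# migration would be needed. Feed it the output of:
#   openstack server show <uuid> -c config_drive -f value
needs_block_migration() {
    case "$1" in
        True|true|1) echo "yes" ;;  # config drive is a local file on the host
        *)           echo "no"  ;;
    esac
}
```

As Sean points out elsewhere in the thread, with microversion 2.25+ nova makes this decision itself, so a helper like this is only useful with older clients.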
On Wed, Nov 29, 2023 at 6:12 PM Takashi Kajinami < kajinamit@oss.nttdata.com> wrote:
On 11/29/23 17:34, Nguyễn Hữu Khôi wrote:
Hi, I am using shared storage for the Octavia instances.
Do you mind elaborating on the "shared storage" here?
Are you using volume-backed amphorae or shared storage in your compute nodes?
In case you have shared storage among some compute nodes, I'd suggest you check the source and destination compute nodes and see whether these two nodes are actually using the same shared storage for instance data (usually located at /var/lib/nova/instances).
If you are using volume-backed amphorae, then I would check the libvirt XML of the amphora instance and also the nova flavor used, to find out any local disk device attached (e.g. if the flavor has swap then it may attach an additional local device).
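A quick way to test the "same shared storage" condition on a single node is to compare filesystem device IDs; `stat -c %d` is GNU coreutils syntax (Linux), and the cross-node variant is only described in a comment since it depends on your deployment:

```shell
#!/bin/sh
# Sketch: succeed if both paths sit on the same filesystem (same device ID).
# For a cross-node check, instead create a marker file under
# /var/lib/nova/instances on the source node and look for it on the
# destination node, e.g. over ssh.
same_filesystem() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}
```

If two compute nodes both mount the same NFS or CephFS export at /var/lib/nova/instances, a marker file written on one appears on the other; if not, nova is right to refuse a shared-storage live migration.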
We can move Octavia instances via failover, but if there are hundreds of load balancers then it will be a big problem. I am planning to use dedicated compute nodes for Octavia only and use the Active-Standby topology.
Nguyen Huu Khoi
Hi. I would like to know what the impact is when we use --shared-migration or --block-migration?
Nguyen Huu Khoi
On Thu, Nov 30, 2023 at 1:40 AM <smooney@redhat.com> wrote:
participants (3)
- Nguyễn Hữu Khôi
- smooney@redhat.com
- Takashi Kajinami