Hi Michael,

I have realized that the Docker version on the new compute nodes after running the bootstrap command is 24, while the previous compute and controller nodes have 20. Could that be a problem?

Regards
Tony Karera

On Wed, Aug 30, 2023 at 3:22 PM <openstack-discuss-request@lists.openstack.org> wrote:
Send openstack-discuss mailing list submissions to openstack-discuss@lists.openstack.org
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
or, via email, send a message with subject or body 'help' to openstack-discuss-request@lists.openstack.org
You can reach the person managing the list at openstack-discuss-owner@lists.openstack.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..."
Today's Topics:
1. Re: [elections][Cinder] PTL Candidacy for 2024.1 (Caracal) (Brian Rosmaita)
2. Re: [kolla] [train] [cinder] Cinder issues during controller replacement (Danny Webb)
----------------------------------------------------------------------
Message: 1
Date: Wed, 30 Aug 2023 08:31:32 -0400
From: Brian Rosmaita <rosmaita.fossdev@gmail.com>
To: openstack-discuss@lists.openstack.org
Subject: Re: [elections][Cinder] PTL Candidacy for 2024.1 (Caracal)
Message-ID: <fa39fe25-6651-688a-4bfa-39abbdbb7e04@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 8/29/23 10:17 AM, Rajat Dhasmana wrote:
Hi All,
I would like to nominate myself to be Cinder PTL during the 2024.1 (Caracal) cycle. [snip] Here are some work items we are planning for the next cycle (2024.1):
* We still lack review bandwidth since one of our active cores, Sofia, can no longer contribute to Cinder, so we will be looking for potential core reviewers.
* Continue working on migrating from cinderclient to OSC (and SDK) support.
* Continue with the current Cinder events, like the festival of XS reviews, the midcycle, and a monthly video meeting, which provide regular team interaction and help track and discuss ongoing work throughout the cycle.
All this sounds great to me! Thanks for stepping up once again to take on the task of herding the Argonauts and keeping the Cinder project on track.
[1] https://review.opendev.org/q/topic:cinderclient-sdk-migration
[2] https://review.opendev.org/q/topic:cinder-sdk-gap
[3] https://docs.google.com/spreadsheets/d/1yetPti2XImRnOXvvJH48yQogdKDzRWDl-NYb...
[4] https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branch...
Thanks Rajat Dhasmana
------------------------------
Message: 2
Date: Wed, 30 Aug 2023 13:19:49 +0000
From: Danny Webb <Danny.Webb@thehutgroup.com>
To: Eugen Block <eblock@nde.ag>, "openstack-discuss@lists.openstack.org" <openstack-discuss@lists.openstack.org>
Subject: Re: [kolla] [train] [cinder] Cinder issues during controller replacement
Message-ID: <LO2P265MB5773FCAFE86B26D1E45E31459AE6A@LO2P265MB5773.GBRP265.PROD.OUTLOOK.COM>
Content-Type: text/plain; charset="windows-1252"
Just to add to the cornucopia of configuration methods for Cinder backends: we use a variation of this for an active/active controller setup, which seems to work (on Pure, Ceph RBD and Nimble). We use both backend_host and volume_backend_name so any of our controllers can action a request for a given volume. We've not had any issues with this setup since Ussuri (we're on Zed / Yoga now).
eg:
[rbd]
backend_host = ceph-nvme
volume_backend_name = high-performance
...

[pure]
backend_host = pure
volume_backend_name = high-performance
...
which leaves us with a set of the following:
$ openstack volume service list
+------------------+-------------------------+----------+---------+-------+----------------------------+
| Binary           | Host                    | Zone     | Status  | State | Updated At                 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
| cinder-volume    | nimble@nimble-az1       | gb-lon-1 | enabled | up    | 2023-08-30T12:34:18.000000 |
| cinder-volume    | nimble@nimble-az2       | gb-lon-2 | enabled | up    | 2023-08-30T12:34:14.000000 |
| cinder-volume    | nimble@nimble-az3       | gb-lon-3 | enabled | up    | 2023-08-30T12:34:17.000000 |
| cinder-volume    | ceph-nvme@ceph-nvme-az3 | gb-lon-3 | enabled | up    | 2023-08-30T12:34:14.000000 |
| cinder-volume    | pure@pure-az1           | gb-lon-1 | enabled | up    | 2023-08-30T12:34:11.000000 |
| cinder-volume    | pure@pure-az2           | gb-lon-2 | enabled | up    | 2023-08-30T12:34:11.000000 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
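Since both backends above advertise the same volume_backend_name, a single volume type whose extra spec matches that name can land volumes on either of them. A minimal sketch of how such a type would be created (the type name "high-performance" is only an assumption here):

openstack volume type create high-performance
openstack volume type set --property volume_backend_name=high-performance high-performance

The scheduler then picks whichever matching backend fits a new volume of that type.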
________________________________
From: Eugen Block <eblock@nde.ag>
Sent: 30 August 2023 11:36
To: openstack-discuss@lists.openstack.org <openstack-discuss@lists.openstack.org>
Subject: Re: [kolla] [train] [cinder] Cinder issues during controller replacement
Just to share our config: we don't use "backend_host" but only "host" on all control nodes. Its value is the hostname (shortname) pointing to the virtual IP, which migrates in case of a failure. We only use Ceph as the storage backend, with different volume types. The volume service list looks like this:
controller02:~ # openstack volume service list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary           | Host               | Zone | Status  | State | Updated At                 |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller         | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
| cinder-backup    | controller         | nova | enabled | up    | 2023-08-30T10:32:34.000000 |
| cinder-volume    | controller@rbd     | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
| cinder-volume    | controller@rbd2    | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
| cinder-volume    | controller@ceph-ec | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+
We haven't seen any issue with this setup in years during failover.
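For reference, in cinder.conf terms that setup boils down to something like the following. This is only a sketch: the backend section names come from the service list above, while the driver line and backend names are assumptions:

[DEFAULT]
# shortname that resolves to the virtual IP shared by the control nodes
host = controller
enabled_backends = rbd,rbd2,ceph-ec

[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd
...

(the rbd2 and ceph-ec sections follow the same pattern)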
Quoting Gorka Eguileor <geguileo@redhat.com>:
On 29/08, Albert Braden wrote:
What does backend_host look like? Should it match my internal API URL, i.e. api-int.qde4.ourdomain.com?

On Tuesday, August 29, 2023 at 10:44:48 AM EDT, Gorka Eguileor <geguileo@redhat.com> wrote:
Hi,
If I remember correctly you can set it to anything you want, but for convenience I would recommend setting it to the hostname that currently has the most volumes in your system.
Let's say you have 3 hosts:
  qde4-ctrl1.cloud.ourdomain.com
  qde4-ctrl2.cloud.ourdomain.com
  qde4-ctrl3.cloud.ourdomain.com
And you have 100 volumes on ctrl1, 20 on ctrl2, and 10 on ctrl3.
Then it would be best to set it to ctrl1:

[rbd-1]
backend_host = qde4-ctrl1.cloud.ourdomain.com
And then use the cinder-manage command to update the volumes from the other two hosts (see the example below, after the host format explanation).
For your information, the value you see as "os-vol-host-attr:host" when viewing the detailed information of a volume is in the form: <HostName>@<BackendName>#<PoolName>
In your case:
  <HostName>    = qde4-ctrl1.cloud.ourdomain.com
  <BackendName> = rbd-1
  <PoolName>    = rbd-1
In the RBD case the poolname will always be the same as the backendname.
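Putting that together, moving the volumes currently owned by ctrl2 and ctrl3 over to ctrl1 would look something like this (a sketch using the example hostnames above; use exactly the host strings your volumes report, whether that is the bare hostname or the host@backend / host@backend#pool form):

cinder-manage volume update_host \
    --currenthost qde4-ctrl2.cloud.ourdomain.com@rbd-1 \
    --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1
cinder-manage volume update_host \
    --currenthost qde4-ctrl3.cloud.ourdomain.com@rbd-1 \
    --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1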
Cheers, Gorka.
On 29/08, Albert Braden wrote:

We're replacing controllers, and it takes a few hours to build the new controller. We're following this procedure to remove the old controller:
https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-host...
After that the cluster seems to run fine on 2 controllers, but approximately 1/3 of our volumes can't be attached to a VM. When we look at those volumes, we see this:
| os-vol-host-attr:host | qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1 |
Ctrl1 is the controller that is being replaced. Is it possible to change the os-vol-host-attr on a volume? How can we work around this issue while we are replacing controllers? Do we need to disable the API for the duration of the replacement process, or is there a better way?
Hi,
Assuming you are running cinder-volume in Active-Passive mode (which I believe was the only way back in Train), you should be hardcoding the host name in the cinder.conf file to avoid losing access to your volumes when the volume service starts on another host.
This is done with the "backend_host" configuration option within the specific driver section in cinder.conf.
As for how to change the value of all the volumes to the same host value, you can use the "cinder-manage" command:
cinder-manage volume update_host \
    --currenthost <current host> \
    --newhost <new host>
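Afterwards you can check that a volume has been moved with something like (a sketch; substitute a real volume ID):

openstack volume show <volume-id> -c os-vol-host-attr:host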
Cheers, Gorka.