<div dir="ltr"><div class="gmail_default" style="font-size:small;color:#444444"><div class="gmail_default">Hi Michael,</div><div class="gmail_default"><br></div><div class="gmail_default"><br></div><div class="gmail_default">I have realized that the docker version I have on the new compute nodes after running the bootstrap command is 24 while the previous compute and controller nodes  have 20.</div><div class="gmail_default"><br></div><div class="gmail_default">Could that be a problem?</div><div class="gmail_default"><br></div><div style="color:rgb(34,34,34)"><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div><font face="arial, sans-serif">Regards</font></div></div></div></div></div></div></div></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div><font face="arial, sans-serif">Regards</font></div><div><font face="arial, sans-serif"><br></font></div><div dir="ltr"><font face="arial, sans-serif">Tony Karera</font></div><div dir="ltr"><br></div><div><br></div></div></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Aug 30, 2023 at 3:22 PM <<a href="mailto:openstack-discuss-request@lists.openstack.org">openstack-discuss-request@lists.openstack.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Send openstack-discuss mailing list submissions to<br>
        <a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
        <a href="https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss" rel="noreferrer" target="_blank">https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss</a><br>
<br>
or, via email, send a message with subject or body 'help' to<br>
        <a href="mailto:openstack-discuss-request@lists.openstack.org" target="_blank">openstack-discuss-request@lists.openstack.org</a><br>
<br>
You can reach the person managing the list at<br>
        <a href="mailto:openstack-discuss-owner@lists.openstack.org" target="_blank">openstack-discuss-owner@lists.openstack.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of openstack-discuss digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
   1. Re: [elections][Cinder] PTL Candidacy for 2024.1 (Caracal)<br>
      (Brian Rosmaita)<br>
   2. Re: [kolla] [train] [cinder] Cinder issues during controller<br>
      replacement (Danny Webb)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Wed, 30 Aug 2023 08:31:32 -0400<br>
From: Brian Rosmaita <<a href="mailto:rosmaita.fossdev@gmail.com" target="_blank">rosmaita.fossdev@gmail.com</a>><br>
To: <a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a><br>
Subject: Re: [elections][Cinder] PTL Candidacy for 2024.1 (Caracal)<br>
Message-ID: <<a href="mailto:fa39fe25-6651-688a-4bfa-39abbdbb7e04@gmail.com" target="_blank">fa39fe25-6651-688a-4bfa-39abbdbb7e04@gmail.com</a>><br>
Content-Type: text/plain; charset=UTF-8; format=flowed<br>
<br>
On 8/29/23 10:17 AM, Rajat Dhasmana wrote:<br>
> Hi All,<br>
> <br>
> I would like to nominate myself to be Cinder PTL during the 2024.1 (Caracal)<br>
> cycle.<br>
[snip]<br>
> Here are some work items we are planning for the next cycle (2024.1):<br>
> <br>
> * We still lack review bandwidth since one of our active cores, Sofia, <br>
> can no longer contribute to Cinder, so we will be looking out for <br>
> potential core reviewers.<br>
> * Continue working on migrating from cinderclient to OSC (and SDK) support.<br>
> * Continue with the current Cinder events (festival of XS reviews, <br>
> midcycle, monthly video meeting, etc.), which provide regular team <br>
> interaction and help with tracking and discussing ongoing work <br>
> throughout the cycle.<br>
<br>
All this sounds great to me!  Thanks for stepping up once again to take <br>
on the task of herding the Argonauts and keeping the Cinder project on <br>
track.<br>
<br>
> <br>
> [1] <a href="https://review.opendev.org/q/topic:cinderclient-sdk-migration" rel="noreferrer" target="_blank">https://review.opendev.org/q/topic:cinderclient-sdk-migration</a> <br>
> <<a href="https://review.opendev.org/q/topic:cinderclient-sdk-migration" rel="noreferrer" target="_blank">https://review.opendev.org/q/topic:cinderclient-sdk-migration</a>><br>
> [2] <a href="https://review.opendev.org/q/topic:cinder-sdk-gap" rel="noreferrer" target="_blank">https://review.opendev.org/q/topic:cinder-sdk-gap</a> <br>
> <<a href="https://review.opendev.org/q/topic:cinder-sdk-gap" rel="noreferrer" target="_blank">https://review.opendev.org/q/topic:cinder-sdk-gap</a>><br>
> [3] <br>
> <a href="https://docs.google.com/spreadsheets/d/1yetPti2XImRnOXvvJH48yQogdKDzRWDl-NYbjTF67Gg/edit#gid=1463660729" rel="noreferrer" target="_blank">https://docs.google.com/spreadsheets/d/1yetPti2XImRnOXvvJH48yQogdKDzRWDl-NYbjTF67Gg/edit#gid=1463660729</a> <<a href="https://docs.google.com/spreadsheets/d/1yetPti2XImRnOXvvJH48yQogdKDzRWDl-NYbjTF67Gg/edit#gid=1463660729" rel="noreferrer" target="_blank">https://docs.google.com/spreadsheets/d/1yetPti2XImRnOXvvJH48yQogdKDzRWDl-NYbjTF67Gg/edit#gid=1463660729</a>><br>
> [4] <br>
> <a href="https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html" rel="noreferrer" target="_blank">https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html</a> <<a href="https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html" rel="noreferrer" target="_blank">https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html</a>><br>
> <br>
> Thanks<br>
> Rajat Dhasmana<br>
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Wed, 30 Aug 2023 13:19:49 +0000<br>
From: Danny Webb <<a href="mailto:Danny.Webb@thehutgroup.com" target="_blank">Danny.Webb@thehutgroup.com</a>><br>
To: Eugen Block <<a href="mailto:eblock@nde.ag" target="_blank">eblock@nde.ag</a>>,<br>
        "<a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a>"<br>
        <<a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a>><br>
Subject: Re: [kolla] [train] [cinder] Cinder issues during controller<br>
        replacement<br>
Message-ID:<br>
        <<a href="mailto:LO2P265MB5773FCAFE86B26D1E45E31459AE6A@LO2P265MB5773.GBRP265.PROD.OUTLOOK.COM" target="_blank">LO2P265MB5773FCAFE86B26D1E45E31459AE6A@LO2P265MB5773.GBRP265.PROD.OUTLOOK.COM</a>><br>
<br>
Content-Type: text/plain; charset="windows-1252"<br>
<br>
Just to add to the cornucopia of configuration methods for Cinder backends: we use a variation of this for an active/active controller setup, which seems to work (on Pure, Ceph RBD and Nimble). We use both backend_host and volume_backend_name so that any of our controllers can action a request for a given volume. We've not had any issues with this setup since Ussuri (we're on Zed / Yoga now).<br>
<br>
eg:<br>
<br>
[rbd]<br>
backend_host = ceph-nvme<br>
volume_backend_name = high-performance<br>
...<br>
<br>
[pure]<br>
backend_host = pure<br>
volume_backend_name = high-performance<br>
...<br>
<br>
which leaves us with a set of the following:<br>
<br>
$ openstack volume service list<br>
+------------------+-------------------------+----------+---------+-------+----------------------------+<br>
| Binary           | Host                    | Zone     | Status  | State | Updated At                 |<br>
+------------------+-------------------------+----------+---------+-------+----------------------------+<br>
| cinder-volume    | nimble@nimble-az1       | gb-lon-1 | enabled | up    | 2023-08-30T12:34:18.000000 |<br>
| cinder-volume    | nimble@nimble-az2       | gb-lon-2 | enabled | up    | 2023-08-30T12:34:14.000000 |<br>
| cinder-volume    | nimble@nimble-az3       | gb-lon-3 | enabled | up    | 2023-08-30T12:34:17.000000 |<br>
| cinder-volume    | ceph-nvme@ceph-nvme-az3 | gb-lon-3 | enabled | up    | 2023-08-30T12:34:14.000000 |<br>
| cinder-volume    | pure@pure-az1           | gb-lon-1 | enabled | up    | 2023-08-30T12:34:11.000000 |<br>
| cinder-volume    | pure@pure-az2           | gb-lon-2 | enabled | up    | 2023-08-30T12:34:11.000000 |<br>
+------------------+-------------------------+----------+---------+-------+----------------------------+<br>
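<br>
As an aside, here is a minimal sketch of how a volume type can be tied to those backends through volume_backend_name. The type name "high-performance" is just an illustrative assumption, not necessarily what a given deployment uses:<br>
<br>
$ openstack volume type create high-performance<br>
$ openstack volume type set --property volume_backend_name=high-performance high-performance<br>
<br>
Because both backend sections report the same volume_backend_name, the scheduler can place volumes of that type on either backend (subject to AZ and capacity filters), while backend_host keeps the service host name stable across controllers.<br>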
<br>
<br>
________________________________<br>
From: Eugen Block <<a href="mailto:eblock@nde.ag" target="_blank">eblock@nde.ag</a>><br>
Sent: 30 August 2023 11:36<br>
To: <a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a> <<a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a>><br>
Subject: Re: [kolla] [train] [cinder] Cinder issues during controller replacement<br>
<br>
Just to share our config: we don't use "backend_host" but only "host"<br>
on all control nodes. Its value is the hostname (shortname) pointing<br>
to the virtual IP, which migrates in case of a failure. We only use<br>
Ceph as the storage backend, with different volume types. The volume<br>
service list looks like this:<br>
<br>
controller02:~ # openstack volume service list<br>
+------------------+--------------------+------+---------+-------+----------------------------+<br>
| Binary           | Host               | Zone | Status  | State | Updated At                 |<br>
+------------------+--------------------+------+---------+-------+----------------------------+<br>
| cinder-scheduler | controller         | nova | enabled | up    | 2023-08-30T10:32:36.000000 |<br>
| cinder-backup    | controller         | nova | enabled | up    | 2023-08-30T10:32:34.000000 |<br>
| cinder-volume    | controller@rbd     | nova | enabled | up    | 2023-08-30T10:32:28.000000 |<br>
| cinder-volume    | controller@rbd2    | nova | enabled | up    | 2023-08-30T10:32:36.000000 |<br>
| cinder-volume    | controller@ceph-ec | nova | enabled | up    | 2023-08-30T10:32:28.000000 |<br>
+------------------+--------------------+------+---------+-------+----------------------------+<br>
<br>
We haven't seen any issue with this setup in years during failover.<br>
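<br>
For reference, a minimal cinder.conf sketch of that layout (backend section and pool names are assumed from the service list above; driver details are elided, and the ceph-ec section would follow the same pattern):<br>
<br>
[DEFAULT]<br>
# "host" pins all backends to the virtual hostname that follows the VIP<br>
host = controller<br>
enabled_backends = rbd,rbd2,ceph-ec<br>
<br>
[rbd]<br>
volume_driver = cinder.volume.drivers.rbd.RBDDriver<br>
# backend name assumed to match the section name<br>
volume_backend_name = rbd<br>
rbd_pool = ...<br>
<br>
[rbd2]<br>
volume_driver = cinder.volume.drivers.rbd.RBDDriver<br>
volume_backend_name = rbd2<br>
rbd_pool = ...<br>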
<br>
Zitat von Gorka Eguileor <<a href="mailto:geguileo@redhat.com" target="_blank">geguileo@redhat.com</a>>:<br>
<br>
> On 29/08, Albert Braden wrote:<br>
>> What does backend_host look like? Should it match my internal API<br>
>> URL, i.e. <a href="http://api-int.qde4.ourdomain.com" rel="noreferrer" target="_blank">api-int.qde4.ourdomain.com</a>?<br>
>> On Tuesday, August 29, 2023 at 10:44:48 AM EDT, Gorka Eguileor<br>
>> <<a href="mailto:geguileo@redhat.com" target="_blank">geguileo@redhat.com</a>> wrote:<br>
>><br>
><br>
> Hi,<br>
><br>
> If I remember correctly you can set it to anything you want, but for<br>
> convenience I would recommend setting it to the hostname that currently<br>
> has more volumes in your system.<br>
><br>
> Let's say you have 3 hosts:<br>
> <a href="http://qde4-ctrl1.cloud.ourdomain.com" rel="noreferrer" target="_blank">qde4-ctrl1.cloud.ourdomain.com</a><br>
> <a href="http://qde4-ctrl2.cloud.ourdomain.com" rel="noreferrer" target="_blank">qde4-ctrl2.cloud.ourdomain.com</a><br>
> <a href="http://qde4-ctrl3.cloud.ourdomain.com" rel="noreferrer" target="_blank">qde4-ctrl3.cloud.ourdomain.com</a><br>
><br>
> And you have 100 volumes on ctrl1, 20 on ctrl2, and 10 on ctrl3.<br>
><br>
> Then it would be best to set it to ctrl1:<br>
> [rbd-1]<br>
> backend_host = <a href="http://qde4-ctrl1.cloud.ourdomain.com" rel="noreferrer" target="_blank">qde4-ctrl1.cloud.ourdomain.com</a><br>
><br>
> And then use the cinder-manage command to modify the other 2.<br>
><br>
> For your information the value you see as "os-vol-host-attr:host" when<br>
> seeing the detailed information of a volume is in the form of:<br>
> <HostName>@<BackendName>#<PoolName><br>
><br>
> In your case:<br>
> <HostName> = <a href="http://qde4-ctrl1.cloud.ourdomain.com" rel="noreferrer" target="_blank">qde4-ctrl1.cloud.ourdomain.com</a><br>
> <BackendName> = rbd-1<br>
> <PoolName> = rbd-1<br>
><br>
> In the RBD case the poolname will always be the same as the backendname.<br>
><br>
> Cheers,<br>
> Gorka.<br>
><br>
>> On 29/08, Albert Braden wrote:<br>
>> > We're replacing controllers, and it takes a few hours to build<br>
>> the new controller. We're following this procedure to remove the<br>
>> old controller:<br>
>> <a href="https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html" rel="noreferrer" target="_blank">https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html</a><<a href="https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html" rel="noreferrer" target="_blank">https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html</a>><br>
>> ><br>
>> >  After that the cluster seems to run fine on 2 controllers, but<br>
>> approximately 1/3 of our volumes can't be attached to a VM. When we<br>
>> look at those volumes, we see this:<br>
>> ><br>
>> > | os-vol-host-attr:host          |<br>
>> qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1<br>
>>   |<br>
>> ><br>
>> > Ctrl1 is the controller that is being replaced. Is it possible to<br>
>> change the os-vol-host-attr on a volume? How can we work around<br>
>> this issue while we are replacing controllers? Do we need to<br>
>> disable the API for the duration of the replacement process, or is<br>
>> there a better way?<br>
>> ><br>
>><br>
>> Hi,<br>
>><br>
>> Assuming you are running cinder volume in Active-Passive mode (which I<br>
>> believe was the only way back in Train) then you should be hardcoding<br>
>> the host name in the cinder.conf file to avoid losing access to your<br>
>> volumes when the volume service starts in another host.<br>
>><br>
>> This is done with the "backend_host" configuration option within the<br>
>> specific driver section in cinder.conf.<br>
>><br>
>> As for how to change the value of all the volumes to the same host<br>
>> value, you can use the "cinder-manage" command:<br>
>><br>
>>   cinder-manage volume update_host \<br>
>>     --currenthost <current host> \<br>
>>     --newhost <new host><br>
>><br>
>> Cheers,<br>
>> Gorka.<br>
>><br>
>><br>
>><br>
<br>
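To make the cinder-manage step above concrete, a hedged example using the hostnames from this thread (whether the "@backend" or "#pool" suffix is required in the host string should be verified against the cinder-manage documentation for the release in use):<br>
<br>
# re-point volumes currently registered under ctrl2 at ctrl1; repeat for ctrl3<br>
cinder-manage volume update_host \<br>
    --currenthost qde4-ctrl2.cloud.ourdomain.com@rbd-1 \<br>
    --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1<br>
<br>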
<br>
<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230830/da83b4b1/attachment.htm" rel="noreferrer" target="_blank">https://lists.openstack.org/pipermail/openstack-discuss/attachments/20230830/da83b4b1/attachment.htm</a>><br>
<br>
------------------------------<br>
<br>
Subject: Digest Footer<br>
<br>
_______________________________________________<br>
openstack-discuss mailing list<br>
<a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a><br>
<br>
<br>
------------------------------<br>
<br>
End of openstack-discuss Digest, Vol 58, Issue 107<br>
**************************************************<br>
</blockquote></div>