<div dir="ltr"><div class="gmail_default" style="font-size:small;color:#444444">My issue was resolved after upgrading kolla-ansible.</div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div><font face="arial, sans-serif">Regards</font></div><div><font face="arial, sans-serif"><br></font></div><div dir="ltr"><font face="arial, sans-serif">Tony Karera</font></div><div dir="ltr"><br></div><div><br></div></div></div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Aug 30, 2023 at 3:53 PM <<a href="mailto:openstack-discuss-request@lists.openstack.org">openstack-discuss-request@lists.openstack.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Send openstack-discuss mailing list submissions to<br>
<a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss" rel="noreferrer" target="_blank">https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss</a><br>
<br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:openstack-discuss-request@lists.openstack.org" target="_blank">openstack-discuss-request@lists.openstack.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:openstack-discuss-owner@lists.openstack.org" target="_blank">openstack-discuss-owner@lists.openstack.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of openstack-discuss digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. [keystone][election] Self-nomination for Keystone PTL for<br>
2024.1 cycle (Dave Wilde)<br>
2. Re: openstack-discuss Digest, Vol 58, Issue 107 (Karera Tony)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Wed, 30 Aug 2023 08:35:00 -0500<br>
From: Dave Wilde <dwilde@redhat.com>
To: OpenStack Discuss <openstack-discuss@lists.openstack.org>
Subject: [keystone][election] Self-nomination for Keystone PTL for<br>
2024.1 cycle<br>
Message-ID: <54d82977-aee0-4df2-a7c3-2be3b8c2788c@Spark><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Hey folks,<br>
<br>
It's Dave here, your current OpenStack Keystone PTL. I'd like to submit my
candidacy to again act as the PTL for the 2024.1 cycle. We're making great
progress with Keystone, and I would like to continue that excellent work.
<br>
As PTL, here are some of the things I'd like to focus on this cycle:
<br>
- Finish the manager role and ensure that the SRBAC implied roles are correct
  for manager and member
- Continue the OAuth 2.0 implementation
- Start a known issues section in the Keystone documentation
- Start a documentation audit to ensure that our documentation is of the
  highest quality
<br>
Of course we will continue the weekly meetings and the reviewathons. I think
the reviewathons have been very successful and I'm keen on keeping them going.
<br>
I'm looking forward to another successful cycle and working with everyone<br>
again!<br>
<br>
Thank you!<br>
<br>
/Dave<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Wed, 30 Aug 2023 15:50:18 +0200<br>
From: Karera Tony <tonykarera@gmail.com>
To: openstack-discuss@lists.openstack.org, michael@knox.net.nz
Subject: Re: openstack-discuss Digest, Vol 58, Issue 107<br>
Message-ID:<br>
        <CA+69TL2rUJpKO95VovT3Oo+cKEna2NjAhM=iovTfAamkhVxSGA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Hi Michael,<br>
<br>
<br>
I have realized that the Docker version I have on the new compute nodes
after running the bootstrap command is 24, while the previous compute and
controller nodes have 20.
<br>
Could that be a problem?<br>
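
For what it's worth, here is a quick way to compare the installed versions
across nodes (the hostnames below are placeholders for the actual inventory):

for host in control01 compute01 compute02; do
    ssh "$host" 'docker --version'
done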
<br>
Regards
<br>
Tony Karera<br>
<br>
<br>
<br>
<br>
On Wed, Aug 30, 2023 at 3:22 PM <
openstack-discuss-request@lists.openstack.org> wrote:
<br>
> Send openstack-discuss mailing list submissions to<br>
> <a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a><br>
><br>
> To subscribe or unsubscribe via the World Wide Web, visit<br>
><br>
> <a href="https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss" rel="noreferrer" target="_blank">https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss</a><br>
><br>
> or, via email, send a message with subject or body 'help' to<br>
> <a href="mailto:openstack-discuss-request@lists.openstack.org" target="_blank">openstack-discuss-request@lists.openstack.org</a><br>
><br>
> You can reach the person managing the list at<br>
> <a href="mailto:openstack-discuss-owner@lists.openstack.org" target="_blank">openstack-discuss-owner@lists.openstack.org</a><br>
><br>
> When replying, please edit your Subject line so it is more specific<br>
> than "Re: Contents of openstack-discuss digest..."<br>
><br>
><br>
> Today's Topics:<br>
><br>
> 1. Re: [elections][Cinder] PTL Candidacy for 2024.1 (Caracal)<br>
> (Brian Rosmaita)<br>
> 2. Re: [kolla] [train] [cinder] Cinder issues during controller<br>
> replacement (Danny Webb)<br>
><br>
><br>
> ----------------------------------------------------------------------<br>
><br>
> Message: 1<br>
> Date: Wed, 30 Aug 2023 08:31:32 -0400<br>
> From: Brian Rosmaita <rosmaita.fossdev@gmail.com>
> To: openstack-discuss@lists.openstack.org
> Subject: Re: [elections][Cinder] PTL Candidacy for 2024.1 (Caracal)
> Message-ID: <fa39fe25-6651-688a-4bfa-39abbdbb7e04@gmail.com>
> Content-Type: text/plain; charset=UTF-8; format=flowed<br>
><br>
> On 8/29/23 10:17 AM, Rajat Dhasmana wrote:<br>
> > Hi All,<br>
> ><br>
> > I would like to nominate myself to be Cinder PTL during the 2024.1
> (Caracal)<br>
> > cycle.<br>
> [snip]<br>
> > Here are some work items we are planning for the next cycle (2024.1):<br>
> ><br>
> > * We still lack review bandwidth since one of our active cores, Sofia,
> > couldn't contribute to Cinder anymore, so we will be looking out for
> > potential core reviewers.
> > * Continue working on migrating from cinderclient to OSC (and SDK)<br>
> support.<br>
> > * Continue with the current cinder events, like the festival of XS reviews,
> > midcycle, and a video meeting once a month, which provide regular team
> > interaction and help with tracking and discussing ongoing work
> > throughout the cycle.
><br>
> All this sounds great to me! Thanks for stepping up once again to take<br>
> on the task of herding the Argonauts and keeping the Cinder project on<br>
> track.<br>
><br>
> ><br>
> > [1] https://review.opendev.org/q/topic:cinderclient-sdk-migration
> > [2] https://review.opendev.org/q/topic:cinder-sdk-gap
> > [3] https://docs.google.com/spreadsheets/d/1yetPti2XImRnOXvvJH48yQogdKDzRWDl-NYbjTF67Gg/edit#gid=1463660729
> > [4] https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html
> ><br>
> > Thanks<br>
> > Rajat Dhasmana<br>
><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 2<br>
> Date: Wed, 30 Aug 2023 13:19:49 +0000<br>
> From: Danny Webb <Danny.Webb@thehutgroup.com>
> To: Eugen Block <eblock@nde.ag>,
>         "openstack-discuss@lists.openstack.org"
>         <openstack-discuss@lists.openstack.org>
> Subject: Re: [kolla] [train] [cinder] Cinder issues during controller
>         replacement
> Message-ID:
>         <LO2P265MB5773FCAFE86B26D1E45E31459AE6A@LO2P265MB5773.GBRP265.PROD.OUTLOOK.COM>
><br>
> Content-Type: text/plain; charset="windows-1252"<br>
><br>
> Just to add to the cornucopia of configuration methods for cinder
> backends: we use a variation of this for an active/active controller
> setup, which seems to work (on Pure, Ceph RBD and Nimble). We use both
> backend_host and volume_backend_name so that any of our controllers can
> action a request for a given volume. We've not had any issues with this
> setup since Ussuri (we're on Zed / Yoga now).
><br>
> eg:<br>
><br>
> [rbd]<br>
> backend_host = ceph-nvme<br>
> volume_backend_name = high-performance<br>
> ...<br>
><br>
> [pure]<br>
> backend_host = pure<br>
> volume_backend_name = high-performance<br>
> ...<br>
><br>
> which leaves us with the following set of services:
><br>
> $ openstack volume service list<br>
><br>
> +------------------+-------------------------+----------+---------+-------+----------------------------+
> | Binary           | Host                    | Zone     | Status  | State | Updated At                 |
> +------------------+-------------------------+----------+---------+-------+----------------------------+
> | cinder-volume    | nimble@nimble-az1       | gb-lon-1 | enabled | up    | 2023-08-30T12:34:18.000000 |
> | cinder-volume    | nimble@nimble-az2       | gb-lon-2 | enabled | up    | 2023-08-30T12:34:14.000000 |
> | cinder-volume    | nimble@nimble-az3       | gb-lon-3 | enabled | up    | 2023-08-30T12:34:17.000000 |
> | cinder-volume    | ceph-nvme@ceph-nvme-az3 | gb-lon-3 | enabled | up    | 2023-08-30T12:34:14.000000 |
> | cinder-volume    | pure@pure-az1           | gb-lon-1 | enabled | up    | 2023-08-30T12:34:11.000000 |
> | cinder-volume    | pure@pure-az2           | gb-lon-2 | enabled | up    | 2023-08-30T12:34:11.000000 |
> +------------------+-------------------------+----------+---------+-------+----------------------------+
><br>
><br>
> ________________________________<br>
> From: Eugen Block <eblock@nde.ag>
> Sent: 30 August 2023 11:36
> To: openstack-discuss@lists.openstack.org
>         <openstack-discuss@lists.openstack.org>
> Subject: Re: [kolla] [train] [cinder] Cinder issues during controller<br>
> replacement<br>
><br>
> Just to share our config: we don't use "backend_host" but only "host"
> on all control nodes. Its value is the hostname (shortname) pointing
> to the virtual IP, which migrates in case of a failure. We only use
> Ceph as the storage backend, with different volume types. The volume
> service list looks like this:
><br>
> controller02:~ # openstack volume service list<br>
><br>
> +------------------+--------------------+------+---------+-------+----------------------------+
> | Binary           | Host               | Zone | Status  | State | Updated At                 |
> +------------------+--------------------+------+---------+-------+----------------------------+
> | cinder-scheduler | controller         | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
> | cinder-backup    | controller         | nova | enabled | up    | 2023-08-30T10:32:34.000000 |
> | cinder-volume    | controller@rbd     | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
> | cinder-volume    | controller@rbd2    | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
> | cinder-volume    | controller@ceph-ec | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
> +------------------+--------------------+------+---------+-------+----------------------------+
><br>
> We haven't seen any issue with this setup in years during failover.<br>
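>
> For reference, a minimal cinder.conf sketch of that layout could look
> something like this (the driver and pool settings are illustrative,
> not our exact config):
>
> [DEFAULT]
> host = controller
> enabled_backends = rbd,rbd2,ceph-ec
>
> [rbd]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> volume_backend_name = rbd
> rbd_pool = rbd
>
> [ceph-ec]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> volume_backend_name = ceph-ec
> rbd_pool = ceph-ec
>
> ([rbd2] would look the same, pointing at its own pool.)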
><br>
> Quoting Gorka Eguileor <geguileo@redhat.com>:
><br>
> > On 29/08, Albert Braden wrote:<br>
> >> What does backend_host look like? Should it match my internal API
> >> URL, i.e. api-int.qde4.ourdomain.com?
> >> On Tuesday, August 29, 2023 at 10:44:48 AM EDT, Gorka Eguileor
> >> <geguileo@redhat.com> wrote:
> >><br>
> ><br>
> > Hi,<br>
> ><br>
> > If I remember correctly you can set it to anything you want, but for
> > convenience I would recommend setting it to the hostname that currently
> > has the most volumes in your system.
> ><br>
> > Let's say you have 3 hosts:<br>
> > <a href="http://qde4-ctrl1.cloud.ourdomain.com" rel="noreferrer" target="_blank">qde4-ctrl1.cloud.ourdomain.com</a><br>
> > <a href="http://qde4-ctrl2.cloud.ourdomain.com" rel="noreferrer" target="_blank">qde4-ctrl2.cloud.ourdomain.com</a><br>
> > <a href="http://qde4-ctrl3.cloud.ourdomain.com" rel="noreferrer" target="_blank">qde4-ctrl3.cloud.ourdomain.com</a><br>
> ><br>
> > And you have 100 volumes on ctrl1, 20 on ctrl2, and 10 on ctrl3.<br>
> ><br>
> > Then it would be best to set it to ctrl1:<br>
> > [rbd-1]<br>
> > backend_host = qde4-ctrl1.cloud.ourdomain.com
> ><br>
> > And then use the cinder-manage command to modify the other 2.<br>
> ><br>
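> > For example, something along these lines (the host strings should
> > match exactly what the volumes show; the ones below are just
> > illustrative):
> >
> > cinder-manage volume update_host \
> >     --currenthost qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1 \
> >     --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1
> >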
> > For your information the value you see as "os-vol-host-attr:host" when<br>
> > seeing the detailed information of a volume is in the form of:<br>
> > <HostName>@<BackendName>#<PoolName><br>
> ><br>
> > In your case:<br>
> > <HostName> = qde4-ctrl1.cloud.ourdomain.com
> > <BackendName> = rbd-1<br>
> > <PoolName> = rbd-1<br>
> ><br>
> > In the RBD case the poolname will always be the same as the backendname.<br>
> ><br>
> > Cheers,<br>
> > Gorka.<br>
> ><br>
> >> On 29/08, Albert Braden wrote:<br>
> >> > We're replacing controllers, and it takes a few hours to build
> >> > the new controller. We're following this procedure to remove the
> >> > old controller:
> >><br>
> <a href="https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html" rel="noreferrer" target="_blank">https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html</a><br>
> <<br>
> <a href="https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html" rel="noreferrer" target="_blank">https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html</a><br>
> ><br>
> >> ><br>
> >> > After that the cluster seems to run fine on 2 controllers, but
> >> > approximately 1/3 of our volumes can't be attached to a VM. When we
> >> > look at those volumes, we see this:
> >> ><br>
> >> > | os-vol-host-attr:host | qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1 |
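> >> > (That attribute comes from the volume detail output, e.g. something
> >> > like "openstack volume show <volume ID> -c os-vol-host-attr:host"
> >> > should print just that field.)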
> >> ><br>
> >> > Ctrl1 is the controller that is being replaced. Is it possible to
> >> > change the os-vol-host-attr on a volume? How can we work around
> >> > this issue while we are replacing controllers? Do we need to
> >> > disable the API for the duration of the replacement process, or is
> >> > there a better way?
> >> ><br>
> >><br>
> >> Hi,<br>
> >><br>
> >> Assuming you are running cinder volume in Active-Passive mode (which I
> >> believe was the only way back in Train), you should be hardcoding
> >> the host name in the cinder.conf file to avoid losing access to your
> >> volumes when the volume service starts on another host.
> >><br>
> >> This is done with the "backend_host" configuration option within the<br>
> >> specific driver section in cinder.conf.<br>
> >><br>
> >> As for how to change the value of all the volumes to the same host<br>
> >> value, you can use the "cinder-manage" command:<br>
> >><br>
> >> cinder-manage volume update_host \<br>
> >> --currenthost <current host> \<br>
> >> --newhost <new host><br>
> >><br>
> >> Cheers,<br>
> >> Gorka.<br>
> >><br>
> >><br>
> >><br>
><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Subject: Digest Footer<br>
><br>
> _______________________________________________<br>
> openstack-discuss mailing list<br>
> <a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a><br>
><br>
><br>
> ------------------------------<br>
><br>
> End of openstack-discuss Digest, Vol 58, Issue 107<br>
> **************************************************<br>
><br>
<br>
------------------------------<br>
<br>
Subject: Digest Footer<br>
<br>
_______________________________________________<br>
openstack-discuss mailing list<br>
<a href="mailto:openstack-discuss@lists.openstack.org" target="_blank">openstack-discuss@lists.openstack.org</a><br>
<br>
<br>
------------------------------<br>
<br>
End of openstack-discuss Digest, Vol 58, Issue 108<br>
**************************************************<br>