openstack-discuss Digest, Vol 58, Issue 108

Karera Tony tonykarera at gmail.com
Thu Aug 31 10:21:12 UTC 2023


My issue was resolved after upgrading kolla-ansible.
Regards

Tony Karera




On Wed, Aug 30, 2023 at 3:53 PM <
openstack-discuss-request at lists.openstack.org> wrote:

> Send openstack-discuss mailing list submissions to
>         openstack-discuss at lists.openstack.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>
> https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
>
> or, via email, send a message with subject or body 'help' to
>         openstack-discuss-request at lists.openstack.org
>
> You can reach the person managing the list at
>         openstack-discuss-owner at lists.openstack.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of openstack-discuss digest..."
>
>
> Today's Topics:
>
>    1. [keystone][election] Self-nomination for Keystone PTL for
>       2024.1 cycle (Dave Wilde)
>    2. Re: openstack-discuss Digest, Vol 58, Issue 107 (Karera Tony)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 30 Aug 2023 08:35:00 -0500
> From: Dave Wilde <dwilde at redhat.com>
> To: OpenStack Discuss <openstack-discuss at lists.openstack.org>
> Subject: [keystone][election] Self-nomination for Keystone PTL for
>         2024.1 cycle
> Message-ID: <54d82977-aee0-4df2-a7c3-2be3b8c2788c at Spark>
> Content-Type: text/plain; charset="utf-8"
>
> Hey folks,
>
> It's Dave here, your current OpenStack Keystone PTL. I'd like to submit my
> candidacy to again act as the PTL for the 2024.1 cycle. We're making great
> progress with Keystone, and I would like to continue with that excellent
> work.
>
> As PTL this cycle, here are some of the things I'd like to focus on:
>
> - Finish the manager role and ensure that the SRBAC implied roles are
>   correct for manager and member
> - Continue the OAuth 2.0 implementation
> - Start a known issues section in the Keystone documentation
> - Start a documentation audit to ensure that our documentation is of the
>   highest quality
>
> Of course we will continue the weekly meetings and the reviewathons. I
> think the reviewathons have been very successful and I'm keen on keeping
> them going.
>
> I'm looking forward to another successful cycle and working with everyone
> again!
>
> Thank you!
>
> /Dave
>
> ------------------------------
>
> Message: 2
> Date: Wed, 30 Aug 2023 15:50:18 +0200
> From: Karera Tony <tonykarera at gmail.com>
> To: openstack-discuss at lists.openstack.org, michael at knox.net.nz
> Subject: Re: openstack-discuss Digest, Vol 58, Issue 107
> Message-ID:
>         <CA+69TL2rUJpKO95VovT3Oo+cKEna2NjAhM=
> iovTfAamkhVxSGA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Michael,
>
>
> I have realized that the Docker version on the new compute nodes, after
> running the bootstrap command, is 24, while the previous compute and
> controller nodes have 20.
>
> Could that be a problem?
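>
> (For reference, a quick way to compare the installed Docker version across
> all nodes is something like the following, assuming the usual kolla-ansible
> "multinode" inventory file; adjust the inventory path to your deployment:)
>
>   ansible -i multinode all -m command -a "docker --version"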
>
> Regards
>
> Tony Karera
>
>
>
>
> On Wed, Aug 30, 2023 at 3:22 PM <
> openstack-discuss-request at lists.openstack.org> wrote:
>
> >
> > Today's Topics:
> >
> >    1. Re: [elections][Cinder] PTL Candidacy for 2024.1 (Caracal)
> >       (Brian Rosmaita)
> >    2. Re: [kolla] [train] [cinder] Cinder issues during controller
> >       replacement (Danny Webb)
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Wed, 30 Aug 2023 08:31:32 -0400
> > From: Brian Rosmaita <rosmaita.fossdev at gmail.com>
> > To: openstack-discuss at lists.openstack.org
> > Subject: Re: [elections][Cinder] PTL Candidacy for 2024.1 (Caracal)
> > Message-ID: <fa39fe25-6651-688a-4bfa-39abbdbb7e04 at gmail.com>
> > Content-Type: text/plain; charset=UTF-8; format=flowed
> >
> > On 8/29/23 10:17 AM, Rajat Dhasmana wrote:
> > > Hi All,
> > >
> > > I would like to nominate myself to be Cinder PTL during the 2024.1
> > > (Caracal) cycle.
> > [snip]
> > > Here are some work items we are planning for the next cycle (2024.1):
> > >
> > > * We still lack review bandwidth since one of our active cores, Sofia,
> > >   can no longer contribute to Cinder, so we will be looking out for
> > >   potential core reviewers.
> > > * Continue working on migrating from cinderclient to OSC (and SDK) support.
> > > * Continue with the current Cinder events (festival of XS reviews,
> > >   midcycle, monthly video meeting, etc.), which provide regular team
> > >   interaction and help with tracking and discussing ongoing work
> > >   throughout the cycle.
> >
> > All this sounds great to me!  Thanks for stepping up once again to take
> > on the task of herding the Argonauts and keeping the Cinder project on
> > track.
> >
> > >
> > > [1] https://review.opendev.org/q/topic:cinderclient-sdk-migration
> > > [2] https://review.opendev.org/q/topic:cinder-sdk-gap
> > > [3] https://docs.google.com/spreadsheets/d/1yetPti2XImRnOXvvJH48yQogdKDzRWDl-NYbjTF67Gg/edit#gid=1463660729
> > > [4] https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html
> > >
> > > Thanks
> > > Rajat Dhasmana
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Wed, 30 Aug 2023 13:19:49 +0000
> > From: Danny Webb <Danny.Webb at thehutgroup.com>
> > To: Eugen Block <eblock at nde.ag>,
> >         "openstack-discuss at lists.openstack.org"
> >         <openstack-discuss at lists.openstack.org>
> > Subject: Re: [kolla] [train] [cinder] Cinder issues during controller
> >         replacement
> > Message-ID:
> >         <
> >
> LO2P265MB5773FCAFE86B26D1E45E31459AE6A at LO2P265MB5773.GBRP265.PROD.OUTLOOK.COM
> > >
> >
> > Content-Type: text/plain; charset="windows-1252"
> >
> > Just to add to the cornucopia of configuration methods for Cinder
> > backends: we use a variation of this for an active/active controller
> > setup, which seems to work (on Pure, Ceph RBD and Nimble). We use both
> > backend_host and volume_backend_name so that any of our controllers can
> > action a request for a given volume. We've not had any issues with this
> > setup since Ussuri (we're on Zed / Yoga now).
> >
> > eg:
> >
> > [rbd]
> > backend_host = ceph-nvme
> > volume_backend_name = high-performance
> > ...
> >
> > [pure]
> > backend_host = pure
> > volume_backend_name = high-performance
> > ...
> >
> > which leaves us with a set of the following:
> >
> > $ openstack volume service list
> > +------------------+-------------------------+----------+---------+-------+----------------------------+
> > | Binary           | Host                    | Zone     | Status  | State | Updated At                 |
> > +------------------+-------------------------+----------+---------+-------+----------------------------+
> > | cinder-volume    | nimble@nimble-az1       | gb-lon-1 | enabled | up    | 2023-08-30T12:34:18.000000 |
> > | cinder-volume    | nimble@nimble-az2       | gb-lon-2 | enabled | up    | 2023-08-30T12:34:14.000000 |
> > | cinder-volume    | nimble@nimble-az3       | gb-lon-3 | enabled | up    | 2023-08-30T12:34:17.000000 |
> > | cinder-volume    | ceph-nvme@ceph-nvme-az3 | gb-lon-3 | enabled | up    | 2023-08-30T12:34:14.000000 |
> > | cinder-volume    | pure@pure-az1           | gb-lon-1 | enabled | up    | 2023-08-30T12:34:11.000000 |
> > | cinder-volume    | pure@pure-az2           | gb-lon-2 | enabled | up    | 2023-08-30T12:34:11.000000 |
> > +------------------+-------------------------+----------+---------+-------+----------------------------+
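> >
> > (For reference, a volume type is typically mapped onto that shared
> > volume_backend_name via an extra spec, roughly along these lines; the
> > type name "high-performance" here is just an example:)
> >
> >   openstack volume type create high-performance
> >   openstack volume type set --property volume_backend_name=high-performance high-performance
> >
> > (With every backend advertising the same volume_backend_name, the
> > scheduler can place a volume of that type on any of them.)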
> >
> >
> > ________________________________
> > From: Eugen Block <eblock at nde.ag>
> > Sent: 30 August 2023 11:36
> > To: openstack-discuss at lists.openstack.org <
> > openstack-discuss at lists.openstack.org>
> > Subject: Re: [kolla] [train] [cinder] Cinder issues during controller
> > replacement
> >
> > CAUTION: This email originates from outside THG
> >
> > Just to share our config, we don't use "backend_host" but only "host"
> > on all control nodes. Its value is the hostname (shortname) pointing
> > to the virtual IP which migrates in case of a failure. We only use
> > Ceph as storage backend with different volume types. The volume
> > service list looks like this:
> >
> > controller02:~ # openstack volume service list
> > +------------------+--------------------+------+---------+-------+----------------------------+
> > | Binary           | Host               | Zone | Status  | State | Updated At                 |
> > +------------------+--------------------+------+---------+-------+----------------------------+
> > | cinder-scheduler | controller         | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
> > | cinder-backup    | controller         | nova | enabled | up    | 2023-08-30T10:32:34.000000 |
> > | cinder-volume    | controller@rbd     | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
> > | cinder-volume    | controller@rbd2    | nova | enabled | up    | 2023-08-30T10:32:36.000000 |
> > | cinder-volume    | controller@ceph-ec | nova | enabled | up    | 2023-08-30T10:32:28.000000 |
> > +------------------+--------------------+------+---------+-------+----------------------------+
> >
> > We haven't seen any issue with this setup in years during failover.
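> >
> > (A minimal sketch of what that looks like in cinder.conf, assuming the
> > shared shortname is "controller" and backend sections named as in the
> > listing above; pool names are purely illustrative:)
> >
> >   [DEFAULT]
> >   host = controller
> >   enabled_backends = rbd,rbd2,ceph-ec
> >
> >   [rbd]
> >   volume_driver = cinder.volume.drivers.rbd.RBDDriver
> >   volume_backend_name = rbd
> >   rbd_pool = volumes
> >
> > (The [rbd2] and [ceph-ec] sections follow the same pattern with their own
> > pools and volume types.)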
> >
> > Quoting Gorka Eguileor <geguileo at redhat.com>:
> >
> > > On 29/08, Albert Braden wrote:
> > >> What does backend_host look like? Should it match my internal API
> > >> URL, i.e. api-int.qde4.ourdomain.com?
> > >> On Tuesday, August 29, 2023 at 10:44:48 AM EDT, Gorka Eguileor
> > >> <geguileo at redhat.com> wrote:
> > >>
> > >
> > > Hi,
> > >
> > > If I remember correctly you can set it to anything you want, but for
> > > convenience I would recommend setting it to the hostname that currently
> > > has the most volumes in your system.
> > >
> > > Let's say you have 3 hosts:
> > > qde4-ctrl1.cloud.ourdomain.com
> > > qde4-ctrl2.cloud.ourdomain.com
> > > qde4-ctrl3.cloud.ourdomain.com
> > >
> > > And you have 100 volumes on ctrl1, 20 on ctrl2, and 10 on ctrl3.
> > >
> > > Then it would be best to set it to ctrl1:
> > > [rbd-1]
> > > backend_host = qde4-ctrl1.cloud.ourdomain.com
> > >
> > > And then use the cinder-manage command to modify the other 2.
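> > >
> > > (Something along these lines, assuming the backend section is rbd-1 on
> > > all three controllers; double-check the exact --currenthost value
> > > against what "os-vol-host-attr:host" shows for the affected volumes:)
> > >
> > >   cinder-manage volume update_host \
> > >     --currenthost qde4-ctrl2.cloud.ourdomain.com@rbd-1#rbd-1 \
> > >     --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1
> > >   cinder-manage volume update_host \
> > >     --currenthost qde4-ctrl3.cloud.ourdomain.com@rbd-1#rbd-1 \
> > >     --newhost qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1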
> > >
> > > For your information, the value you see as "os-vol-host-attr:host" when
> > > viewing the detailed information of a volume is in the form of:
> > > <HostName>@<BackendName>#<PoolName>
> > >
> > > In your case:
> > > <HostName> = qde4-ctrl1.cloud.ourdomain.com
> > > <BackendName> = rbd-1
> > > <PoolName> = rbd-1
> > >
> > > In the RBD case the pool name will always be the same as the
> > > backend name.
> > >
> > > Cheers,
> > > Gorka.
> > >
> > >> On 29/08, Albert Braden wrote:
> > >> > We're replacing controllers, and it takes a few hours to build the
> > >> > new controller. We're following this procedure to remove the old
> > >> > controller:
> > >> > https://docs.openstack.org/kolla-ansible/train/user/adding-and-removing-hosts.html
> > >> >
> > >> > After that the cluster seems to run fine on 2 controllers, but
> > >> > approximately 1/3 of our volumes can't be attached to a VM. When we
> > >> > look at those volumes, we see this:
> > >> >
> > >> > | os-vol-host-attr:host          | qde4-ctrl1.cloud.ourdomain.com@rbd-1#rbd-1 |
> > >> >
> > >> > Ctrl1 is the controller that is being replaced. Is it possible to
> > >> > change the os-vol-host-attr on a volume? How can we work around
> > >> > this issue while we are replacing controllers? Do we need to
> > >> > disable the API for the duration of the replacement process, or is
> > >> > there a better way?
> > >> >
> > >>
> > >> Hi,
> > >>
> > >> Assuming you are running cinder-volume in Active-Passive mode (which I
> > >> believe was the only way back in Train), you should hardcode the host
> > >> name in the cinder.conf file to avoid losing access to your volumes
> > >> when the volume service starts on another host.
> > >>
> > >> This is done with the "backend_host" configuration option within the
> > >> specific driver section in cinder.conf.
> > >>
> > >> As for how to change the value of all the volumes to the same host
> > >> value, you can use the "cinder-manage" command:
> > >>
> > >>   cinder-manage volume update_host \
> > >>     --currenthost <current host> \
> > >>     --newhost <new host>
> > >>
> > >> Cheers,
> > >> Gorka.
> > >>
> > >>
> > >>
> >
> >
> >
> >
> > ------------------------------
> >
> > End of openstack-discuss Digest, Vol 58, Issue 107
> > **************************************************
> >
>
> ------------------------------
>
> End of openstack-discuss Digest, Vol 58, Issue 108
> **************************************************
>