As I said, add "nova_oslomsg_heartbeat_in_pthread: False" to user_variables and re-run os-nova-install.yml
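
A minimal sketch of that change, assuming the standard OSA locations on the deploy host (adjust the paths if your layout differs):

    # /etc/openstack_deploy/user_variables.yml
    nova_oslomsg_heartbeat_in_pthread: False

    # then re-run the nova playbook
    cd /opt/openstack-ansible/playbooks
    openstack-ansible os-nova-install.yml

The playbook should re-template nova.conf with heartbeat_in_pthread disabled and restart the nova services.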

On Tue, Sep 12, 2023, 11:29 Gk Gk <ygk.kmr@gmail.com> wrote:
So, for the moment, how should I proceed as a workaround?

On Tue, Sep 12, 2023 at 2:39 PM <smooney@redhat.com> wrote:
On Tue, 2023-09-12 at 09:34 +0200, Dmitriy Rabotyagov wrote:
> Hey,
>
> We are aware of this issue happening on compute nodes when some
> instances have a lot of volumes attached. For that case we define the
> following variable:
>
> nova_compute_init_overrides:
>   Service:
>     LimitNOFILE: 4096
>
> The same thing happening on nova-conductor makes me think of this bug
> related to the heartbeat_in_pthread change [1][2].
The heartbeat-in-pthread option is only intended to be used in the API, and it actively
breaks the nova-compute agent. I suspect it's also not a good idea to have it enabled
for the conductor or scheduler, as I expect those to behave like the compute agent.
> IIRC, it was also able to leak file descriptors, which you can see as
> a "Too many open files" error.
> From what I see, the bugfixes were backported to Yoga, but you can
> still try adding "nova_oslomsg_heartbeat_in_pthread: False" to your
> user_variables and re-running the os-nova-install.yml playbook.
If OSA is enabling heartbeat_in_pthread on the conductor or anything
other than the API and metadata services, that is a bug in OSA.
>
> [1] https://bugs.launchpad.net/openstack-ansible/+bug/1961603
> [2] https://bugs.launchpad.net/oslo.messaging/+bug/1949964
>
>
> Tue, 12 Sep 2023 at 05:58, Gk Gk <ygk.kmr@gmail.com>:
> >
> > Hi All,
> >
> > We have a Yoga OSA setup. We have been observing the following errors in the nova conductor service intermittently.
> > After a restart of the service, they seem to disappear. The logs are as follows:
> >
> > ---
> > nova-conductor[3199409]:  ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 24] Too many open
> > files (retrying in 0 seconds): OSError: [Errno 24] Too many open files
> > ---
> >
> > Is this a known issue with Yoga? Please let me know.
> >
> >
> > Thanks
> > Y.G
>