OpenStack hypervisor list is empty
Karera Tony
tonykarera at gmail.com
Thu Sep 23 15:24:24 UTC 2021
Hello Sean,
Below is the output on the compute node and the deployment host:
root at compute1:/etc/kolla/nova-compute# ls
ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf
config.json nova.conf
(kolla-openstack) stack at deployment:~$ ls /etc/kolla/config/nova/
ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf
And I can confirm that the content is the same.
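
For what it's worth, a quick way to double-check that is to compare checksums
of the keyrings on both hosts, e.g. (paths as in the listings above):

md5sum /etc/kolla/config/nova/ceph.client.cinder.keyring        # deployment host
sudo md5sum /etc/kolla/nova-compute/ceph.client.cinder.keyring  # compute1
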
Regards
Tony Karera
On Thu, Sep 23, 2021 at 3:20 PM Sean Mooney <smooney at redhat.com> wrote:
> On Thu, 2021-09-23 at 08:59 -0400, Laurent Dumont wrote:
> > I would investigate that compute error first. Creating volumes means the
> > controllers are doing the action. Starting a VM on a compute node means you
> > also need Ceph to work on the compute node to mount the RBD target.
>
> nova, as part of its startup process when initialising the resource tracker,
> will try to connect to ceph if you are using the rbd image backend, in order
> to report how much storage is available. if the keyring does not work on the
> vms pool as the user nova is connecting as, then that will block the agent
> from starting up fully and will cause it to be missing from the hypervisor
> list.
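>
> for context, with the rbd image backend the relevant nova.conf section on the
> compute node looks roughly like this (a sketch only; the pool/user names and
> the secret uuid are placeholders for whatever your deployment uses):
>
> [libvirt]
> images_type = rbd
> images_rbd_pool = vms
> images_rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_user = nova
> rbd_secret_uuid = <your libvirt secret uuid>
>
> if the user named there cannot read the vms pool, the resource tracker fails
> exactly as described above.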
>
> the error seems to indicate that the cinder keyring is not in the nova
> container, which likely means you have not put it in /etc/kolla/config/nova.
> i would check /etc/kolla/config/nova on the deployment host and run sudo ls
> /etc/kolla/nova-compute/ on the compute node to ensure the cinder keyring is
> actually copied and has the correct content.
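>
> a further sanity check (a sketch, assuming the ceph cli is present in the
> nova_compute image and the default /etc/ceph paths are used inside the
> container) is to try the key directly on the compute node:
>
> sudo docker exec nova_compute ceph -n client.cinder \
>     --keyring /etc/ceph/ceph.client.cinder.keyring \
>     -c /etc/ceph/ceph.conf -s
>
> if that cannot reach the cluster, the resource tracker will not be able to
> either.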
>
> i have
> stack at cloud:/opt/repos/devstack$ sudo ls /etc/kolla/nova-compute/
> ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf
> config.json nova.conf
>
>
> [client.cinder]
> key = *********************************
> caps mgr = "profile rbd pool=volumes, profile rbd pool=vms"
> caps mon = "profile rbd"
> caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images"
> stack at cloud:/opt/repos/devstack$ sudo cat /etc/kolla/nova-compute/ceph.client.nova.keyring
> [client.nova]
> key = *********************************
> caps mgr = "profile rbd pool=volumes, profile rbd pool=vms"
> caps mon = "profile rbd"
> caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images"
>
> i blanked out the key with *************** after the fact, but you should
> have something similar.
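>
> for reference, keys with those caps can be created or inspected on the ceph
> side with something like the following (pool names as in my setup, adjust to
> yours):
>
> ceph auth get client.cinder
> ceph auth get-or-create client.nova \
>     mon 'profile rbd' \
>     mgr 'profile rbd pool=volumes, profile rbd pool=vms' \
>     osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'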
>
>
> in my case i decided to use a separate key for the nova rbd backend because i
> was also using EC pools with separate data and metadata pools, so i needed to
> modify my ceph.conf to make that work with kolla.
>
> stack at cloud:/opt/repos/devstack$ sudo cat /etc/kolla/nova-compute/ceph.conf
> # minimal ceph.conf for 15b00858-ba8c-11eb-811f-f9257f38002f
> [global]
> fsid = *********************
> mon_host = [*********************]
>
> [client.glance]
> rbd default data pool = images-data
>
> [client.cinder]
> rbd default data pool = volumes-data
>
> [client.nova]
> rbd default data pool = vms-data
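>
> for completeness, EC-backed rbd pools like that are created on the ceph side
> roughly as follows (pool names match the above, the EC profile is whatever you
> use):
>
> ceph osd pool create vms-data erasure
> ceph osd pool set vms-data allow_ec_overwrites true
> ceph osd pool create vms replicated
> rbd pool init vms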
>
> using 2 keyrings/users allows me to set different default data pools for
> cinder and nova.
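>
> on the kolla side, which user/keyring nova and cinder pick up is driven by the
> external ceph variables in globals.yml, roughly like this (variable names as
> in the kolla-ansible external ceph guide; the defaults vary between releases):
>
> ceph_nova_user: "nova"
> ceph_nova_keyring: "ceph.client.nova.keyring"
> ceph_cinder_user: "cinder"
> ceph_cinder_keyring: "ceph.client.cinder.keyring"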
>
> >
> > The same setup working in Wallaby doesn't mean it would 100% work in
> > Victoria.
> >
> > > On Thu, Sep 23, 2021 at 5:02 AM Karera Tony <tonykarera at gmail.com> wrote:
> >
> > > Hey Guys, any other ideas?
> > >
> > > Regards
> > >
> > > Tony Karera
> > >
> > >
> > >
> > >
> > > On Wed, Sep 22, 2021 at 5:20 PM Karera Tony <tonykarera at gmail.com> wrote:
> > >
> > > > Just to add on to that:
> > > >
> > > > The compute service is listed and I can create volumes. I have the same
> > > > cinder keyring in the /etc/kolla/config/nova directory as I have in the
> > > > /etc/kolla/config/cinder/cinder-volume directory, along with the nova
> > > > keyring.
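> > > >
> > > > For reference, the kolla external ceph docs expect a layout along these
> > > > lines on the deployment host (exact file names depend on the keyring
> > > > users configured):
> > > >
> > > > /etc/kolla/config/nova/ceph.conf
> > > > /etc/kolla/config/nova/ceph.client.nova.keyring
> > > > /etc/kolla/config/nova/ceph.client.cinder.keyring
> > > > /etc/kolla/config/cinder/ceph.conf
> > > > /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
> > > >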
> > > > Regards
> > > >
> > > > Tony Karera
> > > >
> > > >
> > > >
> > > >
> > > > On Wed, Sep 22, 2021 at 5:08 PM Karera Tony <tonykarera at gmail.com> wrote:
> > > >
> > > > > Hello Guys,
> > > > >
> > > > > Thanks a lot.
> > > > >
> > > > > I had actually checked the nova-compute.log on the compute node and it
> > > > > was showing the error (posted at the end) about the cinder keyring, but
> > > > > I know the keyring is correct because it is the same one I was using on
> > > > > Wallaby. I even tried another ceph cluster with, of course, different
> > > > > keyrings, but it is the same issue.
> > > > >
> > > > > Below is the error
> > > > >
> > > > > Stderr: '2021-09-22T15:04:31.574+0000 7fbce2f4f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.cinder.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
> > > > > 2021-09-22T15:04:31.574+0000 7fbce2f4f700 -1 AuthRegistry(0x7fbcdc05a8b8) no keyring found at /etc/ceph/ceph.client.cinder.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
> > > > > 2021-09-22T15:04:31.582+0000 7fbce2f4f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.cinder.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
> > > > > 2021-09-22T15:04:31.582+0000 7fbce2f4f700 -1 AuthRegistry(0x7fbcdc060698) no keyring found at /etc/ceph/ceph.client.cinder.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
> > > > > 2021-09-22T15:04:31.582+0000 7fbce2f4f700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.cinder.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
> > > > > 2021-09-22T15:04:31.582+0000 7fbce2f4f700 -1 AuthRegistry(0x7fbce2f4e020) no keyring found at /etc/ceph/ceph.client.cinder.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
> > > > > [errno 2] RADOS object not found (error connecting to the cluster)'
> > > > > 2021-09-22 15:04:31.592 8 ERROR nova.compute.manager
> > > > > 2021-09-22 15:04:31.592 8 ERROR nova.compute.manager During handling of the above exception, another exception occurred:
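> > > > >
> > > > > For what it's worth, I understand the same failure should be
> > > > > reproducible outside of nova, e.g. from inside the nova_compute
> > > > > container on the compute node (assuming the rbd cli is in the image;
> > > > > the pool and user names here just follow this thread):
> > > > >
> > > > > sudo docker exec nova_compute rbd --id cinder -c /etc/ceph/ceph.conf ls vms
> > > > >
> > > > > If that prints the same "unable to find a keyring" error, the problem
> > > > > is the keyring file itself rather than nova.
> > > > >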
> > > > > Regards
> > > > >
> > > > > Tony Karera
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Sep 22, 2021 at 4:50 PM Sean Mooney <smooney at redhat.com> wrote:
> > > > >
> > > > > > On Wed, 2021-09-22 at 10:46 -0400, Laurent Dumont wrote:
> > > > > > > It could also be a compute cell discovery issue maybe?
> > > > > > no, they should still show up in the hypervisor list api
> > > > > > >
> > > > > > > Do you see anything under "openstack compute service list"?
> > > > > > if they show up in the service list but not in the hypervisors api,
> > > > > > it means that the compute service started and registered its service
> > > > > > entry, but something broke it before it could create a compute node
> > > > > > record in the db.
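> > > > > >
> > > > > > you can tell which case you are in with the standard commands, e.g.:
> > > > > >
> > > > > > openstack compute service list --service nova-compute
> > > > > > openstack hypervisor list
> > > > > >
> > > > > > if the service is listed but the hypervisor is not, you are in this
> > > > > > case.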
> > > > > >
> > > > > > with ceph, the case where i have hit this most often is when the
> > > > > > keyring used by nova to get the available capacity of the ceph
> > > > > > cluster is wrong, which prevents the resource tracker and compute
> > > > > > manager from actually creating the compute node record.
> > > > > >
> > > > > >
> > > > > > it can happen for other reasons too, but the best place to start is
> > > > > > to check if there is an error in the nova compute agent log and go
> > > > > > from there.
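> > > > > >
> > > > > > with kolla that log ends up on the compute node itself, by default
> > > > > > under /var/log/kolla/nova/, so something like:
> > > > > >
> > > > > > sudo grep -i error /var/log/kolla/nova/nova-compute.log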
> > > > > > >
> > > > > > > On Wed, Sep 22, 2021 at 10:33 AM Sean Mooney <smooney at redhat.com> wrote:
> > > > > > >
> > > > > > > > On Wed, 2021-09-22 at 15:39 +0200, Karera Tony wrote:
> > > > > > > > > Hello Team,
> > > > > > > > >
> > > > > > > > > I have deployed OpenStack Victoria using kolla-ansible on
> > > > > > > > > Ubuntu 20.04, with ceph as the backend storage for Nova, Cinder
> > > > > > > > > and Glance.
> > > > > > > > >
> > > > > > > > > It finished with no errors, but it has failed to register any
> > > > > > > > > of the Compute Nodes under Hypervisors.
> > > > > > > > >
> > > > > > > > > (kolla-openstack) stack at deployment:~$ openstack hypervisor list
> > > > > > > > >
> > > > > > > > > (kolla-openstack) stack at deployment:~$
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Any idea on how to resolve this?
> > > > > > > > that usually means that something prevented the compute agent
> > > > > > > > from starting properly,
> > > > > > > >
> > > > > > > > for example incorrect ceph keyrings. there are several other
> > > > > > > > cases, but you mentioned you are using ceph.
> > > > > > > >
> > > > > > > > if this is the case you should see an error in the compute agent
> > > > > > > > log.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Regards
> > > > > > > > >
> > > > > > > > > Tony Karera
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > >
> > > > > >
> > > > > >
>
>
>