[neutron][ovn] networking-ovn-metadata-agent and neutron agent liveness

Chris Apsey bitskrieg at bitskrieg.net
Wed Nov 20 04:57:54 UTC 2019


All,

Currently experimenting with networking-ovn (rdo/train packages on centos7) and I've managed to cobble together a functional deployment with two exceptions: metadata agents and agent liveness.

Re: the metadata issue, it appears that the local compute node's ovsdb-server listens on a unix socket at /var/run/openvswitch/db.sock, owned openvswitch:hugetlbfs with mode 0750.  Since networking-ovn-metadata-agent runs as the neutron user, it cannot interact with the local OVS database; it gets stuck in a restart loop, complaining about the inaccessible database socket.  If I edit the systemd unit file and let the agent run as root, it functions as expected.  That obviously isn't a real solution, but it suggests a possible packaging bug.  I'm not sure what the correct mix of permissions is, or whether the local database should also be listening on tcp:localhost:6640 and that's how the metadata agent is supposed to connect.  The docs are sparse in this area, but I would expect something like the metadata agent to 'just work' out of the box without having to change systemd unit files or mess with unix socket permissions.  Thoughts?
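For reference, here's a rough sketch of the two workarounds I've tried; the service name, config path, and option names are from memory on my rdo/train install and may differ on other packagings:

```
# Stopgap 1: run the agent as root via a systemd drop-in rather than
# editing the packaged unit file (the drop-in survives package updates).
sudo systemctl edit networking-ovn-metadata-agent
#   [Service]
#   User=root
sudo systemctl restart networking-ovn-metadata-agent

# Stopgap 2 (untested here): expose the local ovsdb on localhost TCP so
# the agent doesn't need the unix socket at all.
sudo ovs-vsctl set-manager ptcp:6640:127.0.0.1
# Then point the metadata agent at it -- on my box the config lives under
# /etc/neutron/plugins/networking-ovn/ (exact filename may vary):
#   [ovs]
#   ovsdb_connection = tcp:127.0.0.1:6640
sudo systemctl restart networking-ovn-metadata-agent
```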

Secondly, ```openstack network agent list``` shows all agents (ovn-controller) as dead, all the time.  However, if I display a single agent with ```openstack network agent show $foo```, it shows as live.  I looked around and saw some discussions about getting networking-ovn to handle this better, but as of now the agents are consistently reported as dead unless they are explicitly polled, at least on centos 7.  I haven't noticed any real impact, but my testing is small scale.
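As a stopgap, something like the loop below forces the per-agent liveness check for every agent; the column names match the OSC output on my setup, but treat this as a sketch rather than a supported workaround:

```
# Poll each agent individually, since 'show' refreshes liveness
# while 'list' apparently does not.
for id in $(openstack network agent list -f value -c ID); do
  openstack network agent show "$id" -f value -c id -c host -c alive
done
```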

Other than those two issues, networking-ovn is great, and given the discussions around possibly deprecating linuxbridge as an in-tree driver, it would make a great 'default' networking configuration option upstream, provided the docs get cleaned up.

Thanks in advance,

r

Chris Apsey
