Greetings Fred,

Good to hear from you! I've not heard of anyone using the Mellanox ML2 driver, but it does seem to have VNIC_BAREMETAL support. I've heard of some people using the Arista ML2 driver, but haven't gotten much feedback on it recently. Most operators I speak to have wound up using networking-generic-switch, and in some cases have even contributed back to it. It's what we use for testing, and while the documentation says it's not for production use, people do seem to use it in production and, judging by the feedback I've gotten over the last couple of years, they seem generally happy with it.

Speaking of flat network size, you may also want to explore using conductor groups to delineate pools of conductors, which could also improve your operational security if you're spanning beyond a single facility. Then again, you may already be doing that. :)

Let us know if you have any other questions we can assist with.

-Julia

On Wed, Dec 2, 2020 at 8:33 AM fsbiz@yahoo.com <fsbiz@yahoo.com> wrote:
Is anyone using virtual networks for their OpenStack Ironic installations?
Our flat network is now past 3000 nodes, and I am investigating Arista's ML2 plugin and/or Mellanox's NEO as the ML2 mechanism driver.
In addition to scaling we also have additional requirements like provisioning a bare-metal server in a conference room away from the DC for demo purposes.
My general question is whether anyone is actually using either of the above (or any other ML2 plugins) with their OpenStack Ironic installation.
Thanks, Fred.
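[Editorial note: for readers evaluating the networking-generic-switch route suggested in the reply above, a minimal configuration sketch follows. The switch name, IP, and credentials are hypothetical placeholders; device_type values depend on your switch vendor, per the networking-generic-switch documentation.]

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch, not a complete config)
[ml2]
mechanism_drivers = genericswitch

# One section per managed switch; "switch-a" is a hypothetical name.
[genericswitch:switch-a]
device_type = netmiko_cisco_ios
ip = 192.0.2.10
username = admin
password = example-password
```

Similarly, a conductor can be pinned to a group by setting, in ironic.conf, something like `[conductor] conductor_group = site-a`, and nodes can then be assigned to that group (e.g. with `openstack baremetal node set --conductor-group site-a <node>`), so only conductors in the matching group manage them. Group names here are illustrative.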