[ironic][neutron][ops] Ironic multi-tenant networking, VMs

Julia Kreger juliaashleykreger at gmail.com
Wed May 1 22:38:37 UTC 2019


Greetings Jeremy,

Best-practice wise, I'm not aware of any specific guidance. It is
largely going to depend on your Neutron ML2 drivers and your network fabric.

In essence, you'll need an ML2 driver which supports the vnic type of
"baremetal" and which is able to orchestrate the switch port binding
configuration in your network fabric. If you're using VLAN networks,
you'll end up with a Neutron physical network that maps to trunk ports
on the network fabric, and the ML2 driver will then tag the switch
port(s) for the baremetal node onto the required networks. In the CI
gate, we do this in the "multitenant" jobs, where
networking-generic-switch modifies the OVS port configurations
directly.
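
To make that a bit more concrete, here is a rough sketch of what the
Neutron side can look like with networking-generic-switch and VLAN
tenant networks. The switch name, address, credentials and VLAN range
below are placeholders, and the device_type depends on your switch
vendor:

    # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
    [ml2]
    tenant_network_types = vlan
    mechanism_drivers = openvswitch,genericswitch

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:200

    # one section per switch managed by networking-generic-switch
    [genericswitch:sw-example-1]
    device_type = netmiko_ovs_linux
    ip = 192.0.2.10
    username = admin
    password = secret
    ngs_mac_address = aa:bb:cc:dd:ee:ff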

If VXLAN specifically is what you're looking to use between VMs and
baremetal nodes, I'm unsure how you would actually configure that,
but in essence the VXLANs would still need to be terminated on the
switch port by the ML2 driver.
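
In other words, the VMs could keep attaching to an ordinary VXLAN
tenant network, e.g. (names are just placeholders):

    openstack network create tenant-net
    openstack subnet create --network tenant-net \
        --subnet-range 192.0.2.0/24 tenant-subnet

but binding a baremetal port into that network would only work if the
ML2 driver in use can set up the VXLAN termination on the physical
switch port; otherwise the port binding for the baremetal node will
fail.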

In terms of documentation, if you haven't already seen it, you
might want to check out ironic's multi-tenancy documentation[1].
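
For what it's worth, the node/port side from that doc boils down to
something like the following (the UUID, MACs and switch details are
just placeholders):

    # use the "neutron" network interface on the node
    openstack baremetal node set --network-interface neutron <node-uuid>

    # tell ironic/neutron where the node's NIC is physically plugged in
    openstack baremetal port create <nic-mac-address> \
        --node <node-uuid> \
        --local-link-connection switch_id=aa:bb:cc:dd:ee:ff \
        --local-link-connection switch_info=sw-example-1 \
        --local-link-connection port_id=Gig0/1

The local_link_connection values are what the ML2 driver uses to find
and reconfigure the correct switch port.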

-Julia

[1]: https://docs.openstack.org/ironic/latest/admin/multitenancy.html

On Wed, May 1, 2019 at 10:53 AM Jeremy Freudberg
<jeremyfreudberg at gmail.com> wrote:
>
> Hi all,
>
> I'm wondering if anyone has any best practices for Ironic bare metal
> nodes and regular VMs living on the same network. I'm sure it involves
> Ironic's `neutron` multi-tenant network driver, but I'm a bit hazy on
> the rest of the details (still very much in the early stages of
> exploring Ironic). Surely it's possible, but I haven't seen mention of
> this anywhere (except the very old spec from 2015 about introducing
> ML2 support into Ironic) nor is there a gate job resembling this
> specific use.
>
> Ideas?
>
> Thanks,
> Jeremy
>


