[ironic][neutron][ops] Ironic multi-tenant networking, VMs

Jeremy Freudberg jeremyfreudberg at gmail.com
Sun May 5 20:24:14 UTC 2019


Sukhdev- yes it helps a ton. Thank you!

If anyone reading the list has a citable example of this, publicly
available on the web, feel free to chime in.

On Sat, May 4, 2019 at 3:43 PM Sukhdev Kapur <sukhdevkapur at gmail.com> wrote:
>
> Jeremy,
>
> If you want to use VxLAN networks for the baremetal hosts, you would use ML2 VLAN networks, as Julia described, between the host and the switch port. That VLAN will then terminate into a VTEP on the switch, which will carry the appropriate tags in the VxLAN overlay.
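>
> A minimal sketch of the Neutron side of that setup (the network name
> and VLAN range below are placeholders, not something from this thread):
>
>     # /etc/neutron/plugins/ml2/ml2_conf.ini
>     [ml2]
>     type_drivers = flat,vlan,vxlan
>     tenant_network_types = vlan
>     mechanism_drivers = openvswitch,genericswitch
>
>     [ml2_type_vlan]
>     # VLAN range trunked down to the baremetal switch ports
>     network_vlan_ranges = physnet1:100:200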
>
> Hope this helps
> -Sukhdev
>
>
> On Thu, May 2, 2019 at 9:28 PM Jeremy Freudberg <jeremyfreudberg at gmail.com> wrote:
>>
>> Thanks Julia; this is helpful.
>>
>> Thanks also for reading my mind a bit, as I am thinking of the VXLAN
>> case... I can't help but notice that in the Ironic CI jobs, the
>> multi-tenant networking jobs seem to use VLANs as the tenant network
>> type (instead of VXLAN). Is that just a coincidence / an artifact of
>> how the gate is set up, or is it hinting at something about how VXLAN
>> and bare metal get along?
>>
>> On Wed, May 1, 2019 at 6:38 PM Julia Kreger <juliaashleykreger at gmail.com> wrote:
>> >
>> > Greetings Jeremy,
>> >
>> > Best-practice-wise, I'm not directly aware of any. It is largely going
>> > to depend upon your Neutron ML2 drivers and network fabric.
>> >
>> > In essence, you'll need an ML2 driver which supports the vnic type of
>> > "baremetal" and which is able to orchestrate the switch port binding
>> > configuration in your network fabric. If you're using VLAN networks,
>> > you'll essentially end up with a neutron physical network which is
>> > also a trunk port to the network fabric, and the ML2 driver would then
>> > appropriately tag the port(s) for the baremetal node onto the networks
>> > required. In the CI gate, we do this in the "multitenant" jobs, where
>> > networking-generic-switch modifies the OVS port configurations
>> > directly.
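>> >
>> > As a concrete sketch of the pieces involved (the switch name, address,
>> > MACs, and port IDs below are hypothetical placeholders):
>> >
>> >     # /etc/neutron/plugins/ml2/ml2_conf_genericswitch.ini
>> >     [genericswitch:sw-rack1]
>> >     device_type = netmiko_ovs_linux
>> >     ip = 192.0.2.10
>> >     username = admin
>> >     key_file = /etc/neutron/ssh_keys/sw-rack1
>> >
>> >     # Ironic needs to know which switch/port each NIC is cabled to,
>> >     # so the ML2 driver can bind the right port:
>> >     openstack baremetal port create aa:bb:cc:dd:ee:01 \
>> >       --node $NODE_UUID \
>> >       --local-link-connection switch_id=aa:bb:cc:dd:ee:ff \
>> >       --local-link-connection port_id=Ethernet1/1 \
>> >       --local-link-connection switch_info=sw-rack1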
>> >
>> > If VXLAN specifically is what you're looking to use between VMs and
>> > baremetal nodes, I'm unsure of how you would actually configure that,
>> > but in essence the VXLANs would still need to be terminated on the
>> > switch port via the ML2 driver.
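>> >
>> > (Conceptually, and this is a hypothetical sketch rather than a tested
>> > setup, the overlay side would be configured as usual:
>> >
>> >     [ml2]
>> >     tenant_network_types = vxlan
>> >
>> >     [ml2_type_vxlan]
>> >     vni_ranges = 1000:2000
>> >
>> > and the VXLAN-to-VLAN handoff would then live in the fabric, with the
>> > switch's VTEP mapping each VNI to the local VLAN tagged on the
>> > baremetal port. That mapping is vendor-specific and sits outside
>> > Neutron's control.)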
>> >
>> > In terms of Ironic's documentation, if you haven't already seen it,
>> > you might want to check out Ironic's multi-tenancy documentation[1].
>> >
>> > -Julia
>> >
>> > [1]: https://docs.openstack.org/ironic/latest/admin/multitenancy.html
>> >
>> > On Wed, May 1, 2019 at 10:53 AM Jeremy Freudberg
>> > <jeremyfreudberg at gmail.com> wrote:
>> > >
>> > > Hi all,
>> > >
>> > > I'm wondering if anyone has any best practices for Ironic bare metal
>> > > nodes and regular VMs living on the same network. I'm sure it
>> > > involves Ironic's `neutron` multi-tenant network driver, but I'm a
>> > > bit hazy on the rest of the details (still very much in the early
>> > > stages of exploring Ironic). Surely it's possible, but I haven't
>> > > seen mention of this anywhere (except the very old spec from 2015
>> > > about introducing ML2 support into Ironic), nor is there a gate job
>> > > resembling this specific use. My rough understanding of the config
>> > > so far is below.
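>> > >
>> > > For context, here is what I believe enabling that driver looks like
>> > > (option names are from the Ironic docs; values are placeholders):
>> > >
>> > >     # /etc/ironic/ironic.conf
>> > >     [DEFAULT]
>> > >     enabled_network_interfaces = flat,neutron
>> > >     default_network_interface = neutron
>> > >
>> > >     [neutron]
>> > >     # networks Ironic attaches nodes to while provisioning/cleaning
>> > >     provisioning_network = <provision-net-uuid>
>> > >     cleaning_network = <clean-net-uuid>
>> > >
>> > >     # or set the interface per node:
>> > >     openstack baremetal node set $NODE_UUID --network-interface neutron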
>> > >
>> > > Ideas?
>> > >
>> > > Thanks,
>> > > Jeremy
>> > >
>>


