[IRONIC] - Various questions around network features.

Gaël THEROND gael.therond at bitswalk.com
Mon Jul 25 13:14:13 UTC 2022


Sorry for the late answer, I was out on summer vacation :-)

Thanks a lot for that complementary information; I'll definitely submit a
few documentation fixes.

On Thu, Jul 14, 2022 at 6:45 PM, Julia Kreger <juliaashleykreger at gmail.com>
wrote:

> On Wed, Jul 13, 2022 at 1:07 PM Gaël THEROND <gael.therond at bitswalk.com>
> wrote:
> >
> > Hi Julia!
> >
> > Thanks a lot for those explanations :-) Most of it confirms my
> > understanding; I now have a clearer picture that will let me select
> > our test users for the service.
> >
> > Regarding Aruba switches, those are pretty cool, even if, as you pointed
> > out, this feature can actually lead you to some weird if not dangerous
> > situations x)
> >
> > OK, noted about the Horizon issue. It can be a little bit tricky for
> > our end users to understand, tbh, as they will for sure expect the IP
> > selected by Neutron and displayed on the dashboard to be the one used
> > by the node, even on a fully flat network such as the provisioning
> > network; for now we will deal with it by explaining it to them.
>
> A challenging point here is that there is no true way to hint that this
> is the case up front. Nova acts as an abstraction layer in between, and
> it really needs that networking information piece of the puzzle to
> generate metadata for an instance.
>
> I think embracing it, and also supporting an ML2-integrated
> configuration where individual switch ports are changed, is ultimately
> the most powerful configuration, but the challenge we hear from
> operators upstream is that network operations groups generally don't
> want software toggling switchport VLAN assignments. I get why, as I've
> worked in NetOps in the past; it is largely a trust issue, and I've just
> not figured out concrete ways to build the trust needed there. :(
>
> >
> > Regarding my point 2, yeah, I knew the purpose of direct deploy, I
> > just spelled it out, I don't know why; my point was rather:
> >
> > At first, when I configured our Ironic deployment, I had a weird issue:
> > if I don't set the pxe_filter option to noop but to dnsmasq, deploying
> > anything fails, as the conductor doesn't correctly erase the "ignore"
> > part of the entry in the dnsmasq DHCP hosts file. If I set this filter
> > to noop, then obviously I don't need Neutron to provide the
> > ironic-provision-network anymore, as anyone plugged into ports with my
> > VLAN 101 set as the native VLAN will be able to get an IP from the PXE
> > dnsmasq.
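> >
> > For reference, the relevant pieces of that setup look roughly like
> > this; paths and values are illustrative, so double-check them against
> > the ironic-inspector documentation:
> >
> >     # ironic-inspector.conf (paths illustrative)
> >     [pxe_filter]
> >     driver = dnsmasq        # the option I had to switch to "noop"
> >
> >     [dnsmasq_pxe_filter]
> >     dhcp_hostsdir = /var/lib/ironic-inspector/dhcp-hostsdir
> >
> >     # dnsmasq.conf of the introspection dnsmasq
> >     dhcp-hostsdir=/var/lib/ironic-inspector/dhcp-hostsdir
> >
> > With the dnsmasq filter enabled, a per-MAC file is written into that
> > directory containing either "<mac>" to allow booting or "<mac>,ignore"
> > to block it; that "ignore" marker is the string I saw getting stuck.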
>
> I was wondering how you were making it work!
>
> This explains a lot, and is really not the intended pattern of use.
> But it is a pattern upstream generally sees in more "standalone" cases,
> or cases of direct interaction with Ironic's API.
>
>
> >
> > I'm still having a hard time understanding how Ironic needs a
> > dedicated PXE dnsmasq for introspection but can then use Neutron's
> > dnsmasq DHCP once you want to provision a host. Is that because Neutron
> > (kinda) lacks DHCP options support on its managed subnets?
> >
>
> At this point, dnsmasq for introspection is *largely* for the purposes
> of discovering hardware you don't know about and supporting the oldest
> introspection workflow, where inspection is directly triggered with the
> introspection service. Depending on the version of Ironic, and if you
> have a MAC address already known to Ironic, you can trigger the
> inspection workflow directly with Ironic via the state machine, and it
> will populate network configuration in Neutron to perform introspection
> on the node.
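>
> For example, once the MAC is enrolled, kicking inspection through the
> state machine is just:
>
>     $ openstack baremetal node inspect node-01   # node name illustrative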
>
> Neutron doesn't really lack DHCP options support on its subnets,
> although it is very dnsmasq focused. The challenge we tend to see here
> is that getting things properly aligned, host-configuration- and
> networking-wise, for PXE boot operations doesn't always work out
> perfectly, so it just becomes easier to get things initially working
> the way you did.
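>
> If you do need per-port options, Neutron exposes them as
> extra_dhcp_opts; a hedged example, with an illustrative option name and
> port:
>
>     $ openstack port set \
>           --extra-dhcp-option name=bootfile-name,value=pxelinux.0 \
>           <port-uuid>   # option/value illustrative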
>
> > All in all, the multi-tenancy networking requirements are much clearer
> > to me now, thanks to you!
>
> Excellent to hear!
>
> If you feel like anything is missing in our documentation, we do
> welcome patches! I do suspect the whole bit about introspection
> dnsmasq might need to be further highlighted or delineated in the
> documentation.
>
> -Julia
>
> >
> > On Tue, Jul 12, 2022 at 12:13 AM, Julia Kreger <juliaashleykreger at gmail.com>
> > wrote:
> >>
> >> Greetings! Hopefully these answers help!
> >>
> >> On Sun, Jul 10, 2022 at 4:35 PM Gaël THEROND <gael.therond at bitswalk.com>
> >> wrote:
> >> >
> >> > Hi everyone, I'm currently working with Ironic again and it's
> >> > amazing!
> >> >
> >> > However, during our demo session with our users, a few questions arose.
> >> >
> >> > We're currently deploying nodes using a private VLAN that can't be
> >> > reached from outside of the OpenStack network fabric (VLAN 101 -
> >> > 192.168.101.0/24), and everything is fine with this provisioning
> >> > network, as our ToR switches all know about it and about the other
> >> > control plane VLANs, such as the internal APIs VLAN, which allows the
> >> > IPA ramdisk to correctly and seamlessly contact the internal Ironic
> >> > APIs.
> >>
> >> Nice, I've had my lab configured like this in the past.
> >>
> >> >
> >> > (When you declare a port as a trunk allowing all VLANs on an Aruba
> >> > switch, it seems it automatically analyses the CIDR your host tries
> >> > to reach from your VLAN and routes everything to the corresponding
> >> > VLAN that matches the destination IP.)
> >> >
> >>
> >> Ugh, that... could be fun :\
> >>
> >> > So now, I still have a few tiny issues:
> >> >
> >> > 1°/- When I spawn a Nova instance on an Ironic host that is set to
> >> > use flat networking (from Horizon, as a user), why does the Nova
> >> > wizard still ask for a Neutron network if it's not set on the
> >> > provisioned host by the IPA ramdisk right after the whole-disk image
> >> > copy? Is that some missing development in Horizon, or did I miss
> >> > something?
> >>
> >> Horizon just is not aware... and you can actually have entirely
> >> different DHCP pools on the same flat network, so that Neutron network
> >> is intended to be used for the instance's addressing.
> >>
> >> Ironic does just ask for an allocation from a provisioning network,
> >> which can and *should* be a different network than the tenant network.
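> >>
> >> A minimal sketch of that separate provisioning network (names and the
> >> range here are illustrative, match them to your VLAN 101 setup):
> >>
> >>     $ openstack network create --provider-network-type flat \
> >>           --provider-physical-network physnet1 provisioning
> >>     $ openstack subnet create --network provisioning \
> >>           --subnet-range 192.168.101.0/24 provisioning-subnet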
> >>
> >> >
> >> > 2°/- In a flat network layout deployment using the direct deploy
> >> > scenario for images, am I still supposed to create an Ironic
> >> > provisioning network in Neutron?
> >> >
> >> > From my understanding (and actually my tests) we don't, as any host
> >> > booting on the provisioning VLAN will pick up an IP and initiate the
> >> > BOOTP sequence, since the dnsmasq is set up to do just that and
> >> > provide the IPA ramdisk, but it's a bit confusing as much of the
> >> > documentation explicitly requires this network to exist in Neutron.
> >>
> >> Yes. Direct is shorthand for "copy it over the network and write it
> >> directly to disk". It still needs an IP address on the provisioning
> >> network (think subnet rather than distinct L2 broadcast domain).
> >>
> >> When you ask Nova for an instance, it sends over what the machine
> >> should use as a "VIF" (Neutron port); however, that is never actually
> >> bound, configuration-wise, into Neutron until after the deployment
> >> completes.
> >>
> >> It *could* be that your Neutron config is such that it just works
> >> anyway, but I suspect upstream contributors would be a bit confused if
> >> you reported an issue and had no provisioning network defined.
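> >>
> >> Ironic finds that network through its own configuration, roughly (the
> >> network names are illustrative):
> >>
> >>     # ironic.conf
> >>     [neutron]
> >>     provisioning_network = provisioning   # names illustrative
> >>     cleaning_network = provisioning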
> >>
> >> >
> >> > 3°/- My whole OpenStack network setup uses Open vSwitch and VXLAN
> >> > tunnels on top of a spine/leaf architecture using Aruba CX 8360
> >> > switches (for both spines and leaves); am I required to use either
> >> > the networking-generic-switch driver or a vendor Neutron driver? If
> >> > so, how will this driver be able to instruct the switch to assign the
> >> > host port the correct Open vSwitch VLAN ID and register the correct
> >> > VXLAN with Open vSwitch for this port? I mean, OK, Neutron knows the
> >> > VXLAN and Open vSwitch the tunnel VLAN ID/interface, but what is the
> >> > glue for all that?
> >>
> >> If you're happy with flat networks, no.
> >>
> >> If you want tenant isolation, networking-wise, yes.
> >>
> >> NGS and baremetal-port-aware/enabled Neutron ML2 drivers take the
> >> port-level local link configuration; that is, Ironic includes the port
> >> information (local link connection, physical network, and some other
> >> details) with the port binding request to Neutron.
> >>
> >> Those ML2 drivers then either request that the switch configuration be
> >> updated, or take locally configured credentials, log into the switch,
> >> and toggle the configuration of the access port which the baremetal
> >> node is attached to.
> >>
> >> Generally, they are not VXLAN-network-aware, and at least with
> >> networking-generic-switch, VLAN ID numbers are expected and allocated
> >> via Neutron.
> >>
> >> Sort of like the software is logging into the switch and running
> >> something along the lines of "conf t; int gi0/21; switchport mode
> >> access; switchport access vlan 391; wri mem".
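> >>
> >> To make the glue concrete, an NGS deployment is wired up roughly like
> >> this; a hedged sketch where the device_type, credentials, and port
> >> details are all illustrative (check the networking-generic-switch docs
> >> for the type matching your switch model):
> >>
> >>     # ml2_conf.ini on the Neutron server
> >>     [ml2]
> >>     mechanism_drivers = openvswitch,genericswitch
> >>
> >>     [genericswitch:leaf-01]
> >>     device_type = netmiko_cisco_ios   # pick your switch's type
> >>     ip = 203.0.113.10                 # credentials illustrative
> >>     username = neutron
> >>     password = secret
> >>
> >> and the Ironic port carries the local link information used to find
> >> the right switchport:
> >>
> >>     $ openstack baremetal port create --node <node-uuid> \
> >>           --address 52:54:00:aa:bb:cc \
> >>           --local-link-connection switch_info=leaf-01 \
> >>           --local-link-connection port_id=gi0/21 \
> >>           --local-link-connection switch_id=00:11:22:33:44:55 \
> >>           --physical-network physnet1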
> >>
> >> >
> >> > 4°/- I've successfully used cloud-oriented OpenStack CentOS and
> >> > Debian images, or snapshots of VMs, to provision my hosts; this is an
> >> > awesome feature, but I'm wondering if there is a way to let cloud-init
> >> > on those hosts request the Neutron metadata endpoint?
> >> >
> >>
> >> Generally yes, you *can* use network-attached metadata with Neutron,
> >> *as long as* your switches know to direct the traffic for the metadata
> >> IP to the Neutron metadata service(s).
> >>
> >> We know of operators who have done it without issues, but that
> >> additional switch-configured route is not always the best thing.
> >> Generally we recommend enabling and using configuration drives, so the
> >> metadata is able to be picked up by cloud-init.
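> >>
> >> Either of these gets you a config drive (a sketch; adjust to your
> >> release):
> >>
> >>     $ openstack server create --config-drive True ... my-node
> >>
> >> or, forcing it for everything on the compute side:
> >>
> >>     # nova.conf
> >>     [DEFAULT]
> >>     force_config_drive = True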
> >>
> >>
> >> > I was a bit surprised by the Ironic networking part, as I was
> >> > expecting the IPA ramdisk to at least be able to set up the host OS
> >> > with the appropriate network configuration file, for whole-disk
> >> > images that do not use encryption, by injecting that information from
> >> > the Neutron API into the host disk while mounted (right after the
> >> > image dd).
> >> >
> >>
> >> IPA has no knowledge of how to modify the host OS in this regard.
> >> Modifying the host OS has generally been something the Ironic
> >> community has avoided, since it is not exactly cloudy to have to do
> >> so. Most clouds are running with DHCP, so as long as that is enabled
> >> and configured, things should generally "just work".
> >>
> >> Hopefully that provides a little more context. Nothing prevents you
> >> from writing your own hardware manager that does exactly this, for
> >> what it is worth.
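> >>
> >> A rough, untested sketch of such a hardware manager (the class, file,
> >> and step names are made up for illustration; see the
> >> ironic-python-agent docs for the real contract):
> >>
> >>     # example_netconfig.py, shipped inside a custom IPA ramdisk
> >>     from ironic_python_agent import hardware
> >>
> >>     class NetConfigHardwareManager(hardware.HardwareManager):
> >>         HARDWARE_MANAGER_NAME = 'NetConfigHardwareManager'
> >>         HARDWARE_MANAGER_VERSION = '1.0'
> >>
> >>         def evaluate_hardware_support(self):
> >>             # Run alongside the generic manager.
> >>             return hardware.HardwareSupport.SERVICE_PROVIDER
> >>
> >>         def get_deploy_steps(self, node, ports):
> >>             # Advertise a deploy step the conductor can run after
> >>             # the image has been written to disk.
> >>             return [{'step': 'write_net_config',
> >>                      'priority': 0,
> >>                      'interface': 'deploy',
> >>                      'reboot_requested': False,
> >>                      'argsinfo': None}]
> >>
> >>         def write_net_config(self, node, ports):
> >>             # This is where you would mount the freshly written
> >>             # root filesystem and render a network config file from
> >>             # the port data Ironic hands over. Illustrative only.
> >>             pass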
> >>
> >> > All in all, I really like the Ironic approach to the baremetal
> >> > provisioning process, and I'm pretty sure that I'm just missing a bit
> >> > of understanding of the networking part, but it's really the most
> >> > confusing part of it to me, as I feel like there is a missing link
> >> > between Neutron and the host HW or the switches.
> >> >
> >>
> >> Thanks! It is definitely one of the more complex parts given there are
> >> many moving parts, and everyone wants (or needs) to have their
> >> networking configured just a little differently.
> >>
> >> Hopefully I've kind of put some of the details out there; if you need
> >> more information, please feel free to reach out, and also please feel
> >> free to ask questions in #openstack-ironic on irc.oftc.net.
> >>
> >> > Thanks a lot to anyone who takes the time to explain this to me :-)
> >>
> >> :)
>