[openstack-dev] [ironic] why do we need setting network driver per node?

Sam Betts (sambetts) sambetts at cisco.com
Wed Jun 29 10:08:11 UTC 2016


My use case is supporting mixed hardware environments with hardware that has greater network capabilities alongside hardware without those capabilities, without limiting all of my equipment to the lowest common feature set. The use case I’m describing will have some nodes configured with the neutron multi-tenant driver (generic hardware, smart switch) and some nodes configured with a custom neutron multi-tenant + Cisco magic network driver (Cisco hardware, smart switch).

Perhaps it’ll be clearer why we need the driver in the patches if I give a description of the network drivers, why we’re adding them, and how I see the current patches working in an upgrade scenario:

Flat
This matches the existing networking implementation in Ironic today, but with added validation on conductor start to ensure that Ironic is configured correctly (cleaning_network_uuid set) to support this feature.
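
A minimal sketch of what that start-up validation might look like (class and option names here are illustrative, not lifted from the actual patch):

    from oslo_config import cfg

    from ironic.common import exception

    CONF = cfg.CONF


    class FlatNetwork(object):
        """Network interface matching today's flat behaviour."""

        def __init__(self):
            # Fail fast on conductor start: even flat networking
            # needs a network to move nodes onto for cleaning.
            if not CONF.neutron.cleaning_network_uuid:
                raise exception.DriverLoadError(
                    driver='flat',
                    reason='cleaning_network_uuid is not set')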

None
This is what standalone users have been faking for a long time by setting the DHCP provider to None. The reason this worked is that DHCP providers did more than just set DHCP options: they also configured the cleaning network. That job is now superseded by the network interface, so we need a true no-op driver for when configuring cleaning networks is deprecated out of the DHCP provider interface.
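
In other words, something as trivial as this (a hypothetical sketch; the method names only approximate the proposed interface):

    class NoopNetwork(object):
        """Network interface that does nothing at all, for standalone
        deployments where networking is managed outside of Ironic."""

        def add_provisioning_network(self, task):
            pass

        def remove_provisioning_network(self, task):
            pass

        def add_cleaning_network(self, task):
            pass

        def remove_cleaning_network(self, task):
            pass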

Neutron
Multi-tenant neutron network support

The problem I see is that the existing networking behaviour in Ironic is implicit and affected by several different things (DHCP provider, cleaning enabled/disabled, which driver I’ve configured on the node). I believe the upgrade process can be handled in a sane way by doing a data migration as part of the upgrade, staying aware of what configuration options a user has set and what that means their previous configuration actually was, for example (sketched in code after this list):

DHCP Provider: None -> No neutron integration -> Configure existing nodes to None to match what they’ve actually been getting because of their configuration
DHCP Provider: Neutron -> Existing neutron integration -> Configure existing nodes to Flat to match what they’ve been getting because of their configuration
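
A hypothetical sketch of that upgrade-time migration (option and field names are illustrative):

    from oslo_config import cfg

    CONF = cfg.CONF


    def migrate_network_interface(node):
        # 'none' DHCP provider meant no neutron integration at all;
        # anything else (normally 'neutron') meant today's flat
        # behaviour, so preserve exactly what each node was getting.
        if CONF.dhcp.dhcp_provider == 'none':
            node.network_interface = 'none'
        else:
            node.network_interface = 'flat'
        node.save()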

That was the easy part: making existing nodes continue to work as before. Now we have to consider what happens when we create a new node in Ironic after we’ve upgraded, and I think Ironic should behave as follows (sketched in code after this list):

DHCP Provider: None,  No network interface in request -> Still no expressed neutron integration -> Configure node to None because no neutron integration expressed
DHCP Provider: Neutron, No network interface in request -> Basic neutron integration implied by DHCP provider -> Configure node to Flat to match how Ironic works today
DHCP Provider: Neutron, network interface in request -> Basic neutron integration implied by DHCP provider, but network interface in request takes priority -> Configure node to requested network interface
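
Again as a rough sketch (hypothetical names, not the actual patch):

    from oslo_config import cfg

    CONF = cfg.CONF


    def default_network_interface(requested):
        # An explicit network interface in the create request
        # always takes priority.
        if requested is not None:
            return requested
        # Otherwise fall back to what the DHCP provider implies.
        if CONF.dhcp.dhcp_provider == 'none':
            return 'none'
        return 'flat'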

I suggest we maintain this behaviour for at least a cycle, to allow people to upgrade while keeping their current functionality, and then we can work out what we want to do with DHCP providers and the default network interface configuration.

I personally hate the current DHCP provider concept, and once we introduce the new network interfaces and deprecate cleaning from the existing DHCP providers, I believe we should reconsider the DHCP provider's usefulness. We have to maintain the current driver interface as it is today for backward compatibility with those who have out-of-tree DHCP providers. But I believe it is heavily tied both to the network interface you pick for a node and to the PXE boot interface (we do not need DHCP options set for virtual media boot, for example). I have been considering whether it should actually be configured as part of driver_info when a node is set up for PXE boot, or have its logic merged into the network interfaces themselves.
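
As a very rough sketch of that second option (everything here is made up for illustration, not a concrete proposal), the network interface could skip DHCP entirely for boot interfaces that don't need it:

    # Hypothetical: DHCP handling folded into the network interface.
    def configure_provisioning(self, task):
        self.add_provisioning_network(task)
        # Only PXE-style boot interfaces need DHCP options;
        # virtual media boot, for example, does not.
        if getattr(task.driver.boot, 'requires_dhcp', False):
            self.update_dhcp_options(task, self._pxe_dhcp_options(task))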

Sam

On 28/06/2016 18:28, "Vladyslav Drok" <vdrok at mirantis.com> wrote:

Thanks for bringing this up Dmitry, here are my thoughts on this.

Another case is an out-of-tree network driver, which can basically do whatever an operator needs and may have some parameters defined in driver_info, as is the case for most of the other interfaces ironic drivers have.

I think coexistence of the neutron and flat drivers in the same deployment is unlikely, but neutron and none, or flat and none, seems to be a valid case. As for nova, this might be as easy as adding an availability zone with the nodes that have network isolation enabled.
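
For instance (illustrative only, with made-up host names and credentials), the compute host(s) managing the isolation-capable nodes could be grouped into their own zone with novaclient:

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')
    # Aggregate with an availability zone; boot requests that target
    # 'isolated-az' land only on the isolation-capable hardware.
    agg = nova.aggregates.create('isolated-baremetal', 'isolated-az')
    nova.aggregates.add_host(agg, 'ironic-compute-1')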

Also, with all the driver composition work, I don't think we want weird things like DHCP providers anymore; we should go further with interfaces. And if it is an interface, it should be treated as such (the basic spec containing most of the requirements is merged, and we can use it to make the network interface as close to the spec as possible, while not going too far, as multitenancy slipping another release would be very bad). There might be some caveats with backwards compatibility in this particular case, but they're all solvable.

Thanks,
Vlad

On Tue, Jun 28, 2016 at 7:08 PM, Mathieu Mitchell <mmitchell at internap.com> wrote:
Following discussion on IRC, here are my thoughts:

The proposed network_interface allows choosing a "driver" for the network part of a node. The values could be something like "nobody", "neutron-flat networking" and "neutron-tenant separation".

I think this choice should be left per-node. My reasoning is that you could have a bunch of nodes for which you have complete Neutron support, through, for example, an ML2 plugin. These nodes would be configured using one of the "neutron-*" modes.

On the other hand, that same Ironic installation could also manage nodes for which the switches are unmanaged, or manually configured. In such case, you would probably want to use the "nobody" mode.

An important point is to expose this "capability" to Nova, as you might want to offer nodes with neutron integration differently from "any node". I am unsure whether the capability should be the value of network_interface or a boolean "neutron integration" flag. Thoughts?
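
One way this could look with the existing capabilities mechanism (node UUID, credentials and the capability name are all made up for illustration):

    from ironicclient import client

    ironic = client.get_client(1,
                               os_auth_url='http://controller:5000/v2.0',
                               os_username='admin',
                               os_password='secret',
                               os_tenant_name='admin')
    # Advertise neutron integration as a node capability so that a
    # flavor can require it via the ComputeCapabilitiesFilter.
    ironic.node.update('<node-uuid>', [
        {'op': 'add',
         'path': '/properties/capabilities',
         'value': 'network_isolation:true'}])

A matching flavor would then set the extra spec capabilities:network_isolation="true".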

Mathieu


On 2016-06-28 11:32 AM, Dmitry Tantsur wrote:
Hi folks!

I was reviewing https://review.openstack.org/317391 and realized I don't
quite understand why we want to have node.network_interface. What's the
real-life use case for it?

Do we expect some nodes to use Neutron and some not?

Do we expect some nodes to benefit from network separation and some not?
There may be a use case here, but then we have to expose this field to
Nova for scheduling, so that users can request a "secure" node or a
"less secure" one. If we don't do that, Nova will pick at random, which
makes the use case unclear again.
If we do that, the whole work goes substantially beyond what we were
trying to do initially: isolating tenants from the provisioning network
and from each other.

Flexibility is good, but the patches raise upgrade concerns, because
it's unclear how to provide a good default for the new field. And anyway,
it makes the whole thing much more complex than it could be.

Any hints are welcome.




