[openstack-dev] [ironic] why do we need setting network driver per node?

Vladyslav Drok vdrok at mirantis.com
Tue Jun 28 17:28:25 UTC 2016


Thanks for bringing this up, Dmitry; here are my thoughts on it.

Another case is an out-of-tree network driver, which can basically do
whatever an operator needs and may take some parameters from driver_info,
as most other interfaces that ironic drivers have already do.
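
For illustration, a hypothetical sketch of such an interface might look
like this (the class and method names, and the driver_info keys, are made
up for the example, not ironic's actual base class):

# Hypothetical sketch only: an out-of-tree network interface that reads
# its settings from the node's driver_info. Names are illustrative and
# not taken from ironic's in-tree code.


class ExampleSwitchNetwork(object):
    """Configures switch ports directly, outside of Neutron."""

    REQUIRED = ('switch_address', 'switch_username')

    def validate(self, task):
        # Fail early if the operator forgot required driver_info keys.
        missing = [key for key in self.REQUIRED
                   if key not in task.node.driver_info]
        if missing:
            raise ValueError('Missing driver_info keys: %s' % missing)

    def add_provisioning_network(self, task):
        # Put the node's ports on the provisioning VLAN; the VLAN id is
        # another per-node driver_info parameter with a default.
        info = task.node.driver_info
        self._set_vlan(info['switch_address'],
                       info.get('provisioning_vlan', 100))

    def remove_provisioning_network(self, task):
        self._clear_vlan(task.node.driver_info['switch_address'])

    def _set_vlan(self, switch, vlan):
        pass  # vendor-specific switch call would go here

    def _clear_vlan(self, switch):
        pass  # vendor-specific switch call would go here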

I think the neutron and flat drivers coexisting in the same deployment is
unlikely, but neutron plus none, or flat plus none, seem to be valid cases.
As for nova, this might be as easy as adding an availability zone containing
the nodes that have network isolation enabled.
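
Purely to illustrate that split (the node data and field values below are
made up), the grouping could look like:

# Made-up example: each node picks its own network interface, and the
# isolation-capable ones get grouped so nova can target them, e.g. via
# an availability zone or a capability.
nodes = [
    {'uuid': 'node-1', 'network_interface': 'neutron'},  # managed via ML2
    {'uuid': 'node-2', 'network_interface': 'neutron'},
    {'uuid': 'node-3', 'network_interface': 'none'},     # unmanaged switch
]

isolated = [n['uuid'] for n in nodes if n['network_interface'] == 'neutron']
other = [n['uuid'] for n in nodes if n['network_interface'] != 'neutron']

print('isolated zone:', isolated)
print('default zone:', other)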

Also, with all the driver composition work, I don't think we want oddities
like dhcp providers anymore; we should move further towards interfaces. And
if it is an interface, it should be treated as one (the basic spec containing
most of the requirements has merged, and we can use it to make the network
interface follow that spec as closely as possible, without going too far, as
multitenancy slipping to another release would be very bad). There might be
some backwards-compatibility caveats in this particular case, but they're
all solvable.
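
To make that concrete, here is a rough sketch of how network could be
loaded like any other composed interface, with a deployment-wide default
covering the upgrade case (the class names, the field name and the
defaulting behaviour are my assumptions, not what the patches actually do):

# Rough sketch only: network treated as one more composable interface.


class FlatNetwork(object):
    """Single flat network for provisioning and tenants."""


class NeutronNetwork(object):
    """Tenant isolation through Neutron."""


class NoopNetwork(object):
    """Switches are unmanaged or configured by hand."""


NETWORK_INTERFACES = {
    'flat': FlatNetwork,
    'neutron': NeutronNetwork,
    'none': NoopNetwork,
}


def load_network_interface(node, conf_default='flat'):
    # The per-node choice wins; nodes created before the field existed
    # fall back to a deployment-wide default, which keeps upgrades safe.
    name = getattr(node, 'network_interface', None) or conf_default
    return NETWORK_INTERFACES[name]()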

Thanks,
Vlad

On Tue, Jun 28, 2016 at 7:08 PM, Mathieu Mitchell <mmitchell at internap.com>
wrote:

> Following discussion on IRC, here are my thoughts:
>
> The proposed network_interface allows choosing a "driver" for the network
> part of a node. The values could be something like "nobody", "neutron-flat
> networking" and "neutron-tenant separation".
>
> I think this choice should be left per-node. My reasoning is that you
> could have a bunch of nodes for which you have complete Neutron support,
> through for example an ML2 plugin. These nodes would be configured using
> one of the "neutron-*" modes.
>
> On the other hand, that same Ironic installation could also manage nodes
> for which the switches are unmanaged or manually configured. In such a case,
> you would probably want to use the "nobody" mode.
>
> An important point is to expose this "capability" to Nova as you might
> want to offer nodes with neutron integration differently from "any node". I
> am unsure if the capability should be the value of the network_interface or
> a boolean "neutron integration?". Thoughts?
>
> Mathieu
>
>
> On 2016-06-28 11:32 AM, Dmitry Tantsur wrote:
>
>> Hi folks!
>>
>> I was reviewing https://review.openstack.org/317391 and realized I don't
>> quite understand why we want to have node.network_interface. What's the
>> real life use case for it?
>>
>> Do we expect some nodes to use Neutron and some not?
>>
>> Do we expect some nodes to benefit from network separation and some not?
>> There may be a use case here, but then we have to expose this field to
>> Nova for scheduling, so that users can request a "secure" node or a
>> "less secure" one. If we don't do that, Nova will pick at random, which
>> makes the use case unclear again.
>> If we do that, the whole work goes substantially beyond what we were
>> trying to do initially: isolate tenants from the provisioning network
>> and from each other.
>>
>> Flexibility is good, but the patches raise upgrade concerns, because
>> it's unclear how to provide a good default for the new field. It also
>> makes the whole thing much more complex than it could be.
>>
>> Any hints are welcome.
>>