[openstack-dev] [nova] Exposing provider networks in network_data.json
Devananda van der Veen
devananda.vdv at gmail.com
Mon Jul 20 19:15:27 UTC 2015
On Sat, Jul 18, 2015 at 5:42 AM Sam Stoelinga <sammiestoel at gmail.com> wrote:
> +1 on Kevin Benton's comments.
> Ironic should integrate with switches where the switches are SDN
> compatible. The individual bare metal node should not care which VLAN,
> VXLAN, or other translation is programmed at the switch. The individual
> bare metal node just knows it has two NICs and that they are on Neutron
> network X. The SDN controller is responsible for making sure the bare
> metal node only has access to Neutron network X, by changing the switch
> configuration dynamically.
>
> Giving an individual bare metal node access to several VLANs, and letting
> the bare metal node configure its own VLAN tags, is a big security risk
> and should not be supported.
>
I was previously of this opinion, and have changed my mind.
While I agree that this is true in the multi-tenant case, which requires an
SDN-capable TOR, there are users (namely, openstack-infra) asking us to
support a single tenant with a statically configured TOR, where cloud-init
is used to pass the (external, unchangeable) VLAN configuration to the
bare metal instance.
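
To make that concrete, here is a rough sketch of what an instance-side
consumer of network_data.json could do with a "vlan" link entry. This is not
cloud-init's actual implementation; the field names ("vlan_link", "vlan_id")
are from my reading of the metadata format, and the device handling is
simplified.

import json
import subprocess

# Read the metadata from the config drive (path is illustrative).
with open("openstack/latest/network_data.json") as f:
    network_data = json.load(f)

for link in network_data.get("links", []):
    if link.get("type") != "vlan":
        continue
    # A real consumer would resolve the parent device via the MAC address of
    # the referenced link; using the link id directly keeps the sketch short.
    parent = link["vlan_link"]
    vlan_id = link["vlan_id"]
    subprocess.check_call(
        ["ip", "link", "add", "link", parent,
         "name", "{0}.{1}".format(parent, vlan_id),
         "type", "vlan", "id", str(vlan_id)])

In the static-TOR case those vlan_id values are facts about the physical
network that the instance genuinely needs, which is why hiding them
unconditionally doesn't work for this user.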
-Deva
> Unless an operator specifically configures a bare metal node to be a VLAN
> trunk.
>
> Sam Stoelinga
>
> On Sat, Jul 18, 2015 at 5:10 AM, Kevin Benton <blak111 at gmail.com> wrote:
>
>> > which requires VLAN info to be pushed to the host. I keep hearing "bare
>> metal will never need to know about VLANs" so I want to quash that ASAP.
>>
>> That's leaking implementation details though if the bare metal host only
>> needs to be on one network. It also creates a security risk if the bare
>> metal node is untrusted.
>>
>> If the tagging is to make it so it can access multiple networks, then that
>> makes sense for now, but it should ultimately be replaced by the VLAN
>> trunk ports extension being worked on this cycle, which decouples the
>> underlying network transport from what gets tagged to the VM/bare metal.
>> On Jul 17, 2015 11:47 AM, "Jim Rollenhagen" <jim at jimrollenhagen.com>
>> wrote:
>>
>>> On Fri, Jul 17, 2015 at 10:56:36AM -0600, Kevin Benton wrote:
>>> > Check out my comments on the review. Only Neutron knows whether or not
>>> > an instance needs to do manual tagging based on the plugin/driver
>>> > loaded.
>>> >
>>> > For example, Ironic/bare metal ports can be bound by Neutron with a
>>> > correct driver, so they shouldn't get the VLAN information at the
>>> > instance level in those cases. Nova has no way to know whether Neutron
>>> > is configured this way, so Neutron should have an explicit response in
>>> > the port binding information indicating that an instance needs to tag.
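
(Reading that suggestion, I picture something roughly like the sketch below.
The binding:vif_details attribute is real; the "tagging_required" key is
purely hypothetical, just to show where such a signal could live.)

# Hypothetical port dict as Nova might see it from Neutron's port binding.
# Only binding:vif_details exists today; "tagging_required" and "vlan" are
# invented keys to illustrate an explicit "instance must tag" signal.
port = {
    "binding:vif_type": "other",
    "binding:vif_details": {
        "tagging_required": True,  # hypothetical flag from the bound driver
        "vlan": 2667,              # hypothetical: tag to apply when required
    },
}

def instance_should_tag(port):
    """Only expose/act on VLAN details when the driver says we must tag."""
    details = port.get("binding:vif_details") or {}
    return bool(details.get("tagging_required"))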
>>>
>>> Agree. However, I just want to point out that there are neutron drivers
>>> that exist today[0] that support bonded NICs with trunked VLANs, which
>>> requires VLAN info to be pushed to the host. I keep hearing "bare metal
>>> will never need to know about VLANs" so I want to quash that ASAP.
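
(Concretely, the bonded/trunked case would surface in network_data.json as
something like the link list below; field names follow my reading of the
metadata format, and the MACs, ids, and VLAN id are made up.)

# Illustrative link list: two physical NICs bonded together, with a tagged
# VLAN riding on the bond. The instance has to be told the bond members and
# the VLAN id, which is exactly the info being discussed here.
links = [
    {"id": "eth0", "type": "phy", "ethernet_mac_address": "52:54:00:aa:bb:01"},
    {"id": "eth1", "type": "phy", "ethernet_mac_address": "52:54:00:aa:bb:02"},
    {"id": "bond0", "type": "bond", "bond_links": ["eth0", "eth1"],
     "bond_mode": "802.3ad"},
    {"id": "bond0.2667", "type": "vlan", "vlan_link": "bond0",
     "vlan_id": 2667},
]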
>>>
>>> As far as Neutron sending the flag to decide whether the instance should
>>> tag packets, +1, I think that should work.
>>>
>>> // jim
>>> >
>>> > On Fri, Jul 17, 2015 at 9:51 AM, Jim Rollenhagen
>>> > <jim at jimrollenhagen.com> wrote:
>>> >
>>> > > On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
>>> > > > On 17 July 2015 at 11:23, Sean Dague <sean at dague.net> wrote:
>>> > > > > On 07/16/2015 06:06 PM, Sean M. Collins wrote:
>>> > > > >> On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
>>> > > > >>> So it looks like there is a missing part in this feature. There
>>> > > > >>> should be a way to "hide" this information if the instance does
>>> > > > >>> not need to configure VLAN interfaces to make the network
>>> > > > >>> functional.
>>> > > > >>
>>> > > > >> I just commented on the review, but the provider network API
>>> > > > >> extension is admin-only, most likely for the reasons that I
>>> > > > >> think someone has already mentioned: it exposes details of the
>>> > > > >> physical network layout that should not be exposed to tenants.
>>> > > > >
>>> > > > > So, clearly, under some circumstances the network operator wants
>>> > > > > to expose this information, because there was the request for
>>> > > > > that feature. The question in my mind is what those circumstances
>>> > > > > are, and what additional information needs to be provided here.
>>> > > > >
>>> > > > > There is always a balance between the private cloud case, which
>>> > > > > wants to enable more self-service from users (and where the users
>>> > > > > are often also the operators), and the public cloud case, where
>>> > > > > the users are outsiders and we want to hide as much as possible
>>> > > > > from them.
>>> > > > >
>>> > > > > For instance, would an additional attribute on a provider network
>>> > > > > that says "this is cool to tell people about" be an acceptable
>>> > > > > approach? Is there some other creative way to tell our
>>> > > > > infrastructure that these artifacts are meant to be exposed in
>>> > > > > this installation?
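
(Something like the sketch below is how I read that idea. The provider:*
fields are the existing admin-only attributes; the final flag is entirely
invented, just to illustrate an operator opt-in.)

# Hypothetical network resource. The provider:* attributes exist today
# (admin-only); "expose_provider_attributes" is made up to illustrate an
# operator saying "this is cool to tell people about".
network = {
    "id": "d32019d3-bc6e-4319-9c1d-6722fc136a22",
    "name": "gate-provider-net",
    "provider:network_type": "vlan",
    "provider:physical_network": "physnet1",
    "provider:segmentation_id": 2667,
    "expose_provider_attributes": True,  # hypothetical opt-in flag
}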
>>> > > > >
>>> > > > > Just kicking around ideas, because I know a pile of gate hardware
>>> > > > > for everyone to use is on the other side of answers to these
>>> > > > > questions. And given that we've been running at full capacity for
>>> > > > > days now, keeping this ball moving forward would be great.
>>> > > >
>>> > > > Maybe we just need to add policy around who gets to see that extra
>>> > > > detail, and maybe hide it by default?
>>> > > >
>>> > > > Would that deal with the concerns here?
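
(For reference, Neutron's policy.json already gates who can read the
provider attributes; as I recall, the default rules look roughly like the
ones below, shown as a Python dict for brevity. The suggestion here would
presumably mean keying the metadata exposure off the same kind of
per-deployment policy rather than hard-coding "never expose" in Nova.)

# Provider-attribute visibility rules from Neutron's policy.json (from
# memory, admin-only by default); a deployment could relax these instead of
# every deployment getting the same hard-coded behavior.
provider_visibility_policy = {
    "get_network:provider:network_type": "rule:admin_only",
    "get_network:provider:physical_network": "rule:admin_only",
    "get_network:provider:segmentation_id": "rule:admin_only",
}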
>>> > >
>>> > > I'm not so sure. There are certain Neutron plugins that work with
>>> > > certain virt drivers (Ironic) that require this information to be
>>> > > passed to all instances built by that virt driver. However, it
>>> > > doesn't (and probably shouldn't, so as not to confuse cloud-init
>>> > > etc.) need to be passed to other instances. I think the conditional
>>> > > for passing this as metadata is going to need to be some combination
>>> > > of operator config, Neutron config/driver, and virt driver.
>>> > >
>>> > > I know we don't like networking things to be conditional on the virt
>>> > > driver, but Ironic is working on feature parity with virt for
>>> > > networking, and baremetal networking is vastly different than virt
>>> > > networking. I think we're going to have to accept that.
>>> > >
>>> > > // jim
>>> > >
>>> > > >
>>> > > > Thanks,
>>> > > > John