[openstack-dev] [nova] [ironic] Exposing provider networks in network_data.json
Mathieu Gagné
mgagne at iweb.com
Fri Jul 17 17:02:57 UTC 2015
(adding [ironic] since baremetal use cases are involved)
On 2015-07-17 11:51 AM, Jim Rollenhagen wrote:
> On Fri, Jul 17, 2015 at 01:06:46PM +0100, John Garbutt wrote:
>> On 17 July 2015 at 11:23, Sean Dague <sean at dague.net> wrote:
>>> On 07/16/2015 06:06 PM, Sean M. Collins wrote:
>>>> On Thu, Jul 16, 2015 at 01:23:29PM PDT, Mathieu Gagné wrote:
>>>>> So it looks like there is a missing part in this feature. There should
>>>>> be a way to "hide" this information if the instance does not need to
>>>>> configure VLAN interfaces to make the network functional.
>>>>
>>>> I just commented on the review, but the provider network API extension
>>>> is admin only, most likely for the reasons that I think someone has
>>>> already mentioned, that it exposes details of the physical network
>>>> layout that should not be exposed to tenants.
>>>
>>> So, clearly, under some circumstances the network operator wants to
>>> expose this information, because there was the request for that feature.
>>> The question in my mind is what circumstances are those, and what
>>> additional information needs to be provided here.
>>>
>>> There is always a balance between the private cloud case which wants to
>>> enable more self service from users (and where the users are often also
>>> the operators), and the public cloud case where the users are outsiders
>>> and we want to hide as much as possible from them.
>>>
>>> For instance, would an additional attribute on a provider network that
>>> says "this is cool to tell people about" be an acceptable approach? Is
>>> there some other creative way to tell our infrastructure that these
>>> artifacts are meant to be exposed in this installation?
>>>
>>> Just kicking around ideas, because I know a pile of gate hardware for
>>> everyone to use is at the other side of answers to these questions. And
>>> given that we've been running full capacity for days now, keeping this
>>> ball moving forward would be great.
>>
>> Maybe we just need to add policy around who gets to see that extra
>> detail, and maybe hide it by default?
>>
>> Would that deal with the concerns here?
>
> I'm not so sure. There are certain Neutron plugins that work with
> certain virt drivers (Ironic) that require this information to be passed
> to all instances built by that virt driver. However, it doesn't (and
> probably shouldn't, as to not confuse cloud-init/etc) need to be passed
> to other instances. I think the conditional for passing this as metadata
> is going to need to be some combination of operator config, Neutron
> config/driver, and virt driver.
>
> I know we don't like networking things to be conditional on the virt
> driver, but Ironic is working on feature parity with virt for
> networking, and baremetal networking is vastly different than virt
> networking. I think we're going to have to accept that.
>
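On the policy idea: the Neutron API side already hides the provider
attributes from non-admins through policy.json, with rules along these
lines (quoting from memory, so the exact rule names may differ):

    "get_network:provider:network_type": "rule:admin_only",
    "get_network:provider:physical_network": "rule:admin_only",
    "get_network:provider:segmentation_id": "rule:admin_only",

So the open question is less about the API and more about whether the
metadata/config-drive path should grow a similar operator-controlled
knob.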
How about we list the known use cases (valid or not) for baremetal so
people understand what we are referring to? We will then be able to
determine what we wish to support.
Here is my take on it:
1. Single NIC with single network in access mode
2. Single NIC with single network in trunk mode (similar to 3.)
3. Single NIC with multiple networks in trunk mode
4. Multiple NICs with 1 network/NIC in access mode:
1 NIC == 1 network in access mode
5. Multiple NICs with multiple networks in trunk mode:
1 NIC == multiple networks in trunk mode
(which NIC each network is associated with is left as an exercise for
the reader)
6. Multiple NICs with bonding with 1 network/bond in access mode:
2 NICs == 1 bond == 1 network in access mode
7. Multiple NICs with bonding with multiple networks in trunk mode:
2 NICs == 1 bond == multiple networks in trunk mode
Those examples are based on a setup with VLAN networks.
Let me know if you feel I missed use cases.
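To make this more concrete, here is a rough sketch of what I would
expect network_data.json to contain for use case 7 (2 NICs bonded, with
2 VLAN networks trunked on top of the bond). The key names (bond_links,
vlan_link, vlan_id, ...) are illustrative and based on my reading of the
proposed change, not a confirmed schema; MACs and IPs are made up:

    {
      "links": [
        {"id": "eth0", "type": "phy",
         "ethernet_mac_address": "52:54:00:00:00:01", "mtu": 1500},
        {"id": "eth1", "type": "phy",
         "ethernet_mac_address": "52:54:00:00:00:02", "mtu": 1500},
        {"id": "bond0", "type": "bond",
         "bond_links": ["eth0", "eth1"], "bond_mode": "802.3ad"},
        {"id": "bond0.100", "type": "vlan",
         "vlan_link": "bond0", "vlan_id": 100},
        {"id": "bond0.200", "type": "vlan",
         "vlan_link": "bond0", "vlan_id": 200}
      ],
      "networks": [
        {"id": "network0", "link": "bond0.100", "type": "ipv4",
         "ip_address": "192.0.2.10", "netmask": "255.255.255.0"},
        {"id": "network1", "link": "bond0.200", "type": "ipv4",
         "ip_address": "198.51.100.10", "netmask": "255.255.255.0"}
      ]
    }

Use cases 1-6 are just simpler variations of the same structure: drop
the bond for a single NIC, drop the vlan links for access mode, and so
on. This is basically the "hide" question above: for use case 1 the
instance shouldn't need to see any vlan links at all.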
(I'm less familiar with VXLAN so excuse me if I get some stuff wrong)
For VXLAN networks, there is the question of where the VTEP is:
a. the baremetal node itself?
b. the TOR switch?
Use case a. leaves a lot of questions to be answered regarding
security, which I'm definitely not familiar with.
As for use case b., we have to perform VLAN translation.
We can also argue that even without VXLAN encapsulation, an operator
might wish to perform VLAN translation so the tenant isn't aware of the
actual VLAN assigned to them.
This could also allow an operator to bridge multiple L2 network segments
together while still exposing a single common VLAN to the tenant across
the whole datacenter, regardless of the L2 network segment their
baremetal node landed in.
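For example (numbers made up), the tenant could always be handed VLAN
100 in network_data.json, while each TOR translates that to whatever
segment is actually in use locally:

    tenant-facing VLAN    rack/segment    switch-local VLAN
    100                   rack A          2345
    100                   rack B          3412

The instance configuration stays identical wherever the node lands; only
the switch-side mapping changes.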
--
Mathieu