[openstack-dev] [Neutron] Neutron extensions

Ian Wells ijw.ubuntu at cack.org.uk
Sat Mar 21 01:27:21 UTC 2015


On 20 March 2015 at 15:49, Salvatore Orlando <sorlando at nicira.com> wrote:

> The MTU issue has been a long-standing problem for neutron users. What
> this extension is doing is simply, in my opinion, enabling API control over
> an aspect users were dealing with previously through custom made scripts.
>

Actually, version 1 is not even doing that; it's simply telling the user
what happened - something the user has never previously been able to find
out - and configuring the network consistently.  I don't think we
implemented the 'choose an MTU' API; we're simply telling you the MTU you
got.

Since this is frequently smaller than you think (there are some
non-standard features that mean you frequently *can* pass larger packets
than should really work, hiding the problem at the cost of a performance
penalty), and there was previously no way of getting any idea of what it
is, this is a big step forward.

And to reiterate, because this point is often missed: different networks in
Neutron have different MTUs.  My virtual networks might be 1450.  My
external network might be 1500.  The provider network to my NFS server
might be 9000.  There is *nothing* in today's Neutron that lets you do
anything about that, and - since Neutron routers and Neutron DHCP agents
have no means of dealing with networks of differing MTUs - really strange
things happen if you try some sort of workaround.

> If a plugin does not support specifically setting the MTU parameter, I
> would raise a 500 NotImplemented error. This will probably create a
> precedent, but as I also stated in the past, I tend to believe this might
> actually be better than the hide & seek game we do with extension.
>

I am totally happy with this, if we agree it's what we want to do, and it
makes plenty of sense for when you request an MTU.
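
For concreteness, a rough sketch of what that could look like server-side -
illustrative only, with made-up names (MTUNotSupported,
supports_mtu_selection), not the real plugin code:

    # A sketch only, not the actual Neutron plugin code: the exception class,
    # plugin class and capability flag below are made-up names, here to show
    # the behaviour being agreed on - if the caller explicitly asks for an
    # MTU the backend cannot honour, fail loudly instead of silently
    # dropping the attribute.

    class MTUNotSupported(Exception):
        """Would map to a 5xx 'not implemented' response at the API layer."""

    class ExamplePlugin(object):
        supports_mtu_selection = False  # hypothetical capability flag

        def create_network(self, context, network):
            net = network['network']
            if net.get('mtu') and not self.supports_mtu_selection:
                raise MTUNotSupported(
                    "this backend cannot honour an explicitly requested MTU")
            # Normal creation path: the plugin still reports the MTU it
            # ended up with, or leaves it unset if it cannot work one out.
            return net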

The other half of the interface is when you don't request a specific MTU
but you'd like to know what MTU you got - the approach we have today is
that if the MTU can't be determined (either a plugin with no support or one
that's short on information) then the value on the network object is
unset.  I assume people are OK with that.
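
On the read-only side, the consumer's view would be something like this
sketch (assuming the net-mtu extension is loaded; the endpoint URL and
token handling here are placeholders):

    # Sketch: read back the MTU Neutron reports for a network, treating an
    # absent/unset value as "unknown".  Endpoint and token are placeholders.
    import requests

    NEUTRON = "http://neutron.example.com:9696"   # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<keystone-token>"}

    def network_mtu(net_id):
        resp = requests.get("%s/v2.0/networks/%s" % (NEUTRON, net_id),
                            headers=HEADERS)
        resp.raise_for_status()
        mtu = resp.json()["network"].get("mtu")
        # None means the plugin could not determine an MTU (no support, or
        # short on information) - you learn nothing, same as today.
        return mtu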


> The vlan_transparent feature serves a specific purpose of a class of
> applications - NFV apps.
>

To be pedantic - the uses for it are few and far between, but I wouldn't
reduce it to 'NFV apps'.  I wrote http://virl.cisco.com/ on OpenStack a
couple of years ago, and it's network simulation but not actually NFV.
People implementing resold services (...aaS) in VMs would quite like VLANs
on their virtual networks too, and this has been discussed at no fewer
than three summits so far.  I'm sure other people can come up with
creative reasons.

> It has been speculated during the review process whether this was actually
> a "provider network" attribute.
>

Which it isn't, just for reference.


> In theory it is something that characterises how the network should be
> implemented in the backend.
>
> However it was not possible to make this an "admin" attribute, because
> non-admins might also require a vlan_transparent network. Proper RBAC might
> allow us to expose this attribute only to a specific class of users, but
> Neutron does not yet have RBAC [1]
>

I think it's a little early to worry about restricting the flag.  The
default implementation pretty much returns a constant (and checks whether
that constant is true when you've asked for it to be) - it's implemented
as a call so that it can be expanded in future.
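
Roughly, that default behaviour amounts to the sketch below (illustrative
only; the constant standing in for a config option and the function names
are assumptions, not the merged code):

    # Sketch of "returns a constant, implemented as a call": the default
    # hook just reports a deployment-wide value, but being a call means a
    # future plugin can decide per network instead.
    DEPLOYMENT_VLAN_TRANSPARENT = False  # stand-in for a config option

    def is_vlan_transparent(network):
        # A smarter plugin would inspect the network/segments here.
        return DEPLOYMENT_VLAN_TRANSPARENT

    def validate_vlan_transparency(requested, network):
        # Checks whether the constant is true when you've asked for it to
        # be: refuse the request if transparency was asked for but isn't
        # available.
        if requested and not is_vlan_transparent(network):
            raise ValueError("VLAN transparent networks are not available "
                             "in this deployment")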

> Because of its nature vlan_transparent is an attribute that probably
> several plugins will not be able to understand.
>

And again, backward compatibility is documented - and actually pretty
horrible, now I come to reread it - so if we wanted to go with a 500 as
above, that's quite reasonable.


> Regardless of what the community decides regarding extensions vs
> non-extensions, the code as it is implies that this flag is present in every
> request - defaulting to False.
>

Which is, in fact, not correct (or at least not the way it's supposed to
be; I need to check the code).

The original idea was that if it's not present in the request then you
can't assume the network you're returned is a VLAN trunk, but you also
can't assume it isn't - as in, it's the same as the current behaviour,
where the plugin does what it does and you get to put up with the results.
The difference is that the plugin now gets to tell you what it's done.


> This can lead to somewhat confusing situation, because users can set it to
> True, and get a 200 response. As a user, I would think that Neutron has
> prepared for me a nice network which is vlan transparent... but if Neutron
> is running any plugin which does not support this extension, I would be in
> for a huge disappointment when I discover my network is not vlan
> transparent at all!
>

The spec has detail on how the user works this out, as I say.
Unfortunately it's not by return code....

> I reckon that perhaps, as a short term measure, the configuration flag
> Armando mentioned might be used to obscure completely the API attribute if
> a deployer chooses not to support vlan transparent networks.
>

Not unreasonable, though I would prefer a consistent API so that if I ask
for a VLAN transparent network I don't have to cope with:

- I got it
- I didn't get it because the plugin understands the request but it can't
oblige
- I didn't get it because the plugin doesn't support it (and that looks
different to the end user)
- I didn't get it because someone turned the API off today
- I didn't get it and I got an error because I should have checked for an
extension before even trying to ask the question

The answers are 'you got what you asked for' and 'you didn't get what you
asked for': the why is not terribly interesting and the variations in API
are annoying.
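
Put differently, the only check a user should ever need is the one below,
whatever the underlying reason turns out to be (a sketch using the
requests library; endpoint and token are placeholders):

    # Sketch: ask for a VLAN transparent network and verify what came back.
    # The only question that matters is "did I get what I asked for?".
    import requests

    NEUTRON = "http://neutron.example.com:9696"   # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<keystone-token>"}

    body = {"network": {"name": "trunk-net", "vlan_transparent": True}}
    resp = requests.post("%s/v2.0/networks" % NEUTRON,
                         headers=HEADERS, json=body)
    if resp.ok and resp.json()["network"].get("vlan_transparent"):
        print("got a VLAN transparent network")
    else:
        # Plugin refused, attribute not implemented, API switched off -
        # whichever it was, the useful answer is the same: no trunk.
        print("did not get a VLAN transparent network")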

> One more thing I do not see is how a transparent VLAN is implemented. Going
> back to the spec history it is unclear whether this was at all in the scope
> of this spec.
> Indeed it seems this feature simply gives an ML2 driver a way to
> specify whether it supports transparent networks or not, and then inform the
> user in a fairly backend-agnostic way.
> Indeed, quoting a comment in the spec review [2]:
>
> "there's no change to the implementation of the networking because we're
> not changing it - so the change doesn't get down into the agents [..] But I
> don't want to get into fixing the OVS driver to do VLAN trunks because it's
> a can of worms and there is already a totally functional open source
> implementation in core with LB"
>
> Even if we're not gating on it, it would be great at least to clarify how
> the LB agent will configure such trunk networks;
>

The point about 'how it is implemented' is that we aren't actually
implementing 'make a special effort' in the current code.  We're simply
telling the user what they got and putting an API there so that - in the
future - network drivers can make sensible choices.

If you're running with the OVS driver, it programs OVS to VLAN-tag traffic
to separate networks.  OVS won't do nested VLAN tags (QinQ), so it drops
tagged packets sent by VMs.  If it's ever involved in a network, that
network is not VLAN transparent, and only rewriting the OVS driver would
change that.

If you're running with the Linuxbridge driver, then LB (the way the LB
driver uses it) doesn't care in the slightest about VLAN tags being present
or absent and passes everything happily.  So LB doesn't stop a network from
being VLAN transparent.

Similarly on the physical network side: VLAN bad (switches generally don't
like you), GRE good.

All the current implementation does is examine the drivers in use for a
network and report back if it can be used as a trunk or not.  If you ask
for a trunk it doesn't change behaviour - it works out if the network would
be a trunk, and if it wouldn't be, it refuses to create the network.
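
The logic is roughly the sketch below - a simplified illustration of
"examine the drivers and report back", not the merged ML2 code; the class
and flag names are made up:

    # Simplified sketch: each driver knows whether it can leave VLAN tags
    # alone; a network is only transparent if every driver involved can.
    class MechDriver(object):
        def __init__(self, name, vlan_transparent):
            self.name = name
            # e.g. Linuxbridge happily passes tagged frames; the OVS driver,
            # as programmed today, drops them.
            self.vlan_transparent = vlan_transparent

    DRIVERS = [MechDriver("linuxbridge", True),
               MechDriver("openvswitch", False)]

    def network_is_vlan_transparent(drivers=DRIVERS):
        return all(d.vlan_transparent for d in drivers)

    def create_network(requested_transparent):
        transparent = network_is_vlan_transparent()
        if requested_transparent and not transparent:
            # Asked for a trunk, can't have one: refuse rather than lie.
            raise RuntimeError("VLAN transparent network not attainable "
                               "with the configured drivers")
        # Otherwise report whatever the drivers can actually deliver.
        return {"vlan_transparent": transparent}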


> because otherwise we will have to document this attribute stating
> something like the following
> "the vlan_transparent attribute can be set to True to request a network
> which can be used for doing vlan trunks in guests. Whether these kinds of
> networks are attainable depends on the backend configuration. If the
> neutron deployment is running a plugin which is aware of vlan transparent
> networks then an error code will be returned if the vlan transparent
> network cannot be implemented. If the neutron deployment is running a
> plugin which is oblivious to vlan transparent networks the plugin will
> simply ignore the setting even if the attribute will be successfully set on
> the network resource. Unfortunately the API at the moment does not offer a
> way to understand whether a particular neutron deployment is aware of
> transparent vlans or not".
>

Per above, and really the short statement of aims: "I can guarantee the
network I've given you is a trunk or I can't.   The reason why is not
terribly useful because it doesn't change anything.  I might make a special
effort to make you a trunk if you ask for one."

> PS: One final note - I have seen several references in this thread to
> approved specifications as if they were immutable laws cast in stone.
> Personally, I don't think that an approved specification is any sort of
> binding contract between the submitter and the rest of the neutron
> community - I am always glad to receive feedback on my specs after their
> approval; similarly I want to be free to object as I wish on approved
> specifications, even after the relevant code is merged.
>

I'm absolutely not saying that they're set in stone.  I am saying that the
best time to object is at spec time.  If people are objecting now, the
process has failed, and I'd like to know if we can do something about that
and save work in the future, because it's harder to change things late in
the day.

Aside from that, I'd sooner see a reasonably high bar for changing tack,
along with a bit of discussion.  Right now we're doing admirably, but
remember we started out with a mail saying 'These changes seem odd.  I'm
reverting them', which was, to say the least, concerning.
-- 
Ian.