[openstack-dev] [Stable][Nova] VMware NSXv Support
Salvatore Orlando
salv.orlando at gmail.com
Thu Aug 13 08:39:16 UTC 2015
On 13 August 2015 at 09:50, John Garbutt <john at johngarbutt.com> wrote:
> On Wednesday, August 12, 2015, Thierry Carrez <thierry at openstack.org>
> wrote:
>
>> Gary Kotton wrote:
>> >
>> > On 8/12/15, 12:12 AM, "Mike Perez" <thingee at gmail.com> wrote:
>> >> On 15:39 Aug 11, Gary Kotton wrote:
>> >>> On 8/11/15, 6:09 PM, "Jay Pipes" <jaypipes at gmail.com> wrote:
>> >>>
>> >>>> Are you saying that *new functionality* was added to the stable/kilo
>> >>>> branch of *Neutron*, and because new functionality was added to
>> >>>> stable/kilo's Neutron, that stable/kilo *Nova* will no longer work?
>> >>>
>> >>> Yes. That is exactly what I am saying. The issue is as follows. The
>> >>> NSXv manager requires the virtual machine's VNIC index to enable
>> >>> security groups to work. Without that a VM will not be able to send
>> >>> and receive traffic. In addition to this, the NSXv plugin does not
>> >>> have any agents, so we need to do the metadata plugin changes to
>> >>> ensure metadata support. So effectively, with the patches
>> >>> https://review.openstack.org/209372 and
>> >>> https://review.openstack.org/209374, the stable/kilo Nova code will
>> >>> not work with the stable/kilo Neutron NSXv plugin.
>> >> <snip>
>> >>
>> >>> So what do you suggest?
>> >>
>> >> This was added in Neutron during Kilo [1].
>> >>
>> >> It's the responsibility of the patch owner to revert things if
>> >> something doesn't land in a dependency patch of some other project.
>> >>
>> >> I'm not familiar with the patch, but you can see if Neutron folks will
>> >> accept a revert in stable/kilo. There's no reason to get other projects
>> >> involved because this wasn't handled properly.
>> >>
>> >> [1] - https://review.openstack.org/#/c/144278/
>> >
>> > So you are suggesting that we revert the neutron plugin? I do not think
>> > that a revert is relevant here.
>>
>> Yeah, I'm not sure reverting the Neutron patch would be more acceptable.
>> That one landed in Neutron kilo in time.
>>
>> The issue here is that due to Nova's review velocity during the kilo
>> cycle (and arguably the failure to raise this as a cross-project issue
>> affecting the release), the VMware NSXv support was shipped as broken in
>> Kilo, and requires non-trivial changes to get fixed.
>
>
> I see this as Nova not shipping with VMware NSXv support in Kilo, because
> the feature was never completed, rather than shipping it broken. I could
> be missing something, but I also know that distinction doesn't really help
> anyone.
>
>
>> We have two options: bending the stable rules to allow the fix to be
>> backported, or document it as broken in Kilo with the invasive patches
>> being made available for people and distributions who still want to
>> apply it.
>>
>> Given that we are 4 months into Kilo, I'd say stable/kilo users are used
>> to this being broken at this point, so my vote would go for the second
>> option.
>
>
> This would be backporting a new driver to an older release. That seems
> like a bad idea.
>
>
>> That said, we should definitely raise [1] as a cross-project issue and
>> see how we could work it into Liberty, so that we don't end up in the
>> same dark corner in 4 months. I just don't want to break the stable
>> rules (and the user confidence we've built around us applying them) to
>> retroactively pay back review velocity / trust issues within Nova.
>>
>> [1] https://review.openstack.org/#/c/165750/
>>
>>
> So this is the same issue. The VMware Neutron driver has merged support
> for a feature that we have not yet managed to get into Nova.
>
> First the long term view...
>
> This is happening more frequently with Cinder drivers/features, Neutron
> things, and to a lesser extent Glance.
>
> The great work the Cinder folks have done with brick is hopefully going
> to improve the situation for Cinder. There is a group of folks working on
> a similar VIF-focused library to make it easier to add support for new
> Neutron VIF drivers without needing to merge things in Nova.
>
> Right now the above efforts are largely focused on libvirt, but using
> oslo.vmware, or probably something else, I am sure we could evolve
> something similar for VMware; I haven't dug into that yet.
>
That is definitely the way to go in my opinion. I reckon VIF plugging is an
area where there is a lot of coupling with Neutron, and "decentralizing" it
will definitely be beneficial for both contributors and reviewers. It
should be OK to have a VMware-specific VIF library; it would not really
work like Cinder's brick, but from the Nova perspective I think this does
not matter.
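To make this a bit more concrete, here is a very rough sketch of the kind of
minimal contract such a library could expose to Nova. All names here are
hypothetical (nothing like this exists today); the point is only that the
backend-specific logic, such as the NSXv VNIC index handling, would live and
be reviewed outside the Nova tree:

    # Hypothetical sketch only: none of these classes or modules exist today.
    import abc


    class VIFPlugin(metaclass=abc.ABCMeta):
        """Minimal contract a driver-specific VIF library would implement."""

        @abc.abstractmethod
        def plug(self, instance_uuid, vif):
            """Wire the VIF into the backend before the VM starts."""

        @abc.abstractmethod
        def unplug(self, instance_uuid, vif):
            """Release any backend resources associated with the VIF."""


    class NSXvVIFPlugin(VIFPlugin):
        """Hypothetical NSXv implementation living outside the Nova tree."""

        def plug(self, instance_uuid, vif):
            # The NSXv manager needs the VNIC index so that security groups
            # can be enforced on the port; Nova only hands over the vif dict.
            vnic_index = vif.get('details', {}).get('vnic_index')
            self._update_port_vnic_index(vif['id'], vnic_index)

        def unplug(self, instance_uuid, vif):
            # Clear the index so the backend releases the binding.
            self._update_port_vnic_index(vif['id'], None)

        def _update_port_vnic_index(self, port_id, vnic_index):
            # Placeholder: a real library would call the Neutron API or the
            # NSXv manager here to update the port.
            pass

Nova would then only need to load the right plugin for a given vif_type,
while everything backend-specific gets reviewed and released independently,
much like brick does for Cinder.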
>
> There are lots of coding efforts and process efforts to make the most of
> our current review bandwidth and to expand that bandwidth, but I don't
> think it's helpful to get into that here.
>
> So, more short term and specific points...
>
> This patch had no bug or blueprint attached. It eventually got noticed a
> few weeks after the blueprint freeze. It's hard to track cross-project
> dependencies if we don't know they exist. None of the various escalation
> paths raised this patch. None of those things are good, but they happened;
> things happen.
>
The blueprint was indeed attached to the commit message only on the last
patchset. This has been handled poorly by the VMware team.
As you said, things happen. As a result, the patch was pretty much known
only to the submitter and a few other folks.
>
> Now it's a priority call. We have already delayed several blueprints (20
> or 30) to try to get as many bugs as possible fixed on features that have
> already made it into the tree (we already have a backlog of well over 100
> bug patches to review) and to keep the priority work moving forward (work
> that is mostly about helping us go faster in the near future).
>
Priority and common sense, I would say!
>
> Right now my gut tells me we should wait until Mitaka for this one, partly
> in fairness to all the other things that did follow the process and meet
> all the deadlines but that we still have not managed to get reviewed and
> merged. In the meantime we should look at ways to get VMware to adopt some
> of the code-splitting strategies the libvirt driver is working on, so we
> can make these things easier to land in the future.
>
From a fairness perspective I guess you are correct. If you agree to merge
this patch, then somebody who got a blueprint delayed might rightfully
complain.
There is also the fundamental aspect that the VMware team has been far from
proactive, especially because of the lack of reviews from SMEs, an issue
which must and definitely will be fixed.
The common-sense perspective is probably just a tad different. The patch
looks really low-risk (at least to me). Technically speaking it is not even
a new feature; it adapts an existing feature to changes being introduced
in openstack/vmware-nsx for Liberty. Delaying it will either break support
on trunk, or force the vmware-nsx team to revert changes in Neutron and
revise its strategy for Liberty.
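For what it's worth, the Nova-side change is conceptually small: when the
VIF is plugged, Nova has to tell Neutron which VNIC index the port received,
so that the NSXv backend can enforce security groups on it. A rough sketch
of the idea (not the actual patch; the 'vnic_index' port attribute is the
NSXv-specific extension these patches rely on, and exact names may differ):

    # Rough illustration only, not the code under review.
    from neutronclient.v2_0 import client as neutron_client

    def report_vnic_index(neutron, port_id, vnic_index):
        # Without the VNIC index on the Neutron port, the NSXv backend has
        # no way to wire up security groups for the VM's traffic.
        neutron.update_port(port_id, {'port': {'vnic_index': vnic_index}})

    # Hypothetical credentials/endpoint, just to make the sketch complete.
    neutron = neutron_client.Client(username='nova',
                                    password='secret',
                                    tenant_name='service',
                                    auth_url='http://keystone:5000/v2.0')
    report_vnic_index(neutron, port_id='PORT_UUID', vnic_index=0)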
> Now I am guessing there could be a side of this I am totally missing, but
> this feels like the best way forward right now, looking at the whole
> community.
>
I think it all boils down to seeing how many of the delayed blueprints
presented characteristics similar to this one. If other relatively low-risk,
driver-specific blueprints with small code patches to review have been
delayed, then, hands down, merging this one would be absolutely unfair to
the whole community.
>
> Thanks,
> John
>