[openstack-dev] [Quantum] [Nova] improving vif-plugging

Gary Kotton gkotton at redhat.com
Fri Jan 25 11:28:35 UTC 2013


On 01/25/2013 01:08 PM, Daniel P. Berrange wrote:
> On Fri, Jan 25, 2013 at 01:03:33PM +0200, Gary Kotton wrote:
>> On 01/25/2013 01:00 PM, Daniel P. Berrange wrote:
>>> On Thu, Jan 24, 2013 at 10:58:43AM -0800, Dan Wendlandt wrote:
>>>>>> I completely agree that some fields make sense to multiple
>>>>> hypervisors... I
>>>>>> certainly did not intend to say anything to the contrary.  The point I
>>>>> was
>>>>>> making was that there is no single set of information that is relevant to all
>>>>>> hypervisors.  Do you agree with that statement, or are you advocating
>>>>> that
>>>>>> there is a single set of such information?
>>>>>>
>>>>>> Also, I'm still trying to get confirmation to my question above, namely
>>>>>> that you do intend that Quantum would provide all such data needed to
>>>>> plug
>>>>>> a VIF, for example, providing a bridge name to a hypervisor running KVM,
>>>>> or
>>>>>> a port-group id for a hypervisor running ESX.
>>>>> In essence yes. It is hard for me to answer your question about bridge
>>>>> name vs port-group id for ESX because AFAICK there's no plugin that
>>>>> exists for ESX + Quantum today - nova.virt.vmwareapi.vif certainly
>>>>> doesn't appear to have any such code. I'm not overly concerned though.
>>>>>
>>>> I agree that if you look at the simple linux bridge or OVS plugins, they
>>>> follow a very basic model where a vif_type and even bridge name would be
>>>> uniform for an all KVM deployment.
>>>>
>>>> But, for example, the NVP plugin can control KVM, XenServer, and soon ESX
>>>> (waiting on a code change to add some more logic to ESX vif-plugging, which
>>>> is one of the reasons I'm mentioning it as a specific example).  With KVM
>>>> vs. ESX, the data returned is different in kind (i.e., one is a linux
>>>> bridge name, another is a port-group).  And with KVM and XenServer, even
>>>> though they are the same in kind (both bridge names), they are very likely
>>>> to be different in form, since XenServer generates bridge names using a
>>>> standard format (e.g., xapi0 or xenbr1).  Below you propose something that
>>>> with a very minor tweak would solve this concern, I believe.
>>> Ok, so it sounds like we're in agreement then that my plan for the libvirt
>>> VIF drivers can work, as long as we also do the enhancement we describe
>>> below about passing supported vif types.
>>>
>>> Can you point me to the code for the NVP plugin ? Unless I'm missing
>>> something I don't see any plugin called 'nvp' in Quantum GIT ?
>> Please look at https://github.com/openstack/quantum/tree/master/quantum/plugins/nicira/nicira_nvp_plugin
>>
>>>>> Maybe we should just make Nova pass across its list of supported
>>>>> vif types during 'create port' regardless of whether we need it
>>>>> now and be done with it.
>>>>>
>>>> Yes, this is what I was thinking as well.  Somewhat of a "negotiation"
>>>> where Nova sends a certain amount of information over (e.g., its supported
>>>> vif_types, its node-id) and then Quantum can determine what vif_type to
>>>> respond with.  I'm thinking that node-id may be needed to handle the case
>>>> where Quantum needs to respond with different data even for the same
>>>> vif_type (e.g., two different esx clusters that have different
>>>> port-group-ids).
>>>>
>>>> This adds some more complexity to Quantum, as the centralized
>>>> quantum-server must know the mapping from a node-id to bridge-id +
>>>> vif_type, but some part of the Quantum plugin must know this information
>>>> already (e.g., an agent), so it would really just be a matter of shifting
>>>> bits around within Quantum, which seems reasonable given time to implement
>>>> this.
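To make the "negotiation" above concrete, here is a minimal sketch of what the server side could look like: Nova passes its supported vif_types plus a node-id on port create, and the Quantum plugin picks the vif_type and the node-specific plugging data (bridge name, port-group id, etc.). All names here (the mapping, the function, the vif_type strings) are illustrative assumptions, not existing Quantum code.

```python
# Hypothetical per-node mapping the plugin (or its agent) would maintain:
# node-id -> (vif_type, node-specific plugging details)
NODE_VIF_MAP = {
    "kvm-host-1": ("bridge", {"bridge": "br-int"}),
    "xen-host-1": ("bridge", {"bridge": "xapi0"}),
    "esx-cluster-a": ("vmware_dvs", {"port_group": "pg-1001"}),
    "esx-cluster-b": ("vmware_dvs", {"port_group": "pg-2002"}),
}


def negotiate_vif(supported_vif_types, node_id):
    """Pick the vif_type and details for a port on the given node.

    Raises ValueError if the node requires a vif_type the requesting
    Nova host did not declare support for.
    """
    vif_type, details = NODE_VIF_MAP[node_id]
    if vif_type not in supported_vif_types:
        raise ValueError(
            "node %s requires vif_type %r, which the requesting "
            "compute host does not support" % (node_id, vif_type))
    return vif_type, details
```

Note how two ESX clusters can share a vif_type yet return different port-group-ids, which is exactly the case node-id is meant to disambiguate.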
>>> Ok, so the only thing to worry about now is timing. Is it reasonable to
>>> get this enhancement done for Grizzly, so that I can safely deprecate
>>> the VIF plugins in libvirt, or do we need to wait for one more cycle
>>> meaning I should remove the deprecation log messages temporarily ?
>> I am in favour of removing the deprecation log messages.
> On what basis ? Whether the deprecation messages are appropriate or
> not depends on whether we can do the enhancements described above
> for Grizzly or not. Are you saying we can't do those enhancements
> for Grizzly ?

First and foremost, I do not think that you should force a user to change 
their configuration when upgrading. That defeats the purpose, in my 
opinion, and I have said so from the start of this development. I am not 
sure whether others agree here.

The change in Quantum is not trivial. We need to decide how to 
approach the problem and see how it can be supported by all of the 
existing plugins upstream (I would also hope that it would not break 
the existing ones that are not in the repository at the moment). If 
there is time and we succeed in doing this, then great. If not, then I 
guess we will have to wait for the next version.

The above change would have Quantum be aware of the nodes/hosts. The 
ability to support this was added in the port_bindings extension. In 
addition, there was discussion about moving it to the port (which did 
not go down too well - 
https://review.openstack.org/#/c/18216/8/quantum/api/v2/attributes.py)
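As a rough illustration of what the port_bindings extension enables, a port-create request could carry the compute host alongside the usual port attributes, and the plugin could answer with the vif_type it resolved for that host. The `binding:` attribute names follow the portbindings extension; the surrounding request/response shape and values are made up for illustration.

```python
# Illustrative request body: Nova identifies the host it will plug the
# VIF on, so the plugin can resolve host-specific plugging data.
port_create_body = {
    "port": {
        "network_id": "net-uuid",
        "binding:host_id": "kvm-host-1",
    }
}

# Illustrative response: the plugin has chosen a vif_type for that host.
port_response = {
    "port": {
        "id": "port-uuid",
        "binding:host_id": "kvm-host-1",
        "binding:vif_type": "bridge",
    }
}
```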


>
>
> Daniel



