[openstack-dev] [Quantum] [Nova] improving vif-plugging

Gary Kotton gkotton at redhat.com
Fri Jan 25 11:03:33 UTC 2013


On 01/25/2013 01:00 PM, Daniel P. Berrange wrote:
> On Thu, Jan 24, 2013 at 10:58:43AM -0800, Dan Wendlandt wrote:
>>>> I completely agree that some fields make sense to multiple
>>>> hypervisors... I certainly did not intend to say anything to the
>>>> contrary.  The point I was making was that there is no single set of
>>>> information that is relevant to all hypervisors.  Do you agree with
>>>> that statement, or are you advocating that there is a single set of
>>>> such information?
>>>>
>>>> Also, I'm still trying to get confirmation on my question above, namely
>>>> that you do intend that Quantum would provide all such data needed to
>>>> plug a VIF, for example, providing a bridge name to a hypervisor
>>>> running KVM, or a port-group id for a hypervisor running ESX.
>>> In essence yes. It is hard for me to answer your question about bridge
>>> name vs port-group id for ESX because AFAICK there's no plugin that
>>> exists for ESX + Quantum today - nova.virt.vmwareapi.vif certainly
>>> doesn't appear to have any such code. I'm not overly concerned though.
>>>
>> I agree that if you look at the simple linux bridge or OVS plugins, they
>> follow a very basic model where a vif_type and even bridge name would be
>> uniform for an all-KVM deployment.
>>
>> But, for example, the NVP plugin can control KVM, XenServer, and soon ESX
>> (waiting on a code change to add some more logic to ESX vif-plugging, which
>> is one of the reasons I'm mentioning it as a specific example).  With KVM
>> vs. ESX, the data returned is different in kind (i.e., one is a linux
>> bridge name, another is a port-group).  And with KVM and XenServer, even
>> though they are the same in kind (both bridge names), they are very likely
>> to be different in form, since XenServer generates bridge names using a
>> standard format (e.g., xapi0 or xenbr1).  Below you propose something
>> that, with a very minor tweak, would solve this concern, I believe.
> Ok, so it sounds like we're in agreement then that my plan for the libvirt
> VIF drivers can work, as long as we also do the enhancement we describe
> below about passing supported vif types.
>
> Can you point me to the code for the NVP plugin? Unless I'm missing
> something I don't see any plugin called 'nvp' in Quantum GIT?

Please look at 
https://github.com/openstack/quantum/tree/master/quantum/plugins/nicira/nicira_nvp_plugin

>
>>> Maybe we should just make Nova pass across its list of supported
>>> vif types during 'create port' regardless of whether we need it
>>> now and be done with it.
>>>
>> Yes, this is what I was thinking as well.  Somewhat of a "negotiation"
>> where Nova sends a certain amount of information over (e.g., its supported
>> vif_types, its node-id) and then Quantum can determine what vif_type to
>> respond with.  I'm thinking that node-id may be needed to handle the case
>> where Quantum needs to respond with different data even for the same
>> vif_type (e.g., two different esx clusters that have different
>> port-group-ids).
>>
>> This adds some more complexity to Quantum, as the centralized
>> quantum-server must know the mapping from a node-id to bridge-id +
>> vif_type, but some part of the Quantum plugin must know this information
>> already (e.g., an agent), so it would really just be a matter of shifting
>> bits around within Quantum, which seems reasonable given time to implement
>> this.
> Ok, so the only thing to worry about now is timing. Is it reasonable to
> get this enhancement done for Grizzly, so that I can safely deprecate
> the VIF plugins in libvirt, or do we need to wait for one more cycle,
> meaning I should remove the deprecation log messages temporarily?

I am in favour of removing the deprecation log messages.
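The negotiation discussed above could be sketched roughly as follows: at
'create port' time Nova sends its node id and the vif_types it supports, and
the quantum-server picks a vif_type plus node-specific plugging data (a
bridge name for KVM, a port-group id for ESX). This is only an illustrative
sketch; the field names ("host_id", "supported_vif_types", "vif_details")
and the example vif_type strings are hypothetical, not the actual Quantum
API.

```python
def choose_vif_binding(port_request, host_mappings):
    """Pick a vif_type and its plugging data for the requesting node.

    port_request:  dict Nova would send at 'create port' time, carrying the
                   hypervisor's node id and the vif_types it supports.
    host_mappings: per-node plugging data the Quantum plugin already knows
                   (e.g. learned from its agents), keyed by node id and
                   then by vif_type.
    """
    node = host_mappings[port_request["host_id"]]
    # Honour Nova's preference order: first supported vif_type wins.
    for vif_type in port_request["supported_vif_types"]:
        if vif_type in node:
            # The same vif_type can carry node-specific data, e.g. two
            # ESX clusters with different port-group ids.
            return {"vif_type": vif_type, "vif_details": node[vif_type]}
    raise ValueError("no vif_type supported by both Nova and Quantum")


# Illustrative per-node data: KVM nodes get a bridge name, ESX clusters
# a port-group id (names are made up for the sketch).
mappings = {
    "kvm-host-1": {"bridge": {"bridge_name": "br-int"}},
    "esx-cluster-a": {"dvs": {"port_group_id": "pg-123"}},
    "esx-cluster-b": {"dvs": {"port_group_id": "pg-456"}},
}

binding = choose_vif_binding(
    {"host_id": "esx-cluster-a", "supported_vif_types": ["dvs"]},
    mappings,
)
```

Note that the centralized lookup table is exactly the "shifting bits around
within Quantum" Dan describes: the node-to-vif_type mapping must move from
the per-node agents up to the quantum-server.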

>
> Regards,
> Daniel
