[openstack-dev] [Quantum] Re: Some thoughts about scalable agent and notification
Dan Wendlandt
dan at nicira.com
Wed Jul 25 05:15:44 UTC 2012
Hi Irena,
On Sun, Jul 22, 2012 at 5:25 AM, Irena Berezovsky <irenab at mellanox.com> wrote:
> Hi Yong, Gary, Dan,
>
> I am not sure that my concerns are directly related to the thread subject,
> but they are related to the thoughts raised by you.
>
> 1. Providing compute_host data as part of 'create_port' can help the
> Plugin if it requires knowledge of the VM's physical location. On the
> other hand, the Plugin can 'discover' the VM's compute_host data by
> querying the OS API.
>
> 2. What is the live migration flow currently implemented between Nova
> and Quantum? As far as I understand, Nova does not call Quantum at all.
> The only work is done by Agents: the Agent at the destination host will
> connect the VM tap device and the Agent at the source host will
> disconnect it. Adding a call to Quantum to update the compute_host data
> can be very helpful. Otherwise, Agent notification can be used to get
> the new host info.
>
Yes, you are correct. Currently there is no webservice API for indicating
the hypervisor that will be hosting the workload; instead, the plugin itself
is responsible for learning about these locations (note: this sometimes
takes the form of an agent, but could just be an OpenFlow controller with
no local agent).
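To make the update-on-migration idea concrete, here is a rough sketch
(the 'compute_host' port attribute and the surrounding details are
assumptions for discussion, not existing Quantum API):

    # Hypothetical sketch: nova tells Quantum where a port now lives
    # after live migration. 'compute_host' is an assumed port attribute.
    from quantumclient.v2_0 import client

    def notify_port_migrated(port_id, dest_host):
        quantum = client.Client(username='nova', password='secret',
                                tenant_name='service',
                                auth_url='http://127.0.0.1:5000/v2.0/')
        # The plugin would react by provisioning the network on the
        # destination hypervisor and cleaning up the source.
        quantum.update_port(port_id, {'port': {'compute_host': dest_host}})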
>
> 3. If a VM vNIC should be provisioned using SR-IOV technology, the
> exact Virtual Function data should be put into the domain XML, so this
> can only be done by the VIF Driver and not the Agent. Since some sort of
> VF resource management should be implemented per host, maybe this means
> that the VIF Driver should call the Agent for VF allocation. But I think
> it was decided to avoid that sort of communication.
>
I believe the Cisco vif-plugging logic makes (or at least used to make) a
webservice call to an API extension to grab some info required for the
libvirt XML. I think that's OK. My main goal is that the vif-plugging
logic be simple, so that we don't end up shoving a bunch of network
complexity into nova. In the example you're mentioning, is the libvirt
interface xml type=direct? If so, perhaps we want to create a standard
interface driver for type=direct that makes a generic webservice call to
fetch the required parameters.
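Roughly something like this (a sketch only; the binding URL and field
names below are made up for illustration):

    # Hypothetical generic interface driver logic for libvirt
    # type=direct. The '/ports/<id>/binding' resource and its fields
    # are assumptions, not an existing Quantum API.
    import json
    import urllib2

    def build_direct_interface_xml(quantum_url, port_id, mac_address):
        # Generic webservice call to fetch the host-side parameters
        # (physical device and macvtap mode) for this port.
        resp = urllib2.urlopen('%s/ports/%s/binding' % (quantum_url,
                                                        port_id))
        params = json.loads(resp.read())
        return ("<interface type='direct'>\n"
                "  <mac address='%s'/>\n"
                "  <source dev='%s' mode='%s'/>\n"
                "  <model type='virtio'/>\n"
                "</interface>" % (mac_address, params['source_dev'],
                                  params['source_mode']))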
Dan
>
> Thanks a lot,
> Irena
>
> From: Dan Wendlandt [mailto:dan at nicira.com]
> Sent: Friday, July 20, 2012 12:23 PM
> To: gkotton at redhat.com
> Cc: OpenStack Development Mailing List
> Subject: [openstack-dev] [Quantum] Re: Some thoughts about scalable
> agent and notification
>
> Hi Yong, Gary,
>
> Please put [Quantum] in the subject line for such emails, so it is easier
> for team members to filter. I've edited the subject in this case. More
> thoughts inline.
>
> On Fri, Jul 20, 2012 at 12:30 AM, Gary Kotton <gkotton at redhat.com> wrote:
>
> Hi,
> Please see my comments below.
> Thanks
> Gary
>
> On 07/20/2012 03:07 AM, Yong Sheng Gong wrote:
>
>
> Hi,
> I think the workflow can be this:
> 1. nova quantum api: allocate_for_instance(compute_host, device_id)
> 2. quantum-server: create_port(compute_host, device_id); we need to
> extend the port model to include compute_host (see the sketch below).
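>
> A rough sketch of step 2 (hypothetical: today's port model has no
> compute_host column, and the names below are illustrative):
>
>     # Hypothetical sketch: extend the port model with the host that
>     # runs the VM, so the plugin knows where to provision.
>     import sqlalchemy as sa
>     from quantum.db import model_base
>
>     class Port(model_base.BASEV2):
>         id = sa.Column(sa.String(36), primary_key=True)
>         network_id = sa.Column(sa.String(36), nullable=False)
>         device_id = sa.Column(sa.String(255))
>         compute_host = sa.Column(sa.String(255))  # proposed addition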
>
> I think that this is problematic in a number of respects:
> 1. I am not sure how this will work for live migration (that is, moving
> a VM that is running on host1 to host2).
>
> Nova could probably update the port to point to a new host on live
> migration.
>
> 2. I am not sure it would be wise to move this information to
> Quantum. What you have described may work for Quantum in OpenStack, but
> Quantum in oVirt may behave differently. The API should be as generic as
> possible.
>
> I agree that we want to be careful about this. Quantum should be designed
> such that it can work well with Nova, but I don't think the core API should
> be nova-specific. The core Quantum API will also be used to connect other
> OpenStack services that need to plug into networks (e.g., load-balancer as
> a service...) as well as other compute stacks altogether (e.g., oVirt, as
> mentioned by garyk).
>
> 4. plugin agent on compute_node: create tap device and attach it to
> bridge, set vlan and so on, return (a sketch of this step follows).
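>
> For illustration only (an OVS-style sketch; device and bridge names
> are made up):
>
>     # Hypothetical sketch of step 4: the plugin agent creates the tap
>     # device and plugs it into the bridge with a VLAN tag.
>     import subprocess
>
>     def plug_vif(tap_name, bridge, vlan_id):
>         subprocess.check_call(['ip', 'tuntap', 'add', 'dev', tap_name,
>                                'mode', 'tap'])
>         subprocess.check_call(['ip', 'link', 'set', tap_name, 'up'])
>         subprocess.check_call(['ovs-vsctl', 'add-port', bridge,
>                                tap_name, '--', 'set', 'port', tap_name,
>                                'tag=%d' % vlan_id])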
>
> I'm worried that this is not sufficiently generic. In several cases, it
> is the compute platform that needs to create the device that represents the
> vNIC. My guess is that the model you describe would primarily work
> for libvirt type=ethernet, and that model has several limitations.
> Other approaches that are better integrated with libvirt have libvirt
> create and attach the device based on libvirt XML (check out libvirt
> <interface> elements that have type=bridge or type=direct). There are
> also vif-drivers for other platforms like XenServer that definitely
> don't go create tap devices.
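>
> For reference, the two libvirt <interface> flavors mentioned above look
> roughly like this (bridge/device names and the MAC are illustrative):
>
>     <!-- libvirt creates the tap device and attaches it to a bridge -->
>     <interface type='bridge'>
>       <mac address='fa:16:3e:00:00:01'/>
>       <source bridge='br100'/>
>     </interface>
>
>     <!-- macvtap attachment directly to a physical device -->
>     <interface type='direct'>
>       <mac address='fa:16:3e:00:00:01'/>
>       <source dev='eth1' mode='vepa'/>
>     </interface>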
>
>
> 5. quantum-server returns the network information to nova, and then
> nova creates the VM.
>
> This workflow differs in that:
> 1. the tap device is not created by nova but by the plugin agent. This
> will help us use one unified vif-driver on the nova side for quantum.
>
> For the same reasons I mentioned above, I believe the complexity of
> several vif-drivers in nova, while undesirable, is actually difficult to
> avoid.
>
> As for the notification feature, I hope to keep it for metering purposes.
>
> I do not think that we should mix the features. Metering is a feature
> that I think is used for billing. They may use the same infrastructure,
> but I think we may need different approaches for each.
>
> There are many things that will need to trigger off basic Quantum events
> (port creation/update/delete, floating ip allocation, etc.). Even though
> there will ultimately be different types of consumers (plugin agents,
> services like DHCP/DNS, metering, troubleshooting/logging, etc.), I'm hoping
> we can build a solid base notification mechanism that can be leveraged by
> many/all of them (see the sketch below). If there are conflicting goals for
> different consumers, perhaps we cannot, but I think we should first discuss
> what those conflicting goals are, as they will inform our technical design.
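>
> To illustrate, a base notification might look something like this
> (topic name and payload fields are made up for discussion, not a
> concrete proposal):
>
>     # Hypothetical sketch: publish one generic event that plugin
>     # agents, DHCP/DNS services, metering, etc. can all consume.
>     def notify_port_event(notifier, event, port):
>         payload = {
>             'event_type': 'port.%s' % event,  # create/update/delete
>             'port_id': port['id'],
>             'network_id': port['network_id'],
>             'tenant_id': port['tenant_id'],
>         }
>         notifier.cast(topic='quantum.notifications', msg=payload)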
>
> Thanks,
>
> Dan
>
> Thanks
> Yong Sheng Gong
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~