[openstack-dev] [Nova][Quantum] Move quantum port creation to nova-api

Jun Cheol Park jun.park.earth at gmail.com
Thu May 16 22:05:12 UTC 2013


Aaron,

>@Mike - I think we'd still want to leave nova-compute to create the tap
interfaces and sticking  external-ids on them though.

Sorry, I don't get this. Why do we need to leave tap interface creation to
nova-compute? That behavior has been a serious problem and a design flaw in
dealing with ports, as Mike and I presented in Portland (Title: Using
OpenStack In A Traditional Hosting Environment).

All,

Please let me share the problems we ran into because of this design flaw
between nova-compute and quantum-agent.

1. Using external-ids as an implicit trigger for deploying OVS flows (we
used the OVS plugin on our hosts) causes inconsistencies between the quantum
DB and the actual OVS tap interfaces on the hosts. For example, even when the
necessary OVS flows have not been set up, or have failed for whatever reason
(messaging system unstable, quantum-server down, etc.), nova-compute
unknowingly declares a VM "active" as long as it successfully creates the
OVS tap and sets the external-ids. But the VM has no actual network
connectivity until somebody (here, quantum-agent) deploys the desired OVS
flows. At that point it is very hard to track down what went wrong, because
nova list shows the VM as "active." This kind of inconsistency happens a lot
because a quantum API call (provided by quantum-server, e.g. create_port())
only updates the quantum DB; it does not deal with the actual network
objects (e.g., OVS taps on hosts). In this design, there is no way to verify
the actual state of the targeted network objects.
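To make the gap concrete, here is a minimal, hypothetical model of the
current flow (the names and data structures are illustrative, not the real
nova/quantum code): nova-compute creates the tap and sets external-ids, then
declares the VM active, while flow deployment happens asynchronously in the
agent with no feedback path back to nova-compute.

```python
# Hypothetical model of the implicit-trigger design (illustrative only).
# Note that nova-compute's success criteria never include flow deployment.

class Host:
    def __init__(self):
        self.taps = {}        # tap name -> external-ids
        self.flows = set()    # taps that actually have OVS flows deployed

def nova_compute_plug_vif(host, vm_id):
    tap = "tap-%s" % vm_id
    host.taps[tap] = {"iface-id": vm_id}   # ovs-vsctl add-port + set external-ids
    return "active"                        # declared active here, flows or not

def quantum_agent_poll(host):
    # Runs asynchronously; may be delayed or fail (AMQP down, server down...)
    for tap in host.taps:
        host.flows.add(tap)                # deploy the OVS flows

host = Host()
state = nova_compute_plug_vif(host, "vm-1")
# The VM is "active" but has no connectivity yet:
assert state == "active" and "tap-vm-1" not in host.flows
quantum_agent_poll(host)                   # only now does networking work
assert "tap-vm-1" in host.flows
```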

  Q. What if a quantum API really dealt with network objects (e.g., OVS
taps), instead of only updating the quantum DB?
  A. Nova-compute could then call a truly abstracted quantum API to create a
real port (i.e., an OVS tap interface) on the target host, and wait for the
response to see whether the tap was actually created there. This way,
nova-compute can confirm what is going on before proceeding with the rest of
the VM creation tasks. When there are port-related tasks to take care of,
such as QoS (as Henry mentioned) or quotas (which started this thread),
nova-compute can then decide on the next step (and at least it would not
blindly declare the VM active).
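Continuing the hypothetical model above, the proposed design might be
sketched like this (again, the names are illustrative assumptions, not an
existing API): create_port() creates the real tap and deploys flows within
the same call, so nova-compute can verify the result before declaring the VM
active.

```python
# Hypothetical sketch of the proposed synchronous design (illustrative only):
# the quantum API creates the actual port, not just a DB row, and the caller
# checks the outcome before proceeding.

class Host:
    def __init__(self):
        self.taps = {}        # tap name -> external-ids
        self.flows = set()    # taps that actually have OVS flows deployed

class PortCreationError(Exception):
    pass

def quantum_create_port(host, vm_id):
    """Create the actual OVS tap (not just a DB row) and report the result."""
    tap = "tap-%s" % vm_id
    host.taps[tap] = {"iface-id": vm_id}
    host.flows.add(tap)                  # flows deployed within the same call
    return {"tap": tap, "status": "ACTIVE"}

def nova_compute_spawn(host, vm_id):
    port = quantum_create_port(host, vm_id)
    if port["status"] != "ACTIVE":       # nova-compute can now react to failure
        raise PortCreationError(port)
    return "active"                      # active only with real networking in place
```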

2. Another example of the side effects of taps being created by nova-compute:
when a host is rebooted, we expect all the VMs to be restarted automatically.
However, that is not possible. Here is why. When nova-compute restarts, it
expects libvirtd to be running; otherwise, nova-compute immediately stops. So
we have to start libvirtd before nova-compute. But when libvirtd starts, it
expects all the OVS taps to exist so that it can successfully start the VMs
that use them. Since nova-compute, which would create the OVS taps, has not
started yet, libvirtd fails to restart the VMs because the taps are not
found. I ended up adding "restart libvirtd" to rc.local so that libvirtd
retries starting the VMs after nova-compute has recreated the OVS taps.
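For reference, the workaround boils down to something like this in
/etc/rc.local (service names and init mechanics vary by distribution; this
is the shape of the hack, not a recommendation):

```shell
# /etc/rc.local -- workaround only. At boot, libvirtd must start before
# nova-compute, but at that point the OVS taps do not exist yet, so VM
# autostart fails. Restart libvirtd once nova-compute has recreated the taps.
service libvirtd restart
```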

 Q. Again, what if quantum-agent itself were able to deal with the actual
ports, without relying on nova-compute at all?
 A. We could start quantum-agent, which would create all the necessary OVS
taps on its own. Then restarting libvirtd would start all the VMs with the
already-created OVS taps. This is a good example of how to make quantum
truly independent of nova-compute, without any dependency on external-ids.

3. Beyond all the problems above, it is undesirable for nova-compute to
carry all the OVS-specific code (e.g., wrappers around OVS commands such as
ovs-vsctl) when quantum-agent already has the same OVS-specific code for
dealing with OVS taps.

In summary, all of these problems occur because the quantum API only manages
the quantum DB, leaving the functionality for dealing with actual network
objects dispersed across nova-compute (e.g., OVS tap creation) and
quantum-agent (e.g., OVS flow deployment).

> nova-compute should call port-update to set binding:host_id

This could also be a very good use case. If a quantum API really created an
actual port on a host, as I have been suggesting here, nova-compute would
simply get the return values for the newly created port from that API call.
The return values would include all the detailed information, including
host_id, vif_type, etc. Nova-compute could use them to update ports, or
perhaps the create_port() API itself would already update the necessary info
and simply return the current values, such as the binding:host_id mapping.
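A rough sketch of what such a return value might look like (the field names
follow the port-binding extension; the host-to-vif_type mapping and the
function itself are invented for illustration):

```python
# Hypothetical sketch: create_port() returns the binding details that
# nova-compute needs, so a separate port-update round-trip to set
# binding:host_id would not be required.

def quantum_create_port_with_binding(host_id, vm_id):
    # The server selects the VIF type for this host (e.g., based on which
    # agent runs there); this table is a made-up example.
    vif_type = {"compute-1": "ovs", "compute-2": "bridge"}.get(host_id, "ovs")
    return {
        "id": "port-%s" % vm_id,
        "binding:host_id": host_id,    # recorded at creation time
        "binding:vif_type": vif_type,  # nova uses this to pick the VIF driver
        "status": "ACTIVE",
    }

port = quantum_create_port_with_binding("compute-1", "vm-1")
assert port["binding:vif_type"] == "ovs"
```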

I'm not sure how effectively I have explained what I meant about a desirable
design between nova-compute and quantum (both quantum-server and
quantum-agent). Based on the comments I get from this thread, I may start
writing a blueprint proposal.

Please let me know if I missed or misunderstood anything.

Thanks,

-Jun

On Thu, May 16, 2013 at 1:47 PM, Robert Kukura <rkukura at redhat.com> wrote:

> On 05/16/2013 02:40 PM, Mike Wilson wrote:
> >
> >
> >
> > On Thu, May 16, 2013 at 12:28 PM, Robert Kukura <rkukura at redhat.com
> > <mailto:rkukura at redhat.com>> wrote:
> >
> >     >
> >     > @Mike - I think we'd still want to leave nova-compute to create
> >     the tap
> >     > interfaces and sticking  external-ids on them though.
> >
> >     It also seems nova-compute should call port-update to set
> >     binding:host_id and then use the returned binding:vif_type, since the
> >     vif_type might vary depending on the host, at least with ml2. The
> Arista
> >     top-of-rack switch hardware driver functionality also depends on the
> >     binding:host_id being set.
> >
> >     -Bob
> >
> >
> > Hmmm, is that really nova-compute's job? Again, that seems to be the
> > networking abstraction's job to me. We have all these quantum agents,
> > they have the device_id (instance_uuid). Why not have a quantum
> > component (agent maybe?) query nova for the host_id and then it calls
> > port-update?
>
> I believe this is the final step of an attempt to cleanup the
> abstraction between nova and quantum. The idea is to have quantum decide
> on the VIF driver, rather than having this knowledge built into the nova
> configuration.
>
> In some cases, quantum will need to know what host the port is being
> bound on so it can determine which VIF driver to use (possibly based on
> what agent is running on that host). Also, a quantum L2 agent (if there
> is one) cannot notice that the port is bound until after the
> VIF driver has been selected and done its thing.
>
> The nova code for this has been in review for a while, but may have
> expired. Gerrit is offline at the moment, so I can't search for it.
>
> -Bob
>
> >
> > -Mike
> >
> >
> >
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>