[openstack-dev] Revert Pass instance host-id to Quantum using port bindings extension.

Aaron Rosen arosen at nicira.com
Fri Jul 19 23:01:18 UTC 2013


On Fri, Jul 19, 2013 at 3:37 PM, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:

> > [arosen] - sure, in this case though then we'll have to add even more
> > queries between nova-compute and quantum as nova-compute will need to
> > query quantum for ports matching the device_id to see if the port was
> > already created and if not try to create them.
>
> The cleanup job doesn't look like a job for nova-compute regardless of the
> rest.
>
> > Moving the create may for other reasons be a good idea (because compute
> > would *always* deal with ports and *never* with networks - a simpler
> > API) - but it's nothing to do with solving this problem.
> >
> > [arosen] - It does solve this issue because it moves the quantum
> > port-create calls outside of the retry schedule logic on that compute
> > node. Therefore if the port fails to create the instance goes to error
> > state. Moving networks out of the nova-api will also solve this issue
> > for us as the client then won't rely on nova anymore to create the port.
> > I'm wondering if creating an additional network_api_class like
> > nova.network.quantumv2.api.NoComputeAPI is the way to prove this out.
> > Most of the code in there would inherit from
> > nova.network.quantumv2.api.API .
>
> OK, so if we were to say that:
>
> - nova-api creates the port with an expiry timestamp to catch orphaned
> autocreated ports
>

I don't think we want to put a timestamp there. We can figure out which
ports are orphaned by checking whether a port's device_id in quantum still
corresponds to an active instance_id in nova (which isn't guaranteed today,
but would be if the port-create is moved out of compute to the api) and
whether its device_owner is nova:compute.
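
Roughly, the check could look something like the sketch below. This is
illustrative only -- it drives the public clients (python-novaclient and
python-quantumclient), the credentials are placeholders, and the exact
device_owner string is an assumption; a real cleanup job would live inside
nova and go through its own db/api layers rather than the public clients.

    # Sketch of the orphaned-port check described above. Assumes
    # python-novaclient and python-quantumclient; placeholder credentials.
    from novaclient.v1_1 import client as nova_client
    from quantumclient.v2_0 import client as quantum_client

    USER, PASSWORD, TENANT = 'admin', 'secret', 'admin'
    AUTH_URL = 'http://127.0.0.1:5000/v2.0/'

    nova = nova_client.Client(USER, PASSWORD, TENANT, auth_url=AUTH_URL)
    quantum = quantum_client.Client(username=USER, password=PASSWORD,
                                    tenant_name=TENANT, auth_url=AUTH_URL)

    # Instance ids nova still knows about.
    active = set(s.id for s in
                 nova.servers.list(search_opts={'all_tenants': 1}))

    # 'nova:compute' is the device_owner value discussed above; the exact
    # string stamped on autocreated ports is an assumption here.
    for port in quantum.list_ports(device_owner='nova:compute')['ports']:
        if port['device_id'] and port['device_id'] not in active:
            # The owning instance no longer exists, so the port is orphaned.
            quantum.delete_port(port['id'])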


> - nova-compute always uses port-update (or, better still, have a
> distinct call that for now works like port-update but clearly
> represents an attach or detach and not a user-initiated update,
> improving the plugin division of labour, but that can be a separate
> proposal) and *never* creates a port; attaching to an
> apparently-attached port attached to the same instance should ensure
> that a previous attachment is destroyed, which should cover the
> multiple-schedule lost-reply case
>

agree
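
Concretely, I picture attach/detach as nothing more than a port-update that
carries the binding. A minimal sketch against python-quantumclient follows;
the helper names and field values (notably the binding:host_id handling and
the device_owner string) are illustrative, and the real code would sit
behind nova.network.quantumv2.api rather than call the client directly.

    # Sketch: attach/detach as port-update (never port-create) on nova-compute.
    from quantumclient.v2_0 import client as quantum_client

    USER, PASSWORD, TENANT = 'admin', 'secret', 'admin'
    AUTH_URL = 'http://127.0.0.1:5000/v2.0/'
    quantum = quantum_client.Client(username=USER, password=PASSWORD,
                                    tenant_name=TENANT, auth_url=AUTH_URL)

    def attach_port(port_id, instance_id, host):
        # Bind the pre-created port to this compute host via the portbindings
        # extension. Re-writing device_id on an apparently-attached port also
        # invalidates a stale attachment left by a lost reply or reschedule.
        body = {'port': {'device_id': instance_id,
                         'device_owner': 'nova:compute',  # exact value assumed
                         'binding:host_id': host}}
        return quantum.update_port(port_id, body)['port']

    def detach_port(port_id):
        # Clear the attachment; deleting an autocreated port is left to the
        # caller (or to the cleanup job).
        body = {'port': {'device_id': '', 'binding:host_id': None}}
        return quantum.update_port(port_id, body)['port']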

> - nova-compute is always talked to in terms of ports, and never in
> terms of networks (a big improvement imo)
>

agree!

> - nova-compute attempts to remove autocreated ports on detach
>
> - a cleanup job in nova-api (or nova-conductor?) cleans up expired
> autocreated ports with no attachment or a broken attachment (which
> would catch failed detachments as well as failed schedules)


> how does that work for people?  It seems to improve the internal
> interface and the transactionality, it means that there's not the
> slightly nasty (and even faintly race-prone) create-update logic in
> nova-compute, it even simplifies the nova-compute interface - though
> we would need to consider how an upgrade path would work, there; newer
> API with older compute should work fine, the reverse not so much.
>

I agree, ensuring backwards compatibility if the compute nodes are updated
but the api nodes are not would be slightly tricky. I'd hope we could get
away with a release note saying that the api nodes need to be updated first.


> --
> Ian.
>