[openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

Robert Kukura rkukura at redhat.com
Wed Feb 19 15:40:07 UTC 2014


On 02/10/2014 05:46 AM, Mathieu Rohon wrote:
> Hi,
> 
> one other comment inline :

Hi Mathieu, see below:

> 
> On Wed, Feb 5, 2014 at 5:01 PM, Robert Kukura <rkukura at redhat.com> wrote:
>> On 02/05/2014 09:10 AM, Henry Gessau wrote:
>>> Bob, this is fantastic, I really appreciate all the detail. A couple of
>>> questions ...
>>>
>>> On Wed, Feb 05, at 2:16 am, Robert Kukura <rkukura at redhat.com> wrote:
>>>
>>>> A couple of interrelated issues with the ML2 plugin's port binding have
>>>> been discussed over the past several months in the weekly ML2 meetings.
>>>> These affect drivers being implemented for icehouse, and therefore need
>>>> to be addressed in icehouse:
>>>>
>>>> * MechanismDrivers need detailed information about all binding changes,
>>>> including unbinding on port deletion
>>>> (https://bugs.launchpad.net/neutron/+bug/1276395)
>>>> * MechanismDrivers' bind_port() methods are currently called inside
>>>> transactions, but in some cases need to make remote calls to controllers
>>>> or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
>>>> * Semantics of concurrent port binding need to be defined if binding is
>>>> moved outside the triggering transaction.
>>>>
>>>> I've taken the action of writing up a unified proposal for resolving
>>>> these issues, which follows...
>>>>
>>>> 1) An original_bound_segment property will be added to PortContext. When
>>>> the MechanismDriver update_port_precommit() and update_port_postcommit()
>>>> methods are called and a binding previously existed (whether its being
>>>> torn down or not), this property will provide access to the network
>>>> segment used by the old binding. In these same cases, the portbinding
>>>> extension attributes (such as binding:vif_type) for the old binding will
>>>> be available via the PortContext.original property. It may be helpful to
>>>> also add bound_driver and original_bound_driver properties to
>>>> PortContext that behave similarly to bound_segment and
>>>> original_bound_segment.
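>>>>
>>>> As a rough illustration (a minimal sketch; everything here except
>>>> the PortContext properties named above is hypothetical), a driver
>>>> could react to a binding change like this:
>>>>
>>>>     def update_port_postcommit(self, context):
>>>>         # Both properties may be None; compare old and new bindings.
>>>>         old = context.original_bound_segment
>>>>         new = context.bound_segment
>>>>         if old and old != new:
>>>>             # A previous binding is being torn down or replaced;
>>>>             # release backend state tied to the old segment.
>>>>             self._release_segment(old)   # hypothetical helper
>>>>         if new and new != old:
>>>>             self._program_segment(new)   # hypothetical helper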
>>>>
>>>> 2) The MechanismDriver.bind_port() method will no longer be called from
>>>> within a transaction. This will allow drivers to make remote calls on
>>>> controllers or devices from within this method without holding a DB
>>>> transaction open during those calls. Drivers can manage their own
>>>> transactions within bind_port() if needed, but need to be aware that
>>>> these are independent from the transaction that triggered binding, and
>>>> concurrent changes to the port could be occurring.
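>>>>
>>>> A minimal sketch of what this enables (the controller client is
>>>> hypothetical, and the set_binding()/segment usage is approximate):
>>>>
>>>>     def bind_port(self, context):
>>>>         # No DB transaction is held here, so a slow REST call to
>>>>         # a controller cannot hold DB locks or block other work.
>>>>         for segment in context.network.network_segments:
>>>>             if self.controller.can_bind(segment):   # remote call
>>>>                 context.set_binding(segment['id'],
>>>>                                     'ovs',                   # vif_type
>>>>                                     {'port_filter': True})   # vif_details
>>>>                 return
>>>>         # Returning without calling set_binding() means this driver
>>>>         # declined; other drivers get a chance, else binding fails.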
>>>>
>>>> 3) Binding will only occur after the transaction that triggers it has
>>>> been completely processed and committed. That initial transaction will
>>>> unbind the port if necessary. Four cases for the initial transaction are
>>>> possible:
>>>>
>>>> 3a) In a port create operation, whether binding:host_id is supplied
>>>> or not, all drivers' create_port_precommit() methods will be called, the
>>>> initial transaction will be committed, and all drivers'
>>>> create_port_postcommit() methods will be called. The drivers will see
>>>> this as creation of a new unbound port, with PortContext properties as
>>>> shown. If a value for binding:host_id was supplied, binding will occur
>>>> afterwards as described in 4 below.
>>>>
>>>> PortContext.original: None
>>>> PortContext.original_bound_segment: None
>>>> PortContext.original_bound_driver: None
>>>> PortContext.current['binding:host_id']: supplied value or None
>>>> PortContext.current['binding:vif_type']: 'unbound'
>>>> PortContext.bound_segment: None
>>>> PortContext.bound_driver: None
>>>>
>>>> 3b) Similarly, in a port update operation on a previously unbound port,
>>>> all drivers' update_port_precommit() and update_port_postcommit()
>>>> methods will be called, with PortContext properties as shown. If a value
>>>> for binding:host_id was supplied, binding will occur afterwards as
>>>> described in 4 below.
>>>>
>>>> PortContext.original['binding:host_id']: previous value or None
>>>> PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
>>>> PortContext.original_bound_segment: None
>>>> PortContext.original_bound_driver: None
>>>> PortContext.current['binding:host_id']: current value or None
>>>> PortContext.current['binding:vif_type']: 'unbound'
>>>> PortContext.bound_segment: None
>>>> PortContext.bound_driver: None
>>>>
>>>> 3c) In a port update operation on a previously bound port that does not
>>>> trigger unbinding or rebinding, all drivers' update_port_precommit() and
>>>> update_port_postcommit() methods will be called with PortContext
>>>> properties reflecting unchanged binding states as shown.
>>>>
>>>> PortContext.original['binding:host_id']: previous value
>>>> PortContext.original['binding:vif_type']: previous value
>>>> PortContext.original_bound_segment: previous value
>>>> PortContext.original_bound_driver: previous value
>>>> PortContext.current['binding:host_id']: previous value
>>>> PortContext.current['binding:vif_type']: previous value
>>>> PortContext.bound_segment: previous value
>>>> PortContext.bound_driver: previous value
>>>>
>>>> 3d) In a port update operation on a previously bound port that does
>>>> trigger unbinding or rebinding, all drivers' update_port_precommit() and
>>>> update_port_postcommit() methods will be called with PortContext
>>>> properties reflecting the previously bound and currently unbound binding
>>>> states as shown. If a value for binding:host_id was supplied, binding
>>>> will occur afterwards as described in 4 below.
>>>>
>>>> PortContext.original['binding:host_id']: previous value
>>>> PortContext.original['binding:vif_type']: previous value
>>>> PortContext.original_bound_segment: previous value
>>>> PortContext.original_bound_driver: previous value
>>>> PortContext.current['binding:host_id']: new or current value
>>>> PortContext.current['binding:vif_type']: 'unbound'
>>>> PortContext.bound_segment: None
>>>> PortContext.bound_driver: None
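>>>>
>>>> Taken together, cases 3b-3d (and 4a below) mean a driver's
>>>> update_port_postcommit() can distinguish every transition from just
>>>> these two properties (a sketch; the comments map to the cases):
>>>>
>>>>     def update_port_postcommit(self, context):
>>>>         old = context.original_bound_segment
>>>>         new = context.bound_segment
>>>>         if old is None and new is None:
>>>>             pass   # 3b: port is (still) unbound
>>>>         elif old is not None and new is None:
>>>>             pass   # 3d: previous binding is being torn down
>>>>         elif old is None and new is not None:
>>>>             pass   # 4a: a new binding was just committed
>>>>         else:
>>>>             pass   # 3c: binding unchanged by this update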
>>>>
>>>> 4) If a port create or update operation triggers binding or rebinding,
>>>> it is attempted after the initial transaction is processed and committed
>>>> as described in 3 above. The binding process itself is just as before,
>>>> except it happens after and outside the transaction. Since binding now
>>>> occurs outside the transaction, it's possible that multiple threads or
>>>> processes could concurrently attempt to bind the same port, although
>>>> this should be a rare occurrence. Rather than trying to prevent this
>>>> with some sort of distributed lock or complicated state machine,
>>>> concurrent attempts to bind are allowed to proceed in parallel. When a
>>>> thread completes its attempt to bind (either successfully or
>>>> unsuccessfully) it then performs a second transaction to update the DB
>>>> with the result of its binding attempt. When doing so, it checks to see
>>>> if some other thread has already committed relevant changes to the port
>>>> between the two transactions. There are three possible cases:
>>>>
>>>> 4a) If the thread's binding attempt succeeded, and no other thread has
>>>> committed either a new binding or changes that invalidate this thread's
>>>> new binding between the two transactions, the thread commits its own
>>>> binding results, calling all drivers' update_port_precommit() and
>>>> update_port_postcommit() methods with PortContext properties reflecting
>>>> the new binding as shown. It then returns the updated port dictionary to
>>>> the caller.
>>>>
>>>> PortContext.original['binding:host_id']: previous value
>>>> PortContext.original['binding:vif_type']: 'unbound'
>>>> PortContext.original_bound_segment: None
>>>> PortContext.original_bound_driver: None
>>>> PortContext.current['binding:host_id']: previous value
>>>
>>> Are you not expecting/allowing the host_id to change in this scenario? Why?
>>
>> Correct. This thread's initial transaction has already committed update
>> of binding:host_id if that is what's triggering this thread to bind. If
>> another thread committed a change to binding:host_id while this thread
>> was binding, that is covered in 4c.
>>
>>>
>>>> PortContext.current['binding:vif_type']: new value
>>>> PortContext.bound_segment: new value
>>>> PortContext.bound_driver: new value
>>>>
>>>> 4b) If the thread's binding attempt either succeeded or failed, but some
>>>> other thread has committed a new successful binding between the two
>>>> transactions, the thread returns a port dictionary with attributes based
>>>> on the DB state from the new transaction, including the other thread's
>>>> binding and any other port state changes. No further calls to mechanism
>>>> drivers are needed here since they are the responsibility of the other
>>>> thread that bound the port.
>>>>
>>>> 4c) If some other thread committed changes to the port's
>>>> binding-relevant state but has not committed a successful binding, then
>>>> this thread attempts to bind again using that updated state, repeating 4.
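>>>>
>>>> In pseudocode, the intended control flow for 4a-4c looks roughly
>>>> like this (a sketch only; all helper names are hypothetical, not
>>>> actual ML2 code):
>>>>
>>>>     def bind_outside_transaction(port_id):
>>>>         state = read_binding_inputs(port_id)   # host_id, profile, ...
>>>>         while True:
>>>>             # Bind outside any transaction; this may make remote
>>>>             # calls to controllers or devices.
>>>>             result = try_bind(state)
>>>>             with db_transaction() as session:   # second transaction
>>>>                 db = read_port(session, port_id)
>>>>                 if db.successfully_bound:
>>>>                     # 4b: another thread committed a binding first;
>>>>                     # return its result, with no driver calls here.
>>>>                     return port_dict(db)
>>>>                 if db.binding_inputs == state:
>>>>                     # 4a: nothing relevant changed; commit our result
>>>>                     # (or 'binding_failed') and call the drivers'
>>>>                     # update_port_precommit() methods; postcommit
>>>>                     # calls follow after the commit.
>>>>                     commit_binding(session, db, result)
>>>>                     return port_dict(db, result)
>>>>                 # 4c: binding inputs changed concurrently; loop and
>>>>                 # rebind using the updated state.
>>>>                 state = db.binding_inputs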
> 
> When Nova creates a port, the port returned will be unbound. Nova
> gets port info (vif_type, port dict) by calling port-list with the
> neutron client, but the port could potentially not be bound yet.
> Don't you think that binding the port to a vif_type outside of
> create_port could create such race conditions?

Looking at the DB at any point in time (i.e. after any commit), the port
will be in one of these four states:

1 - binding:host_id = '' and binding:vif_type = 'unbound' - In this
state, it's not yet possible to bind the port.

2 - valid binding:host_id value and binding:vif_type = 'unbound' - In
this state, it's likely that whatever thread has committed the
transaction setting binding:host_id (or changing something else
affecting bindings, such as binding:profile or binding:vnic_type) is
currently trying to bind the port and will commit the results soon. But
it's also possible that the thread died and will never update the port
with the results.

3 - valid binding:host_id value and binding:vif_type = 'binding_failed'
- In this state, the most recent thread to try to bind the port was not
able to bind it. Most likely the L2 agent was down on the node at that
time, or did not have a mapping for the physical_network of any of the
network's segments, or maybe the controller had lost touch with the
node. It may be possible to succeed in binding now or in the future.

4 - valid binding:host_id value and binding:vif_type = something other
than 'unbound' or 'binding_failed' - We've got a valid binding, so no
need to do anything.
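
In code form, that classification is just (a self-contained sketch):

    def binding_state(host_id, vif_type):
        """Classify a port's binding state as read from the DB."""
        if not host_id:
            return 1   # can't bind yet: no host
        if vif_type == 'unbound':
            return 2   # a binding attempt is probably in flight
        if vif_type == 'binding_failed':
            return 3   # last attempt failed; a retry may succeed
        return 4       # valid binding exists; nothing to do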

In states #1 or #4, a get port operation just returns the dictionary
based on the current state. But in states #2 or #3, the thread handling
the get port has no way to know if some other thread is attempting to
bind the port, or if so, to wait for the result to be available. The
strategy being proposed is for each thread that sees a port in state #2
or #3 to simply try to bind the port itself (outside a transaction) and
then
commit the result. This can result in multiple threads concurrently
trying to bind the port, but that should be very rare, and I think the
logic proposed for checking the state of the DB when committing the
binding results should handle all possible situations. The loop is to
handle the case where some binding input (binding:host_id,
binding:profile, or binding:vnic_type, or the network segments) has
changed asynchronously between the two transactions. Am I missing a race
condition?

-Bob

> 
>>>>
>>>> 5) Port deletion no longer does anything special to unbind the port. All
>>>> drivers' delete_port_precommit() and delete_port_postcommit() methods
>>>> are called with PortContext properties reflecting the binding state
>>>> before deletion as shown.
>>>>
>>>> PortContext.original: None
>>>> PortContext.original_bound_segment: None
>>>> PortContext.original_bound_driver: None
>>>> PortContext.current['binding:host_id']: previous value or None
>>>> PortContext.current['binding:vif_type']: previous value
>>>> PortContext.bound_segment: previous value
>>>> PortContext.bound_driver: previous value
>>>
>>> Could this part of the port deletion also be done by port update?
>>
>> I had considered that, but that would then involve two separate
>> transactions - first the port update to unbind, and then the port delete.
>> Some other thread might rebind the port between the two, so a retry loop
>> would be needed. This loop is not needed if the same transaction unbinds
>> and deletes the port.
>>
>> -Bob
>>
>>>
>>>>
>>>> 6) In order to ensure successful bindings are created and returned
>>>> whenever possible, the get port and get ports operations also attempt to
>>>> bind the port as in 4 above when binding:host_id is available but there
>>>> is no existing successful binding in the DB.
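>>>>
>>>> Sketch of the read path (the real get_port() signature also takes
>>>> a fields argument, elided here; _attempt_binding is hypothetical):
>>>>
>>>>     def get_port(self, context, id):
>>>>         port = super(Ml2Plugin, self).get_port(context, id)
>>>>         if (port['binding:host_id'] and
>>>>                 port['binding:vif_type'] in ('unbound',
>>>>                                              'binding_failed')):
>>>>             # No successful binding yet: attempt to bind on the
>>>>             # read path, outside the transaction, as in 4 above.
>>>>             port = self._attempt_binding(context, port)
>>>>         return port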
>>>>
>>>> 7) We can either eliminate MechanismDriver.unbind_port(), or call it on
>>>> the previously bound driver within the transaction in 3d and 5 above. If
>>>> we do keep it, the old binding state must be consistently reflected in
>>>> the PortContext as either current or original state, TBD. Since all
>>>> drivers see unbinding as a port update where bound_segment is None
>>>> and original_bound_segment is not None, calling unbind_port() seems
>>>> redundant.
>>>>
>>>> 8) If bindings shouldn't spontaneously become invalid, maybe we can
>>>> eliminate MechanismDriver.validate_bound_port().
>>>>
>>>>
>>>> I've provided a lot of details, and the above may seem complicated. But
>>>> I think it's actually much more consistent and predictable than the
>>>> current port binding code, and implementation should be straightforward.
>>>>
>>>> -Bob