[openstack-dev] [Neutron] Race condition between DB layer and plugin back-end implementation

Aaron Rosen arosen at nicira.com
Mon Nov 18 20:55:32 UTC 2013


This actually doesn't solve the issue, because if you run multiple neutron
servers behind a load balancer you will, I believe, still run into the same
issue with the transaction on the database.

We handle this issue in the NVP plugin by removing the transaction and
attempting to manually delete the port if the REST call to NVP fails. In the
(unlikely) case where the port cannot be deleted from the database, the
operational status of the port eventually goes to the error state via a
background thread that syncs operational status from NVP to the Neutron
database. Later we garbage-collect the ports left in the error state.
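
A rough sketch of that compensating-cleanup pattern might look like the
following (MyPlugin, self._backend and the log message are placeholders for
illustration, not the actual NVP plugin code):

def create_port(self, context, port):
    # Commit the port to the Neutron DB first, without holding a transaction
    # open across the back-end call, so other API calls are not blocked.
    port_db = super(MyPlugin, self).create_port(context, port)
    try:
        # REST call to the back-end controller.
        self._backend.create_port(port_db)
    except Exception:
        try:
            # Compensate: remove the row we just created.
            super(MyPlugin, self).delete_port(context, port_db["id"])
        except Exception:
            # If even the DB delete fails (unlikely), leave the port for the
            # background status-sync thread to flag as ERROR and for a later
            # garbage-collection pass to clean up.
            LOG.exception("Failed to clean up port %s", port_db["id"])
        raise
    return port_db

The trade-off is that the DB and the back-end can briefly disagree, which is
why the background status sync and the garbage collection of error-state
ports are needed.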

On Mon, Nov 18, 2013 at 12:43 PM, Joshua Harlow <harlowja at yahoo-inc.com>wrote:

>  An idea: make the lock more granular.
>
>  Instead of @utils.synchronized('any-name') I wonder if you could do
> something like:
>
>  with utils.synchronized('any-name-$device-id'):
>      # Code here
>
>  Then at least you won't be locking at the method level (which means no
> concurrency). Would that work?
>
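
A process-local sketch of that per-device granularity (illustrative only, not
the lockutils API) could look like this; note the caveat above that a
per-process or per-host lock still does not cover multiple neutron servers
behind a load balancer:

import collections
import threading

# Illustrative registry of per-device locks: calls touching the same
# device_id are serialized, while unrelated ports proceed concurrently.
_DEVICE_LOCKS = collections.defaultdict(threading.Lock)

def create_port(self, context, port):
    device_id = port["port"].get("device_id") or "no-device"
    with _DEVICE_LOCKS[device_id]:
        # DB create + back-end call go here.
        ...
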
>   From: Edgar Magana <emagana at plumgrid.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Date: Monday, November 18, 2013 12:25 PM
> To: OpenStack List <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] [Neutron] Race condition between DB layer and
> plugin back-end implementation
>
>   Developers,
>
>  This topic has been discussed before, but I do not remember whether we
> have a good solution or not.
> Basically, if concurrent API calls are sent to Neutron, all of them are
> sent to the plug-in level, where two actions have to be taken:
>
>  1. DB transaction – not just for data persistence, but also to collect
> the information needed for the next action
> 2. Plug-in back-end implementation – in our case a call to the Python
> library, which in turn calls the PLUMgrid REST GW (soon SAL)
>
>  For instance:
>
>  def create_port(self, context, port):
>      with context.session.begin(subtransactions=True):
>          # Plugin DB - Port Create and Return port
>          port_db = super(NeutronPluginPLUMgridV2,
>                          self).create_port(context, port)
>          device_id = port_db["device_id"]
>          if port_db["device_owner"] == "network:router_gateway":
>              router_db = self._get_router(context, device_id)
>          else:
>              router_db = None
>          try:
>              LOG.debug(_("PLUMgrid Library: create_port() called"))
>              # Back-end implementation
>              self._plumlib.create_port(port_db, router_db)
>          except Exception:
>              ...
>  The way we have implemented this at the plug-in level in Havana (and even
> in Grizzly) is that both actions are wrapped in the same "transaction",
> which automatically rolls back any operation to its original state,
> protecting the DB from being left in an inconsistent state or with leftover
> data if the back-end part fails.
> The problem we are experiencing is that when concurrent calls to the same
> API are sent, the operations at the plug-in back-end take long enough that
> the next concurrent API call gets stuck at the DB transaction level, which
> creates a hung state for the Neutron server to the point that all
> concurrent API calls fail.
>
>  This can be fixed if we include some "locking" mechanism, such as:
>
>  from neutron.common import utils
>
>  @utils.synchronized('any-name', external=True)
>  def create_port(self, context, port):
>      ...
>
>  Obviously, this will serialize all concurrent calls, which will end up
> giving really bad performance. Does anyone have a better solution?
>
>  Thanks,
>
>  Edgar
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>