[openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

Salvatore Orlando sorlando at nicira.com
Fri May 8 08:20:20 UTC 2015


Just like the Neutron plugin manager, the ML2 driver manager ensures that
drivers are loaded only once, regardless of the number of workers.
What Kevin did proves (I reckon) that drivers are correctly loaded before
forking.
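A quick way to confirm which side of the fork a given piece of driver code
runs on is to record the PID in initialize() and compare it in the API
hooks. A minimal sketch (the method names follow the ML2 mechanism driver
convention, but this driver itself is hypothetical):

```python
import os

class PidLoggingMechanismDriver(object):
    """Hypothetical skeleton showing which side of the fork code runs on."""

    def initialize(self):
        # ML2 calls initialize() once, in the parent process, before the
        # api_workers processes are forked.
        self._init_pid = os.getpid()
        print("initialize() ran in PID %d" % self._init_pid)

    def create_port_postcommit(self, context):
        # API requests are handled in the forked workers, so here
        # os.getpid() can differ from the PID recorded above.
        pid = os.getpid()
        if pid != self._init_pid:
            print("create_port handled by forked worker PID %d" % pid)
```

Anything spawned from initialize() therefore lives in the parent, and
anything spawned from a request handler lives in one of the workers.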

However, forking is something to be careful about, especially when using
eventlet. For the plugin my team maintains, we were creating a periodic task
during plugin initialisation.
This led to an interesting condition where API workers were hanging [1].
The situation was fixed in a rather pedestrian way - by adding a delay.
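The delay trick amounts to not creating the background task until the
workers have (presumably) forked. With eventlet the deferred spawn would be
eventlet.spawn_after(delay, task); a plain-stdlib sketch of the same idea,
with an arbitrary delay value, looks like this:

```python
import threading

START_DELAY = 5.0  # seconds; arbitrary grace period for workers to fork

def make_delayed_task(task, delay=START_DELAY):
    # Defer creation of the background task until after forking has
    # (presumably) happened, instead of spawning it during plugin
    # initialisation in the parent. With eventlet the analogous call
    # would be eventlet.spawn_after(delay, task).
    timer = threading.Timer(delay, task)
    timer.daemon = True
    timer.start()
    return timer
```

This is of course a workaround rather than a design: it races against the
fork instead of synchronising with it.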

Generally speaking, I would find it useful to have a way to "identify" an API
worker, in order to designate a specific one for processing that should not
be made redundant.
On the other hand, I object to my own statement above by saying that API
workers are not supposed to do this kind of processing, which should instead
be deferred to some other helper process.
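Absent such an identity, one ad-hoc way to designate a single worker after
the fork is a shared exclusive lock: every worker tries to take it, and
only the winner runs the singleton processing. A sketch using flock on a
well-known file (the path is hypothetical, and this is not an existing
Neutron facility):

```python
import fcntl

LOCK_PATH = "/tmp/neutron-designated-worker.lock"  # hypothetical path

def try_become_designated_worker(lock_path=LOCK_PATH):
    """Return the open lock file if this process won, else None.

    Each forked API worker calls this after startup; only one process
    can hold the exclusive flock, so only one runs the non-redundant
    processing. The kernel releases the lock if the holder dies, so
    another worker could retry and take over.
    """
    f = open(lock_path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f  # keep the file object alive to hold the lock
    except IOError:
        f.close()
        return None
```

This only elects a worker; it does not solve the underlying objection that
API workers should not be doing this kind of work at all.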

Salvatore

[1] https://bugs.launchpad.net/vmware-nsx/+bug/1420278

On 8 May 2015 at 09:43, Kevin Benton <blak111 at gmail.com> wrote:

> I'm not sure I understand the behavior you are seeing. When your mechanism
> driver gets initialized and kicks off processing, all of that should be
> happening in the parent PID. I don't know why your child processes start
> executing code that wasn't invoked. Can you provide a pointer to the code
> or give a sample that reproduces the issue?
>
> I modified the linuxbridge mech driver to try to reproduce it:
> http://paste.openstack.org/show/216859/
>
> In the output, I never received any of the init code output I added more
> than once, including the function spawned using eventlet.
>
> The only time I ever saw anything executed by a child process was actual
> API requests (e.g. the create_port method).
>
>
> On Thu, May 7, 2015 at 6:08 AM, Neil Jerram <Neil.Jerram at metaswitch.com>
> wrote:
>
>> Is there a design for how ML2 mechanism drivers are supposed to cope with
>> the Neutron server forking?
>>
>> What I'm currently seeing, with api_workers = 2, is:
>>
>> - my mechanism driver gets instantiated and initialized, and immediately
>> kicks off some processing that involves communicating over the network
>>
>> - the Neutron server process then forks into multiple copies
>>
>> - multiple copies of my driver's network processing then continue, and
>> interfere badly with each other :-)
>>
>> I think what I should do is:
>>
>> - wait until any forking has happened
>>
>> - then decide (somehow) which mechanism driver is going to kick off that
>> processing, and do that.
>>
>> But how can a mechanism driver know when the Neutron server forking has
>> happened?
>>
>> Thanks,
>>         Neil
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Kevin Benton
>
>
>