[openstack-dev] [neutron] Generic question about synchronizing neutron agent on compute node with DB
Salvatore Orlando
sorlando at nicira.com
Sun Mar 15 17:27:22 UTC 2015
The L2 agent, for instance, has logic to perform "full" synchronisations
with the server.
This happens in two cases:
1) upon agent restart, as some messages from the server side might have
been lost
2) whenever a failure is detected on the agent side (this is probably a bit
too conservative).
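
In pseudo-code the pattern looks roughly like this (a minimal sketch, not
the actual agent code; all names are illustrative):

    import time

    # Minimal sketch (not the actual neutron L2 agent code).
    class L2AgentSketch(object):
        def __init__(self):
            # Start with fullsync=True so the first iteration after a
            # restart rebuilds everything from the server (case 1).
            self.fullsync = True
            self.cached_ports = {}

        def sync_all_from_server(self):
            # Stand-in for "re-fetch all port/network state over RPC".
            self.cached_ports = {"port-1": {"admin_state_up": True}}

        def process_incremental_updates(self):
            # Stand-in for handling the notifications received since the
            # previous iteration.
            pass

        def daemon_loop(self, iterations=3):
            for _ in range(iterations):
                try:
                    if self.fullsync:
                        self.sync_all_from_server()
                        self.fullsync = False
                    else:
                        self.process_incremental_updates()
                except Exception:
                    # Case 2: any failure schedules a full resync on the
                    # next iteration (arguably too conservative).
                    self.fullsync = True
                time.sleep(1)

    L2AgentSketch().daemon_loop()
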
Salvatore
On 14 March 2015 at 10:51, Leo Y <minherz at gmail.com> wrote:
> Hello Rossella,
>
> I meant something different, less conventional changes. Right now, the
> network topology state is stored in the neutron DB and each compute node
> "knows" about it by querying the neutron API per request. Node "knows"
> means that the neutron agents keep this data in in-memory structures. If
> this "synchronization" is broken due to a bug in the software or an
> (un)intentional change in the neutron DB, I'd like to understand whether
> re-synchronization is possible. Right now, I know that the L3 agent (I'm
> not sure if it works for all L3 agents) has a periodic task that refreshes
> NIC information from the neutron server. However, L2 agents don't have
> this mechanism. I don't know about agents that implement SDN.
> So, I'm looking to learn how the current neutron implementation deals with
> that problem.
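>
> To make the concern concrete, this is roughly the kind of drift I have in
> mind (purely illustrative, not real neutron code; all names are made up):
>
>     # Purely illustrative (not real neutron code).
>     def find_drift(cached_ports, server_ports):
>         """Compare the agent's in-memory view with the server's DB view."""
>         stale = set(cached_ports) - set(server_ports)
>         missing = set(server_ports) - set(cached_ports)
>         changed = {pid for pid in set(cached_ports) & set(server_ports)
>                    if cached_ports[pid] != server_ports[pid]}
>         return stale, missing, changed
>
>     # If drift is found, the agent would have to re-fetch and reapply the
>     # server's state, which is what a periodic resync task would do.
>     cached = {"port-1": {"admin_state_up": True}}
>     server = {"port-1": {"admin_state_up": False},
>               "port-2": {"admin_state_up": True}}
>     print(find_drift(cached, server))  # (set(), {'port-2'}, {'port-1'})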
>
>
> On Fri, Mar 13, 2015 at 10:52 AM, Rossella Sblendido <rsblendido at suse.com>
> wrote:
>
>> > On 03/07/2015 01:10 PM, Leo Y wrote:
>> > What happens when the neutron DB is updated to change network settings
>> > (e.g. via the Dashboard or manually) while there are communication
>> > sessions open on compute nodes? Does it influence those sessions? When
>> > is the update propagated to compute nodes?
>>
>> Hi Leo,
>>
>> when you say "change network settings" I think you mean a change in a
>> security group; is my assumption correct? In that case the Neutron
>> server will notify all the L2 agents (one resides on each compute node)
>> about the change. There are different kinds of messages that the Neutron
>> server sends depending on the type of the update:
>> security_groups_rule_updated, security_groups_member_updated, and
>> security_groups_provider_updated. Each L2 agent will process the message
>> and apply the required modification on its host. In the default
>> implementation we use iptables to implement security groups, so the
>> update consists of some modifications to the iptables rules. The existing
>> connections on the compute nodes might not be affected by the change,
>> which is a problem already discussed in this mail thread [1]; there's a
>> patch in review to fix that [2].
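>>
>> In rough pseudo-code, the agent-side handling looks something like the
>> following (a simplified sketch, not the actual neutron code; apart from
>> the message name security_groups_rule_updated, all names are
>> illustrative):
>>
>>     # Simplified sketch (not the actual neutron security group agent code).
>>     class SecurityGroupCallbacksSketch(object):
>>         def __init__(self):
>>             # port id -> security groups bound to it, cached in agent memory
>>             self.ports = {"port-1": ["sg-web"], "port-2": ["sg-db"]}
>>
>>         def security_groups_rule_updated(self, security_groups):
>>             # Invoked when the server notifies that rules in these groups
>>             # changed; find the locally hosted ports that are affected.
>>             affected = [port_id for port_id, sgs in self.ports.items()
>>                         if set(sgs) & set(security_groups)]
>>             self.refresh_firewall(affected)
>>
>>         def refresh_firewall(self, port_ids):
>>             # The default driver regenerates the iptables rules per port;
>>             # this just shows the general shape of that step.
>>             for port_id in port_ids:
>>                 print("regenerating iptables rules for %s" % port_id)
>>
>>     SecurityGroupCallbacksSketch().security_groups_rule_updated(["sg-web"])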
>> Hope that answers your question.
>>
>> cheers,
>>
>> Rossella
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/049055.html
>> [2] https://review.openstack.org/#/c/147713/
>>
>> On 03/13/2015 04:10 AM, Kevin Benton wrote:
>> > Yeah, I was making a bad assumption for the L2 and L3 agents. Sorry
>> > about that. It sounds like we don't have any protection against servers
>> > failing to send notifications.
>> >
>> > On Mar 12, 2015 7:41 PM, "Assaf Muller" <amuller at redhat.com> wrote:
>> >
>> >
>> >
>> > ----- Original Message -----
>> > > > However, I briefly looked through the L2 agent code and didn't see
>> > > > a periodic task to resync the port information to protect from a
>> > > > neutron server that failed to send a notification because it
>> > > > crashed or lost its AMQP connection. The L3 agent has a periodic
>> > > > sync routers task that helps in this regard.
>> >
>> > The L3 agent periodic sync runs only if the full_sync flag was turned
>> > on, which is a result of an error.
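>> >
>> > In pseudo-code, the pattern is roughly the following (a simplified
>> > sketch, not the actual L3 agent code; all names are illustrative):
>> >
>> >     # Simplified sketch (not the actual neutron L3 agent code).
>> >     class L3AgentSketch(object):
>> >         def __init__(self):
>> >             self.fullsync = False  # flipped to True whenever an error occurs
>> >
>> >         def periodic_sync_routers_task(self):
>> >             # Runs on a timer, but only does real work after an error
>> >             # has set the fullsync flag.
>> >             if not self.fullsync:
>> >                 return
>> >             try:
>> >                 self.fetch_and_reapply_all_routers()
>> >                 self.fullsync = False
>> >             except Exception:
>> >                 self.fullsync = True  # retry on the next run
>> >
>> >         def fetch_and_reapply_all_routers(self):
>> >             pass  # stand-in for re-fetching every router from the server
>> >
>> >     agent = L3AgentSketch()
>> >     agent.fullsync = True               # some earlier failure happened
>> >     agent.periodic_sync_routers_task()  # this run resyncs everything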
>> >
>> > > > Maybe another neutron developer more familiar with the L2
>> > > > agent can chime in here if I'm missing anything.
>> > >
>> > > I don't think you are missing anything.
>> > > Periodic sync would be a good improvement.
>> > >
>> > > YAMAMOTO Takashi
>> > >
>
>
>
> --
> Regards,
> Leo