[openstack-dev] Performance of security group

Édouard Thuleau thuleau at gmail.com
Mon Jun 30 07:36:11 UTC 2014


Yes, using one fanout topic per VNI is another big improvement we could
make. That would fit the l2-pop mechanism driver perfectly.
Of course, it needs a specific call on start/re-sync to get the initial
state. That is actually done by the l2-pop MD when the uptime of an agent
is less than the 'agent_boot_time' flag [1].

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L181
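
A minimal sketch of the idea (the topic name and helpers below are only
assumptions, not existing l2-pop code):

    from oslo.config import cfg

    def l2pop_topic(vni):
        # hypothetical: one fanout topic per VNI, so only agents hosting
        # that VNI subscribe to and consume the FDB updates
        return 'l2population-update-vni-%s' % vni

    def needs_full_sync(agent_uptime):
        # push the full initial state only to agents that just (re)started,
        # in the spirit of the agent_boot_time check in [1]
        return agent_uptime < cfg.CONF.l2pop.agent_boot_time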

Édouard.


On Fri, Jun 27, 2014 at 3:43 AM, joehuang <joehuang at huawei.com> wrote:

> An interesting idea for optimizing performance.
>
> Security group rules are not the only thing that leads to fanout message
> load; we should review all fanout usage in Neutron and check whether it
> can be optimized.
>
> For example, L2 population:
>
>     # current behaviour: the FDB update is fanned out to every L2 agent,
>     # regardless of whether it hosts ports on the affected network
>     self.fanout_cast(context,
>                      self.make_msg(method, fdb_entries=fdb_entries),
>                      topic=self.topic_l2pop_update)
>
> it would be better to use network+l2pop_update as the topic, so that only
> the agents that host VMs on that network consume the message.
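>
> A minimal sketch of that idea (the topic naming and helper below are only
> an assumption, not existing code):
>
>     def l2pop_update_topic(self, network_id):
>         # hypothetical per-network topic, e.g. 'l2population-<network_id>'
>         return '%s-%s' % (self.topic_l2pop_update, network_id)
>
>     def fanout_fdb_update(self, context, method, network_id, fdb_entries):
>         # only agents with ports on this network would subscribe to the topic
>         self.fanout_cast(context,
>                          self.make_msg(method, fdb_entries=fdb_entries),
>                          topic=self.l2pop_update_topic(network_id))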
>
> Best Regards,
> Chaoyi Huang (Joe Huang)
>
> -----Original Message-----
> From: Miguel Angel Ajo Pelayo [mailto:mangelajo at redhat.com]
> Sent: 27 June 2014 1:33
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] Performance of security group
>
> ----- Original Message -----
> > @Nachi: Yes, that could be a good improvement to factorize the RPC mechanism.
> >
> > Another idea:
> > What about creating an RPC topic per security group (what about the
> > scalability of RPC topics?) on which an agent subscribes if one of its
> > ports is associated with the security group?
> >
> > Regards,
> > Édouard.
> >
> >
>
>
> Hmm, interesting.
>
> @Nachi, I'm not sure I fully understood:
>
>
> SG_LIST = [SG1, SG2]
> SG_RULE_LIST = [SG_Rule1, SG_Rule2] ...
> port1[SG_ID1, SG_ID2], port2, port3
>
>
> We probably also need to include SG_IP_LIST = [SG_IP1, SG_IP2] ...
>
>
> and let the agent do all the combination work.
>
> Would something like this make sense?
>
> Security_Groups = {SG1: {IPs: [....], RULES: [....]},
>                    SG2: {IPs: [....], RULES: [....]}
>                   }
>
> Ports = {Port1: [SG1, SG2], Port2: [SG1], ....}
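>
> A rough sketch of the combination work on the agent side, assuming the two
> structures above (names are purely illustrative):
>
>     def expand_port_filters(security_groups, ports):
>         # build per-port rule/IP sets locally instead of receiving the
>         # pre-expanded result from neutron-server on every call
>         port_filters = {}
>         for port_id, sg_ids in ports.items():
>             rules, remote_ips = [], []
>             for sg_id in sg_ids:
>                 sg = security_groups[sg_id]
>                 rules.extend(sg['RULES'])
>                 remote_ips.extend(sg['IPs'])
>             port_filters[port_id] = {'rules': rules, 'ips': remote_ips}
>         return port_filters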
>
>
> @Edouard, actually I like the idea of having the agents subscribe to the
> security groups they have ports on... That would remove the need to
> include all the security group information in every call...
>
> But it would need another call to get the full information for a set of
> security groups at start/resync if we don't already have it.
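>
> Something along these lines, where the topic and RPC names are purely
> hypothetical:
>
>     def sync_security_group_subscriptions(self, context, local_sg_ids):
>         # bulk-fetch only the security groups we know nothing about yet
>         missing = [sg for sg in local_sg_ids if sg not in self.sg_cache]
>         if missing:
>             self.sg_cache.update(
>                 self.plugin_rpc.get_security_groups_info(context, missing))
>         # subscribe to one fanout topic per security group we have ports on
>         for sg_id in local_sg_ids:
>             self.connection.create_consumer('securitygroup-%s' % sg_id,
>                                             self.endpoints, fanout=True)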
>
>
> >
> > On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang < ayshihanzhang at 126.com > wrote:
> >
> >
> >
> > hi Miguel Ángel,
> > I very much agree with you on the following points:
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > -- this can reduce the load on the compute nodes.
> > >  * rpc communication mechanisms.
> > -- this can reduce the load on the neutron server.
> > Can you help me review my BP specs?
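> >
> > For illustration, the kind of reduction ipset gives (these rule strings
> > are only a sketch, not the BP code):
> >
> >     def linear_rules(chain, member_ips):
> >         # today: one iptables rule per remote-group member, matched linearly
> >         return ['-A %s -s %s -j RETURN' % (chain, ip) for ip in member_ips]
> >
> >     def ipset_rule(chain, set_name):
> >         # with ipset: a single rule; members live in the set and are kept
> >         # up to date with 'ipset add/del <set_name> <ip>'
> >         return ['-A %s -m set --match-set %s src -j RETURN'
> >                 % (chain, set_name)]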
> >
> > At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" < mangelajo at redhat.com > wrote:
> > >
> > >  Hi, it's a very interesting topic. I was getting ready to raise
> > >the same concerns about our security groups implementation; shihanzhang,
> > >thank you for starting this topic.
> > >
> > >  It is not only a low-level issue: with our default security group
> > >rules (allow all incoming from the 'default' sg), the iptables rules
> > >grow roughly as X^2 for a tenant, and the "security_group_rules_for_devices"
> > >RPC call from ovs-agent to neutron-server grows to message sizes of
> > >100MB, generating serious scalability issues or timeouts/retries that
> > >totally break the neutron service.
> > >
> > >   (example trace of that RPC call with a few instances
> > > http://www.fpaste.org/104401/14008522/ )
> > >
> > >  I believe that we also need to review the RPC calling mechanism
> > >for the OVS agent here; there are several possible approaches to
> > >breaking down (and/or CIDR-compressing) the information we return via
> > >this API call.
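> > >
> > >   For the CIDR-compressing part, even just merging the remote-group
> > >member IPs before putting them on the wire could shrink the payload; a
> > >sketch assuming the netaddr library:
> > >
> > >    import netaddr
> > >
> > >    def compress_member_ips(member_ips):
> > >        # collapse adjacent addresses/prefixes into the minimal CIDR list
> > >        return [str(cidr) for cidr in netaddr.cidr_merge(member_ips)]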
> > >
> > >
> > >   So we have to look at two things here:
> > >
> > >  * physical implementation on the hosts (ipsets, nftables, ... )
> > >  * rpc communication mechanisms.
> > >
> > >   Best regards,
> > >Miguel Ángel.
> > >
> > >----- Original Message -----
> > >
> > >> Did you think about nftables, which will replace {ip,ip6,arp,eb}tables?
> > >> It is also based on a rule set mechanism.
> > >> The issue with that proposal is that it has only been stable since the
> > >> beginning of the year, and only on Linux kernel 3.13.
> > >> But there are a lot of pros I won't list here (overcoming iptables
> > >> limitations, efficient rule updates, rule sets, standardization of
> > >> netfilter commands...).
> > >
> > >> Édouard.
> > >
> > >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly < henry4hly at gmail.com > wrote:
> > >
> > >> > We have done some tests but got a different result: the performance
> > >> > is nearly the same with empty vs. 5k rules in iptables, but there is
> > >> > a huge gap between enabling and disabling the iptables hook on the
> > >> > linux bridge.
> > >>
> > >
> > >> > On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang < ayshihanzhang at 126.com > wrote:
> > >>
> > >
> > >> > > I do not have accurate test data yet, but I can confirm the
> > >> > > following points:
> > >> >
> > >>
> > >> > > 1. On a compute node, a VM's iptables chain is linear and iptables
> > >> > > matches rules one by one. If a VM is in the default security group
> > >> > > and that group has many members, the chain grows with the
> > >> > > membership; with an ipset set, the filtering time for one member
> > >> > > or many members is not much different.
> > >> >
> > >>
> > >> > > 2. When the iptables rule set is very large, the probability that
> > >> > > iptables-save fails to save the rules is also very high.
> > >> >
> > >>
> > >
> > >> > > At 2014-06-19 10:55:56, "Kevin Benton" < blak111 at gmail.com > wrote:
> > >> >
> > >>
> > >
> > >> > > > This sounds like a good idea to handle some of the performance
> > >> > > > issues until the OVS firewall can be implemented down the line.
> > >> > >
> > >> >
> > >>
> > >> > > > Do you have any performance comparisons?
> > >> > >
> > >> >
> > >>
> > >> > > > On Jun 18, 2014 7:46 PM, "shihanzhang" < ayshihanzhang at 126.com > wrote:
> > >> > >
> > >> >
> > >>
> > >
> > >> > > > > Hello all,
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> > >> > > > > Neutron currently uses iptables to implement security groups,
> > >> > > > > but the performance of this implementation is very poor; there
> > >> > > > > is a bug, https://bugs.launchpad.net/neutron/+bug/1302272, that
> > >> > > > > reflects this problem. In that test, with default security
> > >> > > > > groups (which have a remote security group), beyond 250-300 VMs
> > >> > > > > there were around 6k iptables rules on every compute node.
> > >> > > > > Although the reporter's patch can reduce the processing time,
> > >> > > > > it does not solve the problem fundamentally. I have submitted a
> > >> > > > > BP to address it:
> > >> > > > > https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> > >> > > > > >> > > >
> > >> > >
> > >> >
> > >>
> > >> > > > > Are there other people interested in this?
> > >> > > >
> > >
> >
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

