<div dir="ltr">Yes, the usage of fanout topic by VNI is also another big improvement we could do.<div>That will fit perfectly for the l2-pop mechanism driver.</div><div>Of course, that need a specific call on a start/re-sync to get initial state. That actually done by the l2-pop MD if the uptime of an agent is less than 'agent_boot_time' flag [1].</div>

[1] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L181
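
For illustration, a minimal sketch of how a per-VNI topic plus the initial
full-sync could fit together; the topic format, the create_consumer signature,
and the get_fdb_entries call are hypothetical here, not existing Neutron API:

    # Hypothetical sketch: one fanout topic per VNI, so only agents that
    # host ports on that segment receive its FDB updates.
    L2POP_TOPIC = 'q-agent-notifier-l2population-update'  # illustrative name

    def l2pop_topic_for_vni(vni):
        return '%s-%s' % (L2POP_TOPIC, vni)

    def subscribe_vni(connection, endpoint, plugin_rpc, context, vni):
        # Start consuming incremental updates for this VNI...
        connection.create_consumer(l2pop_topic_for_vni(vni), endpoint,
                                   fanout=True)
        # ...and do one full-sync call to get the initial state, playing
        # the role the 'agent_boot_time' check plays in the l2pop MD today.
        endpoint.add_fdb_entries(context,
                                 plugin_rpc.get_fdb_entries(context, vni))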

Édouard.

On Fri, Jun 27, 2014 at 3:43 AM, joehuang <joehuang@huawei.com> wrote:
Interesting idea to optimize the performance.

Security group rules are not the only source of fanout message load; we
should review all fanout usage in Neutron to see what else could be
optimized.

For example, L2 population:

    self.fanout_cast(context,
                     self.make_msg(method, fdb_entries=fdb_entries),
                     topic=self.topic_l2pop_update)

It would be better to use network+l2pop_update as the topic, so that only
the agents hosting VMs on that network consume the message. A sketch of
that follows below.
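
For illustration, a minimal sketch of such a scoped cast in the same
rpc-proxy style as the snippet above; the topic helper and the concrete
method name are illustrative, not existing Neutron API:

    # Hypothetical sketch: append the network ID to the l2pop topic so the
    # fanout only reaches agents that subscribed for that network.
    def _topic_for_network(base_topic, network_id):
        return '%s-%s' % (base_topic, network_id)

    def notify_fdb_update(self, context, network_id, fdb_entries):
        self.fanout_cast(context,
                         self.make_msg('update_fdb_entries',
                                       fdb_entries=fdb_entries),
                         topic=_topic_for_network(self.topic_l2pop_update,
                                                  network_id))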

Best Regards
Chaoyi Huang (Joe Huang)

-----Original Message-----
From: Miguel Angel Ajo Pelayo [mailto:mangelajo@redhat.com]
Sent: June 27, 2014 1:33
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Performance of security group
<div class="HOEnZb"><div class="h5"><br>
----- Original Message -----

> @Nachi: Yes, that could be a good improvement to factor out the RPC
> mechanism.
>
> Another idea:
> What about creating an RPC topic per security group (though what about
> RPC topic scalability?) to which an agent subscribes if one of its
> ports is associated with the security group?
>
> Regards,
> Édouard.

Hmm, interesting.

@Nachi, I'm not sure I fully understood:

SG_LIST = [SG1, SG2]
SG_RULE_LIST = [SG_Rule1, SG_Rule2] ...
port1[SG_ID1, SG_ID2], port2, port3

We probably also need to include the SG_IP_LIST = [SG_IP1, SG_IP2] ...

and let the agent do all the combination work.

Would something like this make sense?

Security_Groups = {SG1: {IPs: [....], RULES: [....]},
                   SG2: {IPs: [....], RULES: [....]}}

Ports = {Port1: [SG1, SG2], Port2: [SG1], ....}

@Edouard, actually I like the idea of having agents subscribe to the
security groups they have ports on... That would remove the need to
include all the security group information in every call...

But we would need another call to get the full information for a set of
security groups at start/resync if we don't already have it. A rough
sketch of that flow follows.
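
A hedged sketch of what that subscription flow might look like; the topic
format, create_consumer signature, and get_security_groups_info call are
hypothetical stand-ins for whatever we would actually add:

    # Hypothetical sketch: the agent consumes one fanout topic per security
    # group it has ports in, and bulk-fetches full SG state on first
    # subscribe (start/resync).
    def _sg_topic(sg_id):
        return 'q-agent-notifier-security_group-update-%s' % sg_id

    class SGSubscriptions(object):
        def __init__(self, connection, plugin_rpc):
            self.connection = connection   # agent's RPC consumer connection
            self.plugin_rpc = plugin_rpc   # client side of the plugin RPC
            self.subscribed = set()
            self.state = {}                # sg_id -> {ips, rules}

        def port_bound(self, context, sg_ids):
            new = set(sg_ids) - self.subscribed
            for sg_id in new:
                # Incremental updates arrive on the per-SG fanout topic...
                self.connection.create_consumer(_sg_topic(sg_id), self,
                                                fanout=True)
            if new:
                # ...but we still need one full fetch for the initial state.
                self.state.update(
                    self.plugin_rpc.get_security_groups_info(context,
                                                             list(new)))
            self.subscribed |= new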

> On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang <ayshihanzhang@126.com> wrote:
>
> hi Miguel Ángel,
> I fully agree with you on the following points:
> > * physical implementation on the hosts (ipsets, nftables, ...)
> -- this can reduce the load on the compute node.
> > * rpc communication mechanisms.
> -- this can reduce the load on the neutron server.
> Can you help me review my BP specs?
>
> At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" <mangelajo@redhat.com>
> wrote:
> >
> > Hi, it's a very interesting topic. I was getting ready to raise the
> > same concerns about our security group implementation; shihanzhang,
> > thank you for starting this topic.
> >
> > Not only at the low level, where (with our default security group
> > rules: allow all incoming from the 'default' sg) the iptables rules
> > grow as ~X^2 per tenant; the "security_group_rules_for_devices" RPC
> > call from ovs-agent to neutron-server also grows to message sizes of
> > over 100MB, generating serious scalability issues or timeouts/retries
> > that totally break the neutron service.
> >
> > (example trace of that RPC call with a few instances:
> > http://www.fpaste.org/104401/14008522/ )
> >
> > I believe that we also need to review the RPC calling mechanism for
> > the OVS agent here; there are several possible approaches to breaking
> > down (and/or CIDR-compressing) the information we return via this API
> > call. A tiny sketch of the CIDR idea follows.
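
To illustrate the CIDR-compression idea only, a sketch using the stdlib
ipaddress module (Neutron itself uses netaddr; the data is made up):

    # Illustrative sketch: collapse per-member /32 entries into CIDR
    # blocks before putting them on the wire.
    import ipaddress

    member_ips = ['10.0.0.%d' % i for i in range(2, 130)]
    nets = [ipaddress.ip_network(u'%s/32' % ip) for ip in member_ips]
    cidrs = [str(n) for n in ipaddress.collapse_addresses(nets)]
    # 128 contiguous /32 entries collapse to a handful of CIDR blocks,
    # shrinking the security_group_rules_for_devices payload.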
> >
> > So we have to look at two things here:
> >
> > * physical implementation on the hosts (ipsets, nftables, ...)
> > * rpc communication mechanisms.
> >
> > Best regards,
> > Miguel Ángel.
> >
> > ----- Original Message -----
> >
> >> Have you thought about nftables, which will replace
> >> {ip,ip6,arp,eb}tables? It is also based on a rule-set mechanism.
> >> The catch with that proposal is that it has only been stable since
> >> the beginning of the year, and only on Linux kernel 3.13.
> >> But it has lots of pros I won't list here (it lifts iptables
> >> limitations: efficient rule updates, rule sets, standardization of
> >> netfilter commands, ...).
> >
> >> Édouard.
> >
> >> On Thu, Jun 19, 2014 at 8:25 AM, henry hly <henry4hly@gmail.com> wrote:
> >
> >> > We have done some tests but got a different result: the performance
> >> > is nearly the same for an empty table and for 5k rules in iptables,
> >> > but there is a huge gap between enabling and disabling the iptables
> >> > hook on the Linux bridge.
> >
> >> > On Thu, Jun 19, 2014 at 11:21 AM, shihanzhang <ayshihanzhang@126.com>
> >> > wrote:
> >
> >> > > I do not yet have accurate test data, but I can confirm the
> >> > > following points:
> >> > >
> >> > > 1. On a compute node, the iptables chain for a VM is linear;
> >> > > iptables filters packets through it rule by rule. If a VM is in
> >> > > the default security group and that group has many members, the
> >> > > chain grows with the membership, whereas with an ipset the
> >> > > filtering time for one member and for many members is nearly the
> >> > > same.
> >> > >
> >> > > 2. When the iptables rule set is very large, the probability that
> >> > > iptables-save fails to save the rules is very high.
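
To illustrate the first point above, a minimal sketch, with illustrative
chain and set names (not Neutron's actual naming), of how one ipset match
replaces a rule per remote-group member:

    # Illustrative sketch: with a linear chain, a remote group expands to
    # one iptables rule per member IP; with an ipset, a single rule
    # matches the whole set, and membership changes only touch the set.
    members = ['10.0.0.%d' % i for i in range(2, 255)]

    # Linear chain: ~O(N) rules, each packet walks them one by one.
    linear_rules = ['-A sg-chain-port1 -s %s/32 -j RETURN' % ip
                    for ip in members]

    # ipset: one rule plus set membership with fast hash lookup.
    ipset_cmds = (['ipset create sg-remote-members hash:ip'] +
                  ['ipset add sg-remote-members %s' % ip
                   for ip in members])
    ipset_rule = ('-A sg-chain-port1 -m set '
                  '--match-set sg-remote-members src -j RETURN')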
> >> > > At 2014-06-19 10:55:56, "Kevin Benton" <blak111@gmail.com> wrote:
> >> > >
> >> > > > This sounds like a good idea to handle some of the performance
> >> > > > issues until the OVS firewall can be implemented down the line.
> >> > > >
> >> > > > Do you have any performance comparisons?
> >> > > >
> >> > > > On Jun 18, 2014 7:46 PM, "shihanzhang" <ayshihanzhang@126.com>
> >> > > > wrote:
> >
> >> > > > > Hello all,
> >> > > > >
> >> > > > > Neutron currently uses iptables to implement security groups,
> >> > > > > but the performance of this implementation is very poor; there
> >> > > > > is a bug, https://bugs.launchpad.net/neutron/+bug/1302272 ,
> >> > > > > that reflects this problem. In that test, with default
> >> > > > > security groups (which have a remote security group), beyond
> >> > > > > 250-300 VMs there were around 6k iptables rules on every
> >> > > > > compute node. The patch there reduces the processing time, but
> >> > > > > it doesn't solve the problem fundamentally. I have submitted a
> >> > > > > BP to solve this problem:
> >> > > > > https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
> >> > > > >
> >> > > > > Is anyone else interested in this?

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev