[openstack-dev] Havana neutron security groups config issue

Leandro Reox leandro.reox at gmail.com
Mon Oct 21 20:11:18 UTC 2013


We tried that a few minutes ago, and removing nova-network doesn't make any
difference. I'm starting to think that Neutron security groups are not
working with Docker containers.


On Mon, Oct 21, 2013 at 4:15 PM, Aaron Rosen <arosen at nicira.com> wrote:

> Hrm, your config files look good to me. From your iptables-save output it
> looks like you have nova-network running as well. I wonder if that is
> overwriting the rules that the agents are installing. Can you try removing
> nova-network and see if that changes anything?
>
> Aaron
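One quick way to see whether nova-network chains are still installed alongside the neutron agent's chains: `iptables-save` prints each chain definition as a line starting with `:`, and the two subsystems use different chain-name prefixes (`nova-` vs `neutron-`). A minimal stdlib sketch that classifies chains from a saved dump; the sample chain names below are made up for illustration:

```python
import re

def chain_owners(iptables_save_output):
    """Group chain names from `iptables-save` output by their prefix.

    Chain definitions appear as lines starting with ':' in the dump.
    Returns a dict mapping 'nova' / 'neutron' / 'other' to sorted names.
    """
    owners = {"nova": set(), "neutron": set(), "other": set()}
    for line in iptables_save_output.splitlines():
        m = re.match(r"^:(\S+)", line)
        if not m:
            continue
        chain = m.group(1)
        if chain.startswith("nova-"):
            owners["nova"].add(chain)
        elif chain.startswith("neutron-"):
            owners["neutron"].add(chain)
        else:
            owners["other"].add(chain)
    return {k: sorted(v) for k, v in owners.items()}

# Illustrative sample dump; real chain names will differ.
sample = """\
*filter
:INPUT ACCEPT [0:0]
:nova-compute-inst-1 - [0:0]
:neutron-openvswi-sg-chain - [0:0]
COMMIT
"""
print(chain_owners(sample))
```

If both the `nova` and `neutron` buckets are non-empty on a compute node, the two firewall drivers are fighting over the same traffic.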
>
>
> On Mon, Oct 21, 2013 at 10:45 AM, Leandro Reox <leandro.reox at gmail.com>wrote:
>
>> Aaron,
>>
>> Here is all the info: all the nova.confs (compute, controller), all
>> the agent logs, iptables output, etc. BTW, as I said, we're testing this
>> setup with Docker containers, just to be clear regarding your last
>> recommendation about the libvirt VIF driver (which we already have in the conf).
>>
>> Here it is: http://pastebin.com/RMgQxFyN
>>
>> Any clues?
>>
>>
>> Best
>> Lean
>>
>>
>> On Fri, Oct 18, 2013 at 8:06 PM, Aaron Rosen <arosen at nicira.com> wrote:
>>
>>> Is anything showing up in the agent logs on the hypervisors? Also, can
>>> you confirm you have this setting in your nova.conf:
>>>
>>>
>>> libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
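Since the thread keeps coming back to whether nova.conf is actually being read, a quick stdlib sanity check is to parse it with Python's configparser (nova.conf is ini-style, and `[DEFAULT]` entries show up via `.defaults()`). The conf text below is a made-up fragment for illustration:

```python
import configparser

# Illustrative nova.conf fragment; in practice you would read the real file
# with cp.read("/etc/nova/nova.conf").
conf_text = """\
[DEFAULT]
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
security_group_api = neutron
"""

cp = configparser.ConfigParser()
cp.read_string(conf_text)

# configparser exposes the [DEFAULT] section through .defaults()
defaults = cp.defaults()
print(defaults.get("libvirt_vif_driver"))
print(defaults.get("security_group_api"))
```

If the option you expect is missing here, nova never saw it either.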
>>>
>>>
>>>
>>> On Fri, Oct 18, 2013 at 1:14 PM, Leandro Reox <leandro.reox at gmail.com>wrote:
>>>
>>>> Aaron, I fixed the config issues by moving the neutron opts up to the
>>>> [DEFAULT] section. But now I'm having this issue:
>>>>
>>>> I can launch instances normally, but it seems the rules are not getting
>>>> applied anywhere; I have full access to the Docker containers. If I run
>>>> iptables -t nat -L and iptables -L, no rules seem to be applied to any flow.
>>>>
>>>> I see the calls in the nova-api log normally, but no rule is applied:
>>>>
>>>>
>>>> 2013-10-18 16:10:09.873 31548 DEBUG neutronclient.client [-]
>>>> RESP:{'date': 'Fri, 18 Oct 2013 20:10:07 GMT', 'status': '200',
>>>> 'content-length': '2331', 'content-type': 'application/json;
>>>> charset=UTF-8', 'content-location': '
>>>> http://172.16.124.16:9696/v2.0/security-groups.json'}
>>>> {"security_groups": [{"tenant_id": "df26f374a7a84eddb06881c669ffd62f",
>>>> "name": "default", "description": "default", "security_group_rules":
>>>> [{"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>>>> "protocol": null, "ethertype": "IPv4", "tenant_id":
>>>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>>> "port_range_min": null, "id": "131f26d3-6b7b-47ef-9abf-fd664e59a972",
>>>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>>>> {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>>>> "protocol": null, "ethertype": "IPv6", "tenant_id":
>>>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>>> "port_range_min": null, "id": "93a8882b-adcd-489a-89e4-694f59555555",
>>>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>>>> {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
>>>> "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv4",
>>>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>>> "port_range_min": null, "id": "fb15316c-efd0-4a70-ae98-23f260f0d76d",
>>>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
>>>> {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
>>>> "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
>>>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>>> "port_range_min": null, "id": "fc524bb9-b015-42b0-bdab-cd64db2763a6",
>>>> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}], "id":
>>>> "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"tenant_id":
>>>> "df26f374a7a84eddb06881c669ffd62f", "name": "culo", "description": "",
>>>> "security_group_rules": [{"remote_group_id": null, "direction": "egress",
>>>> "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
>>>> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>>> "port_range_min": null, "id": "2c23f70a-691b-4601-87a0-2ec092488746",
>>>> "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"},
>>>> {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
>>>> "protocol": null, "ethertype": "IPv4", "tenant_id":
>>>> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
>>>> "port_range_min": null, "id": "7a445e16-81c1-45c1-8efd-39ce3bcd9ca6",
>>>> "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"}], "id":
>>>> "fe569b17-d6e0-4b1e-bae3-1132e748190c"}]}
>>>>  http_log_resp
>>>> /usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
>>>> 2013-10-18 16:10:09.959 31548 INFO nova.osapi_compute.wsgi.server
>>>> [req-87c41dc0-d90a-47b9-bfa8-bd7921a26609 223f36a9e1fc44659ac93479cb508902
>>>> df26f374a7a84eddb06881c669ffd62f] 172.16.124.10 "GET
>>>> /v2/df26f374a7a84eddb06881c669ffd62f/servers/detail HTTP/1.1" status: 200
>>>> len: 1878 time: 0.6089120
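The security-groups response in that log is hard to eyeball; a small stdlib helper can flatten it into one line per rule. This uses a trimmed version of the payload above (one group, two rules) for illustration:

```python
import json

# Trimmed-down version of the API response logged above.
payload = json.loads("""
{"security_groups": [{"tenant_id": "df26f374a7a84eddb06881c669ffd62f",
  "name": "default",
  "security_group_rules": [
    {"direction": "egress", "ethertype": "IPv4", "protocol": null,
     "remote_ip_prefix": null, "remote_group_id": null},
    {"direction": "ingress", "ethertype": "IPv4", "protocol": null,
     "remote_ip_prefix": null,
     "remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}]}]}
""")

def summarize(groups):
    """Return one line per rule: group name, direction, ethertype, protocol."""
    lines = []
    for sg in groups["security_groups"]:
        for rule in sg["security_group_rules"]:
            lines.append("%s %s %s proto=%s" % (
                sg["name"], rule["direction"], rule["ethertype"],
                rule["protocol"] or "any"))
    return lines

for line in summarize(payload):
    print(line)
```

So the API side clearly knows about the groups; the problem is that nothing is translating them into iptables rules on the compute host.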
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Oct 18, 2013 at 5:07 PM, Aaron Rosen <arosen at nicira.com> wrote:
>>>>
>>>>> Do you have [DEFAULT] at the top of your nova.conf? Could you pastebin
>>>>> your nova.conf for us to see?
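For reference, the neutron options have to sit under a `[DEFAULT]` header for nova to register them. A fragment along these lines (the option names are the ones from the traces in this thread, the URL is the one from the log output, and the timeout value is illustrative):

```ini
[DEFAULT]
# Nova registers these options under [DEFAULT] in Havana.
neutron_url = http://172.16.124.16:9696
neutron_url_timeout = 30
security_group_api = neutron
```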
>>>>>  On Oct 18, 2013 12:31 PM, "Leandro Reox" <leandro.reox at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Yes it is, but I found that it is not reading the parameter from
>>>>>> nova.conf. I forced it in the code in /network/manager.py and it finally
>>>>>> took the argument, but then it traces saying there is no neutron_url
>>>>>> option, and if I fix that it traces on the next neutron parameter, like the timeout:
>>>>>>
>>>>>> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py",
>>>>>> line 1648, in __getattr__
>>>>>> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack     raise
>>>>>> NoSuchOptError(name)
>>>>>> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack
>>>>>> NoSuchOptError: no such option: neutron_url
>>>>>>
>>>>>> and then
>>>>>>
>>>>>> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py",
>>>>>> line 1648, in __getattr__
>>>>>> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack     raise
>>>>>> NoSuchOptError(name)
>>>>>> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack
>>>>>> NoSuchOptError: no such option: neutron_url_timeout
>>>>>>
>>>>>> It's really weird; it's like it's not reading the nova.conf neutron
>>>>>> parameters at all ...
>>>>>>
>>>>>> If I hardcode all the settings in neutronv2/__init__.py, at least
>>>>>> it works and brings all the secgroup details from Neutron.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Oct 18, 2013 at 3:48 PM, Aaron Rosen <arosen at nicira.com>wrote:
>>>>>>
>>>>>>> Hi Leandro,
>>>>>>>
>>>>>>>
>>>>>>> I believe the setting security_group_api=neutron in
>>>>>>> nova.conf doesn't actually matter on the compute nodes (still good
>>>>>>> to set it, though), but it does matter on the nova-api node. Can you confirm
>>>>>>> that your nova-api node has security_group_api=neutron in its nova.conf?
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Aaron
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Oct 18, 2013 at 10:32 AM, Leandro Reox <
>>>>>>> leandro.reox at gmail.com> wrote:
>>>>>>>
>>>>>>>> Dear all,
>>>>>>>>
>>>>>>>> I'm struggling with centralized security groups on Nova. We're using OVS,
>>>>>>>> and it seems like no matter what flag I change in nova.conf, the node still
>>>>>>>> looks up the secgroups in the Nova region's local DB.
>>>>>>>>
>>>>>>>> We added :
>>>>>>>>
>>>>>>>>
>>>>>>>> [compute node]
>>>>>>>>
>>>>>>>> *nova.conf*
>>>>>>>>
>>>>>>>> firewall_driver=neutron.agent.firewall.NoopFirewallDriver
>>>>>>>> security_group_api=neutron
>>>>>>>>
>>>>>>>>
>>>>>>>> *ovs_neutron_plugin.ini*
>>>>>>>>
>>>>>>>> [securitygroup]
>>>>>>>> firewall_driver =
>>>>>>>> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>>>>>>>
>>>>>>>>
>>>>>>>> Restarted the agent and nova-compute services ... still the same. Are
>>>>>>>> we missing something?
>>>>>>>>
>>>>>>>> NOTE: we're using Docker as the virt system.
>>>>>>>>
>>>>>>>> Best
>>>>>>>> Leitan
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> OpenStack-dev mailing list
>>>>>>>> OpenStack-dev at lists.openstack.org
>>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>>
>>
>

