[Openstack] Networking issue with VlanManager and Floating IPs

Vishvananda Ishaya vishvananda at gmail.com
Tue Jul 31 18:46:46 UTC 2012


Communication should be blocked via security groups, but perhaps you want more complete isolation. The network host (which in this case is the compute host) will be able to route packets between subnets even though they are on different networks, so you will need to drop packets between vlans. My iptables foo is a bit rusty, but I think you can either explicitly drop all packets crossing between vlans:

iptables -A FORWARD -i vlan100 -o vlan101 -j DROP
iptables -A FORWARD -i vlan100 -o vlan102 -j DROP
iptables -A FORWARD -i vlan101 -o vlan102 -j DROP

or (probably better, because it avoids an explosion in rules) put in a fallback of DROP and explicitly allow packets to and from the public interface:

iptables -A FORWARD -i vlan100 -o <public_iface> -j ACCEPT
iptables -A FORWARD -i <public_iface> -o vlan100 -j ACCEPT
iptables -A FORWARD -j DROP
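
For example, with the three vlans above and eth0 as the public interface (eth0 is just an assumption here; substitute your actual public interface), the per-vlan accept rules could be generated in a loop before the final DROP:

for v in vlan100 vlan101 vlan102; do
    iptables -A FORWARD -i $v -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o $v -j ACCEPT
done
iptables -A FORWARD -j DROP

This keeps the rule count linear in the number of vlans rather than quadratic.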

Note that the superior solution is probably to configure dnsmasq to hand out your router as the gateway, and disable routing between vlans on the router itself.

See this post for information on how to use your router as the gateway:

https://lists.launchpad.net/openstack/msg11953.html

(note it is dhcp_option, not dhcp_opiton; it was misspelled in that post)
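
As a rough sketch (paths and IPs here are examples, not from that post), you point nova at a custom dnsmasq config with the dnsmasq_config_file flag and override the router option there:

# /etc/nova/dnsmasq-nova.conf (example path)
# hand out the physical router as the default gateway
dhcp-option=option:router,10.1.0.1

# in nova.conf:
dnsmasq_config_file=/etc/nova/dnsmasq-nova.conf

Check the post above for the exact flag usage in your release.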

Vish


On Jul 31, 2012, at 10:30 AM, Xu (Simon) Chen <xchenum at gmail.com> wrote:

> Two ways that I can think of...
> 
> 1) disable forwarding on the NC; not sure if this would impact regular services.
> 2) add additional rules, something like "iptables -A FORWARD -s 10.10.10.0/24 -d 10.0.0.0/8 -j DROP".
> 
> -Simon
> 
> On Mon, Jul 30, 2012 at 4:34 AM, Wael Ghandour (wghandou)
> <wghandou at cisco.com> wrote:
>> 
>> We are also seeing another issue:
>> 
>> VMs belonging to different tenants are supposed to be isolated from each
>> other, except if they have public IPs that are made accessible.
>> 
>> What I'm seeing now, however, is that VMs belonging to different tenants, but
>> running on the same physical host, are able to communicate with each other via
>> their private IPs.
>> 
>> Has anyone faced this issue before?
>> 
>> 
>> Regards,
>> 
>> Wael
>> 
>> 
>> 
>> On Jul 21, 2012, at 5:36 AM, Xu (Simon) Chen <xchenum at gmail.com> wrote:
>> 
>> Here is what happened on a different thread:
>> http://buriedlede.blogspot.com/2012/07/debugging-networking-problems-with.html
>> 
>> I feel that using this approach might solve your issue too, without changing
>> the iptables driver...
>> 
>> On Fri, Jul 20, 2012 at 12:58 PM, Wael Ghandour (wghandou)
>> <wghandou at cisco.com> wrote:
>>> 
>>> 
>>> Yup, that has definitely helped, thanks a bunch Xu.
>>> 
>>> 
>>> Regards,
>>> 
>>> Wael
>>> 
>>> 
>>> 
>>> On Jul 20, 2012, at 8:09 AM, Xu (Simon) Chen wrote:
>>> 
>>> Yes, one solution is to modify the iptables driver so that you don't SNAT
>>> traffic between internal subnets...
>>> 
>>> So, at the beginning of the nova-network-floating-snat rules, you add
>>> something like this:
>>> -A nova-network-floating-snat -s 10.0.0.0/24 -d 10.0.0.0/24 -j ACCEPT
>>> ...
>>> -A nova-network-floating-snat -s 10.0.88.16/32 -j SNAT --to-source pub1
>>> -A nova-network-floating-snat -s 10.0.16.7/32 -j SNAT --to-source pub2
>>> -A nova-network-floating-snat -s 10.0.4.11/32 -j SNAT --to-source pub3
>>> 
>>> Then it should solve the unnecessary NATting issue...
>>> 
>>> On Fri, Jul 20, 2012 at 10:13 AM, Wael Ghandour (wghandou)
>>> <wghandou at cisco.com> wrote:
>>>> 
>>>> 
>>>> I can confirm that VM traffic is being NATed to its floating IP on the
>>>> private interface of the nova-compute node when it tries to reach the
>>>> private addresses of VMs belonging to the same tenant on other compute
>>>> nodes. That obviously breaks internal connectivity...
>>>> 
>>>> 
>>>> Regards,
>>>> 
>>>> Wael
>>>> 
>>>> 
>>>> 
>>>> On Jul 20, 2012, at 5:42 AM, Xu (Simon) Chen wrote:
>>>> 
>>>> There was an issue that we saw in an earlier nova-network...
>>>> 
>>>> Due to the multi_host configuration, nova-network runs on every
>>>> nova-compute node, so floating IP assignment happens directly on the
>>>> compute nodes. As a result, private->public SNAT happens unnecessarily
>>>> between two VMs of the same tenant on different hosts.
>>>> 
>>>> Not sure if this is fixed in Essex...
>>>> 
>>>> On Fri, Jul 20, 2012 at 3:49 AM, Edgar Magana (eperdomo)
>>>> <eperdomo at cisco.com> wrote:
>>>>> 
>>>>> Folks,
>>>>> 
>>>>> 
>>>>> 
>>>>> We are using Essex for our multi-host OpenStack deployment with Vlan
>>>>> Manager.
>>>>> 
>>>>> All the private IPs are working as expected in a multi-tenant scenario,
>>>>> but the problem that we are seeing is with Floating IPs.
>>>>> 
>>>>> 
>>>>> 
>>>>> We have three tenants. All of them are able to use Floating IPs, and
>>>>> their VMs are reachable from the public network, but inter-VM
>>>>> connectivity via private IPs is totally lost. Once we dissociate the
>>>>> Floating IPs from the corresponding VMs, the connectivity comes back.
>>>>> The odd part is that we are seeing this behavior in just two of the
>>>>> three tenants that we have tested so far.
>>>>> 
>>>>> 
>>>>> 
>>>>> Is anyone aware of any bug or misconfiguration in Nova-network that
>>>>> could explain this behavior? We will be running more tests and can
>>>>> provide detailed information about our environment if needed.
>>>>> 
>>>>> 
>>>>> 
>>>>> Thanks for your help,
>>>>> 
>>>>> 
>>>>> 
>>>>> Edgar
>>>>> 
>>>>> 
>>>>> _______________________________________________
>>>>> Mailing list: https://launchpad.net/~openstack
>>>>> Post to     : openstack at lists.launchpad.net
>>>>> Unsubscribe : https://launchpad.net/~openstack
>>>>> More help   : https://help.launchpad.net/ListHelp
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 




