[openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

Kevin Benton blak111 at gmail.com
Sun Mar 29 13:45:05 UTC 2015


Does the decision about the floating IP have to be based on whether the
private IP was the original destination, or could you get by with rules on
the L3 agent that skip NAT just based on the destination being in a
configured set of CIDRs?

If you could get by with the latter, it would be a much simpler problem to
solve. However, I suspect you will want the former to be able to connect to
floating IPs internally as well.
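
Roughly, the latter could be hand-wired in the router namespace with plain
iptables. A minimal sketch of what I mean, assuming a placeholder namespace
name and an RFC 1918 CIDR list (nothing here is existing Neutron code, and
it needs root plus iptables/iproute2 on the host):

#!/usr/bin/env python3
# Rough sketch only, not Neutron code: skip floating-IP SNAT in a router
# namespace for destinations in a configured set of CIDRs. The namespace
# name and the CIDR list are placeholders.
import subprocess

ROUTER_NS = "qrouter-<router-uuid>"   # placeholder, substitute a real namespace
NO_NAT_CIDRS = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

def run_in_ns(*cmd):
    # Run a command inside the router's network namespace.
    subprocess.run(["ip", "netns", "exec", ROUTER_NS, *cmd], check=True)

for cidr in NO_NAT_CIDRS:
    # ACCEPT in the nat POSTROUTING chain ends NAT processing for the
    # packet, so traffic to these destinations keeps the instance's fixed IP.
    run_in_ns("iptables", "-t", "nat", "-I", "POSTROUTING", "1",
              "-d", cidr, "-j", "ACCEPT")
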
On Mar 28, 2015 12:24 PM, "Steve Wormley" <openstack at wormley.com> wrote:

> On Sat, Mar 28, 2015 at 1:57 AM, Kevin Benton <blak111 at gmail.com> wrote:
>
>> You want floating IPs at each compute node, and DVR with VLAN support got
>> you close. Are the floating IPs okay being on a different network/VLAN?
>>
>
> I should clarify: the floating IPs are publicly routable addresses, while
> the instances themselves are on RFC 1918 space. This is the 'standard'
> neutron and nova-network floating IP model. Nothing really special there.
>
>> Which address do you expect the source to be when an instance communicates
>> outside of its network (no existing connection state)? You mentioned having
>> the L3 agent ARP for a different gateway; do you still want the floating IP
>> translation to happen before that? Is there any case where it should ever
>> be via the private address?
>>
>
> Instances with assigned floating IP addresses that initiate connections are
> NATted and go out via the floating IP. In reality, we special-case all
> RFC 1918 space so it doesn't trigger the floating IP.
>
>
>> The header mangling is to make up for the fact that traffic coming to the
>> floating IP gets translated by the L3 agent before it makes it to the
>> instance, so there is no way to distinguish whether the floating IP or
>> private IP was targeted. Is that correct?
>>
>
> Basically. Traffic coming in on a tenant VLAN to the instance is mangled
> by the first OVS rule it hits to indicate it came in via a private
> interface/subnet/VLAN. It then hits iptables on the instance Linux bridge,
> which turns the header bits into a conntrack mark. Outbound packets from
> the instance for that connection get the conntrack mark changed back to a
> header bit. The packet then hits iptables in the qrouter namespace, where
> it's turned back into a normal fwmark/nfmark. That mark is used to disable
> NAT for the packet and flags the ip rules not to send the packet to the
> FIP namespace but to instead let it flow normally.
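
If I'm reading that right, the qrouter-namespace half would look roughly
like the sketch below if done by hand (purely hypothetical: the namespace
name, ToS bit and mark value are made up, and the OVS/CONNMARK side on the
compute node isn't shown; needs root):

#!/usr/bin/env python3
# Hypothetical sketch, not the actual implementation: in the qrouter
# namespace, turn a surviving header bit (a ToS bit stands in for the
# "header bit" above) back into an fwmark, then use the mark to skip
# floating-IP SNAT and to stay off the route toward the FIP namespace.
import subprocess

ROUTER_NS = "qrouter-<router-uuid>"   # placeholder namespace name
TOS_BIT = "0x04"                      # hypothetical header bit carrying the flag
MARK = "0x4"                          # hypothetical fwmark value

def ns(*cmd):
    # Run a command inside the router's network namespace.
    subprocess.run(["ip", "netns", "exec", ROUTER_NS, *cmd], check=True)

# 1. Convert the surviving header bit back into an fwmark as the packet
#    enters the router namespace from the instance side.
ns("iptables", "-t", "mangle", "-A", "PREROUTING",
   "-m", "tos", "--tos", TOS_BIT, "-j", "MARK", "--set-mark", MARK)

# 2. Marked packets skip floating-IP SNAT (ACCEPT in the nat table ends
#    NAT processing for them).
ns("iptables", "-t", "nat", "-I", "POSTROUTING", "1",
   "-m", "mark", "--mark", MARK, "-j", "ACCEPT")

# 3. Keep marked packets on the normal routing path instead of the policy
#    route that would send them toward the FIP namespace.
ns("ip", "rule", "add", "fwmark", MARK, "lookup", "main", "priority", "100")
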
>
> Of course, all this horribleness is because the veth drivers in Linux wipe
> the SKB mark (fwmark/nfmark), so I have no way to persistently track a
> packet across the OVS-veth-Linux bridge boundaries.
>
> -Steve Wormley
>

