[openstack-dev] About DVR limit

joehuang joehuang at huawei.com
Thu Jan 15 02:14:25 UTC 2015


Hi, Swami,

Thanks for your reply.

Maybe I did not explain clearly, my suggestion is:

Add one more configuration, enable_distributed_FIP to configure the FIP will be run as distributed mode or centralized mode.

If enable_distributed_FIP = True, then FIP can be hosted both on the centralized node (dvr_snat) and on the distributed "dvr" compute nodes.
If enable_distributed_FIP = False, then FIP can be hosted only on the centralized node (dvr_snat).
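
To make the suggestion concrete, here is a minimal sketch of how such an option might be registered with oslo.config; note that enable_distributed_fip is only the proposal in this thread, not an existing Neutron option, and the names below are illustrative.

    # Hypothetical sketch: registering the proposed enable_distributed_fip
    # option with oslo.config. This option does not exist in Neutron; it is
    # only the suggestion made in this thread.
    from oslo_config import cfg

    dvr_fip_opts = [
        cfg.BoolOpt('enable_distributed_fip',
                    default=True,
                    help='If True, floating IPs are hosted on both the '
                         'centralized (dvr_snat) node and the distributed '
                         '"dvr" compute nodes. If False, floating IPs are '
                         'hosted only on the centralized (dvr_snat) node.'),
    ]

    cfg.CONF.register_opts(dvr_fip_opts)

    def fip_is_distributed(conf=cfg.CONF):
        # The L3 plugin/agent could consult this flag when deciding where
        # floating IP processing should be scheduled.
        return conf.enable_distributed_fip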

Best Regards
Chaoyi Huang ( Joe Huang )


From: Vasudevan, Swaminathan (PNB Roseville) [mailto:swaminathan.vasudevan at hp.com]
Sent: Wednesday, January 14, 2015 1:41 PM
To: joehuang; OpenStack Development Mailing List (not for usage questions); 龚永生
Subject: RE: [openstack-dev] About DVR limit

Hi Joehuang,
As of Juno, FIP can be hosted both on the centralized node (dvr_snat) and on the distributed "dvr" compute nodes.
Thanks
Swami

From: joehuang [mailto:joehuang at huawei.com]
Sent: Tuesday, January 13, 2015 7:49 PM
To: OpenStack Development Mailing List (not for usage questions); 龚永生
Cc: Vasudevan, Swaminathan (PNB Roseville)
Subject: RE: [openstack-dev] About DVR limit

Hi, Swami,

I would like to know whether FIP under DVR can be configured in distributed mode or centralized mode in Kilo; I could not find relevant information at http://specs.openstack.org/openstack/neutron-specs/.

For example, it would be helpful for the following FIP use cases: 1) FIP is handled centrally by dedicated hardware; 2) not all compute nodes have a public address.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Swaminathan Vasudevan [mailto:souminathan at yahoo.com]
Sent: Wednesday, January 14, 2015 11:05 AM
To: 龚永生
Cc: swaminathan.vasudevan at hp.com; openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] About DVR limit

Hi Yong Sheng,
Yes, your understanding is right.
We only support the OVS driver.
Regarding HA and multi-network support, they will be available in Kilo.

It is also true that north-south traffic consumes a public address on each compute node. But to address this issue we do have a proposal that has been worked out by the L3 sub-team.

I hope this clarifies your doubts.

Please let me know if I can help you with anything else.

Thanks
Swami

Sent from my iPad

On Jan 13, 2015, at 6:01 PM, 龚永生 <gong.yongsheng at 99cloud.net> wrote:
Hi,
 I am Yong Sheng Gong. I want to know whether DVR has these limits, besides those documented at http://docs.openstack.org/networking-guide/content/ha-dvr.html:

1. One subnet can be connected to a DVR router only once, which is confirmed by the blueprint https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr-multigateway
2. One network cannot have more than one subnet connected to DVR routers


So DVR limits the Neutron model to:
one network has just one subnet, and one subnet cannot connect to more than one DVR router.
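
For illustration, the operations these limits affect can be sketched with python-neutronclient; the credentials, endpoint, and subnet IDs below are placeholders, and the commented-out call marks the case reported to be rejected.

    # Illustrative sketch using python-neutronclient to show the operations
    # the reported limits apply to. Credentials, endpoint, and subnet IDs
    # are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    SUBNET_A = '<uuid of a subnet on network N>'        # placeholder
    SUBNET_B = '<uuid of another subnet on network N>'  # placeholder

    # Create a distributed router (the 'distributed' attribute comes from
    # the DVR extension and is admin-only).
    router = neutron.create_router(
        {'router': {'name': 'dvr-router', 'distributed': True}})['router']

    # Attaching the first subnet works.
    neutron.add_interface_router(router['id'], {'subnet_id': SUBNET_A})

    # Per limit 1, attaching the same subnet again is rejected; per limit 2
    # as reported, attaching a second subnet from the same network would
    # also fail:
    # neutron.add_interface_router(router['id'], {'subnet_id': SUBNET_B})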


P.S.
Requirements and limitations documented at http://docs.openstack.org/networking-guide/content/ha-dvr.html:
 DVR requirements

- You must use the ML2 plug-in for Open vSwitch (OVS) to enable DVR.

- Be sure that your firewall or security groups allow UDP traffic over the VLAN, GRE, or VXLAN port to pass between the compute hosts.

 DVR limitations

- Distributed virtual router configurations work with the Open vSwitch Modular Layer 2 driver only for Juno.

- In order to enable true north-south bandwidth between hypervisors (compute nodes), you must use public IP addresses for every compute node and enable floating IPs.

- For now, based on the current Neutron design and architecture, DHCP cannot become distributed across compute nodes.

Thanks,
Yong Sheng Gong