[openstack-dev] [Quantum] continuing todays discussion about the l3 agents

gong yong sheng gongysh at linux.vnet.ibm.com
Sat Dec 1 01:31:31 UTC 2012


On 12/01/2012 07:49 AM, Vinay Bannai wrote:
> Gary and Mark,
>
> You brought up the issue of scaling horizontally and vertically in 
> your earlier email. In the case of horizontal scaling, I would agree 
> that it would have to be based on the "scheduler" approach proposed by 
> Gong and Nachi.
>
> On the issue of vertical scaling (I am using the DHCP redundancy as an 
> example), I think it would be good to base our discussions on the 
> various methods that have been discussed and do pro/con analysis in 
> terms of scale, performance and other such metrics.
>
> - Split scope DHCP (two or more servers split the IP address pool 
> with no overlap)
>   pros: simple
>   cons: wastes IP addresses
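For concreteness, a minimal sketch (Python; the subnet and split are 
made-up examples) of splitting one /24 pool between two servers with 
no overlap. The waste shows up because each server can only hand out 
its own half, even while the other half sits idle:

    import ipaddress

    subnet = ipaddress.ip_network('10.0.0.0/24')
    hosts = list(subnet.hosts())          # 10.0.0.1 .. 10.0.0.254
    half = len(hosts) // 2

    # Server A hands out the first half, server B the second half.
    # The ranges never overlap, so no lease state needs to be shared.
    scope_a = (hosts[0], hosts[half - 1])
    scope_b = (hosts[half], hosts[-1])
    print('server A:', scope_a[0], '-', scope_a[1])
    print('server B:', scope_b[0], '-', scope_b[1])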
>
> - Active/Standby model (might run VRRP or heartbeats to dictate 
> who is active)
>   pros: load evenly shared
>   cons: needs shared knowledge of address assignments,
>             needs heartbeats or VRRP to keep track of failovers
Another con is IP address waste: we need one VIP plus two or more 
addresses for the VRRP servers. (We can reuse the dhcp servers' IPs 
if we don't want to do load balancing behind the VRRP servers.)
It also makes the system more complicated.
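As a toy illustration (plain Python, not real VRRP) of the failover 
machinery this model needs, which is part of that added complexity; 
the timeout value is an arbitrary example:

    import time

    HEARTBEAT_TIMEOUT = 3.0   # seconds of silence before failover

    class StandbyAgent:
        """Standby that promotes itself if the active peer goes quiet."""

        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.active = False

        def on_heartbeat(self):
            # Called whenever the active peer's heartbeat (or VRRP
            # advertisement) arrives.
            self.last_heartbeat = time.monotonic()

        def maybe_promote(self):
            silent = time.monotonic() - self.last_heartbeat
            if not self.active and silent > HEARTBEAT_TIMEOUT:
                # Take over the VIP and start answering DHCP here.
                self.active = True
            return self.active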
>
> - LB method (use a load balancer to fan out to multiple dhcp servers)
>   pros: scales very well
>   cons: the lb becomes a single point of failure,
>            the lease assignments need to be shared between the dhcp 
> servers
>
The LB method also wastes IP addresses: first we need at least a VIP 
address, and then multiple dhcp servers running for one network. If 
we need VRRP for the VIP, we will need two or more extra addresses.
It also makes the system more complicated.
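A toy sketch of the fan-out (backend names are hypothetical); it also 
shows why the lease table must be shared, since any backend may end 
up answering for any client:

    import hashlib

    DHCP_BACKENDS = ['dhcp-1', 'dhcp-2', 'dhcp-3']

    # Shared lease table: every backend must see the same assignments,
    # or two backends could hand out conflicting answers.
    shared_leases = {}   # mac -> ip

    def pick_backend(client_mac):
        # Stable hash so retries from one client hit the same backend.
        digest = hashlib.md5(client_mac.encode()).digest()
        return DHCP_BACKENDS[digest[0] % len(DHCP_BACKENDS)]

    print(pick_backend('fa:16:3e:00:00:01'))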
> I see that the DHCP agent and the quantum server communicate using 
> RPC. Is the plan to leave this alone, or to migrate it towards 
> something like an AMQP-based server in the future when the 
> "scheduler" stuff is implemented?
I am not sure I follow your point, but the current RPC already runs 
over AMQP.
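Concretely, the notifications go to a topic exchange on the broker. A 
rough kombu-level sketch (the broker URL and payload are hypothetical, 
and the real code goes through the openstack.common.rpc wrappers 
rather than raw kombu):

    from kombu import Connection, Exchange, Producer

    exchange = Exchange('quantum', type='topic')
    with Connection('amqp://guest:guest@localhost//') as conn:
        producer = Producer(conn.channel(), exchange=exchange)
        producer.publish({'network_id': 'net-1'},
                         routing_key='dhcp_agent',
                         declare=[exchange])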
>
> Vinay
>
>
> On Wed, Nov 28, 2012 at 8:03 AM, Mark McClain 
> <mark.mcclain at dreamhost.com> wrote:
>
>
>     On Nov 28, 2012, at 8:03 AM, gong yong sheng
>     <gongysh at linux.vnet.ibm.com> wrote:
>
>     > On 11/28/2012 08:11 AM, Mark McClain wrote:
>     >> On Nov 27, 2012, at 6:33 PM, gong yong sheng
>     <gongysh at linux.vnet.ibm.com> wrote:
>     >>
>     >> Just wanted to clarify two items:
>     >>
>     >>>> At the moment all of the dhcp agents receive all of the
>     updates. I do not see why we need the quantum service to indicate
>     which agent runs where. This will change the manner in which the
>     dhcp agents work.
>     >>> No. Currently, we can run only one dhcp agent, since we
>     are using a topic queue for notifications.
>     >> You are correct.  There is a bug in the underlying Oslo RPC
>     implementation that sets the topic and queue names to the same
>     value.  I didn't get a clear explanation of this problem until
>     today and will have to figure out a fix in oslo.
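To make that concrete, here is a toy model (plain Python, not the 
real oslo code) of AMQP delivery: each message on a queue goes to 
exactly one of that queue's consumers, so when the topic and queue 
name are the same value, all agents share one queue and each 
notification reaches only one of them:

    from collections import defaultdict
    from itertools import cycle

    class ToyBroker:
        def __init__(self):
            self.bindings = defaultdict(list)    # topic -> queue names
            self.consumers = defaultdict(list)   # queue -> agent names
            self._rr = {}                        # queue -> round robin

        def bind(self, topic, queue, agent):
            if queue not in self.bindings[topic]:
                self.bindings[topic].append(queue)
            self.consumers[queue].append(agent)
            self._rr[queue] = cycle(self.consumers[queue])

        def publish(self, topic, msg):
            # A copy per bound queue, but only ONE consumer per queue.
            for queue in self.bindings[topic]:
                print(next(self._rr[queue]), 'got', msg, 'via', queue)

    # Buggy setup: queue name == topic, so both agents share one queue
    # and each notification reaches a single agent.
    b = ToyBroker()
    b.bind('dhcp_agent', 'dhcp_agent', 'agent-1')
    b.bind('dhcp_agent', 'dhcp_agent', 'agent-2')
    b.publish('dhcp_agent', 'port_create')     # one line printed

    # Fixed setup: one queue per agent bound to the same topic, so
    # every agent receives its own copy.
    b2 = ToyBroker()
    b2.bind('dhcp_agent', 'dhcp_agent.host1', 'agent-1')
    b2.bind('dhcp_agent', 'dhcp_agent.host2', 'agent-2')
    b2.publish('dhcp_agent', 'port_create')    # two lines printed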
>     >>
>     >>> And one problem with multiple agents serving the same IPs is:
>     >>> we will have more than one agent wanting to update an IP's
>     lease time now and then.
>     >> This is not a problem.  The DHCP protocol was designed for
>     multiple servers on a network.  When a client accepts a lease, the
>     server that offered the accepted lease will be the only process
>     attempting to update the lease for that port.  The other DHCP
>     instances will not do anything, so there won't be any chance for a
>     conflict.  Also, when a client renews it sends a unicast message
>     to that previous DHCP server and so there will only be one writer
>     in this scenario too.  Additionally, we don't have to worry about
>     conflicting assignments because the dhcp agents use the same
>     static allocations from the Quantum database.
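A toy model of that RFC 2131 flow (the MACs and addresses are made 
up): both servers offer from the same static table, but only the 
server named in the client's REQUEST commits the lease, so there is a 
single writer:

    class ToyDhcpServer:
        def __init__(self, server_id, allocations):
            self.server_id = server_id
            self.allocations = allocations   # static mac -> ip table
            self.leases = {}

        def offer(self, mac):
            return (self.server_id, self.allocations[mac])

        def request(self, mac, chosen_server_id):
            # Only the server named in the REQUEST commits the lease;
            # the others stay silent (RFC 2131, section 4.3.2).
            if chosen_server_id != self.server_id:
                return None
            self.leases[mac] = self.allocations[mac]
            return self.leases[mac]

    allocations = {'fa:16:3e:00:00:01': '10.0.0.5'}
    servers = [ToyDhcpServer('10.0.0.2', allocations),
               ToyDhcpServer('10.0.0.3', allocations)]

    offers = [s.offer('fa:16:3e:00:00:01') for s in servers]
    chosen, ip = offers[0]    # client accepts exactly one offer
    results = [s.request('fa:16:3e:00:00:01', chosen) for s in servers]
    assert results.count(None) == len(servers) - 1   # one writer only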
>     > I mean the dhcp agent is trying to update the lease time in
>     the quantum server. If we have more than one dhcp agent, this
>     will cause confusion:
>     >    def update_lease(self, network_id, ip_address, time_remaining):
>     >        try:
>     >            self.plugin_rpc.update_lease_expiration(
>     >                network_id, ip_address, time_remaining)
>     >        except:
>     >            self.needs_resync = True
>     >            LOG.exception(_('Unable to update lease'))
>     > I think this is a defect in our dhcp agent. Why does the dhcp
>     agent need the lease time at all? All the IPs are managed in the
>     quantum server; there is no need for dynamic IP management in
>     the dhcp server managed by the dhcp agent.
>
>     There cannot be confusion.  The dhcp client selects only one
>     server to accept a lease, so only one agent will update this field
>     at a time. (See RFC2131 section 4.3.2 for protocol specifics).
>      The dnsmasq allocation database is static in Quantum's setup, so
>     the lease renewal needs to propagate to the Quantum Server.  The
>     Quantum server then uses the lease time to avoid allocating IP
>     addresses before the lease has expired.  In Quantum, we add an
>     additional restriction that expired allocations are not reclaimed
>     until the associated port has been deleted as well.
>
>     mark
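A sketch of that reclaim rule (the field names are illustrative, not 
the actual Quantum schema): an expired allocation only becomes 
reusable once its port is gone as well:

    import time

    def reclaimable(allocation, now=None):
        now = now if now is not None else time.time()
        return (allocation['lease_expires'] < now
                and allocation['port_id'] is None)

    allocations = [
        {'ip': '10.0.0.5', 'lease_expires': time.time() - 60,
         'port_id': 'port-1'},            # expired, port still exists
        {'ip': '10.0.0.6', 'lease_expires': time.time() - 60,
         'port_id': None},                # expired and port deleted
    ]
    print([a['ip'] for a in allocations if reclaimable(a)])
    # -> only 10.0.0.6 may be handed out again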
>
>
>
>
>
>
> -- 
> Vinay Bannai
> Email: vbannai at gmail.com
> Google Voice: 415 938 7576
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
