[Openstack-operators] Two regions and so two metadata servers sharing the same VLAN

Kevin Benton blak111 at gmail.com
Wed Dec 2 23:01:48 UTC 2015


Are both metadata servers able to provide metadata for all instances on
both sides? If so, why not disable isolated metadata on one of the sides so
that only one of the DHCP agents responds?
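Concretely, that would be a change along these lines in the DHCP agent's config on the region that should stop answering (the path below is the usual default and may differ on your distro):

```ini
# /etc/neutron/dhcp_agent.ini on the network node of the region
# that should no longer serve metadata (default path; adjust
# for your distro)
[DEFAULT]
# Don't spawn a metadata proxy in the qdhcp namespace and don't
# push the 169.254.169.254 host route to instances
enable_isolated_metadata = False
```

Restart neutron-dhcp-agent afterwards so the change takes effect.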


On Thu, Nov 26, 2015 at 6:49 AM, <gilles.mocellin at nuagelibre.org> wrote:

> Hello stackers !
>
> Sorry, I also cross-posted that question here
> https://ask.openstack.org/en/question/85195/two-regions-and-so-two-metadata-servers-sharing-the-same-vlan/
>
> But I think I can reach a wider audience here.
>
> So here's my problem.
>
> I'm facing a non-conventional situation. We're building a two-region
> cloud to separate a VMware backend and a KVM one, but both regions share
> the same 2 VLANs where we connect all our instances.
>
> We don't use routers, private networks, floating IPs... I've enabled
> enable_isolated_metadata, so the metadata IP lives inside the DHCP
> namespace and instances get a static route to it via the DHCP server's IP.
> The two DHCP servers could have been a problem, but we will use separate
> IP ranges, and since Neutron sets static leases keyed on the instances'
> MAC addresses, they should not interfere.
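> For illustration, those static leases end up in the dnsmasq host file
> that the DHCP agent maintains, keyed on MAC (hypothetical addresses; the
> network UUID in the path is a placeholder):
>
> ```
> # /var/lib/neutron/dhcp/<network-uuid>/host
> fa:16:3e:11:22:33,host-10-10-1-5.openstacklocal,10.10.1.5
> fa:16:3e:44:55:66,host-10-10-2-7.openstacklocal,10.10.2.7
> ```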
>
> The question I've been asked is whether we will have network problems with
> the metadata server IP 169.254.169.254, which will exist in 2 namespaces
> on 2 neutron nodes but on the same VLAN. They will send ARP replies with
> different MAC addresses, which might disrupt access to the metadata URL
> from the instances.
>
> Tcpdump shows nothing wrong, but I can't really test yet because we
> don't have the two regions in place. What do you think?
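> Once both regions exist, one check would be to ARP for the metadata IP
> directly from a test instance and count the distinct MACs that answer
> (arping from iputils; the interface name is an assumption):
>
> ```
> # run inside an instance on the shared VLAN; eth0 is an assumption
> sudo arping -I eth0 -c 5 169.254.169.254
> ```
>
> Replies coming back from two different MAC addresses would confirm that
> both qdhcp namespaces answer for 169.254.169.254 on the VLAN.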
>
> Of course, the question is not about why we chose to have two regions. I
> would have preferred host aggregates to separate VMware and KVM, but then
> Cinder and Glance would have had to be configured the same way for both
> backends, and with VMware that's not really feasible.
>
> Also, if we can, we will try to have separate networks for each region,
> but that involves a lot of bureaucracy here...
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Kevin Benton

