[Openstack-operators] Two regions and so two metadata servers sharing the same VLAN

Kris G. Lindgren klindgren at godaddy.com
Fri Dec 4 01:27:34 UTC 2015


Not sure what you can do on your VMware-backed boxes, but on the KVM compute nodes you can run nova-api-metadata locally.  We do this by binding 169.254.169.254 to loopback (technically any non-ARPing interface would work) on each hypervisor.  If I recall correctly, setting the metadata host to 127.0.0.1 adds the correct iptables rules when the nova-api-metadata service starts up.  You can then block requests for 169.254.169.254 from leaving/entering the server on external interfaces, which keeps all metadata requests local to the KVM node.

We do this on all of our hypervisors (minus the blocking of metadata traffic from leaving the hypervisor) and are running with flat networks in neutron.  Assuming that keeps all the KVM metadata requests local, you could then run the metadata service normally on the network to serve the VMware clusters, assuming you can't do something similar on those boxes.
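
Roughly something like this on each hypervisor (untested as written here, option names from memory, so double-check them against your nova release; eth0 stands in for whatever your external interface is):

    # /etc/nova/nova.conf on each KVM hypervisor (runs its own nova-api-metadata)
    [DEFAULT]
    metadata_host = 127.0.0.1
    metadata_listen = 169.254.169.254

    # bind the metadata address to a non-ARPing interface (loopback works)
    ip addr add 169.254.169.254/32 dev lo

    # and something like this to keep 169.254.169.254 off the external interface
    iptables -I OUTPUT -o eth0 -d 169.254.169.254 -j DROP
    iptables -I INPUT -i eth0 -d 169.254.169.254 -j DROP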

I haven't done/tried this… but you could also use the extra DHCP options to inject a different route to the metadata service per region via DHCP or config-drive.  Assuming the traffic gets routed to the metadata server for 169.254.169.254, you could bind the metadata address to a non-ARPing interface and everything should be fine.
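
For example, something like this (untested; the subnet names and next-hop IPs are made up, and the exact --host-routes syntax varies a bit between CLI versions) would push a classless static route out via DHCP:

    # region A: route metadata via region A's DHCP port (10.0.0.2 here)
    neutron subnet-update SUBNET_A --host-routes type=dict list=true \
        destination=169.254.169.254/32,nexthop=10.0.0.2

    # region B: route metadata via region B's DHCP port (10.0.0.130 here)
    neutron subnet-update SUBNET_B --host-routes type=dict list=true \
        destination=169.254.169.254/32,nexthop=10.0.0.130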

I am not sure if VMware supports config drive.  If it does, then you could simply not run the metadata service and use config drive with cloud-init instead, assuming of course that you are OK with the fact that the metadata on a config drive never changes once the VM is booted.  With a fairly small patch you can also make nova always inject the networking information into the config drive, even for neutron networks with DHCP enabled, and then statically IP your boxes from the config drive instead of DHCP.  This is what we do: DHCP is for backup only, and all of our images are configured with cloud-init to statically IP from the config drive on boot.
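
Forcing the config drive is just a nova flag (per compute node, or per boot); roughly, and worth checking against your release:

    # /etc/nova/nova.conf on the compute nodes
    [DEFAULT]
    force_config_drive = true    # older releases want the string 'always' here

    # or per instance at boot time
    nova boot --config-drive true --image IMAGE --flavor FLAVOR --nic net-id=NET_ID myvm
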
___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Kevin Benton <blak111 at gmail.com>
Date: Thursday, December 3, 2015 at 5:29 PM
To: Gilles Mocellin <gilles.mocellin at nuagelibre.org>
Cc: OpenStack Operators <openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] Two regions and so two metadata servers sharing the same VLAN

Well, if that's the case, then metadata wouldn't work for any instance that ARPed for the address and got the wrong response first.

On Thu, Dec 3, 2015 at 3:56 PM, Gilles Mocellin <gilles.mocellin at nuagelibre.org> wrote:
Hmm, I don't think so. Things like the hostname are only known by the Neutron instance of one region...

On 03/12/2015 00:01, Kevin Benton wrote:
Are both metadata servers able to provide metadata for all instances on both sides? If so, why not disable isolated metadata on one of the sides so that only one of the DHCP agents will respond?
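
That is the enable_isolated_metadata flag in the DHCP agent config, so on the region that should stay quiet it would be something like:

    # /etc/neutron/dhcp_agent.ini on the region that should not answer metadata
    [DEFAULT]
    enable_isolated_metadata = False

    # then restart the agent, e.g.
    service neutron-dhcp-agent restart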


On Thu, Nov 26, 2015 at 6:49 AM, <gilles.mocellin at nuagelibre.org> wrote:

    Hello stackers!

    Sorry, I also cross-posted that question here
    https://ask.openstack.org/en/question/85195/two-regions-and-so-two-metadata-servers-sharing-the-same-vlan/

    But I think I can reach a wider audience here.

    So here's my problem.

    I'm facing a non-conventional situation. We're building a
    two-region cloud to separate a VMware backend and a KVM one, but
    both regions share the same two VLANs where we connect all our
    instances.

    We don't use routers, private networks or floating IPs... I've
    enabled enable_isolated_metadata, so the metadata IP lives inside
    the DHCP namespace and the created instances get a static route to
    it via the DHCP port's IP. The two DHCP servers could have been a
    problem, but we will use separate IP ranges, and since Neutron sets
    static leases keyed on the instances' MAC addresses, they should
    not interfere.
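
    As an illustration (addresses made up), the two regions would carve
    the shared subnet into non-overlapping allocation pools, something
    like:

        # region A's neutron
        neutron subnet-create sharednet 10.0.0.0/24 --name shared-a \
            --allocation-pool start=10.0.0.10,end=10.0.0.127

        # region B's neutron
        neutron subnet-create sharednet 10.0.0.0/24 --name shared-b \
            --allocation-pool start=10.0.0.130,end=10.0.0.250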

    The question I've been asked is whether we will have network
    problems with the metadata server IP 169.254.169.254, which will
    exist in two namespaces on two Neutron nodes but on the same VLAN.
    They will both send ARP replies with different MAC addresses, which
    might disrupt access to the metadata URL from the instances.

    Tcpdump shows nothing wrong, but I can't really test yet because we
    don't have both regions up. What do you think?

    Of course, the question is not about why we chose to have two
    regions. I would have chosen host aggregates to separate VMware and
    KVM, but Cinder and Glance would have had to be configured the same
    way, and with VMware that's not really feasible.

    Also, if we can, we will try to have separate networks for each
    region, but that involves a lot of bureaucracy here...





--
Kevin Benton


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



--
Kevin Benton

