[Openstack-operators] Two regions and so two metadata servers sharing the same VLAN
gilles.mocellin at nuagelibre.org
Fri Dec 4 14:36:17 UTC 2015
Static IP config can be a good option. DHCP is generally not well
perceived for servers, and we will have some long-lived VMs, not really
ephemeral ones.
But I don't see how cloud-init can really handle static configuration
(without scripting) for many different distributions, each with its own
way of configuring the network. And we also have Windows instances.
The easy way we decided to manage our problem, until we have separate
networks, is to configure the metadata server of one region to be on
a different IP address.
This involves a cloud-init config and a source modification of
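Assuming the alternate metadata address were, say, 169.254.169.253 (purely illustrative), the cloud-init side of that workaround could look like the following datasource override baked into the guest images; the exact config key depends on the cloud-init version:

```yaml
# /etc/cloud/cloud.cfg.d/10-metadata.cfg in the guest image
# Point the OpenStack datasource at the non-default metadata address.
datasource:
  OpenStack:
    metadata_urls: ['http://169.254.169.253']
```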
On 2015-12-04 02:27, Kris G. Lindgren wrote:
> Not sure what you can do on your VMware-backed boxes, but on the KVM
> compute nodes you can run nova-api-metadata locally. We do this by
> binding 169.254.169.254 to loopback (technically any non-ARPing
> interface would work) on each hypervisor. If I recall correctly,
> setting the metadata server address to 127.0.0.1 should add the correct
> iptables rules when the nova-api-metadata service starts up. You can
> then block requests for 169.254.169.254 from leaving/entering the
> server on external interfaces. That should keep all metadata requests
> local to the KVM server. We do this on all of our hypervisors
> (minus the blocking of metadata from leaving the hypervisor) and are
> running with flat networks in Neutron. Assuming that keeps all the
> KVM metadata requests local, you could then run metadata normally on
> the network to service the VMware clusters, assuming that you can't do
> something similar on those boxes.
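A minimal sketch of the hypervisor-local setup described above, assuming a Liberty-era nova-api-metadata on each KVM node; the interface name and the exact iptables chains are illustrative, not a verified recipe:

```shell
# Bind the metadata IP to loopback on the hypervisor (loopback never ARPs,
# so this does not conflict with other hosts on the VLAN):
ip addr add 169.254.169.254/32 dev lo

# /etc/nova/nova.conf on the compute node, so the DNAT rule nova manages
# sends instance metadata traffic to the local nova-api-metadata:
#   [DEFAULT]
#   metadata_host = 127.0.0.1
#   metadata_port = 8775

# Block metadata requests from leaving/entering on the external interface
# (eth0 is a placeholder for your uplink):
iptables -A FORWARD -o eth0 -d 169.254.169.254 -j DROP
iptables -A INPUT -i eth0 -d 169.254.169.254 -j DROP
```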
> I haven't done/tried this… but you could also use the extra DHCP
> options to inject specific and different routes to the metadata
> service via DHCP/config-drive. Assuming that the traffic gets routed
> to the metadata server for 169.254.169.254, you could bind the metadata
> address to a non-ARPing interface and everything should be fine.
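One way to sketch that DHCP-route idea with Neutron's extra-dhcp-opt extension (the port UUID and the 10.0.0.2 next hop are placeholders; `classless-static-route` is dnsmasq's name for DHCP option 121):

```shell
# Push a host route for the metadata IP via a chosen next hop to one port,
# so instances in each region route 169.254.169.254 to "their" server:
neutron port-update <port-uuid> \
  --extra-dhcp-opt opt_name=classless-static-route,opt_value=169.254.169.254/32,10.0.0.2
```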
> I am not sure if VMware supports config drive. If it does, then you
> could simply not run metadata services and use config drive with
> cloud-init instead. Assuming, of course, that you are OK with the fact
> that metadata never changes on the config drive once the VM is booted.
> You can also, with a fairly small patch, make it so that config drive
> always injects the networking information, even for Neutron
> networks with DHCP enabled. Then statically IP your boxes using config
> drive instead of DHCP. This is what we do: DHCP is for backup only; all
> of our images are configured with cloud-init to statically IP from
> config drive on boot.
> Kris Lindgren
> Senior Linux Systems Engineer
> From: Kevin Benton <blak111 at gmail.com>
> Date: Thursday, December 3, 2015 at 5:29 PM
> To: Gilles Mocellin <gilles.mocellin at nuagelibre.org>
> Cc: OpenStack Operators <openstack-operators at lists.openstack.org>
> Subject: Re: [Openstack-operators] Two regions and so two metadata
> servers sharing the same VLAN
> Well if that's the case then the metadata wouldn't work for every
> instance that ARP'ed for the address and got the wrong response first.
> On Thu, Dec 3, 2015 at 3:56 PM, Gilles Mocellin
> <gilles.mocellin at nuagelibre.org> wrote:
>> Hum, I don't think so. Things like the hostname are only known by
>> the Neutron instance of one region...
>>> On 03/12/2015 00:01, Kevin Benton wrote:
>>> Are both metadata servers able to provide metadata for all
>>> instances of both sides? If so, why not disable isolated metadata
>>> on one of the sides so only one of the DHCP agents will respond?
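The knob Kevin refers to lives in the DHCP agent configuration; disabling it on one region stops that region's dnsmasq namespaces from serving 169.254.169.254:

```ini
# /etc/neutron/dhcp_agent.ini on the region that should NOT answer metadata:
[DEFAULT]
enable_isolated_metadata = False
```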
>>> On Thu, Nov 26, 2015 at 6:49 AM, <gilles.mocellin at nuagelibre.org> wrote:
>>> Hello stackers !
>>> Sorry, I also cross-posted that question here
>>> But I think I can reach a wider audience here.
>>> So here's my problem.
>>> I'm facing a non-conventional situation. We're building a two-
>>> region cloud to separate a VMware backend and a KVM one. But both
>>> regions share the same 2 VLANs where we connect all our instances.
>>> We don't use routers, private networks, floating IPs... I've
>>> enabled enable_isolated_metadata, so the metadata IP is inside the
>>> dhcp namespace and there's a static route, pushed to the created
>>> instances, to it via the DHCP server's IP. The two DHCPs could have
>>> been a problem, but we will use separate IP ranges, and as Neutron
>>> sets static leases with the instances' MAC addresses, they should
>>> not conflict.
>>> The question I've been asked is whether we will have network
>>> problems with the metadata server IP 169.254.169.254, which will
>>> exist in 2 namespaces on 2 Neutron nodes but on the same VLAN.
>>> They will send ARP replies with different MACs, which could
>>> perturb access to the metadata URL from the instances.
>>> Tcpdump shows nothing wrong, but I can't really test now since
>>> we haven't got the two regions yet. What do you think?
>>> Of course, the question is not about why we chose to have two
>>> regions. I would have chosen host aggregates to separate VMware and
>>> KVM, but Cinder and Glance would have had to be configured the same
>>> way. And with VMware, that's not so feasible.
>>> Also, if we can, we will try to have separate networks for the two
>>> regions, but it involves a lot of bureaucracy here...
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>>> Kevin Benton
> Kevin Benton