[Openstack] multi_host networking, but not on all nodes?
Xu (Simon) Chen
xchenum at gmail.com
Tue Feb 7 22:43:25 UTC 2012
My two cents...
The current multi_host mode really assumes that if the NC (network
controller) running in dom0 is gone, all the VMs on that host are likely
screwed anyway.
With the middle ground you're describing, you'd need to handle NC
failures and load-balance across the NCs. You'd also need to worry about
traffic patterns, because once NCs run on only a subset of nodes, their
placement becomes important.
What you really need is a scalable NATter array that load-balances and
fails over across projects. I'm actually trying to prototype this now
with a colleague :-)
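To make the idea concrete, here's one way such an array could assign
projects to NAT nodes: a consistent-hash ring, so losing (or adding) a
node only remaps a small fraction of projects. This is just an
illustrative sketch, not the actual prototype; all names (`NatArray`,
`node_for`, the node labels) are made up.

```python
import bisect
import hashlib


def _stable_hash(key: str) -> int:
    """Stable hash so project placement survives process restarts
    (the builtin hash() is randomized per process)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class NatArray:
    """Toy model of a 'scalable NATter array': each project maps to
    one NAT node on a consistent-hash ring."""

    def __init__(self, nodes, replicas=64):
        self._ring = []  # sorted list of (hash, node) virtual points
        for node in nodes:
            self.add_node(node, replicas)

    def add_node(self, node, replicas=64):
        # Multiple virtual points per node smooth out the load split.
        for i in range(replicas):
            bisect.insort(self._ring, (_stable_hash(f"{node}:{i}"), node))

    def remove_node(self, node):
        """Fail-over: drop a dead node; its projects fall through to
        the next node clockwise on the ring."""
        self._ring = [(h, n) for (h, n) in self._ring if n != node]

    def node_for(self, project_id):
        """Return the NAT node responsible for this project."""
        h = _stable_hash(project_id)
        idx = bisect.bisect(self._ring, (h, ""))
        if idx == len(self._ring):  # wrap around the ring
            idx = 0
        return self._ring[idx][1]
```

The nice property for this use case is that `remove_node` reroutes only
the failed node's projects, while everyone else keeps their existing
NAT placement (and thus their flow state, if you can preserve it).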
-Simon
On Tue, Feb 7, 2012 at 4:27 PM, Nathanael Burton <
nathanael.i.burton at gmail.com> wrote:
> With the default networking there's a single nova-network service.
> With the --multi_host option, 'set_network_host' sets every instance
> to use its own host as the nova-network node, effectively requiring
> nova-network to run on every nova-compute host. The multi_host mode
> greatly helps HA and consolidates fault domains, but at the cost of
> increased complexity and IP sprawl when using the VLAN networking
> model, as each host in the zone now has to have an IP on every VLAN.
>
> What I think I'm looking for is a middle ground where you can run
> multiple nova-network nodes, but not equal to the number of compute
> nodes. Basically a similar ability as implemented with the
> nova-volume service; the ability to scale the nova-network nodes
> independently from the computes. The big downside is that you no
> longer have the benefit of combined fault domains (network/compute).
> Is any of this possible today? Does Quantum with Open vSwitch handle
> any of this?
>
> Thoughts?
>
> Thanks,
>
> Nate
>
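For anyone following along, the multi_host mode Nate describes is set
per network at creation time in this era of nova-manage. Roughly (flag
names follow the Essex-era admin guide; verify against your release):

```
# Hedged example: create a VLAN network with multi-host enabled, so
# each compute host runs nova-network for its own instances.
nova-manage network create --label=private \
    --fixed_range_v4=10.0.0.0/24 --num_networks=1 \
    --network_size=256 --multi_host=T
```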
>