Networks for different availability zones in Horizon
Fabian Zimmermann
dev.faz at gmail.com
Wed Dec 22 07:24:37 UTC 2021
Hi,
maybe this helps:
https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html
You should be able to define one network to be used for all sites, and
OpenStack does the magic in the background.
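Roughly, with routed provider networks you create one network with several
segments (one per site) and a subnet per segment. A minimal sketch with
openstacksdk, assuming admin credentials; the cloud name, physnets, VLAN IDs
and CIDR below are just placeholders for your own ML2 config:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

    # One shared network, created with its first (site 1) segment.
    net = conn.network.create_network(
        name='multisite',
        provider_network_type='vlan',
        provider_physical_network='physnet-dc1',   # assumed physnet name
        provider_segmentation_id=2020,             # assumed VLAN id
    )

    # Add a second segment mapping the same network onto site 2.
    seg_dc2 = conn.network.create_segment(
        network_id=net.id,
        network_type='vlan',
        physical_network='physnet-dc2',
        segmentation_id=2021,
    )

    # Each segment gets its own subnet (a real setup would also add one
    # for the implicit dc1 segment); ports are then placed on whichever
    # segment is reachable from the chosen compute host.
    conn.network.create_subnet(
        network_id=net.id,
        segment_id=seg_dc2.id,
        ip_version=4,
        cidr='10.20.0.0/24',    # assumed CIDR for the dc2 segment
    )

Compute hosts then only need connectivity to their own site's segment, and
Neutron/Nova take care of picking the right one at boot time.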
Fabian
On Tue, 21 Dec 2021 at 17:15, Jonathan Mills <jonmills at gmail.com> wrote:
>
> Thank you, Hai Wu, for raising the visibility of this issue. We at NASA Goddard want to echo this sentiment wholeheartedly. It’s a serious pain point!
>
> Our production OpenStack (Wallaby) cloud spans two AZs (different datacenters) right now, and before we’re done we may have 3 or 4 AZs. We use non-overlapping VLAN-based networks in different datacenters. Moreover, we have different kinds of server hardware in each datacenter (often recycled HPC compute nodes). We also have different Cinder storage backends in each datacenter.
>
> What we end up having is AZs in Nova, AZs in Neutron, AZs in Cinder, and flavors and Glance images that are rather AZ-dependent (in order to best fit the node geometry). We instantiate these objects with explicit AZs when they support it; in the case of Neutron networks, we can use availability zone hints.
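> To make that concrete, a rough sketch of how those explicit AZs might be set with openstacksdk (the cloud name, AZ name and resource names below are made up, and Neutron AZ hints are only hints, not hard constraints):
>
>     import openstack
>
>     conn = openstack.connect(cloud='prod')  # placeholder cloud name
>
>     # Neutron network we only want used in one datacenter's AZ.
>     conn.network.create_network(
>         name='dc1-vlan200',
>         availability_zone_hints=['dc1'],   # hypothetical AZ name
>     )
>
>     # Cinder volume created explicitly in the matching storage AZ.
>     conn.block_storage.create_volume(
>         name='dc1-data',
>         size=100,
>         availability_zone='dc1',
>     )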
>
> The real crux of the problem, though, is end-user education, because OpenStack clients (Horizon, OSC) are perfectly happy to let the end user try to build impossible combinations. We strongly feel that if the individual services aren’t going to communicate AZ data to each other via RPC, then the clients at least should do some kind of filtering on AZs or on AZ scheduler hints. I'm not entirely sure how you'd solve this easily on the CLI without RPC communication, but Horizon could (should?) dynamically filter the different menus as you select options: e.g. the first selection screen, ‘Details’, has you choose an AZ. Once the user makes a selection there, the flavor, storage, and network options should adjust accordingly.
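> To make the filtering idea concrete, something along these lines is what we have in mind (pure illustration, not existing Horizon or OSC behaviour):
>
>     def networks_for_az(conn, az):
>         """Yield networks usable in the AZ picked on the 'Details' step."""
>         for net in conn.network.networks():
>             hints = net.availability_zone_hints or []
>             # A network without AZ hints is assumed usable everywhere.
>             if not hints or az in hints:
>                 yield net
>
> Horizon would simply re-run a filter like this (and similar ones for flavors and volume types) whenever the AZ selection on the ‘Details’ step changes.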
>
>
> Jonathan
>
>
> > On Dec 18, 2021, at 9:44 PM, hai wu <haiwu.us at gmail.com> wrote:
> >
> > We are not using vxlan, so no network spans across availability zones.
> > It seems there is no built-in way to configure Horizon to do this. The
> > problem is that if a user picks the wrong network and availability
> > zone combination, the VM ends up in the wrong state. That is why it
> > would be ideal to show only the networks available in the availability
> > zone that the user has already picked.
> >
> > I am wondering how to modify the Horizon source code to achieve this,
> > but I am not familiar with Angular, and the workflow code is pretty
> > complex for a newbie.
> >
> > On Sat, Dec 18, 2021 at 6:31 PM Sean Mooney <smooney at redhat.com> wrote:
> >>
> >> On Sat, 2021-12-18 at 13:45 -0600, hai wu wrote:
> >>> In the Horizon "Launch Instance" workflow, the 4th step, selecting
> >>> networks for the new instance, always shows all the available
> >>> networks from all availability zones, regardless of which
> >>> "Availability Zone" was picked in the 1st step, "Details".
> >>>
> >>> I tried updating a DB field with an availability zone hint for the
> >>> relevant network, but that did not help.
> >>>
> >>> Is there any way to ensure that in the Horizon "Launch Instance" GUI
> >>> workflow, after a user picks an availability zone in step 1, only the
> >>> networks related to that availability zone are shown as available
> >>> networks in step 4?
> >>
> >> networks do not have any affinity to availability zones.
> >> there is no mapping between neutron physnets and the nova host aggregates which are used to
> >> model availability zones.
> >>
> >> when using tunneled networks like vxlan it is assumed that all hosts in a cloud, across all availability zones, can access
> >> any tenant vxlan network. the same is also true of vlan or flat networks, the only exception being that they have a physnet
> >> associated with them. physnets may not be available on all hosts, but there is no correlation between physnets and availability
> >> zones unless you manually align them.
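> >> a hedged sketch of what that manual alignment could look like with openstacksdk (the aggregate, host, physnet and az names here are assumptions, nothing that exists by default):
> >>
> >>     import openstack
> >>
> >>     conn = openstack.connect(cloud='admin')   # placeholder cloud name
> >>
> >>     # nova side: an aggregate that defines the AZ and contains only the
> >>     # hosts wired to that site's physnet (the bridge_mappings on those
> >>     # hosts must expose e.g. physnet-dc1).
> >>     agg = conn.compute.create_aggregate(name='dc1-hosts',
> >>                                         availability_zone='dc1')
> >>     conn.compute.add_host_to_aggregate(agg, 'compute-dc1-01')
> >>
> >>     # neutron side: networks on that physnet get matching AZ hints so
> >>     # the alignment is at least visible to users and tooling.
> >>     conn.network.create_network(
> >>         name='dc1-vlan200',
> >>         provider_network_type='vlan',
> >>         provider_physical_network='physnet-dc1',
> >>         provider_segmentation_id=200,
> >>         availability_zone_hints=['dc1'],
> >>     )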
> >>>
> >>
> >
>
>