[Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR
Florian Engelmann
florian.engelmann@everyware.ch
Wed Oct 24 16:02:04 UTC 2018
On 10/24/18 2:08 PM, Erik McCormick wrote:
>
>
> On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
> <florian.engelmann@everyware.ch> wrote:
>
> Ohoh - thank you for your empathy :)
> And those great details about how to set up this mgmt network.
> I will try to do so this afternoon, but to solve that routing "puzzle"
> (virtual network to control nodes) I will need our network guys to help
> me out...
>
> But will I need to tell all Amphorae a static route to the gateway that
> routes to the control nodes?
>
>
> Just set the default gateway when you create the neutron subnet. No need
> for excess static routes. The route on the other connection won't
> interfere with it as it lives in a namespace.
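>
> A minimal sketch of that subnet (name and CIDR are assumptions, not
> from this thread; neutron's DHCP hands the gateway to the amphorae):
>
> openstack subnet create --network lb-mgmt-net \
>   --subnet-range 172.31.0.0/16 --gateway 172.31.0.1 lb-mgmt-subnet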
My compute nodes have no br-ex and there is no L2 domain spread over all
compute nodes. As far as I understand, lb-mgmt-net is a provider network
that has to be flat or VLAN, and it will need a "physical" gateway (as
there is no virtual router).
So the question: is it possible to get Octavia up and running without a
br-ex (L2 domain spread over all compute nodes) on the compute nodes?
>
> On 10/23/18 6:57 PM, Erik McCormick wrote:
> > So in your other email you asked if there was a guide for
> > deploying it with kolla-ansible...
> >
> > Oh boy. No, there's not. I don't know if you've seen my recent mails
> > on Octavia, but I am going through this deployment process with
> > kolla-ansible right now and it is lacking in a few areas.
> >
> > If you plan to use different CA certificates for client and server in
> > Octavia, you'll need to add that into the playbook. Presently it only
> > copies over ca_01.pem, cacert.key, and client.pem and uses them for
> > everything. I was completely unable to make it work with only one CA
> > as I got some SSL errors. It passes gate though, so I assume it must
> > work? I dunno.
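> >
> > If you do split them, a minimal sketch of generating two independent
> > CAs with openssl (file names and validity are my assumptions):
> >
> > openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
> >   -keyout server_ca.key -out server_ca.pem -subj "/CN=octavia-server-ca"
> > openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
> >   -keyout client_ca.key -out client_ca.pem -subj "/CN=octavia-client-ca"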
> >
> > Networking comments and a really messy kolla-ansible / Octavia
> > how-to below...
> >
> > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> > <florian.engelmann@everyware.ch> wrote:
> >>
> >> On 10/23/18 3:20 PM, Erik McCormick wrote:
> >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> >>> <florian.engelmann@everyware.ch> wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> We did test Octavia with Pike (DVR deployment) and everything was
> >>>> working right out of the box. We have since changed our underlay
> >>>> network to a Layer 3 spine-leaf network and did not deploy DVR, as
> >>>> we didn't want to have that many cables in a rack.
> >>>>
> >>>> Octavia is not working right now, as the lb-mgmt-net does not
> >>>> exist on the compute nodes, nor does a br-ex.
> >>>>
> >>>> The control nodes are running:
> >>>>
> >>>> octavia_worker
> >>>> octavia_housekeeping
> >>>> octavia_health_manager
> >>>> octavia_api
> >>>>
> Amphora VMs, e.g.
>
> lb-mgmt-net 172.16.0.0/16, default GW
> >>>> and as far as I understand, octavia_worker, octavia_housekeeping
> >>>> and octavia_health_manager have to talk to the amphora instances.
> >>>> But the control nodes are spread over three different leafs, so
> >>>> each control node is in a different L2 domain.
> >>>>
> >>>> So the question is how to deploy a lb-mgmt-net network in our
> >>>> setup?
> >>>>
> >>>> - Compute nodes have no "stretched" L2 domain
> >>>> - Control nodes, compute nodes and network nodes are in L3 networks
> >>>>   like api, storage, ...
> >>>> - Only network nodes are connected to an L2 domain (with a separate
> >>>>   NIC) providing the "public" network
> >>>>
> >>> You'll need to add a new bridge to your compute nodes and create a
> >>> provider network associated with that bridge. In my setup this is
> >>> simply a flat network tied to a tagged interface. In your case it
> >>> probably makes more sense to make a new VNI and create a vxlan
> >>> provider network. The routing in your switches should handle the
> >>> rest.
> >>
> >> Ok, that's what I'm trying right now. But I don't get how to set up
> >> something like a VxLAN provider network. I thought only VLAN and flat
> >> are supported as provider networks? I guess it is not possible to use
> >> the tunnel interface that is used for tenant networks?
> >> So I have to create a separate VxLAN on the control and compute
> >> nodes like:
> >>
> >> # ip link add vxoctavia type vxlan id 42 dstport 4790 \
> >>     group 239.1.1.1 dev vlan3535 ttl 5
> >> # ip addr add 172.16.1.11/20 dev vxoctavia
> >> # ip link set vxoctavia up
> >>
> >> and use it like a flat provider network, true?
> >>
> > This is a fine way of doing things, but it's only half the battle.
> > You'll need to add a bridge on the compute nodes and bind it to that
> > new interface. Something like this if you're using openvswitch:
> >
> > docker exec openvswitch_db \
> >   /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
> >
> > Also you'll want to remove the IP address from that interface as it's
> > going to be a bridge. Think of it like your public (br-ex) interface
> > on your network nodes.
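> >
> > For example, dropping the address we added above (the compute-side
> > bridge needs no IP; traffic is bridged straight into the amphorae):
> >
> > ip addr del 172.16.1.11/20 dev vxoctavia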
> >
> > From there you'll need to update the bridge mappings via kolla
> > overrides. This would usually be in /etc/kolla/config/neutron. Create
> > a subdirectory for your compute inventory group and create an
> > ml2_conf.ini there. So you'd end up with something like:
> >
> > [root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
> > [ml2_type_flat]
> > flat_networks = mgmt-net
> >
> > [ovs]
> > bridge_mappings = mgmt-net:br-mgmt
> >
> > Run kolla-ansible --tags neutron reconfigure to push out the new
> > configs. Note that there is a bug where the neutron containers may
> > not restart after the change, so you'll probably need to do a 'docker
> > container restart neutron_openvswitch_agent' on each compute node.
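> >
> > That is, something like (the inventory path is an assumption):
> >
> > kolla-ansible -i /etc/kolla/multinode --tags neutron reconfigure
> > docker container restart neutron_openvswitch_agent   # per compute node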
> >
> > At this point, you'll need to create the provider network in the
> > admin project like:
> >
> > openstack network create --provider-network-type flat \
> >   --provider-physical-network mgmt-net lb-mgmt-net
> >
> > And then create a normal subnet attached to this network with some
> > largeish address scope. I wouldn't use 172.16.0.0/16 because docker
> > uses that by default. I'm not sure if it matters since the network
> > traffic will be isolated on a bridge, but it makes me paranoid so I
> > avoided it.
> >
> > For your controllers, I think you can just let everything function
> > off your api interface since you're routing in your spines. Set up a
> > gateway somewhere from that lb-mgmt network and save yourself the
> > complication of adding an interface to your controllers. If you
> > choose to use a separate interface on your controllers, you'll need
> > to make sure this patch is in your kolla-ansible install or cherry
> > pick it.
> >
> >
> > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
> >
> > I don't think that's been backported at all, so unless you're running
> > off master you'll need to go get it.
> >
> > From here on out, the regular Octavia instructions should serve you.
> > Create a flavor, create a security group, and capture their UUIDs
> > along with the UUID of the provider network you made. Override them
> > in globals.yml with:
> >
> > octavia_amp_boot_network_list: <uuid>
> > octavia_amp_secgroup_list: <uuid>
> > octavia_amp_flavor_id: <uuid>
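> >
> > For example (names, sizing, and the security group rule are my
> > assumptions; the controllers reach the amphora agent on TCP 9443):
> >
> > openstack flavor create --ram 1024 --disk 3 --vcpus 1 --private amphora
> > openstack security group create lb-mgmt-sec-grp
> > openstack security group rule create --protocol tcp --dst-port 9443 \
> >   lb-mgmt-sec-grp
> > openstack network show lb-mgmt-net -f value -c id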
> >
> > This is all from my scattered notes and bad memory. Hopefully it
> > makes sense. Corrections welcome.
> >
> > -Erik
> >
> >>
> >>
> >>>
> >>> -Erik
> >>>>
> >>>> All the best,
> >>>> Florian
>
--
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich
tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: florian.engelmann@everyware.ch
web: http://www.everyware.ch