[Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

Gaël THEROND gael.therond at gmail.com
Tue Oct 23 22:26:04 UTC 2018


For the record, I'm currently working on a fairly large overhaul of our
OpenStack services deployment using Kolla-Ansible. We're leveraging
kolla-ansible to migrate our whole legacy architecture, as smoothly as
possible, to a shiny new one using exactly the same topology as described
above (using Cumulus/Calico, etc.).

One of the new services we're trying to provide this way is Octavia.

Though I too faced some trouble, I found the issues not that hard to solve,
either by carefully reading the current API reference, the available guides,
and the source code, or by asking for help right here.

The people responding to Octavia questions are, IMHO, blazing fast and
really clear, and they add great detail about internal mechanisms, which is
much appreciated.

As I've almost finished our own deployment, I've noted nearly all the
pitfalls I faced and which parts of the documentation were missing.

I'll finish my deployment and testing, then write up clean (and, I hope, as
complete as possible) documentation, as I feel it's something that's really
needed.

On a side note regarding the CA and SSL, I had an issue that I solved by
correctly rebuilding my amphora image. Another tip here: use Barbican when
possible, as it really helps a lot.
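
For instance, storing a TLS bundle in Barbican and pointing a listener at
it looks roughly like this (a sketch with made-up names, assuming you have
a PKCS12 bundle in server.p12; double-check the flags against your client
versions):

openstack secret store --name tls-bundle \
  -t 'application/octet-stream' -e base64 \
  --payload="$(base64 < server.p12)"
openstack loadbalancer listener create --name https-listener \
  --protocol TERMINATED_HTTPS --protocol-port 443 \
  --default-tls-container-ref <secret href from the command above> my-lb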

I hope this can help anyone else looking to use Octavia, as I truly think
this service is a huge addition to OpenStack, and it has been gaining more
and more momentum since the Pike/Queens releases.

On Tue, Oct 23, 2018 at 19:49, Michael Johnson <johnsomor at gmail.com>
wrote:

> I am still catching up on e-mail from the weekend.
>
> There are a lot of different options for how to implement the
> lb-mgmt-network for the controller<->amphora communication. I can't
> speak to what options Kolla provides, but I can speak to how Octavia
> works.
>
> One thing to note on the lb-mgmt-net issue: if you can set up routes
> such that the controllers can reach the IP addresses used for the
> lb-mgmt-net, and the amphorae can reach the controllers, Octavia
> can run with a routed lb-mgmt-net setup. There is no L2 requirement
> between the controllers and the amphora instances.
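>
> For example (a sketch; the lb-mgmt-net range and the leaf gateway
> address are made up for illustration), a static route on each
> controller can provide the forward path:
>
> # Reach the routed lb-mgmt-net via the local leaf gateway
> ip route add 172.31.0.0/24 via 10.0.0.1
>
> The amphorae need the reverse path as well, so that health messages
> (UDP 5555 by default, amphora -> health manager) and the controllers'
> calls to the amphora agent (TCP 9443 by default) can flow both ways.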
>
> Michael
>
> On Tue, Oct 23, 2018 at 9:57 AM Erik McCormick
> <emccormick at cirrusseven.com> wrote:
> >
> > So in your other email you asked if there was a guide for deploying
> > it with Kolla-Ansible...
> >
> > Oh boy. No, there's not. I don't know if you've seen my recent mails
> > on Octavia, but I am going through this deployment process with
> > kolla-ansible right now, and it is lacking in a few areas.
> >
> > If you plan to use different CA certificates for the client and server
> > sides in Octavia, you'll need to add that to the playbook. Presently it
> > only copies over ca_01.pem, cacert.key, and client.pem and uses them for
> > everything. I was completely unable to make it work with only one CA,
> > as I got SSL errors. It passes the gate though, so I assume it must
> > work? I dunno.
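> >
> > If you want to try the two-CA setup by hand first, generating the pair
> > is simple enough (a rough sketch; the file names and subjects here are
> > just examples, not necessarily what kolla-ansible expects):
> >
> > # Server CA: signs the certificates the amphorae present
> > openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
> >   -keyout server_ca.key -out server_ca.pem -subj "/CN=octavia-server-ca"
> > # Client CA: signs the certificate the controllers present to amphorae
> > openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
> >   -keyout client_ca.key -out client_ca.pem -subj "/CN=octavia-client-ca"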
> >
> > Networking comments and a really messy kolla-ansible / octavia how-to
> > below...
> >
> > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
> > <florian.engelmann at everyware.ch> wrote:
> > >
> > > On 10/23/18 3:20 PM, Erik McCormick wrote:
> > > > On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
> > > > <florian.engelmann at everyware.ch> wrote:
> > > >>
> > > >> Hi,
> > > >>
> > > >> We tested Octavia with Pike (a DVR deployment) and everything worked
> > > >> right out of the box. We have since changed our underlay network to a
> > > >> Layer 3 spine-leaf network and did not deploy DVR, as we didn't want
> > > >> that many cables in a rack.
> > > >>
> > > >> Octavia is not working right now, as the lb-mgmt-net does not exist
> > > >> on the compute nodes, and neither does a br-ex.
> > > >>
> > > >> The control nodes are running:
> > > >>
> > > >> octavia_worker
> > > >> octavia_housekeeping
> > > >> octavia_health_manager
> > > >> octavia_api
> > > >>
> > > >> and as far as I understand, octavia_worker, octavia_housekeeping,
> > > >> and octavia_health_manager have to talk to the amphora instances. But
> > > >> the control nodes are spread over three different leaves, so each
> > > >> control node is in a different L2 domain.
> > > >>
> > > >> So the question is how to deploy a lb-mgmt-net network in our setup?
> > > >>
> > > >> - Compute nodes have no "stretched" L2 domain
> > > >> - Control nodes, compute nodes, and network nodes are in L3 networks
> > > >> like api, storage, ...
> > > >> - Only network nodes are connected to an L2 domain (with a separate
> > > >> NIC) providing the "public" network
> > > >>
> > > > You'll need to add a new bridge to your compute nodes and create a
> > > > provider network associated with that bridge. In my setup this is
> > > > simply a flat network tied to a tagged interface. In your case it
> > > > probably makes more sense to make a new VNI and create a VXLAN
> > > > provider network. The routing in your switches should handle the rest.
> > >
> > > OK, that's what I'm trying right now. But I don't get how to set up
> > > something like a VXLAN provider network. I thought only vlan and flat
> > > are supported as provider network types? I guess it is not possible to
> > > use the tunnel interface that is used for tenant networks?
> > > So I have to create a separate VXLAN on the control and compute nodes,
> > > like:
> > >
> > > # ip link add vxoctavia type vxlan id 42 dstport 4790 group 239.1.1.1
> > > dev vlan3535 ttl 5
> > > # ip addr add 172.16.1.11/20 dev vxoctavia
> > > # ip link set vxoctavia up
> > >
> > > and use it like a flat provider network, true?
> > >
> > This is a fine way of doing things, but it's only half the battle.
> > You'll need to add a bridge on the compute nodes and bind it to that
> > new interface. Something like this if you're using openvswitch:
> >
> > docker exec openvswitch_db
> > /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
> >
> > Also you'll want to remove the IP address from that interface as it's
> > going to be a bridge. Think of it like your public (br-ex) interface
> > on your network nodes.
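> >
> > For example, using the vxoctavia interface from the snippet above:
> >
> > ip addr del 172.16.1.11/20 dev vxoctavia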
> >
> > From there you'll need to update the bridge mappings via kolla
> > overrides. This would usually be in /etc/kolla/config/neutron. Create
> > a subdirectory for your compute inventory group and create an
> > ml2_conf.ini there. So you'd end up with something like:
> >
> > [root at kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
> > [ml2_type_flat]
> > flat_networks = mgmt-net
> >
> > [ovs]
> > bridge_mappings = mgmt-net:br-mgmt
> >
> > Run kolla-ansible --tags neutron reconfigure to push out the new
> > configs. Note that there is a bug where the neutron containers may not
> > restart after the change, so you'll probably need to do a 'docker
> > container restart neutron_openvswitch_agent' on each compute node.
> >
> > At this point, you'll need to create the provider network in the admin
> > project like:
> >
> > openstack network create --provider-network-type flat
> > --provider-physical-network mgmt-net lb-mgmt-net
> >
> > And then create a normal subnet attached to this network with some
> > largeish address scope. I wouldn't use 172.16.0.0/16 because docker
> > uses that by default. I'm not sure if it matters since the network
> > traffic will be isolated on a bridge, but it makes me paranoid so I
> > avoided it.
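> >
> > For example (the range here is only an illustration; pick whatever
> > fits your environment):
> >
> > openstack subnet create --network lb-mgmt-net \
> >   --subnet-range 10.42.0.0/16 lb-mgmt-subnet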
> >
> > For your controllers, I think you can just let everything function off
> > your api interface since you're routing in your spines. Set up a
> > gateway somewhere from that lb-mgmt network and save yourself the
> > complication of adding an interface to your controllers. If you choose
> > to use a separate interface on your controllers, you'll need to make
> > sure this patch is in your kolla-ansible install or cherry-pick it:
> >
> >
> > https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59
> >
> > I don't think that's been backported at all, so unless you're running
> > off master you'll need to go get it.
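> >
> > Something like this from inside your kolla-ansible checkout should do
> > it (a sketch; point the fetch at whatever remote you actually use):
> >
> > git fetch https://github.com/openstack/kolla-ansible master
> > git cherry-pick 0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60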
> >
> > From here on out, the regular Octavia instructions should serve you.
> > Create a flavor, create a security group, and capture their UUIDs
> > along with the UUID of the provider network you made. Override them in
> > globals.yml with:
> >
> > octavia_amp_boot_network_list: <uuid>
> > octavia_amp_secgroup_list: <uuid>
> > octavia_amp_flavor_id: <uuid>
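> >
> > If you need a starting point for those resources, something like this
> > works (a sketch; the names and sizes are examples, and 9443 is the
> > default amphora agent port, so check it against your config):
> >
> > openstack flavor create --vcpus 1 --ram 1024 --disk 2 amphora
> > openstack security group create lb-mgmt-sec-grp
> > openstack security group rule create --protocol tcp --dst-port 9443 \
> >   lb-mgmt-sec-grp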
> >
> > This is all from my scattered notes and bad memory. Hopefully it makes
> > sense. Corrections welcome.
> >
> > -Erik
> >
> >
> >
> > >
> > >
> > > >
> > > > -Erik
> > > >>
> > > >> All the best,
> > > >> Florian