[Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

Florian Engelmann florian.engelmann at everyware.ch
Thu Oct 25 15:34:47 UTC 2018


I managed to configure o-hm0 on the compute nodes and I am able to 
communicate with the amphorae:


# create Octavia management net
openstack network create lb-mgmt-net -f value -c id
# and the subnet
openstack subnet create --subnet-range 172.31.0.0/16 \
  --allocation-pool start=172.31.17.10,end=172.31.255.250 \
  --network lb-mgmt-net lb-mgmt-subnet
# get the subnet ID
openstack subnet show lb-mgmt-subnet -f value -c id
# create a port in this subnet for the compute node (ewos1-com1a-poc2)
openstack port create --security-group octavia \
  --device-owner Octavia:health-mgr --host=ewos1-com1a-poc2 \
  -c id -f value --network lb-mgmt-net \
  --fixed-ip subnet=b4c70178-949b-4d60-8d9f-09d13f720b6a,ip-address=172.31.0.101 \
  octavia-health-manager-ewos1-com1a-poc2-listen-port
openstack port show 6fb13c3f-469e-4a81-a504-a161c6848654
openstack network show lb-mgmt-net -f value -c id
# edit octavia_amp_boot_network_list: 3633be41-926f-4a2c-8803-36965f76ea8d
vi /etc/kolla/globals.yml
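# (the resulting override in globals.yml:
#  octavia_amp_boot_network_list: "3633be41-926f-4a2c-8803-36965f76ea8d")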
# reconfigure octavia
kolla-ansible -i inventory reconfigure -t octavia


# create o-hm0 on the compute node
# (ovs-vsctl runs inside kolla's OVS container; name assumed: openvswitch_vswitchd)
docker exec openvswitch_vswitchd ovs-vsctl -- --may-exist add-port br-int o-hm0 -- \
  set Interface o-hm0 type=internal -- \
  set Interface o-hm0 external-ids:iface-status=active -- \
  set Interface o-hm0 external-ids:attached-mac=fa:16:3e:51:e9:c3 -- \
  set Interface o-hm0 external-ids:iface-id=6fb13c3f-469e-4a81-a504-a161c6848654 -- \
  set Interface o-hm0 external-ids:skip_cleanup=true
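
# verify the port got wired up (same container name assumed):
docker exec openvswitch_vswitchd ovs-vsctl list Interface o-hm0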

# fix MAC of o-hm0
ip link set dev o-hm0 address fa:16:3e:51:e9:c3

# get an IP from the neutron DHCP agent (should get 172.31.0.101 in this example)
ip link set dev o-hm0 up
dhclient -v o-hm0
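
# confirm the lease on o-hm0:
ip -4 addr show dev o-hm0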

# create a loadbalancer and test connectivity, e.g. amphora IP is 172.31.17.15
root@ewos1-com1a-poc2:~# ping 172.31.17.15

But

octavia_worker
octavia_housekeeping
octavia_health_manager

are running on our control nodes, and those nodes are not running OVS at all.

The next test is to deploy those three services on my network nodes and 
configure o-hm0 there as well. I will have to change

bind_port = 5555
bind_ip = 10.33.16.11
controller_ip_port_list = 10.33.16.11:5555

to bind to all IPs or to the IP of o-hm0.
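
Something like this as a kolla config override might work (an untested 
sketch; using the o-hm0 IP from the example above, adjusted per node):

# /etc/kolla/config/octavia.conf
[health_manager]
bind_ip = 172.31.0.101
bind_port = 5555
controller_ip_port_list = 172.31.0.101:5555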





On 10/25/18 4:39 PM, Florian Engelmann wrote:
> It looks like devstack implements an o-hm0 interface to connect the 
> physical control host to a VxLAN.
> In our case there is no VxLAN on the control nodes, nor is OVS running there.
> 
> Is it an option to deploy those Octavia services that need this connection 
> to the compute or network nodes and use o-hm0?
> 
> On 10/25/18 10:22 AM, Florian Engelmann wrote:
>> Or could I create lb-mgmt-net as a VxLAN and connect the control nodes 
>> to this VxLAN? How would I do something like that?
>>
>> On 10/25/18 10:03 AM, Florian Engelmann wrote:
>>> Hmm - so right now I can't see any routed option because:
>>>
>>> The gateway connected to the VLAN provider networks (bond1 on the 
>>> network nodes) is not able to route any traffic to my control nodes 
>>> in the spine-leaf layer3 backend network.
>>>
>>> And right now there is no br-ex at all, nor any "stretched" L2 domain 
>>> connecting all compute nodes.
>>>
>>>
>>> So the only solution I can think of right now is to create an overlay 
>>> VxLAN in the spine-leaf backend network, connect all compute and 
>>> control nodes to this overlay L2 network, create an OVS bridge 
>>> connected to that network on the compute nodes, and allow the Amphorae 
>>> to get an IP in this network as well.
>>> Not to forget about DHCP... so the network nodes will need this 
>>> bridge as well.
>>>
>>> On 10/24/18 10:01 PM, Erik McCormick wrote:
>>>>
>>>>
>>>> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian 
>>>> <florian.engelmann at everyware.ch> wrote:
>>>>
>>>>     On the network nodes we've got a dedicated interface to deploy 
>>>>     VLANs (like the provider network for internet access). What about 
>>>>     creating another VLAN on the network nodes, giving that bridge an 
>>>>     IP which is part of the lb-mgmt-net subnet, and starting the 
>>>>     octavia worker, healthmanager and controller on the network nodes 
>>>>     bound to that IP?
>>>>
>>>> The problem with that is you can't put an IP on the VLAN interface 
>>>> and also use it as an OVS bridge, so the Octavia processes would 
>>>> have nothing to bind to.
>>>>
>>>>
>>>> ------------------------------------------------------------------------ 
>>>>
>>>>     *From:* Erik McCormick <emccormick at cirrusseven.com>
>>>>     *Sent:* Wednesday, October 24, 2018 6:18 PM
>>>>     *To:* Engelmann Florian
>>>>     *Cc:* openstack-operators
>>>>     *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
>>>>     VxLAN without DVR
>>>>
>>>>
>>>>     On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
>>>>     <florian.engelmann at everyware.ch> wrote:
>>>>
>>>>         On 10/24/18 2:08 PM, Erik McCormick wrote:
>>>>          >
>>>>          >
>>>>          > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>>>>          > <florian.engelmann at everyware.ch> wrote:
>>>>          >
>>>>          >     Ohoh - thank you for your empathy :)
>>>>          >     And those great details about how to set up this mgmt 
>>>>          >     network. I will try to do so this afternoon, but to 
>>>>          >     solve that routing "puzzle" (virtual network to control 
>>>>          >     nodes) I will need our network guys to help me out...
>>>>          >
>>>>          >     But will I need to give all Amphorae a static route to 
>>>>          >     the gateway that routes to the control nodes?
>>>>          >
>>>>          >
>>>>          > Just set the default gateway when you create the neutron 
>>>>          > subnet. No need for excess static routes. The route on the 
>>>>          > other connection won't interfere with it as it lives in a 
>>>>          > namespace.
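>>>>          > For example (gateway IP assumed, not from this thread):
>>>>          >
>>>>          > openstack subnet create --subnet-range 172.16.0.0/16 \
>>>>          >   --gateway 172.16.0.1 --network lb-mgmt-net lb-mgmt-subnet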
>>>>
>>>>
>>>>         My compute nodes have no br-ex and there is no L2 domain 
>>>>         spread over all compute nodes. As far as I understand, 
>>>>         lb-mgmt-net is a provider network that has to be flat or VLAN 
>>>>         and will need a "physical" gateway (as there is no virtual 
>>>>         router).
>>>>         So the question - is it possible to get octavia up and running 
>>>>         without a br-ex (an L2 domain spread over all compute nodes) 
>>>>         on the compute nodes?
>>>>
>>>>
>>>>     Sorry, I only meant it was *like* br-ex on your network nodes. You
>>>>     don't need that on your computes.
>>>>
>>>>     The router here would be whatever does routing in your physical
>>>>     network. Setting the gateway in the neutron subnet simply adds that
>>>>     to the DHCP information sent to the amphorae.
>>>>
>>>>     This does bring up another thing I forgot, though. You'll probably
>>>>     want to add the management network / bridge to your network nodes
>>>>     or wherever you run the DHCP agents. When you create the subnet, be
>>>>     sure to leave some space in the address scope for the physical
>>>>     devices with static IPs.
>>>>
>>>>     As for multiple L2 domains, I can't think of a way to go about that
>>>>     for the lb-mgmt network. It's a single network with a single subnet.
>>>>     Perhaps you could limit load balancers to an AZ in a single rack?
>>>>     Seems not very HA friendly.
>>>>
>>>>
>>>>
>>>>          >
>>>>          >
>>>>          >
>>>>          >     On 10/23/18 6:57 PM, Erik McCormick wrote:
>>>>          >      > So in your other email you asked if there was a 
>>>>          >      > guide for deploying it with kolla-ansible...
>>>>          >      >
>>>>          >      > Oh boy. No there's not. I don't know if you've seen 
>>>>          >      > my recent mails on Octavia, but I am going through 
>>>>          >      > this deployment process with kolla-ansible right now 
>>>>          >      > and it is lacking in a few areas.
>>>> areas.
>>>>          >      >
>>>>          >      > If you plan to use different CA certificates for 
>>>>          >      > client and server in Octavia, you'll need to add that 
>>>>          >      > into the playbook. Presently it only copies over 
>>>>          >      > ca_01.pem, cacert.key, and client.pem and uses them 
>>>>          >      > for everything. I was completely unable to make it 
>>>>          >      > work with only one CA as I got some SSL errors. It 
>>>>          >      > passes gate though, so I assume it must work? I dunno.
>>>>          >      >
>>>>          >      > Networking comments and a really messy 
>>>>          >      > kolla-ansible / octavia how-to below...
>>>>          >      >
>>>>          >      > On Tue, Oct 23, 2018 at 10:09 AM Florian Engelmann
>>>>          >      > <florian.engelmann at everyware.ch> wrote:
>>>>          >      >>
>>>>          >      >> On 10/23/18 3:20 PM, Erik McCormick wrote:
>>>>          >      >>> On Tue, Oct 23, 2018 at 7:53 AM Florian Engelmann
>>>>          >      >>> <florian.engelmann at everyware.ch> wrote:
>>>>          >      >>>>
>>>>          >      >>>> Hi,
>>>>          >      >>>>
>>>>          >      >>>> We did test Octavia with Pike (DVR deployment) and 
>>>>          >      >>>> everything was working right out of the box. We 
>>>>          >      >>>> changed our underlay network to a Layer3 spine-leaf 
>>>>          >      >>>> network now and did not deploy DVR as we didn't 
>>>>          >      >>>> want to have that many cables in a rack.
>>>>          >      >>>>
>>>>          >      >>>> Octavia is not working right now as the 
>>>>          >      >>>> lb-mgmt-net does not exist on the compute nodes, 
>>>>          >      >>>> nor does a br-ex.
>>>>          >      >>>>
>>>>          >      >>>> The control nodes are running
>>>>          >      >>>>
>>>>          >      >>>> octavia_worker
>>>>          >      >>>> octavia_housekeeping
>>>>          >      >>>> octavia_health_manager
>>>>          >      >>>> octavia_api
>>>>          >      >>>>
>>>>          >     Amphorae-VMs, e.g.
>>>>          >
>>>>          >     lb-mgmt-net 172.16.0.0/16 default GW
>>>>          >      >>>> and as far as I understood, octavia_worker, 
>>>>          >      >>>> octavia_housekeeping and octavia_health_manager 
>>>>          >      >>>> have to talk to the amphora instances. But the 
>>>>          >      >>>> control nodes are spread over three different 
>>>>          >      >>>> leafs, so each control node is in a different L2 
>>>>          >      >>>> domain.
>>>>          >      >>>>
>>>>          >      >>>> So the question is how to deploy a lb-mgmt-net 
>>>>          >      >>>> network in our setup?
>>>>          >      >>>>
>>>>          >      >>>> - Compute nodes have no "stretched" L2 domain
>>>>          >      >>>> - Control nodes, compute nodes and network nodes 
>>>>          >      >>>>   are in L3 networks like api, storage, ...
>>>>          >      >>>> - Only network nodes are connected to a L2 domain 
>>>>          >      >>>>   (with a separated NIC) providing the "public" 
>>>>          >      >>>>   network
>>>>          >      >>>>
>>>>          >      >>> You'll need to add a new bridge to your compute 
>>>>          >      >>> nodes and create a provider network associated with 
>>>>          >      >>> that bridge. In my setup this is simply a flat 
>>>>          >      >>> network tied to a tagged interface. In your case it 
>>>>          >      >>> probably makes more sense to make a new VNI and 
>>>>          >      >>> create a vxlan provider network. The routing in your 
>>>>          >      >>> switches should handle the rest.
>>>>          >      >>
>>>>          >      >> OK, that's what I'm trying right now. But I don't 
>>>>          >      >> get how to set up something like a VxLAN provider 
>>>>          >      >> network. I thought only vlan and flat are supported 
>>>>          >      >> as provider networks? I guess it is not possible to 
>>>>          >      >> use the tunnel interface that is used for tenant 
>>>>          >      >> networks?
>>>>          >      >> So I have to create a separate VxLAN on the control 
>>>>          >      >> and compute nodes like:
>>>>          >      >>
>>>>          >      >> # ip link add vxoctavia type vxlan id 42 \
>>>>          >      >>     dstport 4790 group 239.1.1.1 dev vlan3535 ttl 5
>>>>          >      >> # ip addr add 172.16.1.11/20 dev vxoctavia
>>>>          >      >> # ip link set vxoctavia up
>>>>          >      >>
>>>>          >      >> and use it like a flat provider network, true?
>>>>          >      >>
>>>>          >      > This is a fine way of doing things, but it's only 
>>>>          >      > half the battle. You'll need to add a bridge on the 
>>>>          >      > compute nodes and bind it to that new interface. 
>>>>          >      > Something like this if you're using openvswitch:
>>>>          >      >
>>>>          >      > docker exec openvswitch_db \
>>>>          >      >   /usr/local/bin/kolla_ensure_openvswitch_configured br-mgmt vxoctavia
>>>>          >      >
>>>>          >      > Also you'll want to remove the IP address from that 
>>>>          >      > interface as it's going to be a bridge. Think of it 
>>>>          >      > like your public (br-ex) interface on your network 
>>>>          >      > nodes.
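>>>>          >      > e.g. (using the address from your example):
>>>>          >      >
>>>>          >      > # ip addr del 172.16.1.11/20 dev vxoctavia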
>>>>          >      >
>>>>          >      > From there you'll need to update the bridge mappings 
>>>>          >      > via kolla overrides. This would usually be in 
>>>>          >      > /etc/kolla/config/neutron. Create a subdirectory for 
>>>>          >      > your compute inventory group and create an 
>>>>          >      > ml2_conf.ini there. So you'd end up with something like:
>>>>          >      >
>>>>          >      > [root@kolla-deploy ~]# cat /etc/kolla/config/neutron/compute/ml2_conf.ini
>>>>          >      > [ml2_type_flat]
>>>>          >      > flat_networks = mgmt-net
>>>>          >      >
>>>>          >      > [ovs]
>>>>          >      > bridge_mappings = mgmt-net:br-mgmt
>>>>          >      >
>>>>          >      > run kolla-ansible --tags neutron reconfigure to push 
>>>>          >      > out the new configs. Note that there is a bug where 
>>>>          >      > the neutron containers may not restart after the 
>>>>          >      > change, so you'll probably need to do a 'docker 
>>>>          >      > container restart neutron_openvswitch_agent' on each 
>>>>          >      > compute node.
>>>>          >      >
>>>>          >      > At this point, you'll need to create the provider 
>>>>          >      > network in the admin project like:
>>>>          >      >
>>>>          >      > openstack network create --provider-network-type flat \
>>>>          >      >   --provider-physical-network mgmt-net lb-mgmt-net
>>>>          >      >
>>>>          >      > And then create a normal subnet attached to this 
>>>>          >      > network with some largeish address scope. I wouldn't 
>>>>          >      > use 172.16.0.0/16 because docker uses that by default. 
>>>>          >      > I'm not sure if it matters since the network traffic 
>>>>          >      > will be isolated on a bridge, but it makes me paranoid 
>>>>          >      > so I avoided it.
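>>>>          >      > For example (range picked arbitrarily to avoid 
>>>>          >      > 172.16.0.0/16):
>>>>          >      >
>>>>          >      > openstack subnet create --subnet-range 172.31.0.0/16 \
>>>>          >      >   --network lb-mgmt-net lb-mgmt-subnet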
>>>>          >      >
>>>>          >      > For your controllers, I think you can just let 
>>>>          >      > everything function off your api interface since 
>>>>          >      > you're routing in your spines. Set up a gateway 
>>>>          >      > somewhere from that lb-mgmt network and save yourself 
>>>>          >      > the complication of adding an interface to your 
>>>>          >      > controllers. If you choose to use a separate interface 
>>>>          >      > on your controllers, you'll need to make sure this 
>>>>          >      > patch is in your kolla-ansible install or cherry-pick 
>>>>          >      > it.
>>>>          >      >
>>>>          >      >
>>>>          >
>>>> https://github.com/openstack/kolla-ansible/commit/0b6e401c4fdb9aa4ff87d0bfd4b25c91b86e0d60#diff-6c871f6865aecf0057a5b5f677ae7d59 
>>>>
>>>>          >      >
>>>>          >      > I don't think that's been backported at all, so 
>>>>          >      > unless you're running off master you'll need to go 
>>>>          >      > get it.
>>>>          >      >
>>>>          >      > From here on out, the regular Octavia instructions 
>>>>          >      > should serve you. Create a flavor, create a security 
>>>>          >      > group, and capture their UUIDs along with the UUID of 
>>>>          >      > the provider network you made. Override them in 
>>>>          >      > globals.yml with:
>>>>          >      >
>>>>          >      > octavia_amp_boot_network_list: <uuid>
>>>>          >      > octavia_amp_secgroup_list: <uuid>
>>>>          >      > octavia_amp_flavor_id: <uuid>
>>>>          >      >
>>>>          >      > This is all from my scattered notes and bad memory. 
>>>>          >      > Hopefully it makes sense. Corrections welcome.
>>>>          >      >
>>>>          >      > -Erik
>>>>          >      >
>>>>          >      >
>>>>          >      >
>>>>          >      >>
>>>>          >      >>
>>>>          >      >>>
>>>>          >      >>> -Erik
>>>>          >      >>>>
>>>>          >      >>>> All the best,
>>>>          >      >>>> Florian
>>>>          >
>>>>
>>>
>>>
>>
>>
> 
> 

-- 

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: florian.engelmann at everyware.ch
web: http://www.everyware.ch