[octavia] Routing between lb-mgmt-net and amphora
Paul Bourke
paul.bourke at oracle.com
Fri Dec 7 14:56:28 UTC 2018
Ok, so in my case I've booted a test cirros VM on lb-mgmt-net, and it gets
assigned an IP of 172.18.2.126. The goal is to be able to ping this IP
directly from the control plane.
I create a port in neutron: http://paste.openstack.org/show/736821/
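(For reference, the port was created roughly along these lines, modelled on
the devstack plugin -- the port name here is just a placeholder, the exact
command and output are in the paste above:)
# openstack port create --network lb-mgmt-net \
      --device-owner Octavia:health-mgr --host $(hostname) \
      octavia-health-manager-listen-port
# MGMT_PORT_ID=$(openstack port show octavia-health-manager-listen-port -c id -f value)
# MGMT_PORT_MAC=$(openstack port show octavia-health-manager-listen-port -c mac_address -f value)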
I log onto the network node that's hosting the dhcp namespace for this
network: http://paste.openstack.org/show/736823/
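(In case it's relevant, I found that node by listing the agents hosting the
network and then checking for the qdhcp namespace, e.g.:)
# openstack network agent list --network lb-mgmt-net
# ip netns | grep qdhcp        # run on the network node itself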
I then run the following command lifted from devstack, with my port info
subbed in:
# ovs-vsctl -- --may-exist add-port ${OVS_BRIDGE:-br-int} o-hm0 \
      -- set Interface o-hm0 type=internal \
      -- set Interface o-hm0 external-ids:iface-status=active \
      -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC \
      -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID \
      -- set Interface o-hm0 external-ids:skip_cleanup=true
Here's the output of 'ovs-vsctl show' at this point:
http://paste.openstack.org/show/736826/
Note that the tap device for the VM (tap6440048f-d2) has tag 3. However,
if I try to add 'tag=3' to the above add-port command, the port just ends
up with the dead VLAN tag 4095.
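(For completeness, this is how I'm comparing the tags. My understanding --
which may well be wrong -- is that the neutron OVS agent owns the tag and
parks ports it doesn't recognise on the 4095 dead VLAN, so setting it by
hand gets overridden:)
# ovs-vsctl get Port tap6440048f-d2 tag     # reports 3
# ovs-vsctl get Port o-hm0 tag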
So at this point I have a new interface, o-hm0, created on br-int with a
status of DOWN, but I can't ping the instance at 172.18.2.126. I also
assume I need to add a static route of some form on the node, though none
of my attempts so far have resulted in being able to ping.
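(My next step, again cribbed from the devstack plugin, would be something
along these lines -- the dhclient part may only make sense in a single-node
setup, and presumably a static address out of 172.18.2.0/24 would do
instead:)
# ip link set dev o-hm0 address $MGMT_PORT_MAC    # match the neutron port's MAC
# ip link set dev o-hm0 up
# dhclient -v o-hm0    # or: ip addr add <an unused 172.18.2.x>/24 dev o-hm0
If o-hm0 then holds an address in 172.18.2.0/24, the subnet should be
directly connected, which would make a separate static route unnecessary --
but I may be missing something.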
I'd be very grateful if you could review these commands and let me know if
they deviate from what you're doing.
-Paul
On 06/12/2018 21:12, M. Ranganathan wrote:
> HACK ALERT Disclaimer: My suggestion could be clumsy.
>
> On Thu, Dec 6, 2018 at 1:46 PM Paul Bourke <paul.bourke at oracle.com> wrote:
>
> Hi,
>
> This is mostly a follow-on to the thread at [0], though due to the
> mailing list transition it was easier to start a new thread.
>
> I've been attempting to get Octavia set up according to the
> dev-quick-start guide [1], but have been struggling with the
> following piece:
>
> "Add appropriate routing to / from the ‘lb-mgmt-net’ such that
> egress is allowed, and the controller (to be created later) can talk
> to hosts on this network."
>
> In mranga's reply, they say:
>
> > -- Create an ovs port on br-int
> > -- Create a neutron port using the ovs port that you just created.
> > -- Assign the ip address of the neutron port to the ovs port
> > -- Use ip netns exec to assign a route in the router namespace of
> the LoadBalancer network.
>
> I have enough of an understanding of Neutron/OVS for this to mostly
> make sense, but not enough to actually put it into practice, it seems.
> My environment:
>
> 3 x control nodes
> 2 x network nodes
> 1 x compute
>
> All nodes have two interfaces, eth0 being the management network -
> 192.168.5.0/24, and eth1 being used for the provider network. I then
> create the Octavia lb-mgmt-net on 172.18.2.0/24.
>
> I've read the devstack script[2] and have the following questions:
>
> * Should I add the OVS port to br-int on the compute, network nodes,
> or both?
>
>
> I have only one controller which also functions as my network node. I
> added the port on the controller/network node. br-int is the place
> where the integration happens. You will find each network has an
> internal vlan tag associated with it. Use the tag assigned to your lb
> network when you create the ovs port.
>
> ovs-vsctl show will tell you more.
>
>
> * What is the purpose of creating a neutron port in this scenario?
>
>
> Just want to be sure Neutron knows about it and has an entry in its
> database so the address won't be used for something else. If you are
> using static addresses, for example, you should not need this (I think).
>
> BTW the created port is DOWN. I am not sure why and I am not sure it
> matters.
>
>
> If anyone is able to explain this a bit further or can even point to
> some good material to flesh out the underlying concepts, it would be
> much appreciated; I feel the 'Neutron 101' videos I've done so far
> are not quite getting me there :)
>
> Cheers,
> -Paul
>
> [0]
> http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000544.html
> [1]
> https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html
> [2] https://github.com/openstack/octavia/blob/master/devstack/plugin.sh
>
>
>
> --
> M. Ranganathan
>