[ussuri] [neutron] deploy an additional compute node that resides in a different network

Pavlos Basaras pbasaras at gmail.com
Thu Dec 3 15:00:42 UTC 2020


Hello,

with regard to this issue: the DHCP DISCOVER does not reach the dnsmasq
on the controller, as the controller is on 10.0.0.11 and the compute node
is on 192.168.111.17.
With iptables I forward all unicast traffic from the 192.168.111.17
network to 10.0.0.11, so the compute node is visible to the controller.
However, since the DHCP exchange (DISCOVER, OFFER, REQUEST, ACK) is
broadcast, it does not reach the controller's network segment, and thus no
IP is allocated to the VM on host 192.168.111.17.
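
One way to see this (a sketch; the network ID below is a placeholder):

  # on the compute node: the DISCOVER goes out as a broadcast
  tcpdump -ni any udp port 67 or udp port 68

  # on the controller: nothing arrives inside the qdhcp namespace
  ip netns list | grep qdhcp
  ip netns exec qdhcp-<network-id> tcpdump -ni any udp port 67 or udp port 68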

Is there a way to go around this issue?

Do you see any problem with my general setup?

all the best,
Pavlos




On Wed, Dec 2, 2020 at 11:21 AM Pavlos Basaras <pbasaras at gmail.com> wrote:

> Dear community,
>
> I am new to OpenStack, so please excuse the newbie questions.
>
> I am using Ubuntu 18 on all nodes.
> I followed the steps for installing OpenStack from
> https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-ussuri
>
> My setup is based on VirtualBox, with the management network at
> 10.0.0.0/24 and the provider network at 203.0.113.0/24 (host-only
> adapters), as per the instructions.
> The VirtualBox host NATs those addresses to the 192.168.111.0/24
> network (gateway to the internet, etc.).
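>
> Roughly, the host-side NAT looks like this (a sketch; the uplink
> interface name eth0 is a placeholder, not my exact config):
>
>   sysctl -w net.ipv4.ip_forward=1
>   iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
>   iptables -t nat -A POSTROUTING -s 203.0.113.0/24 -o eth0 -j MASQUERADE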
>
> When I deployed the compute VM inside VirtualBox (e.g., 10.0.0.31),
> everything worked: the VMs deploy successfully, and I can launch
> instances on the provider (203.0.113.0/24), internal (192.168.10.0/24),
> and self-service (172.16.1.0/24) networks, with associated floating IPs,
> internet access, etc.
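>
> For example, launching an instance on the self-service network and
> attaching a floating IP works along these lines (flavor/image/key names
> are from the install guide; the IDs are placeholders):
>
>   openstack server create --flavor m1.nano --image cirros \
>     --nic net-id=<selfservice-net-id> --key-name mykey test-vm
>   openstack floating ip create provider
>   openstack server add floating ip test-vm <floating-ip-address>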
>
> Now I want to add a new compute node that resides on a different network,
> i.e., 192.168.111.0/24, for deploying VMs. The VirtualBox host is on
> 192.168.111.15 (this is where the controller VM 10.0.0.11 is deployed),
> and the new compute node is 192.168.111.17, directly visible from the
> VirtualBox host.
>
> For this new node to reach the controller, I added an iptables rule on
> 192.168.111.15 (the VirtualBox host) to forward all traffic from
> 192.168.111.17 to the controller at 10.0.0.11.
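> Roughly (a sketch; the exact rules are simplified):
>
>   sysctl -w net.ipv4.ip_forward=1
>   # rewrite unicast traffic from the new compute node to the controller
>   iptables -t nat -A PREROUTING -s 192.168.111.17 -j DNAT --to-destination 10.0.0.11
>   iptables -A FORWARD -s 192.168.111.17 -d 10.0.0.11 -j ACCEPT
>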
> This is probably the wrong way to do it, even though the output below
> looks fine (5g-cpn1 = 192.168.111.17): from Horizon I can see the
> hypervisor info and the expected total and used resources when I deploy
> VMs on 192.168.111.17 (the 5g-cpn1 node).
>
> openstack network agent list
>
> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
> | ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
> | 2d8c3a89-32c4-4b97-aa4f-ca19db53b24f | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
> | 35a6b463-7571-4f41-85bc-4c26ef255012 | Linux bridge agent | 5g-cpn1    | None              | :-)   | UP    | neutron-linuxbridge-agent |
> | 413cd13d-88d7-45ce-8b2e-26fdb265740f | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
> | 42f57bee-63b3-44e6-9392-939ece98719d | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
> | 4a787a09-04aa-4350-bd32-0c0177ed06a1 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
> | 9069e26e-6fef-4b69-9c35-c30ca08377ff | Linux bridge agent | nrUE       | None              | XXX   | UP    | neutron-linuxbridge-agent |
> | fdafc337-7581-4ecd-b175-810713a25e1f | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
>
> openstack compute service list
>
> +----+----------------+------------+----------+---------+-------+----------------------------+
> | ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
> +----+----------------+------------+----------+---------+-------+----------------------------+
> |  3 | nova-scheduler | controller | internal | enabled | up    | 2020-12-02T07:21:56.000000 |
> |  4 | nova-conductor | controller | internal | enabled | up    | 2020-12-02T07:22:06.000000 |
> |  5 | nova-compute   | compute    | nova     | enabled | up    | 2020-12-02T07:22:00.000000 |
> |  6 | nova-compute   | nrUE       | nova     | enabled | down  | 2020-11-26T15:59:24.000000 |
> |  7 | nova-compute   | 5g-cpn1    | nova     | enabled | up    | 2020-12-02T07:22:06.000000 |
> +----+----------------+------------+----------+---------+-------+----------------------------+
>
>
> My current setup does not include Open vSwitch so far (on either the
> controller or the new compute node), so the VMs, although deployed
> successfully, fail to get their networks set up.
>
> Is this the guide I need to follow to set up Open vSwitch correctly for
> my deployment?
> https://docs.openstack.org/neutron/ussuri/install/ovn/manual_install.html
>
> Again, please excuse all the newbie (still in the process of
> understanding) questions so far.
>
> Any advice/directions/guides?
>
>
> all the best,
> Pavlos.
>