[Openstack] Troubles with networking part of openstack

Kevin Benton kevin at benton.pub
Thu Mar 30 09:07:20 UTC 2017


Hi,

Have you set up bridge mappings for the external network in the L2 agent?
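
If not, a minimal sketch of what that usually looks like on an RDO/OVS setup
(the "extnet" physical network label is only an example name, and file paths
can differ between installs):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (on the node hosting br-ex / the L3 agent)
    [ovs]
    bridge_mappings = extnet:br-ex

    # /etc/neutron/plugins/ml2/ml2_conf.ini (on the controller)
    [ml2_type_flat]
    flat_networks = extnet

    # then restart the affected services, e.g.:
    # systemctl restart neutron-openvswitch-agent neutron-server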

If you haven't seen it already, this guide gives a very good walk-through of
configuring Neutron self-service networking:
https://docs.openstack.org/newton/networking-guide/deploy-ovs-selfservice.html
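
Once the bridge mapping is in place, you also need an external network and
subnet in Neutron, and the router needs a gateway on that network; that is
what creates the qg- port and the default route that are missing from your
qrouter namespace below. Roughly (the names, allocation pool and gateway are
only examples based on your 213.135.46.0/24 range, so adjust them):

    neutron net-create external --router:external \
        --provider:network_type flat --provider:physical_network extnet
    neutron subnet-create external 213.135.46.0/24 --name external-subnet \
        --disable-dhcp --gateway 213.135.46.254 \
        --allocation-pool start=213.135.46.200,end=213.135.46.220
    neutron router-gateway-set <your-router> external

After that, "ip netns exec qrouter-... ip ro" should show a default route via
213.135.46.254 on a qg- interface.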

On Wed, Mar 29, 2017 at 4:00 AM, Bartłomiej Solarz-Niesłuchowski
<Bartlomiej.Solarz-Niesluchowski at wit.edu.pl> wrote:

> On 2017-03-28 at 16:08, Jay Pipes wrote:
>
>> +kevin benton
>>
>> On 03/28/2017 07:20 AM, Bartłomiej Solarz-Niesłuchowski wrote:
>>
>>> Dear List,
>>>
>>> I am a beginner OpenStack user.
>>>
>>
>> Welcome to the OpenStack community! :)
>>
>>> I set up OpenStack with RDO on CentOS 7.
>>>
>>> I have 6 machines:
>>>
>>> They each have two interfaces: enp2s0f0 (10.51.0.x) and enp2s0f1
>>> (213.135.46.x).
>>>
>>> On machine x=1 I set up the dashboard/neutron-server/nova/cinder/etc. On
>>> machines 2-6 I set up:
>>>
>>> openstack-cinder-api.service,
>>> openstack-cinder-scheduler.service,
>>> openstack-cinder-volume.service,
>>> openstack-nova-api.service,
>>> openstack-nova-compute.service,
>>> openstack-nova-conductor.service,
>>> openstack-nova-consoleauth.service,
>>> openstack-nova-novncproxy.service,
>>> openstack-nova-scheduler.service
>>>
>>
>> I am presuming you want machines 2-6 as "compute nodes" to put VMs on? If
>> so, you definitely do not want to put anything *other* than the following
>> on those machines:
>>
>> openstack-cinder-volume.service
>> openstack-nova-compute.service
>>
>> All the other services belong on a "controller node", where you've put
>> the neutron server, dashboard, database, MQ, etc.
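>>
>> For example, on each of machines 2-6 something like this would stop and
>> disable the extra services (just a sketch, assuming systemd on CentOS 7;
>> double-check the exact service names on your hosts):
>>
>>     systemctl stop openstack-cinder-api openstack-cinder-scheduler \
>>         openstack-nova-api openstack-nova-conductor openstack-nova-consoleauth \
>>         openstack-nova-novncproxy openstack-nova-scheduler
>>     systemctl disable openstack-cinder-api openstack-cinder-scheduler \
>>         openstack-nova-api openstack-nova-conductor openstack-nova-consoleauth \
>>         openstack-nova-novncproxy openstack-nova-scheduler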
>>
> ok
>
>>
>>> I run a virtual machine instance which has the IP 10.0.3.4 (on machine 5).
>>>
>>> I set up the router on machine 1.
>>>
>>> From the virtual instance I can ping the IP of the router.
>>>
>>> I see the pings from the virtual machine on machine 1 (where the router sits).
>>>
>>
>> err, it looks to me that your machine 1 is a controller, not a compute
>> node? VMs should go on machines 2-6, unless I'm reading something
>> incorrectly.
>>
> Machine 1 is both a controller and a compute node.
>
>>
>>> But I have no idea at all how to set up network connectivity with the
>>> outside world.
>>>
>>
>> <snip>
>>
>>> [root@song-of-the-seas-01 ~(keystone_admin)]# ip ro
>>> default via 213.135.46.254 dev br-ex
>>>
>>
>> So here is your default gateway, on br-ex...
>>
> yes
>
>>
>> 10.51.0.0/24 dev enp2s0f0  proto kernel scope link  src 10.51.0.1
>>> 213.135.46.0/24 dev br-ex  proto kernel  scope link  src 213.135.46.180
>>>
>>> [root@song-of-the-seas-01 ~(keystone_admin)]# ip a | grep state
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
>>> 2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>> 3: enp2s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
>>> 4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>> 5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>> 6: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
>>>
>>
>> And here it's indicating br-ex is in an UNKNOWN state. Also, br-int is in
>> a DOWN state; not sure if that is related. My guess would be to bring up
>> br-ex and see what is failing about the bring-up.
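>>
>> Something like this should do it (a rough sketch; ovs-vsctl will also show
>> whether enp2s0f1 is actually plugged into br-ex as expected):
>>
>>     ip link set dev br-ex up
>>     ovs-vsctl show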
>>
> I brought it up, but no success:
> [root@song-of-the-seas-01 ~]# ip a | grep state
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
> 2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
> 3: enp2s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> 4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
> 5: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
> 6: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
> 7: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65470 qdisc noqueue master ovs-system state UNKNOWN qlen 1000
> 8: br-tun: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
>
>
>
>> Of course, I'm no networking expert so hopefully one of the Neutron devs
>> can pop in to help. :)
>>
> Can anybody please help me?
>
> [root@song-of-the-seas-01 ~]# ip netns
> qrouter-6794f7f3-a2af-4538-883e-78b49a6ba633
> [root@song-of-the-seas-01 ~]# ip netns exec qrouter-6794f7f3-a2af-4538-883e-78b49a6ba633 ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 19: qr-edef78b1-56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN qlen 1000
>     link/ether fa:16:3e:6f:0e:76 brd ff:ff:ff:ff:ff:ff
>     inet 10.0.3.1/24 brd 10.0.3.255 scope global qr-edef78b1-56
>        valid_lft forever preferred_lft forever
>     inet6 fe80::f816:3eff:fe6f:e76/64 scope link
>        valid_lft forever preferred_lft forever
>
> [root@song-of-the-seas-01 ~]# ip netns exec qrouter-6794f7f3-a2af-4538-883e-78b49a6ba633 ip ro
> 10.0.3.0/24 dev qr-edef78b1-56  proto kernel  scope link  src 10.0.3.1
>
>
> Here I have no default gateway, and no idea how to set it up "automatically"?
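>
> (If it helps: "neutron router-show 6794f7f3-a2af-4538-883e-78b49a6ba633"
> should show whether any external gateway is configured on this router at
> all; there is no qg- interface in the namespace above.)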
>
>
> --
> Bartłomiej Solarz-Niesłuchowski, Administrator WSISiZ
> e-mail: Bartlomiej.Solarz-Niesluchowski at wit.edu.pl
> tel. 223486547, fax 223486501
> JID: solarz at jabber.wit.edu.pl
> 01-447 Warszawa, ul. Newelska 6, pokój 404, pon.-pt. 8-16
> Motto - As you make your bed, so you must lie in it
>
>
>