[openstack-dev] [neutron] Question on the OVS configuration
Slawomir Kaplonski
skaplons at redhat.com
Fri Jun 15 15:53:47 UTC 2018
You are using a vxlan network, so the traffic does not go through br-ex but via br-tun. In br-tun you have an established vxlan tunnel:
> Port "vxlan-c0a81218"
> Interface "vxlan-c0a81218"
> type: vxlan
> options: {df_default="true", in_key=flow, local_ip="192.168.20.132", out_key=flow, remote_ip="192.168.18.24”}
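You can inspect that tunnel port yourself with plain ovs-vsctl, for example (using the port name from your output above):

$ sudo ovs-vsctl list-ports br-tun
$ sudo ovs-vsctl list interface vxlan-c0a81218 | grep -E 'type|options'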
So traffic from your VM goes through this tunnel to the remote endpoint 192.168.18.24, sourced from the local IP 192.168.20.132. That local IP is most likely configured on your eno1 interface. OVS sends the encapsulated packets to the remote IP according to your routing table, so packets to 192.168.18.24 leave via eno1.
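You can confirm which interface the kernel picks for the tunnel endpoint with iproute2. Given the routing table you posted below, I would expect something like:

$ ip route get 192.168.18.24
192.168.18.24 dev eno1 src 192.168.20.132

because your 192.168.16.0/21 route on eno1 covers 192.168.18.24 (and your eno1 default route has the lower metric anyway).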
If you want traffic to leave through br-ex, you need to create a flat or vlan provider network; those network types are bridged through br-ex.
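As a rough sketch (physnet1 is just a placeholder label here, and exact config file paths depend on your deployment; restart the OVS agent after changing them): map br-ex to a physical network in the ML2/OVS agent configuration, then create a flat provider network on top of it. Since eno2 is already plugged into br-ex, the subnet values below come from your eno2 routes:

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = physnet1

# OVS agent config (e.g. openvswitch_agent.ini)
[ovs]
bridge_mappings = physnet1:br-ex

$ openstack network create --provider-network-type flat \
  --provider-physical-network physnet1 --external ext-net
$ openstack subnet create --network ext-net --subnet-range 192.168.8.0/24 \
  --gateway 192.168.8.1 --no-dhcp ext-subnet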
> On 15.06.2018, at 12:13, Dave.Chen at Dell.com wrote:
>
> Apologies for having sent this question to a dev mailing list in the first place! But I humbly request to continue the discussion here.
>
>
> My VM is connected to a private network under the demo project; here is the info of the network:
>
> $ openstack network show 64f4f4dc-a851-486a-8789-43b816d9bf3d
> +---------------------------+----------------------------------------------------------------------------+
> | Field | Value |
> +---------------------------+----------------------------------------------------------------------------+
> | admin_state_up | UP |
> | availability_zone_hints | |
> | availability_zones | nova |
> | created_at | 2018-06-15T04:26:18Z |
> | description | |
> | dns_domain | None |
> | id | 64f4f4dc-a851-486a-8789-43b816d9bf3d |
> | ipv4_address_scope | None |
> | ipv6_address_scope | None |
> | is_default | None |
> | is_vlan_transparent | None |
> | mtu | 1450 |
> | name | private |
> | port_security_enabled | True |
> | project_id | e202899d90ba449d880be42f19cd6a55 |
> | provider:network_type | vxlan |
> | provider:physical_network | None |
> | provider:segmentation_id | 72 |
> | qos_policy_id | None |
> | revision_number | 4 |
> | router:external | Internal |
> | segments | None |
> | shared | False |
> | status | ACTIVE |
> | subnets | 18a0847e-b733-4ec2-9e25-d7d630a1af2f, 91e91bab-7405-4717-97cd-4ca2cb11589d |
> | tags | |
> | updated_at | 2018-06-15T04:26:23Z |
> +---------------------------+----------------------------------------------------------------------------+
>
>
>
> And below is the full output of ovs bridges.
>
> $ sudo ovs-vsctl show
> 0ee72d8a-65bc-4c82-884a-61b0e86b9893
>     Manager "ptcp:6640:127.0.0.1"
>         is_connected: true
>     Bridge br-int
>         Controller "tcp:127.0.0.1:6633"
>             is_connected: true
>         fail_mode: secure
>         Port "qvo97604b93-55"
>             tag: 1
>             Interface "qvo97604b93-55"
>         Port int-br-ex
>             Interface int-br-ex
>                 type: patch
>                 options: {peer=phy-br-ex}
>         Port "qr-c3f198ac-0b"
>             tag: 1
>             Interface "qr-c3f198ac-0b"
>                 type: internal
>         Port "sg-8868b1a8-69"
>             tag: 1
>             Interface "sg-8868b1a8-69"
>                 type: internal
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "qvo6f012656-74"
>             tag: 1
>             Interface "qvo6f012656-74"
>         Port "fg-c4e5dcbc-a3"
>             tag: 2
>             Interface "fg-c4e5dcbc-a3"
>                 type: internal
>         Port "tap10dc7b3e-a7"
>             tag: 1
>             Interface "tap10dc7b3e-a7"
>                 type: internal
>         Port patch-tun
>             Interface patch-tun
>                 type: patch
>                 options: {peer=patch-int}
>         Port "qg-4014b9e8-ce"
>             tag: 2
>             Interface "qg-4014b9e8-ce"
>                 type: internal
>         Port "qr-883cea95-31"
>             tag: 1
>             Interface "qr-883cea95-31"
>                 type: internal
>         Port "sg-69c838e6-bb"
>             tag: 1
>             Interface "sg-69c838e6-bb"
>                 type: internal
>     Bridge br-tun
>         Controller "tcp:127.0.0.1:6633"
>             is_connected: true
>         fail_mode: secure
>         Port "vxlan-c0a81218"
>             Interface "vxlan-c0a81218"
>                 type: vxlan
>                 options: {df_default="true", in_key=flow, local_ip="192.168.20.132", out_key=flow, remote_ip="192.168.18.24"}
>         Port br-tun
>             Interface br-tun
>                 type: internal
>         Port patch-int
>             Interface patch-int
>                 type: patch
>                 options: {peer=patch-tun}
>     Bridge br-ex
>         Controller "tcp:127.0.0.1:6633"
>             is_connected: true
>         fail_mode: secure
>         Port phy-br-ex
>             Interface phy-br-ex
>                 type: patch
>                 options: {peer=int-br-ex}
>         Port "eno2"
>             Interface "eno2"
>         Port br-ex
>             Interface br-ex
>                 type: internal
>     ovs_version: "2.8.0"
>
>
>
> Thanks!
>
> Best Regards,
> Dave Chen
>
> -----Original Message-----
> From: Slawomir Kaplonski [mailto:skaplons at redhat.com]
> Sent: Friday, June 15, 2018 5:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] Question on the OVS configuration
>
> Please send info about the network to which your VM is connected, and also the config of all OVS bridges.
>
>> On 15.06.2018, at 11:18, Dave.Chen at Dell.com wrote:
>>
>> Thanks Slawomir for your reply. So what's the right configuration if I want my VM to be able to reach the external network through the physical NIC "eno2"? Do I still need to add that NIC to "br-ex"?
>>
>>
>> Best Regards,
>> Dave Chen
>>
>> -----Original Message-----
>> From: Slawomir Kaplonski [mailto:skaplons at redhat.com]
>> Sent: Friday, June 15, 2018 5:09 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [neutron] Question on the OVS configuration
>>
>> Hi,
>>
>> If you have a vxlan network, then its traffic goes through a vxlan tunnel, which lives in the br-tun bridge rather than br-ex.
>>
>>> On 15.06.2018, at 10:17, Dave.Chen at Dell.com wrote:
>>>
>>> Dear folks,
>>>
>>> I have set up a pretty simple OpenStack cluster in our lab based on devstack. A couple of guest VMs are running on one controller node (this doesn't look like the right behavior anyway). The Neutron network is configured as OVS + vxlan, and the bridge "br-ex" is configured as below:
>>>
>>> Bridge br-ex
>>>     Controller "tcp:127.0.0.1:6633"
>>>         is_connected: true
>>>     fail_mode: secure
>>>     Port phy-br-ex
>>>         Interface phy-br-ex
>>>             type: patch
>>>             options: {peer=int-br-ex}
>>>     Port br-ex
>>>         Interface br-ex
>>>             type: internal
>>> ovs_version: "2.8.0"
>>>
>>>
>>>
>>> As you can see, there is no external physical NIC bound to "br-ex", so I guess traffic from the VM to the outside uses the default route set on the controller node. Since there is a NIC (eno2) that can reach the external network, I bound it to "br-ex" like this: ovs-vsctl add-port br-ex eno2. Now "br-ex" is configured as below:
>>>
>>> Bridge br-ex
>>>     Controller "tcp:127.0.0.1:6633"
>>>         is_connected: true
>>>     fail_mode: secure
>>>     Port phy-br-ex
>>>         Interface phy-br-ex
>>>             type: patch
>>>             options: {peer=int-br-ex}
>>>     *Port "eno2"*
>>>         Interface "eno2"
>>>     Port br-ex
>>>         Interface br-ex
>>>             type: internal
>>> ovs_version: "2.8.0"
>>>
>>>
>>>
>>> This looks like how it should be configured according to the many wiki/blog suggestions I have googled, but it doesn't work as expected: when I ping from the VM, tcpdump shows the traffic still goes out "eno1", which holds the default route on the controller node.
>>>
>>> Inside the VM:
>>> ubuntu@test-br:~$ ping 8.8.8.8
>>> PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
>>> 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms
>>> 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms …
>>>
>>> Dump the traffic on "eno2": got nothing.
>>> $ sudo tcpdump -nn -i eno2 icmp
>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>>> listening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes
>>> …
>>>
>>> Dump the traffic on "eno1" (the internal NIC): caught it!
>>> $ sudo tcpdump -nn -i eno1 icmp
>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>>> listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes
>>> 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 1, length 64
>>> 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 1, length 64
>>> 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 2, length 64
>>> 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 2, length 64
>>>
>>>
>>> $ sudo ip route
>>> default via 192.168.18.1 dev eno1 proto static metric 100
>>> default via 192.168.8.1 dev eno2 proto static metric 101
>>> 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown
>>> 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
>>> 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 metric 100
>>> 192.168.16.0/21 dev eno1 proto kernel scope link src 192.168.20.132 metric 100
>>> 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1
>>>
>>>
>>> What's going wrong here? Am I missing something? Or does some service need to be restarted?
>>>
>>> Could anyone help me out? This question has been troubling me for many days! Huge thanks in advance!
>>>
>>>
>>> Best Regards,
>>> Dave
>>>
>>
>> —
>> Slawek Kaplonski
>> Senior software engineer
>> Red Hat
>>
>>
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
—
Slawek Kaplonski
Senior software engineer
Red Hat