[openstack-dev] [Octavia] networking issues
Volodymyr Litovka
doka.ua at gmx.com
Wed Nov 8 09:11:51 UTC 2017
Please disregard this message - I've found that the missing part of the
networking resides in a separate network namespace inside the amphora.
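
For anyone who hits the same symptom: the amphora plugs the VIP-facing port into a
dedicated network namespace inside the instance (on my amphora image it is called
amphora-haproxy), so eth1 is simply not visible in the default namespace. A quick
check from inside the amphora, assuming that namespace name:

$ sudo ip netns list
$ sudo ip netns exec amphora-haproxy ip a    # eth1 and the VIP address show up here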
On 11/7/17 5:54 PM, Volodymyr Litovka wrote:
> Dear colleagues,
>
> while trying to set up Octavia, I ran into a problem with connecting the
> amphora agent to the VIP network.
>
> *Environment:*
> Octavia 1.0.1 (installed using "pip install")
> OpenStack Pike:
> - Nova 16.0.1
> - Neutron 11.0.1
> - Keystone 12.0.0
>
> *Topology of testbed:*
>
> - Controller: connected to the management network (lb-mgmt-net)
> - Amphora: connected to both lb-mgmt-net and the project network (lbt-net / lbt-subnet)
> - n1, n2: backend servers, connected to lbt-subnet
> - vR: virtual router, connected to lbt-subnet and to an external network
>
> *Summary:*
>
> $ openstack loadbalancer create --name nlb2 --vip-subnet-id lbt-subnet
> $ openstack loadbalancer list
> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
> | id | name | project_id | vip_address | provisioning_status | provider |
> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
> | 93facca0-d39a-44e0-96b6-28efc1388c2d | nlb2 | d8051a3ff3ad4c4bb380f828992b8178 | 1.1.1.16 | ACTIVE | octavia |
> +--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
> $ openstack server list --all
> +--------------------------------------+----------------------------------------------+--------+-------------------------------------------------+---------+--------+
> | ID | Name | Status | Networks | Image | Flavor |
> +--------------------------------------+----------------------------------------------+--------+-------------------------------------------------+---------+--------+
> | 98ae591b-0270-4625-95eb-a557c1452eef | amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab | ACTIVE | lb-mgmt-net=172.16.252.28; lbt-net=1.1.1.11 | amphora | |
> | cc79ca78-b036-4d55-a4bd-5b3803ed2f9b | lb-n1 | ACTIVE | lbt-net=1.1.1.18 | | B-cup |
> | 6c43ccca-c808-44cf-974d-acdbdb4b26db | lb-n2 | ACTIVE | lbt-net=1.1.1.19 | | B-cup |
> +--------------------------------------+----------------------------------------------+--------+-------------------------------------------------+---------+--------+
>
> This output shows the amphora agent as active with two interfaces,
> connected to the management and project networks (lb-mgmt-net and
> lbt-net respectively). BUT in fact there is no interface to lbt-net
> inside the agent's VM:
>
> *ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$* ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
> [ ... ]
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
>     link/ether d0:1c:a0:58:e0:02 brd ff:ff:ff:ff:ff:ff
>     inet 172.16.252.28/22 brd 172.16.255.255 scope global eth0
> *ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$* ls /sys/class/net/
> eth0  lo
> *ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$*
>
> The issue is that eth1 exists during startup of the agent's VM and then
> magically disappears (snippet from syslog, note the timing):
>
> Nov 7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPREQUEST of 1.1.1.11 on eth1 to 255.255.255.255 port 67 (xid=0x1c65db9b)
> Nov 7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPOFFER of 1.1.1.11 from 1.1.1.10
> Nov 7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: DHCPACK of 1.1.1.11 from 1.1.1.10
> Nov 7 12:00:31 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: bound to 1.1.1.11 -- renewal in 38793 seconds.
> [ ... ]
> Nov 7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1116]: receive_packet failed on eth1: Network is down
> Nov 7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab systemd[1]: Stopping ifup for eth1...
> Nov 7 12:00:44 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1715]: Killed old client process
> Nov 7 12:00:45 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1715]: Error getting hardware address for "eth1": No such device
> Nov 7 12:00:45 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab ifdown[1700]: Cannot find device "eth1"
> Nov 7 12:00:45 amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab systemd[1]: Stopped ifup for eth1.
>
> while
>
> 1) the corresponding port in OpenStack is active and owned by Nova:
>
> $ openstack port show c4b46bea-5d49-46b5-98d9-f0f9eaf44708
> +-----------------------+-------------------------------------------------------------------------+
> | Field | Value |
> +-----------------------+-------------------------------------------------------------------------+
> | admin_state_up | UP |
> | allowed_address_pairs | ip_address='1.1.1.16', mac_address='d0:1c:a0:70:97:ba' |
> | binding_host_id | bowmore |
> | binding_profile | |
> | binding_vif_details | datapath_type='system', ovs_hybrid_plug='False', port_filter='True' |
> | binding_vif_type | ovs |
> | binding_vnic_type | normal |
> | created_at | 2017-11-07T12:00:24Z |
> | data_plane_status | None |
> | description | |
> | device_id | 98ae591b-0270-4625-95eb-a557c1452eef |
> | device_owner | compute:nova |
> | dns_assignment | None |
> | dns_name | None |
> | extra_dhcp_opts | |
> | fixed_ips | ip_address='1.1.1.11', subnet_id='dc8f0701-3553-4de1-8b65-0f9c76addf1f' |
> | id | c4b46bea-5d49-46b5-98d9-f0f9eaf44708 |
> | ip_address | None |
> | mac_address | d0:1c:a0:70:97:ba |
> | name | octavia-lb-vrrp-038fb78e-923e-4143-8402-ad8dbd97f9ab |
> | network_id | d38b53a2-52f0-460c-94f9-4eb404db28a1 |
> | option_name | None |
> | option_value | None |
> | port_security_enabled | True |
> | project_id | 1e96bb9d794f4588adcd6f32ee3fbaa8 |
> | qos_policy_id | None |
> | revision_number | 9 |
> | security_group_ids | 29a13b95-810e-4464-b1fb-ba61c59e1fa1 |
> | status | ACTIVE |
> | subnet_id | None |
> | tags | |
> | trunk_details | None |
> | updated_at | 2017-11-07T12:00:27Z |
> +-----------------------+-------------------------------------------------------------------------+
>
> 2) *virsh dumpxml <instance ID>* shows this interface attached to the VM
> 3) *openvswitch* has this interface in its configuration
>
> 4) _*BUT*_ qemu on the corresponding node is running with just one
> "-device virtio-net-pci" parameter, which corresponds to the port from
> the management network. There is no second virtio-net-pci device.
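>
> A quick way to cross-check points 2)-4) on the compute node (the instance
> name below is just a placeholder for the libvirt domain of the amphora):
>
> $ virsh domiflist instance-000000xx                        # libvirt's view: should list both NICs
> $ ps -ef | grep '[q]emu' | grep -o 'virtio-net-pci' | wc -l  # number of NICs qemu was actually started with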
>
> Manually detaching and re-attaching this interface using "nova
> interface-detach / interface-attach" *solves this issue* - the interface
> reappears inside the VM.
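>
> For reference, the workaround looks roughly like this (server and port IDs
> taken from the outputs above):
>
> $ nova interface-detach 98ae591b-0270-4625-95eb-a557c1452eef c4b46bea-5d49-46b5-98d9-f0f9eaf44708
> $ nova interface-attach --port-id c4b46bea-5d49-46b5-98d9-f0f9eaf44708 98ae591b-0270-4625-95eb-a557c1452eef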
>
> This problem appears only with Octavia amphora instances - all other
> servers, launched using Heat or the CLI, work with two interfaces without
> any problems. Based on this, I guess the problem is related to the
> Octavia controller.
>
> It is worth noting that, at the same time, servers n1 and n2, which are
> connected to lbt-subnet, can ping each other as well as the virtual
> router (vR) and the local DHCP server (see the topology above).
>
> *Neutron log files* show the last activity related to this port well
> before eth1 disappears from the VM:
>
> _Controller node:_
> 2017-11-07 12:00:29.885 17405 DEBUG neutron.db.provisioning_blocks [req-ae06e469-0592-46a4-bdb4-a65f47f9dee9 - - - - -] Provisioning complete for port *c4b46bea-5d49-46b5-98d9-f0f9eaf44708* triggered by entity L2. provisioning_complete /usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:138
> 2017-11-07 12:00:30.061 17405 DEBUG neutron.plugins.ml2.db [req-ae06e469-0592-46a4-bdb4-a65f47f9dee9 - - - - -] For port c4b46bea-5d49-46b5-98d9-f0f9eaf44708, host bowmore, got binding levels [<neutron.plugins.ml2.models.PortBindingLevel[object at 7f74a54a3a10] {port_id=u'*c4b46bea-5d49-46b5-98d9-f0f9eaf44708*', host=u'bowmore', level=0, driver=u'openvswitch', segment_id=u'7cd90f29-165a-4299-be72-51d2a2c18092'}>] get_binding_levels /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/db.py:106
>
> _Compute node:_
> 2017-11-07 12:00:28.085 22451 DEBUG neutron.plugins.ml2.db [req-ae06e469-0592-46a4-bdb4-a65f47f9dee9 - - - - -] For port *c4b46bea-5d49-46b5-98d9-f0f9eaf44708*, host bowmore, got binding levels [<neutron.plugins.ml2.models.PortBindingLevel[object at 7f411310ccd0] {port_id=u'c4b46bea-5d49-46b5-98d9-f0f9eaf44708', host=u'bowmore', level=0, driver=u'openvswitch', segment_id=u'7cd90f29-165a-4299-be72-51d2a2c18092'}>] get_binding_levels /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/db.py:106
> RESP BODY: {"events": [{"status": "completed", "tag": "*c4b46bea-5d49-46b5-98d9-f0f9eaf44708*", "name": "network-vif-plugged", "server_uuid": "98ae591b-0270-4625-95eb-a557c1452eef", "code": 200}]}
> 2017-11-07 12:00:28.116 22451 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'*c4b46bea-5d49-46b5-98d9-f0f9eaf44708*', u'name': u'network-vif-plugged', u'server_uuid': u'98ae591b-0270-4625-95eb-a557c1452eef', u'code': 200}
>
> *Octavia-worker.log* is available at the following link:
> https://pastebin.com/44rwshKZ
>
> *Questions are* - any ideas on what is happening, and what further
> information and debug output should I gather in order to resolve this issue?
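>
> If it helps, I can additionally collect, for example:
>
> $ grep -i c4b46bea /var/log/nova/nova-compute.log   # hot-plug events for this port on the compute node
> $ grep -i c4b46bea /var/log/libvirt/libvirtd.log    # libvirt's view of the attach (log path may differ per setup)
>
> and set debug = True in the [DEFAULT] section of octavia.conf for more
> verbose worker logs.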
>
> Thank you.
>
> --
> Volodymyr Litovka
> "Vision without Execution is Hallucination." -- Thomas Edison
--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison