<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <font face="SFNS Display">Dear colleagues,<br>
      <br>
      while trying to set up Octavia, I ran into a problem connecting the
      amphora agent to the VIP network.<br>
      <br>
      <b>Environment:<br>
      </b>Octavia 1.0.1 (installed via "pip install")<br>
      OpenStack Pike:<br>
      - Nova 16.0.1<br>
      - Neutron 11.0.1<br>
      - Keystone 12.0.0<br>
      <br>
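      In case it helps, these are the octavia.conf options I would expect to
      matter for amphora networking; a grep sketch to pull them (option names
      as documented for Pike, config path per my pip-based install - adjust
      as needed):<br>
    </font>
    <pre>
# show the amphora-networking-related settings of the Octavia controller
$ grep -E '^(amp_boot_network_list|amp_secgroup_list|amp_flavor_id|amp_image_tag|network_driver|compute_driver|amphora_driver)' \
    /etc/octavia/octavia.conf
</pre>
    <font face="SFNS Display"><br>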
      <b>Topology of testbed:</b><br>
    </font>
    <pre>
                                  +
                                  |
                                  |    +----+
              +                   +----+ n1 |
              |    +---------+    |    +----+
              +----+ Amphora +----+
              |    +---------+    |    +----+
            m |                 l +----+ n2 |
            g |                 b |    +----+    + e
            m |                 t |              | x
            t |                   |    +----+    | t
              |                 s +----+ vR +----+ e
              |                 u |    +----+    | r
         +------------+         b |              | n
         | Controller |         n |              | a
         +------------+         e |              + l
                                t |
                                  +
    </pre>
    <font face="SFNS Display"><br>
      <b>Summary:</b><br>
      <br>
    </font>
    <pre>
$ openstack loadbalancer create --name nlb2 --vip-subnet-id lbt-subnet
$ openstack loadbalancer list
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| 93facca0-d39a-44e0-96b6-28efc1388c2d | nlb2 | d8051a3ff3ad4c4bb380f828992b8178 | 1.1.1.16    | ACTIVE              | octavia  |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
$ openstack server list --all
+--------------------------------------+----------------------------------------------+--------+----------------------------------------------+---------+--------+
| ID                                   | Name                                         | Status | Networks                                     | Image   | Flavor |
+--------------------------------------+----------------------------------------------+--------+----------------------------------------------+---------+--------+
| 98ae591b-0270-4625-95eb-a557c1452eef | amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab | ACTIVE | lb-mgmt-net=172.16.252.28; lbt-net=1.1.1.11  | amphora |        |
| cc79ca78-b036-4d55-a4bd-5b3803ed2f9b | lb-n1                                        | ACTIVE | lbt-net=1.1.1.18                             |         | B-cup  |
| 6c43ccca-c808-44cf-974d-acdbdb4b26db | lb-n2                                        | ACTIVE | lbt-net=1.1.1.19                             |         | B-cup  |
+--------------------------------------+----------------------------------------------+--------+----------------------------------------------+---------+--------+
</pre>
    <font face="SFNS Display"><br>
      This output shows that the amphora agent is active with two
      interfaces, connected to the management and project networks
      (lb-mgmt-net and lbt-net respectively). BUT in fact there is no
      interface to lbt-net inside the agent's VM:<br>
      <br>
    </font><tt><b>ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$</b>
      ip a</tt><tt><br>
    </tt><tt>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
      state UNKNOWN group default qlen 1</tt><tt><br>
      [ ... ]<br>
    </tt><tt>2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
      qdisc pfifo_fast state UP group default qlen 1000</tt><tt><br>
    </tt><tt>    link/ether d0:1c:a0:58:e0:02 brd ff:ff:ff:ff:ff:ff</tt><tt><br>
    </tt><tt>    inet 172.16.252.28/22 brd 172.16.255.255 scope global
      eth0</tt><tt><br>
    </tt><tt><b>ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$</b>
      ls /sys/class/net/</tt><tt><br>
    </tt><tt><u>eth0</u>  <u>lo</u></tt><br>
    <tt><tt><b>ubuntu@amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab:~$</b></tt></tt><font
      face="SFNS Display"><br>
      <br>
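      For reference, the Nova side can be cross-checked from the controller
      like this (server ID as in the listing above; output omitted here, the
      port details are shown further below):<br>
    </font>
    <pre>
# Nova's view of the interfaces attached to the amphora instance
$ nova interface-list 98ae591b-0270-4625-95eb-a557c1452eef
# the same information via the unified client
$ openstack port list --server 98ae591b-0270-4625-95eb-a557c1452eef
</pre>
    <font face="SFNS Display"><br>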
      The issue is that eth1 exists during startup of the agent's VM and
      then magically disappears (snippet from syslog, note the timing):<br>
      <br>
    </font><tt>Nov  7 12:00:31
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]:
      DHCPREQUEST of 1.1.1.11 on eth1 to 255.255.255.255 port 67
      (xid=0x1c65db9b)</tt><tt><br>
    </tt><tt>Nov  7 12:00:31
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]:
      DHCPOFFER of 1.1.1.11 from 1.1.1.10</tt><tt><br>
    </tt><tt>Nov  7 12:00:31
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]:
      DHCPACK of 1.1.1.11 from 1.1.1.10</tt><tt><br>
    </tt><tt>Nov  7 12:00:31
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1051]: bound
      to 1.1.1.11 -- renewal in 38793 seconds.</tt><tt><br>
    </tt><tt>[ ... ]</tt><tt><br>
    </tt><tt>Nov  7 12:00:44
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1116]:
      receive_packet failed on eth1: Network is down</tt><tt><br>
    </tt><tt>Nov  7 12:00:44
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab systemd[1]: Stopping
      ifup for eth1...</tt><tt><br>
    </tt><tt>Nov  7 12:00:44
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1715]:
      Killed old client process</tt><tt><br>
    </tt><tt>Nov  7 12:00:45
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab dhclient[1715]: Error
      getting hardware address for "eth1": No such device</tt><tt><br>
    </tt><tt>Nov  7 12:00:45
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab ifdown[1700]: Cannot
      find device "eth1"</tt><tt><br>
    </tt><tt>Nov  7 12:00:45
      amphora-038fb78e-923e-4143-8402-ad8dbd97f9ab systemd[1]: Stopped
      ifup for eth1.</tt><tt><br>
    </tt><font face="SFNS Display"><br>
      while<br>
      <br>
      1) the corresponding port in OpenStack is active and owned by Nova:<br>
      <br>
    </font>
    <pre>
$ openstack port show c4b46bea-5d49-46b5-98d9-f0f9eaf44708
+-----------------------+-------------------------------------------------------------------------+
| Field                 | Value                                                                   |
+-----------------------+-------------------------------------------------------------------------+
| admin_state_up        | UP                                                                      |
| allowed_address_pairs | ip_address='1.1.1.16', mac_address='d0:1c:a0:70:97:ba'                  |
| binding_host_id       | bowmore                                                                 |
| binding_profile       |                                                                         |
| binding_vif_details   | datapath_type='system', ovs_hybrid_plug='False', port_filter='True'     |
| binding_vif_type      | ovs                                                                     |
| binding_vnic_type     | normal                                                                  |
| created_at            | 2017-11-07T12:00:24Z                                                    |
| data_plane_status     | None                                                                    |
| description           |                                                                         |
| device_id             | 98ae591b-0270-4625-95eb-a557c1452eef                                    |
| device_owner          | compute:nova                                                            |
| dns_assignment        | None                                                                    |
| dns_name              | None                                                                    |
| extra_dhcp_opts       |                                                                         |
| fixed_ips             | ip_address='1.1.1.11', subnet_id='dc8f0701-3553-4de1-8b65-0f9c76addf1f' |
| id                    | c4b46bea-5d49-46b5-98d9-f0f9eaf44708                                    |
| ip_address            | None                                                                    |
| mac_address           | d0:1c:a0:70:97:ba                                                       |
| name                  | octavia-lb-vrrp-038fb78e-923e-4143-8402-ad8dbd97f9ab                    |
| network_id            | d38b53a2-52f0-460c-94f9-4eb404db28a1                                    |
| option_name           | None                                                                    |
| option_value          | None                                                                    |
| port_security_enabled | True                                                                    |
| project_id            | 1e96bb9d794f4588adcd6f32ee3fbaa8                                        |
| qos_policy_id         | None                                                                    |
| revision_number       | 9                                                                       |
| security_group_ids    | 29a13b95-810e-4464-b1fb-ba61c59e1fa1                                    |
| status                | ACTIVE                                                                  |
| subnet_id             | None                                                                    |
| tags                  |                                                                         |
| trunk_details         | None                                                                    |
| updated_at            | 2017-11-07T12:00:27Z                                                    |
+-----------------------+-------------------------------------------------------------------------+
</pre>
    <font face="SFNS Display"><br>
      <br>
      2) <b>virsh dumpxml <instance ID></b> shows that this interface is
      attached to the VM<br>
      3) the <b>openvswitch</b> configuration contains this interface<br>
      <br>
      4) <u><b>BUT</b></u> qemu on the corresponding node is running with
      just one "-device virtio-net-pci" parameter, which corresponds to the
      port from the management network. There is no second virtio-net-pci
      device (see the verification sketch below).<br>
      <br>
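      The checks behind points 2)-4) can be reproduced roughly like this on
      the compute node (<instance-name> is a placeholder for the amphora's
      libvirt domain name, taken from "openstack server show"; expected
      results are as described above):<br>
    </font>
    <pre>
# libvirt sees both NICs attached to the domain
$ virsh dumpxml <instance-name> | grep -A6 'interface type'
# the tap device for the VIP port is present in the OVS configuration
$ sudo ovs-vsctl show | grep c4b46bea
# ... yet the qemu process carries only ONE virtio-net-pci device (expected: 2)
$ ps -ef | grep "guest=<instance-name>" | grep -o 'virtio-net-pci' | wc -l
</pre>
    <font face="SFNS Display"><br>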
      Manually detaching / re-attaching this interface using "nova
      interface-detach / interface-attach" <b>solves this issue</b> - the
      interface reappears inside the VM.<br>
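      For completeness, the workaround commands look like this (server and
      port UUIDs as shown above):<br>
    </font>
    <pre>
# detach and re-attach the amphora's VIP port
$ nova interface-detach 98ae591b-0270-4625-95eb-a557c1452eef c4b46bea-5d49-46b5-98d9-f0f9eaf44708
$ nova interface-attach --port-id c4b46bea-5d49-46b5-98d9-f0f9eaf44708 98ae591b-0270-4625-95eb-a557c1452eef
# after this, eth1 shows up inside the amphora again
</pre>
    <font face="SFNS Display"><br>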
      <br>
      This problem appears only with Octavia amphora instances - all
      other servers, launched using Heat or the CLI, work with two
      interfaces without any problems. Based on this, I guess the
      problem is related to the Octavia controller.<br>
      <br>
      It is worth mentioning that, at the same time, servers n1 and n2,
      which are connected to lbt-subnet, can ping each other as well as
      the virtual router (vR) and the local DHCP server (see topology
      above).<br>
    </font><br>
    <font face="SFNS Display"><font face="SFNS Display"><b>Neutron log
          files</b> show the last activity regarding this port well
        before eth1 disappears from the VM:<br>
        <br>
        <u>Controller node:</u><br>
      </font>
      <pre>
2017-11-07 12:00:29.885 17405 DEBUG neutron.db.provisioning_blocks [req-ae06e469-0592-46a4-bdb4-a65f47f9dee9 - - - - -] Provisioning complete for port <b>c4b46bea-5d49-46b5-98d9-f0f9eaf44708</b> triggered by entity L2. provisioning_complete /usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:138
2017-11-07 12:00:30.061 17405 DEBUG neutron.plugins.ml2.db [req-ae06e469-0592-46a4-bdb4-a65f47f9dee9 - - - - -] For port c4b46bea-5d49-46b5-98d9-f0f9eaf44708, host bowmore, got binding levels [<neutron.plugins.ml2.models.PortBindingLevel[object at 7f74a54a3a10] {port_id=u'<b>c4b46bea-5d49-46b5-98d9-f0f9eaf44708</b>', host=u'bowmore', level=0, driver=u'openvswitch', segment_id=u'7cd90f29-165a-4299-be72-51d2a2c18092'}>] get_binding_levels /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/db.py:106
</pre>
      <font face="SFNS Display"><u>Compute node:</u><br>
      </font>
      <pre>
2017-11-07 12:00:28.085 22451 DEBUG neutron.plugins.ml2.db [req-ae06e469-0592-46a4-bdb4-a65f47f9dee9 - - - - -] For port <b>c4b46bea-5d49-46b5-98d9-f0f9eaf44708</b>, host bowmore, got binding levels [<neutron.plugins.ml2.models.PortBindingLevel[object at 7f411310ccd0] {port_id=u'c4b46bea-5d49-46b5-98d9-f0f9eaf44708', host=u'bowmore', level=0, driver=u'openvswitch', segment_id=u'7cd90f29-165a-4299-be72-51d2a2c18092'}>] get_binding_levels /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/db.py:106
RESP BODY: {"events": [{"status": "completed", "tag": "<b>c4b46bea-5d49-46b5-98d9-f0f9eaf44708</b>", "name": "network-vif-plugged", "server_uuid": "98ae591b-0270-4625-95eb-a557c1452eef", "code": 200}]}
2017-11-07 12:00:28.116 22451 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'<b>c4b46bea-5d49-46b5-98d9-f0f9eaf44708</b>', u'name': u'network-vif-plugged', u'server_uuid': u'98ae591b-0270-4625-95eb-a557c1452eef', u'code': 200}
</pre>
      <font face="SFNS Display"><br>
        <br>
        <b>Octavia-worker.log</b> is available at the following link:
        <a class="moz-txt-link-freetext" href="https://pastebin.com/44rwshKZ">https://pastebin.com/44rwshKZ</a><br>
        <br>
        <b>Questions are:</b> any ideas on what is happening, and what
        further information and debug output should I gather in order to
        resolve this issue?<br>
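        For what it is worth, this is roughly what I can collect next on my
        side (log paths per the standard Ubuntu packages, and the tap name is
        derived from the first characters of the port ID - please tell me if
        something else is more useful):<br>
      </font>
      <pre>
# compute node: everything nova-compute logged about this port / instance
$ grep -E 'c4b46bea|98ae591b' /var/log/nova/nova-compute.log
# compute node: kernel / OVS messages about the tap device of the VIP port
$ grep tapc4b46bea /var/log/syslog
# compute node: neutron OVS agent activity for the same tap device
$ grep tapc4b46bea /var/log/neutron/neutron-openvswitch-agent.log
</pre>
      <font face="SFNS Display"><br>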
      </font></font><font face="SFNS Display"><br>
      Thank you.<br>
      <br>
    </font>
    <pre class="moz-signature" cols="72">-- 
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison
</pre>
  </body>
</html>