[Openstack] [Netstack] Can't associate floating IP

Yapeng Wu yapengwu at me.com
Thu Mar 1 20:51:53 UTC 2012


In the multi-host Quantum OVS plugin case, I am not clear on how and when this 'host' field in the network table should be updated.
Looking at that code change, I doubt it would work in the multi-host case.

Currently I always update it manually with a MySQL command, as Brad showed in his email.
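
For reference, this is roughly what that manual workaround looks like. A minimal sketch only: it assumes the Nova database is called nova, the network row has id 1, and compute1 stands in for the hostname of the node running nova-network; all three are placeholders for your own setup.

# Hypothetical example of the manual fix Brad described: set the network's
# host column so nova-network takes the local branch instead of rpc.cast.
mysql -u root -p nova -e "UPDATE networks SET host = 'compute1' WHERE id = 1;"

# Verify the change:
mysql -u root -p nova -e "SELECT id, cidr, host FROM networks;"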

Yapeng

On Mar 01, 2012, at 09:18 AM, Tomoe Sugihara <tomoe at midokura.com> wrote:

Hi Doude,

I was dealing with another issue caused by this NULL 'host', and Dan told me that it has been fixed here:
https://github.com/openstack/nova/commit/43f2492175d11a3f8ea4198e65b2a6a6b38cbbb6

I haven't verified, though. Good luck.

Best,
Tomoe

On Thu, Mar 1, 2012 at 10:11 PM, Doude <doudouyam at gmail.com> wrote:
Hi,

I tested it with Quantum and I confirm the 'host' field in the network table stays empty.
Has anyone filed a bug for that?

Doude.


On Tue, Feb 28, 2012 at 8:06 AM, Vishvananda Ishaya <vishvananda at gmail.com> wrote:
...
At least that is the case for nova-network without quantum.  I don't know if using Quantum leads to a different result.

Vish

On Feb 27, 2012, at 9:49 PM, Yapeng Wu wrote:

Hello, Brad,

I read your reply to Darragh's email:
"Another thing to check .. when I run devstack by default my network
doesn't get associated with a host [host is NULL in the database].
Make sure the host for that network is set to the hostname of the
compute node."

I found that when I use the "nova-manage network create" CLI command, the host is NULL in the database as well.
Is this a bug? Where does this "host" field get updated?
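
A quick way to see this (illustrative only; assumes the Nova database is named nova and the usual devstack MySQL credentials):

# After "nova-manage network create", the host column for the new network
# stays NULL until something sets it.
mysql -u root -p nova -e "SELECT id, cidr, host FROM networks;"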

Thanks,
Yapeng



On Feb 27, 2012, at 04:02 PM, Yapeng Wu <yapengwu at me.com> wrote:

Hello, Darragh,

I...
 

2) For the second problem (the command-line issue), I found that it comes down to this code in nova/network/manager.py, lines 435 to 452:
        if network['multi_host']:
            instance = self.db.instance_get(context, fixed_ip['instance_id'])
            host = instance['host']
        else:
            host = network['host']
        interface = floating_ip['interface']
        if host == self.host:
            # i'm the correct host
            self._associate_floating_ip(context, floating_address,
                                        fixed_address, interface)
        else:
            # send to correct host
            rpc.cast(context,
                     self.db.queue_get_for(context, FLAGS.network_topic, host),
                     {'method': '_associate_floating_ip',
                      'args': {'floating_address': floating_address,
                               'fixed_address': fixed_address,
                               'interface': interface}})

The "host" should be self.host in this case, but not. So it calls rpc.cast. I thought host is read from the database by the 'instance_id'? I am not sure. If I "hacked" the code by calling self._associate_floating_ip directly, associate floating ip works.

Maybe someone familiar with nova-network code could help us on this.

Yapeng



On Feb 27, 2012, at 12:18 PM, Dan Wendlandt <dan at nicira.com> wrote:

Hi Darragh,

Thanks for the detailed write-up.  It would be great if you could take this content and create a bug on Launchpad.  We'll look into this.

On a related note: the check in stack.sh that avoids creating a floating IP pool when Quantum is enabled is no longer valid, now that Quantum Manager does (or at least intends to :P) support floating IPs.  But rather than removing it, it may be better to change the check so floating IPs are only skipped when Melange is enabled, since Quantum Manager's current floating IP support requires the traditional Nova IPAM, not Melange.
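
Something along these lines, mirroring the stack.sh block Darragh quotes below. Just a sketch: the service name passed to is_service_enabled for Melange is an assumption, so use whatever your devstack actually calls it.

    # hypothetical revision of the check described above
    if is_service_enabled melange; then
        echo "Not creating floating IPs (Quantum Manager floating IPs require Nova IPAM, not Melange)"
    else
        # create some floating ips
        $NOVA_DIR/bin/nova-manage floating create $FLOATING_RANGE

        # create a second pool
        $NOVA_DIR/bin/nova-manage floating create --ip_range=$TEST_FLOATING_RANGE --pool=$TEST_FLOATING_POOL
    fi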

Dan


On Mon, Feb 27, 2012 at 6:38 AM, Darragh OReilly <darragh.oreilly at yahoo.com> wrote:


When I try to associate a floating IP from the dash I get:
Error: Error associating Floating IP: Associate floating ip failed (HTTP 500)

From the command line I don't get any errors or exceptions on any of the screens after nova add-floating-ip, but the association does not happen. Are these steps right?

thanks Darragh


u1 at u1110srv:~/devstack$ cat localrc
LOGFILE=stacklog
MYSQL_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=nova
ADMIN_PASSWORD=nova
SWIFT_HASH=nova
ENABLED_SERVICES="g-api,g-reg,key,n-api,n-cpu,n-net,n-sch,n-vnc,horizon,mysql,rabbit,openstackx,n-vol,q-svc,q-agt,swift,quantum"
Q_PLUGIN=openvswitch

After running stack.sh have:

u1 at u1110srv:~/devstack$ nova-manage network list
id       IPv4                  IPv6               start address      DNS1               DNS2               VlanID             project            uuid           
2012-02-27 13:38:04 DEBUG nova.utils [req-7b6febc7-c8cd-49e5-ac48-5707c70d8bb5 None None] backend <module 'nova.db.sqlalchemy.api' from '/opt/stack/nova/nova/db/sqlalchemy/api.pyc'> from (pid=4776) __get_backend /opt/stack/nova/nova/utils.py:603
1        10.0.0.0/24           None               10.0.0.2           8.8.4.4            None               None               None               20cda3a7-f4a8-4b3c-b399-4dd624cb7a40


u1 at u1110srv:~/devstack$ TENANT=
u1 at u1110srv:~/devstack$ USERNAME=
u1 at u1110srv:~/devstack$ . openrc
u1 at u1110srv:~/devstack$ 

u1 at u1110srv:~/devstack$ nova boot --flavor 6 --image 21b0573e-8dd6-4b42-9c01-4c8684b0b080 guest1


u1 at u1110srv:~/devstack$ nova show guest1
+-------------------+----------------------------------------------------------+
|      Property     |                          Value                           |
+-------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL                                                   |
| accessIPv4        |                                                          |
| accessIPv6        |                                                          |
| config_drive      |                                                          |
| created           | 2012-02-27T13:49:00Z                                     |
| flavor            | micro                                                    |
| hostId            | 372f92b8889526d07feaa81ec5ab9bb80228350db4cfa563c15baf6f |
| id                | 3d931521-9ea2-4344-ad52-13faf4172e96                     |
| image             | cirros-0.3.0-x86_64-blank                                |
| key_name          |                                                          |
| metadata          | {}                                                       |
| name              | guest1                                                   |
| private network   | 10.0.0.2                                                 |
| progress          | None                                                     |
| status            | ACTIVE                                                   |
| tenant_id         | 5b5a2c42b5874058962c6f543ee91c72                         |
| updated           | 2012-02-27T13:49:30Z                                     |
| user_id           | demo                                                     |
+-------------------+----------------------------------------------------------+


u1 at u1110srv:~/devstack$ ssh cirros at 10.0.0.2
The authenticity of host '10.0.0.2 (10.0.0.2)' can't be established.
RSA key fingerprint is ed:b0:be:78:26:23:2a:8d:81:22:84:84:f0:6c:ec:3c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.2' (RSA) to the list of known hosts.
cirros at 10.0.0.2's password: 
$ ping www.openstack.org
PING www.openstack.org (98.129.229.144): 56 data bytes
64 bytes from 98.129.229.144: seq=1 ttl=61 time=182.020 ms
64 bytes from 98.129.229.144: seq=2 ttl=61 time=182.166 ms
^C
--- www.openstack.org ping statistics ---
3 packets transmitted, 2 packets received, 33% packet loss
round-trip min/avg/max = 182.020/182.093/182.166 ms
$ Connection to 10.0.0.2 closed.
u1 at u1110srv:~/devstack$ 


stack.sh does not create any floating IP range when Quantum is enabled:

    if is_service_enabled q-svc; then
        echo "Not creating floating IPs (not supported by QuantumManager)"
    else
        # create some floating ips
        $NOVA_DIR/bin/nova-manage floating create $FLOATING_RANGE

        # create a second pool
        $NOVA_DIR/bin/nova-manage floating create --ip_range=$TEST_FLOATING_RANGE --pool=$TEST_FLOATING_POOL
    fi


u1 at u1110srv:~/devstack$ nova-manage floating create --ip_range=172.241.1.0/24 --interface=eth2

u1 at u1110srv:~/devstack$ nova-manage floating list 2>/dev/null | head -3
None    172.241.1.1    None    nova    eth2
None    172.241.1.2    None    nova    eth2
None    172.241.1.3    None    nova    eth2

u1 at u1110srv:~/devstack$ nova floating-ip-create
+-------------+-------------+----------+------+
|      Ip     | Instance Id | Fixed Ip | Pool |
+-------------+-------------+----------+------+
| 172.241.1.1 | None        | None     | nova |
+-------------+-------------+----------+------+

u1 at u1110srv:~/devstack$ nova add-floating-ip guest1 172.241.1.1
u1 at u1110srv:~/devstack$ echo $?
0
u1 at u1110srv:~/devstack$ nova floating-ip-list
+-------------+-------------+----------+------+
|      Ip     | Instance Id | Fixed Ip | Pool |
+-------------+-------------+----------+------+
| 172.241.1.1 | None        | None     | nova |
+-------------+-------------+----------+------+

u1 at u1110srv:~/devstack$ ip link show dev eth2
2: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 08:00:27:1a:5c:69 brd ff:ff:ff:ff:ff:ff
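
(Side note: eth2 shows state DOWN above; the interface that floating IPs get bound to normally needs to be up before any floating traffic can work. Illustrative command:)

sudo ip link set eth2 up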

u1 at u1110srv:~/devstack$ sudo iptables -t nat -vnL | grep -i float
Chain nova-api-float-snat (1 references)
   81  4863 nova-api-float-snat  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
Chain nova-compute-float-snat (1 references)
   82  4947 nova-compute-float-snat  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
Chain nova-manage-float-snat (1 references)
   81  4863 nova-manage-float-snat  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
Chain nova-network-float-snat (1 references)
   82  4947 nova-network-float-snat  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
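
A couple of extra checks that might help narrow down where the association stops (illustrative only; 172.241.1.1 is the floating address used above):

# Did the floating address get bound to an interface on the network host?
ip addr show | grep 172.241.1.1

# Did nova-network install NAT rules for it?
sudo iptables -t nat -nL | grep 172.241.1.1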


--
Mailing list: https://launchpad.net/~netstack
Post to     : netstack at lists.launchpad.net
Unsubscribe : https://launchpad.net/~netstack
More help   : https://help.launchpad.net/ListHelp




-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt 
Nicira Networks: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~

_______________________________________________

Mailing list: https://launchpad.net/~openstack
Post to     : openstack at lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

