I had the same problem here.
Looks like "nova-manage network delete" does not update the fixed_ips
table.

Cheers,

On 30 January 2012 06:32, Alfred Perlstein <alfred@pontiflex.com> wrote:
So I actually resolved this on my own after looking at it for the good
part of a day.

The problem was partly my misunderstanding of how OpenStack wants a
VLAN-separated network to look, and partly OpenStack's database
accumulating cruft and requiring a manual cleaning to get "right" again.

I followed the directions here (and added some extras for multiple
projects):
http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-vlan-networking.html

After that I was STILL getting machines brought up in the wrong subnets.
I grepped through most of /var and /etc before deciding to mysqldump the
OpenStack configuration database and try to figure out what all the
nonsense was about.

It turns out there is a table called 'fixed_ips' that causes problems if
it goes stale. By stale I mean it contains duplicate IPs, or old IPs that
are no longer in the 'networks' table; this seems to make OpenStack go a
little nuts and hand old IPs to new instances.
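
For anyone else chasing this, a couple of read-only queries like the
following should show whether your table is stale before you delete
anything. This is only a sketch against what I believe the Diablo-era
schema looks like (fixed_ips.network_id pointing at networks.id), so
check with DESCRIBE fixed_ips first:

-- addresses whose parent network row no longer exists
-- (assumes fixed_ips.network_id references networks.id)
SELECT f.id, f.address, f.network_id
  FROM fixed_ips f
  LEFT JOIN networks n ON f.network_id = n.id
 WHERE n.id IS NULL;

-- duplicate addresses
SELECT address, COUNT(*) AS copies
  FROM fixed_ips
 GROUP BY address
HAVING COUNT(*) > 1;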

I fixed this by going into the database and running a few commands like
this:

-- these networks were no longer in the nova.networks table
delete from fixed_ips where address like '192.168.5.%';
delete from fixed_ips where address like '192.168.4.%';
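
If your stale rows don't all fall inside tidy /24s, a single delete keyed
on the networks table should do the same cleanup. Again, this is a sketch
under the same schema assumption, so mysqldump first (and if your
deployment soft-deletes network rows rather than removing them, match on
n.deleted = 1 instead of n.id IS NULL):

-- remove every fixed_ips row whose network is gone
-- (assumes fixed_ips.network_id references networks.id; back up first)
DELETE f
  FROM fixed_ips f
  LEFT JOIN networks n ON f.network_id = n.id
 WHERE n.id IS NULL;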

Then I ran:

nova-manage db sync

and then instances started showing up in the correct network depending
on project!
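
To double-check the result, a query along these lines maps each allocated
address back to its network and project (same schema assumptions as
above, including the 'allocated' column, so verify against your DESCRIBE
output first):

-- which network/project does each allocated address belong to?
SELECT f.address, n.cidr, n.vlan, n.project_id
  FROM fixed_ips f
  JOIN networks n ON f.network_id = n.id
 WHERE f.allocated = 1;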

Woo, thanks for this cool project, guys. Nice stuff!

-Alfred

On 1/29/12 7:21 PM, Alfred Perlstein wrote:
A few more notes:

From nova-compute.log:
2012-01-30 03:18:48,121 DEBUG nova.compute.manager [-] instance network_info: |[[{u'bridge': u'br101', u'multi_host': False, u'bridge_interface': u'eth1', u'vlan': 100, u'id': 18, u'injected': False, u'cidr': u'192.168.5.0/24', u'cidr_v6': None}, {u'should_create_bridge': True, u'dns': [], u'vif_uuid': u'9452c320-f63a-4641-9a12-7f342d30e897', u'label': u'appproj1net1', u'broadcast': u'192.168.5.255', u'ips': [{u'ip': u'192.168.4.14', u'netmask': u'255.255.255.0', u'enabled': u'1'}], u'mac': u'02:16:3e:3f:c2:31', u'rxtx_cap': 0, u'should_create_vlan': True, u'dhcp_server': u'192.168.5.1', u'gateway': u'192.168.5.1'}]]| from (pid=6467) _run_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:394
From nova-api.log:
dist-packages/nova/auth/manager.py:324
2012-01-30 03:18:58,526 AUDIT nova.api [86a0236c-6c2e-45c7-9833-06d4d5addd26 appadmin appproj1] Authenticated Request For appadmin:appproj1)
2012-01-30 03:18:58,527 DEBUG nova.api [-] action: DescribeInstances from (pid=6476) __call__ /usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py:240
2012-01-30 03:18:58,528 DEBUG nova.compute.api [-] Searching by: {'deleted': False} from (pid=6476) get_all /usr/lib/python2.7/dist-packages/nova/compute/api.py:862
2012-01-30 03:18:58,600 DEBUG nova.api.request [-] <?xml version="1.0" ?><DescribeInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2010-08-31/"><requestId>86a0236c-6c2e-45c7-9833-06d4d5addd26</requestId><reservationSet><item><ownerId>appproj1</ownerId><groupSet><item><groupId>default</groupId></item></groupSet><reservationId>r-5dake0zu</reservationId><instancesSet><item><displayDescription/><displayName>Server 45</displayName><rootDeviceType>instance-store</rootDeviceType><keyName>appkey</keyName><instanceId>i-0000002d</instanceId><instanceState><code>1</code><name>running</name></instanceState><publicDnsName/><imageId>ami-00000012</imageId><productCodesSet/><privateDnsName>192.168.4.14</privateDnsName><dnsName>192.168.4.14</dnsName><launchTime>2012-01-30T03:18:45Z</launchTime><amiLaunchIndex>0</amiLaunchIndex><rootDeviceName>/dev/vda</rootDeviceName><kernelId>aki-00000011</kernelId><ramdiskId>ami-00000000</ramdiskId><placement><availabilityZone>nova</availabilityZone></placement><ipAddress>192.168.4.14</ipAddress><instanceType>m1.tiny</instanceType><privateIpAddress>192.168.4.14</privateIpAddress></item></instancesSet></item></reservationSet></DescribeInstancesResponse> from (pid=6476) _render_response /usr/lib/python2.7/dist-packages/nova/api/ec2/apirequest.py:99

Again from nova-compute.log:
==> nova-compute.log <==
2012-01-30 03:18:48,121 DEBUG nova.compute.manager [-] instance network_info: |[[{u'bridge': u'br101', u'multi_host': False, u'bridge_interface': u'eth1', u'vlan': 100, u'id': 18, u'injected': False, u'cidr': u'192.168.5.0/24', u'cidr_v6': None}, {u'should_create_bridge': True, u'dns': [], u'vif_uuid': u'9452c320-f63a-4641-9a12-7f342d30e897', u'label': u'appproj1net1', u'broadcast': u'192.168.5.255', u'ips': [{u'ip': u'192.168.4.14', u'netmask': u'255.255.255.0', u'enabled': u'1'}], u'mac': u'02:16:3e:3f:c2:31', u'rxtx_cap': 0, u'should_create_vlan': True, u'dhcp_server': u'192.168.5.1', u'gateway': u'192.168.5.1'}]]| from (pid=6467) _run_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:394
2012-01-30 03:18:48,275 DEBUG nova.virt.libvirt_conn [-] instance instance-0000002d: starting toXML method from (pid=6467) to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1270
2012-01-30 03:18:48,275 DEBUG nova.virt.libvirt.vif [-] Ensuring vlan 100 and bridge br101 from (pid=6467) plug /usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py:82

Thank you very much,
-Alfred

On 1/29/12 7:08 PM, Alfred Perlstein wrote:
Hey folks,

I am so so so so close to being done with my first OpenStack deployment
here. It is a relatively basic setup that mirrors the "Diablo starter
guide".

However, I seem to be stuck on a simple issue: I can't make multiple
networks work, and hence I can't make multiple projects work, because of
what appear to be problems configuring nova-network.

What is really strange is that for some reason my "appproj" user is
somehow making nodes launch inside the "proj" network.

I am not sure why this is happening. Should I have these networks on
separate br100/br101 interfaces? Separate VLANs? I'm losing my mind
here. :)

/var/log/nova # nova-manage network list
id   IPv4             IPv6   start address   DNS1   DNS2   VlanID   project    uuid
17   192.168.4.0/24   None   192.168.4.3     None   None   100      proj       None
21   192.168.5.0/24   None   192.168.5.3     None   None   100      appproj1   None

/var/log/nova # euca-describe-instances
RESERVATION  r-z80ezowp  appproj1  default
INSTANCE  i-00000022  ami-0000000c  192.168.4.9  192.168.4.9  running  appkey (appproj1, amber)  0  m1.tiny  2012-01-30T01:36:05Z  nova  aki-0000000b  ami-00000000
RESERVATION  r-1jhgh53l  proj  default
INSTANCE  i-0000001d  ami-00000004  192.168.4.5  192.168.4.5  running  mykey (proj, amber)  0  m1.tiny  2012-01-30T01:12:57Z  nova  aki-00000003  ami-00000000

Note how i-00000022 is inside PROJECT="appproj1", but its IP is
192.168.4.9 (this should be 192.168.5.x!).

Let me show the config and the output of some commands:

/var/log/nova # cat /etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--lock_path=/var/lock/nova
--state_path=/var/lib/nova
--verbose
--s3_host=10.254.13.50
--rabbit_host=10.254.13.50
--cc_host=10.254.13.50
--ec2_url=http://10.254.13.50:8773/services/Cloud
--fixed_range=192.168.0.0/16
--network_size=256
--FAKE_subdomain=ec2
--routing_source_ip=10.254.13.50
--sql_connection=mysql://XXX:XXX@10.254.13.50/nova
--glance_api_servers=192.168.3.1:9292
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=192.168.
--vlan_interface=eth1
--public_interface=eth0
--nova_url=http://10.254.13.50:8774/v1.1/

/var/log/nova # ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 84:2b:2b:4e:0d:dc brd ff:ff:ff:ff:ff:ff
    inet 10.254.13.50/25 brd 10.254.13.127 scope global eth0
    inet6 fe80::862b:2bff:fe4e:ddc/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 84:2b:2b:4e:0d:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.1/24 brd 192.168.3.255 scope global eth1
    inet6 fe80::862b:2bff:fe4e:ddd/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 42:09:56:f6:99:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
6: vlan100@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br100 state UP
    link/ether 02:16:3e:27:ce:b1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::16:3eff:fe27:ceb1/64 scope link
       valid_lft forever preferred_lft forever
7: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:16:3e:27:ce:b1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.1/24 brd 192.168.5.255 scope global br100
    inet 192.168.4.1/24 brd 192.168.4.255 scope global br100
    inet6 fe80::b069:faff:feb5:2869/64 scope link
       valid_lft forever preferred_lft forever
8: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br100 state UNKNOWN qlen 500
    link/ether fe:16:3e:48:e4:aa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe48:e4aa/64 scope link
       valid_lft forever preferred_lft forever
11: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br100 state UNKNOWN qlen 500
    link/ether fe:16:3e:44:d5:c9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe44:d5c9/64 scope link
       valid_lft forever preferred_lft forever

_______________________________________________
Openstack-operators mailing list
Openstack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
checkout my blog on linux clusters:
-- linuxdatacenter.blogspot.com --