[Openstack-operators] Please help with multiple "fixed networks"

Linux Datacenter linuxdatacenter at gmail.com
Mon Jan 30 08:16:48 UTC 2012


I had the same problem here.
Looks like "nova-manage network delete" does not update the fixed_ips table.

Cheers,

On 30 January 2012 06:32, Alfred Perlstein <alfred at pontiflex.com> wrote:

> So I actually resolved this on my own after looking at it for the better
> part of a day.
>
> The problem was partly my misunderstanding of what openstack expects a
> VLAN-separated network to look like, and partly openstack's database
> accumulating cruft that required a manual cleaning to get "right" again.
>
> I followed the directions here (added some extras for multiple projects):
> http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-vlan-networking.html
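>
> For reference, the per-project networks get created with something roughly
> like the following (a sketch only: the flag names are what I recall from the
> Diablo-era nova-manage, and the labels and VLAN numbers here are placeholder
> values, so check "nova-manage network create --help" for the exact syntax on
> your release):
>
> nova-manage network create --label=projnet1 --fixed_range_v4=192.168.4.0/24 \
>     --vlan=100 --bridge=br100 --project_id=proj
> nova-manage network create --label=appproj1net1 --fixed_range_v4=192.168.5.0/24 \
>     --vlan=101 --bridge=br101 --project_id=appproj1
>
> The key point is that each project ends up with its own fixed range, VLAN
> and bridge.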
>
> I was then STILL getting machines brought up in the wrong subnets.
>  I grepped through most of /var and /etc before deciding to mysqldump the
> openstack config database and try to figure out what all the nonsense was
> about.
>
> It turns out there is a table called 'fixed_ips' that causes problems if it
> gets stale.  By stale I mean it has duplicate IPs in it, or somehow holds
> old IPs that are no longer in the 'networks' table; this seems to make
> openstack go a little nuts and hand old IPs to new instances.
>
> I fixed this by going into the database and running a few commands like
> this:
> -- these networks were no longer in nova.networks table.
> delete from fixed_ips where address like '192.168.5.%';
> delete from fixed_ips where address like '192.168.4.%';
>
> then I ran:
> nova-manage db sync
>
> and then instances started showing up in the correct network depending on
> project!
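>
> To confirm the cleanup, re-running the same two commands from my earlier
> mail should now show each instance landing in its own project's range:
>
> nova-manage network list
> euca-describe-instances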
>
> woo, thanks for this cool project guys, nice stuff!
>
> -Alfred
>
>
> On 1/29/12 7:21 PM, Alfred Perlstein wrote:
>
>> A few more notes:
>>
>> From nova-compute.log:
>> 2012-01-30 03:18:48,121 DEBUG nova.compute.manager [-] instance
>> network_info: |[[{u'bridge': u'br101', u'multi_host': False,
>> u'bridge_interface': u'eth1', u'vlan': 100, u'id': 18, u'injected': False,
>> u'cidr': u'192.168.5.0/24', u'cidr_v6': None}, {u'should_create_bridge':
>> True, u'dns': [], u'vif_uuid': u'9452c320-f63a-4641-9a12-7f342d30e897',
>> u'label': u'appproj1net1', u'broadcast': u'192.168.5.255', u'ips': [{u'ip':
>> u'192.168.4.14', u'netmask': u'255.255.255.0', u'enabled': u'1'}], u'mac':
>> u'02:16:3e:3f:c2:31', u'rxtx_cap': 0, u'should_create_vlan': True,
>> u'dhcp_server': u'192.168.5.1', u'gateway': u'192.168.5.1'}]]| from
>> (pid=6467) _run_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:394
>>
>> From nova-api.log:
>> dist-packages/nova/auth/manager.py:324
>> 2012-01-30 03:18:58,526 AUDIT nova.api [86a0236c-6c2e-45c7-9833-06d4d5addd26
>> appadmin appproj1] Authenticated Request For appadmin:appproj1)
>> 2012-01-30 03:18:58,527 DEBUG nova.api [-] action: DescribeInstances
>> from (pid=6476) __call__ /usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py:240
>> 2012-01-30 03:18:58,528 DEBUG nova.compute.api [-] Searching by:
>> {'deleted': False} from (pid=6476) get_all /usr/lib/python2.7/dist-packages/nova/compute/api.py:862
>> 2012-01-30 03:18:58,600 DEBUG nova.api.request [-] <?xml version="1.0"
>> ?><DescribeInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2010-08-31/">
>> <requestId>86a0236c-6c2e-45c7-9833-06d4d5addd26</requestId><reservationSet>
>> <item><ownerId>appproj1</ownerId><groupSet><item><groupId>default</groupId>
>> </item></groupSet><reservationId>r-5dake0zu</reservationId><instancesSet>
>> <item><displayDescription/><displayName>Server 45</displayName>
>> <rootDeviceType>instance-store</rootDeviceType><keyName>appkey</keyName>
>> <instanceId>i-0000002d</instanceId><instanceState><code>1</code>
>> <name>running</name></instanceState><publicDnsName/><imageId>ami-00000012</imageId>
>> <productCodesSet/><privateDnsName>192.168.4.14</privateDnsName>
>> <dnsName>192.168.4.14</dnsName><launchTime>2012-01-30T03:18:45Z</launchTime>
>> <amiLaunchIndex>0</amiLaunchIndex><rootDeviceName>/dev/vda</rootDeviceName>
>> <kernelId>aki-00000011</kernelId><ramdiskId>ami-00000000</ramdiskId>
>> <placement><availabilityZone>nova</availabilityZone></placement>
>> <ipAddress>192.168.4.14</ipAddress><instanceType>m1.tiny</instanceType>
>> <privateIpAddress>192.168.4.14</privateIpAddress></item></instancesSet>
>> </item></reservationSet></DescribeInstancesResponse>
>> from (pid=6476) _render_response /usr/lib/python2.7/dist-packages/nova/api/ec2/apirequest.py:99
>>
>> again from nova-compute.log:
>> ==> nova-compute.log <==
>> 2012-01-30 03:18:48,121 DEBUG nova.compute.manager [-] instance
>> network_info: |[[{u'bridge': u'br101', u'multi_host': False,
>> u'bridge_interface': u'eth1', u'vlan': 100, u'id': 18, u'injected': False,
>> u'cidr': u'192.168.5.0/24', u'cidr_v6': None}, {u'should_create_bridge':
>> True, u'dns': [], u'vif_uuid': u'9452c320-f63a-4641-9a12-7f342d30e897',
>> u'label': u'appproj1net1', u'broadcast': u'192.168.5.255', u'ips': [{u'ip':
>> u'192.168.4.14', u'netmask': u'255.255.255.0', u'enabled': u'1'}], u'mac':
>> u'02:16:3e:3f:c2:31', u'rxtx_cap': 0, u'should_create_vlan': True,
>> u'dhcp_server': u'192.168.5.1', u'gateway': u'192.168.5.1'}]]| from
>> (pid=6467) _run_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:394
>> 2012-01-30 03:18:48,275 DEBUG nova.virt.libvirt_conn [-] instance
>> instance-0000002d: starting toXML method from (pid=6467) to_xml
>> /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1270
>> 2012-01-30 03:18:48,275 DEBUG nova.virt.libvirt.vif [-] Ensuring vlan
>> 100 and bridge br101 from (pid=6467) plug /usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py:82
>>
>> nova-api.log:
>> 2012-01-30 03:18:58,600 DEBUG nova.api.request [-] <?xml version="1.0"
>> ?><DescribeInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2010-08-31/">
>> <requestId>86a0236c-6c2e-45c7-9833-06d4d5addd26</requestId><reservationSet>
>> <item><ownerId>appproj1</ownerId><groupSet><item><groupId>default</groupId>
>> </item></groupSet><reservationId>r-5dake0zu</reservationId><instancesSet>
>> <item><displayDescription/><displayName>Server 45</displayName>
>> <rootDeviceType>instance-store</rootDeviceType><keyName>appkey</keyName>
>> <instanceId>i-0000002d</instanceId><instanceState><code>1</code>
>> <name>running</name></instanceState><publicDnsName/><imageId>ami-00000012</imageId>
>> <productCodesSet/><privateDnsName>192.168.4.14</privateDnsName>
>> <dnsName>192.168.4.14</dnsName><launchTime>2012-01-30T03:18:45Z</launchTime>
>> <amiLaunchIndex>0</amiLaunchIndex><rootDeviceName>/dev/vda</rootDeviceName>
>> <kernelId>aki-00000011</kernelId><ramdiskId>ami-00000000</ramdiskId>
>> <placement><availabilityZone>nova</availabilityZone></placement>
>> <ipAddress>192.168.4.14</ipAddress><instanceType>m1.tiny</instanceType>
>> <privateIpAddress>192.168.4.14</privateIpAddress></item></instancesSet>
>> </item></reservationSet></DescribeInstancesResponse>
>> from (pid=6476) _render_response /usr/lib/python2.7/dist-packages/nova/api/ec2/apirequest.py:99
>>
>>
>>
>> Thank you very much,
>> -Alfred
>>
>> On 1/29/12 7:08 PM, Alfred Perlstein wrote:
>>
>>> Hey folks,
>>>
>>> I am so so so so close to being done with my first openstack deployment
>>> here.  It is a relatively basic setup that mirrors the "Diablo starter
>>> guide".
>>>
>>> However I seem to be stuck on a simple issue.  I can't make multiple
>>> networks work, and hence I can't make multiple projects work, because of
>>> what appear to be problems I'm having configuring nova-network.
>>>
>>> What is really strange is that for some reason my "appproj" user is
>>> somehow making nodes launch inside the "proj" network.
>>>
>>> I am not sure why this is happening.  Should I have these networks on
>>> separate br100/br101 interfaces?  Separate vlans?  I'm losing my mind here.
>>> :)
>>>
>>> /var/log/nova # nova-manage network list
>>> id   IPv4             IPv6   start address   DNS1   DNS2   VlanID   project    uuid
>>> 17   192.168.4.0/24   None   192.168.4.3     None   None   100      proj       None
>>> 21   192.168.5.0/24   None   192.168.5.3     None   None   100      appproj1   None
>>>
>>> /var/log/nova # euca-describe-instances
>>> RESERVATION    r-z80ezowp    appproj1    default
>>> INSTANCE    i-00000022    ami-0000000c    192.168.4.9    192.168.4.9    running    appkey (appproj1, amber)    0    m1.tiny    2012-01-30T01:36:05Z    nova    aki-0000000b    ami-00000000
>>> RESERVATION    r-1jhgh53l    proj    default
>>> INSTANCE    i-0000001d    ami-00000004    192.168.4.5    192.168.4.5    running    mykey (proj, amber)    0    m1.tiny    2012-01-30T01:12:57Z    nova    aki-00000003    ami-00000000
>>>
>>>
>>> Note how i-00000022 is inside PROJECT="appproj1", but its IP is 192.168.4.9
>>> (this should be 192.168.5.x!!!!!).
>>>
>>> Let me show the config and output of some commands:
>>>
>>>  /var/log/nova # cat /etc/nova/nova.conf
>>>> --dhcpbridge_flagfile=/etc/nova/nova.conf
>>>> --dhcpbridge=/usr/bin/nova-dhcpbridge
>>>> --logdir=/var/log/nova
>>>> --lock_path=/var/lock/nova
>>>> --state_path=/var/lib/nova
>>>> --verbose
>>>> --s3_host=10.254.13.50
>>>> --rabbit_host=10.254.13.50
>>>> --cc_host=10.254.13.50
>>>> --ec2_url=http://10.254.13.50:8773/services/Cloud
>>>> --fixed_range=192.168.0.0/16
>>>> --network_size=256
>>>> --FAKE_subdomain=ec2
>>>> --routing_source_ip=10.254.13.50
>>>> --sql_connection=mysql://XXX:XXX@10.254.13.50/nova
>>>> --glance_api_servers=192.168.3.1:9292
>>>> --image_service=nova.image.glance.GlanceImageService
>>>> --iscsi_ip_prefix=192.168.
>>>> --vlan_interface=eth1
>>>> --public_interface=eth0
>>>> --nova_url=http://10.254.13.50:8774/v1.1/
>>>>
>>>
>>>
>>>
>>> /var/log/nova # ip addr list
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>    inet 127.0.0.1/8 scope host lo
>>>    inet 169.254.169.254/32 scope link lo
>>>    inet6 ::1/128 scope host
>>>       valid_lft forever preferred_lft forever
>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
>>> qlen 1000
>>>    link/ether 84:2b:2b:4e:0d:dc brd ff:ff:ff:ff:ff:ff
>>>    inet 10.254.13.50/25 brd 10.254.13.127 scope global eth0
>>>    inet6 fe80::862b:2bff:fe4e:ddc/64 scope link
>>>       valid_lft forever preferred_lft forever
>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
>>> qlen 1000
>>>    link/ether 84:2b:2b:4e:0d:dd brd ff:ff:ff:ff:ff:ff
>>>    inet 192.168.3.1/24 brd 192.168.3.255 scope global eth1
>>>    inet6 fe80::862b:2bff:fe4e:ddd/64 scope link
>>>       valid_lft forever preferred_lft forever
>>> 4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
>>> state DOWN
>>>    link/ether 42:09:56:f6:99:44 brd ff:ff:ff:ff:ff:ff
>>>    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>>> 6: vlan100@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>>> noqueue master br100 state UP
>>>    link/ether 02:16:3e:27:ce:b1 brd ff:ff:ff:ff:ff:ff
>>>    inet6 fe80::16:3eff:fe27:ceb1/64 scope link
>>>       valid_lft forever preferred_lft forever
>>> 7: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>>> state UP
>>>    link/ether 02:16:3e:27:ce:b1 brd ff:ff:ff:ff:ff:ff
>>>    inet 192.168.5.1/24 brd 192.168.5.255 scope global br100
>>>    inet 192.168.4.1/24 brd 192.168.4.255 scope global br100
>>>    inet6 fe80::b069:faff:feb5:2869/64 scope link
>>>       valid_lft forever preferred_lft forever
>>> 8: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>> master br100 state UNKNOWN qlen 500
>>>    link/ether fe:16:3e:48:e4:aa brd ff:ff:ff:ff:ff:ff
>>>    inet6 fe80::fc16:3eff:fe48:e4aa/64 scope link
>>>       valid_lft forever preferred_lft forever
>>> 11: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>>> pfifo_fast master br100 state UNKNOWN qlen 500
>>>    link/ether fe:16:3e:44:d5:c9 brd ff:ff:ff:ff:ff:ff
>>>    inet6 fe80::fc16:3eff:fe44:d5c9/64 scope link
>>>       valid_lft forever preferred_lft forever
>>>
>>>
>>>
>>>
>>>
>>>
>>
> _______________________________________________
> Openstack-operators mailing list
> Openstack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
check out my blog on linux clusters:
-- linuxdatacenter.blogspot.com --

