[Openstack-operators] Please help with multiple "fixed networks"
Alfred Perlstein
alfred@pontiflex.com
Mon Jan 30 03:21:57 UTC 2012
A few more notes:
From nova-compute.log:
2012-01-30 03:18:48,121 DEBUG nova.compute.manager [-] instance
network_info: |[[{u'bridge': u'br101', u'multi_host': False,
u'bridge_interface': u'eth1', u'vlan': 100, u'id': 18, u'injected':
False, u'cidr': u'192.168.5.0/24', u'cidr_v6': None},
{u'should_create_bridge': True, u'dns': [], u'vif_uuid':
u'9452c320-f63a-4641-9a12-7f342d30e897', u'label': u'appproj1net1',
u'broadcast': u'192.168.5.255', u'ips': [{u'ip': u'192.168.4.14',
u'netmask': u'255.255.255.0', u'enabled': u'1'}], u'mac':
u'02:16:3e:3f:c2:31', u'rxtx_cap': 0, u'should_create_vlan': True,
u'dhcp_server': u'192.168.5.1', u'gateway': u'192.168.5.1'}]]| from
(pid=6467) _run_instance
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:394
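The part that looks off to me: the network_info block above says cidr
192.168.5.0/24 with gateway and dhcp_server 192.168.5.1, yet the
allocated guest address is 192.168.4.14. A quick sketch for pulling
those two fields out of the log (GNU grep, default log path):

grep -o "u'\(cidr\|ip\)': u'[0-9./]*'" /var/log/nova/nova-compute.log | sort | uniq -c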
From nova-api.log:
dist-packages/nova/auth/manager.py:324
2012-01-30 03:18:58,526 AUDIT nova.api
[86a0236c-6c2e-45c7-9833-06d4d5addd26 appadmin appproj1] Authenticated
Request For appadmin:appproj1)
2012-01-30 03:18:58,527 DEBUG nova.api [-] action: DescribeInstances
from (pid=6476) __call__
/usr/lib/python2.7/dist-packages/nova/api/ec2/__init__.py:240
2012-01-30 03:18:58,528 DEBUG nova.compute.api [-] Searching by:
{'deleted': False} from (pid=6476) get_all
/usr/lib/python2.7/dist-packages/nova/compute/api.py:862
2012-01-30 03:18:58,600 DEBUG nova.api.request [-] <?xml version="1.0"
?><DescribeInstancesResponse
xmlns="http://ec2.amazonaws.com/doc/2010-08-31/"><requestId>86a0236c-6c2e-45c7-9833-06d4d5addd26</requestId><reservationSet><item><ownerId>appproj1</ownerId><groupSet><item><groupId>default</groupId></item></groupSet><reservationId>r-5dake0zu</reservationId><instancesSet><item><displayDescription/><displayName>Server
45</displayName><rootDeviceType>instance-store</rootDeviceType><keyName>appkey</keyName><instanceId>i-0000002d</instanceId><instanceState><code>1</code><name>running</name></instanceState><publicDnsName/><imageId>ami-00000012</imageId><productCodesSet/><privateDnsName>192.168.4.14</privateDnsName><dnsName>192.168.4.14</dnsName><launchTime>2012-01-30T03:18:45Z</launchTime><amiLaunchIndex>0</amiLaunchIndex><rootDeviceName>/dev/vda</rootDeviceName><kernelId>aki-00000011</kernelId><ramdiskId>ami-00000000</ramdiskId><placement><availabilityZone>nova</availabilityZone></placement><ipAddress>192.168.4.14</ipAddress><instanceType>m1.tiny</instanceType><privateIpAddress>192.168.4.14</privateIpAddress></item></instancesSet></item></reservationSet></DescribeInstancesResponse>
from (pid=6476) _render_response
/usr/lib/python2.7/dist-packages/nova/api/ec2/apirequest.py:99
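As an aside, that one-line XML is painful to read inline; assuming
xmllint (libxml2-utils) is installed, something like this pretty-prints
the most recent response out of the log:

grep -o '<?xml.*DescribeInstancesResponse>' /var/log/nova/nova-api.log | tail -1 | xmllint --format -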
again from nova-compute.log, continuing right after the network_info
line shown above:
==> nova-compute.log <==
2012-01-30 03:18:48,275 DEBUG nova.virt.libvirt_conn [-] instance
instance-0000002d: starting toXML method from (pid=6467) to_xml
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:1270
2012-01-30 03:18:48,275 DEBUG nova.virt.libvirt.vif [-] Ensuring vlan
100 and bridge br101 from (pid=6467) plug
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py:82
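If that vif plug succeeded, I would expect a br101 bridge with a
vlan100 interface enslaved to it on the compute host; that can be
checked with bridge-utils and the 8021q proc interface:

# list bridges and their member interfaces
brctl show
# does br101 actually exist, and which VLAN devices are configured?
ip addr show br101
cat /proc/net/vlan/config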
Thank you very much,
-Alfred
On 1/29/12 7:08 PM, Alfred Perlstein wrote:
> Hey folks,
>
> I am so, so close to being done with my first OpenStack
> deployment here. It is a relatively basic setup that mirrors the
> "Diablo starter guide".
>
> However, I seem to be stuck on a simple issue: I can't make multiple
> networks work, and hence I can't make multiple projects work, because
> of what appears to be a problem in my nova-network configuration.
>
> What is really strange is that instances launched in my "appproj1"
> project somehow come up inside the "proj" network.
>
> I am not sure why this is happening. Should I have these networks on
> separate br100/br101 interfaces? Separate VLANs? I'm losing my mind
> here. :)
>
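> If separate VLANs are the fix, I assume I should have created the two
> networks with distinct --vlan values, something like this (flag names
> as I understand the Diablo-era nova-manage; please correct me if I
> have them wrong):
>
> nova-manage network create --label=proj \
>     --fixed_range_v4=192.168.4.0/24 --num_networks=1 \
>     --network_size=256 --vlan=100 --bridge_interface=eth1
> nova-manage network create --label=appproj1net1 \
>     --fixed_range_v4=192.168.5.0/24 --num_networks=1 \
>     --network_size=256 --vlan=101 --bridge_interface=eth1
>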
> /var/log/nova # nova-manage network list
> id  IPv4            IPv6  start address  DNS1  DNS2  VlanID  project   uuid
> 17  192.168.4.0/24  None  192.168.4.3    None  None  100     proj      None
> 21  192.168.5.0/24  None  192.168.5.3    None  None  100     appproj1  None
>
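> I guess I could also check which network each allocated address
> belongs to straight from the nova DB, something like (column names
> assuming the Diablo schema; credentials as in my --sql_connection):
>
> mysql -h 10.254.13.50 -u XXX -p nova -e \
>     "SELECT n.id, n.vlan, n.cidr, n.project_id, f.address
>      FROM networks n JOIN fixed_ips f ON f.network_id = n.id
>      WHERE f.allocated = 1;"
>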
> /var/log/nova # euca-describe-instances
> RESERVATION  r-z80ezowp  appproj1  default
> INSTANCE  i-00000022  ami-0000000c  192.168.4.9  192.168.4.9  running  appkey (appproj1, amber)  0  m1.tiny  2012-01-30T01:36:05Z  nova  aki-0000000b  ami-00000000
> RESERVATION  r-1jhgh53l  proj  default
> INSTANCE  i-0000001d  ami-00000004  192.168.4.5  192.168.4.5  running  mykey (proj, amber)  0  m1.tiny  2012-01-30T01:12:57Z  nova  aki-00000003  ami-00000000
>
>
> Note how i-00000022 is inside project "appproj1", yet its IP is
> 192.168.4.9 (it should be 192.168.5.x!).
>
> Let me show the config and output of some commands:
>
>> /var/log/nova # cat /etc/nova/nova.conf
>> --dhcpbridge_flagfile=/etc/nova/nova.conf
>> --dhcpbridge=/usr/bin/nova-dhcpbridge
>> --logdir=/var/log/nova
>> --lock_path=/var/lock/nova
>> --state_path=/var/lib/nova
>> --verbose
>> --s3_host=10.254.13.50
>> --rabbit_host=10.254.13.50
>> --cc_host=10.254.13.50
>> --ec2_url=http://10.254.13.50:8773/services/Cloud
>> --fixed_range=192.168.0.0/16
>> --network_size=256
>> --FAKE_subdomain=ec2
>> --routing_source_ip=10.254.13.50
>> --sql_connection=mysql://XXX:XXX@10.254.13.50/nova
>> --glance_api_servers=192.168.3.1:9292
>> --image_service=nova.image.glance.GlanceImageService
>> --iscsi_ip_prefix=192.168.
>> --vlan_interface=eth1
>> --public_interface=eth0
>> --nova_url=http://10.254.13.50:8774/v1.1/
>
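> For what it's worth, I don't set a network manager explicitly above,
> so I believe I am getting the defaults, which as far as I know amount
> to:
>
> --network_manager=nova.network.manager.VlanManager
> --vlan_start=100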
>
>
> /var/log/nova # ip addr list
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> inet 169.254.169.254/32 scope link lo
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
> qlen 1000
> link/ether 84:2b:2b:4e:0d:dc brd ff:ff:ff:ff:ff:ff
> inet 10.254.13.50/25 brd 10.254.13.127 scope global eth0
> inet6 fe80::862b:2bff:fe4e:ddc/64 scope link
> valid_lft forever preferred_lft forever
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
> qlen 1000
> link/ether 84:2b:2b:4e:0d:dd brd ff:ff:ff:ff:ff:ff
> inet 192.168.3.1/24 brd 192.168.3.255 scope global eth1
> inet6 fe80::862b:2bff:fe4e:ddd/64 scope link
> valid_lft forever preferred_lft forever
> 4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
> state DOWN
> link/ether 42:09:56:f6:99:44 brd ff:ff:ff:ff:ff:ff
> inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
> 6: vlan100@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue master br100 state UP
> link/ether 02:16:3e:27:ce:b1 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::16:3eff:fe27:ceb1/64 scope link
> valid_lft forever preferred_lft forever
> 7: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP
> link/ether 02:16:3e:27:ce:b1 brd ff:ff:ff:ff:ff:ff
> inet 192.168.5.1/24 brd 192.168.5.255 scope global br100
> inet 192.168.4.1/24 brd 192.168.4.255 scope global br100
> inet6 fe80::b069:faff:feb5:2869/64 scope link
> valid_lft forever preferred_lft forever
> 8: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master br100 state UNKNOWN qlen 500
> link/ether fe:16:3e:48:e4:aa brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc16:3eff:fe48:e4aa/64 scope link
> valid_lft forever preferred_lft forever
> 11: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master br100 state UNKNOWN qlen 500
> link/ether fe:16:3e:44:d5:c9 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc16:3eff:fe44:d5c9/64 scope link
> valid_lft forever preferred_lft forever
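>
> One thing that jumps out at me above: br100 is carrying both
> 192.168.4.1/24 and 192.168.5.1/24, and both vnet0 and vnet1 are
> enslaved to it, so guests from both projects end up on the same
> bridge. To double-check bridge membership and which dnsmasq instances
> are serving which range:
>
> brctl show br100
> ps ax | grep dnsmasq | grep -v grep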
>