<div dir="ltr"><div><div><div><div>Hi Calvin,<br><br></div>I have made this change but unfortunately the core issue has not been affected. It appear that when you span an instance files should be created under /var/lib/nova/instances/_base<br>
</div>for example /var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851<br><br></div>It appears this file is never being created and nova.compute.manager is only picking up the error when qem-img is being used to calculate something to do with disk space. I have also included my new /etc/nova/nova.conf in case I have done something stupid there.<br>
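As a sanity check (just a sketch of what I am looking at, assuming the default image cache location and that nova-compute runs as the nova user), the cache directory and the failing qemu-img call can be checked by hand:

# ls -ld /var/lib/nova/instances/_base
# ls -l /var/lib/nova/instances/_base/
# sudo -u nova qemu-img info /var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851

If the directory is missing, or is not writable by the nova user, the base image would never get written there, which would fit the "Could not open" error below.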

Thanks

# grep -E ERROR /var/log/nova/compute.log

2013-08-01 11:02:37 3447 ERROR nova.virt.libvirt.driver [-] [instance: bbb1fd2e-ac12-42c4-a856-e14e6ebb7bf6] During wait destroy, instance disappeared.
2013-08-01 11:02:57 ERROR nova.compute.manager [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: c923a717-af33-45e6-a874-0648c4c100dc] Instance failed to spawn
2013-08-01 11:02:58 ERROR nova.compute.manager [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: c923a717-af33-45e6-a874-0648c4c100dc] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 503, in _run_instance\n injected_files, admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n return f(*args, **kw)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1099, in spawn\n admin_pass=admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1365, in _create_image\n project_id=instance[\'project_id\'])\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 131, in cache\n *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 181, in create_image\n if size and size < disk.get_disk_size(base):\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/disk/api.py", line 115, in get_disk_size\n size = images.qemu_img_info(path)[\'virtual size\']\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/images.py", line 50, in qemu_img_info\n \'qemu-img\', \'info\', path)\n', ' File "/usr/lib/python2.6/site-packages/nova/utils.py", line 210, in execute\n cmd=\' \'.join(cmd))\n', 'ProcessExecutionError: Unexpected error while running command.\nCommand: env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851\nExit code: 1\nStdout: \'\'\nStderr: "qemu-img: Could not open \'/var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851\'\\n"\n']
2013-08-01 13:26:55 8566 ERROR nova.openstack.common.rpc.impl_qpid [-] Timed out waiting for RPC response: None
2013-08-01 13:28:08 8566 ERROR nova.virt.libvirt.driver [-] [instance: c923a717-af33-45e6-a874-0648c4c100dc] During wait destroy, instance disappeared.
2013-08-01 13:33:03 ERROR nova.compute.manager [req-fbb64cce-5c18-4e00-a6d2-1277f510a4fc a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: a3c0f372-d9fe-4cbb-8006-edc8de7c6442] Instance failed to spawn
2013-08-01 13:33:04 ERROR nova.compute.manager [req-fbb64cce-5c18-4e00-a6d2-1277f510a4fc a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: a3c0f372-d9fe-4cbb-8006-edc8de7c6442] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 503, in _run_instance\n injected_files, admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n return f(*args, **kw)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1099, in spawn\n admin_pass=admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1365, in _create_image\n project_id=instance[\'project_id\'])\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 131, in cache\n *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 181, in create_image\n if size and size < disk.get_disk_size(base):\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/disk/api.py", line 115, in get_disk_size\n size = images.qemu_img_info(path)[\'virtual size\']\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/images.py", line 50, in qemu_img_info\n \'qemu-img\', \'info\', path)\n', ' File "/usr/lib/python2.6/site-packages/nova/utils.py", line 210, in execute\n cmd=\' \'.join(cmd))\n', 'ProcessExecutionError: Unexpected error while running command.\nCommand: env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851\nExit code: 1\nStdout: \'\'\nStderr: "qemu-img: Could not open \'/var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851\'\\n"\n']
2013-08-01 13:55:09 3440 ERROR nova.virt.libvirt.driver [-] [instance: a3c0f372-d9fe-4cbb-8006-edc8de7c6442] During wait destroy, instance disappeared.
2013-08-01 13:55:46 ERROR nova.compute.manager [req-7132f025-8d78-4df5-b07f-4d840a22f725 a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: 7894fe8c-42cd-4d2d-a440-377677847947] Instance failed to spawn
2013-08-01 13:55:47 ERROR nova.compute.manager [req-7132f025-8d78-4df5-b07f-4d840a22f725 a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: 7894fe8c-42cd-4d2d-a440-377677847947] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 503, in _run_instance\n injected_files, admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n return f(*args, **kw)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1099, in spawn\n admin_pass=admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1365, in _create_image\n project_id=instance[\'project_id\'])\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 131, in cache\n *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 181, in create_image\n if size and size < disk.get_disk_size(base):\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/disk/api.py", line 115, in get_disk_size\n size = images.qemu_img_info(path)[\'virtual size\']\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/images.py", line 50, in qemu_img_info\n \'qemu-img\', \'info\', path)\n', ' File "/usr/lib/python2.6/site-packages/nova/utils.py", line 210, in execute\n cmd=\' \'.join(cmd))\n', 'ProcessExecutionError: Unexpected error while running command.\nCommand: env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851\nExit code: 1\nStdout: \'\'\nStderr: "qemu-img: Could not open \'/var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851\'\\n"\n']
2013-08-01 14:02:56 3440 ERROR nova.virt.libvirt.driver [-] [instance: 7894fe8c-42cd-4d2d-a440-377677847947] During wait destroy, instance disappeared.
2013-08-01 14:03:14 ERROR nova.compute.manager [req-2f35d85d-9b75-4750-b4ba-2310561b7807 a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: 0619b0af-82ec-43a5-993c-8982a1539d08] Instance failed to spawn
2013-08-01 14:03:15 ERROR nova.compute.manager [req-2f35d85d-9b75-4750-b4ba-2310561b7807 a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: 0619b0af-82ec-43a5-993c-8982a1539d08] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 503, in _run_instance\n injected_files, admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped\n return f(*args, **kw)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1099, in spawn\n admin_pass=admin_password)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1365, in _create_image\n project_id=instance[\'project_id\'])\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 131, in cache\n *args, **kwargs)\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 181, in create_image\n if size and size < disk.get_disk_size(base):\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/disk/api.py", line 115, in get_disk_size\n size = images.qemu_img_info(path)[\'virtual size\']\n', ' File "/usr/lib/python2.6/site-packages/nova/virt/images.py", line 50, in qemu_img_info\n \'qemu-img\', \'info\', path)\n', ' File "/usr/lib/python2.6/site-packages/nova/utils.py", line 210, in execute\n cmd=\' \'.join(cmd))\n', 'ProcessExecutionError: Unexpected error while running command.\nCommand: env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851\nExit code: 1\nStdout: \'\'\nStderr: "qemu-img: Could not open \'/var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851\'\\n"\n']
[DEFAULT]

# LOGS/STATE
verbose=True
debug=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
volume_driver=nova.volume.driver.ISCSIDriver
volume_group=nova-volumes
volume_name_template=volume-%08x
iscsi_helper=tgtadm

# DATABASE
sql_connection=mysql://nova:0pen5tack-nova@130.217.218.18/nova

# COMPUTE
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=130.217.218.18
s3_host=130.217.218.18

# Qpid
rpc_backend=nova.rpc.impl_qpid
qpid_hostname=130.217.218.18

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=130.217.218.18:9292

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
dhcpbridge=/usr/bin/nova-dhcpbridge
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=130.217.218.18
public_interface=em1
vlan_interface=em2
flat_network_bridge=br100
flat_interface=em2
fixed_range=192.168.100.0/24

# NOVNC CONSOLE
novncproxy_base_url=http://130.217.218.18:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=130.217.218.18
vncserver_listen=130.217.218.18

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 130.217.218.18
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova


On Thu, Aug 1, 2013 at 1:29 PM, Calvin Austin <caustin@bitglass.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">As you have 2 nics and you want to get this running I would try keeping the bridge br100 for the em2 'private' network. Nova dhcp is already configuring the .1 for you correctly so your /etc/network/interfaces should simply have something this below for the bridge, currently your bridge is also used for your nova address as well. bring your br100 down and up against once you fix<div>
<br></div><div>Then use your em1 as your hosts 'known ip' that the services that want to talk to it (your 130 address). <br><div><br><div><div><div>auto br100</div><div>iface br100 inet static</div><div> bridge_stp off</div>
    bridge_fd 0
    bridge_ports em2
    bridge_maxwait 0

iface em2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ifconfig $IFACE promisc
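
Once the file is updated, something like the following should apply and verify the change (a rough sketch; it assumes ifup/ifdown are managing these interfaces and that bridge-utils is installed, and you should run it from the console or over em1 since the bridge will briefly go down):

# ifdown br100 && ifup br100
# brctl show br100
# ip addr show br100

brctl should list em2 as the only port on br100, and the bridge should be left with just the 192.168.100.1 address that nova-network manages.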

regards
calvin

On Tue, Jul 30, 2013 at 4:29 PM, Clint Dilks <clintd@waikato.ac.nz> wrote:
Hi,

I have got to the point where I am trying to launch a virtual machine as per http://docs.openstack.org/folsom/openstack-compute/install/yum/content/running-an-instance.html

The boot command completes but it seems some key element of networking is failing.
Does anyone have suggestions on how to troubleshoot this issue further?

$ nova boot --flavor 2 --image 9ed7b2c4-f296-4fd0-9ffa-026231947a09 --key_name mykey --security_group default cirros
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000006                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | ASyz54u26TGK                         |
| config_drive                        |                                      |
| created                             | 2013-07-30T22:14:32Z                 |
| flavor                              | m1.small                             |
| hostId                              |                                      |
| id                                  | fc0c5515-9d30-4263-87ea-0f912c0fc7c7 |
| image                               | cirros-0.3.0-x86_64                  |
| key_name                            | mykey                                |
| metadata                            | {}                                   |
| name                                | cirros                               |
| progress                            | 0                                    |
| security_groups                     | [{u'name': u'default'}]              |
| status                              | BUILD                                |
| tenant_id                           | f037ea1bab6d4dc08b880c5fdea29fb5     |
| updated                             | 2013-07-30T22:14:32Z                 |
| user_id                             | a66355f1760448989e63dc54e853674f     |
+-------------------------------------+--------------------------------------+

But I notice that no IPv4 address has been assigned.

If I do nova list during the boot process I see the following:

$ nova list
+--------------------------------------+--------+--------+----------+
| ID                                   | Name   | Status | Networks |
+--------------------------------------+--------+--------+----------+
| fc0c5515-9d30-4263-87ea-0f912c0fc7c7 | cirros | BUILD  |          |
+--------------------------------------+--------+--------+----------+
$ nova list
+--------------------------------------+--------+--------+-----------------------+
| ID                                   | Name   | Status | Networks              |
+--------------------------------------+--------+--------+-----------------------+
| fc0c5515-9d30-4263-87ea-0f912c0fc7c7 | cirros | BUILD  | private=192.168.100.2 |
+--------------------------------------+--------+--------+-----------------------+
$ nova list
+--------------------------------------+--------+--------+-----------------------+
| ID                                   | Name   | Status | Networks              |
+--------------------------------------+--------+--------+-----------------------+
| fc0c5515-9d30-4263-87ea-0f912c0fc7c7 | cirros | ERROR  | private=192.168.100.2 |
+--------------------------------------+--------+--------+-----------------------+

And if I wait some time and then repeat the nova list it will change to

$ nova list
+--------------------------------------+--------+--------+----------+
| ID                                   | Name   | Status | Networks |
+--------------------------------------+--------+--------+----------+
| fc0c5515-9d30-4263-87ea-0f912c0fc7c7 | cirros | ERROR  |          |
+--------------------------------------+--------+--------+----------+

The process does not seem to be getting as far as attempting to boot the image.

$ nova console-log fc0c5515-9d30-4263-87ea-0f912c0fc7c7
ERROR: The resource could not be found. (HTTP 404) (Request-ID: req-b10876a4-9193-4158-829e-3454e80c706f)

There don't seem to be any obvious errors in /var/log/nova/network.log: http://www.cms.waikato.ac.nz/~clintd/nova-network.log
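
If there is anything else worth pulling out of the other nova logs, the obvious quick check (nothing fancier than a grep over the default log directory) would be:

# grep -iE 'error|traceback' /var/log/nova/*.log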

And all nova related services appear to be running.

# nova-manage service list
Binary            Host     Zone  Status   State  Updated_At
nova-volume       prancer  nova  enabled  :-)    2013-07-30 23:14:22
nova-scheduler    prancer  nova  enabled  :-)    2013-07-30 23:14:28
nova-cert         prancer  nova  enabled  :-)    2013-07-30 23:14:25
nova-compute      prancer  nova  enabled  :-)    2013-07-30 23:14:21
nova-network      prancer  nova  enabled  :-)    2013-07-30 23:14:27
nova-console      prancer  nova  enabled  :-)    2013-07-30 23:14:26
nova-consoleauth  prancer  nova  enabled  :-)    2013-07-30 23:14:23

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:24:e8:fe:0e:8a brd ff:ff:ff:ff:ff:ff
    inet 130.217.218.18/16 brd 130.217.255.255 scope global em1
3: em2: <NO-CARRIER,BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:24:e8:fe:0e:8b brd ff:ff:ff:ff:ff:ff
5: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:e8:fe:0e:8b brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global br100
    inet 192.168.206.130/24 brd 192.168.206.255 scope global br100
8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:83:81:d1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 52:54:00:83:81:d1 brd ff:ff:ff:ff:ff:ff

I am also seeing the following in /var/log/messages. Could it be related to the problem?

Jul 31 10:46:55 prancer dnsmasq[17941]: read /etc/hosts - 2 addresses
Jul 31 10:46:55 prancer dnsmasq[17941]: read /var/lib/nova/networks/nova-br100.conf
Jul 31 10:48:22 prancer dnsmasq[17941]: read /etc/hosts - 2 addresses
Jul 31 10:48:22 prancer dnsmasq[17941]: read /var/lib/nova/networks/nova-br100.conf
Jul 31 10:50:29 prancer dnsmasq[17941]: read /etc/hosts - 2 addresses
Jul 31 10:50:29 prancer dnsmasq[17941]: read /var/lib/nova/networks/nova-br100.conf
Jul 31 10:50:31 prancer dnsmasq[17941]: read /etc/hosts - 2 addresses
Jul 31 10:50:31 prancer dnsmasq[17941]: read /var/lib/nova/networks/nova-br100.conf
Jul 31 10:50:32 prancer dnsmasq-dhcp[17941]: DHCPRELEASE(br100) 192.168.100.2 fa:16:3e:7f:0f:96 unknown lease

# cat /etc/nova/nova.conf
[DEFAULT]

# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
volume_driver=nova.volume.driver.ISCSIDriver
volume_group=nova-volumes
volume_name_template=volume-%08x
iscsi_helper=tgtadm

# DATABASE
sql_connection=mysql://nova:0pen5tack-nova@192.168.206.130/nova

# COMPUTE
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130

# Qpid
rpc_backend=nova.rpc.impl_qpid
qpid_hostname=192.168.206.130

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.206.130:9292

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
dhcpbridge=/usr/bin/nova-dhcpbridge
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=em2
vlan_interface=em2
flat_network_bridge=br100
flat_interface=em2
fixed_range=192.168.100.0/24

# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova


Thanks for any advice you are willing to share :)
<br></div></div><div class="im">_______________________________________________<br>
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
Post to : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>