<div dir="ltr"><div><div>Hi,<br><br></div>Thanks for the response, and the nudge to look at compute.log. Please let me know if you have any further suggestions.<br><br></div><div>This is what I found that seems to be relevant; the full log is available at <a href="http://www.cms.waikato.ac.nz/~clintd/compute.log">http://www.cms.waikato.ac.nz/~clintd/compute.log</a><br>
<br><br>2013-08-01 11:02:56 DEBUG nova.virt.libvirt.driver [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: c923a717-af33-45e6-a874-0648c4c100dc] Starting toXML method to_xml /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:1872<br>
2013-08-01 11:02:56 DEBUG nova.virt.libvirt.driver [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] CPU mode 'host-model' model '' was chosen get_guest_cpu_config /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:1561<br>
2013-08-01 11:02:56 DEBUG nova.virt.libvirt.driver [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] block_device_list [] _volume_in_mapping /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:1498<br>
2013-08-01 11:02:56 DEBUG nova.virt.libvirt.driver [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Path '/var/lib/nova/instances' supports direct I/O _supports_direct_io /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:1228<br>
2013-08-01 11:02:56 DEBUG nova.virt.libvirt.driver [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] block_device_list [] _volume_in_mapping /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:1498<br>
2013-08-01 11:02:56 DEBUG nova.virt.libvirt.vif [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: c923a717-af33-45e6-a874-0648c4c100dc] Ensuring bridge br100 plug /usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py:109<br>
2013-08-01 11:02:56 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Got semaphore "ensure_bridge" for method "ensure_bridge"... inner /usr/lib/python2.6/site-packages/nova/utils.py:764<br>
2013-08-01 11:02:56 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Attempting to grab file lock "ensure_bridge" for method "ensure_bridge"... inner /usr/lib/python2.6/site-packages/nova/utils.py:768<br>
2013-08-01 11:02:56 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Got file lock "ensure_bridge" for method "ensure_bridge"... inner /usr/lib/python2.6/site-packages/nova/utils.py:794<br>
2013-08-01 11:02:56 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip link show dev br100 execute /usr/lib/python2.6/site-packages/nova/utils.py:187<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Result was 0 execute /usr/lib/python2.6/site-packages/nova/utils.py:203<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf brctl addif br100 em2 execute /usr/lib/python2.6/site-packages/nova/utils.py:187<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Result was 1 execute /usr/lib/python2.6/site-packages/nova/utils.py:203<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Running cmd (subprocess): ip route show dev em2 execute /usr/lib/python2.6/site-packages/nova/utils.py:187<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Result was 0 execute /usr/lib/python2.6/site-packages/nova/utils.py:203<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf ip addr show dev em2 scope global execute /usr/lib/python2.6/site-packages/nova/utils.py:187<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Result was 0 execute /usr/lib/python2.6/site-packages/nova/utils.py:203<br>
2013-08-01 11:02:57 DEBUG nova.virt.libvirt.config [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Generated XML <domain type="kvm"><br> <uuid>c923a717-af33-45e6-a874-0648c4c100dc</uuid><br>
<name>instance-0000000f</name><br> <memory>2097152</memory><br> <vcpu>1</vcpu><br> <os><br> <type>hvm</type><br> <boot dev="hd"/><br> </os><br>
<features><br> <acpi/><br> <apic/><br> </features><br> <clock offset="utc"><br> <timer name="pit" tickpolicy="delay"/><br> <timer name="rtc" tickpolicy="catchup"/><br>
</clock><br> <cpu mode="host-model" match="exact"/><br> <devices><br> <disk type="file" device="disk"><br> <driver name="qemu" type="qcow2" cache="none"/><br>
<source file="/var/lib/nova/instances/instance-0000000f/disk"/><br> <target bus="virtio" dev="vda"/><br> </disk><br> <interface type="bridge"><br>
<mac address="fa:16:3e:01:15:41"/><br> <source bridge="br100"/><br> <filterref filter="nova-instance-instance-0000000f-fa163e011541"><br> <parameter name="IP" value="192.168.100.2"/><br>
<parameter name="DHCPSERVER" value="192.168.100.1"/><br> <parameter name="PROJNET" value="192.168.100.0"/><br> <parameter name="PROJMASK" value="255.255.255.0"/><br>
</filterref><br> </interface><br> <serial type="file"><br> <source path="/var/lib/nova/instances/instance-0000000f/console.log"/><br> </serial><br> <serial type="pty"/><br>
<input type="tablet" bus="usb"/><br> <graphics type="vnc" autoport="yes" keymap="en-us" listen="192.168.206.130"/><br> </devices><br></domain><br>
to_xml /usr/lib/python2.6/site-packages/nova/virt/libvirt/config.py:66<br>2013-08-01 11:02:57 DEBUG nova.virt.libvirt.driver [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: c923a717-af33-45e6-a874-0648c4c100dc] Finished toXML method to_xml /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:1876<br>
2013-08-01 11:02:57 INFO nova.virt.libvirt.driver [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: c923a717-af33-45e6-a874-0648c4c100dc] Creating image<br>
2013-08-01 11:02:57 DEBUG nova.virt.libvirt.driver [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] block_device_list [] _volume_in_mapping /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:1498<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851 execute /usr/lib/python2.6/site-packages/nova/utils.py:187<br>
2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Result was 1 execute /usr/lib/python2.6/site-packages/nova/utils.py:203<br>
2013-08-01 11:02:57 ERROR nova.compute.manager [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: c923a717-af33-45e6-a874-0648c4c100dc] Instance failed to spawn<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] Traceback (most recent call last):<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 756, in _spawn<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] block_device_info)<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] temp_level, payload)<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] self.gen.next()<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] return f(*args, **kw)<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1099, in spawn<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] admin_pass=admin_password)<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1365, in _create_image<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] project_id=instance['project_id'])<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 131, in cache<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] *args, **kwargs)<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 181, in create_image<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] if size and size < disk.get_disk_size(base):<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/virt/disk/api.py", line 115, in get_disk_size<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] size = images.qemu_img_info(path)['virtual size']<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/virt/images.py", line 50, in qemu_img_info<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] 'qemu-img', 'info', path)<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] File "/usr/lib/python2.6/site-packages/nova/utils.py", line 210, in execute<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] cmd=' '.join(cmd))<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] ProcessExecutionError: Unexpected error while running command.<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] Command: env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] Exit code: 1<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] Stdout: ''<br>2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] Stderr: "qemu-img: Could not open '/var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851'\n"<br>
2013-08-01 11:02:57 3447 TRACE nova.compute.manager [instance: c923a717-af33-45e6-a874-0648c4c100dc] <br>2013-08-01 11:02:57 DEBUG nova.utils [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Got semaphore "compute_resources" for method "abort_resource_claim"... inner /usr/lib/python2.6/site-packages/nova/utils.py:764<br>
2013-08-01 11:02:57 INFO nova.compute.resource_tracker [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] Aborting claim: [Claim c923a717-af33-45e6-a874-0648c4c100dc: 2048 MB memory, 20 GB disk, 1 VCPUS]<br>
2013-08-01 11:02:57 DEBUG nova.compute.manager [req-a6c67c92-87b6-4622-b9a1-22af69b1328d a66355f1760448989e63dc54e853674f f037ea1bab6d4dc08b880c5fdea29fb5] [instance: c923a717-af33-45e6-a874-0648c4c100dc] Deallocating network for instance _deallocate_network /usr/lib/python2.6/site-packages/nova/compute/manager.py:782<br>
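<br>Reading the trace, two things stand out to me: the early "brctl addif br100 em2" returning 1, and the spawn itself dying when "qemu-img info" cannot open the cached base image under /var/lib/nova/instances/_base. As a rough sketch (paths copied from the trace above, nothing verified yet), these are the manual checks I plan to run on the compute host:<br>

```shell
# Sketch of manual checks, using the paths from the compute.log trace above.
BASE=/var/lib/nova/instances/_base/5f5e7ab50d5be1f36aad1d5632ce5a225502a851

# 1. Is the cached base image present at all, and readable (e.g. by the nova user)?
if [ -r "$BASE" ]; then
    echo "base image present and readable"
else
    echo "base image missing or unreadable"
fi

# 2. Re-run the exact command Nova ran; it should print the image format on
#    success, or the same "Could not open" error seen in the trace on failure.
if command -v qemu-img >/dev/null 2>&1; then
    env LC_ALL=C LANG=C qemu-img info "$BASE" || true
fi
```

If the base file turns out to be missing, the next thing to check is presumably whether the image download from Glance into _base ever completed during the "Creating image" step.<br>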
<br><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Aug 1, 2013 at 10:07 AM, <span dir="ltr"><<a href="mailto:yongiman@gmail.com" target="_blank">yongiman@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div>Hi Clint,</div><div><br></div><div>br100 has two network addresses.</div><div><br></div><div>/var/log/messages points to that.</div>
<div><br></div><div>How about removing one of the addresses?</div><div><br></div><div>However, I don't think it's directly related to the VM provisioning error.</div><div><br></div><div>Do you have the nova-compute log messages?</div>
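<div><br></div><div>For example, something like this (a hypothetical sketch, not tested; it needs root on your host, and you should first check which of the two addresses on br100 is actually the surplus one). I only print the command here rather than run it:</div>

```shell
# Hypothetical sketch: print the iproute2 command that would remove the
# second address from br100. Run the printed command as root on the host,
# after confirming which address is surplus.
echo "ip addr del 192.168.206.130/24 dev br100"
# prints: ip addr del 192.168.206.130/24 dev br100
```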
<div><div class="h5"><div><br><br></div><div><br>On 2013. 7. 31., at 오전 8:29, Clint Dilks <<a href="mailto:clintd@waikato.ac.nz" target="_blank">clintd@waikato.ac.nz</a>> wrote:<br><br></div><blockquote type="cite">
<div><div dir="ltr"><div><div><div><div><div>Hi,<br><br></div>I have got to the point where I am trying to launch a virtual machine as per <a href="http://docs.openstack.org/folsom/openstack-compute/install/yum/content/running-an-instance.html" target="_blank">http://docs.openstack.org/folsom/openstack-compute/install/yum/content/running-an-instance.html</a><br>
<br></div>The boot command completes, but it seems some key element of networking is failing.<br></div><div>Does anyone have suggestions on how to troubleshoot this issue further?<br></div><div><br>$ nova boot --flavor 2 --image 9ed7b2c4-f296-4fd0-9ffa-026231947a09 --key_name mykey --security_group default cirros<br>
+-------------------------------------+--------------------------------------+<br>| Property | Value |<br>+-------------------------------------+--------------------------------------+<br>
| OS-DCF:diskConfig | MANUAL |<br>| OS-EXT-SRV-ATTR:host | None |<br>| OS-EXT-SRV-ATTR:hypervisor_hostname | None |<br>
| OS-EXT-SRV-ATTR:instance_name | instance-00000006 |<br>| OS-EXT-STS:power_state | 0 |<br>| OS-EXT-STS:task_state | scheduling |<br>
| OS-EXT-STS:vm_state | building |<br>| accessIPv4 | |<br>| accessIPv6 | |<br>
| adminPass | ASyz54u26TGK |<br>| config_drive | |<br>| created | 2013-07-30T22:14:32Z |<br>
| flavor | m1.small |<br>| hostId | |<br>| id | fc0c5515-9d30-4263-87ea-0f912c0fc7c7 |<br>
| image | cirros-0.3.0-x86_64 |<br>| key_name | mykey |<br>| metadata | {} |<br>
| name | cirros |<br>| progress | 0 |<br>| security_groups | [{u'name': u'default'}] |<br>
| status | BUILD |<br>| tenant_id | f037ea1bab6d4dc08b880c5fdea29fb5 |<br>| updated | 2013-07-30T22:14:32Z |<br>
| user_id | a66355f1760448989e63dc54e853674f |<br>+-------------------------------------+--------------------------------------+<br><br></div>But I notice that no IPv4 address has been assigned.<br>
<br></div>If I run nova list during the boot process, I see the following:<br><br>$ nova list<br>+--------------------------------------+--------+--------+----------+<br>| ID | Name | Status | Networks |<br>
+--------------------------------------+--------+--------+----------+<br>| fc0c5515-9d30-4263-87ea-0f912c0fc7c7 | cirros | BUILD | |<br>+--------------------------------------+--------+--------+----------+<br>$ nova list<br>
+--------------------------------------+--------+--------+-----------------------+<br>| ID | Name | Status | Networks |<br>+--------------------------------------+--------+--------+-----------------------+<br>
| fc0c5515-9d30-4263-87ea-0f912c0fc7c7 | cirros | BUILD | private=192.168.100.2 |<br>+--------------------------------------+--------+--------+-----------------------+<br>$ nova list<br>+--------------------------------------+--------+--------+-----------------------+<br>
| ID | Name | Status | Networks |<br>+--------------------------------------+--------+--------+-----------------------+<br>| fc0c5515-9d30-4263-87ea-0f912c0fc7c7 | cirros | ERROR | private=192.168.100.2 |<br>
+--------------------------------------+--------+--------+-----------------------+<br><br></div>And if I wait some time and then repeat the nova list, it changes to:<br><div>$ nova list<br>+--------------------------------------+--------+--------+----------+<br>
| ID | Name | Status | Networks |<br>+--------------------------------------+--------+--------+----------+<br>| fc0c5515-9d30-4263-87ea-0f912c0fc7c7 | cirros | ERROR | |<br>+--------------------------------------+--------+--------+----------+<br>
<br></div><div>The process does not seem to be getting as far as attempting to boot the image<br><br>$ nova console-log fc0c5515-9d30-4263-87ea-0f912c0fc7c7<br>ERROR: The resource could not be found. (HTTP 404) (Request-ID: req-b10876a4-9193-4158-829e-3454e80c706f)<br>
</div><div><br></div><div>There don't seem to be any obvious errors in /var/log/nova/network.log: <a href="http://www.cms.waikato.ac.nz/%7Eclintd/nova-network.log" target="_blank">http://www.cms.waikato.ac.nz/~clintd/nova-network.log</a><br>
<br></div><div>And all nova-related services appear to be running:<br></div><div># nova-manage service list<br>Binary Host Zone Status State Updated_At<br>nova-volume prancer nova enabled :-) 2013-07-30 23:14:22<br>
nova-scheduler prancer nova enabled :-) 2013-07-30 23:14:28<br>nova-cert prancer nova enabled :-) 2013-07-30 23:14:25<br>
nova-compute prancer nova enabled :-) 2013-07-30 23:14:21<br>nova-network prancer nova enabled :-) 2013-07-30 23:14:27<br>
nova-console prancer nova enabled :-) 2013-07-30 23:14:26<br>nova-consoleauth prancer nova enabled :-) 2013-07-30 23:14:23<br>
<br># ip a<br>1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN <br> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br> inet <a href="http://127.0.0.1/8" target="_blank">127.0.0.1/8</a> scope host lo<br>
inet <a href="http://169.254.169.254/32" target="_blank">169.254.169.254/32</a> scope link lo<br>2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000<br> link/ether 00:24:e8:fe:0e:8a brd ff:ff:ff:ff:ff:ff<br>
inet <a href="http://130.217.218.18/16" target="_blank">130.217.218.18/16</a> brd 130.217.255.255 scope global em1<br>3: em2: <NO-CARRIER,BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc mq state DOWN qlen 1000<br>
link/ether 00:24:e8:fe:0e:8b brd ff:ff:ff:ff:ff:ff<br>
5: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN <br> link/ether 00:24:e8:fe:0e:8b brd ff:ff:ff:ff:ff:ff<br> inet <a href="http://192.168.100.1/24" target="_blank">192.168.100.1/24</a> brd 192.168.100.255 scope global br100<br>
inet <a href="http://192.168.206.130/24" target="_blank">192.168.206.130/24</a> brd 192.168.206.255 scope global br100<br>8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN <br> link/ether 52:54:00:83:81:d1 brd ff:ff:ff:ff:ff:ff<br>
inet <a href="http://192.168.122.1/24" target="_blank">192.168.122.1/24</a> brd 192.168.122.255 scope global virbr0<br>9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500<br> link/ether 52:54:00:83:81:d1 brd ff:ff:ff:ff:ff:ff<br>
</div><div><br>I am also seeing the following in /var/log/messages. Could it be related to the problem?<br><br></div><div>Jul 31 10:46:55 prancer dnsmasq[17941]: read /etc/hosts - 2 addresses<br>Jul 31 10:46:55 prancer dnsmasq[17941]: read /var/lib/nova/networks/nova-br100.conf<br>
Jul 31 10:48:22 prancer dnsmasq[17941]: read /etc/hosts - 2 addresses<br>Jul 31 10:48:22 prancer dnsmasq[17941]: read /var/lib/nova/networks/nova-br100.conf<br>Jul 31 10:50:29 prancer dnsmasq[17941]: read /etc/hosts - 2 addresses<br>
Jul 31 10:50:29 prancer dnsmasq[17941]: read /var/lib/nova/networks/nova-br100.conf<br>Jul 31 10:50:31 prancer dnsmasq[17941]: read /etc/hosts - 2 addresses<br>Jul 31 10:50:31 prancer dnsmasq[17941]: read /var/lib/nova/networks/nova-br100.conf<br>
Jul 31 10:50:32 prancer dnsmasq-dhcp[17941]: DHCPRELEASE(br100) 192.168.100.2 fa:16:3e:7f:0f:96 unknown lease<br><br></div># cat /etc/nova/nova.conf<br>[DEFAULT]<br><br># LOGS/STATE<br>verbose=True<br>logdir=/var/log/nova<br>
state_path=/var/lib/nova<br>lock_path=/var/lock/nova<br><div>rootwrap_config=/etc/nova/rootwrap.conf<br><br># SCHEDULER<br>compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler<br><br># VOLUMES<br>volume_driver=nova.volume.driver.ISCSIDriver<br>
volume_group=nova-volumes<br>volume_name_template=volume-%08x<br>iscsi_helper=tgtadm<br><br># DATABASE<br>sql_connection=mysql://<a href="http://nova:0pen5tack-nova@192.168.206.130/nova" target="_blank">nova:0pen5tack-nova@192.168.206.130/nova</a><br>
<br># COMPUTE<br>libvirt_type=kvm<br>compute_driver=libvirt.LibvirtDriver<br>instance_name_template=instance-%08x<br>api_paste_config=/etc/nova/api-paste.ini<br><br># COMPUTE/APIS: if you have separate configs for separate services<br>
# this flag is required for both nova-api and nova-compute<br>allow_resize_to_same_host=True<br><br># APIS<br>osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions<br>ec2_dmz_host=192.168.206.130<br>
s3_host=192.168.206.130<br><br># Qpid<br>rpc_backend=nova.rpc.impl_qpid<br>qpid_hostname=192.168.206.130<br><br># GLANCE<br>image_service=nova.image.glance.GlanceImageService<br>glance_api_servers=<a href="http://192.168.206.130:9292" target="_blank">192.168.206.130:9292</a><br>
<br># NETWORK<br>network_manager=nova.network.manager.FlatDHCPManager<br>dhcpbridge=/usr/bin/nova-dhcpbridge<br>force_dhcp_release=True<br>dhcpbridge_flagfile=/etc/nova/nova.conf<br>firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver<br>
# Change my_ip to match each host<br>my_ip=192.168.206.130<br>public_interface=em2<br>vlan_interface=em2<br>flat_network_bridge=br100<br>flat_interface=em2<br>fixed_range=<a href="http://192.168.100.0/24" target="_blank">192.168.100.0/24</a><br>
<br># NOVNC CONSOLE<br>novncproxy_base_url=<a href="http://192.168.206.130:6080/vnc_auto.html" target="_blank">http://192.168.206.130:6080/vnc_auto.html</a><br># Change vncserver_proxyclient_address and vncserver_listen to match each compute host<br>
vncserver_proxyclient_address=192.168.206.130<br>vncserver_listen=192.168.206.130<br><br># AUTHENTICATION<br>auth_strategy=keystone<br>[keystone_authtoken]<br>auth_host = 127.0.0.1<br>auth_port = 35357<br>auth_protocol = http<br>
admin_tenant_name = service<br>admin_user = nova<br>admin_password = nova<br>signing_dirname = /tmp/keystone-signing-nova<br><br><br><div>Thanks for any advice you are willing to share :)</div></div></div>
</div></blockquote></div></div><blockquote type="cite"><div><span>_______________________________________________</span><br><span>Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a></span><br>
<span>Post to : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a></span><br><span>Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a></span><br>
</div></blockquote></div></blockquote></div><br></div>