[Openstack] 1st install - instances not getting VNICs

Ian Pilcher arequipeno at gmail.com
Sat Nov 2 04:54:14 UTC 2013


I'm working my way through the "OpenStack Installation Guide for Red Hat
Enterprise Linux, CentOS, and Fedora" using RDO on CentOS 6.4 (with
nested KVM on top of Fedora 19).  I've reached the "Booting an Image"
section, and although I am able to (apparently) boot the CirrOS test
image in an m1.tiny, no VNIC is being assigned to the instance.
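
For reference, the boot command was essentially the one from the guide
(the image ID below is a placeholder, not my actual value):

[root@controller ~]# nova boot --flavor m1.tiny --image <cirros-image-id> \
                       cirros-test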

I have created a network:

[root@controller ~]# nova network-show cddf2338-689b-4747-9ce7-6e0f00ad1b45
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| bridge              | br100                                |
| vpn_public_port     | None                                 |
| dhcp_start          | 172.31.249.2                         |
| bridge_interface    | br100                                |
| updated_at          | None                                 |
| id                  | cddf2338-689b-4747-9ce7-6e0f00ad1b45 |
| cidr_v6             | None                                 |
| deleted_at          | None                                 |
| gateway             | 172.31.249.1                         |
| rxtx_base           | None                                 |
| label               | vmnet                                |
| priority            | None                                 |
| project_id          | None                                 |
| vpn_private_address | None                                 |
| deleted             | 0                                    |
| vlan                | None                                 |
| broadcast           | 172.31.249.255                       |
| netmask             | 255.255.255.0                        |
| injected            | False                                |
| cidr                | 172.31.249.0/24                      |
| vpn_public_address  | None                                 |
| multi_host          | True                                 |
| dns2                | None                                 |
| created_at          | 2013-11-02T03:25:29.000000           |
| host                | None                                 |
| gateway_v6          | None                                 |
| netmask_v6          | None                                 |
| dns1                | 8.8.4.4                              |
+---------------------+--------------------------------------+
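
(For completeness, I created the network with the command from the
guide, reconstructed here from the output above -- something like:

[root@controller ~]# nova network-create vmnet --fixed-range-v4 172.31.249.0/24 \
                       --bridge-interface br100 --multi-host T --dns1 8.8.4.4
)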

And the bridge exists on the compute node:

[root@compute1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br100           8000.525400a53b13       no              eth1
virbr0          8000.525400606d47       yes             virbr0-nic

(But as you can see, there is no instance VNIC connected to it.)
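
On a working setup I would expect a vnetN tap device to show up
alongside eth1, i.e. something like:

bridge name     bridge id               STP enabled     interfaces
br100           8000.525400a53b13       no              eth1
                                                        vnet0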

Sure enough, it appears that the instance was simply created without a
VNIC:

[root@compute1 ~]# ps ax | grep qemu
 1967 ?        Sl     0:08 /usr/libexec/qemu-kvm -global
virtio-blk-pci.scsi=off -nodefconfig -nodefaults -nographic -drive
file=/var/lib/nova/instances/473ae9ed-f5c5-4cd7-8e7c-0fe12c507eb6/disk,cache=none,format=qcow2,if=virtio
-nodefconfig -machine accel=kvm:tcg -m 500 -no-reboot -device
virtio-serial -serial stdio -device sga -chardev
socket,path=/tmp/libguestfsfFBsgn/guestfsd.sock,id=channel0 -device
virtserialport,chardev=channel0,name=org.libguestfs.channel.0 -kernel
/var/tmp/.guestfs-162/kernel.1592 -initrd
/var/tmp/.guestfs-162/initrd.1592 -append panic=1 console=ttyS0
udevtimeout=300 no_timer_check acpi=off printk.time=1
cgroup_disable=memory selinux=0  TERM=linux  -drive
file=/var/tmp/.guestfs-162/root.1592,snapshot=on,if=virtio,cache=unsafe
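
One thing I plan to check next is the libvirt domain XML, to see
whether it contains an <interface> element at all
(instance-00000001 below is a placeholder for whatever "virsh list"
actually reports):

[root@compute1 ~]# virsh list --all
[root@compute1 ~]# virsh dumpxml instance-00000001 | grep -A 5 '<interface'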

Any hints as to what might be happening or where I can look to debug
this would be greatly appreciated.
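
In case it matters, these are the logs I'll be digging through next
(assuming the default RDO locations; with multi_host=True,
nova-network runs on the compute node):

[root@compute1 ~]# grep -i error /var/log/nova/compute.log
[root@compute1 ~]# grep -i -e error -e vif /var/log/nova/network.log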

Thanks!

-- 
========================================================================
Ian Pilcher                                         arequipeno at gmail.com
Sometimes there's nothing left to do but crash and burn...or die trying.
========================================================================



