[Openstack] error creating instance

Uwe Sauter uwe.sauter.de at gmail.com
Mon Jun 29 13:59:17 UTC 2015


You wouldn't want to configure br-ex on a compute host. br-ex is the bridge that connects a network node to the outside provider
network.

If you wanted to use VLAN-based tenant networks, then you would have to configure a new, separate bridge. But that is not the case here.
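
(For comparison only: a VLAN-based setup would need something roughly like the following in the OVS agent configuration on each host, where "br-vlan" is just a placeholder bridge name, not part of your deployment:

[ovs]
bridge_mappings = physnet1:br-vlan

plus an "ovs-vsctl add-br br-vlan" and "ovs-vsctl add-port br-vlan bond0" to attach the carrying interface. With GRE tenant networks none of that is needed on the compute hosts.)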

Please also make sure that GRE traffic is allowed between network and compute nodes.

iptables -I INPUT 2 -p gre -s <your tunnel network CIDR> -d <your tunnel network CIDR> -j ACCEPT

or equivalent.
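
A quick way to verify is to watch for GRE packets on the tunnel VLAN interface (bond0.47 in your setup) on both a network and a compute node while booting an instance, for example:

tcpdump -n -i bond0.47 'ip proto 47'    # IP protocol 47 = GRE

If packets leave one side but never arrive on the other, a firewall in between is the likely cause.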

	Uwe



On 29.06.2015 at 15:45, Yngvi Páll Þorfinnsson wrote:
> Hi Andreas,
> 
> OK, so I'm providing the correct log file now.
> Also, I'm not putting the line
> #bridge_mappings = external:br-ex
> in the conf file /etc/neutron/plugins/ml2/ml2_conf.ini
> on the compute host for now.
> 
> Best regards
> Yngvi
> 
> -----Original Message-----
> From: Andreas Scheuring [mailto:scheuran at linux.vnet.ibm.com] 
> Sent: 29 June 2015 13:07
> To: uwe.sauter.de at gmail.com
> Cc: openstack at lists.openstack.org
> Subject: Re: [Openstack] error creating instance
> 
> Uwe,
> According to the configuration files, Yngvi is using GRE networking.
> So no bridge mapping should be required at all for spawning an instance, right?
> 
> The VLAN tagging is done via a VLAN device on the bond, so in fact it's a statically configured VLAN. OpenStack does GRE tunneling on top of this VLAN.
> 
> 
> Yngvi: on the compute node, there should be a "neutron-openvswitch-agent" log file. You sent over the log file of Open vSwitch itself, but what is required is the log file of the OpenStack Neutron agent that manages Open vSwitch.
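> 
> (On Ubuntu that is usually something like /var/log/neutron/openvswitch-agent.log; the exact file name depends on the packaging. A rough first check around the time of the failed boot would be, for example:
> 
> grep -i error /var/log/neutron/*agent*.log | tail -n 50
> 
> on the compute node.)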
> 
> Andreas
> 
> 
> 
>> On Mon, 2015-06-29 at 13:40 +0200, Uwe Sauter wrote:
>> Hi,
>>
>> I ran into a similar problem. Make sure that you also include
>>
>> [ml2]
>> tenant_network_types = vlan
>>
>> [ml2_type_vlan]
>> network_vlan_ranges = physnet1:1501:1509
>>
>> in the ML2 config on your controller node (the controller needs this
>> info to make a proper decision on which VLAN IDs are available for tenant networks).
>>
>> Another trick that isn't mentioned in any documentation:
>>
>> Instead of symlinking /etc/neutron/plugins.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, do it the other way around:
>> keep your neutron plugin configuration centralized in /etc/neutron/plugins.ini and create one or two symlinks:
>>
>> /etc/neutron/plugins/ml2/ml2_conf.ini -> /etc/neutron/plugins.ini
>> (on each host running a neutron service)
>> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini -> /etc/neutron/plugins.ini
>> (on each host running neutron-openvswitch-agent)
>>
>> Just make sure that /etc/neutron/plugins.ini keeps both the ML2 and the Open vSwitch config options.
>>
>>
>> The whole trick works because Neutron services are started with the
>> option "--config-file <file>", and they will only read the files mentioned on the command line / init script / systemd unit file.
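>>
>> A rough sketch of what that looks like (adjust the paths to your packaging and back up any existing files first):
>>
>> # point both expected plugin config names at the central file
>> ln -sfn /etc/neutron/plugins.ini /etc/neutron/plugins/ml2/ml2_conf.ini
>> ln -sfn /etc/neutron/plugins.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
>>
>> # the services are still started the usual way, e.g.
>> # neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf \
>> #     --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
>>
>> Whichever symlinked name the init script passes via --config-file, the same plugins.ini content gets read.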
>>
>>
>> Regards,
>>
>> 	Uwe
>>
>>
>>
>> On 29.06.2015 at 13:01, Yngvi Páll Þorfinnsson wrote:
>>> Hi
>>>
>>> I should add to this information:
>>> I'm using VLANs on the compute, network and swift nodes,
>>> but not on the controller node.
>>> I'm not sure if that causes problems?
>>>
>>> Best regards
>>> Yngvi
>>>
>>>
>>> Hi Andreas
>>>
>>> I've attached these files from a network node:
>>>
>>> ml2_conf.ini
>>> neutron.conf
>>> l3_agent.ini
>>> nova-compute.log
>>>
>>> I've set up 3 VLANs on one interface and configured bonding. It works and I have a good connection between the servers:
>>>
>>> root at network2:/# cat /proc/net/vlan/config
>>> VLAN Dev name    | VLAN ID
>>> Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
>>> bond0.48       | 48  | bond0
>>> bond0.47       | 47  | bond0
>>> bond0.45       | 45  | bond0
>>>
>>> Tunnel network	-> bond0.47
>>> Mgmt network		-> bond0.48
>>> External network  	-> bond0.45
>>>
>>> And when I check for bridges and ports on the network node:
>>>
>>> root at network2:/# ovs-vsctl list-br
>>> br-ex
>>> br-int
>>> br-tun
>>>
>>> root at network2:/# ovs-vsctl list-ports br-ex
>>> bond0.45
>>> phy-br-ex
>>> qg-eb22e091-62
>>>
>>> best regards
>>> Yngvi
>>>
>>>
>>> -----Original Message-----
>>> From: Andreas Scheuring [mailto:scheuran at linux.vnet.ibm.com]
>>> Sent: 29 June 2015 06:59
>>> To: Yngvi Páll Þorfinnsson
>>> Cc: openstack at lists.openstack.org
>>> Subject: Re: [Openstack] error creating instance
>>>
>>> The issue seems to be related to a Neutron misconfiguration:
>>> --> Unexpected vif_type=binding_failed
>>>
>>> Please have a look at your neutron server config file on the network
>>> node(s) and the L2 agent config files (OVS?). You should find additional information there.
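>>>
>>> (When a port binding fails, the ML2 plugin normally also logs a "Failed to bind port ..." message on the node running neutron-server, so a rough check there is, for example:
>>>
>>> grep -i "failed to bind" /var/log/neutron/server.log
>>>
>>> which can point at the agent or mechanism driver that refused the binding.)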
>>>
>>>
>>> If this doesn't help, please provide these log files, the neutron config files and a brief description of how your nodes' networks are set up.
>>>
>>>
>>> Andreas
>>>
>>>
>>>
>>> On Fri, 2015-06-26 at 15:05 +0000, Yngvi Páll Þorfinnsson wrote:
>>>> Hi
>>>>
>>>>  
>>>>
>>>> Can someone please help on this matter?
>>>>
>>>>  
>>>>
>>>> I have installed and configured an OpenStack (Juno) environment on
>>>> Ubuntu (14.04).
>>>>
>>>> We currently have 1 controller node, 3 network nodes and 4 compute
>>>> nodes, as well as 4 swift nodes.
>>>>
>>>> I'm using OpenStack Networking (neutron).  
>>>>
>>>> I've recently introduced VLANs for the tunnel, mgmt and external
>>>> networks.
>>>>
>>>>  
>>>>
>>>> When I try to create an instance with this command:
>>>>
>>>>  
>>>>
>>>> nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=7a344656-815c-4116-b697-b52f9fdc6e4c --security-group default --key-name demo-key demo-instance3
>>>>
>>>>  
>>>>
>>>> it fails. The status from nova list is:
>>>>
>>>> root at controller2:/# nova list
>>>>
>>>> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
>>>> | ID                                   | Name            | Status | Task State | Power State | Networks              |
>>>> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
>>>> | ca662fc0-2417-4da1-be2c-d6ccf90ed732 | demo-instance22 | ERROR  | -          | NOSTATE     |                       |
>>>> | 17d26ca3-f56c-4a87-ae0a-acfafea4838c | demo-instance30 | ERROR  | -          | NOSTATE     | demo-net=x.x.x.x      |
>>>> +--------------------------------------+-----------------+--------+------------+-------------+-----------------------+
>>>>
>>>>  
>>>>
>>>> This error appears in the /var/log/syslog file on the compute node:
>>>>
>>>>  
>>>>
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.597951]  nbd8: p1
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.668430] EXT4-fs (nbd8): VFS: Can't find ext4 filesystem
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.668521] EXT4-fs (nbd8): VFS: Can't find ext4 filesystem
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.668583] EXT4-fs (nbd8): VFS: Can't find ext4 filesystem
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.668899] FAT-fs (nbd8): bogus number of reserved sectors
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.668936] FAT-fs (nbd8): Can't find a valid FAT filesystem
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.753989] block nbd8: NBD_DISCONNECT
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.754056] block nbd8: Receive control failed (result -32)
>>>> Jun 26 14:55:04 compute5 kernel: [ 2187.754161] block nbd8: queue cleared
>>>>
>>>>  
>>>>
>>>> Also, this is logged on the compute node, in
>>>> /var/log/nova/nova-compute.log:
>>>>
>>>>  
>>>>
>>>> 2015-06-26 14:55:02.591 7961 AUDIT nova.compute.claims [-] [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] disk limit not specified, defaulting to unlimited
>>>> 2015-06-26 14:55:02.606 7961 AUDIT nova.compute.claims [-] [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Claim successful
>>>> 2015-06-26 14:55:02.721 7961 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute5')
>>>> 2015-06-26 14:55:02.836 7961 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute5')
>>>> 2015-06-26 14:55:03.115 7961 INFO nova.virt.libvirt.driver [-] [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Creating image
>>>> 2015-06-26 14:55:03.118 7961 INFO nova.openstack.common.lockutils [-] Created lock path: /var/lib/nova/instances/locks
>>>> 2015-06-26 14:55:03.458 7961 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute5', 'compute5.siminn.is')
>>>> 2015-06-26 14:55:04.088 7961 INFO nova.virt.disk.vfs.api [-] Unable to import guestfsfalling back to VFSLocalFS
>>>> 2015-06-26 14:55:04.363 7961 ERROR nova.compute.manager [-] [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Instance failed to spawn
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Traceback (most recent call last):
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2267, in _build_resources
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     yield resources
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2137, in _build_and_run_instance
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     block_device_info=block_device_info)
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2620, in spawn
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     write_to_disk=True)
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4159, in _get_guest_xml
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     context)
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3937, in _get_guest_config
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     flavor, CONF.libvirt.virt_type)
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 352, in get_config
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     _("Unexpected vif_type=%s") % vif_type)
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] NovaException: Unexpected vif_type=binding_failed
>>>> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]
>>>>
>>>>  
>>>>
>>>>  
>>>>
>>>> Best regards
>>>>
>>>> Yngvi
>>>>
>>>>
>>>
>>> --
>>> Andreas
>>> (IRC: scheuran)
>>>
>>>
>>>
>>
>>
> 
> --
> Andreas
> (IRC: scheuran)
> 
> 
> 
> 




