[Openstack] error creating instance

Andreas Scheuring scheuran at linux.vnet.ibm.com
Mon Jun 29 15:25:23 UTC 2015



Attempting to bind port 2bf4a49b-2ad6-4ead-a656-65814ad0724e on network
7a344656-815c-4116-b697-b52f9fdc6e4c
bind_port /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_agent.py:57
2015-06-29 14:28:55.924 5328 DEBUG
neutron.plugins.ml2.drivers.mech_agent
[req-9fe66e60-1a70-4ad6-b21e-ef91aca8a931 None] Checking agent:
{'binary': u'neutron-openvswitch-agent', 'description': None,
'admin_state_up': True, 'heartbeat_timestamp': datetime.datetime(2015,
6, 29, 14, 28, 45), 'alive': True, 'id':
u'1c06fb08-105c-4659-ae0e-4a905931311e', 'topic': u'N/A', 'host':
u'compute5', 'agent_type': u'Open vSwitch agent', 'started_at':
datetime.datetime(2015, 6, 29, 14, 27, 45), 'created_at':
datetime.datetime(2015, 6, 26, 14, 51, 14), 'configurations':
{u'arp_responder_enabled': False, u'tunneling_ip': u'172.22.15.17',
u'devices': 0, u'l2_population': False, u'tunnel_types': [u'gre'],
u'enable_distributed_routing': False, u'bridge_mappings': {}}}
bind_port /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_agent.py:65
2015-06-29 14:28:55.925 5328 DEBUG
neutron.plugins.ml2.drivers.mech_openvswitch
[req-9fe66e60-1a70-4ad6-b21e-ef91aca8a931 None] Checking segment:
{'segmentation_id': 1102L, 'physical_network': u'external', 'id':
u'cf6489c4-7ed6-43dc-85aa-f4b8c6b501ca', 'network_type': u'vlan'} for
mappings: {} with tunnel_types: [u'gre']
check_segment_for_agent /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_openvswitch.py:52



===
Checking segment: {'segmentation_id': 1102L,
'physical_network': u'external', 'id':
u'cf6489c4-7ed6-43dc-85aa-f4b8c6b501ca', 'network_type': u'vlan'}
for mappings: {}
with tunnel_types: [u'gre']

This looks strange: it seems the network you're binding to is a vlan
network on the physical_network 'external'. That shouldn't be the case
for a tenant network in your gre setup.
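
For comparison, with a working gre setup this check usually reports a
segment without a physical_network, roughly like this (values are
purely illustrative):

Checking segment: {'segmentation_id': 1001L, 'physical_network': None,
'id': u'<segment-uuid>', 'network_type': u'gre'}
for mappings: {} with tunnel_types: [u'gre']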

Could you please provide the following information:

Information about all available OpenStack networks:

> neutron net-list

> neutron net-show <uuid> 

Especially of this one:
> neutron net-show cf6489c4-7ed6-43dc-85aa-f4b8c6b501ca


Usually your tenant network should look like this (this example happens to be vxlan):

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | ef6552a5-be39-4bcc-9dde-2a200eaca64d |
| mtu                       | 0                                    |
| name                      | private                              |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1001                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 4b539feb-b104-4f69-83ba-76f746a2c592 |
|                           | ac255618-afe9-4aea-b86d-b662b68e9d9d |
| tenant_id                 | 3c4ddcff52a74f2b97b71392300aa74d     |
+---------------------------+--------------------------------------+

How did you create yours? Via the UI? Or are you attaching your instance
to the external network instead? In any case, you need to attach it to
your tenant network! If it's not visible in the UI, you may have to
switch to another tenant to see it.
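
If you need to create a fresh gre tenant network to boot against, it
would look something like this (names and CIDR are only examples):

> neutron net-create demo-net
> neutron subnet-create demo-net 192.168.20.0/24 --name demo-subnet
> nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=<uuid of demo-net> demo-instance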

Hope we're close to finding the issue ;)


Andreas


On Mon, 2015-06-29 at 14:33 +0000, Yngvi Páll Þorfinnsson wrote:
> OK, I've enabled 
> Debug=True
> 
> I did try to create an instance, I've attached the server.log file for the neutron server
> 
> Best regards
> Yngvi
> 
> 
> -----Original Message-----
> From: Andreas Scheuring [mailto:scheuran at linux.vnet.ibm.com] 
> Sent: 29 June 2015 14:02
> To: Yngvi Páll Þorfinnsson
> Cc: uwe.sauter.de at gmail.com; openstack at lists.openstack.org
> Subject: Re: [Openstack] error creating instance
> 
> Correct; as the error messages indicated, getting rid of the bridge mapping was the right move. Your agent is now running without error messages.
> 
> Could you please enable debug=true in your neutron.conf so that we also get the debug logs (then restart the neutron server and the ovs agent).
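> 
> For reference, that would be something like this in
> /etc/neutron/neutron.conf (the restart commands below use the usual
> Ubuntu service names, which may differ on other setups):
> 
> [DEFAULT]
> debug = True
> 
> root@controller2:/# service neutron-server restart
> root@compute5:/# service neutron-plugin-openvswitch-agent restart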
> 
> Can you trigger the start of another instance, so that an error message appears?
> 
> Andreas
> 
> 
> 
> 
> On Mon, 2015-06-29 at 13:45 +0000, Yngvi Páll Þorfinnsson wrote:
> > Hi Andreas,
> > 
> > OK, I'm providing the correct log file now.
> > Also, I'm not putting the line
> > #bridge_mappings = external:br-ex
> > in the conf file
> > /etc/neutron/plugins/ml2/ml2_conf.ini
> > on the compute host for now.
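> > 
> > For illustration, the tunnel part of that file now looks roughly
> > like this (local_ip is the tunneling_ip shown in the debug log
> > above; exact options may vary):
> > 
> > [ovs]
> > local_ip = 172.22.15.17
> > enable_tunneling = True
> > 
> > [agent]
> > tunnel_types = gre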
> > 
> > Best regards
> > Yngvi
> > 
> > -----Original Message-----
> > From: Andreas Scheuring [mailto:scheuran at linux.vnet.ibm.com]
> > Sent: 29 June 2015 13:07
> > To: uwe.sauter.de at gmail.com
> > Cc: openstack at lists.openstack.org
> > Subject: Re: [Openstack] error creating instance
> > 
> > Uwe,
> > judging by the configuration files, Yngvi is using gre networking. 
> > So no bridge mapping should be required at all for spawning an instance, right?
> > 
> > The vlan tagging is done via a vlan device on the bond, so in fact it's a statically configured vlan. OpenStack does gre tunneling on top of this vlan.
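> > 
> > For comparison, a plain gre setup in ml2_conf.ini typically looks
> > something like this (a sketch along the lines of the Juno install
> > guide, not Yngvi's actual file):
> > 
> > [ml2]
> > type_drivers = flat,gre
> > tenant_network_types = gre
> > mechanism_drivers = openvswitch
> > 
> > [ml2_type_gre]
> > tunnel_id_ranges = 1:1000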
> > 
> > 
> > Yngvi: on the compute node, there should be a
> > "neutron-openvswitch-agent" log file. You sent over the log file of
> > openvswitch itself, but what is needed is the log file of the
> > openstack neutron agent that manages openvswitch.
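> > 
> > On Ubuntu that file usually lives at
> > /var/log/neutron/openvswitch-agent.log, e.g.:
> > 
> > root@compute5:/# tail -n 100 /var/log/neutron/openvswitch-agent.log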
> > 
> > Andreas
> > 
> > 
> > 
> > > On Mon, 2015-06-29 at 13:40 +0200, Uwe Sauter wrote:
> > > Hi,
> > > 
> > > I ran into a similar problem. Make sure that you include
> > > 
> > > [ml2]
> > > tenant_network_types = vlan
> > > 
> > > [ml2_type_vlan]
> > > network_vlan_ranges = physnet1:1501:1509
> > > 
> > > on your controller node (the controller needs this info to make a 
> > > proper decision on which VLAN IDs are available for tenant networks).
> > > 
> > > One other trick that isn't mentioned in any documentation:
> > > 
> > > Instead of symlinking /etc/neutron/plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, do it the other way around:
> > > keep your neutron plugin configuration centralized in /etc/neutron/plugin.ini and create one or two symlinks:
> > > 
> > > /etc/neutron/plugins/ml2/ml2_conf.ini -> /etc/neutron/plugin.ini 
> > > (on each host running a neutron service) 
> > > /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini -> 
> > > /etc/neutron/plugin.ini (on each host running
> > > neutron-openvswitch-agent)
> > > 
> > > Just make sure that /etc/neutron/plugin.ini contains both the ML2 and the OpenVSwitch config options.
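> > > 
> > > For example (assuming /etc/neutron/plugin.ini already holds the
> > > merged config):
> > > 
> > > ln -sf /etc/neutron/plugin.ini /etc/neutron/plugins/ml2/ml2_conf.ini
> > > ln -sf /etc/neutron/plugin.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini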
> > > 
> > > 
> > > The whole trick works because Neutron services are started with 
> > > the option "--config-file <file>", and they only read the files mentioned on the command line / init script / systemd unit file.
> > > 
> > > 
> > > Regards,
> > > 
> > > 	Uwe
> > > 
> > > 
> > > 
> > > > On 29.06.2015 at 13:01, Yngvi Páll Þorfinnsson wrote:
> > > > Hi
> > > > 
> > > > I should add to this information:
> > > > I'm using VLANs on the compute, network and swift nodes,
> > > > but not on the controller node.
> > > > I'm not sure if that causes problems?
> > > > 
> > > > Best regards
> > > > Yngvi
> > > > 
> > > > 
> > > > Hi Andreas
> > > > 
> > > > I've attached these files from a network node:
> > > > 
> > > > ml2_conf.ini
> > > > neutron.conf
> > > > l3_agent.ini
> > > > nova-compute.log
> > > > 
> > > > I've set up 3 vlans on one interface and configured bonding. It works, and I have a good connection between the servers:
> > > > 
> > > > root@network2:/# cat /proc/net/vlan/config
> > > > VLAN Dev name    | VLAN ID
> > > > Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
> > > > bond0.48       | 48  | bond0
> > > > bond0.47       | 47  | bond0
> > > > bond0.45       | 45  | bond0
> > > > 
> > > > Tunnel network   -> bond0.47
> > > > Mgmt network     -> bond0.48
> > > > External network -> bond0.45
> > > > 
> > > > And when I check for bridges and ports on the network node:
> > > > 
> > > > root@network2:/# ovs-vsctl list-br
> > > > br-ex
> > > > br-int
> > > > br-tun
> > > > 
> > > > root@network2:/# ovs-vsctl list-ports br-ex
> > > > bond0.45
> > > > phy-br-ex
> > > > qg-eb22e091-62
> > > > 
> > > > best regards
> > > > Yngvi
> > > > 
> > > > 
> > > > -----Original Message-----
> > > > From: Andreas Scheuring [mailto:scheuran at linux.vnet.ibm.com]
> > > > Sent: 29 June 2015 06:59
> > > > To: Yngvi Páll Þorfinnsson
> > > > Cc: openstack at lists.openstack.org
> > > > Subject: Re: [Openstack] error creating instance
> > > > 
> > > > The issue seems to be related to a neutron misconfiguration:
> > > > --> Unexpected vif_type=binding_failed
> > > > 
> > > > Please have a look at your neutron server config file on the network
> > > > node(s) and at the l2 agent config files (ovs?). You should find additional information there.
> > > > 
> > > > 
> > > > If this doesn't help, please provide those log files, the neutron config files and a brief description of how your nodes' networks are set up.
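> > > > 
> > > > One quick way to spot the failure on the neutron side is to grep
> > > > the server log for the port binding, e.g.:
> > > > 
> > > > root@controller2:/# grep -i bind /var/log/neutron/server.log | tail -n 50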
> > > > 
> > > > 
> > > > Andreas
> > > > 
> > > > 
> > > > 
> > > > On Fri, 2015-06-26 at 15:05 +0000, Yngvi Páll Þorfinnsson wrote:
> > > >> Hi
> > > >>
> > > >>  
> > > >>
> > > >> Can someone please help with this matter?
> > > >>
> > > >>  
> > > >>
> > > >> I have installed and configured an OpenStack (Juno) environment on 
> > > >> Ubuntu (14.04).
> > > >>
> > > >> We currently have 1 controller node, 3 network nodes and 4 
> > > >> compute nodes, as well as 4 swift nodes.
> > > >>
> > > >> I'm using OpenStack Networking (neutron).  
> > > >>
> > > >> I've recently introduced VLANs for the tunnel, mgmt and external 
> > > >> networks.
> > > >>
> > > >>  
> > > >>
> > > >> When I try to create an instance with this command:
> > > >>
> > > >>  
> > > >>
> > > >> nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic 
> > > >> net-id=7a344656-815c-4116-b697-b52f9fdc6e4c --security-group 
> > > >> default --key-name demo-key demo-instance3
> > > >>
> > > >>  
> > > >>
> > > >> it fails. The status from nova list is:
> > > >>
> > > >> root@controller2:/# nova list
> > > >>
> > > >> +--------------------------------------+-----------------+--------+------------+-------------+------------------+
> > > >> | ID                                   | Name            | Status | Task State | Power State | Networks         |
> > > >> +--------------------------------------+-----------------+--------+------------+-------------+------------------+
> > > >> | ca662fc0-2417-4da1-be2c-d6ccf90ed732 | demo-instance22 | ERROR  | -          | NOSTATE     |                  |
> > > >> | 17d26ca3-f56c-4a87-ae0a-acfafea4838c | demo-instance30 | ERROR  | -          | NOSTATE     | demo-net=x.x.x.x |
> > > >> +--------------------------------------+-----------------+--------+------------+-------------+------------------+
> > > >>
> > > >>  
> > > >>
> > > >> This error appears in the /var/log/syslog file on the compute node:
> > > >>
> > > >>  
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.597951]  nbd8: p1
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.668430] EXT4-fs (nbd8): VFS:
> > > >> Can't find ext4 filesystem
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.668521] EXT4-fs (nbd8): VFS:
> > > >> Can't find ext4 filesystem
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.668583] EXT4-fs (nbd8): VFS:
> > > >> Can't find ext4 filesystem
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.668899] FAT-fs (nbd8): 
> > > >> bogus number of reserved sectors
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.668936] FAT-fs (nbd8): 
> > > >> Can't find a valid FAT filesystem
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.753989] block nbd8:
> > > >> NBD_DISCONNECT
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.754056] block nbd8: 
> > > >> Receive control failed (result -32)
> > > >>
> > > >> Jun 26 14:55:04 compute5 kernel: [ 2187.754161] block nbd8: queue 
> > > >> cleared
> > > >>
> > > >>  
> > > >>
> > > >> Also, this is logged on the compute node, in 
> > > >> /var/log/nova/nova-compute.log
> > > >>
> > > >>  
> > > >>
> > > >> 2015-06-26 14:55:02.591 7961 AUDIT nova.compute.claims [-] [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c] disk limit not specified, 
> > > >> defaulting to unlimited
> > > >>
> > > >> 2015-06-26 14:55:02.606 7961 AUDIT nova.compute.claims [-] [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Claim successful
> > > >>
> > > >> 2015-06-26 14:55:02.721 7961 INFO nova.scheduler.client.report 
> > > >> [-] Compute_service record updated for ('compute5')
> > > >>
> > > >> 2015-06-26 14:55:02.836 7961 INFO nova.scheduler.client.report 
> > > >> [-] Compute_service record updated for ('compute5')
> > > >>
> > > >> 2015-06-26 14:55:03.115 7961 INFO nova.virt.libvirt.driver [-]
> > > >> [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Creating image
> > > >>
> > > >> 2015-06-26 14:55:03.118 7961 INFO nova.openstack.common.lockutils 
> > > >> [-] Created lock path: /var/lib/nova/instances/locks
> > > >>
> > > >> 2015-06-26 14:55:03.458 7961 INFO nova.scheduler.client.report 
> > > >> [-] Compute_service record updated for ('compute5',
> > > >> 'compute5.siminn.is')
> > > >>
> > > >> 2015-06-26 14:55:04.088 7961 INFO nova.virt.disk.vfs.api [-] 
> > > >> Unable to import guestfs, falling back to VFSLocalFS
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 ERROR nova.compute.manager [-] [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Instance failed to spawn
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Traceback (most recent call
> > > >> last):
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File
> > > >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
> > > >> 2267, in _build_resources
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     yield resources
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File
> > > >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
> > > >> 2137, in _build_and_run_instance
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]
> > > >> block_device_info=block_device_info)
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File
> > > >> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py",
> > > >> line 2620, in spawn
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     write_to_disk=True)
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File
> > > >> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py",
> > > >> line 4159, in _get_guest_xml
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     context)
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File
> > > >> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py",
> > > >> line 3937, in _get_guest_config
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     flavor,
> > > >> CONF.libvirt.virt_type)
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File
> > > >> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 
> > > >> 352, in get_config
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     _("Unexpected vif_type=%s")
> > > >> % vif_type)
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c] NovaException: Unexpected 
> > > >> vif_type=binding_failed
> > > >>
> > > >> 2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance:
> > > >> 17d26ca3-f56c-4a87-ae0a-acfafea4838c]
> > > >>
> > > >>  
> > > >>
> > > >>  
> > > >>
> > > >> Best regards
> > > >>
> > > >> Yngvi
> > > >>
> > > >>
> > > > 
> > > > --
> > > > Andreas
> > > > (IRC: scheuran)
> > > > 
> > > > 
> > > > 
> > > 
> > > 
> > 
> > --
> > Andreas
> > (IRC: scheuran)
> > 
> > 
> > 
> 
> --
> Andreas
> (IRC: scheuran)
> 
> 

-- 
Andreas
(IRC: scheuran)





