[Openstack] ML2 Plugin and vif_type=binding_failed
Yankai Liu
yankai.liu at canonical.com
Wed Jun 18 00:26:57 UTC 2014
Heiko,
great to see it's resolved. I ran into a similar issue because I was
trying to reconfigure with vxlan a network that was previously gre. It
turned out that you have to handle the ml2 configuration on the controller
node, network node and compute node very carefully. In addition, delete
the old networks and recreate them if you want to use the same network
with vxlan instead of gre. Some hints for stackers who plan to do the same
test.
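For anyone attempting the same gre-to-vxlan switch, a minimal sketch of the ml2 side follows. The option values are illustrative assumptions, not taken from the thread; the point is that the file must agree on the controller, network and compute nodes:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini -- sketch, values are placeholders
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
# illustrative VNI range
vni_ranges = 1:1000

[ovs]
# tunnel endpoint IP of this particular node (placeholder)
local_ip = 10.0.0.1

[agent]
tunnel_types = vxlan
```

After updating all nodes and restarting neutron-server and the ovs agents, the old gre networks can be deleted and recreated (e.g. with `neutron net-delete` / `neutron net-create`) so that new ports bind against vxlan segments.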
Best Regards,
Kaya Liu
刘艳凯
On Tue, Jun 17, 2014 at 9:07 PM, Akilesh K <akilesh1597 at gmail.com> wrote:
> The [database] section need not be in ml2_conf.ini. It needs to be only in
> neutron.conf if you are using Icehouse.
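Concretely, on Icehouse that means the database connection string lives in neutron.conf. A sketch, with a placeholder connection URL:

```ini
# /etc/neutron/neutron.conf -- the [database] section belongs here on
# Icehouse, not in ml2_conf.ini (connection URL below is a placeholder)
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller-host/neutron
```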
>
>
> On Tue, Jun 17, 2014 at 6:21 PM, Heiko Krämer <hkraemer at anynines.com>
> wrote:
>
>>
>> Hi Akash,
>>
>> you're right, the ml2 config on the controller host was not correct
>> -.- my fault.
>>
>> In addition, the ml2_conf needs to contain the database connection
>> information, just like the ovs config did.
>>
>> It's running now :)
>>
>> Thanks again.
>>
>>
>> Cheers
>> Heiko
>>
>> On 17.06.2014 12:31, Akash Gunjal wrote:
>> > Hi,
>> >
>> > This error occurs when the config is wrong either on the controller
>> > or the compute node. Check the ml2_conf.ini on the controller and
>> > ovs_plugin.ini on the compute.
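Since the usual culprit is a setting that differs between nodes, one way to check is to copy each node's file locally and diff the parsed settings. A hypothetical helper (the function names and file names are illustrative, not from the thread):

```python
import configparser

def ml2_settings(path):
    """Return {(section, option): value} from an ini file, for easy diffing."""
    cp = configparser.ConfigParser(interpolation=None)
    cp.read(path)
    return {(sec, opt): val for sec in cp.sections() for opt, val in cp.items(sec)}

def config_diff(path_a, path_b):
    """Settings present in only one file, or with differing values."""
    return set(ml2_settings(path_a).items()) ^ set(ml2_settings(path_b).items())
```

For example, `config_diff("ml2-controller.ini", "ml2-compute.ini")` after fetching both files; an empty set means the two configs agree, and any entry in the result is a candidate cause of binding_failed.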
>> >
>> >
>> > Regards, Akash
>> >
>> >
>> >
>> > From: Heiko Krämer <hkraemer at anynines.com>
>> > To: Akilesh K <akilesh1597 at gmail.com>
>> > Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>> > Date: 06/17/2014 03:56 PM
>> > Subject: Re: [Openstack] ML2 Plugin and vif_type=binding_failed
>> >
>> >
>> >
>> > Hi Akilesh,
>> >
>> > I see this warning on neutron-server:
>> >
>> > 2014-06-17 10:14:20.283 24642 WARNING neutron.plugins.ml2.managers [req-d23b58ce-3389-4af5-bdd2-a78bd7cec507 None] Failed to bind port f71d7e0e-8955-4784-83aa-c23bf1b16f4f on host nettesting.hydranodes.de
>> >
>> >
>> > If I restart the ovs-agent on the network node I see this:
>> >
>> > 2014-06-17 09:28:26.029 31369 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: 2014-06-17T09:28:26Z|00001|fatal_signal|WARN|terminating with signal 15 (Terminated)
>> > 2014-06-17 09:28:29.275 31870 WARNING neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device f71d7e0e-8955-4784-83aa-c23bf1b16f4f not defined on plugin
>> > 2014-06-17 09:28:29.504 31870 WARNING neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device 39bb4ba0-3d37-4ffe-9c81-073807f8971a not defined on plugin
>> >
>> >
>> > The same on the compute host if I restart the ovs agent:
>> >
>> > 2014-06-17 09:28:44.446 25476 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: 2014-06-17T09:28:44Z|00001|fatal_signal|WARN|terminating with signal 15 (Terminated)
>> >
>> >
>> > but ovs seems to be correct:
>> >
>> > ## Compute ##
>> > 7bbe81f3-79fa-4efa-b0eb-76addb57675c
>> >     Bridge br-tun
>> >         Port "gre-64141401"
>> >             Interface "gre-64141401"
>> >                 type: gre
>> >                 options: {in_key=flow, local_ip="100.20.20.2", out_key=flow, remote_ip="100.20.20.1"}
>> >         Port patch-int
>> >             Interface patch-int
>> >                 type: patch
>> >                 options: {peer=patch-tun}
>> >         Port br-tun
>> >             Interface br-tun
>> >                 type: internal
>> >     Bridge br-int
>> >         Port br-int
>> >             Interface br-int
>> >                 type: internal
>> >         Port patch-tun
>> >             Interface patch-tun
>> >                 type: patch
>> >                 options: {peer=patch-int}
>> >     ovs_version: "2.0.1"
>> >
>> > ## Network node ##
>> > a40d7fc6-b0f0-4d55-98fc-c02cc7227d6c
>> >     Bridge br-ex
>> >         Port br-ex
>> >             Interface br-ex
>> >                 type: internal
>> >     Bridge br-tun
>> >         Port "gre-64141402"
>> >             Interface "gre-64141402"
>> >                 type: gre
>> >                 options: {in_key=flow, local_ip="100.20.20.1", out_key=flow, remote_ip="100.20.20.2"}
>> >         Port patch-int
>> >             Interface patch-int
>> >                 type: patch
>> >                 options: {peer=patch-tun}
>> >         Port br-tun
>> >             Interface br-tun
>> >                 type: internal
>> >     Bridge br-int
>> >         Port int-br-int
>> >             Interface int-br-int
>> >         Port "tapf71d7e0e-89"
>> >             tag: 4095
>> >             Interface "tapf71d7e0e-89"
>> >                 type: internal
>> >         Port br-int
>> >             Interface br-int
>> >                 type: internal
>> >         Port patch-tun
>> >             Interface patch-tun
>> >                 type: patch
>> >                 options: {peer=patch-int}
>> >         Port "qr-39bb4ba0-3d"
>> >             tag: 4095
>> >             Interface "qr-39bb4ba0-3d"
>> >                 type: internal
>> >         Port phy-br-int
>> >             Interface phy-br-int
>> >     ovs_version: "2.0.1"
>> >
>> >
>> > I see this in my neutron DB:
>> >
>> > neutron=# select * from ml2_port_bindings;
>> >                port_id                |           host           |    vif_type    | driver | segment | vnic_type | vif_details | profile
>> > --------------------------------------+--------------------------+----------------+--------+---------+-----------+-------------+---------
>> >  39bb4ba0-3d37-4ffe-9c81-073807f8971a | nettesting.hydranodes.de | binding_failed |        |         | normal    |             | {}
>> >  f71d7e0e-8955-4784-83aa-c23bf1b16f4f | nettesting.hydranodes.de | binding_failed |        |         | normal    |             | {}
>> >
>> > Is that maybe the problem?
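The empty driver and segment columns alongside vif_type = binding_failed do suggest that no mechanism driver claimed these ports. To watch just the failing rows, the query can be narrowed; a self-contained illustration below runs it against an in-memory sqlite stand-in for the table (the real schema lives in MySQL, and the columns here are simplified):

```python
import sqlite3

# Simplified mock of neutron's ml2_port_bindings table, seeded with the
# two rows from the thread -- only to illustrate the narrowed query.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ml2_port_bindings (port_id TEXT, host TEXT, vif_type TEXT, driver TEXT)"
)
conn.executemany(
    "INSERT INTO ml2_port_bindings VALUES (?, ?, ?, ?)",
    [
        ("39bb4ba0-3d37-4ffe-9c81-073807f8971a", "nettesting.hydranodes.de", "binding_failed", ""),
        ("f71d7e0e-8955-4784-83aa-c23bf1b16f4f", "nettesting.hydranodes.de", "binding_failed", ""),
    ],
)

# The query to run against the real neutron DB:
failed = conn.execute(
    "SELECT port_id, host FROM ml2_port_bindings WHERE vif_type = 'binding_failed'"
).fetchall()
for port_id, host in failed:
    print(port_id, host)
```

After fixing the config, deleting and recreating the affected ports (or networks) should make these rows disappear.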
>> >
>> > Cheers Heiko
>> >
>> >
>> >
>> > On 17.06.2014 12:08, Akilesh K wrote:
>> >> File looks good, except that the [agent] section is not needed. Can
>> >> you reply with some log output from '/var/log/neutron/server.log',
>> >> captured exactly while an instance launches?
>> >
>> >> The vif_type=binding_failed occurs when neutron is unable to
>> >> create a port for some reason. Either the neutron server log or the
>> >> plugin's log file should have some information on why it failed in
>> >> the first place.
>> >
>> >
>> >> On Tue, Jun 17, 2014 at 3:07 PM, Heiko Krämer
>> >> <hkraemer at anynines.com> wrote:
>> >
>> >> Hi Kaya,
>> >
>> >> https://gist.github.com/foexle/e1f02066d6a9cff306f4
>> >
>> >> Cheers Heiko
>> >
>> >> On 17.06.2014 11:17, Yankai Liu wrote:
>> >>>>> Heiko,
>> >>>>>
>> >>>>> Would you please share your ml2_conf.ini?
>> >>>>>
>> >>>>> Best Regards, Kaya Liu 刘艳凯 Cloud Architect, Canonical
>> >>>>>
>> >>>>>
>> >>>>> On Tue, Jun 17, 2014 at 4:58 PM, Heiko Krämer
>> >>>>> <hkraemer at anynines.com> wrote:
>> >>>>>
>> >>>>> Hi guys,
>> >>>>>
>> >>>>> I'm trying to get the ml2 plugin working in Icehouse (Ubuntu
>> >>>>> 14.04 + cloud archive packages). I get the following every time
>> >>>>> I try to start an instance:
>> >>>>>
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1396, in _reschedule_or_error
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     bdms, requested_networks)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2125, in _shutdown_instance
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     requested_networks)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2115, in _shutdown_instance
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     block_device_info)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 953, in destroy
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     destroy_disks)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 989, in cleanup
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     self.unplug_vifs(instance, network_info)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 860, in unplug_vifs
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     self.vif_driver.unplug(instance, vif)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 798, in unplug
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     _("Unexpected vif_type=%s") % vif_type)
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher NovaException: Unexpected vif_type=binding_failed
>> >>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher
>> >>>>>
>> >>>>>
>> >>>>> So I've found a suggested solution, but it's still not working:
>> >>>>>
>> >>>>> https://ask.openstack.org/en/question/29518/unexpected-vif_typebinding_failed/?answer=32429#post-id-32429
>> >>>>>
>> >>>>> I've checked agent_down_time and the retry interval. All neutron
>> >>>>> agents are present and running when I check the API.
>> >>>>>
>> >>>>> The ovs plugin and ml2 plugin configs are the same.
>> >>>>>
>> >>>>> The DHCP and l3 agents create ports on openvswitch (the network
>> >>>>> host), but I get the error (above) on the compute hosts.
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> The module is installed and loaded:
>> >>>>>
>> >>>>> filename:       /lib/modules/3.13.0-29-generic/kernel/net/openvswitch/openvswitch.ko
>> >>>>> license:        GPL
>> >>>>> description:    Open vSwitch switching datapath
>> >>>>> srcversion:     1CEE031973F0E4024ACC848
>> >>>>> depends:        libcrc32c,vxlan,gre
>> >>>>> intree:         Y
>> >>>>> vermagic:       3.13.0-29-generic SMP mod_unload modversions
>> >>>>> signer:         Magrathea: Glacier signing key
>> >>>>> sig_key:        66:02:CB:36:F1:31:3B:EA:01:C4:BD:A9:65:67:CF:A7:23:C9:70:D8
>> >>>>> sig_hashalgo:   sha512
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> Nova config:
>> >>>>>
>> >>>>> [DEFAULT]
>> >>>>> libvirt_type=kvm
>> >>>>> libvirt_ovs_bridge=br-int
>> >>>>> libvirt_vif_type=ethernet
>> >>>>> libvirt_use_virtio_for_bridges=True
>> >>>>> libvirt_cpu_mode=host-passthrough
>> >>>>> disk_cachemodes="file=writeback,block=none"
>> >>>>> running_deleted_instance_action=reap
>> >>>>> compute_driver=libvirt.LibvirtDriver
>> >>>>> libvirt_inject_partition = -1
>> >>>>> libvirt_nonblocking = True
>> >>>>> vif_plugging_is_fatal = False
>> >>>>> vif_plugging_timeout = 0
>> >>>>>
>> >>>>> [..]
>> >>>>>
>> >>>>> network_api_class=nova.network.neutronv2.api.API
>> >>>>> neutron_url=http://net.cloud.local:9696
>> >>>>> neutron_metadata_proxy_shared_secret = xxx
>> >>>>> neutron_auth_strategy=keystone
>> >>>>> neutron_admin_tenant_name=service
>> >>>>> neutron_admin_username=keystone
>> >>>>> neutron_admin_password=xxx
>> >>>>> neutron_admin_auth_url=https://auth-testing.cloud.local:35357/v2.0
>> >>>>> linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
>> >>>>> firewall_driver=nova.virt.firewall.NoopFirewallDriver
>> >>>>> security_group_api=neutron
>> >>>>> service_neutron_metadata_proxy=true
>> >>>>> force_dhcp_release=True
>> >>>>>
>> >>>>> Does anyone have the same problem and solved it?
>> >>>>>
>> >>>>> Cheers and thanks,
>> >>>>> Heiko
>> >>>>>
>> >>>>>>
>> >>>>>> _______________________________________________
>> >>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >>>>>> Post to     : openstack at lists.openstack.org
>> >>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> >>>>>>
>> >>>
>> >>>
>> >
>> >
>> >
>> >
>>
>> --
>> Anynines.com
>>
>> B.Sc. Informatik
>> CIO
>> Heiko Krämer
>>
>>
>> Twitter: @anynines
>>
>> ----
>> Geschäftsführer: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
>> Handelsregister: AG Saarbrücken HRB 17413, Ust-IdNr.: DE262633168
>> Sitz: Saarbrücken
>> Avarteq GmbH
>>
>
>
>
>