[Openstack] ML2 Plugin and vif_type=binding_failed
Raphael Ribeiro
raphaelpr01 at gmail.com
Thu Jun 26 18:20:39 UTC 2014
Hi Yankai, here is the compute.log:
https://gist.github.com/raphapr/8e7896a738c6f6e6d27d#file-compute-log
but there is nothing in /var/log/neutron on the compute node, which is strange.
I noticed this in the OVS configuration:
*compute node*
# ovs-vsctl show
2662367f-e844-4fad-8c00-8f9dd9ddaa3d
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.11.0"
*network node*
# ovs-vsctl show
c01dd533-019c-471e-8930-609aca800b93
    Bridge br-int
        Port "qr-03b6df09-98"
            tag: 1
            Interface "qr-03b6df09-98"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-fc94fed1-33"
            tag: 4095
            Interface "qr-fc94fed1-33"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-tun
            Interface int-br-tun
        Port "tap03acfd5b-75"
            tag: 1
            Interface "tap03acfd5b-75"
                type: internal
    Bridge br-ex
        Port "qg-8c866cbc-1b"
            Interface "qg-8c866cbc-1b"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "1.11.0"
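
One thing that stands out: on the compute node there is only a bare br-int,
with no br-tun and no patch ports, as if the OVS agent never completed its
tunnel setup there. A quick check (the [ovs] values below are a sketch based
on the Icehouse install guide, not my actual config):

# on compute1: verify the [ovs] section the agent actually reads
grep -v '^#' /etc/neutron/plugins/ml2/ml2_conf.ini | grep -A4 '\[ovs\]'
# expecting something like:
#   [ovs]
#   local_ip = <compute tunnel-network IP>
#   tunnel_type = gre
#   enable_tunneling = True

# then restart the agent and see whether br-tun and patch-tun appear
service neutron-plugin-openvswitch-agent restart
ovs-vsctl show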
I also noticed that I can ping the subnet gateway from the compute node but
not from the other nodes:
*compute node*
# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=63 time=2.46 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=63 time=2.35 ms
^C
--- 192.168.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1698ms
rtt min/avg/max/mdev = 2.355/2.407/2.460/0.071 ms
*controller node *
# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
From 10.0.0.11 icmp_seq=2 Destination Host Unreachable
From 10.0.0.11 icmp_seq=3 Destination Host Unreachable
From 10.0.0.11 icmp_seq=4 Destination Host Unreachable
^C
--- 192.168.1.1 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3643ms
*network node*
# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
From 10.0.0.21 icmp_seq=2 Destination Host Unreachable
From 10.0.0.21 icmp_seq=3 Destination Host Unreachable
From 10.0.0.21 icmp_seq=4 Destination Host Unreachable
maybe that's the root of the problem?
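
For what it's worth, the gateway 192.168.1.1 lives inside a qrouter
namespace on the network node (assuming namespaces are enabled, as in the
install guide), so a plain ping from the network node host is expected to
fail. A sketch of how to test it from inside the namespace instead, with
the router ID as a placeholder:

# on the network node
ip netns list
ip netns exec qrouter-<ROUTER_ID> ping -c 2 192.168.1.1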
2014-06-26 6:02 GMT-03:00 Yankai Liu <yankai.liu at canonical.com>:
> Raphael,
>
> If you could share the debug logs from the nova compute node (/var/log/nova/,
> /var/log/neutron/), it would be helpful for digging out the root cause.
>
>
> On Thu, Jun 26, 2014 at 3:10 PM, Heiko Krämer <hkraemer at anynines.com>
> wrote:
>
>> Hi Raphael,
>>
>> could you please show
>> neutron net-show 013dbc13-ebc5-407b-9d24-c3bf21c68a90
>>
>> In addition:
>> grep core_plugin /etc/neutron/neutron.conf
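>>
>> which, with ML2 active, should print something like:
>> core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin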
>>
>>
>> Cheers
>> Heiko
>>
>> Am 25.06.2014 20:45, schrieb Raphael Ribeiro:
>> > Hi Heiko, I have already done that too; unfortunately, the error persists.
>> >
>> > Yankai, I tried to create an instance:
>> >
>> > nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 \
>> >     --nic net-id=013dbc13-ebc5-407b-9d24-c3bf21c68a90 \
>> >     --security-group default --key-name demo-key cirros
>> >
>> > +--------------------------------------+-------------------------------------------------------------+
>> > | Property                             | Value                                                       |
>> > +--------------------------------------+-------------------------------------------------------------+
>> > | OS-DCF:diskConfig                    | MANUAL                                                      |
>> > | OS-EXT-AZ:availability_zone          | nova                                                        |
>> > | OS-EXT-SRV-ATTR:host                 | -                                                           |
>> > | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                           |
>> > | OS-EXT-SRV-ATTR:instance_name        | instance-0000003c                                           |
>> > | OS-EXT-STS:power_state               | 0                                                           |
>> > | OS-EXT-STS:task_state                | scheduling                                                  |
>> > | OS-EXT-STS:vm_state                  | building                                                    |
>> > | OS-SRV-USG:launched_at               | -                                                           |
>> > | OS-SRV-USG:terminated_at             | -                                                           |
>> > | accessIPv4                           |                                                             |
>> > | accessIPv6                           |                                                             |
>> > | adminPass                            | DCGKfVprD8kD                                                |
>> > | config_drive                         |                                                             |
>> > | created                              | 2014-06-25T18:38:15Z                                        |
>> > | flavor                               | m1.tiny (1)                                                 |
>> > | hostId                               |                                                             |
>> > | id                                   | f173ceb8-2016-4e3e-bdde-bd5a5aed961b                        |
>> > | image                                | cirros-0.3.2-x86_64 (2acf2ca6-a935-45b3-94f2-d428f34f710f)  |
>> > | key_name                             | demo-key                                                    |
>> > | metadata                             | {}                                                          |
>> > | name                                 | cirros                                                      |
>> > | os-extended-volumes:volumes_attached | []                                                          |
>> > | progress                             | 0                                                           |
>> > | security_groups                      | default                                                     |
>> > | status                               | BUILD                                                       |
>> > | tenant_id                            | 30f220b0dca34241b9e4feb0bd117fe8                            |
>> > | updated                              | 2014-06-25T18:38:15Z                                        |
>> > | user_id                              | 8d8b6dbdacc6402b960b964b00bf8d14                            |
>> > +--------------------------------------+-------------------------------------------------------------+
>> >
>> > # nova list
>> >
>> > +--------------------------------------+--------+--------+------------+-------------+----------+
>> > | ID                                   | Name   | Status | Task State | Power State | Networks |
>> > +--------------------------------------+--------+--------+------------+-------------+----------+
>> > | f173ceb8-2016-4e3e-bdde-bd5a5aed961b | cirros | ERROR  | -          | NOSTATE     |          |
>> > +--------------------------------------+--------+--------+------------+-------------+----------+
>> >
>> >
>> > Looking at the neutron tables, I found this:
>> >
>> > select * from ml2_port_bindings;
>> >
>> > +--------------------------------------+----------+----------------+-------------+--------------------------------------+-----------+------------------------------------------------+---------+
>> > | port_id                              | host     | vif_type       | driver      | segment                              | vnic_type | vif_details                                    | profile |
>> > +--------------------------------------+----------+----------------+-------------+--------------------------------------+-----------+------------------------------------------------+---------+
>> > | 03b6df09-988f-414d-a7d5-28b4c4d3396c | network  | ovs            | openvswitch | a6ce71e2-e5c6-4a87-9297-4eafe8c0c6f7 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |
>> > | 8c866cbc-1b99-4ab4-94ae-ccc60ebe165a | network  | ovs            | openvswitch | 3d889f27-853a-43ea-afdc-dc19902d3b25 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |
>> > | 03acfd5b-75e7-4816-8323-4decddd2ccdc | network  | ovs            | openvswitch | a6ce71e2-e5c6-4a87-9297-4eafe8c0c6f7 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |
>> > | 2e6e9310-a909-4ebc-b767-c289eb73b8d3 | compute1 | binding_failed | NULL        | NULL                                 | normal    |                                                |         |
>> > | ebced03c-7ee1-44c6-b692-220837f1d121 | compute1 | binding_failed | NULL        | NULL                                 | normal    |                                                |         |
>> > | b073139a-4ddc-40ed-b4d7-65935b0f192b | compute1 | binding_failed | NULL        | NULL                                 | normal    |                                                |         |
>> > | d653d9e6-62ba-4e63-a53b-147038b73cd2 | compute1 | binding_failed | NULL        | NULL                                 | normal    |                                                |         |
>> > ...
>> >
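>> > To watch a single binding attempt instead of querying the DB directly,
>> > something like this should also work from an admin client, using one of
>> > the failed port IDs above:
>> >
>> > neutron port-show 2e6e9310-a909-4ebc-b767-c289eb73b8d3 | grep binding
>> >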
>> >
>> > So the Open vSwitch agent on compute1 cannot communicate with the controller?
>> >
>> > # neutron agent-list
>> >
>> > +--------------------------------------+--------------------+----------+-------+----------------+
>> > | id                                   | agent_type         | host     | alive | admin_state_up |
>> > +--------------------------------------+--------------------+----------+-------+----------------+
>> > | 294806e8-0ff6-455b-a9f2-8af5bb6d56e4 | Open vSwitch agent | compute1 | :-)   | True           |
>> > | 717e19f3-b042-433f-a6a8-7ff78ab35dce | Open vSwitch agent | network  | :-)   | True           |
>> > | aa70df34-07bb-47b4-aa00-58312db861f8 | L3 agent           | network  | :-)   | True           |
>> > | d3372871-37a1-4214-a722-c711c47aed34 | DHCP agent         | network  | :-)   | True           |
>> > | a8bdb412-d3bf-4aef-ad31-1d46f267d711 | Metadata agent     | network  | :-)   | True           |
>> > +--------------------------------------+--------------------+----------+-------+----------------+
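>> >
>> > Since the compute1 agent shows up as alive, the message bus seems fine,
>> > so the binding failure would point at an ML2 segment/driver mismatch
>> > instead. A sketch of what I could compare across the nodes:
>> >
>> > grep -E '(type_drivers|tenant_network_types|mechanism_drivers)' \
>> >     /etc/neutron/plugins/ml2/ml2_conf.ini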
>> >
>> >
>> >
>> >
>> >
>> > 2014-06-23 4:56 GMT-03:00 Yankai Liu <yankai.liu at canonical.com>:
>>
>> >
>> > Raphael,
>> >
>> > Please check whether your instance was created successfully. Sometimes
>> > the instance fails to spawn for some other reason, and nova will try to
>> > clean up the instance to roll back. During the clean-up it's possible to
>> > get the vif_binding exception. You may double-check your nova compute and
>> > nova controller log files to see what happened before this exception
>> > comes out.
>> >
>> > Best Regards,
>> > Kaya Liu
>> > 刘艳凯
>> >
>> > On Mon, Jun 23, 2014 at 3:01 PM, Heiko Krämer
>> > <hkraemer at anynines.com> wrote:
>> >
>> >
>>
>> Hi Raphael,
>>
>> please check if your ovs_plugin config is the same as the ml2 config.
>>
>> In addition, I'm missing this in your nova.conf:
>> libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
>>
>>
>> Cheers
>> Heiko
>>
>> On 20.06.2014 20:23, Raphael Ribeiro wrote:
>> > Hi Mark, thanks for answering. I have already done this; the logs show
>> > the same errors. I cannot see what is wrong with my files:
>>
>> > compute node config
>> > https://gist.github.com/raphapr/8e7896a738c6f6e6d27d
>>
>> > neutron node config
>> > https://gist.github.com/raphapr/a9e804f40d3336d7db7f
>>
>> > controller node config
>> > https://gist.github.com/raphapr/c46382554f733d0c1de1
>>
>> > can you help me?
>>
>>
>> > 2014-06-20 2:50 GMT-03:00 Mark Kirkwood <mark.kirkwood at catalyst.net.nz>:
>>
>>
>> >> He did this:
>> >>
>> >> $ cat /etc/neutron/neutron.conf
>> >> ...
>> >> [database]
>> >> # set in plugin
>> >> #connection =
>> >>
>> >>
>> >> $ cat /etc/neutron/plugins/ml2/ml2_conf.ini
>> >> ...
>> >> [database]
>> >> connection = mysql://neutron:password@127.0.0.1/neutron
>>
>> >>
>> >> Then (re)initialize the various db structures and restart all neutron
>> >> daemons:
>> >>
>> >> $ neutron-db-manage --config-file /etc/neutron/neutron.conf \
>> >> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
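>> >>
>> >> and then restart, e.g. on Ubuntu something like:
>> >>
>> >> $ service neutron-server restart
>> >> $ service neutron-plugin-openvswitch-agent restart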
>> >>
>> >>
>> >> On 20/06/14 15:49, Raphael Ribeiro wrote:
>> >>
>> >>> Hi Heiko, what was wrong with the ml2 config? Can you post it here?
>> >>>
>> >>> I'm having the same problem.
>> >>>
>> >>> Thanks!
>> >>>
>> >>>
>> >>> 2014-06-17 9:51 GMT-03:00 Heiko Krämer <hkraemer at anynines.com>:
>>
>> >>>
>> >>>>
>> >>>> Hi Akash,
>> >>>>
>> >>>> you're right, the ml2 config on the controller host was not correct -.-
>> >>>> my fault.
>> >>>>
>> >>>> In addition, the ml2_conf needs to contain the database connection
>> >>>> information, as in the ovs config.
>> >>>>
>> >>>> It's running now :)
>> >>>>
>> >>>> Thanks again.
>> >>>>
>> >>>>
>> >>>> Cheers
>> >>>> Heiko
>> >>>>
>> >>>> On 17.06.2014 12:31, Akash Gunjal wrote:
>> >>>>
>> >>>>> Hi,
>> >>>>>
>> >>>>> This error occurs when the config is wrong either on the controller
>> >>>>> or on the compute node. Check the ml2_conf.ini on the controller and
>> >>>>> the ovs_plugin.ini on the compute node.
>> >>>>>
>> >>>>>
>> >>>>> Regards, Akash
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> From: Heiko Krämer <hkraemer at anynines.com>
>> >>>>> To: Akilesh K <akilesh1597 at gmail.com>
>> >>>>> Cc: "openstack at lists.openstack.org" <openstack at lists.openstack.org>
>> >>>>> Date: 06/17/2014 03:56 PM
>> >>>>> Subject: Re: [Openstack] ML2 Plugin and vif_type=binding_failed
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> Hi Akilesh,
>> >>>>>
>> >>>>> I see this warning on neutron-server:
>> >>>>>
>> >>>>> 2014-06-17 10:14:20.283 24642 WARNING neutron.plugins.ml2.managers
>> >>>>> [req-d23b58ce-3389-4af5-bdd2-a78bd7cec507 None] Failed to bind
>> >>>>> port f71d7e0e-8955-4784-83aa-c23bf1b16f4f on host
>> >>>>> nettesting.hydranodes.de
>>
>> >>>>>
>> >>>>>
>> >>>>> If I restart the ovs-agent on the network node, I see this one:
>> >>>>>
>> >>>>> 2014-06-17 09:28:26.029 31369 ERROR neutron.agent.linux.ovsdb_monitor [-]
>> >>>>> Error received from ovsdb monitor:
>> >>>>> 2014-06-17T09:28:26Z|00001|fatal_signal|WARN|terminating with
>> >>>>> signal 15 (Terminated)
>> >>>>> 2014-06-17 09:28:29.275 31870 WARNING
>> >>>>> neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device
>> >>>>> f71d7e0e-8955-4784-83aa-c23bf1b16f4f not defined on plugin
>> >>>>> 2014-06-17 09:28:29.504 31870 WARNING
>> >>>>> neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device
>> >>>>> 39bb4ba0-3d37-4ffe-9c81-073807f8971a not defined on plugin
>> >>>>>
>> >>>>>
>> >>>>> The same on the compute host if I restart the ovs agent:
>> >>>>>
>> >>>>> 2014-06-17 09:28:44.446 25476 ERROR neutron.agent.linux.ovsdb_monitor [-]
>> >>>>> Error received from ovsdb monitor:
>> >>>>> 2014-06-17T09:28:44Z|00001|fatal_signal|WARN|terminating with
>> >>>>> signal 15 (Terminated)
>> >>>>>
>> >>>>>
>> >>>>> But OVS itself seems to be correct:
>> >>>>>
>> >>>>> ## Compute ##
>> >>>>> 7bbe81f3-79fa-4efa-b0eb-76addb57675c
>> >>>>>     Bridge br-tun
>> >>>>>         Port "gre-64141401"
>> >>>>>             Interface "gre-64141401"
>> >>>>>                 type: gre
>> >>>>>                 options: {in_key=flow, local_ip="100.20.20.2", out_key=flow, remote_ip="100.20.20.1"}
>> >>>>>         Port patch-int
>> >>>>>             Interface patch-int
>> >>>>>                 type: patch
>> >>>>>                 options: {peer=patch-tun}
>> >>>>>         Port br-tun
>> >>>>>             Interface br-tun
>> >>>>>                 type: internal
>> >>>>>     Bridge br-int
>> >>>>>         Port br-int
>> >>>>>             Interface br-int
>> >>>>>                 type: internal
>> >>>>>         Port patch-tun
>> >>>>>             Interface patch-tun
>> >>>>>                 type: patch
>> >>>>>                 options: {peer=patch-int}
>> >>>>>     ovs_version: "2.0.1"
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> ### Network node ###
>> >>>>> a40d7fc6-b0f0-4d55-98fc-c02cc7227d6c
>> >>>>>     Bridge br-ex
>> >>>>>         Port br-ex
>> >>>>>             Interface br-ex
>> >>>>>                 type: internal
>> >>>>>     Bridge br-tun
>> >>>>>         Port "gre-64141402"
>> >>>>>             Interface "gre-64141402"
>> >>>>>                 type: gre
>> >>>>>                 options: {in_key=flow, local_ip="100.20.20.1", out_key=flow, remote_ip="100.20.20.2"}
>> >>>>>         Port patch-int
>> >>>>>             Interface patch-int
>> >>>>>                 type: patch
>> >>>>>                 options: {peer=patch-tun}
>> >>>>>         Port br-tun
>> >>>>>             Interface br-tun
>> >>>>>                 type: internal
>> >>>>>     Bridge br-int
>> >>>>>         Port int-br-int
>> >>>>>             Interface int-br-int
>> >>>>>         Port "tapf71d7e0e-89"
>> >>>>>             tag: 4095
>> >>>>>             Interface "tapf71d7e0e-89"
>> >>>>>                 type: internal
>> >>>>>         Port br-int
>> >>>>>             Interface br-int
>> >>>>>                 type: internal
>> >>>>>         Port patch-tun
>> >>>>>             Interface patch-tun
>> >>>>>                 type: patch
>> >>>>>                 options: {peer=patch-int}
>> >>>>>         Port "qr-39bb4ba0-3d"
>> >>>>>             tag: 4095
>> >>>>>             Interface "qr-39bb4ba0-3d"
>> >>>>>                 type: internal
>> >>>>>         Port phy-br-int
>> >>>>>             Interface phy-br-int
>> >>>>>     ovs_version: "2.0.1"
>> >>>>>
>> >>>>>
>> >>>>> I see this one in my neutron DB:
>> >>>>>
>> >>>>> neutron=# select * from ml2_port_bindings;
>> >>>>>                port_id                |           host           |    vif_type    | driver | segment | vnic_type | vif_details | profile
>> >>>>> --------------------------------------+--------------------------+----------------+--------+---------+-----------+-------------+---------
>> >>>>>  39bb4ba0-3d37-4ffe-9c81-073807f8971a | nettesting.hydranodes.de | binding_failed |        |         | normal    |             | {}
>> >>>>>  f71d7e0e-8955-4784-83aa-c23bf1b16f4f | nettesting.hydranodes.de | binding_failed |        |         | normal    |             | {}
>> >>>>>
>> >>>>> Is that maybe the problem?
>> >>>>>
>> >>>>> Cheers Heiko
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> On 17.06.2014 12:08, Akilesh K wrote:
>> >>>>>
>> >>>>>> The file looks good, except that the [agent] section is not needed.
>> >>>>>> Can you reply with the log output from '/var/log/neutron/server.log'
>> >>>>>> from exactly the moment of the instance launch?
>> >>>>>>
>> >>>>>
>> >>>>>> The vif_type=binding_failed occurs when neutron is unable to
>> >>>>>> create a port for some reason. Either the neutron server log or the
>> >>>>>> plugin's log file should have some information on why it failed in
>> >>>>>> the first place.
>> >>>>>
>> >>>>>
>> >>>>>> On Tue, Jun 17, 2014 at 3:07 PM, Heiko Krämer
>> >>>>>> <hkraemer at anynines.com> wrote:
>> >>>>>>
>> >>>>>
>> >>>>> Hi Kaya,
>> >>>>>>
>> >>>>>
>> >>>>> https://gist.github.com/foexle/e1f02066d6a9cff306f4
>> >>>>>>
>> >>>>>
>> >>>>> Cheers Heiko
>> >>>>>>
>> >>>>>
>> >>>>> On 17.06.2014 11:17, Yankai Liu wrote:
>> >>>>>>
>> >>>>>>> Heiko,
>> >>>>>>>>>
>> >>>>>>>>> Would you please share your ml2_conf.ini?
>> >>>>>>>>>
>> >>>>>>>>> Best Regards, Kaya Liu 刘艳凯 Cloud Architect, Canonical
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>> On Tue, Jun 17, 2014 at 4:58 PM, Heiko Krämer
>> >>>>>>>>> <hkraemer at anynines.com> wrote:
>> >>>>>>>>>
>> >>>>>>>>> Hi guys,
>> >>>>>>>>>
>> >>>>>>>>> I'm trying to get the ml2 plugin to work in Icehouse (Ubuntu
>> >>>>>>>>> 14.04 + cloud archive packages). I get this every time I try to
>> >>>>>>>>> start an instance:
>> >>>>>>>>>
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1396, in _reschedule_or_error
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     bdms, requested_networks)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2125, in _shutdown_instance
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     requested_networks)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2115, in _shutdown_instance
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     block_device_info)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 953, in destroy
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     destroy_disks)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 989, in cleanup
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     self.unplug_vifs(instance, network_info)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 860, in unplug_vifs
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     self.vif_driver.unplug(instance, vif)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 798, in unplug
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher     _("Unexpected vif_type=%s") % vif_type)
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher NovaException: Unexpected vif_type=binding_failed
>> >>>>>>>>> 2014-06-17 08:42:01.893 25437 TRACE oslo.messaging.rpc.dispatcher
>> >>>>>>>>> So I've found a possible solution, but it's still not working:
>> >>>>>>>>>
>> >>>>>>>>> https://ask.openstack.org/en/question/29518/unexpected-vif_typebinding_failed/?answer=32429#post-id-32429
>> >>>>>>>>>
>> >>>>>>>>> I've checked the agent_down_time and the retry interval. All
>> >>>>>>>>> neutron agents are present and running when I check the API.
>> >>>>>>>>>
>> >>>>>>>>> The ovs plugin and the ml2 plugin config are the same.
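>> >>>>>>>>>
>> >>>>>>>>> I compared them with a plain diff; the paths here assume the
>> >>>>>>>>> stock Ubuntu packages:
>> >>>>>>>>>
>> >>>>>>>>> diff /etc/neutron/plugins/ml2/ml2_conf.ini \
>> >>>>>>>>>     /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini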
>> >>>>>>>>>
>> >>>>>>>>> The DHCP and L3 agents create ports on Open vSwitch (network
>> >>>>>>>>> host), but I get the error (above) on the compute hosts.
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>> Modules are installed and loaded:
>> >>>>>>>>>
>> >>>>>>>>> filename:       /lib/modules/3.13.0-29-generic/kernel/net/openvswitch/openvswitch.ko
>> >>>>>>>>> license:        GPL
>> >>>>>>>>> description:    Open vSwitch switching datapath
>> >>>>>>>>> srcversion:     1CEE031973F0E4024ACC848
>> >>>>>>>>> depends:        libcrc32c,vxlan,gre
>> >>>>>>>>> intree:         Y
>> >>>>>>>>> vermagic:       3.13.0-29-generic SMP mod_unload modversions
>> >>>>>>>>> signer:         Magrathea: Glacier signing key
>> >>>>>>>>> sig_key:        66:02:CB:36:F1:31:3B:EA:01:C4:BD:A9:65:67:CF:A7:23:C9:70:D8
>> >>>>>>>>> sig_hashalgo:   sha512
>> >>>>
>> >>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>> Nova-Config:
>> >>>>>>>>>
>> >>>>>>>>> [DEFAULT]
>> >>>>>>>>> libvirt_type=kvm
>> >>>>>>>>> libvirt_ovs_bridge=br-int
>> >>>>>>>>> libvirt_vif_type=ethernet
>> >>>>>>>>> libvirt_use_virtio_for_bridges=True
>> >>>>>>>>> libvirt_cpu_mode=host-passthrough
>> >>>>>>>>> disk_cachemodes="file=writeback,block=none"
>> >>>>>>>>> running_deleted_instance_action=reep
>> >>>>>>>>> compute_driver=libvirt.LibvirtDriver
>> >>>>>>>>> libvirt_inject_partition = -1
>> >>>>>>>>> libvirt_nonblocking = True
>> >>>>>>>>> vif_plugging_is_fatal = False
>> >>>>>>>>> vif_plugging_timeout = 0
>> >>>>>>>>>
>> >>>>>>>>> [..]
>> >>>>>>>>>
>> >>>>>>>>> network_api_class=nova.network.neutronv2.api.API
>> >>>>>>>>> neutron_url=http://net.cloud.local:9696
>> >>>>>>>>> neutron_metadata_proxy_shared_secret = xxx
>> >>>>>>>>> neutron_auth_strategy=keystone
>> >>>>>>>>> neutron_admin_tenant_name=service
>> >>>>>>>>> neutron_admin_username=keystone
>> >>>>>>>>> neutron_admin_password=xxx
>> >>>>>>>>> neutron_admin_auth_url=https://auth-testing.cloud.local:35357/v2.0
>> >>>>>>>>> linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
>> >>>>>>>>> firewall_driver=nova.virt.firewall.NoopFirewallDriver
>> >>>>>>>>> security_group_api=neutron
>> >>>>>>>>> service_neutron_metadata_proxy=true
>> >>>>>>>>> force_dhcp_release=True
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>> Does anyone have the same problem and has solved it?
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>> Cheers and thanks, Heiko
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>> --
>> >>>> Anynines.com
>> >>>>
>> >>>> B.Sc. Informatik
>> >>>> CIO
>> >>>> Heiko Krämer
>> >>>>
>> >>>> Twitter: @anynines
>> >>>>
>> >>>> ----
>> >>>> Geschäftsführer: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
>> >>>> Handelsregister: AG Saarbrücken HRB 17413, Ust-IdNr.: DE262633168
>> >>>> Sitz: Saarbrücken
>> >>>> Avarteq GmbH
>>
>> >
>> >
>>
>> >
>> >
>> >
>> >
>> >
>>
>> --
>> anynines.com
>>
>>
>
--
*Raphael Pereira Ribeiro*
*Instituto de Computação - IC/UFAL*
*Undergraduate student in Computer Science*
*http://lattes.cnpq.br/9969641216207080*