<div dir="ltr">I solved the error with the link Dmitry sent (<a href="https://github.com/droopy4096/openstack-ng/tree/multinode">https://github.com/droopy4096/openstack-ng/tree/multinode</a>). Thanks, Dmitry!<div><font face="arial, sans-serif"><br>
</font></div><div><font face="arial, sans-serif">Apparently the ml2_conf.ini and ovs_neutron_plugin.ini on the compute and network nodes were not correct, so I copied the templates from <a href="https://github.com/droopy4096/openstack-ng/tree/multinode/roles/neutron-controller/templates">https://github.com/droopy4096/openstack-ng/tree/multinode/roles/neutron-controller/templates</a></font></div>
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">But now, when trying to create an instance, this happens:</font></div><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif"><br>
</font></div><div><font face="arial, sans-serif">/var/log/neutron/openvswitch-agent.log (compute)</font></div><div><font face="arial, sans-serif"><div><br></div><div>2014-06-27 04:04:06.355 2398 WARNING neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Unable to create tunnel port. Invalid remote IP: local_ip=10.0.1.21</div>
<div>2014-06-27 04:04:06.355 2398 WARNING neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Unable to create tunnel port. Invalid remote IP: local_ip=10.0.1.31</div><div>...</div><div>2014-06-27 04:05:30.203 2398 INFO neutron.agent.securitygroups_rpc [req-180b4ec3-939e-4e22-a87e-f09f935b5f17 None] Security group member updated [u'edced1da-e548-4932-849d-3707ab64b3ab']</div>
<div>2014-06-27 04:05:38.474 2398 INFO neutron.agent.securitygroups_rpc [-] Preparing filters for devices set([u'69d5d895-1992-4aac-b830-a6e9d054450b'])</div><div>2014-06-27 04:05:39.031 2398 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Port 69d5d895-1992-4aac-b830-a6e9d054450b updated. Details: {u'admin_state_up': True, u'network_id': u'2dcbc40a-13df-4dfe-b6c9-e522be9cdc5b', u'segmentation_id': 2, u'physical_network': None, u'device': u'69d5d895-1992-4aac-b830-a6e9d054450b', u'port_id': u'69d5d895-1992-4aac-b830-a6e9d054450b', u'network_type': u'gre'}</div>
<div>...</div><div><br></div><div>It looks like something easy to solve, but I cannot find the root of the problem.</div></font><div><br></div><div><br></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
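The "Unable to create tunnel port. Invalid remote IP" warnings usually point at the tunnel addressing in the OVS agent config. A minimal sketch of the relevant section of ovs_neutron_plugin.ini, assuming GRE tunneling and the 10.0.1.x tunnel network that appears in the log above (the concrete address is an assumption, not taken from the configs):

```ini
; Sketch only -- 10.0.1.31 is assumed from the log above, adjust to your layout.
[ovs]
enable_tunneling = True
tunnel_type = gre
; local_ip must be an address actually configured on this node's tunnel
; interface; each peer's local_ip becomes this agent's remote tunnel IP.
local_ip = 10.0.1.31
```

Comparing local_ip on each node against the addresses actually present there (ip addr) is a quick way to spot a mismatch.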
2014-06-26 15:20 GMT-03:00 Raphael Ribeiro <span dir="ltr"><<a href="mailto:raphaelpr01@gmail.com" target="_blank">raphaelpr01@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hi Yankai, the compute.log: <a href="https://gist.github.com/raphapr/8e7896a738c6f6e6d27d#file-compute-log" target="_blank">https://gist.github.com/raphapr/8e7896a738c6f6e6d27d#file-compute-log</a><div><br>
</div><div>but there is nothing in /var/log/neutron on compute node, strange?</div>
<div><br></div><div>I noticed this in the OVS configuration:</div><div><br></div><div><br></div><div><b>compute node</b></div><div><br></div><div><div># ovs-vsctl show</div><div>2662367f-e844-4fad-8c00-8f9dd9ddaa3d</div><div class="">
<div>
Bridge br-int</div><div> Port br-int</div><div> Interface br-int</div><div> type: internal</div></div><div> ovs_version: "1.11.0"</div></div><div><br></div><div><br></div>
<div><b>network node</b></div>
<div><br></div><div><div><div># ovs-vsctl show</div><div>c01dd533-019c-471e-8930-609aca800b93</div><div> Bridge br-int</div><div> Port "qr-03b6df09-98"</div><div> tag: 1</div><div> Interface "qr-03b6df09-98"</div>
<div class="">
<div> type: internal</div><div> Port br-int</div><div> Interface br-int</div><div> type: internal</div></div><div> Port "qr-fc94fed1-33"</div><div> tag: 4095</div>
<div> Interface "qr-fc94fed1-33"</div><div class=""><div> type: internal</div><div> Port patch-tun</div><div> Interface patch-tun</div><div> type: patch</div>
<div> options: {peer=patch-int}</div>
</div><div> Port int-br-tun</div><div> Interface int-br-tun</div><div> Port "tap03acfd5b-75"</div><div> tag: 1</div><div> Interface "tap03acfd5b-75"</div>
<div> type: internal</div>
<div> Bridge br-ex</div><div> Port "qg-8c866cbc-1b"</div><div> Interface "qg-8c866cbc-1b"</div><div> type: internal</div><div> Port "eth2"</div><div>
Interface "eth2"</div><div class=""><div> Port br-ex</div><div> Interface br-ex</div><div> type: internal</div><div> Bridge br-tun</div></div><div class=""><div> Port br-tun</div>
<div> Interface br-tun</div>
<div> type: internal</div><div> Port patch-int</div><div> Interface patch-int</div><div> type: patch</div><div> options: {peer=patch-tun}</div></div><div> ovs_version: "1.11.0"</div>
</div></div><div><br></div><div>I also noticed that I can ping subnet gateway with compute node but not with the other nodes:</div><div><br></div><div><b>compute node</b></div><div><br></div><div><br></div><div><div># ping 192.168.1.1</div>
<div>PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.</div><div>64 bytes from 192.168.1.1: icmp_seq=1 ttl=63 time=2.46 ms</div>
<div>64 bytes from 192.168.1.1: icmp_seq=2 ttl=63 time=2.35 ms</div>
<div>^C</div><div>--- 192.168.1.1 ping statistics ---</div><div>2 packets transmitted, 2 received, 0% packet loss, time 1698ms</div><div>rtt min/avg/max/mdev = 2.355/2.407/2.460/0.071 ms</div></div><div><br></div><div><b>controller node </b></div>
<div><br></div><div><div># ping 192.168.1.1 </div><div>PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.</div>
<div>From 10.0.0.11 icmp_seq=2 Destination Host Unreachable</div><div>From 10.0.0.11 icmp_seq=3 Destination Host Unreachable</div><div>From 10.0.0.11 icmp_seq=4 Destination Host Unreachable</div><div>^C</div><div>--- 192.168.1.1 ping statistics ---</div>
<div>4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3643ms</div></div><div><br></div><div><b>network node</b></div><div><br></div><div><div># ping 192.168.1.1</div><div>PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.</div>
<div>From 10.0.0.21 icmp_seq=2 Destination Host Unreachable</div><div>From 10.0.0.21 icmp_seq=3 Destination Host Unreachable</div><div>From 10.0.0.21 icmp_seq=4 Destination Host Unreachable</div></div><div><br></div><div>
<br></div><div>Maybe that's the root of the problem?<br></div><div><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-06-26 6:02 GMT-03:00 Yankai Liu <span dir="ltr"><<a href="mailto:yankai.liu@canonical.com" target="_blank">yankai.liu@canonical.com</a>></span>:<div>
<div class="h5"><br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Raphael,<div><br></div><div>If you could share the debug log from nova compute node (/var/log/nova/; /var/log/neutron/) it will be helpful to dig out the root cause.</div>
<div><div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Thu, Jun 26, 2014 at 3:10 PM, Heiko Krämer <span dir="ltr"><<a href="mailto:hkraemer@anynines.com" target="_blank">hkraemer@anynines.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
Hi Raphael,<br>
<br>
could you please show the output of<br>
neutron net-show <span style="white-space:pre-wrap">013dbc13-ebc5-407b-9d24-c3bf21c68a90<br>
<br>
and, in addition, of<br>
cat /etc/neutron/neutron.conf | grep core_plugin<br>
<br>
<br>
Cheers<br>
Heiko<br>
</span><br>
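For reference, with ML2 as the core plugin that grep would typically return something like:

```ini
; Typical Icehouse setting; some installs use the short alias "ml2" instead.
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
```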
On 25.06.2014 at 20:45, Raphael Ribeiro wrote:<br>
<span style="white-space:pre-wrap"><div><div>> Hi Heiko, I have already done this too; unfortunately, the error persists.<br>
><br>
> Yankai, I tried to create an instance:<br>
><br>
><br>
> nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=013dbc13-ebc5-407b-9d24-c3bf21c68a90 --security-group default --key-name demo-key cirros<br>
> +--------------------------------------+------------------------------------------------------------+<br>
> | Property                             | Value                                                      |<br>
> +--------------------------------------+------------------------------------------------------------+<br>
> | OS-DCF:diskConfig                    | MANUAL                                                     |<br>
> | OS-EXT-AZ:availability_zone          | nova                                                       |<br>
> | OS-EXT-SRV-ATTR:host                 | -                                                          |<br>
> | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |<br>
> | OS-EXT-SRV-ATTR:instance_name        | instance-0000003c                                          |<br>
> | OS-EXT-STS:power_state               | 0                                                          |<br>
> | OS-EXT-STS:task_state                | scheduling                                                 |<br>
> | OS-EXT-STS:vm_state                  | building                                                   |<br>
> | OS-SRV-USG:launched_at               | -                                                          |<br>
> | OS-SRV-USG:terminated_at             | -                                                          |<br>
> | accessIPv4                           |                                                            |<br>
> | accessIPv6                           |                                                            |<br>
> | adminPass                            | DCGKfVprD8kD                                               |<br>
> | config_drive                         |                                                            |<br>
> | created                              | 2014-06-25T18:38:15Z                                       |<br>
> | flavor                               | m1.tiny (1)                                                |<br>
> | hostId                               |                                                            |<br>
> | id                                   | f173ceb8-2016-4e3e-bdde-bd5a5aed961b                       |<br>
> | image                                | cirros-0.3.2-x86_64 (2acf2ca6-a935-45b3-94f2-d428f34f710f) |<br>
> | key_name                             | demo-key                                                   |<br>
> | metadata                             | {}                                                         |<br>
> | name                                 | cirros                                                     |<br>
> | os-extended-volumes:volumes_attached | []                                                         |<br>
> | progress                             | 0                                                          |<br>
> | security_groups                      | default                                                    |<br>
> | status                               | BUILD                                                      |<br>
> | tenant_id                            | 30f220b0dca34241b9e4feb0bd117fe8                           |<br>
> | updated                              | 2014-06-25T18:38:15Z                                       |<br>
> | user_id                              | 8d8b6dbdacc6402b960b964b00bf8d14                           |<br>
> +--------------------------------------+------------------------------------------------------------+<br>
><br>
> # nova list<br>
> +--------------------------------------+--------+--------+------------+-------------+----------+<br>
> | ID                                   | Name   | Status | Task State | Power State | Networks |<br>
> +--------------------------------------+--------+--------+------------+-------------+----------+<br>
> | f173ceb8-2016-4e3e-bdde-bd5a5aed961b | cirros | ERROR  | -          | NOSTATE     |          |<br>
> +--------------------------------------+--------+--------+------------+-------------+----------+<br>
><br>
><br>
> Looking at the neutron tables, I found this:<br>
><br>
> select * from ml2_port_bindings;<br>
> +--------------------------------------+----------+----------------+-------------+--------------------------------------+-----------+------------------------------------------------+---------+<br>
> | port_id                              | host     | vif_type       | driver      | segment                              | vnic_type | vif_details                                    | profile |<br>
> +--------------------------------------+----------+----------------+-------------+--------------------------------------+-----------+------------------------------------------------+---------+<br>
> | 03b6df09-988f-414d-a7d5-28b4c4d3396c | network  | ovs            | openvswitch | a6ce71e2-e5c6-4a87-9297-4eafe8c0c6f7 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |<br>
> | 8c866cbc-1b99-4ab4-94ae-ccc60ebe165a | network  | ovs            | openvswitch | 3d889f27-853a-43ea-afdc-dc19902d3b25 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |<br>
> | 03acfd5b-75e7-4816-8323-4decddd2ccdc | network  | ovs            | openvswitch | a6ce71e2-e5c6-4a87-9297-4eafe8c0c6f7 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |<br>
> | 2e6e9310-a909-4ebc-b767-c289eb73b8d3 | compute1 | binding_failed | NULL        | NULL                                 | normal    |                                                |         |<br>
> | ebced03c-7ee1-44c6-b692-220837f1d121 | compute1 | binding_failed | NULL        | NULL                                 | normal    |                                                |         |<br>
> | b073139a-4ddc-40ed-b4d7-65935b0f192b | compute1 | binding_failed | NULL        | NULL                                 | normal    |                                                |         |<br>
> | d653d9e6-62ba-4e63-a53b-147038b73cd2 | compute1 | binding_failed | NULL        | NULL                                 | normal    |                                                |         |<br>
> ...<br>
><br>
><br>
> so the compute openvswitch agent cannot communicate with
the controller?<br>
><br>
> # neutron agent-list <br>
> +--------------------------------------+--------------------+----------+-------+----------------+<br>
> | id                                   | agent_type         | host     | alive | admin_state_up |<br>
> +--------------------------------------+--------------------+----------+-------+----------------+<br>
> | 294806e8-0ff6-455b-a9f2-8af5bb6d56e4 | Open vSwitch agent | compute1 | :-)   | True           |<br>
> | 717e19f3-b042-433f-a6a8-7ff78ab35dce | Open vSwitch agent | network  | :-)   | True           |<br>
> | aa70df34-07bb-47b4-aa00-58312db861f8 | L3 agent           | network  | :-)   | True           |<br>
> | d3372871-37a1-4214-a722-c711c47aed34 | DHCP agent         | network  | :-)   | True           |<br>
> | a8bdb412-d3bf-4aef-ad31-1d46f267d711 | Metadata agent     | network  | :-)   | True           |<br>
> +--------------------------------------+--------------------+----------+-------+----------------+<br>
><br>
><br>
><br>
><br>
><br></div></div>
> 2014-06-23 4:56 GMT-03:00 Yankai Liu
<<a href="mailto:yankai.liu@canonical.com" target="_blank">yankai.liu@canonical.com</a>
<a href="mailto:yankai.liu@canonical.com" target="_blank"><mailto:yankai.liu@canonical.com></a>>:<div><br>
><br>
> Raphael,<br>
><br>
> Please check whether your instance is created successfully.
Sometimes the instance fails to spawn for some other reason,
and nova then tries to clean it up to roll back. During the
clean-up it is possible to get the vif_binding exception. You
may double-check your nova compute and nova controller log files
to see what happened before this exception comes out.<br>
><br>
> Best Regards,<br>
> Kaya Liu<br>
> 刘艳凯<br>
><br></div><div>
> On Mon, Jun 23, 2014 at 3:01 PM, Heiko Krämer
<<a href="mailto:hkraemer@anynines.com" target="_blank">hkraemer@anynines.com</a> <a href="mailto:hkraemer@anynines.com" target="_blank"><mailto:hkraemer@anynines.com></a>>
wrote:<br>
><br>
></div></span><br>
<blockquote type="cite"><div>Hi Raphael,<br>
<br>
please check if your ovs_plugin config is the same as the ml2
config.<br>
<br>
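As an illustration of what "the same" means here, the tunnel settings have to agree between the two files; a hypothetical GRE example (values invented, option names as in the Icehouse ML2/OVS plugins):

```ini
; /etc/neutron/plugins/ml2/ml2_conf.ini (hypothetical values)
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

; /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini -- must match:
[ovs]
enable_tunneling = True
tunnel_type = gre
tunnel_id_ranges = 1:1000
```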
In addition, I am missing the following in your nova.conf:<br>
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver<br>
<br>
<br>
Cheers<br>
Heiko<br>
<br>
On 20.06.2014 20:23, Raphael Ribeiro wrote:<br>
> Hi Mark, thanks for answering. I have already done this; same
error logs. I<br>
> cannot imagine what is wrong with my files:<br>
<br>
> compute node config<br>
> <a href="https://gist.github.com/raphapr/8e7896a738c6f6e6d27d" target="_blank">https://gist.github.com/raphapr/8e7896a738c6f6e6d27d</a><br>
<br>
> neutron node config<br>
> <a href="https://gist.github.com/raphapr/a9e804f40d3336d7db7f" target="_blank">https://gist.github.com/raphapr/a9e804f40d3336d7db7f</a><br>
<br>
> controller node config<br>
> <a href="https://gist.github.com/raphapr/c46382554f733d0c1de1" target="_blank">https://gist.github.com/raphapr/c46382554f733d0c1de1</a><br>
<br>
> can you help me?<br>
<br>
<br></div>
> 2014-06-20 2:50 GMT-03:00 Mark Kirkwood
<<a href="mailto:mark.kirkwood@catalyst.net.nz" target="_blank">mark.kirkwood@catalyst.net.nz</a>
<a href="mailto:mark.kirkwood@catalyst.net.nz" target="_blank"><mailto:mark.kirkwood@catalyst.net.nz></a>>:<div><br>
<br>
>> He did this:<br>
>><br>
>> $ cat /etc/neutron/neutron.conf<br>
>> ...<br>
>> [database]<br>
>> # set in plugin<br>
>> #connection =<br>
>><br>
>><br>
>> $ cat /etc/neutron/plugins/ml2/ml2_conf.ini<br>
>> ...<br>
>> [database]<br></div>
>> connection = <a href="mailto:mysql://neutron:password@127.0.0.1/neutron" target="_blank">mysql://neutron:password@127.0.0.1/neutron</a>
<a href="http://neutron:password@127.0.0.1/neutron" target="_blank"><http://neutron:password@127.0.0.1/neutron></a><div><br>
>><br>
>> Then (re)initialize the various db structures and restart
all neutron<br>
>> daemons:<br>
>><br>
>> $ neutron-db-manage --config-file
/etc/neutron/neutron.conf \<br>
>> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
upgrade head<br>
>><br>
>><br>
>> On 20/06/14 15:49, Raphael Ribeiro wrote:<br>
>><br>
>>> Hi Heiko, what was wrong with the ml2 config? Can you
post here?<br>
>>><br>
>>> I'm having the same problem.<br>
>>><br>
>>> Thanks!<br>
>>><br>
>>><br></div>
>>> 2014-06-17 9:51 GMT-03:00 Heiko Krämer
<<a href="mailto:hkraemer@anynines.com" target="_blank">hkraemer@anynines.com</a>
<a href="mailto:hkraemer@anynines.com" target="_blank"><mailto:hkraemer@anynines.com></a>>:<div><br>
>>><br>
>>> -----BEGIN PGP SIGNED MESSAGE-----<br>
>>>> Hash: SHA1<br>
>>>><br>
>>>> Hi Akesh,<br>
>>>><br>
>>>> you're right, the ml2 config on the controller
host was not correct -.-<br>
>>>> my fault.<br>
>>>><br>
>>>> In addition, the ml2_conf needs the
database connection<br>
>>>> information, as in the ovs config.<br>
>>>><br>
>>>> It's running now :)<br>
>>>><br>
>>>> Thanks again.<br>
>>>><br>
>>>><br>
>>>> Cheers<br>
>>>> Heiko<br>
>>>><br>
>>>> On 17.06.2014 12:31, Akash Gunjal wrote:<br>
>>>><br>
>>>>> Hi,<br>
>>>>><br>
>>>>> This error occurs when the config is wrong
either on the controller or<br>
>>>>> the compute. Check the ml2_conf.ini on the
controller and<br>
>>>>> ovs_plugin.ini on the compute.<br>
>>>>><br>
>>>>><br>
>>>>> Regards, Akash<br>
>>>>><br>
>>>>><br>
>>>>><br></div>
>>>>> From: Heiko Krämer <<a href="mailto:hkraemer@anynines.com" target="_blank">hkraemer@anynines.com</a>
<a href="mailto:hkraemer@anynines.com" target="_blank"><mailto:hkraemer@anynines.com></a>> To: Akilesh K<br>
>>>>> <<a href="mailto:akilesh1597@gmail.com" target="_blank">akilesh1597@gmail.com</a>
<a href="mailto:akilesh1597@gmail.com" target="_blank"><mailto:akilesh1597@gmail.com></a>>, Cc:
"<a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a>
<a href="mailto:openstack@lists.openstack.org" target="_blank"><mailto:openstack@lists.openstack.org></a>"<br>
>>>>> <<a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a>
<a href="mailto:openstack@lists.openstack.org" target="_blank"><mailto:openstack@lists.openstack.org></a>> Date: 06/17/2014
03:56 PM Subject:<div><br>
>>>>> Re: [Openstack] ML2 Plugin and
vif_type=binding_failed<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> Hi Akilesh,<br>
>>>>><br>
>>>>> i see this warn on neutron-server<br>
>>>>><br></div>
>>>>> 2014-06-17 10:14:20.283 24642 WARNING neutron.plugins.ml2.managers<div><br>
>>>>> [req-d23b58ce-3389-4af5-bdd2-a78bd7cec507
None] Failed to bind<br>
>>>>> port f71d7e0e-8955-4784-83aa-c23bf1b16f4f on
host<br></div>
>>>>> <a href="http://nettesting.hydranodes.de" target="_blank">nettesting.hydranodes.de</a><div><br>
>>>>><br>
>>>>><br>
>>>>> if i restart ovs-agent on network node i see
this one: 2014-06-17<br></div>
>>>>> 09:28:26.029 31369 ERROR neutron.agent.linux.ovsdb_monitor [-]<div><div><br>
ERROR neutron.agent.linux.ovsdb_monitor [-]<div><div><br>
>>>>> Error received from ovsdb monitor:<br>
>>>>>
2014-06-17T09:28:26Z|00001|fatal_signal|WARN|terminating with<br>
>>>>> signal 15 (Terminated) 2014-06-17
09:28:29.275 31870 WARNING<br>
>>>>>
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device<br>
>>>>> f71d7e0e-8955-4784-83aa-c23bf1b16f4f not
defined on plugin<br>
>>>>> 2014-06-17 09:28:29.504 31870 WARNING<br>
>>>>>
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device<br>
>>>>> 39bb4ba0-3d37-4ffe-9c81-073807f8971a not
defined on plugin<br>
>>>>><br>
>>>>><br>
>>>>> same on comp host if i restart ovs agent:
2014-06-17 09:28:44.446<br>
>>>>> 25476 ERROR neutron.agent.linux.ovsdb_monitor
[-] Error received<br>
>>>>> from ovsdb monitor:<br>
>>>>>
2014-06-17T09:28:44Z|00001|fatal_signal|WARN|terminating with<br>
>>>>> signal 15 (Terminated)<br>
>>>>><br>
>>>>><br>
>>>>> but ovs seems to be correct:<br>
>>>>><br>
>>>>> ##Compute##<br>
>>>>> 7bbe81f3-79fa-4efa-b0eb-76addb57675c<br>
>>>>> Bridge br-tun<br>
>>>>> Port "gre-64141401"<br>
>>>>> Interface "gre-64141401"<br>
>>>>> type: gre<br>
>>>>> options: {in_key=flow, local_ip="100.20.20.2", out_key=flow, remote_ip="100.20.20.1"}<br>
>>>>> Port patch-int<br>
>>>>> Interface patch-int<br>
>>>>> type: patch<br>
>>>>> options: {peer=patch-tun}<br>
>>>>> Port br-tun<br>
>>>>> Interface br-tun<br>
>>>>> type: internal<br>
>>>>> Bridge br-int<br>
>>>>> Port br-int<br>
>>>>> Interface br-int<br>
>>>>> type: internal<br>
>>>>> Port patch-tun<br>
>>>>> Interface patch-tun<br>
>>>>> type: patch<br>
>>>>> options: {peer=patch-int}<br>
>>>>> ovs_version: "2.0.1"<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> ### Network node###<br>
>>>>> a40d7fc6-b0f0-4d55-98fc-c02cc7227d6c<br>
>>>>> Bridge br-ex<br>
>>>>> Port br-ex<br>
>>>>> Interface br-ex<br>
>>>>> type: internal<br>
>>>>> Bridge br-tun<br>
>>>>> Port "gre-64141402"<br>
>>>>> Interface "gre-64141402"<br>
>>>>> type: gre<br>
>>>>> options: {in_key=flow, local_ip="100.20.20.1", out_key=flow, remote_ip="100.20.20.2"}<br>
>>>>> Port patch-int<br>
>>>>> Interface patch-int<br>
>>>>> type: patch<br>
>>>>> options: {peer=patch-tun}<br>
>>>>> Port br-tun<br>
>>>>> Interface br-tun<br>
>>>>> type: internal<br>
>>>>> Bridge br-int<br>
>>>>> Port int-br-int<br>
>>>>> Interface int-br-int<br>
>>>>> Port "tapf71d7e0e-89"<br>
>>>>> tag: 4095<br>
>>>>> Interface "tapf71d7e0e-89"<br>
>>>>> type: internal<br>
>>>>> Port br-int<br>
>>>>> Interface br-int<br>
>>>>> type: internal<br>
>>>>> Port patch-tun<br>
>>>>> Interface patch-tun<br>
>>>>> type: patch<br>
>>>>> options: {peer=patch-int}<br>
>>>>> Port "qr-39bb4ba0-3d"<br>
>>>>> tag: 4095<br>
>>>>> Interface "qr-39bb4ba0-3d"<br>
>>>>> type: internal<br>
>>>>> Port phy-br-int<br>
>>>>> Interface phy-br-int<br>
>>>>> ovs_version: "2.0.1"<br>
>>>>><br>
>>>>><br>
>>>>> I see this one in my neutron DB:<br>
>>>>><br>
>>>>> neutron=# select * from ml2_port_bindings ;<br>
>>>>> port_id | host | vif_type | driver | segment | vnic_type | vif_details | profile<br>
>>>>> --------------------------------------+--------------------------+----------------+--------+---------+-----------+-------------+---------<br></div></div>
>>>>> 39bb4ba0-3d37-4ffe-9c81-073807f8971a | <a href="http://nettesting.hydranodes.de" target="_blank">nettesting.hydranodes.de</a> | binding_failed | | | normal | | {}<br>
>>>>> f71d7e0e-8955-4784-83aa-c23bf1b16f4f | <a href="http://nettesting.hydranodes.de" target="_blank">nettesting.hydranodes.de</a> | binding_failed | | | normal | | {}<div><br>
>>>>><br>
>>>>><br>
>>>>> is that maybe the problem ?<br>
>>>>><br>
>>>>> Cheers Heiko<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> On 17.06.2014 12:08, Akilesh K wrote:<br>
>>>>><br>
>>>>>> File looks good, except that the [agent]
section is not needed. Can<br>
>>>>>> you reply with some log output from
'/var/log/neutron/server.log'<br>
>>>>>> captured exactly during instance launch?<br>
>>>>>><br>
>>>>><br>
>>>>> The vif_type=binding_failed occurs when
neutron is unable to<br>
>>>>>> create a port for some reason. Either the
neutron server log or the<br>
>>>>>> plugin's log file should have some
information on why it failed in<br>
>>>>>> the first place.<br>
>>>>>><br>
>>>>><br>
>>>>><br>
>>>>> On Tue, Jun 17, 2014 at 3:07 PM, Heiko
Krämer<br></div><div>
>>>>>> <<a href="mailto:hkraemer@anynines.com" target="_blank">hkraemer@anynines.com</a>
<a href="mailto:hkraemer@anynines.com" target="_blank"><mailto:hkraemer@anynines.com></a>> wrote:<br>
>>>>>><br>
>>>>><br>
>>>>> Hi Kaya,<br>
>>>>>><br>
>>>>><br>
>>>>>
<a href="https://gist.github.com/foexle/e1f02066d6a9cff306f4" target="_blank">https://gist.github.com/foexle/e1f02066d6a9cff306f4</a><br>
>>>>>><br>
>>>>><br>
>>>>> Cheers Heiko<br>
>>>>>><br>
>>>>><br>
>>>>> On 17.06.2014 11:17, Yankai Liu wrote:<br>
>>>>>><br>
>>>>>>> Heiko,<br>
>>>>>>>>><br>
>>>>>>>>> Would you please share your
ml2_conf.ini?<br>
>>>>>>>>><br>
>>>>>>>>> Best Regards, Kaya Liu 刘艳凯
Cloud Architect, Canonical<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> On Tue, Jun 17, 2014 at 4:58
PM, Heiko Krämer<br></div><div><div>
>>>>>>>>> <<a href="mailto:hkraemer@anynines.com" target="_blank">hkraemer@anynines.com</a>
<a href="mailto:hkraemer@anynines.com" target="_blank"><mailto:hkraemer@anynines.com></a>> wrote:<br>
>>>>>>>>><br>
>>>>>>>>> Hi guys,<br>
>>>>>>>>><br>
>>>>>>>>> i'm trying to get the ml2
plugin working in Icehouse (Ubuntu<br>
>>>>>>>>> 14.04 + cloud archive
packages). I get the following every time I try<br>
>>>>>>>>> to start an instance:<br>
>>>>>>>>><br>
>>>>>>>>> 2014-06-17 08:42:01.893 25437
TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
six.reraise(self.type_,<br>
>>>>>>>>> self.value, self.tb)
2014-06-17 08:42:01.893 25437 TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
File<br>
>>>>>>>>>
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py",<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> line 1396, in
_reschedule_or_error 2014-06-17 08:42:01.893<br>
>>>><br>
>>>>> 25437 TRACE oslo.messaging.rpc.dispatcher
bdms,<br>
>>>>>>>>> requested_networks)
2014-06-17 08:42:01.893 25437 TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
File<br>
>>>>>>>>>
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py",<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> line 2125, in
_shutdown_instance 2014-06-17 08:42:01.893<br>
>>>><br>
>>>>> 25437 TRACE oslo.messaging.rpc.dispatcher<br>
>>>>>>>>> requested_networks)
2014-06-17 08:42:01.893 25437 TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
File<br>
>>>>>>>>>
"/usr/lib/python2.7/dist-packages/nova/openstack/<br>
>>>>>>>>> common/excutils.py",<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>><br>
>>>>>>>>><br>
>>>>>>>>> line 68, in __exit__<br>
>>>>><br>
>>>>>> 2014-06-17 08:42:01.893 25437 TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
six.reraise(self.type_,<br>
>>>>>>>>> self.value, self.tb)
2014-06-17 08:42:01.893 25437 TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
File<br>
>>>>>>>>>
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py",<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> line 2115, in
_shutdown_instance 2014-06-17 08:42:01.893<br>
>>>><br>
>>>>> 25437 TRACE oslo.messaging.rpc.dispatcher<br>
>>>>>>>>> block_device_info) 2014-06-17
08:42:01.893 25437 TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
File<br>
>>>>>>>>>
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py",<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>><br>
>>>>>>>>> line 953, in destroy
2014-06-17 08:42:01.893 25437 TRACE<br>
>>>><br>
>>>>> oslo.messaging.rpc.dispatcher destroy_disks)
2014-06-17<br>
>>>>>>>>> 08:42:01.893 25437 TRACE
oslo.messaging.rpc.dispatcher<br>
>>>>>>>>> File<br>
>>>>>>>>>
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py",<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>><br>
>>>>>>>>> line 989, in cleanup
2014-06-17 08:42:01.893 25437 TRACE<br>
>>>><br>
>>>>> oslo.messaging.rpc.dispatcher
self.unplug_vifs(instance,<br>
>>>>>>>>> network_info) 2014-06-17
08:42:01.893 25437 TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
File<br>
>>>>>>>>>
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py",<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>><br>
>>>>>>>>> line 860, in unplug_vifs
2014-06-17 08:42:01.893 25437 TRACE<br>
>>>><br>
>>>>> oslo.messaging.rpc.dispatcher<br>
>>>>>>>>>
self.vif_driver.unplug(instance, vif) 2014-06-17<br>
>>>>>>>>> 08:42:01.893 25437 TRACE
oslo.messaging.rpc.dispatcher<br>
>>>>>>>>> File<br>
>>>>>>>>>
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py",<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> line 798, in unplug
2014-06-17 08:42:01.893 25437 TRACE<br>
>>>><br>
>>>>> oslo.messaging.rpc.dispatcher _("Unexpected
vif_type=%s")<br>
>>>>>>>>> % vif_type) 2014-06-17
08:42:01.893 25437 TRACE<br>
>>>>>>>>> oslo.messaging.rpc.dispatcher
NovaException: Unexpected<br>
>>>>>>>>> vif_type=binding_failed
2014-06-17 08:42:01.893 25437<br>
>>>>>>>>> TRACE
oslo.messaging.rpc.dispatcher<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> So I've found a suggested solution, but
it is still not working:<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>><br>
>>>>>
<a href="https://ask.openstack.org/en/question/29518/unexpected-vif_" target="_blank">https://ask.openstack.org/en/question/29518/unexpected-vif_</a><br>
>>>> typebinding_failed/?answer=32429#post-id-32429<br>
>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>> I've checked the agent_down_time and retry
interval. All neutron<br>
>>>>><br>
>>>>>> agents are present and running when I
check the API.<br>
>>>>>>>>><br>
>>>>>>>>> ovs plugin and ml2 plugin
config are the same.<br>
>>>>>>>>><br>
>>>>>>>>> DHCP and l3 agents create
ports on openvswitch (network<br>
>>>>>>>>> host) but i get the error
(above) on compute hosts.<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> Modules are installed and
loaded:<br>
>>>>>>>>><br>
>>>>>>>>> filename: /lib/modules/3.13.0-29-generic/kernel/net/openvswitch/openvswitch.ko<br>
>>>>>>>>> license: GPL<br>
>>>>>>>>> description: Open vSwitch switching datapath<br>
>>>>>>>>> srcversion: 1CEE031973F0E4024ACC848<br>
>>>>>>>>> depends: libcrc32c,vxlan,gre<br>
>>>>>>>>> intree: Y<br>
>>>>>>>>> vermagic: 3.13.0-29-generic SMP mod_unload modversions<br>
>>>>>>>>> signer: Magrathea: Glacier signing key<br>
>>>>>>>>> sig_key: 66:02:CB:36:F1:31:3B:EA:01:C4:BD:A9:65:67:CF:A7:23:C9:70:D8<br>
>>>>>>>>> sig_hashalgo: sha512<br>
>>>><br>
>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> Nova-Config<br>
>>>>>>>>> [DEFAULT]<br>
>>>>>>>>> libvirt_type=kvm<br>
>>>>>>>>> libvirt_ovs_bridge=br-int<br>
>>>>>>>>> libvirt_vif_type=ethernet<br>
>>>>>>>>> libvirt_use_virtio_for_bridges=True<br>
>>>>>>>>> libvirt_cpu_mode=host-passthrough<br>
>>>>>>>>> disk_cachemodes="file=writeback,block=none"<br>
>>>>>>>>> running_deleted_instance_action=reep<br>
>>>>>>>>> compute_driver=libvirt.LibvirtDriver<br>
>>>>>>>>> libvirt_inject_partition = -1<br>
>>>>>>>>> libvirt_nonblocking = True<br>
>>>>>>>>> vif_plugging_is_fatal = False<br>
>>>>>>>>> vif_plugging_timeout = 0<br>
>>>>>>>>><br>
>>>>>>>>> [..]<br>
>>>>>>>>><br>
>>>>>>>>> network_api_class=nova.network.neutronv2.api.API<br>
>>>>>>>>> neutron_url=<a href="http://net.cloud.local:9696" target="_blank">http://net.cloud.local:9696</a><br>
>>>>>>>>> neutron_metadata_proxy_shared_secret = xxx<br>
>>>>>>>>> neutron_auth_strategy=keystone<br>
>>>>>>>>> neutron_admin_tenant_name=service<br>
>>>>>>>>> neutron_admin_username=keystone<br>
>>>>>>>>> neutron_admin_password=xxx<br>
>>>>>>>>> neutron_admin_auth_url=<a href="https://auth-testing.cloud.local:35357/v2.0" target="_blank">https://auth-testing.cloud.local:35357/v2.0</a><br>
>>>>>>>>> linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver<br>
>>>>>>>>> firewall_driver=nova.virt.firewall.NoopFirewallDriver<br>
>>>>>>>>> security_group_api=neutron<br>
>>>>>>>>> service_neutron_metadata_proxy=true<br>
>>>>>>>>> force_dhcp_release=True<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> Does anyone have the same
problem and has solved it?<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> Cheers and Thanks Heiko<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>>>
_______________________________________________ Mailing<br>
>>>>>>>>>> list:<br>
>>>>>>>>>>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>><br></div></div>
>>>>>>>>>> Post to :
<a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a>
<a href="mailto:openstack@lists.openstack.org" target="_blank"><mailto:openstack@lists.openstack.org></a> Unsubscribe :<div><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
</div><div><br></div><div><br>
--<br>
Anynines.com<br>
<br>
B.Sc. Informatik<br>
CIO<br>
Heiko Krämer<br>
<br>
Twitter: @anynines<br>
<br>
----<br>
Managing directors: Alexander Faißt, Dipl.-Inf. (FH) Julian Fischer<br>
Commercial register: AG Saarbrücken HRB 17413, VAT ID: DE262633168<br>
Registered office: Saarbrücken<br>
Avarteq GmbH<br>
</div></blockquote>
<span style="white-space:pre-wrap"><div><br></div>
<span><font color="#888888"><br>
-- <br>
Raphael Pereira Ribeiro<br>
Instituto de Computação - IC/UFAL<br>
Graduando em Ciências da Computação<br>
<a href="http://lattes.cnpq.br/9969641216207080" target="_blank">http://lattes.cnpq.br/9969641216207080</a></font></span></span><span><font color="#888888"><br>
<br>
-- <br>
<a href="http://anynines.com" target="_blank">anynines.com</a><br>
<br>
</font></span></div>
</blockquote></div><br></div></div></div></div>
</blockquote></div></div></div><br><br clear="all"><div class=""><div><br></div>-- <br><div><i><div style="display:inline!important"><span style="font-style:normal">Raphael Pereira Ribeiro</span></div></i></div><div><i>Instituto de Computação - IC/UFAL</i></div>
<div><i>Graduando em Ciências da Computação</i></div><div><font color="#0000EE"><u><a href="http://lattes.cnpq.br/9969641216207080" target="_blank"><i>http://lattes.cnpq.br/9969641216207080</i></a></u></font></div>
</div></div></div>
</blockquote></div><br><br clear="all">
</div>