Hi Uwe, <div><br></div><div>Actually, I started directly with neutron; I never used legacy networking. </div><div><br></div><div>But I suppose there's old config lying around. I still think that bridge_mappings is needed for the VLAN configuration I use. Every guide describes a GRE configuration, but I still feel more comfortable using VLANs.</div><div><br></div><div>Any other help with this issue? Can someone confirm whether I can get rid of the directives in both configs?</div><div><br></div><div>I suppose I cannot, because they take effect when the OVS plugin starts.</div><div><br></div><div>Thank you in advance.</div><div><br></div><div><br><br>On Sun, Dec 14, 2014 at 8:40, Uwe Sauter <uwe.sauter.de@gmail.com> wrote:<br>
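For context, this is roughly how a VLAN-based ML2 setup using these mappings could look; the physical network names (default, extnet1) and bridges come from the configs quoted below, but the VLAN range is only an illustrative assumption, not from this thread:

```ini
# ml2_conf.ini -- a minimal sketch of a VLAN setup (VLAN range is assumed)
[ml2]
type_drivers = vlan
tenant_network_types = vlan

[ml2_type_vlan]
# physnet name -> VLAN ID range; 100:199 is only an example
network_vlan_ranges = default:100:199

[ovs]
# physnet name -> OVS bridge that has the physical NIC attached
bridge_mappings = default:br0,extnet1:br-ex
```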
<blockquote type="cite"><div class="plaintext" style="white-space: pre-wrap;">Hi,
I presume that you upgraded from an older version that used nova-network
(now called legacy networking).
Using neutron means that VMs aren't connected to br0 directly any more
as there is a whole virtual networking infrastructure in place.
To give a small overview:
On a compute node, a VM connects to br-int (the integration bridge). This
bridge is in turn connected through a virtual cable to br-tun (the
tunneling bridge). That bridge also has a physical interface assigned that
allows traffic to flow to the network node.
On the network node there is also a br-tun with a physical interface
attached; traffic enters the node through this interface. br-tun is
virtually connected to br-ex, which has a separate physical interface
attached that connects to "the outside", meaning the networking
infrastructure outside your cloud.
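The layout described above is wired up by the OVS agent configuration. A minimal sketch for a tunnel-based setup of that era might look like this; the section name, option names, and especially the IP address are assumptions for illustration, not taken from this thread:

```ini
# ovs_neutron_plugin.ini -- sketch of a GRE/tunnel setup (values are examples)
[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
# IP of the interface carrying tunnel traffic on this node (placeholder)
local_ip = 192.0.2.10
```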
I cannot help you with the configuration issue but recommend that you
familiarize yourself with neutron.
Regards,
Uwe
Am 14.12.2014 um 19:36 schrieb Gonzalo Aguilar Delgado:
<blockquote> Hi all,
I'm installing a new compute node from scratch and reviewing all the old
config. I've found two settings that seem identical, one in the ml2 plugin
config and one in the openvswitch config, but I don't really understand
why they are there.
ovs_neutron_plugin.ini:
bridge_mappings = default:br0,extnet1:br-ex
ml2/ml2_conf.ini:
[ovs]
bridge_mappings = default:br0,extnet1:br-ex
It seems strange to me that the setting is in both places. I think this is
a result of upgrading without taking much care to remove old config.
It is also strange that everything works with the bridges br0 and br-ex
even though they have no physical interface attached. They seem to do
nothing, yet they need to be there.
I would also expect the VMs to be attached to br0 (default), but they are
not; they are attached to br-int (the integration bridge), which seems
correct to me, since it is described like this here:
<a href="https://openstack.redhat.com/Networking_in_too_much_detail">https://openstack.redhat.com/Networking_in_too_much_detail</a>
And it works fine.
So what's the purpose of these bridges?
The versions here are:
neutron 2.3.4
nova 2.17.0
Best regards,
_______________________________________________
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a>
Post to : openstack@lists.openstack.org
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a>
</blockquote>
</div></blockquote></div>