[Openstack] [quantum] Relationship between br-int and physical bridge mapping in OVS plugin

Vinay Bannai vbannai at gmail.com
Mon Nov 5 18:05:16 UTC 2012


Yes, that makes sense. I was not thinking about multiple physical nics in
the provider network space.

I am trying to get a better understanding of how the vifs plugged into
br-int and the bridge providing external connectivity interact.
The quantum vif plug-in will do the work of configuring the external
bridge path (whether the network type is vlan or gre).
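To make this concrete, here is roughly the wiring I understand the OVS plugin agent to set up. The bridge and NIC names come from my own config (physnet1 -> br-eth2, eth2), and the int-br-eth2/phy-br-eth2 veth pair names are my assumption about the plugin defaults, so treat this as a sketch rather than the exact commands the agent runs:

```shell
# Sketch of the bridge wiring the OVS plugin agent creates (assumed names:
# physnet1 maps to br-eth2 in my config; veth pair names are assumptions).
ovs-vsctl add-br br-int            # integration bridge; all vNICs plug in here
ovs-vsctl add-br br-eth2           # physical bridge for physnet1
ovs-vsctl add-port br-eth2 eth2    # the physical NIC is a member of br-eth2

# The agent then links the two bridges with a veth pair, something like:
#   br-int:int-br-eth2  <-->  br-eth2:phy-br-eth2
# and rewrites the local VLAN tag to the provider VLAN tag as frames cross.
```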

In my two node setup (controller node and compute node), I am able to get
all my VMs instantiated properly. The controller also has n-cpu running, so
I get an even distribution of the VMs between the controller node and the
compute node. The DHCP IP addresses are allocated fine. However, when I try
to VNC or ping the VMs, I can only reach the VMs on the controller node; I
am not able to ping the instances created on the compute node. Since all
the instances I cannot reach are on the compute node, I suspect there is a
problem in my physnet connectivity there.

I have vlan networking enabled, and here is a snippet of my
ovs_quantum_plugin.ini file on the compute and controller nodes. Any
pointers on where to look?

[OVS]
bridge_mappings = physnet1:br-eth2
network_vlan_ranges = physnet1:1:4094
tenant_network_type = vlan
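On the compute node where the instances are unreachable, these are the checks I would run. This is a hypothetical diagnostic session: it assumes the agent created an int-br-eth2/phy-br-eth2 veth pair for the physnet1:br-eth2 mapping, so adjust the names to whatever `ovs-vsctl show` actually reports:

```shell
# On the compute node: verify bridge wiring and watch for tagged traffic.
ovs-vsctl show                # br-eth2 should contain eth2 plus the veth end;
                              # br-int should contain the other end and VM taps
ip link show int-br-eth2      # both ends of the veth pair must be UP
ip link show phy-br-eth2
ovs-ofctl dump-flows br-eth2  # look for flows rewriting local VLAN -> provider VLAN
tcpdump -e -n -i eth2 vlan    # do tagged frames actually leave on the wire?

# Also confirm the switch ports facing eth2 on both nodes trunk VLANs 1-4094.
```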


Thanks
Vinay
On Sun, Nov 4, 2012 at 10:44 PM, Dan Wendlandt <dan at nicira.com> wrote:

>
>
>
> On Sun, Nov 4, 2012 at 9:57 PM, Vinay Bannai <vbannai at gmail.com> wrote:
>
>> I have a multi node setup. The CC controller doubles up as the quantum
>> server and also runs the l3 agent and DHCP agent. I have configured OVS
>> as my L2 plugin with vlan networking. On the compute nodes, I see that
>> in addition to the integration bridge (br-int) you also need the OVS
>> physical bridge (br-eth1) with the physical ethernet port eth1 as a
>> member. I am wondering about the relationship between the br-int and
>> br-eth1 bridges. Wouldn't it make sense to add the eth1 port to the
>> integration bridge?
>
>
> You might have quantum networks that use vlans on different physical
> NICs (e.g., eth0 and eth1), so adding each NIC directly to br-int
> wouldn't make sense.  Similarly, you might have some quantum networks
> that also use tunneling.  Hence, all vNICs are just plugged into br-int,
> and the plugin is responsible for doing the right thing with the traffic.
>
> Dan
>
>
>
>
>> Why have two bridges on the compute node for VMs to
>> talk to other VMs in the same tenancy over the physical network?
>> I am sure I am missing something in my understanding, so I would
>> appreciate any comments or explanations.
>>
>> Thanks
>> Vinay
>>
>>
>
>
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>


-- 
Vinay Bannai
Email: vbannai at gmail.com
Google Voice: 415 938 7576