[Openstack] [quantum] Relationship between br-int and physical bridge mapping in OVS plugin

Aniruddha Khadkikar askhadkikar at gmail.com
Tue Nov 6 15:02:47 UTC 2012


On Tue, Nov 6, 2012 at 1:02 AM, Vinay Bannai <vbannai at gmail.com> wrote:
> Thanks Dennis.
>
> I don't have a switch in between the two nodes, so I don't have the default
> native VLAN issue. The two nodes are connected back to back.
>
> I managed to console into the VMs on the compute node and saw that they
> don't have an IP address. The VMs on the controller node (also where the
> DHCP agent and quantum server are located) do get their IP addresses. To
> test the hypothesis, I created a separate network with the command you
> mention below for the "demo" tenant and was able to spawn the VMs. I see
> the same problem: the VMs on the compute node do not get an IP address
> assigned. Could anyone who has been able to set up the VLAN tunnels
> successfully share their config files?

We hit a similar issue, which is still not resolved. We are using GRE
tunneling. What got the VMs to receive their allotted IP addresses
properly was assigning an IP address to the integration bridge
(i.e. br-int). After a week's effort we had all the elements working,
including floating IP allocation. Then we rebooted the host servers.
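
For reference, the workaround amounted to something like the following (a
sketch from memory; the address and prefix below are examples from our
particular setup, not a general recommendation):

```shell
# Make sure the integration bridge exists (the OVS agent normally
# creates it), then give it an address so the host can reach the
# tenant network. 10.0.0.1/24 is a placeholder from our lab setup.
ovs-vsctl br-exists br-int || ovs-vsctl add-br br-int
ip addr add 10.0.0.1/24 dev br-int
ip link set br-int up
```

Note that an address added this way lives only in the running config; it is
gone after a reboot unless it is also in the distro's network configuration.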

And kaput. Nothing works now, and unfortunately we have been unable to
figure out why. We are wondering what could change after a server
reboot that would so severely compromise the networking. Now we're
redoing the setup step by step.
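
In case it helps anyone debugging a similar post-reboot breakage, these are
the kinds of checks we are walking through (standard OVS commands; the
service name is as packaged on Ubuntu and may differ on your distro):

```shell
# Check that the bridges and their ports survived the reboot.
ovs-vsctl show

# Dump the OpenFlow rules on the integration bridge. The quantum OVS
# agent must repopulate these after a reboot, so a near-empty flow
# table here often explains broken tenant networking.
ovs-ofctl dump-flows br-int

# Any IP address that was added to br-int by hand is not persistent
# across reboots, so re-check it.
ip addr show br-int

# Make sure the OVS agent actually came back up.
service quantum-plugin-openvswitch-agent status
```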

I would say that for us it's more a matter of our understanding of Open vSwitch than of Quantum.
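
For comparison, here is roughly what the GRE side of our
ovs_quantum_plugin.ini looks like (a sketch; the tunnel ID range and the
local_ip are placeholders — local_ip must be each host's own tunnel
endpoint address):

```ini
[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.1.10
```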

Cheers
Aniruddha

>
> Thanks
> Vinay
>
>
> On Mon, Nov 5, 2012 at 10:37 AM, Qin, Xiaohong <Xiaohong.Qin at emc.com> wrote:
>>
>> Hi Vinay,
>>
>>
>>
>> I have sent the following email out a while ago,
>>
>>
>>
>> -------
>>
>> In the following quantum command,
>>
>>
>>
>> quantum net-create --tenant-id $TENANT_ID net1 --provider:network_type
>> vlan --provider:physical_network physnet1 --provider:segmentation_id 1024
>>
>>
>>
>> provider:segmentation_id is actually a VLAN ID which is used in the
>> network between the controller and compute nodes. This same VLAN ID is also
>> configured on the physical switch that interconnects the controller and
>> compute nodes. Try to avoid using VLAN ID 1, since some physical switches
>> do not forward VLAN 1 frames by default.
>>
>> ------
>>
>>
>>
>> This is the problem we were running into when setting up multi node
>> deployment. Not sure if it helps or not.
>>
>>
>>
>> Dennis Qin
>>
>>
>>
>>
>>
>> From: openstack-bounces+xiaohong.qin=emc.com at lists.launchpad.net
>> [mailto:openstack-bounces+xiaohong.qin=emc.com at lists.launchpad.net] On
>> Behalf Of Vinay Bannai
>> Sent: Monday, November 05, 2012 10:05 AM
>> To: Dan Wendlandt
>> Cc: Openstack
>> Subject: Re: [Openstack] [quantum] Relationship between br-int and
>> physical bridge mapping in OVS plugin
>>
>>
>>
>> Yes, that makes sense. I was not thinking about multiple physical nics in
>> the provider network space.
>>
>>
>>
>> I am trying to get a better understanding of how the VIFs plugged into
>> br-int and the bridge providing external connectivity interact.
>>
>> The quantum VIF plug-in will do the work to configure br-ext for
>> tunneling (whether it is VLAN or GRE).
>>
>>
>>
>> In my two-node setup (controller node and compute node), I am able to get
>> all my VMs instantiated properly. The controller also has n-cpu running, so
>> I get an even distribution of the VMs between the controller node and the
>> compute node. The DHCP IP addresses are allocated fine. However, when I try
>> to VNC or ping the VMs, I am only able to get to the VMs on the controller
>> node. I am not able to ping the instances created on the compute node. I am
>> thinking there is a problem in my physnet connectivity, as the instances I
>> am not able to reach are on the compute node.
>>
>> I have VLAN tunneling enabled, and here is a snippet of the
>> ovs_quantum_plugin.ini file on the compute and controller nodes. Any
>> pointers on where to look?
>>
>>
>>
>> [OVS]
>>
>> bridge_mappings = physnet1:br-eth2
>>
>> network_vlan_ranges = physnet1:1:4094
>>
>> tenant_network_type = vlan
>>
>>
>>
>>
>>
>> Thanks
>>
>> Vinay
>>
>> On Sun, Nov 4, 2012 at 10:44 PM, Dan Wendlandt <dan at nicira.com> wrote:
>>
>>
>>
>>
>>
>> On Sun, Nov 4, 2012 at 9:57 PM, Vinay Bannai <vbannai at gmail.com> wrote:
>>
>> I have a multi-node setup. The CC controller doubles up as the quantum
>> server and also runs the L3 agent and DHCP agent. I have configured OVS as
>> my L2 plugin with VLAN tunneling. On the compute nodes, I see that in
>> addition to having the integration bridge (br-int) you will also need
>> the OVS physical bridge (br-eth1) with the physical Ethernet port eth1 as
>> a member. I am wondering about the relationship between the br-int and
>> br-eth1 bridges. Wouldn't it make sense to add the eth1 port to the
>> integration bridge?
>>
>>
>>
>> You might have quantum networks that use VLANs on different
>> physical NICs (e.g., eth0 and eth1), so adding each NIC directly to br-int
>> wouldn't make sense.  Similarly, you might have some quantum networks that
>> also use tunneling.  Hence, all vNICs are just plugged into br-int, and the
>> plugin is responsible for doing the right thing with the traffic.
>>
>>
>>
>> Dan
>>
>>
>>
>>
>>
>>
>>
>> Why have two bridges on the compute node for VMs to
>> talk to other VMs in the same tenancy over the physical network?
>> I am sure I am missing something in my understanding so would
>> appreciate any comments or explanations.
>>
>> Thanks
>> Vinay
>>
>> _______________________________________________
>> Mailing list: https://launchpad.net/~openstack
>> Post to     : openstack at lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>>
>>
>>
>> --
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> Dan Wendlandt
>>
>> Nicira, Inc: www.nicira.com
>>
>> twitter: danwendlandt
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>>
>>
>>
>>
>>
>>
>> --
>> Vinay Bannai
>> Email: vbannai at gmail.com
>> Google Voice: 415 938 7576
>
>
>
>
> --
> Vinay Bannai
> Email: vbannai at gmail.com
> Google Voice: 415 938 7576
>
>
>
