[openstack-dev] [Neutron] [Nova] libvirt+Xen+OVS VLAN networking in icehouse

iain macdonnell openstack-dev at dseven.org
Fri Mar 14 17:42:09 UTC 2014


Hi Simon,

Thank you! One of those bugs helped me focus on an area of the (nova)
code that I hadn't found my way to yet. By setting firewall_driver in
nova.conf to nova.virt.libvirt.firewall.IptablesFirewallDriver instead
of nova.virt.firewall.NoopFirewallDriver, I was able to at least get
back the functionality I had with Havana (the hybrid OVS +
ethernet-bridge model) - now I can launch an instance with functional
networking again.
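
For reference, the change amounts to something like this (section
placement as in the stock packaging of that era - adjust to your
deployment):

```ini
[DEFAULT]
# Hybrid model: per-VIF linux bridge, where iptables rules can be applied
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# Previously (VIF plugged straight into br-int, no iptables filtering):
# firewall_driver = nova.virt.firewall.NoopFirewallDriver
```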

I'd still like to understand how the OVS port is supposed to get set up
when the non-hybrid model is used, and to eliminate the ethernet bridge
if possible. I'll dig into that a bit more...
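
For context, the agent behavior I described in my original mail (ports
lacking Neutron's "external_ids" being ignored, so the VLAN tag is never
applied) boils down to roughly this - a sketch only, not the actual
agent code, with illustrative names and a made-up iface-id:

```python
# Sketch of the OVS agent's port scan: only interfaces whose OVS
# external_ids carry a Neutron "iface-id" are treated as Neutron ports
# and get their VLAN tag configured; anything else is skipped.

def ports_to_configure(interfaces):
    """Return the names of interfaces the agent would configure."""
    return [i["name"] for i in interfaces
            if i.get("external_ids", {}).get("iface-id")]

# The Xen-plugged VIF has empty external_ids, so it is skipped and never
# receives a tag; the router port (set up by Neutron) is kept.
interfaces = [
    {"name": "tap43b9d367-32", "external_ids": {}},
    {"name": "qr-cac87198-df",
     "external_ids": {"iface-id": "cac87198-0000-0000-0000-000000000000"}},
]
```

That matches what I observed with 'ovs-vsctl list Interface' - the
Xen-added port simply has no external_ids for the agent to key on.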

    ~iain



On Fri, Mar 14, 2014 at 3:01 AM, Simon Pasquier <simon.pasquier at bull.net> wrote:
> Hi,
>
> I've played a little with XenAPI + OVS. You might be interested in this
> bug report [1], which describes a related problem I've seen in this
> configuration. I'm not sure about Xen libvirt though. My assumption is
> that the future-proof solution for using Xen with OpenStack is the
> XenAPI driver but someone from Citrix (Bob?) may confirm.
>
> Note also that the security groups are currently broken with libvirt +
> OVS. As you noted, the iptables rules are applied directly to the OVS
> port thus they are not effective (see [2] for details). There's work in
> progress [3][4] to fix this critical issue. As far as the XenAPI driver
> is concerned, there is another bug [5] tracking the lack of support for
> security groups which should be addressed by the OVS firewall driver [6].
>
> HTH,
>
> Simon
>
> [1] https://bugs.launchpad.net/neutron/+bug/1268955
> [2] https://bugs.launchpad.net/nova/+bug/1112912
> [3] https://review.openstack.org/21946
> [4] https://review.openstack.org/44596
> [5] https://bugs.launchpad.net/neutron/+bug/1245809
> [6] https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver
>
> On 13/03/2014 19:35, iain macdonnell wrote:
>> I've been playing with an icehouse build grabbed from fedorapeople. My
>> hypervisor platform is libvirt-xen, which I understand may be
>> deprecated for icehouse(?) but I'm stuck with it for now, and I'm
>> using VLAN networking. It almost works, but I have a problem with
>> networking. In havana, the VIF gets placed on a legacy ethernet
>> bridge, and a veth pair connects that to the OVS integration bridge.
>> I understand that this was done to enable iptables filtering at the
>> VIF. In icehouse, the VIF appears to get placed directly on the
>> integration bridge - i.e. the libvirt XML includes something like:
>>
>>     <interface type='bridge'>
>>       <mac address='fa:16:3e:e7:1e:c3'/>
>>       <source bridge='br-int'/>
>>       <script path='vif-bridge'/>
>>       <target dev='tap43b9d367-32'/>
>>     </interface>
>>
>>
>> The problem is that the port on br-int does not have the VLAN tag.
>> i.e. I'll see something like:
>>
>>     Bridge br-int
>>         Port "tap43b9d367-32"
>>             Interface "tap43b9d367-32"
>>         Port "qr-cac87198-df"
>>             tag: 1
>>             Interface "qr-cac87198-df"
>>                 type: internal
>>         Port "int-br-bond0"
>>             Interface "int-br-bond0"
>>         Port br-int
>>             Interface br-int
>>                 type: internal
>>         Port "tapb8096c18-cf"
>>             tag: 1
>>             Interface "tapb8096c18-cf"
>>                 type: internal
>>
>>
>> If I manually set the tag using 'ovs-vsctl set port tap43b9d367-32
>> tag=1', traffic starts flowing where it needs to.
>>
>> I've traced this back a bit through the agent code, and find that the
>> bridge port is ignored by the agent because it does not have any
>> "external_ids" (observed with 'ovs-vsctl list Interface'), and so the
>> update process that normally sets the tag is not invoked. It appears
>> that Xen is adding the port to the bridge, but nothing is updating it
>> with the neutron-specific "external_ids" that the agent expects to
>> see.
>>
>> Before I dig any further, I thought I'd ask: is this stuff supposed to
>> work at this point? Is it intentional that the VIF is getting placed
>> directly on the integration bridge now? Might I be missing something
>> in my configuration?
>>
>> FWIW, I've tried the ML2 plugin as well as the legacy OVS one, with
>> the same result.
>>
>> TIA,
>>
>>     ~iain
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>


