[openstack-dev] [Networking] Updated OVS+XS/XCP patches

Maru Newby marun at redhat.com
Wed May 29 06:36:26 UTC 2013


On May 28, 2013, at 9:37 AM, Mate Lakat <mate.lakat at citrix.com> wrote:

> Hi,
> 
> See my comments inline.
> 
> On Sun, May 26, 2013 at 12:11:09AM -0700, Maru Newby wrote:
>> On second thought, please disregard my previous response.  It's not my
>> responsibility to choose an implementation strategy for supporting
>> Quantum/OVS on XCP.  That's Citrix's - that is, Mate's - role.  If the preferred
> I don't want it to be seen like that. It would be better to discuss
> the solutions and pick the best one together.

If there were an outstanding technical reason to choose one solution over another, I might agree.  But upon reflection, I don't think there is a compelling reason to prefer either solution, so we should go with whichever one you're more comfortable supporting.


>> way of supporting DHCP on a compute node is using separate l2 agents
>> to manage the domU and dom0 bridges, I'm fine with that.  I don't
>> think this is any less complex than the single agent solution - the
>> complexity is simply relegated to system configuration (2 agents and
>> vlan translation between the 2 bridges) instead of existing in code -
> Are you saying that VLAN sync is needed if we pick the configuration I
> suggested? I was assuming that quantum takes care of that if we are
> using physical networks. If the VLANs need to be synced manually, my
> proposal is clearly not enough.

Have no fear, quantum will do the translation.  Each of the integration bridges (one for each agent) will automatically translate between the local and system-wide VLAN tags.
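
For the curious, the translation is just a pair of OpenFlow rules per network at the edge of each bridge.  Here's a rough sketch of the kind of flows the OVS agent installs for a VLAN network (the bridge names, port numbers, and VLAN IDs are invented for illustration):

    # Physical bridge: traffic arriving from the integration bridge has its
    # agent-local tag (1) rewritten to the system-wide provider tag (1000).
    ovs-ofctl add-flow br-eth1 "in_port=2,dl_vlan=1,actions=mod_vlan_vid:1000,normal"

    # Integration bridge: inbound traffic gets the reverse rewrite,
    # provider tag 1000 back to local tag 1.
    ovs-ofctl add-flow br-int "in_port=3,dl_vlan=1000,actions=mod_vlan_vid:1,normal"

Since each agent installs its own pair of rules, neither agent ever needs to know the local tags chosen by the other.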


m.

>> but it's clearly a case of tomEHto/tomAHto.  
>> 
>> And all my protestation about HA is for naught - the configuration for
>> running DHCP on a given compute node will be the same for single-host
>> or multi-host deployments.
>> 
>> So I'm going to abandon the DHCP patch, and leave it to Mate to update
>> devstack to support the multiple l2 agent configuration.
>> 
>> 
>> m.
>> 
>> 
> Cheers,
> Mate
>> On May 25, 2013, at 11:47 PM, Maru Newby <marun at redhat.com> wrote:
>> 
>>> 
>>> On May 23, 2013, at 2:16 AM, John Garbutt <john at johngarbutt.com> wrote:
>>> 
>>>> Thinking back to the discussions in San Diego, I believe everyone
>>>> agreed on the L2 plugin, so that's great:
>>>> https://review.openstack.org/#/c/15022/
>>>> 
>>>> For DHCP (ignoring the nova-network style HA) you can run the DHCP
>>>> agent on a physically separate box (or different VM), and it should
>>>> work OK. Do correct me if that is wrong. This means we would have
>>>> Quantum support in XCP/XenServer using OVS (it already works with
>>>> NVP).
>>> 
>>> That was always the case, yes.  The L2 patch is all that's needed to support a multi-host solution that doesn't run DHCP in the compute node domU.
>>> 
>>>> 
>>>> Now for nova-style network HA, I fear I am out of the loop on these
>>>> plans. I am tempted to say we should only worry about that once it
>>>> lands.
>>> 
>>> Since the patch currently under review already does worry about this, and has been tested to work, what would be the value of devolving to a less functional approach?
>>> 
>>>> 
>>>> This leaves single-box deployments of Quantum. I am good with two
>>>> agents running (Mate's plan). It is similar to how, in the XenAPI
>>>> nova-network VLAN world, we created those bridges/rules in domU.
>>>> Performance was not great, but it was good enough for testing needs.
>>>> However, as Maru points out, with multiple DHCP agents his direction
>>>> may be the only way forward.
>>> 
>>> The patch currently under review uses a single agent for simplicity - a single l2 agent can use consistent local VLAN tags across the dom0 and domU bridges, so the two can be trivially joined with a trunking link.  The l2 agent stores the association of local VLAN tags to tenant networks in memory, so Mate's plan to use two agents would require either changing this semantic or adding VLAN translation between the two bridges (since the local tags used by the agents could vary).  Neither approach is as straightforward as what I've proposed, and both carry the penalty of having to be written and tested, where the current patch is already complete.
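>>> 
>>> To make "stores the association in memory" concrete: if you asked each agent's bridge what tag a given tenant network's ports carry, you might see something like this (made-up port names and output):
>>> 
>>>     # domU agent's integration bridge
>>>     $ sudo ovs-vsctl get Port tap1234abcd tag
>>>     1
>>>     # dom0 agent's bridge, same tenant network
>>>     $ sudo ovs-vsctl get Port vif5.0 tag
>>>     3
>>> 
>>> The tags differ because each agent allocates local tags from its own in-memory map, in whatever order networks happen to show up, so a trunk between the two bridges would mix tags unless something translated between them.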
>>> 
>>> While I stand behind the technical approach of the patch, its documentation is clearly insufficient (as evidenced by the confusion in this thread).  Mate - if you have the cycles and mandate to pursue alternative approaches, might I suggest that your energy instead be directed towards improving said documentation?  If anything is still unclear, let's schedule an IRC conversation and make sure you have everything you need to support this going forward.
>>> 
>>> 
>>> m.
>>> 
>>> 
>>> 
>>>> John
>>>> 
>>>> On 22 May 2013 20:13, Maru Newby <marun at redhat.com> wrote:
>>>>> 
>>>>> On May 22, 2013, at 10:54 AM, Mate Lakat <mate.lakat at citrix.com> wrote:
>>>>> 
>>>>>> Hi All,
>>>>>> 
>>>>>> I successfully tested the first patch:
>>>>>> https://review.openstack.org/#/c/15022/
>>>>>> and it would be really good to have it approved!
>>>>>> 
>>>>>> I think the second one:
>>>>>> https://review.openstack.org/#/c/15023
>>>>>> is not required. Having one agent responsible for two openvswitch
>>>>>> instances is not good, and that change is too specific to the
>>>>>> all-in-one deployment scenario, which is mainly used for dev/test.
>>>>> 
>>>>> 1. The main thing the l2 agent does to a local bridge is tag the ports with local VLANs to provide tenant isolation.  There is no reason a given agent can't tag ports on multiple bridges, and as the submitted patch demonstrates, the added complexity is minimal.
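>>>>> 
>>>>> For reference, that tagging amounts to setting the tag column on the port in ovsdb, which the agent can do on any bridge it manages.  With a made-up port name and tag, it is something like:
>>>>> 
>>>>>     ovs-vsctl set Port tap1234abcd tag=1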
>>>>> 
>>>>> 2.  The change may appear specific to an all-in-one deployment scenario, but there is a plan to support nova-style network HA for DHCP by running a DHCP agent on each compute node, and the proposed patch would support this.
>>>>> 
>>>>> I'm not sure I see the value of the alternate solution you propose - am I missing something?
>>>>> 
>>>>> 
>>>>> m.
>>>>> 
>>>>> 
>>>>>> 
>>>>>> My idea is to slightly modify devstack instead, to end up with the
>>>>>> following (see the config sketch after this list):
>>>>>> - One agent each for the dom0 and the domU openvswitch instances,
>>>>>> both running in domU, using different root wrappers and different
>>>>>> configurations (requires the #15022 patch).
>>>>>> - Another XenServer network - a new ovs bridge - used for connecting
>>>>>> VMs; do not add any domU interfaces to that bridge.
>>>>>> - domU's eth1 network used as a "physical" network. In domU, create a
>>>>>> bridge, connect eth1 to it, and name the bridge br-eth1. Specify
>>>>>> br-eth1 in bridge_mappings for the domU agent. In dom0's agent
>>>>>> config, point bridge_mappings at the bridge of the network where
>>>>>> domU's eth1 is plugged in. Call this network physnet1.
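>>>>>> 
>>>>>> Roughly, it could look like this (the file name and the dom0-side
>>>>>> names are just placeholders for whatever the final setup uses):
>>>>>> 
>>>>>>     # In domU: create the "physical" bridge and plug eth1 into it.
>>>>>>     ovs-vsctl add-br br-eth1
>>>>>>     ovs-vsctl add-port br-eth1 eth1
>>>>>> 
>>>>>>     # domU agent config (e.g. ovs_quantum_plugin.ini):
>>>>>>     [OVS]
>>>>>>     integration_bridge = br-int
>>>>>>     bridge_mappings = physnet1:br-eth1
>>>>>> 
>>>>>>     # dom0 agent: a second config in the same format, using the dom0
>>>>>>     # root wrapper, where bridge_mappings points at the dom0 bridge
>>>>>>     # that domU's eth1 is plugged into (names are hypothetical):
>>>>>>     [OVS]
>>>>>>     integration_bridge = br-int-dom0
>>>>>>     bridge_mappings = physnet1:xapi1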
>>>>>> 
>>>>>> I made a drawing here:
>>>>>> 
>>>>>> https://raw.github.com/matelakat/shared/xs-q-v1/xenserver-quantum/deployment.png
>>>>>> 
>>>>>> I am not a quantum expert, so let me know your ideas.
>>>>>> 
>>>>>> If any cores have some spare time, could you please review
>>>>>> https://review.openstack.org/#/c/15022/ so that we can start using Quantum
>>>>>> with XenServer/XCP?
>>>>>> 
>>>>>> Mate
>>>>>> 
>>>>>> On Wed, May 01, 2013 at 11:46:36PM +0100, Maru Newby wrote:
>>>>>>> I've finally updated the patches necessary to get the OVS plugin working on XS/XCP:
>>>>>>> 
>>>>>>> https://review.openstack.org/#/c/15022/
>>>>>>> https://review.openstack.org/#/c/15023
>>>>>>> 
>>>>>>> There's also a corresponding devstack patch to ensure the dom0 rootwrap is properly configured:
>>>>>>> 
>>>>>>> https://review.openstack.org/#/c/27982/
>>>>>>> 
>>>>>>> I've updated the config doc if anyone wants to test things out for real:
>>>>>>> 
>>>>>>> http://wiki.openstack.org/QuantumDevstackOvsXcp
>>>>>>> 
>>>>>>> Reviewer love appreciated - especially if you are a Xen specialist (I'm looking at you, current and former Citrix employees!).  It would be great to finally see this merged into master!
>>>>>>> 
>>>>>>> Cheers,
>>>>>>> 
>>>>>>> 
>>>>>>> Maru
>>>>>> 
>>>>>> --
>>>>>> Mate Lakat
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> -- 
> Mate Lakat
> 



