[openstack-dev] [Neutron][ml2][Manila] API to query segments used during port binding

Koderer, Marc marc at koderer.com
Fri Mar 11 07:23:30 UTC 2016


Hi folks,

I had a deep dive session with Bob (thx Bob).

We have a plan to solve the issue without any API changes or
Manila driver rework.

The process will look like the following:

1.) In case of multi-segment/HPB Manila creates a port as Ironic ML2
    would do it [1] (a rough creation sketch follows the steps below):
      vnic_type = baremetal
      binding:profile = list of switch ports
      device_owner = manila:ID

2.) Manila waits until all ports are actively bound
2.a) Neutron binds the port through the segments
2.b) A manila-neutron mech driver (a really simple one) completes the
     binding and sets:
       vif_details = {"share_segmentation_id": XX,
                      "share_network_type": YY}

3.) When the port becomes active, Manila has all it needs to proceed
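
Here is a rough sketch of the port creation in step 1, assuming
python-neutronclient and keystoneauth1. The binding profile keys follow
the Ironic ML2 spec [1]; all IDs, credentials and switch values below
are placeholders:

    # Rough sketch only -- not the final Manila code; all values are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1.session import Session
    from neutronclient.v2_0 import client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret', project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    neutron = client.Client(session=Session(auth=auth))

    share_network_id = 'NETWORK-UUID'      # neutron network of the share network
    share_server_id = 'SHARE-SERVER-UUID'  # tags the port for the manila mech driver

    # Step 1: create the port the way Ironic ML2 does, with the ToR info
    # of the storage box in the binding profile.
    port = neutron.create_port({
        'port': {
            'network_id': share_network_id,
            'binding:vnic_type': 'baremetal',
            'binding:profile': {
                'local_link_information': [
                    {'switch_id': 'aa:bb:cc:dd:ee:ff',   # ToR switch MAC
                     'port_id': 'Eth1/10'},              # switch port facing the storage
                ],
            },
            'device_owner': 'manila:' + share_server_id,
        },
    })['port']

    # Step 3: once the port is ACTIVE, read back what the mech driver set.
    details = neutron.show_port(port['id'])['port']['binding:vif_details']
    segmentation_id = details['share_segmentation_id']
    network_type = details['share_network_type']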

In 2.b the Manila mech driver only looks for ports whose device_owner
starts with "manila:", sets the needed details, and completes the
binding (~10 LOC).
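
For illustration, here is a minimal sketch of that mech driver (module
paths and constants are assumed from the in-tree ML2 driver API as of
Mitaka; this is not the final code):

    # Rough sketch only -- not the merged implementation.
    from neutron.extensions import portbindings
    from neutron.plugins.ml2 import driver_api as api

    MANILA_OWNER_PREFIX = 'manila:'

    class ManilaMechanismDriver(api.MechanismDriver):
        """Completes the binding for ports created by Manila."""

        def initialize(self):
            pass

        def bind_port(self, context):
            port = context.current
            if not port.get('device_owner', '').startswith(MANILA_OWNER_PREFIX):
                return  # not a Manila port, leave it to the other drivers
            # Bind the first segment offered at this level and expose its
            # details to Manila via vif_details.
            for segment in context.segments_to_bind:
                vif_details = {
                    'share_segmentation_id': segment[api.SEGMENTATION_ID],
                    'share_network_type': segment[api.NETWORK_TYPE],
                }
                context.set_binding(segment[api.ID],
                                    portbindings.VIF_TYPE_OTHER,
                                    vif_details,
                                    status='ACTIVE')
                return

With HPB the ToR driver calls continue_binding() first, so by the time
this driver completes the binding, segments_to_bind should already
contain the dynamically allocated bottom segment.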

We also discussed using an L2 gateway [2] or trunk ports [3].
I consider the design above simple enough to get merged smoothly
in Newton.

@Ben: Will be back for bug hunting now.

Regards
Marc

[1]: https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html
[2]: https://wiki.openstack.org/wiki/Neutron/L2-GW
[3]: https://wiki.openstack.org/wiki/Neutron/TrunkPort


> On 09 Mar 2016, at 09:25, Koderer, Marc <marc at koderer.com> wrote:
> 
>> 
>> On 09 Mar 2016, at 08:43, Sukhdev Kapur <sukhdevkapur at gmail.com> wrote:
>> 
>> Hi Marc, 
>> 
>> I am driving the ironic-ml2 integration and introduced the baremetal type for vnic_type. 
> 
> Basically that’s my plan. So in my current implementation
> I use the baremetal vnic_type [1] and add a binding profile [2].
> 
>> You can very much use the same integration here - however, I am not completely clear about your use case. 
>> Do you want neutron/ML2 to plumb the network for Manila, or do you want to find out what VLAN (segmentation ID) is used on the segment which connects the ToR to the storage device? 
> 
> Generally I want to find the best architecture for this feature :)
> Introducing a neutron agent that does the plumbing would mean this agent needs
> to have a connection to the storage node itself. So we would end up with a
> storage agent with a driver model (or an agent for each storage device). On
> the other hand, it follows the idea that neutron takes care of networking.
> 
> From a Manila perspective the easiest solution would be to have an interface to
> get the segmentation id of the lowest bound segment.
> 
>> 
>> You had this on the agenda of the ML2 meeting for tomorrow and I was going to discuss it with you in the meeting. But I noticed that you removed it from the agenda. Do you have what you need? If not, you may want to join us in the ML2 meeting tomorrow and we can discuss this use case there. 
> 
> Yeah I am sorry - I have to move the topic +1 week due to an internal meeting :(
> But we can have a chat on IRC (mkoderer on freenode).
> 
> Regards
> Marc
> 
> [1]: https://review.openstack.org/#/c/283494/
> [2]: https://review.openstack.org/#/c/284034/
> 
> 
>> 
>> -Sukhdev
>> 
>> 
>> On Tue, Mar 1, 2016 at 1:08 AM, Koderer, Marc <marc at koderer.com> wrote:
>> 
>>> On 01 Mar 2016, at 06:22, Kevin Benton <kevin at benton.pub> wrote:
>>> 
>>> >This seems gross and backwards. It makes sense as a short term hack but given that we have time to design this correctly I'd prefer to get this information in a more straightforward way.
>>> 
>>> Well it depends on what is happening here. If Manila is wiring up a specific VLAN for a port, that makes it part of the port binding process, in which case it should be an ML2 driver. Can you provide some more details about what Manila is doing with this info?
>> 
>> The VLAN segmentation ID and the IP address are used in the share driver to configure the
>> corresponding interface resources within the storage backend. Just to give some
>> examples:
>> 
>>  - The NetApp driver uses them to create a logical interface and assign it to a
>>    "storage virtual machine" [1]
>>  - The EMC driver does it in a similar manner [2]
>> 
>> My idea was to use the same principle as the Ironic ML2 integration does [3]
>> by setting the vnic_type to "baremetal".
>> 
>> In Manila's current implementation the storage drivers are also responsible for
>> setting up the right networking. Would you suggest moving this part into the
>> port binding phase?
>> 
>> Regards
>> Marc
>> 
>> 
>> [1]: https://github.com/openstack/manila/blob/master/manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py#L272
>> [2]: https://github.com/openstack/manila/blob/master/manila/share/drivers/emc/plugins/vnx/connection.py#L609
>> [3]: https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html
>> 
>> 
>>> 
>>> On Mon, Feb 29, 2016 at 5:29 PM, Ben Swartzlander <ben at swartzlander.org> wrote:
>>> On 02/29/2016 04:38 PM, Kevin Benton wrote:
>>> You're correct. Right now there is no way via the HTTP API to find which
>>> segments a port is bound to.
>>> This is something we can certainly consider adding, but it will need an
>>> RFE so it wouldn't land until Newton at the earliest.
>>> 
>>> I believe Newton is the target for this work. This is feature freeze week after all.
>>> 
>>> Have you considered writing an ML2 driver that just notifies Manila of
>>> the port's segment info? All of this information is available to ML2
>>> drivers in the PortContext object that is passed to them.
>>> 
>>> This seems gross and backwards. It makes sense as a short term hack but given that we have time to design this correctly I'd prefer to get this information in a more straightforward way.
>>> 
>>> -Ben Swartzlander
>>> 
>>> 
>>> On Mon, Feb 29, 2016 at 6:48 AM, Ihar Hrachyshka <ihrachys at redhat.com> wrote:
>>> 
>>>     Fixed neutron tag in the subject.
>>> 
>>>     Marc <marc at koderer.com> wrote:
>>> 
>>>         Hi Neutron team,
>>> 
>>>         I am currently working on a feature for hierarchical port
>>>         binding support in Manila [1] [2]. Just to give some context:
>>>         in the current implementation Manila creates a neutron port
>>>         but leaves it unbound (state DOWN). Manila therefore uses the
>>>         port creation only to retrieve an IP address and a
>>>         segmentation ID (some drivers only support VLAN here).
>>> 
>>>         My idea is to change this behavior and do an actual port
>>>         binding so that the VLAN configuration isn't a manual job any
>>>         longer, and so that multi-segment and HPB are supported in the
>>>         long run.
>>> 
>>>         My current issue is: how can Manila retrieve the segment
>>>         information for a bound port? Manila is only interested in the
>>>         last (bottom) segmentation ID, since I assume the storage is
>>>         connected to a ToR switch.
>>> 
>>>         Database-wise it's possible to query this via the
>>>         ml2_port_binding_levels table, but AFAIK there is no API for
>>>         it. The only information that is exposed is the list of all
>>>         segments of a network, which is not sufficient to identify
>>>         which segments are actually used for a port binding.
>>> 
>>>         Regards
>>>         Marc
>>>         SAP SE
>>> 
>>>         [1]:
>>>         https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
>>>         [2]: https://review.openstack.org/#/c/277731/