Add netapp storage in edge site | Wallaby | DCN

Alan Bishop abishop at redhat.com
Fri May 26 04:33:14 UTC 2023


On Thu, May 25, 2023 at 12:09 AM Swogat Pradhan <swogatpradhan22 at gmail.com>
wrote:

> Hi Alan,
> So, can I include the cinder-netapp-storage.yaml file in the central site
> and then use the new backend to add storage to edge VMs?
>

Where is the NetApp physically located? Tripleo's DCN architecture assumes
the storage is physically located at the same site where the cinder-volume
service will be deployed. If you include the cinder-netapp-storage.yaml
environment file in the central site's controlplane, then VMs at the edge
site will encounter the problems I outlined earlier (network latency, no
ability to do cross-AZ attachments).
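
For concreteness, "including" that file means passing it to the central
stack's deploy command, roughly like this (a sketch only; the exact
template path and the rest of the -e list depend on your release and
deployment, and netapp-parameters.yaml is a hypothetical stand-in for
wherever you set the CinderNetapp* parameters):

  openstack overcloud deploy --templates \
    --stack central \
    -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-storage.yaml \
    -e netapp-parameters.yaml \
    <your other environment files>

That placement ties the NetApp backend to the cinder-volume service in
the central controlplane, which is exactly the cross-site arrangement
that causes the problems above.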


> I believe that is not possible, right? The cinder-volume service at the
> edge won't have the config for the NetApp.
>

The cinder-volume services at an edge site are meant to manage storage
devices at that site. If the NetApp is at the edge site, ideally you'd
include some variation of the cinder-netapp-storage.yaml environment file
in the edge site's deployment. However, you'd then run into the fact that
the NetApp driver doesn't support active/active (A/A), which is required
for cinder-volume services running at edge sites. In case you're not
familiar with these details: tripleo runs all cinder-volume services in
active/passive mode under pacemaker on controllers in the controlplane,
so only a single instance runs at any time, and pacemaker provides HA by
moving the service to another controller if the first one goes down.
Pacemaker is not available at edge sites, so to get HA there, multiple
instances of the cinder-volume service run simultaneously on 3 nodes
(A/A), using etcd as a Distributed Lock Manager (DLM) to coordinate them.
But drivers must specifically support running A/A, and the NetApp driver
does NOT.
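
To make the active/passive vs. A/A distinction concrete: A/A in cinder
hinges on all instances sharing a cluster name and a DLM. Here's a
minimal cinder.conf sketch of what an edge site ends up with (the
hostname, cluster name, and backend name are made up, and the etcd URL
assumes the etcd3+http tooz driver):

  [DEFAULT]
  # All three edge cinder-volume instances share one cluster name;
  # this is what puts them in active/active mode.
  cluster = dcn1
  enabled_backends = tripleo_ceph

  [coordination]
  # etcd acts as the Distributed Lock Manager (DLM)
  backend_url = etcd3+http://dcn1-etcd.example.com:2379

A driver is only safe in this mode if it takes the right distributed
locks, and the NetApp driver has never been validated to do so.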

Alan


> With regards,
> Swogat Pradhan
>
> On Thu, May 25, 2023 at 2:17 AM Alan Bishop <abishop at redhat.com> wrote:
>
>>
>>
>> On Wed, May 24, 2023 at 3:15 AM Swogat Pradhan <swogatpradhan22 at gmail.com>
>> wrote:
>>
>>> Hi,
>>> I have a DCN setup and there is a requirement to use a netapp storage
>>> device in one of the edge sites.
>>> Can someone please confirm if it is possible?
>>>
>>
>> I see from prior email to this list that you're using tripleo, so I'll
>> respond with that in mind.
>>
>> There are many factors that come into play, but I suspect the short
>> answer to your question is no.
>>
>> Tripleo's DCN architecture requires the cinder-volume service running at
>> edge sites to run in active/active (A/A) mode, where separate instances
>> run on three nodes in order for the service to be highly available (HA).
>> The problem is that only a small number of cinder drivers support running
>> A/A, and NetApp's drivers do not.
>>
>> It's conceivable you could create a custom tripleo role that deploys just
>> a single node running cinder-volume
>> with a NetApp backend, but it wouldn't be HA.
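>>
>> If you go that route, the role would be a trimmed entry in
>> roles_data.yaml, something like the sketch below (illustrative only;
>> the role name and service list are not a tested definition):
>>
>>   - name: CinderVolumeEdgeSingle
>>     CountDefault: 1
>>     ServicesDefault:
>>       - OS::TripleO::Services::CinderVolume
>>       - OS::TripleO::Services::Podman
>>       - OS::TripleO::Services::Sshd
>>       - OS::TripleO::Services::Timesync
>>
>> With CountDefault: 1 there is exactly one cinder-volume instance, so a
>> node failure means volume operations at that site stop until the node
>> is recovered.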
>>
>> It's also conceivable you could locate the NetApp system in the central
>> site's controlplane, but there are
>> extremely difficult constraints you'd need to overcome:
>> - Network latency between the central and edge sites would mean poor
>> disk performance.
>> - You'd be limited to using iSCSI (FC wouldn't work).
>> - Tripleo disables cross-AZ attachments, so the only way for an edge site
>> to access a NetApp volume would be to configure the cinder-volume service
>> running in the controlplane with its backend availability zone set to the
>> edge site's AZ (see the cinder.conf sketch after this list). You mentioned
>> the NetApp is needed "in one of the edge sites," but in reality the NetApp
>> would be available in one, AND ONLY ONE, edge site, and it would also not
>> be available to any instances running in the central site.
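>>
>> In cinder.conf terms, that pinning is the backend_availability_zone
>> option on the backend stanza. A sketch (the backend section name, AZ
>> name, and protocol value here are hypothetical):
>>
>>   [tripleo_netapp]
>>   volume_backend_name = tripleo_netapp
>>   volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
>>   netapp_storage_protocol = iscsi
>>   # Pins this backend to a single edge site's AZ
>>   backend_availability_zone = dcn1
>>
>> Only instances scheduled into that one AZ would be able to attach the
>> resulting volumes.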
>>
>> Alan
>>
>>
>>> And if so, should I add the parameters to the edge deployment script
>>> or the central deployment script?
>>> Any suggestions?
>>>
>>> With regards,
>>> Swogat Pradhan
>>>
>>