DR options with openstack

Adam Peacock alawson at aqorn.com
Fri Jan 17 15:54:36 UTC 2020


I'm traveling in India right now and will reply later. I've architected
several large OpenStack clouds, from Cisco to Juniper to SAP to AT&T to HPE
to Wells Fargo to -- you name it. I'll share some of the things we've done
regarding DR, and more specifically how we handled replication and divided
the cloud up so it made sense from a design and operational perspective.

Also, we need to be clear that not everyone leans towards being a developer,
or even *wants* to go in that direction, when using OpenStack. In fact, most
don't, and if that expectation is held by those entrenched in the OpenStack
product, the OpenStack option gets dropped in favor of something else. It's
developer-friendly, but we need to be mega-mega-careful, as a community, to
ensure development isn't the baseline or the assumption for getting adequate
support or questions answered. Especially since we've converged our
communication channels.

/soapbox

More later.

//adam

On Fri, Jan 17, 2020, 7:19 PM Tony Pearce <tony.pearce at cinglevue.com> wrote:

> Hi Walter
>
> Thank you for the information.
> It's unfortunate about the lack of support from nimble.
>
> With regards to replication, Nimble has its own software implementation
> that I'm currently using. The problem I face is that the replicated volumes
> have a different IQN and serial number and are accessed via a different
> array IP.
>
> I didn't get time to read up on freezer today but I'm hopeful that I can
> use something there. 🙂
>
>
> On Fri, 17 Jan 2020, 21:10 Walter Boring, <waboring at hemna.com> wrote:
>
>> Hi Tony,
>>    Looking at the Nimble driver, it has been removed from Cinder due to
>> lack of support and maintenance from the vendor. Also, looking at the code
>> prior to its removal, it didn't have any support for replication and
>> failover. Cinder is a community-based open source project that relies on
>> vendors, operators and users to contribute to and support the codebase. As
>> a core member of the Cinder team, we do our best to provide support for
>> folks using Cinder, and this mailing list and the #openstack-cinder
>> channel are the best mechanisms to get in touch with us. The
>> #openstack-cinder IRC channel is not a developer-only channel. We help
>> when we can, but also remember we have our day jobs as well.
>>
>>   Unfortunately, Nimble stopped providing support for their driver quite a
>> while ago now, and part of Cinder's policy for having a driver in tree is
>> to have CI (Continuous Integration) tests in place to ensure that Cinder
>> patches don't break the driver. If the CI isn't in place, the Cinder team
>> marks the driver as unsupported in a release, and the driver is removed in
>> the following release.
>>
>> All that being said, the Nimble driver never supported the cheesecake
>> replication/DR capabilities that were added in Cinder.
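>>
>> For backends that do support cheesecake, the replication target is
>> declared per backend in cinder.conf and promoted with the cinder CLI.
>> A minimal sketch -- the backend section name, driver class, addresses and
>> credentials below are all placeholders, and the driver in use must
>> actually implement replication v2.1 (the Nimble driver did not):

```ini
# cinder.conf -- hypothetical backend section, values are placeholders
[vendor-backend-1]
volume_driver = cinder.volume.drivers.vendor.VendorISCSIDriver
volume_backend_name = vendor-backend-1
# One replication_device entry per replication target on the second array
replication_device = backend_id:secondary,san_ip:10.0.0.2,san_login:admin,san_password:secret
```

>> A failover then promotes the secondary for every replicated volume on
>> that backend, e.g. `cinder failover-host controller@vendor-backend-1
>> --backend_id secondary`; after failover the driver is expected to report
>> the new connection info, so there is no per-volume IQN rewriting by hand.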
>>
>> Walt (hemna in irc)
>>
>> On Thu, Jan 16, 2020 at 2:49 AM Tony Pearce <tony.pearce at cinglevue.com>
>> wrote:
>>
>>> Hi all
>>>
>>>
>>>
>>> My questions are;
>>>
>>>
>>>
>>>    1. How are people using iSCSI Cinder storage with OpenStack
>>>    to date? For example, with a Nimble Storage array backend: are people
>>>    using vendor integration drivers for their hardware (like NetApp's),
>>>    or are they using the generic iSCSI backend?
>>>    2. How are people managing DR with OpenStack, in terms of replicating
>>>    backend storage to another array in another location and continuing to
>>>    use OpenStack?
>>>
>>>
>>>
>>> The environment which I am currently using;
>>>
>>> 1 x Nimble Storage array (iSCSI) with nimble.py Cinder driver
>>>
>>> 1 x virtualised Controller node
>>>
>>> 2 x physical compute nodes
>>>
>>> This is OpenStack Pike.
>>>
>>>
>>>
>>> In addition, I have a 2nd Nimble Storage array in another location.
>>>
>>>
>>>
>>> To explain the questions I’d like to put forward my thoughts for
>>> question 2 first:
>>>
>>> For point 2 above, I have been searching for a way to utilise replicated
>>> volumes on the 2nd array from OpenStack with existing instances. For
>>> example, if site 1 goes down, how would I bring up OpenStack in the 2nd
>>> location and boot up the instances whose volumes are stored on the 2nd
>>> array? I found a proposal for something called “cheesecake”, ref:
>>> https://specs.openstack.org/openstack/cinder-specs/specs/rocky/cheesecake-promote-backend.html
>>> but I could not find whether it had been approved or implemented, so I am
>>> back to square 1. I have some thoughts about failing over the controller
>>> VM and compute nodes, but I don't think there's any need to go into them
>>> here, because of the above blocker and for brevity.
>>>
>>>
>>>
>>> The nimble.py driver which I am using came with OpenStack Pike, and it
>>> appears Nimble / HPE are not maintaining it any longer. I saw a commit
>>> removing nimble.py in the OpenStack Train release. The driver uses the
>>> REST API to perform actions on the array, such as creating a volume,
>>> downloading the image, mounting the volume to the instance, snapshots,
>>> clones, etc. This is great for me because, to date, I have around 10TB of
>>> OpenStack storage allocated while the Nimble array shows the amount of
>>> data actually consumed is <900GB, thanks to compression and zero-byte
>>> snapshots and clones.
>>>
>>>
>>>
>>> So, coming back to question 2 – is it possible? Can you drop me some
>>> keywords that I can search for, such as an OpenStack component like
>>> cheesecake? I think what I am basically looking for is a supported way of
>>> telling OpenStack that the instance volumes are now located at the new /
>>> second array. This means a new Cinder backend: for example, a new IQN, IP
>>> address and volume serial number. I think I could probably hack the
>>> Cinder DB, but I really want to avoid that.
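>>>
>>> One hedged alternative to editing the DB, assuming the second array were
>>> set up as its own Cinder backend and the driver supported it, would be
>>> Cinder's manage/unmanage feature, which adopts a pre-existing volume on a
>>> configured backend into Cinder without copying data. The host string,
>>> pool and volume names below are placeholders:

```shell
# Sketch only: import a replicated volume from the second array into Cinder.
# controller@nimble2#default is a hypothetical new backend's host string;
# the source-name is whatever the replica is called on the array.
cinder manage --id-type source-name \
    --name recovered-vol-01 \
    controller@nimble2#default \
    volume-replica-00000001
```

>>> The managed volume gets a new Cinder UUID, so each instance attachment
>>> would still need to be rebuilt; it is a workaround rather than real DR,
>>> but it avoids hand-editing the database.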
>>>
>>>
>>>
>>> Failing the above, it brings me back to the question 1 I asked before.
>>> How are people using Cinder volumes? Maybe I am going about this the
>>> wrong way and need to take a few steps backwards in order to go forwards.
>>> I need storage to deploy instances onto, and snapshots and clones are
>>> desired. At the moment these operations take less time than the Horizon
>>> dashboard takes to load while waiting for API responses.
>>>
>>>
>>>
>>> When searching for information about the above as an end-user / consumer,
>>> I get a bit concerned. Is it right that OpenStack usage is dropping?
>>> There's no web forum to post questions to. The chatroom on freenode is
>>> filled with ~300 ghosts. Ask OpenStack questions go without response.
>>> Earlier this week (before I found this mailing list) I had to use
>>> Facebook to report that the openstack.org website had been hacked.
>>> Basically, it seems that if you're a developer who can write code then
>>> you're in, but that's it. I have never been a coder, and so I am somewhat
>>> stuck.
>>>
>>>
>>>
>>> Thanks in advance
>>>
>>>
>>>
>>>
>>>
>>>
>>