Anyone using ScaleIO block storage?
Jay Bryant
jsbryant at electronicjungle.net
Thu Dec 6 15:36:48 UTC 2018
> Not supporting iSCSI would indeed be an issue for bare-metal instances.
> The same basic issue exists for Ceph backed storage, although I've been
> encouraging the cinder team to provide a capability of returning an iscsi
> volume mapping for Ceph. If there is a similar possibility, please let me
> know as it might change the overall discussion regarding providing storage
> for bare metal instances.
Julia,
This is an interesting idea. Depending on how the Ceph iSCSI implementation
goes, I wonder if we could do something more general where the volume node
acts as an iSCSI gateway for any user that wants iSCSI support (rough sketch
below). I am not sure how hard a general solution would be to build, or what
the performance impact would be. It does put the volume node in the data
path, which may give people pause. Something to think about, though.
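
To make that concrete, here is a rough, hypothetical sketch of what a gateway
layer on the volume node could look like. None of the class or helper names
below exist in Cinder today; the only "real" piece is the connection-info
dict shape, which matches what Cinder's iSCSI drivers already return.

    # Hypothetical sketch only -- not an existing Cinder driver.  The idea:
    # the volume node keeps the backend-native attachment (RBD, ScaleIO SDC,
    # ...) locally and re-exports the resulting block device over iSCSI, so
    # initialize_connection() can hand back plain iSCSI connection info.

    class ISCSIGatewayExport(object):
        """Re-export a locally attached backend volume over iSCSI."""

        def __init__(self, target_portal,
                     iqn_prefix='iqn.2018-12.org.example.gateway'):
            self.target_portal = target_portal  # e.g. '192.0.2.10:3260'
            self.iqn_prefix = iqn_prefix

        def export(self, volume_id, local_device):
            # A real implementation would drive LIO/tgt (or an existing
            # Cinder target helper) here to publish local_device as a LUN.
            target_iqn = '%s:volume-%s' % (self.iqn_prefix, volume_id)
            return {
                'driver_volume_type': 'iscsi',
                'data': {
                    'target_portal': self.target_portal,
                    'target_iqn': target_iqn,
                    'target_lun': 0,
                    'target_discovered': False,
                    'volume_id': volume_id,
                },
            }

    if __name__ == '__main__':
        gw = ISCSIGatewayExport('192.0.2.10:3260')
        # local_device is whatever the native attachment produced on the
        # volume node, e.g. /dev/rbd0 for a krbd-mapped Ceph volume.
        print(gw.export('1234-abcd', '/dev/rbd0'))
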
Jay
On Wed, Dec 5, 2018 at 5:30 PM Julia Kreger <juliaashleykreger at gmail.com>
wrote:
>
>
> On Wed, Dec 5, 2018 at 2:02 PM Kimball (US), Conrad
> <conrad.kimball at boeing.com> wrote:
> [trim]
>
>> One concern I do have is that it uses a proprietary protocol that in turn
>> requires a proprietary “data client”. For VM hosting this data client can
>> be installed in the compute node host OS, but seems like we wouldn’t be
>> able to boot a bare-metal instance from a ScaleIO-backed Cinder volume.
>>
>
> Not supporting iSCSI would indeed be an issue for bare-metal instances.
> The same basic issue exists for Ceph backed storage, although I've been
> encouraging the cinder team to provide a capability of returning an iscsi
> volume mapping for Ceph. If there is a similar possibility, please let me
> know as it might change the overall discussion regarding providing storage
> for bare metal instances.
>
> -Julia
>
>>
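
For anyone following Julia's point above: once a backend can hand back plain
iSCSI connection info, any standard initiator can consume it. Below is a
minimal sketch using os-brick (the values are placeholders); for true
bare-metal boot-from-volume the same target details would instead be passed
to the node's iSCSI boot firmware or iPXE rather than attached with os-brick.

    # Sketch only: consuming generic iSCSI connection info with os-brick.
    from os_brick.initiator import connector

    connection_data = {
        'target_portal': '192.0.2.10:3260',
        'target_iqn': 'iqn.2018-12.org.example.gateway:volume-1234-abcd',
        'target_lun': 0,
    }

    root_helper = 'sudo'  # os-brick shells out to iscsiadm, which needs root
    iscsi = connector.InitiatorConnector.factory('ISCSI', root_helper)
    device_info = iscsi.connect_volume(connection_data)
    print(device_info['path'])  # e.g. /dev/sdb once the target is logged in
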
--
jsbryant at electronicjungle.net