Anyone using ScaleIO block storage?
Is anyone using ScaleIO (from Dell EMC) as a Cinder storage provider? What has been your experience with it, and at what scale?

Our enterprise storage team is moving to ScaleIO and wants our OpenStack deployments to use it, so I'm looking for real-life experiences to calibrate vendor stories of wonderfulness.

One concern I do have is that it uses a proprietary protocol that in turn requires a proprietary "data client". For VM hosting this data client can be installed in the compute node host OS, but it seems we wouldn't be able to boot a bare-metal instance from a ScaleIO-backed Cinder volume.

Conrad Kimball
Associate Technical Fellow
Enterprise Architecture
Chief Architect, Enterprise Cloud Services
conrad.kimball@boeing.com
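For the bare-metal concern, one quick check is what transport the backend actually advertises to the scheduler, since Ironic's cinder storage interface expects iSCSI or Fibre Channel volume targets. A minimal sketch, assuming python-cinderclient and keystoneauth1 with placeholder credentials; depending on the client version the detailed pool capabilities may be flattened onto the pool object or kept under a capabilities attribute, so both are checked:

```python
# Sketch: list Cinder backend pools and the storage_protocol each one
# advertises, to gauge whether a backend could be attached to a
# bare-metal node.
from keystoneauth1 import loading, session
from cinderclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='https://keystone.example.com:5000/v3',  # placeholder endpoint
    username='admin',
    password='secret',
    project_name='admin',
    user_domain_id='default',
    project_domain_id='default',
)
cinder = client.Client('3', session=session.Session(auth=auth))

for pool in cinder.pools.list(detailed=True):
    # Capabilities may be flattened or nested depending on client version.
    caps = getattr(pool, 'capabilities', None) or {}
    protocol = getattr(pool, 'storage_protocol', None) or caps.get('storage_protocol')
    print(pool.name, protocol)
```

A ScaleIO backend will report its own proprietary protocol here rather than iscsi or fibre_channel, which is the root of the bare-metal problem discussed below.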
On Wed, Dec 5, 2018 at 2:02 PM Kimball (US), Conrad < conrad.kimball@boeing.com> wrote: [trim]
One concern I do have is that it uses a proprietary protocol that in turn requires a proprietary “data client”. For VM hosting this data client can be installed in the compute node host OS, but seems like we wouldn’t be able to boot a bare-metal instance from a ScaleIO-backed Cinder volume.
Not supporting iSCSI would indeed be an issue for bare-metal instances. The same basic issue exists for Ceph-backed storage, although I've been encouraging the Cinder team to provide a capability to return an iSCSI volume mapping for Ceph. If there is a similar possibility, please let me know, as it might change the overall discussion regarding providing storage for bare-metal instances.

-Julia
Julia,

This is an interesting idea. Depending on how the Ceph iSCSI implementation goes, I wonder if we can look at doing something more general, where the volume node acts as an iSCSI gateway for any user that wants iSCSI support. I am not sure how hard creating a general solution would be, or what the performance impact would be. It puts the volume node in the data path, which may cause people to hesitate. Something to think about, though.

Jay

On Wed, Dec 5, 2018 at 5:30 PM Julia Kreger <juliaashleykreger@gmail.com> wrote:
On Wed, Dec 5, 2018 at 2:02 PM Kimball (US), Conrad < conrad.kimball@boeing.com> wrote: [trim]
One concern I do have is that it uses a proprietary protocol that in turn requires a proprietary “data client”. For VM hosting this data client can be installed in the compute node host OS, but seems like we wouldn’t be able to boot a bare-metal instance from a ScaleIO-backed Cinder volume.
Not supporting iSCSI would indeed be an issue for bare-metal instances. The same basic issue exists for Ceph backed storage, although I've been encouraging the cinder team to provide a capability of returning an iscsi volume mapping for Ceph. If there is a similar possibility, please let me know as it might change the overall discussion regarding providing storage for bare metal instances.
-Julia
-- jsbryant@electronicjungle.net
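To make the gateway idea above concrete, here is a minimal sketch of what a generic flow on the volume node might look like. The helper functions are hypothetical placeholders (nothing like them exists in Cinder today); only the shape of the returned connection info matches what Cinder's existing iSCSI drivers hand back.

```python
# Hypothetical sketch of a volume node acting as an iSCSI gateway.
# attach_with_native_client() and create_iscsi_export() are stand-ins for
# "attach the volume with the backend's proprietary data client" and
# "export the resulting local block device via LIO/targetcli".

def attach_with_native_client(volume_id):
    # Placeholder: would return a local device path such as '/dev/scinia'.
    raise NotImplementedError("backend-specific attach goes here")

def create_iscsi_export(local_dev, initiator_iqn):
    # Placeholder: would configure an iSCSI target for local_dev, restrict
    # access to initiator_iqn, and return (target_iqn, portal).
    raise NotImplementedError("LIO/targetcli export goes here")

def initialize_gateway_connection(volume_id, initiator_iqn):
    local_dev = attach_with_native_client(volume_id)
    target_iqn, portal = create_iscsi_export(local_dev, initiator_iqn)
    # Standard iSCSI connection info, the same shape existing Cinder iSCSI
    # drivers return, so Nova and Ironic would need no new code paths.
    return {
        'driver_volume_type': 'iscsi',
        'data': {
            'target_iqn': target_iqn,
            'target_portal': portal,
            'target_lun': 0,
            'target_discovered': False,
            'volume_id': volume_id,
        },
    }
```

The trade-off Jay mentions is exactly step one: the gateway node sits in the data path for every I/O, not just at boot time.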
On Wed, Dec 5, 2018 at 10:57 PM, Kimball (US), Conrad <conrad.kimball@boeing.com> wrote:
Is anyone using ScaleIO (from Dell EMC) as a Cinder storage provider? What has been your experience with it, and at what scale?
My employer has multiple customers using our OpenStack-based cloud solution with ScaleIO as the volume backend. These customers are mostly telco operators running virtual network functions in their clouds, but there are customers using the cloud for other, non-telco IT purposes too.

There are various types and flavors of ScaleIO deployments at these customers, including low-footprint deployments providing a few hundred GiB of raw capacity on a small number of servers, medium-capacity ultra-HA systems with tens of servers using multiple protection domains and fault sets, high-capacity systems with petabyte-range raw capacity, and hyperconverged systems running storage and compute services on the same servers. The general feedback from these customers is positive; we have not heard about performance or stability issues.

However, one common property of these customers and deployments is that none of them handles bare-metal instances, so we have no experience with that. In order to boot a bare-metal instance from a ScaleIO volume, the BIOS would have to act as a ScaleIO client, which will likely never happen. ScaleIO used to have a capability to expose volumes over standard iSCSI, but that capability was removed a long time ago. As this was a feature in the past, getting Dell/EMC to re-introduce it may not be completely impossible if there is enough interest. However, it would give up the main advantage of the proprietary protocol, which lets the client balance the load across multiple servers.

Cheers,
gibi
On 2018-12-06 15:24:37 +0000 (+0000), Balázs Gibizer wrote: [...]
In order to boot a bare-metal instance from a ScaleIO volume, the BIOS would have to act as a ScaleIO client, which will likely never happen. ScaleIO used to have a capability to expose volumes over standard iSCSI, but that capability was removed a long time ago. As this was a feature in the past, getting Dell/EMC to re-introduce it may not be completely impossible if there is enough interest. However, it would give up the main advantage of the proprietary protocol, which lets the client balance the load across multiple servers. [...]
You'd only need iSCSI support for bootstrapping, though, right? Once you're able to boot a ramdisk with the ScaleIO (my friends at EMC would want me to remind everyone it's called "VxFlex OS" now) driver, it should be able to pivot to their proprietary protocol. In theory some running service on the network could simply act as an iSCSI proxy for that limited purpose.

--
Jeremy Stanley
On 12/6/2018 9:58 AM, Jeremy Stanley wrote:
On 2018-12-06 15:24:37 +0000 (+0000), Balázs Gibizer wrote: [...]
In order to boot a bare-metal instance from a ScaleIO volume, the BIOS would have to act as a ScaleIO client, which will likely never happen. ScaleIO used to have a capability to expose volumes over standard iSCSI, but that capability was removed a long time ago. As this was a feature in the past, getting Dell/EMC to re-introduce it may not be completely impossible if there is enough interest. However, it would give up the main advantage of the proprietary protocol, which lets the client balance the load across multiple servers. [...]
You'd only need iSCSI support for bootstrapping, though, right? Once you're able to boot a ramdisk with the ScaleIO (my friends at EMC would want me to remind everyone it's called "VxFlex OS" now) driver, it should be able to pivot to their proprietary protocol. In theory some running service on the network could simply act as an iSCSI proxy for that limited purpose.

Good question. I don't know the details there. I am going to add Helen Walsh, who works on the Dell/EMC drivers, to see if she can give some insight.
Jay
On Thu, Dec 6, 2018 at 8:04 AM Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2018-12-06 15:24:37 +0000 (+0000), Balázs Gibizer wrote: [...]
In order to boot a bare-metal instance from a ScaleIO volume, the BIOS would have to act as a ScaleIO client, which will likely never happen. ScaleIO used to have a capability to expose volumes over standard iSCSI, but that capability was removed a long time ago. As this was a feature in the past, getting Dell/EMC to re-introduce it may not be completely impossible if there is enough interest. However, it would give up the main advantage of the proprietary protocol, which lets the client balance the load across multiple servers.
[...]
You'd only need iSCSI support for bootstrapping, though, right? Once you're able to boot a ramdisk with the ScaleIO (my friends at EMC would want me to remind everyone it's called "VxFlex OS" now) driver, it should be able to pivot to their proprietary protocol. In theory some running service on the network could simply act as an iSCSI proxy for that limited purpose.

--
Jeremy Stanley
This is a great point. I fear the issue would be how to inform the guest of what to pivot to and how. At some point it might just be easier to boot a known kernel/ramdisk and pass a command-line argument. That being said, things like this are why Ironic implemented the network-booting ramdisk interface, so an operator could choose something along similar lines. If some abstraction pattern could be identified, and at least be well unit tested, I feel like we might be able to pass along the necessary information where needed. Naturally the existing Ironic community does not have access to this sort of hardware, and it would be a bespoke sort of integration. We investigated doing something similar for Ceph integration, but largely pulled back due to a lack of standardization among initial ramdisk loaders and limited support for putting the root filesystem on Ceph.
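To illustrate the "boot a known kernel/ramdisk with a command-line argument" idea: a minimal sketch of turning a volume's connection properties into kernel arguments that a custom initramfs hook could consume. The vxflex.* argument names and the layout of the connection dict are invented for illustration; nothing here is an existing Ironic or Cinder interface.

```python
# Hypothetical sketch: encode the information a guest would need in order
# to pivot from the boot-time iSCSI proxy to the backend's native protocol
# as kernel command-line arguments. Argument names and dict layout are
# illustrative only.

def connection_to_kernel_args(conn):
    return ' '.join([
        'vxflex.mdm={}'.format(','.join(conn['mdm_addresses'])),
        'vxflex.volume_id={}'.format(conn['volume_id']),
        # Illustrative root= hint; the real device naming depends on how
        # the native client exposes the attached volume.
        'root=/dev/disk/by-id/emc-vol-{}'.format(conn['volume_id']),
    ])

if __name__ == '__main__':
    example = {
        # Placeholder values; a real deployment would receive these from
        # the volume backend at attach time.
        'mdm_addresses': ['192.0.2.11', '192.0.2.12'],
        'volume_id': '5d2cf1fb00000001',
    }
    print(connection_to_kernel_args(example))
    # A custom initramfs hook would parse these arguments, start the
    # native data client, and then switch_root onto the attached volume.
```

The hard part Julia identifies is agreeing on that abstraction: who builds the arguments, and how the ramdisk knows which backend-specific hook to run.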
On Thu, Dec 6, 2018 at 7:31 AM Balázs Gibizer <balazs.gibizer@ericsson.com> wrote:
[trim] ScaleIO used to have a capability to expose volumes over standard iSCSI, but that capability was removed a long time ago. As this was a feature in the past, getting Dell/EMC to re-introduce it may not be completely impossible if there is enough interest. However, it would give up the main advantage of the proprietary protocol, which lets the client balance the load across multiple servers. [trim]
iSCSI does have the ability to communicate additional paths that a client may choose to use; the issue then largely becomes locking across paths, which becomes a huge problem if LUN locking is being used as part of something like a clustered file system. Of course, most initiators used for the initial boot may not be able to support this, and as far as I'm aware the iSCSI initiators we can control in hardware have limited or no multipath support. And if they load that via iBFT... well, I'll stop now because of limitations with iBFT. :)
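For context on how a target communicates those additional paths: in SendTargets discovery a single target IQN can be reported with several portal records, one per path. A small sketch of grouping such records, assuming the standard "address:port,tpgt iqn" discovery record format (the sample records themselves are made up):

```python
# Sketch: group iSCSI SendTargets discovery records by target IQN to show
# how one target can advertise multiple portals (paths).
from collections import defaultdict

discovery_output = """\
192.0.2.10:3260,1 iqn.2010-10.org.openstack:volume-1234
192.0.2.11:3260,2 iqn.2010-10.org.openstack:volume-1234
"""

paths = defaultdict(list)
for record in discovery_output.splitlines():
    portal_and_tpgt, iqn = record.split()
    paths[iqn].append(portal_and_tpgt.split(',')[0])

for iqn, portals in paths.items():
    # A multipath-capable initiator could log in to every portal; a simple
    # firmware/iBFT initiator will typically use only the first one.
    print(iqn, '->', portals)
```

This is exactly the gap Julia describes: the paths can be advertised, but a firmware initiator that only ever uses one of them gets no benefit, and coordinating locking across the rest is the hard part.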
participants (6)
- Balázs Gibizer
- Jay Bryant
- Jay Bryant
- Jeremy Stanley
- Julia Kreger
- Kimball (US), Conrad