[ops] Anyone using ScaleIO block storage?
Julia Kreger
juliaashleykreger at gmail.com
Thu Dec 6 17:17:23 UTC 2018
On Thu, Dec 6, 2018 at 8:04 AM Jeremy Stanley <fungi at yuggoth.org> wrote:
> On 2018-12-06 15:24:37 +0000 (+0000), Balázs Gibizer wrote:
> [...]
> > In order to boot a bare metal instance from a ScaleIO volume, the
> > BIOS would have to be able to act as a ScaleIO client, which will
> > likely never happen. ScaleIO used to have the capability to expose
> > volumes over standard iSCSI, but that capability was removed a long
> > time ago. Since the feature existed in the past, getting Dell/EMC to
> > re-introduce it may not be completely impossible if there is enough
> > interest. However, it would negate the advantage of the proprietary
> > protocol, which lets the client balance load across multiple servers.
[...]
>
> You'd only need iSCSI support for bootstrapping though, right? Once
> you're able to boot a ramdisk with the ScaleIO (my friends at EMC
> would want me to remind everyone it's called "VFlexOS" now) driver
> it should be able to pivot to their proprietary protocol. In theory
> some running service on the network could simply act as an iSCSI
> proxy for that limited purpose.
> --
> Jeremy Stanley
>
This is a great point. I fear the issue would be how to inform the guest of
what to pivot to and how to do it. At some point it might just be easier to
boot a known kernel/ramdisk and pass a command-line argument. That being
said, cases like this are why ironic implemented the network-booting
ramdisk deploy interface, so an operator could choose something along
those lines. If some abstraction pattern could be identified, and at least
be well unit tested, I feel like we might be able to pass along the
necessary information if needed. Naturally the existing ironic community
does not have access to this sort of hardware, so it would be a fairly
bespoke integration. We investigated doing something similar for Ceph but
largely pulled back due to a lack of standardization among initial ramdisk
loaders and, for that matter, of support for the root filesystem on Ceph.
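
For illustration only, here is a rough sketch (using openstacksdk) of how
an operator might point a node at the ramdisk deploy interface and carry
the attach hints on the kernel command line. The image URLs, the
kernel_append_params key as used here, and the scaleio_volume/sio_mdm
arguments are hypothetical placeholders for whatever a real integration
would need, not anything ironic or ScaleIO supports today:

    # A rough, untested sketch: use ironic's "ramdisk" deploy interface and
    # hand the node a kernel/ramdisk pair, with the volume-attach hints
    # passed as kernel command-line arguments. The URLs, the
    # "kernel_append_params" key as used here, and the scaleio_volume/sio_mdm
    # arguments are hypothetical placeholders, not supported options.
    import openstack

    conn = openstack.connect(cloud='example-cloud')

    conn.baremetal.update_node(
        'example-node',
        deploy_interface='ramdisk',
        instance_info={
            'kernel': 'http://deploy.example.com/pivot-vmlinuz',
            'ramdisk': 'http://deploy.example.com/pivot-initramfs',
            # The ramdisk's init scripts would have to parse something like
            # this and pivot the root filesystem onto the ScaleIO volume.
            'kernel_append_params': 'scaleio_volume=vol-0001 sio_mdm=192.0.2.10',
        },
    )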