Update: it seems the Cinder BlockDeviceDriver may be a better fit:
https://wiki.openstack.org/wiki/BlockDeviceDriver

Related article by somebody else:
http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html

(Hedged config sketches for both approaches are appended after the quoted thread below.)

On Sun, May 10, 2015 at 2:14 PM, Sam Stoelinga <sammiestoel at gmail.com> wrote:
> I am experimenting with an approach. My current plan is to run Cinder LVM
> on the compute node and attach block devices from the local Cinder LVM
> backend directly to the VM. The Cinder LVM backend would be a VG spanning
> the JBOD disks.
>
> Has anybody tried this? Are there any special directives required so that
> cinder-scheduler knows that the local Cinder LVM backend should be used?
>
> On Sun, May 10, 2015 at 10:51 AM, Jagat Singh <jagatsingh at gmail.com> wrote:
>> We are running Hadoop on OpenStack, but the disks are configured as
>> RAID 5 and we get terrible disk throughput.
>>
>> How do we configure OpenStack's disks as JBOD, the way Hadoop likes?
>>
>> We are running Cisco UCS servers, each with 32 cores and 10 disks, but
>> at the moment each Hadoop datanode on OpenStack sees the disks as a
>> single device.
>>
>> Is there any documentation or best practice around configuring OpenStack
>> for Hadoop? I know there is the Sahara project (formerly Savanna), but it
>> does not cover how to do things at the physical disk level.
>>
>> Thanks
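A minimal cinder.conf sketch for the BlockDeviceDriver approach, assuming
the driver path and the available_devices option as described on the wiki
page linked above; the device paths and backend name are placeholders:

    # cinder.conf on each compute node running a local cinder-volume
    # (sketch only -- option names per the BlockDeviceDriver wiki page)
    [DEFAULT]
    enabled_backends = local_block

    [local_block]
    volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
    # comma-separated list of the raw JBOD disks to hand to instances
    available_devices = /dev/sdb,/dev/sdc,/dev/sdd
    volume_backend_name = local_block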
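For the Cinder LVM variant Sam describes, the VG spanning the JBOD disks
would be built with plain LVM commands (disk names are placeholders):

    # Run on each compute node that will host the local LVM backend:
    # initialize each JBOD disk as a physical volume, then build one
    # volume group across all of them.
    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate cinder-volumes /dev/sdb /dev/sdc /dev/sdd

with a matching backend section in cinder.conf (driver path as used around
Kilo; adjust for your release):

    [lvm_local]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes
    volume_backend_name = lvm_local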
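On the cinder-scheduler question: two commonly suggested ways to steer a
volume to the local backend (both hedged; check what your release actually
supports) are a per-node storage availability zone, or Kilo's
InstanceLocalityFilter with a scheduler hint:

    # (a) one availability zone per compute node, in that node's cinder.conf:
    [DEFAULT]
    storage_availability_zone = compute-node-01    # placeholder name
    # then request it explicitly at create time:
    #   cinder create --availability-zone compute-node-01 100

    # (b) enable the InstanceLocalityFilter on the scheduler node and pass
    # the instance UUID as a hint so the volume lands on the same host:
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter
    #   cinder create --hint local_to_instance=<instance-uuid> 100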