[openstack-dev] [Nova] sponsor some LVM development
Jay Pipes
jaypipes at gmail.com
Fri Jan 22 15:34:29 UTC 2016
On 01/22/2016 09:33 AM, Matthew Booth wrote:
> On Tue, Jan 19, 2016 at 8:47 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov
> <mailto:Kevin.Fox at pnnl.gov>> wrote:
>
> One feature I think we would like to see that could benefit from LVM
> is some kind of multidisk support with better fault tolerance....
>
> For example:
> Say you have a node with 20 VMs on it, and that's all the
> disk I/O it can take. But say you have spare CPU/RAM capacity beyond
> the disk I/O being used up. It would be nice to be able to add a
> second disk and launch 20 more VMs, located on the
> other disk.
>
> If you combined them together into one file system (linear append or
> RAID0), you could lose all 40 VMs if something went wrong. That
> may be more than you want to risk. If you kept them as
> separate file systems or logical volumes (maybe with contiguous
> LVs?), each VM could only top out one spindle, but the host would be
> much more fault tolerant to device failures. I can see some cases
> where that tradeoff between individual VM performance and the number
> of VMs affected by a device failure leans in that direction.
>
> Thoughts?
>
>
> This is simple enough for a compute host. However, the real problem is
> in the scheduler. The scheduler needs to be aware that there are
> multiple distinct resource pools on the host. For example, if your two
> pools have 20G of disk each, that doesn't mean you can spin up an
> instance with a 30G disk. This is the same problem the VMware driver
> has, and to the best of my knowledge there's still no solution to it.
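Kevin's fault-isolation tradeoff can be sketched with a toy model (all disk and VM names below are hypothetical, not from Nova): when each VM lives entirely on one per-disk pool, a single disk failure takes out only that pool's VMs, while a combined linear/RAID0 pool loses everything.

```python
# Toy model of the fault-isolation tradeoff. All disk and VM
# names here are hypothetical, for illustration only.

def affected_vms(placement, failed_disk):
    """Return the VMs lost when one disk fails.

    placement maps each VM to the set of disks its storage touches.
    """
    return [vm for vm, disks in placement.items() if failed_disk in disks]

# 40 VMs split across two separate per-disk pools (e.g. one VG per disk):
separate = {f"vm{i}": ({"sda"} if i < 20 else {"sdb"}) for i in range(40)}

# The same 40 VMs on one combined linear/RAID0 pool: every VM's
# storage spans both spindles, so either disk failing loses all of it.
combined = {f"vm{i}": {"sda", "sdb"} for i in range(40)}

print(len(affected_vms(separate, "sdb")))  # 20: only one pool's VMs lost
print(len(affected_vms(combined, "sda")))  # 40: every VM lost
```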
The proposed solution is here:
https://review.openstack.org/#/c/253187/2/specs/mitaka/approved/generic-resource-pools.rst
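A minimal sketch of the per-pool capacity check that spec is driving at (the function and pool names below are hypothetical, not the spec's actual API): free space has to be tested pool by pool, since aggregate free space overstates what any single instance can actually use.

```python
# Minimal sketch of pool-aware disk scheduling. Hypothetical names;
# not the actual generic-resource-pools API.

def pick_pool(pools, requested_gb):
    """Return a pool that can hold the request, or None.

    pools: {pool_name: free_gb}. Checking each pool separately is the
    point: two pools with 20G free each give 40G aggregate, but no
    single pool can host a 30G root disk.
    """
    for name, free_gb in sorted(pools.items(), key=lambda p: -p[1]):
        if free_gb >= requested_gb:
            return name
    return None

pools = {"vg_disk1": 20, "vg_disk2": 20}
print(sum(pools.values()))   # 40G aggregate free ...
print(pick_pool(pools, 30))  # ... but a 30G request fails: None
print(pick_pool(pools, 15))  # a 15G request fits in either pool
```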
Best,
-jay