[openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?
yang, xing
xing.yang at emc.com
Wed Sep 16 17:39:54 UTC 2015
Hi Eric,
Please see my replies inline below.
Thanks,
Xing
On 9/16/15, 1:20 PM, "Eric Harney" <eharney at redhat.com> wrote:
>On 09/15/2015 04:56 PM, yang, xing wrote:
>> Hi Eric,
>>
>> Regarding the default max_over_subscription_ratio, I initially set the
>> default to 1 while working on oversubscription, and changed it to 2 after
>> getting review comments. After it was merged, I got feedback that 2 is
>> too small and 20 is more appropriate, so I changed it to 20. So it looks
>> like we can't find a default value that makes everyone happy.
>>
>
>I'm curious about how this is used in real-world deployments. Are we
>making the assumption that the admin has some external monitoring
>configured to send alarms if the storage is nearing capacity?
We can have Ceilometer integration for capacity notifications. See the
following patches on capacity headroom:
https://review.openstack.org/#/c/170380/
https://review.openstack.org/#/c/206923/
>
>> If we can decide what is the best default value for LVM, we can change
>> the default max_over_subscription_ratio, but we should also allow other
>> drivers to specify a different config option if a different default
>> value is more appropriate for them.
>
>This sounds like a good idea, I'm just not sure how to structure it yet
>without creating a very confusing set of config options.
I'm thinking we could have a prefix with the vendor name for this, and it
would also require documentation by driver maintainers if they are using a
different config option. I proposed a topic to discuss this at the summit.
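As a rough sketch of what that could look like in cinder.conf (the vendor-prefixed option name below is hypothetical, not an existing Cinder option):

```ini
[backend-1]
# Generic default, used when the driver defines no vendor-specific option
max_over_subscription_ratio = 20
# Hypothetical vendor-prefixed override a driver maintainer could register
# and document for their own backend
acme_max_over_subscription_ratio = 10
```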
>
>
>> On 9/15/15, 1:38 PM, "Eric Harney" <eharney at redhat.com> wrote:
>>
>>> On 09/15/2015 01:00 PM, Chris Friesen wrote:
>>>> I'm currently trying to work around an issue where activating LVM
>>>> snapshots created through cinder takes potentially a long time.
>>>> (Linearly related to the amount of data that differs between the
>>>> original volume and the snapshot.) On one system I tested it took
>>>> about one minute per 25GB of data, so the worst-case boot delay can
>>>> become significant.
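As a back-of-the-envelope illustration of that rate (the ~1 minute per 25GB figure is from one test system, so treat this as indicative only):

```python
# Rough estimate of thick-LVM snapshot activation delay, using the
# ~1 minute per 25 GB of differing data observed on one test system.

MINUTES_PER_GB = 1.0 / 25.0  # observed rate; illustrative, not a benchmark

def activation_delay_minutes(diverged_gb: float) -> float:
    """Estimated activation time for a snapshot whose data differs from
    its origin volume by diverged_gb gigabytes."""
    return diverged_gb * MINUTES_PER_GB

if __name__ == "__main__":
    for gb in (25, 100, 750):
        print(f"{gb} GB diverged -> ~{activation_delay_minutes(gb):.0f} min")
```

At that rate, a snapshot that has diverged by 750GB would take around half an hour to activate, which matches the worst-case reports mentioned below.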
>>>>
>>>> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots
>>>> were not intended to be kept around indefinitely; they were supposed
>>>> to be used only until the backup was taken and then deleted. He
>>>> recommends using thin provisioning for long-lived snapshots due to
>>>> differences in how the metadata is maintained. (He also says he's
>>>> heard reports of volume activation taking half an hour, which is
>>>> clearly crazy when instances are waiting to access their volumes.)
>>>>
>>>> Given the above, is there any reason why we couldn't make thin
>>>> provisioning the default?
>>>>
>>>
>>>
>>> My intention is to move toward thin-provisioned LVM as the default --
>>> it is definitely better suited to our use of LVM. Previously this was
>>> less easy, since some older Ubuntu platforms didn't support it, but in
>>> Liberty we added the ability to specify lvm_type = "auto" [1] to use
>>> thin if it is supported on the platform.
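For reference, enabling that behavior looks like this in cinder.conf (the backend section name is just an example):

```ini
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
# "auto" uses thin provisioning when the platform supports it,
# and falls back to thick ("default") otherwise
lvm_type = auto
```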
>>>
>>> The other issue preventing using thin by default is that we default the
>>> max oversubscription ratio to 20. IMO that isn't a safe thing to do
>>> for the reference implementation, since it means that people who deploy
>>> Cinder LVM on smaller storage configurations can easily fill up their
>>> volume group and have things grind to a halt. I think we want something
>>> closer to the semantics of thick LVM for the default case.
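To make the risk concrete, here is a simplified sketch of the oversubscription arithmetic (not the actual Cinder scheduler code): with thin provisioning, admission is judged against total capacity multiplied by the ratio rather than against physical space.

```python
# Simplified oversubscription arithmetic: how much more virtual capacity
# a backend will accept before the scheduler considers it full.

def virtual_free_gb(total_gb: float,
                    provisioned_gb: float,
                    max_over_subscription_ratio: float) -> float:
    """Capacity still available for new thin volumes, in GB."""
    return total_gb * max_over_subscription_ratio - provisioned_gb

# A 100 GB volume group with the default ratio of 20 will accept up to
# 2000 GB of provisioned volumes -- far past its physical size:
print(virtual_free_gb(100, 500, 20))   # 1500.0 GB still "free"
print(virtual_free_gb(100, 500, 1.0))  # -400.0: already over physical size
```

With ratio 20, a small volume group can be oversubscribed twentyfold before the scheduler pushes back, which is why a lower default feels safer for the reference implementation.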
>>>
>>> We haven't thought through a reasonable migration strategy for how to
>>> handle that. I'm not sure we can change the default oversubscription
>>> ratio without breaking deployments using other drivers. (Maybe I'm
>>> wrong about this?)
>>>
>>> If we sort out that issue, I don't see any reason we can't switch over
>>> in Mitaka.
>>>
>>> [1] https://review.openstack.org/#/c/104653/
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
More information about the OpenStack-dev mailing list