[openstack-dev] [fuel][nailgun][volume-manager][fuel-agent] lvm metadata size value. why was it set to 64M?

Evgeniy L eli at mirantis.com
Thu Feb 18 10:01:50 UTC 2016


Hi Alexander,

I tried to trace the change and found a three-year-old commit [0]; yes, it's
hard to recover the reason behind it.
So what we should ask is: what is the right way to calculate the LVM metadata
size, and then change this behaviour accordingly.

I would suggest at least explicitly setting the metadata size on the Nailgun
side to the same value we have in the agent (until a better size is found),
plus explicitly reserving some space based on the optimal IO size of the
specific disk.
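
Something along these lines, as a rough untested sketch (the constant, the
helper name and the PE size default are placeholders, not actual Nailgun
code):

    # Keep one explicit constant shared by Nailgun and fuel-agent, and
    # reserve extra space per disk based on the optimal IO size the
    # kernel reports for it.
    LVM_METADATA_SIZE_MIB = 64  # placeholder until a better value is agreed on

    def reserved_space_mib(optimal_io_size_bytes, pe_size_mib=4):
        # Round the optimal IO size up to a whole number of physical
        # extents, so LVM's automatic alignment cannot eat into the LVs.
        pe_bytes = pe_size_mib * 1024 ** 2
        alignment_mib = -(-optimal_io_size_bytes // pe_bytes) * pe_size_mib
        return LVM_METADATA_SIZE_MIB + alignment_mib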

Thanks,

[0]
https://github.com/Mirantis/fuelweb/commit/d4d14b528b76b8e9fcbca51d3047a3884792d69f
[1] https://www.redhat.com/archives/linux-lvm/2012-April/msg00024.html

On Wed, Feb 17, 2016 at 8:51 PM, Alexander Gordeev <agordeev at mirantis.com>
wrote:

> Hi,
>
> Apparently, nailgun assumes that the LVM metadata size is always set to 64M [1].
>
> It seems that this value has been there since the very beginning of nailgun
> as a project, so it's impossible to figure out what it was chosen for, as the
> early commit messages are not very informative.
>
> According to the documentation (man lvm.conf):
>
>               pvmetadatasize — Approximate number of sectors to set aside
> for each copy of the metadata. Volume groups with large numbers of physical
> or logical volumes, or volume groups containing complex logical volume
> structures will need additional space for their metadata. The metadata areas
> are treated as circular buffers, so unused space becomes filled with an
> archive of the most recent previous versions of the metadata.
>
>
> The default value is 255 sectors (roughly 128 KiB).
>
> Quotation from particular lvm.conf sample:
>     # Approximate default size of on-disk metadata areas in sectors.
>     # You should increase this if you have large volume groups or
>     # you want to retain a large on-disk history of your metadata changes.
>
>     # pvmetadatasize = 255
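>
> For illustration only, the value can also be set per physical volume at
> creation time instead of through lvm.conf; a hedged sketch of what a
> deployment tool might run (the device path is made up):
>
>     import subprocess
>
>     # pvcreate accepts an explicit on-disk metadata area size, overriding
>     # the pvmetadatasize default from lvm.conf
>     subprocess.check_call(['pvcreate', '--metadatasize', '64m', '/dev/sdb1'])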
>
>
> Nailgun's volume manager calculates the sizes of the logical volumes within
> each physical volume group, taking into account the size of the LVM
> metadata [2].
>
> However, because logical volume sizes get rounded up to the nearest multiple
> of the PE size (usually 4M), fuel-agent always ends up short of free space
> when it creates logical volumes exactly according to the partitioning scheme
> generated by the volume manager.
> Thus, tricky logic was added to fuel-agent [3] to work around that flaw.
> Since 64M is a much bigger value than the typical one, fuel-agent silently
> reduces the LVM metadata size by 8M, after which partitioning always goes
> smoothly.
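>
> To illustrate the rounding problem with made-up numbers (not taken from a
> real partitioning scheme):
>
>     PE_SIZE_MIB = 4
>
>     def round_up_to_pe(size_mib, pe=PE_SIZE_MIB):
>         # LVM allocates whole physical extents, so every LV size is
>         # effectively rounded up to the nearest multiple of the PE size
>         return -(-size_mib // pe) * pe
>
>     vg_mib, metadata_mib = 10000, 64
>     planned = [5000, 2937, 1999]   # sums to exactly vg_mib - metadata_mib
>     actual = sum(round_up_to_pe(s) for s in planned)
>     print(actual)  # 9940, i.e. 4 MiB more than the 9936 MiB available
>     # shrinking the metadata area by 8 MiB absorbs the overshoot and
>     # typically leaves ~4 MiB of the VG unused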
>
> Consequently, almost every physical volume group is left with only 4M of free
> space. That worked fine on good old HDDs.
>
> But when it comes to using an FC/HBA/HW RAID block storage device, which
> occasionally reports relatively large values for the minimal and optimal IO
> sizes exposed in sysfs, fuel-agent might once again run out of free space
> because of logical volume alignment within the physical volume group [4].
> That alignment is done automatically by LVM with respect to those values [5].
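>
> For reference, these are the sysfs attributes in question; a small sketch of
> how they could be read (the device name is only an example):
>
>     def read_io_limit(device, attr):
>         # e.g. /sys/block/sda/queue/optimal_io_size, value is in bytes
>         with open('/sys/block/%s/queue/%s' % (device, attr)) as f:
>             return int(f.read().strip())
>
>     minimum = read_io_limit('sda', 'minimum_io_size')
>     optimal = read_io_limit('sda', 'optimal_io_size')
>     # LVM aligns data areas with respect to these values, so any space
>     # reservation for metadata has to account for them as well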
>
> As I'm going to trade off a portion of the disk space reserved for LVM
> metadata for the sake of logical volume alignment, here are the questions:
>
> * why was the LVM metadata size set to 64M?
> * could someone shed more light on any obvious reasons/needs hidden behind
> that?
> * what is the minimal LVM metadata size we'll be happy with?
> * the same question for the optimal size.
>
>
> [1]
> https://github.com/openstack/fuel-web/blob/6bd08607c6064e99ad2ed277b1c17d7b23b13c8a/nailgun/nailgun/extensions/volume_manager/manager.py#L824
> [2]
> https://github.com/openstack/fuel-web/blob/6bd08607c6064e99ad2ed277b1c17d7b23b13c8a/nailgun/nailgun/extensions/volume_manager/manager.py#L867-L875
> [3]
> https://github.com/openstack/fuel-agent/commit/c473202d4db774b0075b8d9c25f217068f7c1727
> [4] https://bugs.launchpad.net/fuel/+bug/1546049
> [5] http://people.redhat.com/msnitzer/docs/io-limits.txt
>
>
> Thanks,
>

