[openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

Deepak Shetty dpkshetty at gmail.com
Thu Apr 17 16:37:20 UTC 2014


On Thu, Apr 17, 2014 at 10:00 PM, Deepak Shetty <dpkshetty at gmail.com> wrote:

>
>
>
> On Fri, Apr 11, 2014 at 8:25 PM, Eric Harney <eharney at redhat.com> wrote:
>
>> On 04/11/2014 07:54 AM, Deepak Shetty wrote:
>> > Hi,
>> >    I am using the nfs and glusterfs drivers as a reference here.
>> >
>> > I see that load_shares_config is called every time via
>> > _ensure_shares_mounted, which I feel is incorrect, mainly because
>> > _ensure_shares_mounted reloads the config file without the service
>> > being restarted.
>> >
>> > I think that the shares config file should only be loaded once (during
>> > service startup) as part of do_setup and never again.
>> >
>>
>> Wouldn't this change the functionality that this provides now, though?
>>
>
> What functionality are you referring to..? I didn't get you here
>
>
>>
>> Unless I'm missing something, since get_volume_stats calls
>> _ensure_shares_mounted(), this means you can add a new share to the
>> config file and have it become active in the driver.  (While I'm not
>> sure this was the original intent, it could be nice to have and should
>> at least be considered before ditching it.)
>>
>
> That does sound like a good-to-have feature, but it actually is a bug:
> a server IP change takes effect without restarting the service, but if one
> adds -o options they don't take effect even if you restart the service.. so
> I feel what's happening is unintended and actually a bug!
>
> The config should be loaded once, and any changes to it should take effect
> only after a service restart.
>
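
To make the "load once in do_setup" idea above concrete, here is a rough
sketch (untested; the method names and config-file path only approximate
the current nfs/glusterfs drivers, this is not the exact upstream code):

    # Rough sketch: read shares.conf exactly once, at service startup,
    # instead of on every _ensure_shares_mounted() call.
    class RemoteFsDriverSketch(object):

        def do_setup(self, context):
            # Load the shares config once; any later edit to the file
            # requires a service restart to take effect.
            self.shares = {}
            self._load_shares_config('/etc/cinder/shares.conf')

        def _load_shares_config(self, share_file):
            # Each non-comment line: "<address>:/<export> [-o <options>]"
            for line in open(share_file):
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                share_info = line.split(' ', 1)
                share = share_info[0]
                opts = share_info[1] if len(share_info) > 1 else None
                self.shares[share] = opts

        def _ensure_shares_mounted(self):
            # Note: no _load_shares_config() call here any more -- we
            # only mount what was read at startup.
            for share in self.shares:
                self._ensure_share_mounted(share)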

Forgot to add that, for the above to work consistently, we definitely need
a framework / mechanism in cinder where the driver is provided a
function/callback to gracefully clean up its mounts (or anything else)
when the service goes down. Today we don't have such a thing, hence
drivers don't clean up their mounts, and hence when the service starts up,
_ensure_shares_mounted sees the mount already present and does nothing.
This would work nicely if drivers were given the ability to clean up their
mounts as part of service shutdown.
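
Roughly, I'm imagining something like this (illustrative only -- cinder
provides no such shutdown hook today, and all the names here are made up):

    # Hypothetical shutdown hook -- nothing like this exists in cinder
    # today; names are invented for illustration.
    class RemoteFsDriverSketch(object):

        def cleanup_on_shutdown(self):
            # Would be called by the volume service while it shuts
            # down, giving the driver a chance to undo its setup.
            self._unmount_shares()

        def _unmount_shares(self):
            # Unmount every share mounted in do_setup(); the next
            # service start then re-mounts from a clean slate, so new
            # -o options in shares.conf actually take effect.
            for share in self.shares:
                mount_path = self._get_mount_point_for_share(share)
                self._execute('umount', mount_path, run_as_root=True)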

thanx,
deepak


>
>
>> > If someone changes something in the conf file, one needs to restart the
>> > service, which calls do_setup again, and the changes made in shares.conf
>> > take effect.
>> >
>>
>> I'm not sure this is correct given the above.
>>
>
> Please see above.. it works in an incorrect way, which is confusing to
> the admin/user
>
>
>>
>> > Looking further.. _ensure_shares_mounted ends up calling
>> > remotefsclient.mount(), which does _nothing_ if the share is already
>> > mounted.. which is mostly the case. So even if someone changed something
>> > in the shares file (like adding -o options) it won't take effect, as the
>> > share is already mounted and the service is already running.
>> >
>> > In fact, today even if you restart the service the changes in the shares
>> > file won't take effect, as the mount is not unmounted; when the service
>> > is started next, the mount already exists and _ensure_shares_mounted
>> > just returns without doing anything.
>> >
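
To illustrate the no-op mentioned above: the mount() path in the remotefs
client is roughly of this shape (paraphrased from memory, not the exact
upstream source):

    # Approximate shape of remotefsclient.mount() -- a paraphrase,
    # not the real upstream code.
    def mount(self, share, flags=None):
        mount_path = self.get_mount_point(share)
        if mount_path in self._read_mounts():
            # Already mounted: return without doing anything, so any
            # new -o options from shares.conf are never applied.
            return
        self._do_mount(share, mount_path, flags)
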
>> > The only advantage of calling load_shares_config in
>> > _ensure_shares_mounted is if someone changed the share's server IP while
>> > the service is running ... it loads the new share using the new server
>> > IP.. which again is wrong, since ideally one should restart the service
>> > for any shares.conf changes to take effect.
>> >
>>
>> This won't work anyway because of how we track provider_location in the
>> database.  This particular case is planned to be addressed via this
>> blueprint, which reworks configuration:
>>
>>
>> https://blueprints.launchpad.net/cinder/+spec/remotefs-share-cfg-improvements
>>
>
> Agreed, but until this is realized, we can fix the code/flow such that it's
> sane.. in the sense that it works consistently for all cases.
> Today it doesn't.. some changes take effect without a service restart, and
> some don't take effect even after a service restart.
>
> thanx,
> deepak
>
>