[openstack-dev] [Fuel] Removing old node logs

Vladimir Kozhukalov vkozhukalov at mirantis.com
Thu Jun 26 09:25:04 UTC 2014


Making a diagnostic snapshot for a particular environment is a good idea,
but the issue is still there.

We often see the situation where the user doesn't actually care about old
logs at all. He downloads the ISO, installs it, and tries various
installation options (Ubuntu, CentOS, HA, Ceph, etc.). Sooner or later his
hard drive is full and he cannot even make a diagnostic snapshot. Teaching
shotgun to take care of available free space does not seem like a good
idea, but we still need to address this. The easiest way is to delete old
log directories (via logrotate or Nailgun itself); at least then the issue
would become rare. The right way, of course, is to have some kind of
monitoring on the master node that notifies the user when the disk is full
or launches a cleaning task.

OK, the right place to deal with removing old logs is logrotate. Currently
it just rotates files like this:
/var/log/remote/old-node.example.com/some.log ->
/var/log/remote/old-node.example.com/some.log.1.gz
But what it actually should do is remove the whole directories that belong
to nonexistent nodes, right?
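Something along these lines could do the directory cleanup. This is only a
rough sketch: `prune_stale_node_logs`, the `dry_run` flag, and the set of
known hostnames (which would have to come from Nailgun's node list) are all
hypothetical names, not existing Fuel code:

```python
import os
import shutil

REMOTE_LOG_ROOT = "/var/log/remote"


def prune_stale_node_logs(known_hostnames, root=REMOTE_LOG_ROOT, dry_run=True):
    """Remove per-node log directories whose node no longer exists.

    `known_hostnames` is the set of hostnames Nailgun still tracks;
    every other directory under `root` is considered stale. Returns
    the list of stale directory names found (and removed, unless
    `dry_run` is set).
    """
    removed = []
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path) and entry not in known_hostnames:
            removed.append(entry)
            if not dry_run:
                shutil.rmtree(path)
    return removed
```

Whether this runs from logrotate's postrotate hook or from Nailgun itself,
the key point is that it removes whole per-node directories rather than
rotating files inside them forever.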





Vladimir Kozhukalov


On Tue, Jun 24, 2014 at 9:19 PM, Andrey Danin <adanin at mirantis.com> wrote:

> +1 to @Aleksandr
>
>
> On Tue, Jun 24, 2014 at 8:32 PM, Aleksandr Didenko <adidenko at mirantis.com>
> wrote:
>
>> Yes, of course, a snapshot for all nodes at once (as it works currently)
>> should also be available.
>>
>>
>> On Tue, Jun 24, 2014 at 7:27 PM, Igor Kalnitsky <ikalnitsky at mirantis.com>
>> wrote:
>>
>>> Hello,
>>>
>>> @Aleks, it's a good idea to make a snapshot per environment, but I think
>>> we can keep the functionality to make a snapshot for all nodes at once too.
>>>
>>> - Igor
>>>
>>>
>>> On Tue, Jun 24, 2014 at 6:38 PM, Aleksandr Didenko <
>>> adidenko at mirantis.com> wrote:
>>>
>>>> Yeah, I thought about the diagnostic snapshot too. Maybe it would be
>>>> better to implement per-environment diagnostic snapshots, i.e. add
>>>> diagnostic snapshot generate/download buttons/links to the environment
>>>> actions tab. Such a snapshot would contain info/logs about the Fuel
>>>> master node and only the nodes assigned to that environment.
>>>>
>>>>
>>>> On Tue, Jun 24, 2014 at 6:27 PM, Igor Kalnitsky <
>>>> ikalnitsky at mirantis.com> wrote:
>>>>
>>>>> Hi guys,
>>>>>
>>>>> What about our diagnostic snapshot?
>>>>>
>>>>> I mean we're going to make a snapshot of the entire /var/log, and
>>>>> obviously these old logs will be included in the snapshot. Should we
>>>>> skip them, or is such a situation OK?
>>>>>
>>>>> - Igor
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jun 24, 2014 at 5:57 PM, Aleksandr Didenko <
>>>>> adidenko at mirantis.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> If the user runs experiments creating and deleting clusters, then
>>>>>> taking care of old logs is the user's responsibility, I suppose. Fuel
>>>>>> configures log rotation with compression for remote logs, so old logs
>>>>>> will be gzipped and will not take much space.
>>>>>>
>>>>>> If we add the boolean parameter, its default value should be
>>>>>> "0 - don't touch old logs".
>>>>>>
>>>>>> --
>>>>>> Regards,
>>>>>> Alex
>>>>>>
>>>>>>
>>>>>> On Tue, Jun 24, 2014 at 4:07 PM, Vladimir Kozhukalov <
>>>>>> vkozhukalov at mirantis.com> wrote:
>>>>>>
>>>>>>> Guys,
>>>>>>>
>>>>>>> What do you think of removing node logs from the master node right
>>>>>>> after a node is removed from a cluster?
>>>>>>>
>>>>>>> The issue is that when the user experiments, he creates and deletes
>>>>>>> clusters, and the old unused directories remain and take up disk
>>>>>>> space. On the other hand, it is not hard to imagine a situation in
>>>>>>> which the user would like to be able to look at the old logs.
>>>>>>>
>>>>>>> My suggestion is to add a boolean parameter to the settings which
>>>>>>> will manage this piece of logic (1 - remove old logs, 0 - don't
>>>>>>> touch old logs).
>>>>>>>
>>>>>>> Thanks for your opinions.
>>>>>>>
>>>>>>> Vladimir Kozhukalov
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> OpenStack-dev mailing list
>>>>>>> OpenStack-dev at lists.openstack.org
>>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
> Andrey Danin
> adanin at mirantis.com
> skype: gcon.monolake
>
>

