[openstack-dev] [oslo][performance] Proposing tail-based sampling in OSProfiler

Boris Pavlovic boris at pavlovic.me
Thu Aug 3 20:01:52 UTC 2017


Rajul,

May I ask why you think so?


The issues exposed by OSProfiler are going to be really hard to fix in the
current OpenStack architecture.

Best regards,
Boris Pavlovic

On Thu, Aug 3, 2017 at 12:56 PM, Rajul Kumar <kumar.raju at husky.neu.edu>
wrote:

> Hi Boris
>
> Good to hear from you.
> May I ask why you think so?
>
> We do see some potential with OSProfiler for this and further objectives.
>
> Thanks
> Rajul
>
> On Thu, Aug 3, 2017 at 3:48 PM, Boris Pavlovic <boris at pavlovic.me> wrote:
>
>> Rajul,
>>
>> It makes sense! However, maybe it's a bit too late... ;)
>>
>> Best regards,
>> Boris Pavlovic
>>
>> On Thu, Aug 3, 2017 at 12:16 PM, Rajul Kumar <kumar.raju at husky.neu.edu>
>> wrote:
>>
>>> Hello everyone
>>>
>>> I have added a blueprint on having tail-based sampling as a sampling
>>> option for continuous tracing in OSProfiler. It would be really helpful to
>>> have some thoughts, ideas, comments on this from the community.
>>>
>>> Continuous tracing provides good insight into how various transactions
>>> behave across a distributed system. Currently, OpenStack doesn't have a
>>> defined solution for continuous tracing. Although OSProfiler can generate
>>> selective traces, it may not capture the occurrence of interest. Even if
>>> we run OSProfiler continuously [1], we need to sample the traces to cut
>>> down the data generated while still keeping the useful information.
>>>
>>> Head-based sampling, which decides up front whether a trace should be
>>> saved, could be applied here. However, it may miss some useful traces. I
>>> propose a tail-based sampling mechanism [2] that makes the decision at
>>> the end of the transaction and therefore tends to keep all the useful
>>> traces. This may require substantial changes depending on what type of
>>> information is needed and the solution we pick to implement it [2]. It
>>> should not affect the current operation of any OpenStack service, as it
>>> will be off the critical path [3].
>>>
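>>> To illustrate the idea only, here is a minimal Python sketch of a
>>> tail-based sampler; the class and method names are hypothetical and not
>>> part of OSProfiler's actual API. Spans are buffered per trace, and the
>>> keep/drop decision is made only once the trace finishes, e.g. keeping
>>> traces that contain errors or exceed a latency threshold.
>>>
>>>     # Hypothetical sketch; not OSProfiler's real interface.
>>>     from collections import defaultdict
>>>
>>>     class TailBasedSampler(object):
>>>         """Buffers spans per trace; decides when the trace completes."""
>>>
>>>         def __init__(self, latency_threshold=1.0):
>>>             self.latency_threshold = latency_threshold  # seconds
>>>             self._buffers = defaultdict(list)
>>>
>>>         def on_span(self, trace_id, span):
>>>             # Keep every span in memory until the whole trace finishes.
>>>             self._buffers[trace_id].append(span)
>>>
>>>         def on_trace_end(self, trace_id):
>>>             spans = self._buffers.pop(trace_id, [])
>>>             if not spans:
>>>                 return None
>>>             duration = (max(s["stop"] for s in spans) -
>>>                         min(s["start"] for s in spans))
>>>             has_error = any(s.get("error") for s in spans)
>>>             # "Useful trace" policy: keep slow or failed transactions.
>>>             if has_error or duration >= self.latency_threshold:
>>>                 return spans  # hand off to an asynchronous collector [3]
>>>             return None       # drop the trace and reclaim the buffer
>>>
>>> A head-based sampler, by contrast, would make this keep/drop choice on
>>> the first span of a trace, before the duration or any error is known.
>>>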
>>> Please share your thoughts on this and on which solution should be
>>> preferred from a broader OpenStack perspective. This is a step toward an
>>> automated diagnostic solution for an OpenStack cluster.
>>>
>>> [1] https://blueprints.launchpad.net/osprofiler/+spec/osprofiler-overhead-control
>>> [2] https://blueprints.launchpad.net/osprofiler/+spec/tail-based-coherent-sampling
>>> [3] https://blueprints.launchpad.net/osprofiler/+spec/asynchronous-trace-collection
>>>
>>> Thanks
>>> Rajul Kumar
>>>
>>>
>

