[OCTAVIA] How to monitor amphora

Joseph Davis joseph.davis at suse.com
Tue Nov 27 00:11:21 UTC 2018

Hello Sa Pham and others,
Just to throw another option into your discussion: the monasca-agent
would also be able to monitor CPU, memory, etc. on an amphora.
Similar to how the Ceilometer agent -> Gnocchi publishing would work, you
could then use Monasca services to track usage and set alarms. And
Monasca handles storage in a time-series database (Cassandra or InfluxDB).
https://github.com/openstack/monasca-agent/blob/master/docs/Agent.md has
some details, and you can always ask questions in #openstack-monasca.
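To give a feel for what such an agent gathers, here is a minimal, self-contained sketch of collecting memory metrics the way monasca-agent's system checks do on a Linux amphora. The sample input is inline so it runs anywhere; on a real amphora you would read /proc/meminfo, and the agent (not this sketch) would post the values as gauges to the Monasca API.

```python
# Hypothetical illustration: parse /proc/meminfo-style "Key:  value kB"
# lines into a dict of integer kB values, then derive a usage percentage.

def parse_meminfo(text):
    """Parse /proc/meminfo-style lines into {name: kB}."""
    metrics = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields and fields[0].isdigit():
            metrics[key.strip()] = int(fields[0])
    return metrics

# Inline sample standing in for open("/proc/meminfo").read() on an amphora.
SAMPLE = """\
MemTotal:        2048000 kB
MemFree:          512000 kB
MemAvailable:    1024000 kB
"""

stats = parse_meminfo(SAMPLE)
mem_used_pct = 100.0 * (stats["MemTotal"] - stats["MemAvailable"]) / stats["MemTotal"]
print(round(mem_used_pct, 1))  # 50.0
```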
Adam Harwell wrote:
I would also note (though I'm guessing you've heard this from Michael
already) that logging and metrics for internal amphora data (like
HAProxy logs, agent logs, etc.) are on our roadmap to officially expose
via some mechanism. We discussed possible ways to do this at the last
PTG, but we were just waiting on someone to have time to do a blueprint
for review. If you'd like to help get support for this upstream, you're
welcome (actually, highly encouraged!) to hop into our IRC channel
#openstack-lbaas and we can even help you get started. Hope to see you
there!
--Adam Harwell (rm_work)

On Mon, Nov 26, 2018, 10:19 Michael Johnson <johnsomor at gmail.com> wrote:

> This is an area we have been talking about recently at the PTGs.
> Adding connection log offloading is on our short term road map.
> Someone had signed up for that, but has pivoted to work on other
> items.
> Due to that I can't say if it will make it for Stein or not. If
> someone is interested in picking this up, I can help point you to the
> documentation of our discussions at the PTGs. If not, I would expect
> it to land late in Stein or early in Train.
> As for other monitoring (CPU, etc.), this is an area we currently leave to
> the operator, but one we would like to do some work on. As mentioned above,
> Gnocchi appears to be a nice time-series database for metrics
> collection. We would need to have some design discussions on what and
> how to collect the data. As you mentioned, monitoring agents can be
> added to the amphora image at build time if the operator would like to
> add a local tool set.
> As with most parts of Octavia, we try to make things "plug-able" so
> that operators have a choice of tools. Both the log offloading and
> metrics should follow that design principle.
> Michael
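For readers less familiar with why Gnocchi keeps coming up here: rather than storing raw points, it pre-aggregates measures into fixed-granularity windows per archive policy. The sketch below is a toy illustration of that storage concept only (it is not the Gnocchi API), rolling raw CPU samples into 60-second means.

```python
# Toy illustration of Gnocchi-style pre-aggregation: bucket raw
# (timestamp, value) samples into fixed granularity windows and keep
# one aggregate (here, the mean) per window.

from collections import defaultdict

def aggregate(samples, granularity=60):
    """samples: list of (unix_timestamp, value).
    Returns sorted [(window_start, mean_value)], one entry per window."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % granularity].append(value)
    return sorted((start, sum(v) / len(v)) for start, v in buckets.items())

raw = [(0, 10.0), (30, 20.0), (60, 40.0), (90, 60.0)]
print(aggregate(raw))  # [(0, 15.0), (60, 50.0)]
```

The practical upshot for amphora metrics is that query cost stays bounded no matter how frequently the agent samples, at the price of choosing aggregation methods and granularities up front.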
> On Sun, Nov 25, 2018 at 10:49 PM Gaël THEROND<gael.therond at gmail.com>
> wrote:
>> Hi!
>> I’ve taken some time to read your presentation and I have to say thank
> you for your wonderful work! It’s pretty much complete, with code
> examples, which is really cool.
>> It’s exactly what I was looking for.
>> If I could just suggest one thing, it would be to give operators freedom
> to choose which logging and which metrics clients they want to use.
>> Doing so would avoid having to rely on gnocchi as you would be able to
> choose the clients type at amphora build time and specify clients target at
> runtime using the amphora-agent.
>> On the other hand, it might well be tricky to do so, as everything is
> namespaced inside the amphora instance.
>> As suggested, using Gnocchi to store HAProxy logs may well be the easiest
> short path. Actually, I’m wondering if it would be possible for Searchlight
> to use those logs at some point.
>> Anyway, thanks for the awesome job.
>> For now I’ll keep it simple and just use the API for basic status.
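The "basic status" mentioned here is available without any in-amphora agent, via Octavia's status tree (e.g. `openstack loadbalancer status show <lb>`). The sketch below walks such a tree and flattens it into (path, operating_status) pairs; the sample structure is illustrative, modeled loosely on the v2 status endpoint's loadbalancer -> listeners -> pools -> members nesting rather than copied from a real response.

```python
# Hypothetical helper: flatten an Octavia-style status tree into
# (path, operating_status) pairs for quick eyeballing or alerting.

def flatten_status(node, path=""):
    pairs = []
    here = f"{path}/{node.get('name') or node.get('id', '?')}"
    if "operating_status" in node:
        pairs.append((here, node["operating_status"]))
    for key in ("listeners", "pools", "members"):  # child collections
        for child in node.get(key, []):
            pairs.extend(flatten_status(child, here))
    return pairs

# Illustrative sample, not an actual API response.
tree = {
    "name": "lb1", "operating_status": "ONLINE",
    "listeners": [{
        "name": "http", "operating_status": "ONLINE",
        "pools": [{
            "name": "web", "operating_status": "DEGRADED",
            "members": [{"name": "m1", "operating_status": "ERROR"}],
        }],
    }],
}
for p, status in flatten_status(tree):
    print(p, status)
```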
>> Kind regards,
>> G.
>> Le lun. 26 nov. 2018 à 04:23, Sa Pham<saphi070 at gmail.com>  a écrit :
>>> Yes, I have discussed this with the Octavia team. PTL Johnson suggested
> that I use Gnocchi to store Octavia amphora metrics.
>>> On Wed, Nov 21, 2018 at 3:16 PM Gaël THEROND<gael.therond at gmail.com>
> wrote:
>>>> Hi! Thanks for this material, I’ll have a look at it.
>>>> As our philosophy at work is to always push all our patches, features,
> and bug fixes upstream, I won’t modify and keep the source code on
> our own; if we need further features like that, I think we will rather
> push for a blueprint with all the required commits.
>>>> Have you already discussed that topic with the Octavia team?
>>>> Thanks a lot.
>>>> Le mer. 21 nov. 2018 à 09:12, Sa Pham<saphi070 at gmail.com>  a écrit :
>>>>> Hi,
>>>>> At Vietnam OpenInfra Days 2018, I gave a presentation about
> monitoring and logging for Octavia amphorae. We had to customize the
> Octavia source code to do this. Here are my presentation slides:
> https://drive.google.com/file/d/1dHXExEKrHDg4Cf3D1fBeulLDW_G-txLr/view?usp=sharing
>>>>> Best,
>>>>> On Wed, Nov 21, 2018 at 3:05 PM Gaël THEROND<gael.therond at gmail.com>
> wrote:
>>>>>> Hi guys,
>>>>>> As already discussed, I had to test Octavia as our corporate
> load balancer solution, and it was a success.
>>>>>> Thanks to everyone on this list who assisted me, and especially
> Michael: Octavia is fully working and without weird nightmare glitches.
>>>>>> Now that I have validated the technical part of my project, I need to
> move on to more operational questions, such as: what’s the best way to
> monitor and log amphorae?
>>>>>> I would like to be able to get a monitoring and logging agent
> installed on the amphora in order to get proper metrics about what’s going
> on with the load balancer: is it fully CPU loaded? Is it using all the
> network resources available? Are the file descriptors near the initial
> limits? How many xxx HTTP return codes is the load balancer facing? And I
> would like to send those logs to an ELK stack or something similar.
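On the "xxx HTTP return codes" point: once HAProxy log offloading exists, a log shipper feeding ELK can derive status-class counters cheaply. A self-contained sketch, with the caveat that the sample lines only loosely follow HAProxy's HTTP log format and the regex (status code right after the Tq/Tw/Tc/Tr/Tt timing fields) is a simplification, not a full parser.

```python
# Count HTTP return codes per class (2xx/4xx/5xx...) from HAProxy-style
# HTTP log lines. The regex keys off the five slash-separated timing
# fields that precede the status code; a simplification for illustration.

import re
from collections import Counter

STATUS_RE = re.compile(r"\s\d+/\d+/\d+/\d+/\d+\s(\d{3})\s")

def count_status_classes(lines):
    classes = Counter()
    for line in lines:
        m = STATUS_RE.search(line)
        if m:
            classes[m.group(1)[0] + "xx"] += 1
    return classes

# Illustrative log lines, not captured from a real amphora.
LOGS = [
    'haproxy[123]: 10.0.0.5:34512 [26/Nov/2018:10:00:01.123] fe be/srv1 0/0/1/2/3 200 1024 "GET / HTTP/1.1"',
    'haproxy[123]: 10.0.0.6:34513 [26/Nov/2018:10:00:02.456] fe be/srv2 0/0/1/5/6 404 512 "GET /x HTTP/1.1"',
    'haproxy[123]: 10.0.0.7:34514 [26/Nov/2018:10:00:03.789] fe be/srv1 0/0/2/9/11 503 256 "POST /y HTTP/1.1"',
]
print(dict(count_status_classes(LOGS)))  # one 2xx, one 4xx, one 5xx
```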
>>>>>> Do you know if that is something I could achieve by adding more
> elements to the image at DIB time, or are there any other
> best practices that I should be aware of?
>>>>>> As Octavia creates a namespace for each HAProxy process, I am
> wondering if it’s even possible.
>>>>>> Thanks a lot for your hard work guys.
>>>>>> Kind regards,
>>>>> --
>>>>> Sa Pham Dang
>>>>> Cloud RnD Team - VCCloud
>>>>> Phone/Telegram: 0986.849.582
>>>>> Skype: great_bn
