We're using Charmed OpenStack with Charmed Kubernetes on top of it. In our k8s environment, we have an nginx ingress controller that uses the openstack-integrator charm to create and manage Octavia load balancers. I'm trying to figure out how to manage the haproxy frontend and backend timeout parameters, which seem to default to 50000 ms, via:
OctaviaTimeoutClientData
OctaviaTimeoutMemberData
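For reference, I believe these correspond to the per-amphora haproxy timeout defaults in Octavia's own config; the section and option names below reflect my understanding of stock Octavia, not anything the charm exposes directly:

```ini
# octavia.conf on the Octavia controller (assumed stock defaults)
[haproxy_amphora]
# Frontend (client-side) inactivity timeout, in milliseconds
timeout_client_data = 50000
# Backend (member-side) inactivity timeout, in milliseconds
timeout_member_data = 50000
```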
Ideally, "the way" would be to specify this in the k8s Service manifest and have it passed down into the Octavia config.
I figured this out... for posterity: k8s Services need these annotations so that Octavia creates the haproxy config properly:
frontend timeout: loadbalancer.openstack.org/timeout-client-data
backend timeout: loadbalancer.openstack.org/timeout-member-data
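For example, a Service along these lines picks up the timeouts when Octavia builds the listener's haproxy config. This is only a sketch: the name, namespace, selector, ports, and the 300000 ms values are placeholders, not what our charm-deployed ingress actually uses; the annotation values are in milliseconds.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx            # placeholder name/namespace
  namespace: ingress-nginx
  annotations:
    # haproxy frontend (client) timeout, ms
    loadbalancer.openstack.org/timeout-client-data: "300000"
    # haproxy backend (member) timeout, ms
    loadbalancer.openstack.org/timeout-member-data: "300000"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx           # placeholder selector
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```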