[Octavia] multi-arch Amphora

Michael Johnson johnsomor at gmail.com
Tue Jun 23 23:46:35 UTC 2020


Hi Tony,

My thoughts on your question:

It is almost always better to locate the load balancing engine closer
to the ingress point. This lets the load balancer make the decision
about which backend member is best suited to handle the traffic, and
then lets the underlying cloud networking, be it OVS, OVN, or other,
make the best choice for passing the traffic. This especially applies
to OVN and DVR with their lookup tables.

Should we support multiple amphora architecture types?
I think if there is demand for it we should. I can tell you there are
64-bit ARM SoCs that can easily saturate gigabit links without breaking
a sweat using our engines.

How to select the underlying architecture?
I would start with Octavia flavors. I would implement a default
"architecture" configuration setting, then add support for the
operator to define Octavia flavors for the various available
architectures to override that default at load balancer creation time.
I think we will be able to do this via the glance "tags" Octavia
flavor setting [1], but we could also make the architecture a
first-class citizen in Octavia flavors.
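
To make that concrete, a per-architecture flavor could look something
like the sketch below. The flavor and profile names, the image tag
value, and the "amp_image_tag" flavor-data key are all placeholders of
mine (the exact key name depends on how [1] lands), so treat this as an
illustration rather than a working recipe:

  # Hypothetical sketch; assumes [1] adds an "amp_image_tag" capability
  # to the amphora provider's flavor-data. Names and tag are made up.
  openstack loadbalancer flavorprofile create \
    --name amphora-aarch64-profile --provider amphora \
    --flavor-data '{"amp_image_tag": "amphora-aarch64"}'
  openstack loadbalancer flavor create \
    --name aarch64 --flavorprofile amphora-aarch64-profile --enable

  # At load balancer creation time the user would then select it with:
  openstack loadbalancer create \
    --name lb1 --vip-subnet-id public-subnet --flavor aarch64

The default "architecture" configuration setting would cover load
balancers created without a flavor.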

As I have mentioned to others interested in endianness issues, we have
two protocols between the control plane and the amphora: one is
REST-based over TLS, the other is a signed heartbeat message over UDP.
From a quick review I think the Octavia implementation should be OK,
but it is a risk and should be explicitly tested and/or deeply
evaluated before I would say there are no endianness gremlins present.
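
To illustrate the class of bug to test for, here is a generic sketch
(not the actual Octavia heartbeat wire format, and the key is a
placeholder) showing where byte order matters in a signed UDP payload:
binary packing in native order differs between hosts, while explicit
network order or text serialization such as JSON does not.

  # Generic illustration only -- NOT the real Octavia heartbeat format.
  import hashlib
  import hmac
  import json
  import struct

  KEY = b"heartbeat-key"  # placeholder secret

  def pack_binary(seq, load, native=False):
      # Binary payloads depend on the struct format prefix: '!' is
      # explicit network (big-endian) order, while '=' follows the host
      # CPU -- the case where a mixed-endian control plane and amphora
      # would disagree about the bytes being signed.
      fmt = "=IQ" if native else "!IQ"
      payload = struct.pack(fmt, seq, load)
      return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

  def pack_json(msg):
      # Text serialization has no byte-order problem; only the shared
      # HMAC key and the exact bytes on the wire have to match.
      payload = json.dumps(msg, sort_keys=True).encode("utf-8")
      return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

  # pack_binary(1, 2, native=True) yields different bytes (and thus a
  # different signature) on little- vs big-endian hosts; pack_json and
  # pack_binary(..., native=False) are identical everywhere.

A cross-architecture test that sends heartbeats from a big-endian
amphora to a little-endian controller (and vice versa) would settle
the question either way.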

[1] https://review.opendev.org/737528

Michael

On Mon, Jun 22, 2020 at 8:49 PM Tony Breeds <tony at bakeyournoodle.com> wrote:
>
> Hello Octavia team!
>     I have a somewhat obscure question for y'all about the "best way" to
> run an Amphora in a multi-arch OpenStack.
>
> I'm going to start with some level-setting assumptions, then describe my
> understanding so that if I'm wrong we can correct that before we get to
> the crux of my question(s).
>
> Assumptions
> -----------
>  - The build of all objects (amphora guest images, etc.) is fine
>  - The deployment of OpenStack is reasonably simple and defined.
>  - An OpenStack that runs entirely on one CPU architecture (aarch64,
>    ppc64le, x86_64, etc) "works"
>
> Bottom line: we don't need to care about how the 'bits' are built or
> installed, just that they are :)
>
> My Understanding
> ----------------
>
> When asked to create a load balancer, Octavia will spawn an instance (via
> Nova) running the image tagged in glance with $amp_image_tag [1,2].  This
> instance should be scheduled closer to the ingress/egress of the
> network than the workload it's balancing.  All communication between the
> Amphora and Octavia API is via REST (as opposed to RPC) and therefore we
> won't have any 'endianness/binary bugs' [3].
>
> The question(s)
> ---------------
>
> We have an OpenStack that has compute nodes running mixed architectures
> (aarch64, ppc64le, x86_64, etc); we can ensure that images run on the
> correct system (either with hw_architecture or aggregates).  To make
> that work today we load an appropriately tagged image into glance and
> then the amphora just runs on one type of system and it's all pretty easy.
>
> Can, should, we do better?
>  - Does OVN/DVR alter the performance such that there *is* merit in
>    scheduling the amphora closer to the workloads
>  - Should we support multiple Amphora (one per architecture)?
>  - If so should this be exposed to the user?
>      openstack loadbalancer create \
>        --name lb1 --vip-subnet-id public-subnet \
>        --run-on aarch64
>
>
> Yours Tony.
>
> [1]
> https://docs.openstack.org/octavia/latest/configuration/configref.html#controller_worker.amp_image_tag
> [2] There should be only one image tagged as such:
> https://opendev.org/openstack/octavia/src/branch/master/octavia/compute/drivers/nova_driver.py#L34-L57
> [3] https://www.youtube.com/watch?v=oBSuXP-1Tc0


