[Octavia] multi-arch Amphora
tony at bakeyournoodle.com
Tue Jun 23 03:45:06 UTC 2020
Hello Octavia team!
I have a somewhat obscure question for y'all about the "best way" to
run an Amphora in a multi-arch OpenStack.
I'm going to start with some level-setting assumptions, then describe my
understanding, so that if I'm wrong we can correct that before we get to
the crux of my question(s):
- The build of all artifacts, amphora guest images etc., is fine
- The deployment of OpenStack is reasonably simple and well defined
- An OpenStack that runs entirely on one CPU architecture (aarch64,
  ppc64le, x86_64, etc.) "works"
Bottom line: we don't need to care about how the 'bits' are built or
installed, just that they are :)
When asked to create a loadbalancer, Octavia will spawn an instance (via
Nova) running the image tagged in glance with $amp_image_tag [1,2]. This
instance should be scheduled closer to the ingress/egress of the
network than the workload it's balancing. All communication between the
Amphora and the Octavia API is via REST (as opposed to RPC), and
therefore we won't have any 'endianness/binary bugs'.
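For reference (values here are just the typical defaults, as an
assumption about the deployment), the tag Octavia matches on is set in
octavia.conf, something like:

    [controller_worker]
    # Octavia boots amphorae from the glance image carrying this tag
    amp_image_tag = amphora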
We have an OpenStack with compute nodes running mixed architectures
(aarch64, ppc64le, x86_64, etc.), and we can ensure that images run on
the correct systems (either with hw_architecture or aggregates). To make
that work today we load an appropriately tagged image into glance, and
then the amphora just runs on that type of system; it's all pretty easy.
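As a sketch of what that looks like today (image name and file path are
placeholders; hw_architecture is the glance image property Nova's
ImagePropertiesFilter matches against, and the tag must match
amp_image_tag):

    openstack image create amphora-aarch64 \
        --disk-format qcow2 --container-format bare \
        --file amphora-aarch64.qcow2 \
        --property hw_architecture=aarch64 \
        --tag amphora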
Can, should, we do better?
- Does OVN/DVR alter the performance such that there *is* merit in
  scheduling the amphora closer to the workloads?
- Should we support multiple Amphora images (one per architecture)?
- If so, should this be exposed to the user?
    openstack loadbalancer create \
        --name lb1 --vip-subnet-id public-subnet \

[2] There should be only one image tagged as such.