[OpenStack-Infra] ARA 1.0 deployment plans
Ian Wienand
iwienand at redhat.com
Tue Jun 18 01:55:27 UTC 2019
On Tue, Jun 11, 2019 at 04:39:58PM -0400, David Moreau Simard wrote:
> Although it was first implemented as somewhat of a hack to address the
> lack of scalability of HTML generation, I've gotten to like the design
> principle of isolating a job's result in a single database.
>
> It's easy to scale and keeps latency to a minimum compared to a central
> database server.
I've been ruminating on how all this can work, given some constraints:
- keep current model of "click on a link in the logs, see the ara
results"
- no middleware to intercept such clicks with logs on swift
- don't actually know where the logs are when using swift (there's no
  fixed logs.openstack.org/xy/123456/ path), which makes it harder to
  find job artefacts like sqlite db's after the job runs (have to query
  gerrit or the zuul results db?)
- some jobs, like in system-config, have "nested" ARA reports from
subnodes; essentially reporting twice.
Can the ARA backend import a sqlite run after the fact?  I agree that
having jobs all over the globe send results piecemeal back to a central
db would add too much latency; but if each job logged everything to a
local db as now, and we then uploaded that db to a central location in a
post-run step, that might work?  Although we can't run services or
middleware on the logs directly, we could store the results as we see
fit and run services on a separate host.
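Roughly what I'm imagining for the post-run step, as a Python sketch --
the import endpoint and the returned UUID here are completely made up,
not an existing ARA API:

  import requests

  def upload_ara_db(db_path, url="https://ara.opendev.org/api/v1/imports"):
      # NOTE: this endpoint and the {"uuid": ...} response body are
      # assumptions; nothing like them exists in ARA today.
      with open(db_path, "rb") as f:
          resp = requests.post(url, files={"database": f})
      resp.raise_for_status()
      return resp.json()["uuid"]

  # e.g. from a post-run playbook, pointing at wherever the job's local
  # results db ends up:
  #   print(upload_ara_db("results.sqlite"))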
If, say, you had a role that sent the generated ARA sqlite.db to
ara.opendev.org and got back a UUID, it could then write
ara-report/index.html into the logs, which might just be a straight 301
redirect to https://ara.opendev.org/UUID.  This satisfies the "just
click on it" part.
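The role's half of that might be no more than the following (again just
a sketch; a static file in swift can't itself answer with a real 301, so
a meta refresh page is the closest stand-in I can think of):

  import os

  REDIRECT = """<!DOCTYPE html>
  <html>
    <head>
      <meta http-equiv="refresh"
            content="0; url=https://ara.opendev.org/{uuid}/">
    </head>
    <body>
      <a href="https://ara.opendev.org/{uuid}/">ARA report for this job</a>
    </body>
  </html>
  """

  def write_redirect(uuid, path="ara-report/index.html"):
      # Drop a tiny redirect page into the job logs so "click on the
      # link, see the ara results" keeps working.
      os.makedirs(os.path.dirname(path), exist_ok=True)
      with open(path, "w") as f:
          f.write(REDIRECT.format(uuid=uuid))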
It seems that "all" that then needs to happen is for requests to
https://ara.opendev.org/uuid/api/v1/... to query just the results for
"uuid" in the db.
And could the ara-web app (which is presumably then just statically
served from that host) know that, when started as
https://ara.opendev.org/uuid, it should talk to
https://ara.opendev.org/uuid/api/...?
I think, though, this might be relying on a feature of the ara REST
server that doesn't exist -- the idea of unique "runs"?  Is that
something you'd have to paper over with, say, wsgi starting a separate
ara REST process/thread to respond to each incoming /uuid/api/...
request (maybe the process just starts up pointing at
/opt/logs/uuid/results.sqlite)?
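To make that concrete, the "paper over" layer I have in mind is
something like the wsgi sketch below.  make_ara_app() is a stand-in for
however the real ARA 1.0 API server would actually be instantiated and
pointed at a single database -- it is not an existing interface, which
is exactly the question:

  import os
  import re

  # one app per uuid, created lazily, rather than one process per request
  APPS = {}

  def make_ara_app(db_path):
      # Hypothetical factory; the real thing would return the ARA API
      # application configured to read db_path.
      def app(environ, start_response):
          start_response("200 OK", [("Content-Type", "text/plain")])
          return [("would serve the ARA API from %s\n" % db_path).encode()]
      return app

  def dispatcher(environ, start_response):
      m = re.match(r"^/(?P<uuid>[0-9a-f-]{36})(?P<rest>/.*)$",
                   environ.get("PATH_INFO", ""))
      if not m:
          start_response("404 Not Found", [("Content-Type", "text/plain")])
          return [b"unknown run\n"]
      uuid = m.group("uuid")
      db = os.path.join("/opt/logs", uuid, "results.sqlite")
      if uuid not in APPS:
          APPS[uuid] = make_ara_app(db)
      # strip the uuid prefix so the per-run app sees /api/v1/... as usual
      environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + "/" + uuid
      environ["PATH_INFO"] = m.group("rest")
      return APPS[uuid](environ, start_response)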
This doesn't have to grow indefinitely; we can similarly just have a
cron query to delete rows older than X weeks.
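If we end up with one sqlite file per run, pruning whole databases is
probably even simpler than deleting rows; e.g. (the /opt/logs layout and
the retention period are both made up):

  import os
  import time

  MAX_AGE = 6 * 7 * 24 * 3600  # say, six weeks

  def prune(root="/opt/logs"):
      cutoff = time.time() - MAX_AGE
      for uuid in os.listdir(root):
          db = os.path.join(root, uuid, "results.sqlite")
          if os.path.isfile(db) and os.path.getmtime(db) < cutoff:
              os.remove(db)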
Easy in theory, of course ;)
-i