[openstack-dev] [tripleo] Requesting files from the overcloud from the undercloud
Steven Hardy
shardy at redhat.com
Mon Dec 12 14:09:00 UTC 2016
On Wed, Nov 30, 2016 at 01:54:34PM -0700, Alex Schultz wrote:
> Hey folks,
>
> So I'm in the process of evaluating options for implementing the
> capture-environment-status-and-logs[0] blueprint. At the moment my
> current plan is to implement a mistral workflow to execute the
> sosreport to bundle the status and logs up on the requested nodes.
> I'm leveraging a similar concept to the remote execution[1] method
> we currently expose via 'openstack overcloud execute'. The issue I'm
> currently running into is getting the files off the overcloud node(s)
> so that they can be returned to the tripleoclient. The files can be
> large so I don't think they are something that can just be returned as
> output from Heat. So I wanted to ask for some input on the best path
> forward.
>
> IDEA 1: Write something (script or utility) to be executed via Heat on
> the nodes to push the result files to a container on the undercloud.
> Pros:
> - The swift container can be used by the mistral workflow for other
> actions as part of this bundling
> - The tripleoclient will be able to just pull the result files
> straight from swift
> - No additional user access needs to be created to perform operations
> against the overcloud from the undercloud
> Cons:
> - Swift credentials (or token) need to be passed to the script being
> executed by Heat on the overcloud nodes which could lead to undercloud
> credentials being leaked to the overcloud
I think we can just use a swift tempurl? That's aligned with what we
already do for polling metadata from heat (which is put into swift,
then we give a tempurl to the nodes; see /etc/os-collect-config.conf
on the overcloud nodes).
It's also well aligned with what we do for the DeployArtifactURLs
interface.
I guess the main difference here is we're only allowing GET access in
those cases, but here there's probably more scope for abuse, e.g.
POSTing giant files from the overcloud nodes could impact disk space
on the undercloud?
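Something like this on the undercloud side might work for minting a
short-lived, PUT-only tempurl per node (rough sketch only; the
'overcloud-support' container name, key handling and endpoint are
just illustrative):

  # Undercloud side: generate a signed, time-limited PUT URL the node
  # can push its tarball to, without ever seeing swift credentials.
  from swiftclient.utils import generate_temp_url

  project_id = 'abc123'        # placeholder undercloud project id
  temp_url_key = 'secret-key'  # X-Account-Meta-Temp-URL-Key, already set
  node_name = 'overcloud-controller-0'

  path = '/v1/AUTH_%s/overcloud-support/%s.tar.xz' % (project_id,
                                                      node_name)
  url = 'http://192.0.2.1:8080' + generate_temp_url(
      path, seconds=3600, key=temp_url_key, method='PUT')

Restricting the method to PUT and keeping the expiry short should
limit the scope for abuse somewhat, although it obviously doesn't
stop a compromised node uploading something giant before the URL
expires.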
> - I'm not sure if all overcloud nodes would have access to the
> undercloud swift endpoint
I think they will, or the tempurl transport we use for heat won't work.
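For completeness, the node-side push could then be pretty trivial,
something like (again just a sketch, assuming python-requests is
available on the node):

  # Overcloud node side: PUT the sosreport tarball to the signed URL
  # handed to us by the workflow; no undercloud credentials involved.
  import requests

  def push_report(tempurl, tarball_path):
      with open(tarball_path, 'rb') as f:
          resp = requests.put(tempurl, data=f)
      resp.raise_for_status()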
> IDEA 2: Write additional features into undercloud deployment for ssh
> key generation and inclusion into the deployment specifically for this
> functionality to be able to reach into the nodes and pull files out
> (via ssh).
> Pros:
> - We would be able to leverage these 'support' credentials for future
> support features (day 2 operations?)
> - ansible (or similar tooling) could be used to perform operations
> against the overcloud from the undercloud nodes
> Cons:
> - Complexity and issues around additional user access
> - Depending on where the ssh file transfer occurs (client vs mistral),
> additional network access might be needed.
>
> IDEA 2a: Leverage the validations ssh key to pull files off of the
> overcloud nodes
> Pros:
> - ssh keys already exist when enable_validations = true, so we can
> leverage the existing keys
> Cons:
> - Validations can be disabled, possibly preventing 'support' features
> from working
> - Probably should not leverage the same key for multiple functions.
>
> I'm leaning towards idea 1, but wanted to see if there was some other
> form of existing functionality I'm not aware of.
Yeah, I think (1) is probably the way to go, although cases could be
argued for all the approaches you mention.
My main reason for preferring (1) is that I think we'll want the data
to end up in swift anyway, e.g. so UI users can access it (which won't
be possible if we scp some tarball from the overcloud nodes into the
undercloud filesystem directly), so we may as well just push it into
swift from the nodes.
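And once it's in swift, the tripleoclient side is just a plain object
download, something like (sketch only; the container name matches the
hypothetical one above):

  # Client side: pull the bundled results back out of swift.
  import swiftclient.client

  # keystone_session is a placeholder for however the client is
  # already authenticated against the undercloud.
  conn = swiftclient.client.Connection(session=keystone_session)
  _, listing = conn.get_container('overcloud-support')
  for obj in listing:
      _, body = conn.get_object('overcloud-support', obj['name'])
      with open(obj['name'], 'wb') as f:
          f.write(body)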
Steve