[heat] keystone endpoint configuration
Zane Bitter
zbitter at redhat.com
Thu Feb 28 16:22:33 UTC 2019
Sending to the list as I intended the first time... with an edit to
take into account new info from Rabi.
On 27/02/19 5:29 PM, Jonathan Rosser wrote:
> Conversely, heat itself needs to be able to talk to many other openstack components, defined in the [clients_*] config sections. It is reasonable to describe these interactions as being "Internal" - I may misunderstand some of this though.
Yeah, that is reasonable, and in fact we give you the option in the
config file to choose which set of endpoints to use from the catalog.
But we assume that there is only one auth_url used to fetch the catalog.
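For illustration, I mean roughly this kind of thing in heat.conf (the
hostnames here are invented):

    [clients]
    # Which endpoint type Heat selects from the service catalog for
    # its own calls to other OpenStack services
    endpoint_type = internalURL

    [clients_keystone]
    # The single auth URL that Heat uses to authenticate and fetch
    # the catalog in the first place
    auth_uri = https://keystone.internal.example.com:5000/v3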
> So here lies the issue - appropriate entries in heat.conf to make internal interactions between heat and horizon (one example) work in real-world deployments result in the keystone internal URL being placed in callbacks, and then SoftwareDeployments never complete as the internal keystone URL is not usually accessible to a VM.
Do you know which config options affect Horizon? It's surprising to me
that anything in *Heat's* config would make a difference to Horizon.
The only thing I can think of is that www_authenticate_uri might have to
be on the internal network for Horizon to work, but Colleen already said
that it should always be on the public network. And in any event it
would be overridden for the purposes of signal URLs by setting
[clients_keystone]/auth_uri to the public endpoint.
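Concretely, I'd expect something like this to do it (hostnames
invented):

    [keystone_authtoken]
    # Per Colleen, this should always be the public endpoint
    www_authenticate_uri = https://keystone.public.example.com:5000/v3

    [clients_keystone]
    # Overrides www_authenticate_uri for the purposes of signal URLs
    auth_uri = https://keystone.public.example.com:5000/v3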
> I suspect that there is not much coverage for this kind of network separation in gate tests.
Yeah, I suspect only the deployment projects run tests with
complex-enough networking setups to be able to verify this, and it's not
their job to check that Heat supports this use case.
> There are already examples of similar config options in heat.conf, such
> as "heat_waitcondition_server_url" - would additional config items such
> as server_base_auth_url and signal_responder_auth_url be appropriate so
> that we can be totally explicit about the endpoints handed on to created
> VMs?
Yes, that's along the lines of what I was thinking too (although I think
we'd only need one option, for URLs destined to be called from
userspace). We already have an endpoint_type option (that defaults to
PublicURL), so maybe we just need to be able to specify
internal_auth_uri and public_auth_uri and we can select based on the
endpoint type when we're using the clients internally, but always use
the public one when gathering data to pass to a VM?
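To sketch what I mean (note that internal_auth_uri and public_auth_uri
are hypothetical option names; neither exists today):

    [clients_keystone]
    # Hypothetical new options - nothing like these exists yet
    internal_auth_uri = https://keystone.internal.example.com:5000/v3
    public_auth_uri = https://keystone.public.example.com:5000/v3

    [clients]
    # Existing option (defaults to PublicURL); Heat's internal client
    # calls would pick between the two URLs above based on this, while
    # anything handed to a VM would always get the public one
    endpoint_type = internalURL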
[Edit: Rabi pointed out that X-Auth-Url is inserted by middleware, and
gets its information from the same config options, so the following
paragraph is a red herring.]
Looking closer at the code, though, I wonder if part of the problem here
is that we use the X-Auth-Url header from the request context whenever
it is available (and only fall back to the config options when it is
not), which should generally always be the public URL... but possibly is
the internal URL if the request comes from Horizon... or Magnum?? In
fact, I wonder if that could be the _whole_ problem in terms of the
stuff you laid out at the beginning of the thread?
Notwithstanding that, there are still cases with completely isolated
networks that cannot be covered by config options alone, e.g. as
discussed in https://storyboard.openstack.org/#!/story/2004524 you can
get the 'signal_url' attribute from OS::Heat::ScalingPolicy. That can
either be passed to Aodh (should presumably use the internal auth URI)
or e.g. given to a VM that might be running the user's own monitoring
system (must use the public auth URI). The only way I can see to solve
that would be to have separate 'internal_signal_url' and
'public_signal_url' attributes (are ordinary users even supposed to be
able to see internal URLs in the catalog?).
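For reference, the kind of template usage I'm talking about is
something like this (a minimal fragment; the resource and output names
are invented, and 'asg' stands in for an autoscaling group defined
elsewhere):

    resources:
      scale_up_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: {get_resource: asg}
          scaling_adjustment: 1

    outputs:
      scale_up_url:
        # Today there is a single 'signal_url' attribute; the question
        # is whether we'd need internal/public variants of it
        value: {get_attr: [scale_up_policy, signal_url]}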
That's not an ideal experience for users, since on the vast majority of
clouds, where the services are able to access external endpoints, the
public URL will continue to work in either case (so it's extra work to
figure out which to use, and people will choose wrong, leaving lots of
non-interoperable templates floating around out there). But I guess it
may be better than what we have now.
cheers,
Zane.