[openstack-dev] [Heat] stevedore plugins (and wait conditions)

Steven Hardy shardy at redhat.com
Tue Jul 8 21:17:13 UTC 2014


On Tue, Jul 08, 2014 at 03:08:32PM -0400, Zane Bitter wrote:
> I see that the new client plugins are loaded using stevedore, which is great
> and IMO absolutely the right tool for that job. Thanks to Angus & Steve B
> for implementing it.
> 
> Now that we have done that work, I think there are more places we can take
> advantage of it too - for example, we currently have competing native wait
> condition resource types being implemented by Jason[1] and Steve H[2]
> respectively, and IMHO that is a mistake. We should have *one* native wait
> condition resource type, one AWS-compatible one, software deployments and
> any custom plugins that require signalling; and they should all use a common
> SignalResponder implementation that would call an API that is pluggable
> using stevedore. (In summary, what we're trying to make configurable is an
> implementation that should be invisible to the user, not an interface that
> is visible to the user, and therefore the correct unit of abstraction is an
> API, not a resource.)

To clarify, they're not competing as such - Jason and I have chatted about
the two approaches and have been working to maintain a common interface,
such that they would be easily substituted based on deployer or user
preferences.

My initial assumption was that this substitution would happen via resource
mappings in the global environment, but I now see that you are proposing
that the configurable part sit at a lower level, substituting the transport
behind a common resource implementation.
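
To illustrate the stevedore-pluggable API you describe, here's a rough
sketch of how loading a signal "transport" backend might look - purely
illustrative, the "heat.signal_transports" namespace, driver names and
signal_transport option are invented and don't exist in Heat today:

    # Sketch only: the resource the user sees stays the same; the deployer
    # selects the backend via config and stevedore loads it.
    from oslo.config import cfg
    from stevedore import driver

    cfg.CONF.register_opts([
        cfg.StrOpt('signal_transport', default='heat_api',
                   help='Backend used to deliver signals, e.g. heat_api '
                        'or swift (hypothetical option).'),
    ])

    def load_signal_transport():
        mgr = driver.DriverManager(
            namespace='heat.signal_transports',  # hypothetical namespace
            name=cfg.CONF.signal_transport,      # deployer's one-time choice
            invoke_on_load=True,
        )
        return mgr.driver

Each backend (heat-api signal, Swift, etc.) would then implement the same
small interface behind the common SignalResponder you describe.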

Regarding forcing deployers to make a one-time decision, I have some
questions about the cost (money and performance) of the Swift approach vs
just hitting the Heat API:

- If folks use the Swift resource and it stores data associated with the
  signal in Swift, does that incur a cost to the user in a public cloud
  scenario?
- What sort of overhead are we adding, with the signals going to Swift and
  then, in the current implementation, being copied back into the Heat DB[1]?

It seems to me at the moment that the Swift notification method is good if
you have significant data associated with the signals, but there are
advantages to the simple API signal approach I've been working on when you
just need a "one shot", low-overhead way to get data back from an
instance.

FWIW, the reason I revived these patches was that I found
SoftwareDeployments did not meet my needs for a really simple signalling
mechanism when writing tempest tests:

https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml

These tests currently use the AWS WaitCondition resources, and I wanted a
native alternative without the complexity of using SoftwareDeployments
(which also won't work with minimal cirros images without some pretty hacky
workarounds[2]).
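
For reference, the sort of minimal, agentless usage I'm after looks roughly
like this - a sketch only, the resource and property names and the curl_cli
attribute are taken from the patches under review and may still change:

    heat_template_version: 2013-05-23

    resources:
      wait_handle:
        # hands out a pre-signed URL the instance can signal
        type: OS::Heat::WaitConditionHandle

      wait_condition:
        # stack create blocks here until one signal arrives or we time out
        type: OS::Heat::WaitCondition
        properties:
          handle: {get_resource: wait_handle}
          count: 1
          timeout: 600

      server:
        type: OS::Nova::Server
        properties:
          image: cirros-0.3.2-x86_64-uec   # illustrative image/flavor
          flavor: m1.tiny
          user_data_format: RAW
          user_data:
            str_replace:
              template: |
                #!/bin/sh
                # one shot signal back to heat, no agents required
                wc_notify --data-binary '{"status": "SUCCESS"}'
              params:
                wc_notify: {get_attr: [wait_handle, curl_cli]}

That's about the level of complexity I think the tempest scenarios need -
one resource pair and a one-line signal from the instance.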

I'm all for making things simple and avoiding duplication and confusion for
users, but I'd like to ensure that making this a one-time deployer-level
decision definitely makes sense, vs giving users some choice over which
method is used.
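
To make the kind of choice I mean concrete, I was picturing a simple
resource_registry mapping in the (global or per-stack) environment,
something like the following - the Swift-backed type names are just
hypothetical stand-ins for Jason's resources:

    resource_registry:
      "OS::Heat::WaitCondition": "OS::Heat::SwiftWaitCondition"
      "OS::Heat::WaitConditionHandle": "OS::Heat::SwiftWaitConditionHandle"

Deployers could set that globally, but a user who cares could still
override it per stack, which is the flexibility we'd be giving up with a
config-only, stevedore-level switch.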

[1] https://review.openstack.org/#/c/96947/
[2] https://review.openstack.org/#/c/91475/

Steve


