[openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report
Clint Byrum
clint at fewbar.com
Wed Aug 27 18:03:10 UTC 2014
Excerpts from Steven Hardy's message of 2014-08-27 10:08:36 -0700:
> On Wed, Aug 27, 2014 at 09:40:31AM -0700, Clint Byrum wrote:
> > Excerpts from Zane Bitter's message of 2014-08-27 08:41:29 -0700:
> > > On 27/08/14 11:04, Steven Hardy wrote:
> > > > On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
> > > >> I am a little bit skeptical about using Swift for this use case because
> > > >> of its eventual consistency issues. I am not sure a Swift cluster is a
> > > >> good fit for this kind of problem. Please note that a Swift cluster may
> > > >> give you old data at some point in time.
> > > >
> > > > This is probably not a major problem, but it's certainly worth considering.
> > > >
> > > > My assumption is that the latency of making the replicas consistent will
> > > > be small relative to the timeout for things like SoftwareDeployments, so
> > > > all we need is to ensure that instances eventually get the new data and
> > > > act on it.
> > >
> > > That part is fine, but if they get the new data and then later get the
> > > old data back again... that would not be so good.
> > >
> >
> > Agreed, and I had not considered that this could happen.
> >
> > There is a not-so-simple answer though:
> >
> > * Heat inserts this as initial metadata:
> >
> > {"metadata": {}, "update-url": "xxxxxx", "version": 0}
> >
> > * Polling goes to the update-url and ignores any metadata whose version
> >   is <= the version it already has (initially 0)
> >
> > * Polling finds new metadata in the same format, and continues the loop
> >   without talking to Heat
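
For illustration, here is a minimal sketch of that loop in Python, assuming
JSON payloads shaped like the example above and a hypothetical
apply_metadata() handler (this is not os-collect-config code):

    import time

    import requests

    def apply_metadata(metadata):
        # Stand-in for whatever acts on the new config (hypothetical).
        print("new metadata: %s" % metadata)

    def poll(update_url, known_version=0):
        """Follow the chain of update-urls, ignoring any reply whose
        version is <= the highest one already seen, so a stale read
        from an eventually-consistent replica can never roll the
        instance's config back."""
        while True:
            doc = requests.get(update_url).json()
            if doc["version"] > known_version:
                known_version = doc["version"]
                update_url = doc["update-url"]
                apply_metadata(doc["metadata"])
            time.sleep(30)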
> >
> > However, this makes me rethink why we are having performance problems.
> > MOST of the performance problems have two root causes:
> >
> > * We parse the entire stack to show metadata, because we have to see if
> > there are custom access controls defined in any of the resources used.
> > I actually worked on a patch set to deprecate this part of the resource
> > plugin API because it is impossible to scale this way.
> > * We rely on the engine to respond because of the parsing issue.
> >
> > If, however, we could just push metadata into the db fully resolved
> > whenever things in the stack change, and cache the response in the API
> > using Last-Modified/ETag headers, I think we'd be less inclined to care
> > so much about Swift for polling. However, we are still left with many
> > thousands of keystone users being created vs. thousands of Swift tempurls.
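
A sketch of how small that caching path could be (hypothetical helper, not
actual heat-api code): compute an ETag over the already-resolved metadata
row and short-circuit with 304 when the poller's If-None-Match still matches:

    import hashlib
    import json

    def metadata_response(resolved_doc, if_none_match=None):
        """Serve metadata that was pushed fully resolved into the db,
        never touching the engine; 304 if the client's ETag matches."""
        body = json.dumps(resolved_doc, sort_keys=True).encode("utf-8")
        etag = hashlib.md5(body).hexdigest()
        if if_none_match == etag:
            return 304, {"ETag": etag}, b""
        return 200, {"ETag": etag}, body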
>
> There are probably a few relatively simple optimisations we can do if the
> keystone user thing becomes the bottleneck:
> - Make the user an attribute of the stack and only create one per
> stack/tree-of-stacks
> - Make the user an attribute of each server resource (probably more secure,
> but less optimal if your goal is fewer keystone users).
>
> I don't think the many keystone users thing is actually a problem right now
> though, or is it?
1000 servers means 1000 keystone users to manage, and all of the tokens
and backend churn that implies.
It's not "a problem", but it is quite a bit heavier than tempurls.
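
For comparison, a tempurl is just an HMAC over the method, expiry and object
path using one shared account key; there is nothing per-server to create in
keystone or clean up later. A sketch of the standard Swift signing scheme
(the storage endpoint host is elided; prepend it to the returned path):

    import hmac
    import time
    from hashlib import sha1

    def temp_url(key, method, path, ttl):
        """Sign a time-limited Swift URL, where path looks like
        '/v1/AUTH_account/container/object'."""
        expires = int(time.time()) + ttl
        body = "%s\n%d\n%s" % (method, expires, path)
        sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
        return "%s?temp_url_sig=%s&temp_url_expires=%d" % (
            path, sig, expires)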