[openstack-dev] [nova] question about e41fb84 "fix anti-affinity race condition on boot"
joe.gordon0 at gmail.com
Mon Mar 17 22:19:27 UTC 2014
On Mon, Mar 17, 2014 at 12:52 PM, Jay Pipes <jaypipes at gmail.com> wrote:
> On Mon, 2014-03-17 at 12:39 -0700, Joe Gordon wrote:
> > On Mon, Mar 17, 2014 at 12:29 PM, Andrew Laski
> > <andrew.laski at rackspace.com> wrote:
> > > On 03/17/14 at 01:11pm, Chris Friesen wrote:
> > > > On 03/17/2014 11:59 AM, John Garbutt wrote:
> > > > > On 17 March 2014 17:54, John Garbutt <john at johngarbutt.com> wrote:
> > > > > > Given the scheduler split, writing that value into the nova db
> > > > > > from the scheduler would be a step backwards, and it probably
> > > > > > breaks lots of code that assumes the host is not set until much
> > > > > > later.
> > > > Why would that be a step backwards? The scheduler has picked a host
> > > > for the instance, so it seems reasonable to record that information
> > > > in the instance itself as early as possible (to be incorporated into
> > > > other decision-making) rather than have it be implicit in the
> > > > destination of the next RPC message.
> > > >
> > > > Now I could believe that we have code that assumes that having
> > > > "instance.host" set implies that it's already running on that host,
> > > > but that's a different issue.
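To make the race concrete, here is a minimal sketch in plain Python (invented names, not the actual Nova filter code): two boot requests for the same anti-affinity group both run the filter check before either instance has its host recorded, so both pass and can land on the same host.

```python
# Sketch of the anti-affinity race (hypothetical names, not Nova code).
# The filter only sees instances whose "host" field is already set.

instances = {}  # instance_id -> {"group": ..., "host": ...}

def anti_affinity_ok(group, candidate_host):
    """Pass if no instance in the group is (yet) recorded on candidate_host."""
    used = {i["host"] for i in instances.values()
            if i["group"] == group and i["host"] is not None}
    return candidate_host not in used

def schedule(instance_id, group, candidate_host):
    # Filter check happens here, but host is not written until much later.
    assert anti_affinity_ok(group, candidate_host)
    instances[instance_id] = {"group": group, "host": None}
    return candidate_host

# Two concurrent boots in the same group: both filter checks run before
# either instance's host is written, so both pick the same host.
h1 = schedule("vm-1", "grp", "compute-1")
h2 = schedule("vm-2", "grp", "compute-1")

# Only later (on the compute node) is the host actually recorded:
instances["vm-1"]["host"] = h1
instances["vm-2"]["host"] = h2
print(h1 == h2)  # True: anti-affinity violated
```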
> > > > > I forgot to mention, I am starting to be a fan of a two-phase
> > > > > commit approach, which could deal with these kinds of things in a
> > > > > more explicit way, before starting the main boot process.
> > > > >
> > > > > It's not as elegant as a database transaction, but that doesn't
> > > > > seem possible in the long run, though there could well be
> > > > > something I am missing here too.
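As a rough illustration of the two-phase idea (a sketch with invented names, not a proposed Nova API): the scheduler first records a claim for the placement and re-checks the group constraint atomically; a losing claim is rolled back before the boot process starts, and only a surviving claim is confirmed.

```python
import threading

# Sketch of a two-phase claim/confirm step for anti-affinity scheduling.
# All names here are invented for illustration; this is not Nova code.

lock = threading.Lock()
claims = {}  # instance_id -> {"group": ..., "host": ...}

def claim(instance_id, group, host):
    """Phase 1: atomically record the intended placement, then re-check
    the anti-affinity constraint against all other claims."""
    with lock:
        claims[instance_id] = {"group": group, "host": host}
        conflict = any(c["host"] == host and c["group"] == group
                       for iid, c in claims.items() if iid != instance_id)
        if conflict:
            del claims[instance_id]  # roll back the losing claim
            return False
        return True

def confirm(instance_id):
    """Phase 2: the claim survived, so boot proceeds on the claimed host."""
    return claims[instance_id]["host"]

# Two boots in the same group: the second claim on the same host loses
# and would be retried against another host before booting.
assert claim("vm-1", "grp", "compute-1") is True
assert claim("vm-2", "grp", "compute-1") is False
assert claim("vm-2", "grp", "compute-2") is True
```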
> > > > I'm not an expert in this area, so I'm curious why you think that
> > > > database transactions wouldn't be possible in the long run.
> > > There has been some effort around splitting the scheduler out of Nova
> > > and into its own project. So down the road the scheduler may not have
> > > direct access to the Nova db.
> > If we do pull out the nova scheduler it can have its own DB, so I
> > don't think this should be an issue.
> Just playing devil's advocate here, but even if Gantt had its own
> database, would that necessarily mean that there would be only a single
> database across the entire deployment? I'm thinking specifically of the
> case of cells, where scheduling requests would presumably jump through
> multiple layers of Gantt services; would a single database transaction
> really be able to effectively fence the entire scheduling request?
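For the single-database case, the fencing Jay describes is straightforward; a sketch with sqlite3 (invented schema, for illustration only): the constraint check and the placement insert happen in one transaction, so concurrent requests serialize. With a database per cell there is no such shared transaction boundary, which is exactly the problem.

```python
import sqlite3

# Sketch: fencing an anti-affinity placement with one DB transaction.
# Invented schema; a per-cell database would break this shared boundary.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE placements (instance TEXT, grp TEXT, host TEXT)")

def place(instance, group, host):
    """Check the group constraint and record the placement atomically."""
    with db:  # one transaction: check + insert, committed or rolled back
        row = db.execute(
            "SELECT 1 FROM placements WHERE grp = ? AND host = ?",
            (group, host)).fetchone()
        if row:
            return False  # another group member already holds this host
        db.execute("INSERT INTO placements VALUES (?, ?, ?)",
                   (instance, group, host))
        return True

assert place("vm-1", "grp", "compute-1") is True
assert place("vm-2", "grp", "compute-1") is False
assert place("vm-2", "grp", "compute-2") is True
```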
So that opens the whole can of Gantt and cells worms. I would rather
evaluate design decisions around what exists today than around what we
think will exist in the future (although we definitely don't want to
design ourselves into a corner). I'm just not very keen on the answer "we
shouldn't do x because of this thing we talked about but haven't done."

That being said, this debate gets more complicated when you factor in the
overhead of sqlalchemy: if we can drop that overhead we solve a lot of
problems all at once (the db is used all over the place, both directly and
through conductor, and sqlalchemy can have a 10x+ overhead).

For historical reasons we have spent a lot of time trying to decouple the
SQL DB from the rest of the codebase, because we left the door open for
alternate DB backends (re: noSQL). I think the ship has sailed on that one
and we shouldn't worry about designing things around possibly adding a
noSQL backend in the future.