[openstack-dev] [Nova] virt driver architecture

Devananda van der Veen devananda.vdv at gmail.com
Thu May 9 22:17:00 UTC 2013


On Thu, May 9, 2013 at 2:11 PM, Chris Behrens <cbehrens at codestud.com> wrote:

>
> On May 9, 2013, at 9:30 AM, Devananda van der Veen <
> devananda.vdv at gmail.com> wrote:
>
> On Thu, May 9, 2013 at 8:45 AM, Sean Dague <sean at dague.net> wrote:
>
>>
>> I think we learned a really important lesson in baremetal: putting a
>> different complex management system underneath the virt driver interface is
>> a bad fit, requires nova to do unnatural things, and just doesn't make
>> anyone happy at the end of the day. That's since resulted in baremetal
>> spinning out to a new incubated project, Ironic, which I think is really
>> the right long term approach.
>>
>> I think we need to take that lesson for what it was, and realize these
>> virt cluster drivers are very much the same kind of problem. They are
>> better served living in some new incubated effort than being force-fit
>> into the nova-compute virt layer, which drives a lot more complexity into
>> nova.
>
>
> I don't feel like a new project is needed here -- the ongoing discussion
> about moving scheduling/orchestration logic out of nova-compute and into
> conductor-or-something-else seems to frame this discussion, too.
>
> The biggest change to Nova that I recall around adding the Baremetal code
> was the addition of the "node" aka "hypervisor_hostname" concept -- that a
> single nova-compute host might control more than one discrete thing, each
> of which therefore needs to be identified as a (host, node) pair. That
> change opened the door for other cluster drivers to fit under the virt
> interface. I may be wrong, but I believe this is exactly what the vCenter
> and Hyper-V folks are looking at. It's also my current plan for Ironic.
> However, I also believe that this logic doesn't necessarily have to live
> under the virt API layer; I think it's a really good fit for the
> orchestration/conductor discussions....
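>
> A rough sketch of how that surfaces at the virt driver layer (the method
> names follow the in-tree ComputeDriver interface, but the cluster client
> and its attributes are hypothetical):
>
>     from nova.virt import driver
>
>     class ClusteredDriver(driver.ComputeDriver):
>         """One nova-compute host fronting many discrete nodes."""
>
>         def get_available_nodes(self, refresh=False):
>             # Each discrete thing (bare metal machine, vCenter
>             # cluster, ...) is reported as its own nodename, so the
>             # scheduler tracks resources per (host, node) pair.
>             return [n.uuid for n in self._cluster.list_nodes()]
>
>         def get_available_resource(self, nodename):
>             # Resource reporting is likewise keyed by node, not host.
>             node = self._cluster.get_node(nodename)
>             return {'hypervisor_hostname': nodename,
>                     'vcpus': node.cpus,
>                     'memory_mb': node.memory_mb,
>                     'local_gb': node.local_gb}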
>
> We were talking about this a few days ago in -nova, particularly how
> moving some of the ComputeManager logic out to conductor might fit together
> with simplifying the (host, node) complexities and help make nova-compute
> just a thin virt API layer. Here is a rough summary of what I recall (a toy
> sketch of the topic scheme follows the list)...
> * AMQP topic is based on "nodename", not "hostname"
> * for local hypervisors (KVM, etc), the topic identifies the local host,
> and the local nova-compute agent subscribes to it
> * for clustered hypervisors (ironic, vCenter, etc), the topic identifies
> the unique resource, and any nova-compute which can manage that resource
> subscribes to the topic.
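>
> In toy form, the routing idea looks something like this (all names made
> up; none of this is existing nova code):
>
>     def compute_topic(nodename):
>         # Route by node, not by host, so whichever nova-compute can
>         # manage the node may listen for its messages.
>         return 'compute.%s' % nodename
>
>     class FakeComputeService(object):
>         def __init__(self, managed_nodes):
>             # A local hypervisor manages exactly one node (itself); a
>             # clustered driver subscribes to one topic per node it can
>             # reach, and several services may share a node's topic.
>             self.subscriptions = set(compute_topic(n)
>                                      for n in managed_nodes)
>
>     kvm = FakeComputeService(['compute-01'])
>     ironic_a = FakeComputeService(['node-%d' % i for i in range(4)])
>     ironic_b = FakeComputeService(['node-%d' % i for i in range(4)])
>
> Since ironic_a and ironic_b end up subscribed to the same four topics,
> either one can pick up work for those nodes.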
>
>
> I wonder if this is a different discussion than the one I was involved in
> recently.  The one I was involved with was more:
>

Nope. Same discussion; you just explained it better than I did. :)



>
> 1) AMQP topic is still based on 'hostname'
> 2) instance['host'] goes away, though.  We're left with only
> instance['node']
> 3) All concepts of scheduling/resource tracking for host and node are now
> just for the 'node'.
> 4) In order to do an action (build, or otherwise) on a node, you have some
> sort of mapping of node -> host, so you can cast to the correct
> nova-compute. I.e., topic=instance['host'] becomes
> topic=topic_for_node(instance['node']), where we've not exactly defined how
> topic_for_node() works yet (one guess sketched below). :)
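>
> A guess at what topic_for_node() might look like under that scheme (the
> mapping table is just a stand-in for whatever DB table or service-group
> mechanism ends up owning it):
>
>     # hypothetical node -> host mapping, maintained elsewhere
>     NODE_TO_HOST = {'ironic-node-1': 'compute-01',
>                     'ironic-node-2': 'compute-01',
>                     'vc-cluster-a': 'compute-02'}
>
>     def topic_for_node(nodename):
>         # Topics stay host-based (point 1), so casting an action at a
>         # node first resolves which nova-compute host fronts it.
>         return 'compute.%s' % NODE_TO_HOST[nodename]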
>
> (Or, in the case of Ironic, #4 might be that we talk to the Ironic API and
> there's no need for a nova-compute at all? -- I don't have a full picture
> of how Ironic will tie in yet.)
>
> But the ideas above are pretty close, it sounds like.
>
> This would also remove the single point of failure (SPoF) that
> nova-compute currently has for any cluster-of-discrete-things it manages
> today (e.g., bare metal).
>
>
> Yeah, that's part of the idea with moving things to conductor.
>
> - Chris