On the Hyper-V side we also need to manage clustering (fault tolerance at the node level). This was a bit further down my personal priority list for Hyper-V, but at the design summit we heard a huge number of requests for supporting it ASAP.

The main idea behind it is that in a lot of scenarios customers want to move their VMs to OpenStack from environments where HA at the host level is considered an obvious feature (System Center VMM, vSphere, etc.).

As far as Hyper-V is concerned, this can be done by putting the hosts in a Microsoft cluster (MSCS) with the VMs' local storage on shared storage (CSV), and that's it: HA is ready. It's very simple and cost effective, with the main advantage that in case of a node failure the cluster service takes care of failing over the VM resources to another node.
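
To make that concrete, here is a rough sketch (mine, not existing Nova code) of how a service could ask MSCS for node liveness from Python through WMI. The root\MSCluster namespace and the MSCluster_Node class ship with Windows failover clustering; the State value mapping below is an assumption to double-check against the MSDN docs.

# Rough sketch, not Nova code: query the Microsoft cluster service
# (MSCS) for the state of each cluster node via WMI. Requires the
# "wmi" package and a host that is a member of a failover cluster.
import wmi

STATE_UP = 0  # assumed MSCluster_Node.State value for "Up" (check MSDN)


def get_cluster_node_states():
    conn = wmi.WMI(namespace="root\\MSCluster")
    return dict((node.Name, node.State) for node in conn.MSCluster_Node())


def get_failed_nodes():
    """Nodes the cluster no longer considers alive, i.e. the ones
    whose VMs the cluster service will fail over to surviving nodes."""
    return [name for name, state in get_cluster_node_states().items()
            if state != STATE_UP]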
Now, I really agree that adding a "sub-controller" under Nova brings unnecessary complexity, but IMO a solution to this issue is required (possibly in the Havana timeframe).

Like most failover clusters, the Microsoft cluster service can be split, very roughly, into two components: a scheduler and a heartbeat service. The scheduler is the component which obviously overlaps with Nova's, while the heartbeat (AFAIK) is totally missing in Nova. What is also missing is a feature that allows the scheduler to restart an instance on another node when the heartbeat component signals that the node it was running on has failed.

I suppose this type of discussion comes up over and over and I might have missed some of it, but what about a "nova-heartbeat" service, maybe with a plugin / driver model that would enable different solutions (e.g. HALinux, Microsoft Cluster Service, etc.)?
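
To sketch what I mean (all names here are invented for the sake of discussion, nothing like this exists in Nova today), the driver contract could be as small as:

# Hypothetical interface for a pluggable "nova-heartbeat" service.
import abc


class HeartbeatDriver(object):
    """Reports node liveness, so that the scheduler can restart the
    instances of a dead node somewhere else."""
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def get_failed_nodes(self):
        """Return the list of node names that failed their heartbeat."""


class MSCSHeartbeatDriver(HeartbeatDriver):
    """Delegates liveness detection to the Microsoft cluster service,
    e.g. with the MSCluster WMI query sketched earlier in this mail."""

    def get_failed_nodes(self):
        raise NotImplementedError()  # would wrap the WMI query above

A HALinux (or Corosync / Pacemaker) based driver would implement the same contract on Linux, and nova-heartbeat would simply poll the active driver and notify the scheduler when a node disappears.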

Note: there's a completely different business scenario that requires running OpenStack alongside vSphere, System Center VMM, etc. In this case the user wants to manage VMs with both solutions at the same time, typically for partitioned workloads, e.g. VDI desktops with OpenStack and accounting servers with System Center / vSphere. This is also something that customers keep asking for, and it would mean creating a driver for System Center VMM, unrelated to the Hyper-V one.


On May 9, 2013, at 20:44, Russell Bryant <rbryant@redhat.com> wrote:

> On 05/09/2013 12:30 PM, Devananda van der Veen wrote:
>> On Thu, May 9, 2013 at 8:45 AM, Sean Dague <sean@dague.net> wrote:
>>> On 05/09/2013 10:53 AM, Russell Bryant wrote:
>>>> Greetings,
>>>>
>>>> I've been growing concerned with the evolution of Nova's architecture in terms of the virt drivers and the impact they have on the rest of Nova. I've heard these concerns from others in private conversation. Another thread on the list today pushed me to where I think it's time we talk about it:
>>>>
>>>> http://lists.openstack.org/pipermail/openstack-dev/2013-May/008801.html
>>>>
>>>> At our last design summit, there was a discussion of adding a new virt driver to support oVirt (RHEVM). That seems inappropriate for Nova to me. oVirt is a full virt management system and uses libvirt+KVM hypervisors. We use libvirt+KVM directly. Punting off to yet another management system that wants to manage all of the same things as OpenStack seems like a broken architecture. In fact, oVirt has done a lot of work to *consume* OpenStack resources (glance, quantum), which seems completely appropriate.
>>>
>>> +1
>>>
>>>> Things get more complicated if we take that argument and apply it to other drivers that we already have in Nova. In particular, I think this applies to the VMware (vCenter mode, not ESX mode) and Hyper-V drivers. I'm not necessarily proposing that those drivers work significantly different. I don't think that's practical if we want to support these systems.
>>>>
>>>> We now have two different types of drivers: those that manage individual hypervisor nodes, and those that proxy to much more complex systems.
>>>>
>>>> We need to be very aware of what's going on in all virt drivers, even the ones we don't care about as much because we don't use them. We also need to continue to solidify the virt driver interface and be extremely cautious when these drivers require changes to other parts of Nova. Above all, let's make sure that evolution in this area is well thought out and done by conscious decision.
>>>>
>>>> Comments airing more specific concerns in this area would be appreciated.
>>>
>>> I think we learned a really important lesson in baremetal: putting a different complex management system underneath the virt driver interface is a bad fit, requires nova to do unnatural things, and just doesn't make anyone happy at the end of the day.
>>> That's since resulted in baremetal spinning out to a new incubated project, Ironic, which I think is really the right long term approach.
>>>
>>> I think we need to take that lesson for what it was, and realize these virt cluster drivers are very much the same kind of problem. They are better served living in some new incubated effort instead of force fitting into the nova-compute virt layer and driving a lot more complexity into nova.
>>
>> I don't feel like a new project is needed here -- the ongoing discussion about moving scheduling/orchestration logic out of nova-compute and into conductor-or-something-else seems to frame this discussion, too.
>>
>> The biggest change to Nova that I recall around adding the Baremetal code was the addition of the "node" aka "hypervisor_hostname" concept -- that a single nova compute host might control more than one discrete thing, which thereby need to be identified as (host, node). That change opened the door for other cluster drivers to fit under the virt interface. IMBW, but I believe this is exactly what the vCenter and Hyper-V folks are looking at. It's also my current plan for Ironic. However, I also believe that this logic doesn't necessarily have to live under the virt API layer; I think it's a really good fit for the orchestration/conductor discussions....
>
> Yep, that was the change that had the most impact on the rest of Nova. I think there's a big difference between baremetal and these other drivers. In the case of baremetal, the Nova component is still in full control of all nodes. There's not another system that is also (or instead of Nova) in control of the individual nodes.
>
>> We were talking about this a few days ago in -nova, particularly how moving some of the ComputeManager logic out to conductor might fit together with simplifying the (host, node) complexities, and help make nova-compute just a thin virt API layer. Here is a very poor summary of what I recall...
>> * AMQP topic is based on "nodename", not "hostname"
>> * for local hypervisors (KVM, etc), the topic identifies the local host, and the local nova-compute agent subscribes to it
>> * for clustered hypervisors (ironic, vCenter, etc), the topic identifies the unique resource, and any nova-compute which can manage that resource subscribes to the topic.
>>
>> This would also remove the SPoF that nova-compute currently has for any cluster-of-discrete-things it manages today (eg, baremetal).
>
> Totally agreed with this. However, I'm not sure having clustered hypervisors expose individual resources is something they want to do. It's in conflict with what the underlying system we're talking to wants to be in control of.
>
> --
> Russell Bryant
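
PS: to make the "topic per node" bullets in Devananda's summary above concrete, here is a minimal competing-consumers sketch (plain kombu, not Nova's actual RPC code; names like "node-42" are invented). If two nova-compute services on different hosts run this consumer for the same node, AMQP delivers each request to exactly one of them, which is how the SPoF goes away:

# Competing-consumers sketch of a per-*node* AMQP topic. Run this
# on two different hosts: requests published with routing key
# "compute.node-42" are handled by whichever consumer is alive.
from kombu import Connection, Consumer, Exchange, Queue

exchange = Exchange("nova", type="topic")
# The queue is named after the managed *node*, not the host we run on.
node_queue = Queue("compute.node-42", exchange, routing_key="compute.node-42")


def handle(body, message):
    print("got a request for node-42: %s" % body)
    message.ack()


with Connection("amqp://guest:guest@localhost//") as conn:
    with Consumer(conn, queues=[node_queue], callbacks=[handle]):
        while True:
            conn.drain_events()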