[openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
Jay Lau
jay.lau.513 at gmail.com
Mon Jul 20 03:18:00 UTC 2015
The nova guys proposed moving Hyper to Magnum rather than Nova, as Hyper
cannot fit into the nova virt driver model well.
As Hyper is now integrating with Kubernetes, I think the integration
point may be creating a Kubernetes Hyper bay with the ironic driver.
Thanks
2015-07-20 10:00 GMT+08:00 Kai Qiang Wu <wkqwu at cn.ibm.com>:
> Hi Peng,
>
> As @Adrian pointed out:
>
> *My first suggestion is to find a way to make a nova virt driver for
> Hyper, which could allow it to be used with all of our current Bay
> types in Magnum.*
>
>
> I remembered you or other guys in your company proposed a blueprint
> for a nova virt driver for Hyper. What's the status of that blueprint
> now? Has it been accepted by the Nova project, or was it cancelled?
>
>
> Thanks
>
> Best Wishes,
>
> --------------------------------------------------------------------------------
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wkqwu at cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
> 100193
>
> --------------------------------------------------------------------------------
> Follow your heart. You are miracle!
>
>
> From: Adrian Otto <adrian.otto at rackspace.com>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Date: 07/19/2015 11:18 PM
> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
> ------------------------------
>
>
>
> Peng,
>
> You are not the first to think this way, and it's one of the reasons we
> did not integrate Containers with OpenStack in a meaningful way a full
> year earlier. Please read closely.
>
> 1) OpenStack's key influencers care about two personas: 1.1) Cloud
> Operators, 1.2) Cloud Consumers. If you only think in terms of 1.2, then
> your idea will get killed. Operators matter.
>
> 2) Cloud Operators need a consistent way to bill for the IaaS services
> they provide. Nova emits all of the RPC messages needed to do this (see
> the sketch after this list). Having a second nova that does this
> slightly differently is a really annoying problem that will make
> Operators hate the software. It's better to use nova, have things work
> consistently, and plug virt drivers into it.
>
> 3) Creation of a host is only part of the problem. That's the easy part.
> Nova also does a bunch of other things too. For example, say you want to
> live migrate a guest from one host to another. There is already
> functionality in Nova for doing that.
>
> 4) Resources need to be capacity managed. We call this scheduling. Nova
> has a pluggable scheduler to help with the placement of guests on hosts
> (a filter sketch follows after this list). Magnum will not have one.
>
> 5) Hosts in a cloud need to integrate with a number of other services,
> such as an image service, messaging, networking, storage, etc. If you only
> think in terms of host creation, and do something without nova, then you
> need to re-integrate with all of these things.
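>
> To make (2) concrete: billing systems typically consume these messages
> as notifications from the message bus. A minimal sketch with
> oslo.messaging (the topic and the billing hook are illustrative, not
> any particular billing system):
>
>     import oslo_messaging
>     from oslo_config import cfg
>
>     class BillingEndpoint(object):
>         # Receives INFO-level notifications such as
>         # 'compute.instance.create.end'.
>         def info(self, ctxt, publisher_id, event_type, payload, metadata):
>             if event_type.startswith('compute.instance'):
>                 record_usage(event_type, payload)  # hypothetical billing hook
>
>     transport = oslo_messaging.get_notification_transport(cfg.CONF)
>     targets = [oslo_messaging.Target(topic='notifications')]
>     listener = oslo_messaging.get_notification_listener(
>         transport, targets, [BillingEndpoint()])
>     listener.start()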
>
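> And for (4), a scheduler filter is just a small class plugged into
> Nova's filter scheduler. A minimal sketch (the 2 GB threshold is
> illustrative):
>
>     from nova.scheduler import filters
>
>     class EnoughFreeRamFilter(filters.BaseHostFilter):
>         """Pass only hosts with at least 2 GB of free RAM."""
>
>         def host_passes(self, host_state, filter_properties):
>             return host_state.free_ram_mb >= 2048
>
> Magnum gets this placement logic for free by building on Nova instead
> of re-implementing it.
>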
> Now, I probably left out examples of lots of other things that Nova
> does. What I have mentioned is enough to make my point: there are a lot
> of things that Magnum is intentionally NOT doing because we expect to
> get them from Nova, and I will block all code that gratuitously
> duplicates functionality that I believe belongs in Nova. I promised our
> community I would not duplicate existing functionality without a very
> good reason, and I will keep that promise.
>
> Let's find a good way to fit Hyper with OpenStack in a way that best
> leverages what exists today, and is least likely to be rejected. Please
> note that the proposal needs to be changed from where it is today to
> achieve this fit.
>
> My first suggestion is to find a way to make a nova virt driver for
> Hyper, which could allow it to be used with all of our current Bay
> types in Magnum.
>
> Thanks,
>
> Adrian
>
>
> -------- Original message --------
> From: Peng Zhao <peng at hyper.sh>
> Date: 07/19/2015 5:36 AM (GMT-08:00)
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
>
> Thanks Jay.
>
> Hongbin, yes, it will be a scheduling system, either swarm, k8s or
> mesos. I just think a bay isn't a must in this case, and we don't need
> nova to provision BM hosts, which would make things more complicated,
> IMO.
>
> Peng
>
>
> ------------------ Original ------------------
> From: "Jay Lau" <jay.lau.513 at gmail.com>
> Date: Sun, Jul 19, 2015 10:36 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
>
> Hong Bin,
>
> I have had some online discussion with Peng; it seems Hyper is now
> integrating with Kubernetes and also plans to integrate with mesos for
> scheduling. Once the mesos integration is finished, we can treat
> mesos+hyper as another kind of bay.
>
> Thanks
>
> 2015-07-19 4:15 GMT+08:00 Hongbin Lu <hongbin.lu at huawei.com>:
>
> Peng,
>
>
>
> Several questions here. You mentioned that HyperStack is a single big
> "bay". Then who is doing the multi-host scheduling, Hyper or something
> else? Were you suggesting integrating Hyper with Magnum directly, or
> indirectly (i.e., through k8s, mesos and/or Nova)?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> From: Peng Zhao [mailto:peng at hyper.sh]
> Sent: July-17-15 12:34 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
>
>
>
> Hi, Adrian, Jay and all,
>
>
>
> There could be a much longer version of this, but let me try to
> explain in a minimalist way.
>
>
>
> Bay currently has two modes: VM-based and BM-based. In both cases, Bay
> helps to isolate different tenants' containers. In other words, a bay
> is single-tenancy. For a BM-based bay, the single tenancy is a worthy
> tradeoff, given the performance merits of LXC vs. VM. However, for a
> VM-based bay, there is no performance gain, yet single tenancy seems a
> must, due to the lack of isolation in containers. Hyper, as a
> hypervisor-based substitute for containers, brings the much-needed
> isolation and therefore enables multi-tenancy. In HyperStack, we don't
> really need Ironic to provision multiple Hyper bays; instead, the
> entire HyperStack cluster is a single big "bay", pretty similar to how
> Nova works.
>
>
>
> Also, HyperStack is able to leverage Cinder and Neutron for SDS/SDN
> functionality. So when someone submits a Docker Compose app, HyperStack
> would launch HyperVMs and call Cinder/Neutron to set up the volumes and
> network. The architecture is quite simple.
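>
> To illustrate that flow, a rough sketch using the standard Python
> clients (credentials, names and sizes are made up; this is not actual
> HyperStack code):
>
>     from cinderclient import client as cinder_client
>     from neutronclient.v2_0 import client as neutron_client
>
>     cinder = cinder_client.Client('2', 'user', 'password', 'project',
>                                   'http://keystone:5000/v2.0')
>     neutron = neutron_client.Client(
>         username='user', password='password', tenant_name='project',
>         auth_url='http://keystone:5000/v2.0')
>
>     # One data volume and one private network per Compose app.
>     volume = cinder.volumes.create(size=10, name='compose-app-data')
>     network = neutron.create_network(
>         {'network': {'name': 'compose-app-net'}})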
>
>
>
> Here is a blog post I'd like to recommend:
> https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html
>
>
>
> Let me know your questions.
>
>
>
> Thanks,
>
> Peng
>
>
>
> ------------------ Original ------------------
>
> From: "Adrian Otto" <adrian.otto at rackspace.com>
> Date: Thu, Jul 16, 2015 11:02 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
>
>
>
> Jay,
>
>
>
> Hyper is a substitute for a Docker host, so I expect it could work
> equally well for all of the current bay types. Hyper's idea of a "pod"
> and a Kubernetes "pod" are similar, but different. I'm not yet
> convinced that integrating Hyper host creation directly with Magnum
> (and completely bypassing nova) is a good idea. It probably makes more
> sense to use nova with the ironic virt driver to provision Hyper hosts
> so we can use those as substitutes for Bay nodes in our various Bay
> types. This would fit in the place where we use Fedora Atomic today. We
> could still rely on nova to do all of the machine instance management
> and accounting like we do today, but produce bays that use Hyper
> instead of a Docker host. Everywhere we currently offer CoreOS as an
> option we could also offer Hyper as an alternative, with some caveats.
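>
> For reference, wiring nova to ironic is mostly a driver setting; an
> illustrative nova.conf fragment (the [ironic] credentials section is
> omitted here):
>
>     [DEFAULT]
>     compute_driver = nova.virt.ironic.IronicDriver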
>
>
>
> There may be some caveats/drawbacks to consider before committing to a
> Hyper integration. I’ll be asking those of Peng also on this thread, so
> keep an eye out.
>
>
>
> Thanks,
>
>
>
> Adrian
>
>
> On Jul 16, 2015, at 3:23 AM, Jay Lau <jay.lau.513 at gmail.com> wrote:
>
>
>
> Thanks Peng. Then I can see two integration points for Magnum and
> Hyper:
>
> 1) Once the Hyper and k8s integration is finished, we can deploy k8s in
> two modes: docker mode and hyper mode, and the end user can select
> which mode they want to use. In that case, we do not need to create a
> new bay, but we may need some enhancements to the current k8s bay.
>
> 2) After the mesos and hyper integration, we can treat mesos+hyper as a
> new bay type in magnum, just like what we are doing now for
> mesos+marathon.
>
> Thanks!
>
>
>
> 2015-07-16 17:38 GMT+08:00 Peng Zhao <peng at hyper.sh>:
>
>
> Hi Jay,
>
> Yes, we are working with the community to integrate Hyper with Mesos
> and K8S. Since Hyper uses the Pod as its default job unit, it is quite
> easy to integrate with K8S. Mesos takes a bit more effort, but is still
> straightforward.
>
> We expect to finish both integrations in v0.4, in early August.
>
> Best,
> Peng
>
> -----------------------------------------------------
> Hyper - Make VM run like Container
>
>
>
> On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau <jay.lau.513 at gmail.com> wrote:
>
>
> Hi Peng,
>
> Just want to learn more about Hyper. If we create a hyper bay, can I
> set up multiple hosts in it? If so, who will do the scheduling; does
> mesos or something else integrate with hyper? I did not find much info
> about hyper cluster management.
>
> Thanks.
>
> 2015-07-16 9:54 GMT+08:00 Peng Zhao <peng at hyper.sh>:
>
> ------------------ Original ------------------
> From: "Adrian Otto" <adrian.otto at rackspace.com>
> Date: Wed, Jul 15, 2015 02:31 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
>
> Peng,
>
> On Jul 13, 2015, at 8:37 PM, Peng Zhao <peng at hyper.sh> wrote:
>
> Thanks Adrian!
>
> Hi, all,
>
> Let me recap what is hyper and the idea of hyperstack.
>
> Hyper is a single-host runtime engine. Technically,
> Docker = LXC + AUFS
> Hyper = Hypervisor + AUFS
> where AUFS is the Docker image.
>
> I do not understand the last line above. My understanding is that AUFS
> == UnionFS, which is used to implement a storage driver for Docker.
> Others exist, such as btrfs and devicemapper. You select which one you
> want by setting an option like this:
>
> DOCKEROPTS="-s devicemapper"
>
> Are you trying to say that with Hyper, AUFS is used to provide layered
> Docker image capabilities that are shared by multiple hypervisor guests?
>
>
> Peng >>> Yes, AUFS refers to the Docker images here.
>
>
> My guess is that you are trying to articulate that a host running
> Hyper is a 1:1 substitute for a host running Docker, and will respond
> using the Docker remote API. This would result in containers running on
> the same host having stronger security isolation than they would if LXC
> were used as the backend to Docker. Is this correct?
>
>
> Peng>>> Exactly
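>
> In practice that means a stock Docker client can target a Hyper host
> unchanged. A minimal sketch with docker-py (the endpoint address is
> illustrative):
>
>     import docker
>
>     # Point a standard Docker remote API client at the Hyper host.
>     client = docker.Client(base_url='tcp://hyper-host:2375')
>     print(client.containers(all=True))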
>
>
> Due to the shared-kernel nature of LXC, Docker lacks the necessary
> isolation for a multi-tenant CaaS platform, and this is what
> Hyper/hypervisor is good at.
>
> And because of this, most CaaS today run on top of IaaS:
> https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/388x275/e286dea1266b46c1999d566b0f9e326b/iaas.png
>
> Hyper enables native, secure, bare-metal CaaS:
> https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/395x244/828ad577dafb3f357e95899e962651b2/caas.png
>
> From the tech stack perspective, HyperStack makes Magnum run in
> parallel with Nova, not on top of it.
>
> For this to work, we'd expect to get a compute host from Heat, so if
> the bay type were set to "hyper", we'd need to use a template that can
> produce a compute host running Hyper. How would that host be produced,
> if we do not get it from nova? Might it make more sense to make a virt
> driver for nova that could produce a Hyper guest on a host already
> running the nova-compute agent? That way Magnum would not need to
> re-create any of Nova's functionality in order to produce nova
> instances of type "hyper".
>
> Peng >>> We don't have to get the physical host from nova. Let's say
> OpenStack = Nova+Cinder+Neutron+Bare-metal+KVM, i.e. "AWS-like IaaS for
> everyone else", and HyperStack = Magnum+Cinder+Neutron+Bare-metal+Hyper,
> i.e. "Google-like CaaS for everyone else".
>
> Ideally, customers should deploy a single OpenStack cluster, with both
> nova/kvm and magnum/hyper. I’m looking for a solution to make nova/magnum
> co-exist.
>
>
> Is Hyper compatible with libvirt?
>
>
> Peng>>> We are working on the libvirt integration, expected in v0.5
>
>
>
> Can Hyper support nested Docker containers within the Hyper guest?
>
>
> Peng>>> Docker in Docker? In a HyperVM instance, there is no docker
> daemon, cgroups or namespaces (except MNT for the pod); the VM serves
> the purpose of isolation. We plan to support cgroups and namespaces, so
> you can control whether multiple containers in a pod share the same
> namespace or are completely isolated. But in either case, no docker
> daemon is present.
>
>
>
> Thanks,
>
> Adrian Otto
>
>
> Best,
> Peng
>
> ------------------ Original ------------------
> From: "Adrian Otto" <adrian.otto at rackspace.com>
> Date: Tue, Jul 14, 2015 07:18 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper
>
> Team,
>
> I would like to ask for your input about adding support for Hyper in
> Magnum:
>
> https://blueprints.launchpad.net/magnum/+spec/hyperstack
>
> We touched on this in our last team meeting, and it was apparent that
> we need a higher level of understanding of the technology before
> weighing in on the directional approval of this blueprint. Peng Zhao
> and Xu Wang have graciously agreed to respond to this thread to address
> questions about how the technology works, and how it could be
> integrated with Magnum.
>
> Please take a moment to review the blueprint, and ask your
> questions here on this thread.
>
> Thanks,
>
> Adrian Otto
>
> On Jul 2, 2015, at 8:48 PM, Peng Zhao <peng at hyper.sh> wrote:
>
>
>
> Here is the bp of Magnum+Hyper+Metal integration:
> https://blueprints.launchpad.net/magnum/+spec/hyperstack
>
> Wanted to hear more thoughts and kickstart some brainstorming.
>
> Thanks,
> Peng
>
> -----------------------------------------------------
> Hyper - Make VM run like Container
>
>
>
>
> --
>
> Thanks,
> Jay Lau (Guangya Liu)
>
>
>
>
>
>
>
> --
>
> Thanks,
>
> Jay Lau (Guangya Liu)
>
>
>
>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
>
>
--
Thanks,
Jay Lau (Guangya Liu)