[openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

Hongbin Lu hongbin.lu at huawei.com
Sat Jul 18 20:15:27 UTC 2015


Peng,

Several questions here. You mentioned that HyperStack is a single big “bay”. In that case, who does the multi-host scheduling, Hyper or something else? Are you suggesting we integrate Hyper with Magnum directly, or indirectly (i.e. through k8s, mesos and/or Nova)?

Best regards,
Hongbin

From: Peng Zhao [mailto:peng at hyper.sh]
Sent: July-17-15 12:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

Hi, Adrian, Jay and all,

There could be a much longer version of this, but let me try to explain in a minimalist way.

Bay currently has two modes: VM-based and BM-based. In both cases, the bay isolates different tenants' containers; in other words, a bay is single-tenant. For a BM-based bay, single tenancy is a worthwhile tradeoff, given the performance merits of LXC vs VM. For a VM-based bay, however, there is no performance gain, yet single tenancy still seems a must, due to the lack of isolation in containers. Hyper, as a hypervisor-based substitute for containers, brings the much-needed isolation and therefore enables multi-tenancy. In HyperStack, we don't really need Ironic to provision multiple Hyper bays; instead, the entire HyperStack cluster is a single big “bay”, pretty similar to how Nova works.

Also, HyperStack is able to leverage Cinder and Neutron for SDS/SDN functionality. So when someone submits a Docker Compose app, HyperStack launches HyperVMs and calls Cinder/Neutron to set up the volumes and network. The architecture is quite simple.
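
To make that flow concrete, here is a minimal sketch (my illustration only, not HyperStack's actual code). The Cinder/Neutron calls use the real python-cinderclient and python-neutronclient APIs; launch_hyper_vm() is a hypothetical placeholder for the Hyper launch step:

    # Illustrative sketch only -- not HyperStack code.
    from cinderclient import client as cinder_client
    from neutronclient.v2_0 import client as neutron_client

    def launch_hyper_vm(image, network_id, volume):
        # Hypothetical placeholder: in HyperStack this would boot a HyperVM
        # (one pod per VM) running the given image on the given network.
        raise NotImplementedError

    def deploy_compose_app(services, auth):
        cinder = cinder_client.Client('2', auth['user'], auth['password'],
                                      auth['tenant'], auth['auth_url'])
        neutron = neutron_client.Client(username=auth['user'],
                                        password=auth['password'],
                                        tenant_name=auth['tenant'],
                                        auth_url=auth['auth_url'])

        # One tenant network for the whole Compose app (SDN via Neutron).
        net = neutron.create_network({'network': {'name': 'compose-app-net'}})

        for name, svc in services.items():
            # One volume per service that declares one (SDS via Cinder).
            vol = None
            if svc.get('volume_gb'):
                vol = cinder.volumes.create(size=svc['volume_gb'], name=name)
            # Boot a HyperVM for the service, attached to the network and
            # volume created above.
            launch_hyper_vm(image=svc['image'],
                            network_id=net['network']['id'],
                            volume=vol)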

Here is a blog post I'd like to recommend: https://hyper.sh/blog/post/2015/06/29/docker-hyper-and-the-end-of-guest-os.html

Let me know your questions.

Thanks,
Peng

------------------ Original ------------------
From:  "Adrian Otto"<adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>;
Date:  Thu, Jul 16, 2015 11:02 PM
To:  "OpenStack Development Mailing List (not for usage questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>;
Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

Jay,

Hyper is a substitute for a Docker host, so I expect it could work equally well for all of the current bay types. Hyper’s idea of a “pod” and a Kubernetes “pod” are similar, but different. I’m not yet convinced that integrating Hyper host creation directly with Magnum (and completely bypassing nova) is a good idea. It probably makes more sense to use nova with the Ironic virt driver to provision Hyper hosts, so we can use those as substitutes for bay nodes in our various bay types. This would fit in the place where we use Fedora Atomic today. We could still rely on nova to do all of the machine instance management and accounting like we do today, but produce bays that use Hyper instead of a Docker host. Everywhere we currently offer CoreOS as an option, we could also offer Hyper as an alternative, with some caveats.
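
To sketch what I have in mind (illustrative only; the 'hyper-host' image and 'baremetal.general' flavor names are made up), provisioning a Hyper host through the existing python-novaclient API could look like this:

    # Sketch only: image and flavor names are hypothetical. The point is
    # that Magnum keeps getting machines from Nova (backed here by the
    # Ironic virt driver), so quota, scheduling and accounting stay in Nova.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'user', 'password', 'tenant',
                              'http://keystone:5000/v2.0')

    image = nova.images.find(name='hyper-host')           # Hyper host image
    flavor = nova.flavors.find(name='baremetal.general')  # mapped to Ironic nodes

    # From Magnum's point of view this is just another Nova instance.
    node = nova.servers.create(name='hyper-bay-node-0',
                               image=image.id, flavor=flavor.id)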

There may be some caveats/drawbacks to consider before committing to a Hyper integration. I’ll be asking Peng about those on this thread as well, so keep an eye out.

Thanks,

Adrian

On Jul 16, 2015, at 3:23 AM, Jay Lau <jay.lau.513 at gmail.com<mailto:jay.lau.513 at gmail.com>> wrote:

Thanks Peng, then I can see two integration points for Magnum and Hyper:
1) Once the Hyper and k8s integration is finished, we can deploy k8s in two modes: docker mode and hyper mode, and the end user can select which mode they want to use. In that case, we do not need to create a new bay, but we may need some enhancements to the current k8s bay.
2) After the mesos and hyper integration, we can treat mesos+hyper as a new bay type in Magnum, just like what we are doing now for mesos+marathon.
Thanks!

2015-07-16 17:38 GMT+08:00 Peng Zhao <peng at hyper.sh<mailto:peng at hyper.sh>>:
Hi Jay,

Yes, we are working with the community to integrate Hyper with Mesos and K8s. Since Hyper uses the Pod as its default job unit, it is quite easy to integrate with K8s. Mesos takes a bit more effort, but is still straightforward.

We expect to finish both integrations in v0.4, in early August.

Best,
Peng

-----------------------------------------------------
Hyper - Make VM run like Container



On Thu, Jul 16, 2015 at 3:47 PM, Jay Lau <jay.lau.513 at gmail.com<mailto:jay.lau.513 at gmail.com>> wrote:
Hi Peng,

Just want to learn more about Hyper. If we create a hyper bay, can I set up multiple hosts in it? If so, who does the scheduling? Does mesos or something else integrate with hyper?
I did not find much info about Hyper cluster management.

Thanks.

2015-07-16 9:54 GMT+08:00 Peng Zhao <peng at hyper.sh<mailto:peng at hyper.sh>>:






------------------ Original ------------------
From:  "Adrian Otto"<adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>;
Date:  Wed, Jul 15, 2015 02:31 AM
To:  "OpenStack Development Mailing List (not for usage questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>;

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

Peng,

On Jul 13, 2015, at 8:37 PM, Peng Zhao <peng at hyper.sh<mailto:peng at hyper.sh>> wrote:

Thanks Adrian!

Hi, all,

Let me recap what Hyper is and the idea of HyperStack.

Hyper is a single-host runtime engine. Technically,
Docker = LXC + AUFS
Hyper = Hypervisor + AUFS
where AUFS is the Docker image.

I do not understand the last line above. My understanding is that AUFS == UnionFS, which is used to implement a storage driver for Docker. Others exist for btrfs and devicemapper. You select which one you want by setting an option like this:

DOCKEROPTS="-s devicemapper"

Are you trying to say that with Hyper, AUFS is used to provide a layered Docker image capability that is shared by multiple hypervisor guests?

Peng >>> Yes, AUFS refers to the Docker images here.

My guess is that you are trying to articulate that a host running Hyper is a 1:1 substitute for a host running Docker, and will respond using the Docker remote API. This would result in containers running on the same host having superior security isolation compared to what they would have if LXC were used as the backend to Docker. Is this correct?

Peng>>> Exactly


Due to the shared-kernel nature of LXC, Docker lacks the necessary isolation for a multi-tenant CaaS platform, and this is what Hyper/hypervisor is good at.

And because of this, most CaaS offerings today run on top of IaaS: https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/388x275/e286dea1266b46c1999d566b0f9e326b/iaas.png
Hyper enables a native, secure, bare-metal CaaS: https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/395x244/828ad577dafb3f357e95899e962651b2/caas.png

From the tech stack perspective, HyperStack makes Magnum run in parallel with Nova, rather than on top of it.

For this to work, we’d expect to get a compute host from Heat, so if the bay type were set to “hyper”, we’d need to use a template that can produce a compute host running Hyper. How would that host be produced, if we do not get it from nova? Might it make more sense to make a virt driver for nova that could produce a Hyper guest on a host already running the nova-compute agent? That way Magnum would not need to re-create any of Nova’s functionality in order to produce nova instances of type “hyper”.

Peng >>> We don’t have to get the physical host from nova. Let’s say:
   OpenStack = Nova+Cinder+Neutron+Bare-metal+KVM, i.e. “AWS-like IaaS for everyone else”
   HyperStack = Magnum+Cinder+Neutron+Bare-metal+Hyper, i.e. “Google-like CaaS for everyone else”

Ideally, customers should deploy a single OpenStack cluster, with both nova/kvm and magnum/hyper. I’m looking for a solution to make nova/magnum co-exist.

Is Hyper compatible with libvirt?

Peng>>> We are working on the libvirt integration, expected in v0.5.


Can Hyper support nested Docker containers within the Hyper guest?

Peng>>> Docker in Docker? In a HyperVM instance, there is no docker daemon, cgroup or namespace (except the MNT namespace for the pod). The VM serves the purpose of isolation. We plan to support cgroups and namespaces, so you can control whether multiple containers in a pod share the same namespace or are completely isolated. But in either case, no docker daemon is present.


Thanks,

Adrian Otto


Best,
Peng

------------------ Original ------------------
From:  "Adrian Otto"<adrian.otto at rackspace.com<mailto:adrian.otto at rackspace.com>>;
Date:  Tue, Jul 14, 2015 07:18 AM
To:  "OpenStack Development Mailing List (not for usage questions)"<openstack-dev at lists.openstack.org<mailto:openstack-dev at lists.openstack.org>>;

Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal with Hyper

Team,

I would like to ask for your input about adding support for Hyper in Magnum:

https://blueprints.launchpad.net/magnum/+spec/hyperstack

We touched on this in our last team meeting, and it was apparent that we need to achieve a higher level of understanding of the technology before weighing in on the directional approval of this blueprint. Peng Zhao and Xu Wang have graciously agreed to respond to this thread to address questions about how the technology works, and how it could be integrated with Magnum.

Please take a moment to review the blueprint, and ask your questions here on this thread.

Thanks,

Adrian Otto

On Jul 2, 2015, at 8:48 PM, Peng Zhao <peng at hyper.sh<mailto:peng at hyper.sh>> wrote:

Here is the bp of Magnum+Hyper+Metal integration: https://blueprints.launchpad.net/magnum/+spec/hyperstack

Wanted to hear more thoughts and kickstart some brainstorming.

Thanks,
Peng

-----------------------------------------------------
Hyper - Make VM run like Container








--
Thanks,
Jay Lau (Guangya Liu)







--
Thanks,
Jay Lau (Guangya Liu)
