[openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
Bogdan Dobrelya
bdobrelia at mirantis.com
Thu Apr 14 09:30:28 UTC 2016
> On 04/11/2016 09:43 AM, Allison Randal wrote:
>>> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas <davanum at gmail.com> wrote:
>>>> Reading unofficial notes [1], i found one topic very interesting:
>>>> One Platform – How do we truly support containers and bare metal under
>>>> a common API with VMs? (Ironic, Nova, adjacent communities e.g.
>>>> Kubernetes, Apache Mesos etc)
>>>>
>>>> Anyone present at the meeting, please expand on those few notes on
>>>> etherpad? And how if any this feedback is getting back to the
>>>> projects?
>>
>> It was really two separate conversations that got conflated in the
>> summary. One conversation was just being supportive of bare metal, VMs,
>> and containers within the OpenStack umbrella. The other conversation
>> started with Monty talking about his work on shade, and how it wouldn't
>> exist if more APIs were focused on the way users consume the APIs, and
>> less an expression of the implementation details of each project.
>> OpenStackClient was mentioned as a unified CLI for OpenStack focused
>> more on the way users consume the CLI. (OpenStackSDK wasn't mentioned,
>> but falls in the same general category of work.)
>>
>> i.e. There wasn't anything new in the conversation, it was more a matter
>> of the developers/TC members on the board sharing information about work
>> that's already happening.
>
> I agree with that - but would like to clarify the 'bare metal, VMs and
> containers' part a bit. (And in fact, I was concerned in the meeting that
> the messaging around this would be confusing, because 'supporting bare
> metal' and 'supporting containers' mean two different things but we use
> one phrase to talk about both.)
>
> It's abundantly clear at the strategic level that having OpenStack be
> able to provide both VMs and Bare Metal as two different sorts of
> resources (ostensibly but not prescriptively via nova) is one of our
> advantages. We wanted to underscore how important it is to be able to do
> that, so that it's really clear any time the "but cloud should just be
> VMs" sentiment arises.
>
> The way we discussed "supporting containers" was quite different and was
> not about nova providing containers. Rather, it was about reaching out
> to our friends in other communities and working with them on making
> OpenStack the best place to run things like kubernetes or docker swarm.
> Those are systems that ultimately need to run, and it seems that good
> integration (like kuryr with libnetwork) can provide a really strong
> story. I think pretty much everyone agrees that there is not much value
> to us or the world for us to compete with kubernetes or docker.
Let me quote exactly here and summarize the proposals mentioned in this
thread (as I understood them):
1. TOSCA YAML service templates [0], or [1], or suchlike to define
unified workloads (BM/VM/lightweight containers) as well as placement
strategies. Those templates are written by users directly, or generated
by projects like Solum and Trove shipping apps-as-a-service, or by
Kolla, TripleO, Fuel and others to deploy OpenStack services themselves
(a minimal example follows the list).
[0]
http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html
[1] https://review.openstack.org/#/c/210549/15/specs/super-scheduler.rst
2. Heat-translator [2] (or a new project?) to present those templates as
Heat Orchestration Templates (HOT); the expected output is also sketched
after the list.
[2] https://github.com/openstack/heat-translator
3. Heat (or the TOSCA translator, or...) to translate the HOTs (into API
calls?) and orchestrate workload placement onto the reworked cloud
workload schedulers of Nova [3], Magnum and Ironic, with Neutron/Kuryr
for SDN and Cinder/Swift/Ceph for volume mounts and images, then down
the road to their BM/VM/lightweight-container drivers: nova.virt.ironic,
nova-docker/hypernova, kubernetes/mesos/swarm and the like.
[3] https://review.openstack.org/#/c/183837/4
4. At this point, here they are: unified workloads running nicely on top
of OpenStack.
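
To make the flow concrete, here is a minimal, illustrative TOSCA Simple
Profile YAML template in the spirit of [0]; the node name and property
values are placeholders, not a proposal:

  tosca_definitions_version: tosca_simple_yaml_1_0

  description: Minimal single-server workload (illustrative only)

  topology_template:
    node_templates:
      my_server:
        type: tosca.nodes.Compute
        capabilities:
          host:
            properties:     # sizing hints; values are placeholders
              num_cpus: 2
              mem_size: 4 GB
          os:
            properties:     # image selection hints
              type: Linux
              distribution: Ubuntu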
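
Feeding such a template through heat-translator [2] (step 2) should
yield roughly the following HOT; the flavor and image names here are
placeholders standing in for whatever the translator's matching logic
would pick from what the cloud actually offers:

  heat_template_version: 2015-10-15

  description: Rough HOT equivalent of the TOSCA sketch above

  resources:
    my_server:
      type: OS::Nova::Server
      properties:
        flavor: m1.medium     # matched against the 'host' capabilities
        image: ubuntu-14.04   # matched against the 'os' capabilities

Heat would then realize this resource via the Nova API, which is where
the reworked schedulers and the BM/VM/container drivers from step 3 come
into play.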
So the question is: do we really need a unified API, or rather unified
(TOSCA YAML) templates and a translator to *reworked* local APIs?
By the way, this flow clearly illustrates why there are no collisions
between the cp spec [1] and the related Nova API reworking spec [3].
Those are just different parts of the whole picture.
>
> So, we do want to be supportive of bare metal and containers - but the
> specific _WAY_ we want to be supportive of those things is different for
> each one.
>
> Monty
--
Best regards,
Bogdan Dobrelya,
Irc #bogdando