[openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
Joshua Harlow
harlowja at fastmail.com
Wed Apr 13 00:29:55 UTC 2016
Steve Baker wrote:
> On 13/04/16 11:07, Joshua Harlow wrote:
>> Fox, Kevin M wrote:
>>> I think my head just exploded. :)
>>>
>>> That idea's similar to neutron sfc stuff, where you just say what
>>> needs to connect to what, and it figures out the plumbing.
>>>
>>> Ideally, it would map somehow to heat & a docker COE & neutron sfc to
>>> produce a final set of deployment scripts and then just run it
>>> through the meat grinder. :)
>>>
>>> It would be awesome to use. It may be very difficult to implement.
>>
>> +1 it will not be easy.
>>
>> Although the complexity of it is probably no different than what a SQL
>> engine has to do to parse a SQL statement into an actionable plan, just
>> in this case it's an application deployment 'statement', and the
>> realization of that plan is of course where the 'meat' of the program
>> is.
>>
>> It would be nice to connect with the neutron SFC work that exists or is
>> in progress and have a single project for this kind of thing, but maybe
>> I am dreaming too much right now :-P
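To make the SQL-planner analogy concrete, here's a minimal, purely hypothetical sketch (names and structure invented for illustration, not any existing API) of treating a deployment description as a declarative 'statement' that a tiny planner expands into ordered, actionable steps:

```python
# Hypothetical sketch: a declarative deployment 'statement' (like SQL)
# that a tiny planner expands into an ordered, actionable plan.
spec = {
    "components": [
        {"label": "frontend", "count": 2, "stateless": True},
        {"label": "database", "count": 3, "stateless": False},
    ],
}

def plan(spec):
    # Stateful components first (they are the riskiest to move later),
    # then expand each component's count into concrete deploy steps.
    ordered = sorted(spec["components"], key=lambda c: c["stateless"])
    return ["deploy %s-%d" % (c["label"], i)
            for c in ordered
            for i in range(c["count"])]

print(plan(spec))
# -> ['deploy database-0', 'deploy database-1', 'deploy database-2',
#     'deploy frontend-0', 'deploy frontend-1']
```

The real planning problem is of course much harder (placement, networks, constraints), but the shape is the same: declarative input in, ordered plan out.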
>>
>
> This sounds a lot like what the TOSCA spec[1] is aiming to achieve. I
> could imagine heat-translator[2] gaining the ability to translate TOSCA
> templates to either nova or COE specific heat templates which are then
> created as stacks.
>
> [1]
> http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csprd01/TOSCA-Simple-Profile-YAML-v1.0-csprd01.html
Since I don't know: does anyone in the wider world actually support
TOSCA as their API? Or is TOSCA more of an exploratory kind of thing or
something else (it seems like there is a TOSCA -> Heat path?)? Forgive
my lack of understanding ;)
>
> [2] https://github.com/openstack/heat-translator
>
>>>
>>> If you ignore the non-container use case, though, I think it might
>>> map fairly easily to all three COEs.
>>>
>>> Thanks,
>>> Kevin
>>>
>>> ________________________________________
>>> From: Joshua Harlow [harlowja at fastmail.com]
>>> Sent: Tuesday, April 12, 2016 2:23 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Cc: foundation at lists.openstack.org
>>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>>> One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>>>
>>> Fox, Kevin M wrote:
>>>> I think part of the problem is that containers are mostly orthogonal
>>>> to VMs/bare metal. Containers are a package for a single service;
>>>> multiple containers can run on a single VM/bare-metal host.
>>>> Orchestration like Kubernetes comes in to turn a pool of VMs/bare
>>>> metal into a system that can easily run multiple containers.
>>>>
>>>
>>> Is the orthogonal part a problem because we have made it so or is it
>>> just how it really is?
>>>
>>> Brainstorming starts here:
>>>
>>> Imagine a descriptor language like (which I stole from
>>> https://review.openstack.org/#/c/210549 and modified):
>>>
>>> ---
>>> components:
>>>   - label: frontend
>>>     count: 5
>>>     image: ubuntu_vanilla
>>>     requirements: high memory, low disk
>>>     stateless: true
>>>   - label: database
>>>     count: 3
>>>     image: ubuntu_vanilla
>>>     requirements: high memory, high disk
>>>     stateless: false
>>>   - label: memcache
>>>     count: 3
>>>     image: debian-squeeze
>>>     requirements: high memory, no disk
>>>     stateless: true
>>>   - label: zookeeper
>>>     count: 3
>>>     image: debian-squeeze
>>>     requirements: high memory, medium disk
>>>     stateless: false
>>>     backend: VM
>>> networks:
>>>   - label: frontend_net
>>>     flavor: "public network"
>>>     associated_with:
>>>       - frontend
>>>   - label: database_net
>>>     flavor: high bandwidth
>>>     associated_with:
>>>       - database
>>>   - label: backend_net
>>>     flavor: high bandwidth and low latency
>>>     associated_with:
>>>       - zookeeper
>>>       - memcache
>>> constraints:
>>>   - ref: container_only
>>>     params:
>>>       - frontend
>>>   - ref: no_colocated
>>>     params:
>>>       - database
>>>       - frontend
>>>   - ref: spread
>>>     params:
>>>       - database
>>>   - ref: spread
>>>     params:
>>>       - memcache
>>>   - ref: spread
>>>     params:
>>>       - zookeeper
>>>   - ref: isolated_network
>>>     params:
>>>       - frontend_net
>>>       - database_net
>>>       - backend_net
>>> ...
>>>
>>>
>>> Now nothing in the above is about containers, bare metal, or VMs
>>> (although an 'advanced' constraint could be that a component must be
>>> in a container, and must, say, be deployed via docker image XYZ...);
>>> instead it's just about the constraints that a user has on their
>>> deployment and the components associated with it. It can be left up to
>>> some consuming project of that format to decide how to turn that
>>> desired description into an actual one (aka a full expansion of that
>>> format into an actual deployment plan), possibly by optimizing for
>>> density (packing as many things into containers as possible), for
>>> security (by using VMs), or for performance (by using bare metal).
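As a toy illustration of that 'consuming project' idea (all names here are hypothetical, not an existing API), the same backend-agnostic component list could expand differently depending on what the interpreter optimizes for:

```python
# Toy illustration (hypothetical): one backend-agnostic component list,
# three different substrate choices depending on the optimization goal.
components = [
    {"label": "frontend", "stateless": True},
    {"label": "database", "stateless": False},
]

def choose_backends(components, optimize_for):
    if optimize_for == "density":
        # Pack as much as possible into containers.
        return {c["label"]: "container" for c in components}
    if optimize_for == "security":
        # Strongest isolation: everything in VMs.
        return {c["label"]: "VM" for c in components}
    if optimize_for == "performance":
        # Bare metal for stateful/IO-heavy parts, containers elsewhere.
        return {c["label"]: ("container" if c["stateless"] else "bare-metal")
                for c in components}
    raise ValueError(optimize_for)

print(choose_backends(components, "performance"))
# -> {'frontend': 'container', 'database': 'bare-metal'}
```

The point being that the user's description never mentions a substrate; only the interpreter's policy does.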
>>>
>>>> So, rather than concern itself with supporting launching through a
>>>> COE and through Nova, which are two totally different code paths,
>>>> OpenStack advanced services like Trove could just use a Magnum COE
>>>> and have a UI that asks which existing Magnum COE to launch in, or
>>>> alternately kick off the "Launch new Magnum COE" workflow in
>>>> Horizon, then follow up with the Trove launch workflow. Trove would
>>>> then support being able to use containers, users could potentially
>>>> pack more containers onto their VMs than just Trove's, and it would
>>>> still work with both bare metal and VMs the same way, since Magnum
>>>> can launch on either. I'm afraid supporting both container and
>>>> non-container deployment with Trove will be a large effort with very
>>>> little code sharing. It may be easiest to have a flag-day version
>>>> where non-container deployments are upgraded to containers and then
>>>> non-container support is dropped.
>>>>
>>>
>>> Sure, Trove seems like it would be a consumer of whatever interprets
>>> that format, just like many other consumers could be (with the special
>>> case that Trove creates such a format on behalf of some other
>>> consumer, aka the Trove user).
>>>
>>>> As for the app-catalog use case, the app-catalog project
>>>> (http://apps.openstack.org) is working on some of that.
>>>>
>>>> Thanks,
>>>> Kevin
>>>> ________________________________________
>>>> From: Joshua Harlow [harlowja at fastmail.com]
>>>> Sent: Tuesday, April 12, 2016 12:16 PM
>>>> To: Flavio Percoco; OpenStack Development Mailing List (not for
>>>> usage questions)
>>>> Cc: foundation at lists.openstack.org
>>>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>>>> One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>>>>
>>>> Flavio Percoco wrote:
>>>>> On 11/04/16 18:05 +0000, Amrith Kumar wrote:
>>>>>> Adrian, thx for your detailed mail.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Yes, I was hopeful of a silver bullet and as we’ve discussed before
>>>>>> (I think it was Vancouver), there’s likely no silver bullet in this
>>>>>> area. After that conversation, and some further experimentation, I
>>>>>> found that even if Trove had access to a single Compute API, there
>>>>>> were other significant complications further down the road, and I
>>>>>> didn’t pursue the project further at the time.
>>>>>>
>>>>> Adrian, Amrith,
>>>>>
>>>>> I've spent enough time researching this area during the last month
>>>>> and my conclusion is pretty much the above. There's no silver bullet
>>>>> in this area and I'd argue there shouldn't be one. Containers, bare
>>>>> metal and VMs differ in such a way (feature-wise) that it'd not be
>>>>> good, as far as deploying databases goes, for there to be one compute
>>>>> API. Containers allow for a different deployment architecture than
>>>>> VMs, and so does bare metal.
>>>> Just some thoughts from me, but why focus on the
>>>> compute/container/baremetal API at all?
>>>>
>>>> I'd almost like a way that just describes how my app should be
>>>> interconnected, what is required to get it going, and the features
>>>> and/or scheduling requirements for the different parts of that app.
>>>>
>>>> To me it feels like this isn't a compute API or really a heat API but
>>>> something else. Maybe it's closer to the docker compose API/template
>>>> format or something like it.
>>>>
>>>> Perhaps such a thing needs a new project. I'm not sure, but it does
>>>> feel like, as developers, we should be able to make such a thing that
>>>> still exposes the more advanced functionality of the underlying APIs
>>>> so that it can be used if really needed...
>>>>
>>>> Maybe this is similar to an app-catalog, but that doesn't quite feel
>>>> like it's the right thing either so maybe somewhere in between...
>>>>
>>>> IMHO it'd be nice to have a unified story around what this thing is,
>>>> so that we as a community can drive (as a single group) toward it;
>>>> maybe this is where the product working group can help, and we as a
>>>> developer community can also try to unify behind it...
>>>>
>>>> P.S. The project name should be 'silver'-related, ha.
>>>>
>>>> -Josh
>>>>
>>>> __________________________________________________________________________
>>>>
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>>
>>
>
>