[openstack-dev] Introducing the new OpenStack service for Containers

Sam Alba sam.alba at gmail.com
Tue Nov 19 18:34:31 UTC 2013


On Tue, Nov 19, 2013 at 6:45 AM, Chuck Short <chuck.short at canonical.com> wrote:
> Hi
>
> I am excited to see containers getting such traction in the OpenStack
> project.
>
>
> On Mon, Nov 18, 2013 at 7:30 PM, Russell Bryant <rbryant at redhat.com> wrote:
>>
>> On 11/18/2013 06:30 PM, Dan Smith wrote:
>> >> Not having been at the summit (maybe the next one), could somebody
>> >> give a really short explanation as to why it needs to be a separate
>> >> service? It sounds like it should fit within the Nova area. It is,
>> >> after all, just another hypervisor type, or so it seems.
>> >
>> > But it's not just another hypervisor. If all you want from your
>> > containers is lightweight VMs, then nova is a reasonable place to put
>> > that (and it's there right now). If, however, you want to expose the
>> > complex and flexible attributes of a container, such as being able to
>> > overlap filesystems, have fine-grained control over what is shared with
>> > the host OS, look at the processes within a container, etc, then nova
>> > ends up needing quite a bit of change to support that.
>> >
>> > I think the overwhelming majority of folks in the room, after discussing
>> > it, agreed that Nova is infrastructure and containers is more of a
>> > platform thing. Making it a separate service lets us define a mechanism
>> > to manage these that makes much more sense than treating them like VMs.
>> > Using Nova to deploy VMs that run this service is the right approach,
>> > IMHO. Clayton put it very well, I think:
>> >
>> >   If the thing you want to deploy has a kernel, then you need Nova. If
>> >   your thing runs on a kernel, you want $new_service_name.
>> >
>> > I agree.
>> >
>> > Note that this is just another service under the compute project (or
>> > program, or whatever the correct terminology is this week).
>>
>> The Compute program is correct.  That is established terminology as
>> defined by the TC in the last cycle.
>>
>> > So while
>> > distinct from Nova in terms of code, development should be tightly
>> > integrated until (if at some point) it no longer makes sense.
>>
>> And it may share a whole bunch of the code.
>>
>> Another way to put this:  The API requirements people have for
>> containers include a number of features considered outside of the
>> current scope of Nova (short version: Nova's scope stops before going
>> *inside* the servers it creates, except file injection, which we plan to
>> remove anyway).  That presents a problem.  A new service is one possible
>> solution.
>>
>> My view of the outcome of the session was not "it *will* be a new
>> service".  Instead, it was, "we *think* it should be a new service, but
>> let's do some more investigation to decide for sure".
>>
>> The action item from the session was to go off and come up with a
>> proposal for what a new service would look like.  In particular, we
>> needed a proposal for what the API would look like.  With that in hand,
>> we need to come back and ask the question again of whether a new service
>> is the right answer.
>>
>> I see 3 possible solutions here:
>>
>> 1) Expand the scope of Nova to include all of the things people want to
>> be able to do with containers.
>>
>> This is my least favorite option.  Nova is already really big.  We've
>> worked to split things out (Networking, Block Storage, Images) to keep
>> it under control.  I don't think a significant increase in scope is a
>> smart move for Nova's future.
>>
>
> This is my least favorite option. Like a lot of the other responses, I
> see a lot of code duplication between Nova and the new Nova containers
> project. This doesn't just include the scheduler but also things like
> config drive, etc.

Can we dig into this option? Honestly, I'd be glad to find a way to
avoid reimplementing everything (a new compute service with Keystone,
Glance, Horizon integration, etc.). But I do understand the
limitations of changing Nova to improve container support.

Can someone provide more details (maybe in a new section of the spec
etherpad) about this 3rd option?

Since both the API (in the front) and the virt API (in the back) have
to be different, I hardly see how we could reuse most of Nova's code.
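
To make that point concrete, here is a rough, purely hypothetical
sketch (plain Python dicts, nothing from any real client library) of
what a Nova boot request carries today next to what a container create
might need to carry, going by the attributes Dan listed above; every
container-side field is invented for illustration only:

    # Hypothetical sketch: the container fields below are made up to
    # show how far the request surface drifts from Nova's server-create
    # call. None of this is an agreed API.
    nova_boot_request = {
        "server": {
            "name": "web-1",
            "imageRef": "IMAGE_UUID",
            "flavorRef": "FLAVOR_ID",
        }
    }

    container_create_request = {
        "container": {
            "name": "web-1",
            "image": "IMAGE_UUID",
            # Nothing below has a Nova equivalent today.
            "host_mounts": [
                {"source": "/srv/data", "target": "/data", "read_only": True},
            ],
            "shared_namespaces": ["net"],            # what is shared with the host OS
            "layered_filesystems": ["base", "app"],  # overlapping filesystems
        }
    }

    # The intersection is basically just the name; the rest is new surface.
    common = set(nova_boot_request["server"]) & set(container_create_request["container"])
    print(sorted(common))

If the overlap really is that thin at the API layer, most of the reuse
would have to come from the lower layers (scheduler, RPC, DB plumbing)
rather than from the API code itself.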

>>
>> 2) Declare containers as explicitly out of scope and start a new project
>> with its own API.
>>
>> That is what is being proposed here.
>>
>> 3) Some middle ground that is a variation of #2.  Consider Ironic.  The
>> idea is that Nova's API will still be used for basic provisioning, which
>> Nova will implement by talking to Ironic.  However, there are a lot of
>> baremetal management things that don't fit in Nova at all, and those
>> only exist in Ironic's API.
>
>
> This is my preferred choice as well: we could leverage the existing Nova
> API and extend it to include containers and the features that users who run
> containers in their existing production environments want.
>>
>>
>> I wanted to mention this option for completeness, but I don't actually
>> think it's the right choice here.  With Ironic you have a physical
>> resource (managed by Ironic), and then instances of an image running on
>> these physical resources (managed by Nova).
>>
>> With containers, there's a similar line.  You have instances of
>> containers (managed either by Nova or the new service) running on
>> servers (managed by Nova).  I think there is a good line for separating
>> concerns, with a container service on top of Nova.
>>
>>
>> Let's ask ourselves:  How much overlap is there between the current
>> compute API and a proposed containers API?  Effectively, what's the
>> diff?  How much do we expect this diff to change in the coming years?
>>
>> The current diff demonstrates a significant clash with the current scope
>> of Nova.  I also expect a lot of innovation around containers in the
>> next few years, which will result in wanting to do new cool things in
>> the API.  I feel that all of this justifies a new API service to best
>> position ourselves for the long term.
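
For what it's worth, here is how I picture the layering Russell
describes, written down as a throwaway Python sketch; every name in it
(FakeNova, ContainerService, boot_host, ...) is made up for
illustration and is not a proposal for the actual API:

    # Throwaway sketch of "a container service on top of Nova".
    class NovaHost(object):
        """A VM provisioned by Nova that will run containers."""
        def __init__(self, server_id):
            self.server_id = server_id
            self.containers = []

    class FakeNova(object):
        """Stand-in for Nova, so the sketch runs on its own."""
        def boot_host(self):
            return "server-uuid-1234"

    class ContainerService(object):
        """The proposed new service: it owns the container lifecycle and
        leans on Nova only to obtain hosts (the thing with a kernel)."""
        def __init__(self, nova):
            self.nova = nova
            self.hosts = []

        def add_capacity(self):
            # Nova manages the server itself; we only keep its id.
            self.hosts.append(NovaHost(self.nova.boot_host()))

        def create_container(self, image, host_mounts=()):
            # Container-specific details (mounts, namespaces, process
            # visibility) never leak into Nova; they live here.
            host = self.hosts[-1]
            container = {"image": image, "host_mounts": list(host_mounts)}
            host.containers.append(container)
            return container

    svc = ContainerService(FakeNova())
    svc.add_capacity()
    print(svc.create_container("ubuntu-cloud-image", host_mounts=["/srv/data"]))

The split of concerns in that sketch matches the line Russell draws:
Nova stops at the server boundary, and everything that happens inside
it belongs to the new service.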

-- 
@sam_alba


