[openstack-dev] [TripleO] podman: varlink interface for nice API calls

Sergii Golovatiuk sgolovat at redhat.com
Mon Aug 27 09:55:16 UTC 2018


Hi,

On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra <ramishra at redhat.com> wrote:
> On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker <sbaker at redhat.com> wrote:
>>
>>
>>
>> On 24/08/18 04:36, Fox, Kevin M wrote:
>>>
>>> Or use kubelet in standalone mode. It can be configured for either CRI-O
>>> or Docker. You can drive the static manifests from heat/ansible per host as
>>> normal and it would be a step in the greater direction of getting to
>>> Kubernetes without needing the whole thing at once, if that is the goal.
>>
>>
>> I was an advocate for using kubectl standalone for our container
>> orchestration needs well before we started containerizing TripleO. After
>> talking to a few kubernetes folk I cooled on the idea, because they had one
>> of two responses:
>> - cautious encouragement, but uncertainty about kubectl standalone
>> interface support and consideration for those use cases
>> - googly eyed incomprehension followed by "why would you do that??"
>>
>
> AFAIK, kubelet does not have a well-defined set of REST APIs yet[1], but
> things like heapster do interface directly with kubelet. Last I saw, there
> was no general consensus on having kubelet provide a subset of the
> api-server APIs. However, from a TripleO standpoint, providing kubelet with
> a set of ansible-generated pod specs may be sufficient?
>
> [1] https://github.com/kubernetes/kubernetes/issues/28138

Steve mentioned kubectl (the Kubernetes CLI, which talks to kube-apiserver),
not kubelet, which is only one component of Kubernetes. All Kubernetes
components can be compiled into a single binary (hyperkube), which can be
used to minimize the footprint. Ansible-generated pod specs for kubelet are
not enough on their own, as kubelet has no orchestration logic.
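
For context, here is a rough sketch (in Python, purely illustrative) of what
"providing pod specs to a standalone kubelet" amounts to: heat/ansible, or
anything else, renders static pod manifests into the directory kubelet
watches, and kubelet just keeps those containers running on that host;
nothing schedules or coordinates across hosts. The paths, service names and
images below are made up and not a proposal:

#!/usr/bin/env python3
"""Illustrative sketch only: drop a static pod manifest where a standalone
kubelet (started with --pod-manifest-path=/etc/kubernetes/manifests) will
pick it up. kubelet keeps the pod running on this host and nothing more;
there is no scheduling or cross-host orchestration involved."""
import json
import pathlib

# Assumed manifest directory; whatever kubelet is pointed at would do.
MANIFEST_DIR = pathlib.Path("/etc/kubernetes/manifests")


def write_static_pod(name, image, command):
    """Render a minimal static pod spec as JSON (kubelet reads JSON or YAML)."""
    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "hostNetwork": True,
            "containers": [
                {"name": name, "image": image, "command": command},
            ],
        },
    }
    path = MANIFEST_DIR / (name + ".json")
    path.write_text(json.dumps(pod, indent=2))
    return path


if __name__ == "__main__":
    # Hypothetical service name, image and command, purely for illustration.
    write_static_pod("keystone-api",
                     "registry.example.com/keystone:latest",
                     ["/usr/sbin/httpd", "-DFOREGROUND"])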

>>
>> This was a while ago, so it could be worth revisiting in the future.
>> We'll be making gradual changes, the first of which is using podman to
>> manage single containers. However, podman has native support for the pod
>> format, so I'm hoping we can switch to that once this transition is
>> complete. Then evaluating kubectl becomes much easier.
>>
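
For anyone who has not looked at podman's pod support yet, the caller-side
view is roughly the following. This is a sketch driving the podman CLI from
Python; the pod name and image are made up:

"""Illustrative only: create a pod with podman and start a container in it."""
import subprocess


def run(cmd):
    # Thin helper: echo the command and raise on failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


run(["podman", "pod", "create", "--name", "keystone"])
run(["podman", "run", "--detach", "--pod", "keystone",
     "registry.example.com/keystone:latest"])
run(["podman", "pod", "ps"])
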
>>> Question. Rather than writing a middle layer to abstract both container
>>> engines, couldn't you just use CRI? CRI is CRI-O's native language, and
>>> there is support already for Docker as well.
>>
>>
>> We're not writing a middle layer; we're leveraging one that is already
>> there.
>>
>> CRI-O is a socket interface and podman is a CLI interface, but both sit on
>> top of the exact same Go libraries. At this point, switching to podman
>> requires much less development effort because we're simply replacing docker
>> CLI calls.
>>
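To illustrate the "parity replacement" point: a management layer that shells
out to the docker CLI can, to a first approximation, just swap the binary
name, since podman accepts the same run/inspect/rm style of arguments for
the cases discussed here. A deliberately simplified sketch, not paunch's
actual code:

"""Deliberately simplified: swap the container runtime binary while keeping
the same CLI calls."""
import shutil
import subprocess

# Prefer podman if it is installed, otherwise fall back to docker.
RUNTIME = "podman" if shutil.which("podman") else "docker"


def container_run(name, image, *args):
    """Start a detached container; the arguments are identical either way."""
    cmd = [RUNTIME, "run", "--detach", "--name", name, image, *args]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return out.stdout.strip()  # both runtimes print the new container ID


if __name__ == "__main__":
    # Hypothetical image name, for illustration only.
    cid = container_run("demo", "registry.example.com/demo:latest")
    print("started", cid, "with", RUNTIME)
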
> I see good value in evaluating kubelet standalone and leveraging its
> built-in gRPC interface with CRI-O (rather than using podman) as a long-term
> strategy, unless we just want to provide an alternative to the docker
> container runtime with CRI-O.

I see no value in using kubelet without the rest of Kubernetes, IMHO.

>
>>>
>>>
>>> Thanks,
>>> Kevin
>>> ________________________________________
>>> From: Jay Pipes [jaypipes at gmail.com]
>>> Sent: Thursday, August 23, 2018 8:36 AM
>>> To: openstack-dev at lists.openstack.org
>>> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice
>>> API calls
>>>
>>> Dan, thanks for the details and answers. Appreciated.
>>>
>>> Best,
>>> -jay
>>>
>>> On 08/23/2018 10:50 AM, Dan Prince wrote:
>>>>
>>>> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes <jaypipes at gmail.com> wrote:
>>>>>
>>>>> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
>>>>>>
>>>>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi <emilien at redhat.com
>>>>>> <mailto:emilien at redhat.com>> wrote:
>>>>>>
>>>>>>       More seriously here: there is an ongoing effort to converge the
>>>>>>       tools around containerization within Red Hat, and we in TripleO are
>>>>>>       interested in continuing the containerization of our services (which
>>>>>>       was initially done with Docker & Docker-Distribution).
>>>>>>       We're looking at how these containers could be managed by k8s one
>>>>>>       day, but well before that we plan to swap out Docker and join the
>>>>>>       CRI-O effort, which seems to be using Podman + Buildah (among other
>>>>>>       things).
>>>>>>
>>>>>> I guess my wording wasn't the best, but Alex explained it much better here:
>>>>>>
>>>>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
>>>>>>
>>>>>> If I may rephrase, I guess our current intention is to continue our
>>>>>> containerization and investigate how we can improve our tooling to
>>>>>> better orchestrate the containers.
>>>>>> We have a nice interface (openstack/paunch) that allows us to run
>>>>>> multiple container backends, and we're currently looking outside of
>>>>>> Docker to see how we could solve our current challenges with the new
>>>>>> tools.
>>>>>> We're looking at CRI-O because it happens to be a project with a great
>>>>>> community, focusing on some problems that we in TripleO have been facing
>>>>>> since we containerized our services.
>>>>>>
>>>>>> We're doing all of this in the open, so feel free to ask any question.
>>>>>
>>>>> I appreciate your response, Emilien, thank you. Alex's responses to
>>>>> Jeremy on the #openstack-tc channel were informative, thank you Alex.
>>>>>
>>>>> For now, it *seems* to me that all of the chosen tooling is very Red
>>>>> Hat-centric. Which makes sense to me, considering TripleO is a Red Hat
>>>>> product.
>>>>
>>>> Perhaps a slight clarification here is needed. "Director" is a Red Hat
>>>> product. TripleO is an upstream project that is now largely driven by
>>>> Red Hat and is today marked as single vendor. We welcome others to
>>>> contribute to the project upstream just like anybody else.
>>>>
>>>> And for those who don't know the history, the TripleO project was once
>>>> multi-vendor as well. So a lot of the abstractions we have in place
>>>> could easily be extended to support distro-specific implementation
>>>> details. (That is kind of how I view podman in the scope of this thread.)
>>>>
>>>>> I don't know how much of the current reinvention of container runtimes
>>>>> and various tooling around containers is the result of politics. I don't
>>>>> know how much is the result of certain companies wanting to "own" the
>>>>> container stack from top to bottom. Or how much is a result of technical
>>>>> disagreements that simply cannot (or will not) be resolved among
>>>>> contributors in the container development ecosystem.
>>>>>
>>>>> Or is it some combination of the above? I don't know.
>>>>>
>>>>> What I *do* know is that the "NIH du jour" mentality currently playing
>>>>> itself out in the container ecosystem -- reminding me very much of the
>>>>> JavaScript ecosystem -- makes it difficult for any potential *consumers*
>>>>> of container libraries, runtimes or applications to be confident that
>>>>> any choice they make towards one or the other will be the *right* choice
>>>>> or even a *possible* choice next year -- or next week.
>>>>> Perhaps this is why things like openstack/paunch exist -- to give you
>>>>> options if something doesn't pan out.
>>>>
>>>> This is exactly why paunch exists.
>>>>
>>>> Re the podman thing, I look at it as an implementation detail. The
>>>> good news is that, given it is almost a parity replacement for what we
>>>> already use, we'll still contribute to the OpenStack community in
>>>> similar ways. Ultimately, whether you run 'docker run' or 'podman run',
>>>> you end up with the same thing as far as the existing TripleO
>>>> architecture goes.
>>>>
>>>> Dan
>>>>
>>>>> You have a tough job. I wish you all the luck in the world in making
>>>>> these decisions and hope politics and internal corporate management
>>>>> decisions play as little a role in them as possible.
>>>>>
>>>>> Best,
>>>>> -jay
>>>>>
>>>
>>
>>
>>
>
>
>
>
> --
> Regards,
> Rabi Mishra
>
>



-- 
Best Regards,
Sergii Golovatiuk


