[openstack-dev] Compute API (Was Re: [nova][cinder] how to handle AZ bug 1496235?)

Sylvain Bauza sbauza at redhat.com
Mon Sep 28 09:35:03 UTC 2015



On 28/09/2015 11:23, Duncan Thomas wrote:
>
> The trouble with putting more intelligence in the clients is that 
> there are more clients than just the one we provide, and the more 
> smarts we require in the clients, the more divergence of functionality 
> we're likely to see. Also, bugs and slowly percolating bug fixes.
>

That's why I consider the orchestration layer in the client should be 
identical to what we have in Nova, no more than that. If booting from a 
volume with source=image requires more than just a volume creation, then 
I agree with you: it doesn't belong in the client, but rather in Heat.

The same goes for networks. What Nova does for managing CRUD operations 
can be done in the python clients, but that's the limit.

About the maintenance burden, I also consider that patching clients is 
far easier than patching an API, unless I missed something.
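For the simple boot-from-volume case above, the client-side orchestration amounts to roughly the following. This is a minimal sketch, not a definitive implementation: the `cinder`/`nova` client objects are passed in, and their method names mirror python-cinderclient and python-novaclient, but treat the exact signatures here as assumptions.

```python
import time

def boot_from_image_volume(cinder, nova, name, image_id, flavor_id,
                           size_gb=20, timeout=60, poll=1):
    """Create a bootable volume from an image, then boot a server from it.

    `cinder` and `nova` are client objects; the calls below mirror
    python-cinderclient / python-novaclient, but are assumptions here.
    """
    # Step 1: ask Cinder for a volume built from the image.
    vol = cinder.volumes.create(size=size_gb, imageRef=image_id)

    # Step 2: poll until the volume is usable (or give up).
    deadline = time.time() + timeout
    while cinder.volumes.get(vol.id).status != 'available':
        if time.time() > deadline:
            raise RuntimeError('volume %s never became available' % vol.id)
        time.sleep(poll)

    # Step 3: boot the server with the volume as its root disk.
    return nova.servers.create(
        name=name, image=None, flavor=flavor_id,
        block_device_mapping_v2=[{
            'uuid': vol.id,
            'source_type': 'volume',
            'destination_type': 'volume',
            'boot_index': 0,
        }])
```

The point is that this is a linear create-wait-create sequence with no state of its own, which is why it can live in a client rather than behind a new service API.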

-Sylvain


> On 28 Sep 2015 11:27, "Sylvain Bauza" <sbauza at redhat.com 
> <mailto:sbauza at redhat.com>> wrote:
>
>
>
>     On 25/09/2015 16:12, Andrew Laski wrote:
>
>         On 09/24/15 at 03:13pm, James Penick wrote:
>
>
>
>                 At risk of getting too offtopic I think there's an
>                 alternate solution to
>                 doing this in Nova or on the client side.  I think
>                 we're missing some sort
>                 of OpenStack API and service that can handle this. 
>                 Nova is a low level
>                 infrastructure API and service, it is not designed to
>                 handle these
>                 orchestrations.  I haven't checked in on Heat in a
>                 while but perhaps this
>                 is a role that it could fill.
>
>                 I think that too many people consider Nova to be *the*
>                 OpenStack API when
>                 considering instances/volumes/networking/images and
>                 that's not something I
>                 would like to see continue.  Or at the very least I
>                 would like to see a
>                 split between the orchestration/proxy pieces and the
>                 "manage my
>                 VM/container/baremetal" bits
>
>
>
>             (new thread)
>             You've hit on one of my biggest issues right now: As far
>             as many deployers
>             and consumers are concerned (and definitely what I tell my
>             users within
>             Yahoo): The value of an OpenStack value-stream (compute,
>             network, storage)
>             is to provide a single consistent API for abstracting and
>             managing those
>             infrastructure resources.
>
>             Take networking: I can manage Firewalls, switches, IP
>             selection, SDN, etc
>             through Neutron. But for compute, If I want VM I go
>             through Nova, for
>             Baremetal I can -mostly- go through Nova, and for
>             containers I would talk
>             to Magnum or use something like the nova docker driver.
>
>             This means that, by default, Nova -is- the closest thing
>             to a top level
>             abstraction layer for compute. But if that is explicitly
>             against Nova's
>             charter, and Nova isn't going to be the top level
>             abstraction for all
>             things Compute, then something else needs to fill that
>             space. When that
>             happens, all things common to compute provisioning should
>             come out of Nova
>             and move into that new API. Availability zones, Quota, etc.
>
>
>         I do think Nova is the top level abstraction layer for
>         compute. My issue is when Nova is asked to manage other
>         resources.  There's no API call to tell Cinder "create a
>         volume and attach it to this instance, and create that
>         instance if it doesn't exist."  And I'm not sure why the
>         reverse isn't true.
>
>         I want Nova to be the absolute best API for managing compute
>         resources.  It's when someone is managing compute and volumes
>         and networks together that I don't feel that Nova is the best
>         place for that.  Most importantly right now it seems that not
>         everyone is on the same page on this and I think it would be
>         beneficial to come together and figure out what sort of
>         workloads the Nova API is intending to provide.
>
>
>     I totally agree with you on those points:
>      - the nova API should only support CRUD operations for compute
>     VMs and should no longer manage either volumes or networks IMHO,
>     because it creates more problems than it solves
>      - given the above, the nova API could possibly accept resources
>     from networks or volumes, but only for placement decisions related
>     to instances.
>
>     That said, I can also understand that operators sometimes just
>     want a single tool for creating this kind of relationship between
>     a volume and an instance (rather than providing a YAML file), but
>     IMHO it perhaps doesn't need a top-level API, just a python client
>     able to do some very simple orchestration between services,
>     something like openstack-client.
>
>     I don't really see much value in a proxy API calling Nova or
>     Neutron. IMHO, that should still be done by clients, not
>     services.
>
>     -Sylvain
>
>
>
>             -James
>
>
>             __________________________________________________________________________
>
>             OpenStack Development Mailing List (not for usage questions)
>             Unsubscribe:
>             OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>             <http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe>
>             http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
