[openstack-dev] Compute API (Was Re: [nova][cinder] how to handle AZ bug 1496235?)
Monty Taylor
monty at inaugust.com
Mon Sep 28 13:50:10 UTC 2015
On 09/28/2015 07:58 AM, Sylvain Bauza wrote:
>
>
> On 28/09/2015 12:35, Duncan Thomas wrote:
>>
>>
>> On 28 September 2015 at 12:35, Sylvain Bauza <sbauza at redhat.com> wrote:
>>
>> About the maintenance burden, I also consider that patching
>> clients is far easier than patching an API, unless I missed
>> something.
>>
>>
>> I think I very much disagree there - patching a central installation
>> is much, much easier than getting N customers to patch M different
>> libraries, even assuming the fix is available for any significant
>> subset of the M libraries, plus making sure that new customers use the
>> correct libraries, plus helping any customers who have some sort of
>> roll-your-own library do the new right thing...
>>
>
> Well, having N versions of clients against one single API version is
> just something we have managed since the beginning. I don't really see
> why it suddenly becomes so difficult to manage.
>
>
>> I think there's a definite place for a simple API to do infrastructure
>> level orchestration without needing the complexities of heat - these
>> APIs are in nova because they're useful - there's clear operator
>> desire for them and a couple of operators have been quite vocal about
>> their desire for them not to be removed. Great, let's keep them, but
>> form a team of people interested in getting them right (get rid of
>> fixed timeouts, etc), add any missing pieces (like floating IPs for
>> new VMs) and generally focus on getting this piece of the puzzle
>> right. Breaking another small piece off nova and polishing it has been
>> a generally successful pattern.
>
> I don't want to overthink what could be the right scope of that future
> API, but given the Heat mission statement [1] and its service name
> 'orchestration', I don't see why this API endpoint should land in the
> Nova codebase rather than be provided by the Heat API. Oh sure, it
> would perhaps require another endpoint behind the same service, but
> isn't that better than having another endpoint in Nova?
>
> [1]
> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L482-L484
>
>
>>
>> I remember Monty Taylor (copied) having a rant about the lack of the
>> perfect 'give me a VM with all its stuff sorted' API. Care to comment,
>> Monty?
>
> Sounds like you misunderstood me. I'm not against implementing this
> excellent use case; I just think the best place is not in Nova, and it
> should be done elsewhere.
>
Specifically, I want "nova boot" to get me a VM with an IP address. I
don't want it to do fancy orchestration - I want it to not need fancy
orchestration, because needing fancy orchestration to get a VM on a
network is not a feature.
I also VERY MUCH do not want to need Heat to get a VM. I want to use
Heat to do something complex. Getting a VM is not complex. It should not
be complex. When it's complex to the level of needing Heat, we've
failed somewhere else.
Also, people should stop deploying clouds that require people to use
floating IPs to get basic internet access. It's a misuse of the construct.
Public Network "ext-net" -> shared / directly attachable
Per-tenant Network "private" -> private network, not shared, not routable
If the user chooses, a router can be added with gateway set to ext-net.
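
For concreteness, a rough sketch of that setup with the neutron CLI (the
network names and CIDRs are just examples, and exact flag spellings vary
a bit between releases):

  # operator: create the shared, directly-attachable public network
  neutron net-create ext-net --shared --router:external=True
  neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet --disable-dhcp

  # tenant: create a private network, and optionally route it to ext-net
  neutron net-create private
  neutron subnet-create private 10.0.0.0/24 --name private-subnet
  neutron router-create router1
  neutron router-gateway-set router1 ext-net
  neutron router-interface-add router1 private-subnet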
This way:
nova boot --network=ext-net -> vm dhcp'd on the public network
nova boot --network=private -> vm dhcp'd on the private network
nova floating-ip-attach -> a floating ip from the ext-net network gets
attached to the vm
All of the use cases are handled, basic things are easy (booting a vm on
a network works in one step), and the 5% of cases where a floating
IP is actually needed (a long-lived service on a single vm that wants to
keep the IP, and not just a DNS name, across VM migrations and isn't
using a load-balancer) can use that.
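
For reference, with today's clients that floating-ip case looks roughly
like this (server name, network UUID and address are placeholders, and
the flag spellings may differ between releases):

  # boot on the private network
  nova boot --flavor <flavor> --image <image> --nic net-id=<private-net-uuid> myserver
  # allocate a floating ip from the ext-net pool and attach it to the vm
  nova floating-ip-create ext-net
  nova floating-ip-associate myserver 203.0.113.10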
This is, btw, the most common public cloud deployment model.
Let's stop making things harder than they need to be and serve our users.