[openstack-dev] [api] API recommendation

Peter Balland peter at balland.net
Fri Oct 17 23:48:35 UTC 2014


On Oct 16, 2014 8:24 AM, "Dean Troyer" <dtroyer at gmail.com> wrote:
>
>
>
> On Thu, Oct 16, 2014 at 4:57 AM, Salvatore Orlando <sorlando at nicira.com> wrote:
>>
>> From an API guideline viewpoint, I understand that
>> https://review.openstack.org/#/c/86938/ proposes the introduction of a
>> rather simple endpoint to query active tasks and filter them by resource
>> uuid or state, for example.
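
To make that concrete, the kind of call I read the review as proposing
would look roughly like the sketch below; the endpoint path, header and
field names are my own illustration, not taken from the review.

    # Illustrative only: querying a flat tasks collection and filtering
    # by the resource it acts on and by state.
    import requests

    resp = requests.get(
        "http://compute.example.com/v2.1/tasks",   # made-up endpoint
        headers={"X-Auth-Token": "<token>"},
        params={"resource_uuid": "<server uuid>", "state": "running"},
    )
    for task in resp.json().get("tasks", []):
        print(task["uuid"], task["state"])
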
>
>
> That review/blueprint contains one thing that I want to address in more
> detail below along with Sal's comment on persistence...
>
>>
>> While this is hardly questionable, I wonder if it might be worth
>> "typifying" the task, i.e. adding a resource_type attribute, and/or
>> allowing retrieval of active tasks as a child resource of an object,
>> e.g. GET /servers/<server_id>/tasks?state=running, or, just for running
>> tasks, GET /servers/<server_id>/active_tasks
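
For what it's worth, the "typified" variant could look something like
this sketch; every name here is hypothetical.

    # Two retrieval styles for the same data, plus a resource_type field
    # on the task itself; purely illustrative.
    #
    #   GET /tasks?resource_type=server&resource_uuid=<server_id>&state=running
    #   GET /servers/<server_id>/tasks?state=running
    #
    task = {
        "uuid": "<task uuid>",
        "resource_type": "server",         # kind of resource it acts on
        "resource_uuid": "<server uuid>",  # which instance of that resource
        "state": "running",
    }
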
>
>
> I'd prefer the filter approach, but more importantly, it should be the
> _same_ structure as listing resources themselves.
>
> To note: here is another API design detail, specifying resource types in
> the URL path:
>
> /server/<server>/foo
>
> vs
>
> /<server>/foo
>
> or what we have today, for example, in compute:
>
> /<tenant>/foo
>
>> The proposed approach for the multiple server create case also makes
>> sense to me. Other than "bulk" operations there are indeed cases where a
>> single API operation needs to perform multiple tasks. For instance, in
>> Neutron, creating a port implies L2 wiring, setting up DHCP info, and
>> securing it on the compute node by enforcing anti-spoof rules and security
>> groups. This means there will be 3 or 4 active tasks. For this reason I
>> wonder if it might be worth differentiating between the concept of
>> "operation" and "task", where the former is the activity explicitly
>> initiated by the API consumer, and the latter are the activities which
>> need to complete to fulfil it. This is where we might leverage the
>> already proposed request_id attribute of the task data structure.
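
Taking the Neutron port example, the split Salvatore describes might be
modelled along these lines; the field names are invented for
illustration.

    # One "operation" (what the caller asked for) fanning out into
    # several "tasks", all tied back to it through request_id.
    operation = {"request_id": "req-a1b2",
                 "verb": "POST", "target": "/ports"}
    tasks = [
        {"uuid": "t1", "request_id": "req-a1b2",
         "action": "l2_wiring", "state": "done"},
        {"uuid": "t2", "request_id": "req-a1b2",
         "action": "dhcp_setup", "state": "running"},
        {"uuid": "t3", "request_id": "req-a1b2",
         "action": "security_groups", "state": "pending"},
    ]
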
>
>
> I like the ability to track the fan-out, especially if I can get the
> state of the entire set of tasks in a single round-trip.  This also makes
> it easier to handle backout of failed requests without having to maintain
> a lot of client-side state, or make a lot of round-trips.
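
That single round-trip could be as simple as a filter on request_id; a
sketch, with the endpoint and names made up:

    # Everything the operation fanned out into comes back in one
    # response, so the client can decide whether to retry or back out
    # without tracking per-task state itself.
    import requests

    resp = requests.get(
        "http://neutron.example.com/v2.0/tasks",   # made-up endpoint
        headers={"X-Auth-Token": "<token>"},
        params={"request_id": "req-a1b2"},
    )
    states = [t["state"] for t in resp.json().get("tasks", [])]
    needs_backout = any(s == "error" for s in states)
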
>

Based on previous experience, I highly recommend maintaining a separation
between tracking work at the API-call level (the aggregate) and tracking
other "subtasks." In non-provisioning scenarios, tasks may fire
independently of API operations, so there wouldn't be an API handle to
query on. Managing per-API-call tasks in the framework is great. The
"other work" type of task is a *much* more complicated beast, deserving of
its own design.
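
A toy example of what I mean (names invented): a task the system fires on
its own has no originating request to hang it off of, so forcing it into
the per-API-call model gets awkward.

    # Task created by an API call vs. one fired by the system itself;
    # only the former has a request_id to query on.
    api_task = {"uuid": "t7", "request_id": "req-a1b2",
                "action": "boot_server"}
    background_task = {"uuid": "t8", "request_id": None,
                       "action": "periodic_resync"}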

>> Finally, a note on persistence. How long should a completed task,
>> successful or not, be stored for? Do we want to store them until the
>> resource they operated on is deleted?
>> I don't think it's a great idea to store them indefinitely in the DB.
>> Tying their lifespan to resources is probably a decent idea, but
>> time-based cleanup policies might also be considered (e.g. destroy a task
>> record 24 hours after its completion).
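
The time-based variant could be as simple as a periodic purge with a
configurable retention window; a sketch, not a concrete proposal:

    # Drop task records that finished more than RETENTION ago; records
    # for tasks still in flight are always kept.
    from datetime import datetime, timedelta

    RETENTION = timedelta(hours=24)

    def purge_finished_tasks(tasks, now=None):
        now = now or datetime.utcnow()
        return [t for t in tasks
                if t["finished_at"] is None
                or now - t["finished_at"] < RETENTION]
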
>
>
> I can envision an operator/user wanting to be able to pull a log of an
> operation/task not only for cloud debugging (x failed to build, when/why?)
> but also for app-level debugging (concrete use case not ready at
> deadline).  This would require a minimum of life-of-resource +
> some-amount-of-time.  The time might also be variable; failed operations
> might actually need to stick around longer.
>
> Even for an operator with access to backend logging, pulling these state
> transitions out should not be hard, and they should also be available to
> the resource owner (project).
>
> dt
>
> --
>
> Dean Troyer
> dtroyer at gmail.com
>