[openstack-dev] [Ironic] [Oslo] Question about Futurist executors
harlowja at outlook.com
Thu Jul 23 20:01:40 UTC 2015
An example/PoC that adds a basic rejection mechanism to the various
executors (the process pool executor's internals are complicated, so it
is not implemented for that one).
With that in place, the following (or similar) could be done:
Comments welcome; ideally https://bugs.python.org/issue22737 would end up
being a much better version of this, but the general idea is the same...
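A rough sketch of the kind of rejection policy being described, under the
assumption that the executor (or a thin wrapper around it) consults a
policy callback before queueing more work; the names RejectedSubmission,
reject_when_reached and BoundedExecutor below are invented for
illustration and are not taken from the actual change:

    import futurist


    class RejectedSubmission(Exception):
        """Raised instead of queueing more work once the backlog limit is hit."""


    def reject_when_reached(max_backlog):
        """Build a policy callback that refuses work past max_backlog items."""
        def policy(backlog_size):
            if backlog_size >= max_backlog:
                raise RejectedSubmission("backlog of %d reached" % backlog_size)
        return policy


    class BoundedExecutor(object):
        """Wrap a futures-style executor and apply the policy on submit()."""

        def __init__(self, executor, policy):
            self._executor = executor
            self._policy = policy
            self._pending = set()

        def submit(self, fn, *args, **kwargs):
            self._policy(len(self._pending))  # may raise RejectedSubmission
            fut = self._executor.submit(fn, *args, **kwargs)
            self._pending.add(fut)
            fut.add_done_callback(self._pending.discard)
            return fut


    executor = BoundedExecutor(futurist.ThreadPoolExecutor(max_workers=4),
                               reject_when_reached(100))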
Joshua Harlow wrote:
> Dmitry Tantsur wrote:
>> I'm redirecting your question to the oslo folks, as I'm afraid my
>> answer might be wrong.
>> On 07/23/2015 01:55 PM, Jim Rollenhagen wrote:
>>> On Wed, Jul 22, 2015 at 02:40:47PM +0200, Dmitry Tantsur wrote:
>>>> Hi all!
>>>> Currently _spawn_worker in the conductor manager raises
>>>> NoFreeConductorWorker if the pool is already full. That's not very
>>>> user friendly (a potential source of retries in the client) and does
>>>> not map well onto common async worker patterns.
>>>> My understanding is that it was done to prevent the conductor thread
>>>> from waiting on the pool to become free. If this is true, we no longer
>>>> need it once we switch to Futurist, as Futurist maintains an internal
>>>> queue for its green executor, just like the thread and process
>>>> executors in the stdlib do. Instead of blocking the conductor, the
>>>> request will be queued, and a user won't have to retry on a vague
>>>> (and rare!) HTTP 503 error.
>>>> WDYT about me dropping this exception with the move to Futurist?
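(As a quick illustration of the queueing behaviour being described: with
the stdlib executors, which Futurist's executors are said below to mirror,
submit() always accepts the work, and anything beyond the worker count just
waits in the internal queue; it never blocks the caller or raises.)

    import time
    from concurrent import futures


    def slow_task(n):
        time.sleep(0.1)
        return n


    # One worker, ten submissions: nothing is rejected, the nine extra
    # work items simply sit in the executor's internal queue until the
    # worker frees up.
    executor = futures.ThreadPoolExecutor(max_workers=1)
    futs = [executor.submit(slow_task, i) for i in range(10)]
    print([f.result() for f in futs])   # -> [0, 1, 2, ..., 9]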
>>> I kind of like this, but with my operator hat on this is a bit scary.
>>> Does Futurist just queue all requests indefinitely? Is it configurable?
>>> Am I able to get any insight into the current state of that queue?
>> I believe the answer is no, and the reason IIUC is that Futurist
>> executors are modeled after the stdlib executors, but I may be wrong.
> So this is correct: currently executors will queue things up, and that
> queue may get very large. In futurist we can work on making this better,
> although to do it correctly we really need
> https://bugs.python.org/issue22737, and that needs upstream Python
> adjustments to make it possible.
> Without https://bugs.python.org/issue22737 being implemented it's not
> too hard to limit the work queue yourself, but it will have to be
> something extra and will require some tracking of your own.
> For example:
> dispatched = set()
> executor = futurist.GreenThreadPoolExecutor()
>
> def on_done(fut):
>     dispatched.discard(fut)
>
> # Reject new work once MAX_DISPATCH futures are still outstanding.
> if len(dispatched) >= MAX_DISPATCH:
>     raise IamTooOverworkedException(...)
> fut = executor.submit(some_work)
> dispatched.add(fut)
> fut.add_done_callback(on_done)
> The above will limit how much work is done at the same time
> (https://bugs.python.org/issue22737 would make it work more like Java,
> which has executor rejection policies), but you can limit this yourself
> pretty easily... Making issue22737 happen would be great; I just haven't
> had enough time to pull that off...
>>> Just indefinitely queueing up everything seems like it could end with a
>>> system that's backlogged to death, with no way to determine if that's
>>> actually the problem or not.
> As for metrics, one of the additions in the futurist executor subclasses
> was to bolt on gathering of statistics (exposed at the executor property
> '.statistics'), so hopefully that can help you learn about what your
> executors are doing as well...
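(A small sketch of reading those statistics; the exact field names below,
executed, failures and average_runtime, are assumptions about the futurist
statistics object, so check the futurist docs for the authoritative list.)

    import futurist


    def noop():
        return 42


    executor = futurist.ThreadPoolExecutor(max_workers=2)
    for _ in range(5):
        executor.submit(noop)
    executor.shutdown()

    # The executor keeps running counters about the work it has processed.
    stats = executor.statistics
    print(stats.executed, stats.failures, stats.average_runtime)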
>>> // jim
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe