[Openstack-operators] [nova][ironic][scheduler][placement] IMPORTANT: Getting rid of the automated reschedule functionality
James Penick
jpenick at gmail.com
Mon May 22 20:01:41 UTC 2017
That depends...
I differentiate between a compute worker running on a hypervisor, and one
running as a service in the control plane (like the compute worker in an
Ironic cluster).
A compute worker that is running on a hypervisor has highly restricted
network access. But if the compute worker is a service in the control
plane, as it is in my Ironic installations, that's totally OK. It
really comes down to the fact that I don't want any real or logical network
access between an instance and the heart of the control plane.
I'll allow a child cell control plane to call a parent cell, just not a
hypervisor within the child cell.
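
Sean's question below is about whether a cell conductor could make an HTTP
call back to nova-api through the public interface, carrying the user's
context. A minimal sketch of what that upcall might look like, assuming a
hypothetical public endpoint URL and placeholder token (neither is a real
deployment value), constructing the request without sending it:

```python
import json
import urllib.request

# Assumed public nova-api endpoint; placeholder, not a real deployment value.
NOVA_API = "https://nova.example.com/v2.1"

def build_upcall(user_token, instance_uuid, body):
    """Prepare (but do not send) a PUT to nova-api that carries the
    user's own token rather than a service credential, as the upcall
    Sean describes would."""
    return urllib.request.Request(
        url=f"{NOVA_API}/servers/{instance_uuid}",
        method="PUT",
        data=json.dumps(body).encode("utf-8"),
        headers={
            # User context, not a service token.
            "X-Auth-Token": user_token,
            "Content-Type": "application/json",
        },
    )

req = build_upcall("tok-example", "abcd-1234", {"server": {}})
print(req.get_method(), req.full_url)
```

The point of the sketch is only that the network path runs from the cell
out to the public API endpoint with the user's token, which is exactly the
path that strict firewalling between instances and the control plane may
or may not permit.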
On Mon, May 22, 2017 at 12:42 PM, Sean Dague <sean at dague.net> wrote:
> On 05/22/2017 02:45 PM, James Penick wrote:
> <snip>
> > During the summit the agreement was, if I recall, that reschedules would
> > happen within a cell, and not between the parent and cell. That was
> > completely acceptable to me.
>
> Follow on question (just because the right folks are in this thread, and
> it could impact paths forward). I know that some of the inability to
> have upcalls in the system is based around firewalling that both Yahoo
> and RAX did blocking the compute workers from communicating out.
>
> If the compute worker or cell conductor wanted to make an HTTP call back
> to nova-api (through the public interface), with the user context, is
> that a network path that would or could be accessible in your case?
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>