[openstack-dev] [Nova] FFE Request: hyper-v-remotefx

Alessandro Pilotti apilotti at cloudbasesolutions.com
Fri Sep 6 12:52:35 UTC 2013




On Sep 6, 2013, at 15:36, "Daniel P. Berrange" <berrange at redhat.com> wrote:

On Fri, Sep 06, 2013 at 12:22:27PM +0000, Alessandro Pilotti wrote:

On Sep 6, 2013, at 14:26, "Daniel P. Berrange" <berrange at redhat.com> wrote:

On Fri, Sep 06, 2013 at 10:56:10AM +0000, Alessandro Pilotti wrote:
The RemoteFX feature allows Hyper-V compute nodes to provide GPU acceleration to instances by sharing the host's GPU resources.

Blueprint: https://blueprints.launchpad.net/nova/+spec/hyper-v-remotefx

This feature provides significant improvements for VDI-related scenarios based on OpenStack and Hyper-V compute nodes.
It impacts only the driver code, plus an additional optional scheduler filter.

A full architectural description has been added in the blueprint.

The patch was published during H3 on Aug 18th and first reviewed
on Sept 4th, with some very good ideas for larger-scale improvements,
which were implemented the same day, unfortunately too late in the cycle.

Simply adding the blueprint description is not sufficient to remove
my objections. The patch as proposed is too incomplete to be merged IMHO.
I pointed out a number of design flaws in the review. The updated info in
the blueprint just reinforces my review points, further demonstrating
why this patch should not be merged as it is coded today. I believe it will
require non-negligible additional dev work to address this properly,
so I'm afraid that I think this is not suitable material for Havana at
this point in time.

I already committed an updated patch after your review on the 4th addressing
your observations, agreeing that the initial implementation was missing
flexibility (I only didn't commit the scheduler filter yet, as IMO it
requires a separate dependent patch, but this can be done anytime).

IMHO the lack of scheduler support is a blocker item. Running a VM with
this feature requires that the scheduler be able to place the VM on a
host which supports the feature. Without this, users are just relying on
lucky placement when trying to boot a VM that uses this feature.
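To make the placement argument concrete, what a scheduler filter for this would look like can be sketched as a standard Nova host filter. Note this is an illustrative sketch only: the class name and the capability/property keys (RemoteFXFilter, 'remotefx_gpu_capable', 'remotefx_free_vram_mb', 'hyperv_remotefx_vram_mb') are assumptions for the example, not the names in the patch under review.

```python
# Hypothetical host filter sketch; all key names are invented for
# illustration and are not taken from the actual patch.

class RemoteFXFilter(object):
    """Pass only hosts that advertise RemoteFX GPU capability and
    have enough free video memory for the requested instance."""

    def host_passes(self, host_state, filter_properties):
        spec = filter_properties.get('request_spec', {})
        image_props = spec.get('image', {}).get('properties', {})

        required = image_props.get('hyperv_remotefx_vram_mb')
        if required is None:
            # Instance does not request RemoteFX: any host is fine.
            return True

        caps = host_state.capabilities or {}
        if not caps.get('remotefx_gpu_capable'):
            return False
        free_vram = caps.get('remotefx_free_vram_mb', 0)
        return int(free_vram) >= int(required)


class FakeHost(object):
    """Minimal stand-in for a scheduler HostState, for demonstration."""
    def __init__(self, capabilities):
        self.capabilities = capabilities
```

Without such a filter, a RemoteFX instance could land on a host with no GPU at all, which is exactly the "lucky placement" problem described above.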

One detail from my previous reply has most probably been misunderstood: by the following sentence I meant "now, as part of this blueprint", not "tacked on later".
So, scheduler support (as in a filter) is definitely going to be part of the blueprint under discussion; sorry if that was not clear.

(I only didn't commit the scheduler filter yet, as IMO it
requires a separate dependent patch, but this can be done anytime).


To allow others to add their opinion easily, I'm following up here on
your objections in the blueprint, which can basically be reduced (correct
me if I'm wrong) to the following positions, besides the areas on which
we already agreed regarding scheduling and which have already been
implemented:

1) You suggest defining the amount of RemoteFX GPU resources required in the flavour.
2) I suggest providing those requirements in the image custom properties (which is how it is currently implemented; see also the blueprint description).
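Position 2 above can be sketched as driver-side parsing of image custom properties. The property names here ('hyperv_remotefx', 'hyperv_remotefx_monitor_count', 'hyperv_remotefx_max_resolution') are assumptions made up for this example; the actual keys used by the patch may differ.

```python
# Illustrative sketch of how a Hyper-V driver might read RemoteFX
# requirements from glance image properties. All property names are
# hypothetical, not taken from the patch under review.

def parse_remotefx_requirements(image_properties):
    """Return the RemoteFX settings requested by an image, or None
    if the image does not request RemoteFX at all."""
    if image_properties.get('hyperv_remotefx') != 'true':
        return None
    return {
        'monitor_count': int(
            image_properties.get('hyperv_remotefx_monitor_count', 1)),
        'max_resolution':
            image_properties.get('hyperv_remotefx_max_resolution',
                                 '1920x1200'),
    }
```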

Besides the fact that the two solutions are IMO not mutually exclusive,
as scheduling filters could simply apply both, the reason why I don't
see how flavours should be considered for this patch is simple:

IIUC, this feature consumes finite host & network resources. Allowing
users to request it via the image properties is fine, but on its own
I consider that insecure. The administrator needs to be able to control
who can access these finite resources, in the same way that you don't
allow users to specify arbitrary amounts of memory, or as many vCPUs
as they want. It seems that flavours are the only place to do this.
Again, I consider this a blocker, not something to be tacked on later.
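The admin-side cap Daniel is arguing for could be sketched as a flavor-level limit checked against the image-requested amount. The extra-spec key 'remotefx:max_vram_mb' is invented for illustration and is not part of the patch or of Nova's flavor API.

```python
# Sketch of a flavor-level cap on RemoteFX resources. The extra-spec
# key below is hypothetical, used only to illustrate the design point.

def request_allowed(flavor_extra_specs, requested_vram_mb):
    """Return True if the image-requested VRAM fits within the cap
    that the administrator set on the flavor."""
    cap = flavor_extra_specs.get('remotefx:max_vram_mb')
    if cap is None:
        # Flavor grants no RemoteFX access at all: deny the request.
        return False
    return requested_vram_mb <= int(cap)
```

The point of such a check is that image properties alone let any user with access to the image consume GPU resources, whereas a flavor cap keeps the quota decision in the administrator's hands.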

AFAIK Nova flavors at the moment don't support custom properties
(please correct me if I'm wrong here; I looked at the flavors APIs
and implementation, but I admit my ignorance of the flavor internals), so:

I'm honestly not sure, since I don't know enough about the flavours
APIs.

1) Adding the RemoteFX requirements at the system metadata level
  https://github.com/openstack/nova/blob/master/nova/compute/flavors.py#L60
  would IMO be wrong, as it would tightly couple a hypervisor-specific limit
  with a generic compute API.
2) Adding custom properties to flavors goes way beyond the scope of this blueprint.

From an administrative and ACL perspective, administrators can selectively grant users access to a given image, thus limiting access to RemoteFX GPU resources (first and foremost, video memory).

Being able to do it ALSO at the flavour level would add additional granularity,
e.g. assigning different screen resolutions using a single image. I personally
see this as a separate blueprint, as it impacts the design of a fundamental
Nova feature and will need quite some discussion.

That is quite possibly correct.

That we're having this level of design debate is exactly why I think this
is not suitable for a feature freeze exception. Freeze exception is for
things that are basically complete, barring small bug fixes / changes.

Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


