[openstack-dev] Some idea on device assignment support
Jiang, Yunhong
yunhong.jiang at intel.com
Wed Nov 7 08:37:56 UTC 2012
> -----Original Message-----
> From: John Garbutt [mailto:John.Garbutt at citrix.com]
> Sent: Wednesday, November 07, 2012 12:08 AM
> To: OpenStack Development Mailing List; Vishvananda Ishaya
> Subject: Re: [openstack-dev] Some idea on device assignment support
>
> Hi,
>
> It would be good to join in; at Citrix we are looking at:
> https://blueprints.launchpad.net/nova/+spec/xenapi-gpu-passthrough
>
> Two quick questions:
>
> 1) What are the use cases for the more general pass-through?
> I understand the "want GPU flavour" concept, for desktops or HPC workloads.
SR-IOV is another popular reason. For example, with SR-IOV NIC pass-through an
instance can use the network card directly, and some platforms also have SR-IOV
encryption cards, etc.
Another one is InfiniBand, although I'm not sure about the latest progress on
the VMM side. Back in 2007 there was some support for InfiniBand PV in the Xen
community.
> Is the idea to do device pass-through for higher performance?
So it is possibly not always about higher performance, although for something
like an SR-IOV NIC that may well be part of the benefit.
> Would the extra spec try to say: "high performance local disk"?
I think it depends on how we implement the interface. For NICs, the user may
select a 1G or a 10G card. But I have no clear idea for other devices; for
example, how do you think GPUs with different capabilities should be presented
to the user?
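To make that concrete, something along these lines could work (a rough sketch
only; the extra-spec keys below are made up for illustration, not a proposed
format):

    # Hypothetical flavor extra specs describing an assignable device.
    # Neither the keys nor the values are settled; this only shows how a
    # flavor could carry both the device type and a coarse capability.
    gpu_flavor_extra_specs = {
        "device:gpu": "1",            # assign one GPU to the instance
    }

    nic_flavor_extra_specs = {
        "device:nic-sriov-10G": "1",  # one 10G SR-IOV virtual function;
                                      # 1G vs 10G show up as different keys
    }

The scheduler would then only have to match these keys against whatever counts
each host reports.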
>
> 2) What about unifying Memory, CPU, GPU, Disks et al?
> I see these are all "consumable resources" with configurable/optional "over-commit".
> Clearly this is data the scheduler needs to collect and make its decisions on.
I think once over-commit is allowed, CPU/memory are different because they have
no hard limit and thus do not need such careful scheduling, while assignable
devices are strictly limited in number.
> I haven't looked at how that looks in the DB, but it seems worth sharing filters and
> reporting.
Agreed; at least the core filter would not be needed with this unification.
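As a rough sketch of what such a unified filter might look like (the class and
the host_state attributes below are assumptions, not existing Nova code), one
filter with per-resource over-commit ratios could cover cores and devices alike:

    # Sketch only: a single filter for any countable resource, with an
    # optional over-commit ratio per resource.
    class ConsumableResourceFilter(object):
        """Pass hosts that still have enough of each requested resource."""

        # vCPUs tolerate over-commit; assignable PCI devices do not.
        overcommit_ratio = {"vcpus": 16.0, "gpu": 1.0, "nic-sriov-10G": 1.0}

        def host_passes(self, host_state, requested):
            # 'requested' maps resource name -> count, derived from the
            # instance type extra specs.
            for name, count in requested.items():
                total = host_state.total_resources.get(name, 0)
                used = host_state.used_resources.get(name, 0)
                limit = total * self.overcommit_ratio.get(name, 1.0)
                if used + count > limit:
                    return False
            return True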
Thanks
--jyh
> I guess this has knock-on issues with Ceilometer and friends.
>
> Thanks,
> John
>
> -----Original Message-----
> From: Vladimir Popovski [mailto:vladimir at zadarastorage.com]
> Sent: 01 November 2012 7:30 PM
> To: Vishvananda Ishaya; OpenStack Development Mailing List
> Subject: Re: [openstack-dev] Some idea on device assignment support
>
> Hi All,
>
> In next couple of days we will resume our PCI passthrough/SR-IOV proposal.
> With the help from Cisco folks I'm sure we will be able to propose it pretty
> soon.
>
> I suppose that in the first stage it might be enough to use the instance type's
> extra specs to specify what exactly should be assigned to the instance. I don't
> think that cloud users should have control over what gets attached and when
> (certainly not physical USBs). IMHO, having special instance types (e.g. with X
> PCI devices of type Y, or with a GPU, etc.) will be enough.
>
> We can easily propagate information about device availability to the scheduler
> and make the decision about the proper host at the scheduler level.
>
> Regarding querying HW resources - we implemented it at the host settings level.
> Of course we can add something at the virt driver level, but it will be quite
> different for each installation type.
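As one hedged illustration of that host-level discovery (not existing Nova code,
just a sketch for Linux hosts), the generic part could read PCI information
straight from sysfs:

    import os

    def _read(path, name, default=""):
        """Return the stripped contents of a sysfs attribute, or a default."""
        try:
            with open(os.path.join(path, name)) as f:
                return f.read().strip()
        except IOError:
            return default

    def list_pci_devices(sysfs="/sys/bus/pci/devices"):
        """Enumerate host PCI devices; the dict layout is illustrative only."""
        devices = []
        for addr in os.listdir(sysfs):
            path = os.path.join(sysfs, addr)
            devices.append({
                "address": addr,                      # e.g. "0000:03:00.0"
                "vendor_id": _read(path, "vendor"),   # e.g. "0x8086"
                "product_id": _read(path, "device"),
                # present only for SR-IOV capable devices on recent kernels
                "sriov_totalvfs": _read(path, "sriov_totalvfs", "0"),
            })
        return devices

Anything virt-driver specific (e.g. which devices are already assigned to
guests) would still have to come from the driver itself.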
>
> Regards,
> -Vladimir
>
>
> -----Original Message-----
> From: Vishvananda Ishaya [mailto:vishvananda at gmail.com]
> Sent: Thursday, November 01, 2012 11:22 AM
> To: OpenStack Development Mailing List
> Cc: Ian Wells; vladimir at zadarastorage.com
> Subject: Re: [openstack-dev] Some idea on device assignment support
>
>
> On Nov 1, 2012, at 8:39 AM, heut2008 <heut2008 at gmail.com> wrote:
>
> >
> > Should we create a new table to manage all these devices, or should each
> > nova-compute node manage its own device allocation if there is no new
> > table to record this? Then we would need each kind of passthrough device
> > to have a driver that keeps track of how many resources are free and how
> > many are in use, with this info updated to the scheduler by sending
> > capability info periodically.
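A rough sketch of such a periodic capability update (the key names are purely
hypothetical and do not exist in Nova today) might look like:

    # Hypothetical per-host device inventory that a compute node could publish
    # periodically alongside the capabilities it already reports.
    device_capabilities = {
        "gpu":           {"total": 2, "used": 1, "free": 1},
        "nic-sriov-10G": {"total": 8, "used": 3, "free": 5},
    }
    # A per-device-type driver would refresh these counts, and the compute
    # manager would merge them into the capability dict it sends to the
    # scheduler.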
>
> I would prefer to avoid adding new tables unless we have to. It seems like the
> HostState class in host_manager needs to be able to support arbitrary keys for
> counting resources. Then we could create filter and weighting functions based
> on these arbitrary keys and update the consume method to consume resources
> from these keys based on instance_type_extra_specs.
>
> Perhaps we need some kind of compute_metadata. There is a compute stats
> table which we might be able to abuse for this purpose.
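To make the HostState idea above a bit more concrete, here is a hedged sketch
(the flat stats dict and the "device:" extra-spec prefix are assumptions for
illustration only) of how the consume step could work against arbitrary keys:

    # Sketch only: consume arbitrary per-host resource counts based on
    # instance_type extra specs.
    def consume_from_instance(host_stats, instance_type):
        """Decrement free counts for each device key the flavor requests."""
        extra_specs = instance_type.get("extra_specs", {})
        for key, value in extra_specs.items():
            if not key.startswith("device:"):
                continue
            resource = key[len("device:"):]   # e.g. "gpu" or "nic-sriov-10G"
            host_stats[resource] = host_stats.get(resource, 0) - int(value)

    # Example: a host with 2 free GPUs, after placing a flavor that asks for 1.
    stats = {"gpu": 2}
    consume_from_instance(stats, {"extra_specs": {"device:gpu": "1"}})
    assert stats["gpu"] == 1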
>
> Vish
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>