[openstack-dev] rxtx factor in instance types
Day, Phil
philip.day at hp.com
Fri Feb 1 10:46:23 UTC 2013
Hi Trey,
Thanks for the explanation. I was wondering about expanding this into something the scheduler can take into account - e.g. each host would have a total rxtx value it can support, and a scheduler filter would make sure it doesn't become over-committed. I'm not sure you can really discover the rxtx value from a physical host, so it would probably have to be configured per host (in nova.conf) as a value representing the number of units that host can support (based on NIC configuration, willingness to oversubscribe, etc.).
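Something along these lines is what I have in mind - purely an illustrative sketch, with made-up names (rxtx_units and rxtx_used are not existing Nova fields):

```python
# Illustrative sketch only - not actual Nova code. Assumes each host is
# configured (e.g. in nova.conf) with a hypothetical "rxtx_units" capacity,
# and that the scheduler tracks a running "rxtx_used" total per host.

def host_passes(rxtx_units, rxtx_used, requested_rxtx_factor):
    """Accept the host only if it has enough spare rxtx units."""
    free_units = rxtx_units - rxtx_used
    return requested_rxtx_factor <= free_units
```

A real filter would of course work off the scheduler's host state rather than bare numbers, but the accounting itself would just be this comparison.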
If we did add this to the resources tracked by the scheduler, would that affect your current usage on Xen (aside from you simply being able to not use that filter)?
Cheers,
Phil
From: Trey Morris [mailto:trey.morris at RACKSPACE.COM]
Sent: 31 January 2013 17:50
To: OpenStack Development Mailing List
Cc: Matt Dietz; Day, Phil; Vishvananda Ishaya
Subject: Re: [openstack-dev] rxtx factor in instance types
Phil,
Your use case is similar to how we use it. The idea is that the instance_types rxtx_factor is multiplied by a network's rxtx_base to produce the rxtx_cap, which ends up as a meta characteristic of a vif in the network_info model.
For example, if you have a network with a 1024 kb/s rxtx_base, you can have several different instance types with rxtx_factors of 1.0, 2.0, 4.0, 10.0, etc. When the network_info object is crafted by the network manager, it multiplies the two and stores the result in the meta field of the vif. If you have multiple networks, they can have different rxtx_base values, and this will be reflected in the rxtx_cap of the resulting vifs. At that point, anything with access to the network_info object can grab the value via vif.get_meta('rxtx_cap'). As Vish pointed out, the xen vif driver passes the rxtx_cap to xen in the process of creating vifs on the hypervisor.
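In (hypothetical, simplified) code, the arithmetic looks like this - the function name is made up for illustration, but the multiplication is what the network manager does:

```python
def compute_rxtx_cap(rxtx_base, rxtx_factor):
    """The product is what gets stored in the vif's meta dict in network_info."""
    return rxtx_base * rxtx_factor

# A 1024 kb/s network combined with a 2.0 flavor factor gives a 2048 kb/s cap,
# later retrievable via vif.get_meta('rxtx_cap').
vif_meta = {'rxtx_cap': compute_rxtx_cap(1024, 2.0)}
```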
-trey
On Jan 30, 2013, at 8:32 PM, Vishvananda Ishaya wrote:
Hi Phil,
It is used to generate the rxtx_cap in the network_info cache which is in turn used to generate qos params in xenapi in:
~/os/nova/nova/virt/xenapi/vif.py
I don't believe it is used in the other drivers at all.
tr3buchet or _cerberus_ might be able to add a bit more detail
Vish
On Jan 30, 2013, at 1:43 PM, "Day, Phil" <philip.day at hp.com<mailto:philip.day at hp.com>> wrote:
Hi Folks,
Can anyone point me to where the rxtx factor of an instance type is currently used in the code, please?
I was thinking about the use case of instance types differentiated by their network bandwidth, and how that could be accommodated in a scheduler filter and passed down to the virt layer for cgroup configuration. The rxtx factor seems the obvious starting point - but I was struggling to see how it's already used.
Has anyone else looked at this as an issue?
Cheers,
Phil
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org<mailto:OpenStack-dev at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev