[Openstack] iops limiting with OpenStack Nova using Ceph/Network Storage

Haomai Wang haomaiwang at gmail.com
Fri Aug 29 03:23:49 UTC 2014


        tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
            'disk_write_bytes_sec', 'disk_write_iops_sec',
            'disk_total_bytes_sec', 'disk_total_iops_sec']
        # Note(yaguang): Currently, the only tuning available is Block I/O
        # throttling for qemu.
        if self.source_type in ['file', 'block']:
            for key, value in extra_specs.iteritems():
                scope = key.split(':')
                if len(scope) > 1 and scope[0] == 'quota':
                    if scope[1] in tune_items:
                        setattr(info, scope[1], value)
        if not getattr(info, 'disk_total_bytes_sec'):
            setattr(info, 'disk_total_bytes_sec',
                    CONF.libvirt.images_default_bw_second)
        if not getattr(info, 'disk_total_iops_sec'):
            setattr(info, 'disk_total_iops_sec',
                    CONF.libvirt.images_default_iops_second)
        return info
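
If the throttle takes effect, the <disk> section of the instance's
domain XML should gain an <iotune> element, roughly like this sketch
(values shown are the defaults registered below; the exact layout
depends on your libvirt version):

    <disk type='network' device='disk'>
      <!-- driver/source/target elements omitted -->
      <iotune>
        <total_bytes_sec>31457280</total_bytes_sec>
        <total_iops_sec>100</total_iops_sec>
      </iotune>
    </disk>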

Originally, I assumed you had experience with this.
You need to register the new options in __imagebackend_opts:
@@ -81,8 +81,14 @@ __imagebackend_opts = [
                deprecated_group='DEFAULT',
                deprecated_name='libvirt_images_rbd_ceph_conf'),
     cfg.BoolOpt('images_rbd_clone_image',
-            default=False,
-            help='Nova clone glance image in rbd'),
+                default=False,
+                help='Nova clone glance image in rbd'),
+    cfg.IntOpt('images_default_bw_second',
+               default=30*1024*1024,
+               help='default bw per second'),
+    cfg.IntOpt('images_default_iops_second',
+               default=100,
+               help='default iops per second'),
         ]
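
With the options registered, a per-node default can then be set in
nova.conf under the [libvirt] group (the CONF.libvirt.* references
above imply that group). A minimal sketch, where 31457280 is the
30*1024*1024 default from the patch:

    [libvirt]
    # fallback cap when no quota:disk_total_bytes_sec extra spec is set
    images_default_bw_second = 31457280
    # fallback cap when no quota:disk_total_iops_sec extra spec is set
    images_default_iops_second = 100

Flavor extra specs still take precedence, so something like
"nova flavor-key m1.small set quota:disk_total_iops_sec=1024" should
override these defaults for that flavor.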

On Thu, Aug 28, 2014 at 12:42 AM, Tyler Wilson <kupo at linuxdigital.net> wrote:
> Thanks for the bug report, I'll keep an eye on the progress there. I've
> attempted the patch mentioned and I can't seem to get it to work. Here is my
> block under tune_items:
>
> http://pastebin.mozilla.org/6181109
>
> I've applied this to all nova-compute servers and restarted nova-compute
> after the change, spawned a new instance, and didn't see any iotune blocks
> in the domain's XML. Here is my current extra_specs for m1.small:
> {"quota:write_iops_sec": "1024", "quota:disk_write_bytes_sec": "10240000",
> "quota:vif_outbound_average": "10240", "quota:cpu_period": "10000",
> "quota:disk_total_iops_sec": "10240", "quota:vif_inbound_average": "10240",
> "quota:read_iops_sec": "1024", "quota:disk_write_iops_sec": "1024",
> "quota:disk_read_bytes_sec": "10240000", "quota:cpu_quota": "10000",
> "quota:disk_read_iops_sec": "1024", "quota:disk_total_bytes_sec": "10240"}
>
>
> On Wed, Aug 27, 2014 at 4:19 AM, Yaguang Tang <heut2008 at gmail.com> wrote:
>>
>> CONF.libvirt.images_default_iops_second isn't upstream yet; in any case I
>> think this is definitely a bug, and we should support setting a rate limit
>> for the rbd disk backend.
>> I filed a bug here to track this issue.
>> https://bugs.launchpad.net/nova/+bug/1362129
>>
>>
>>
>> On Wed, Aug 27, 2014 at 7:05 PM, Haomai Wang <haomaiwang at gmail.com> wrote:
>>>
>>> Yes, it's possible, just like a volume from Cinder, but some work still
>>> remains. We need to pass the iops/bw throttle values in via config or
>>> flavor metadata. If you want it as quickly as possible, you can add a
>>> hack like this:
>>>
>>>         tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
>>>             'disk_write_bytes_sec', 'disk_write_iops_sec',
>>>             'disk_total_bytes_sec', 'disk_total_iops_sec']
>>>         # Note(yaguang): Currently, the only tuning available is Block I/O
>>>         # throttling for qemu.
>>>         if self.source_type in ['file', 'block']:
>>>             for key, value in extra_specs.iteritems():
>>>                 scope = key.split(':')
>>>                 if len(scope) > 1 and scope[0] == 'quota':
>>>                     if scope[1] in tune_items:
>>>                         setattr(info, scope[1], value)
>>> +        if not getattr(info, 'disk_total_bytes_sec'):
>>> +            setattr(info, 'disk_total_bytes_sec',
>>> +                    CONF.libvirt.images_default_bw_second)
>>> +        if not getattr(info, 'disk_total_iops_sec'):
>>> +            setattr(info, 'disk_total_iops_sec',
>>> +                    CONF.libvirt.images_default_iops_second)
>>>         return info
>>>
>>> On Wed, Aug 27, 2014 at 4:32 PM, Tyler Wilson <kupo at linuxdigital.net>
>>> wrote:
>>> > Hey All,
>>> >
>>> > Is it possible to set up an iops/bytes-per-second limitation within
>>> > nova using libvirt methods? I've found the following links but can't
>>> > get it to work in my environment:
>>> >
>>> > http://ceph.com/planet/openstack-ceph-rbd-and-qos/
>>> > https://wiki.openstack.org/wiki/InstanceResourceQuota
>>> >
>>> > I see that the commit's code specifically handles the 'file' and
>>> > 'block' source types, with no 'network':
>>> >
>>> >         tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
>>> >             'disk_write_bytes_sec', 'disk_write_iops_sec',
>>> >             'disk_total_bytes_sec', 'disk_total_iops_sec']
>>> >         # Note(yaguang): Currently, the only tuning available is Block I/O
>>> >         # throttling for qemu.
>>> >         if self.source_type in ['file', 'block']:
>>> >             for key, value in extra_specs.iteritems():
>>> >                 scope = key.split(':')
>>> >                 if len(scope) > 1 and scope[0] == 'quota':
>>> >                     if scope[1] in tune_items:
>>> >                         setattr(info, scope[1], value)
>>> >         return info
>>> >
>>> > Is it currently possible to limit or establish QoS rules for network
>>> > storage in nova, or only in cinder? My source protocol is rbd, with the
>>> > qemu driver and raw disk type.
>>> >
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>>
>>> Wheat
>>>
>>
>>
>>
>>
>> --
>> Tang Yaguang
>>
>>
>>
>
>



-- 
Best Regards,

Wheat



