[Openstack] iops limiting with OpenStack Nova using Ceph/Network Storage

Yaguang Tang heut2008 at gmail.com
Wed Aug 27 11:19:59 UTC 2014


CONF.libvirt.images_default_iops_second isn't in upstream yet. In any
case, I think this is definitely a bug, and we should support setting
rate limits for the rbd disk backend.
I filed a bug to track this issue:
https://bugs.launchpad.net/nova/+bug/1362129
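For reference, the quota-scope parsing that the snippet quoted below performs can be exercised standalone. This is just an illustrative sketch: `DiskTuneInfo` and `apply_quota_extra_specs` are stand-in names, not Nova's real classes or API.

```python
# Sketch of how the libvirt driver maps flavor extra specs of the form
# "quota:<tunable>" onto the guest disk tuning fields. Illustrative
# stand-ins only; Nova's actual objects differ.

TUNE_ITEMS = ['disk_read_bytes_sec', 'disk_read_iops_sec',
              'disk_write_bytes_sec', 'disk_write_iops_sec',
              'disk_total_bytes_sec', 'disk_total_iops_sec']


class DiskTuneInfo(object):
    """Minimal stand-in for the guest disk config's tuning fields."""
    def __init__(self):
        for item in TUNE_ITEMS:
            setattr(self, item, None)


def apply_quota_extra_specs(info, extra_specs):
    """Copy recognized quota:* flavor extra specs onto the disk config."""
    for key, value in extra_specs.items():
        scope = key.split(':')
        if len(scope) > 1 and scope[0] == 'quota' and scope[1] in TUNE_ITEMS:
            setattr(info, scope[1], value)
    return info


info = apply_quota_extra_specs(
    DiskTuneInfo(),
    {'quota:disk_total_iops_sec': '500',
     'quota:disk_total_bytes_sec': '104857600',
     'hw:cpu_policy': 'dedicated'})  # non-quota keys are ignored
# info.disk_total_iops_sec is now '500'
```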



On Wed, Aug 27, 2014 at 7:05 PM, Haomai Wang <haomaiwang at gmail.com> wrote:

> Yes, it's possible, just like a volume from Cinder, but some work
> remains to be done. We need to pass the iops/bw throttle values in via
> config or flavor metadata. If you want it as quickly as possible, you
> can add a hack like this:
>
>         tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
>             'disk_write_bytes_sec', 'disk_write_iops_sec',
>             'disk_total_bytes_sec', 'disk_total_iops_sec']
>         # Note(yaguang): Currently, the only tuning available is Block I/O
>         # throttling for qemu.
>         if self.source_type in ['file', 'block']:
>             for key, value in extra_specs.iteritems():
>                 scope = key.split(':')
>                 if len(scope) > 1 and scope[0] == 'quota':
>                     if scope[1] in tune_items:
>                         setattr(info, scope[1], value)
> +        if not getattr(info, 'disk_total_bytes_sec'):
> +            setattr(info, 'disk_total_bytes_sec',
> +                    CONF.libvirt.images_default_bw_second)
> +        if not getattr(info, 'disk_total_iops_sec'):
> +            setattr(info, 'disk_total_iops_sec',
> +                    CONF.libvirt.images_default_iops_second)
>         return info
>
> On Wed, Aug 27, 2014 at 4:32 PM, Tyler Wilson <kupo at linuxdigital.net> wrote:
> > Hey All,
> >
> > Is it possible to set up an iops/bytes-per-second limitation within
> > nova using libvirt methods? I've found the following links but can't
> > get it to work in my environment:
> >
> > http://ceph.com/planet/openstack-ceph-rbd-and-qos/
> > https://wiki.openstack.org/wiki/InstanceResourceQuota
> >
> > I see that the commit code specifically handles the 'file' and
> > 'block' source types, with no 'network' case:
> >
> >         tune_items = ['disk_read_bytes_sec', 'disk_read_iops_sec',
> >             'disk_write_bytes_sec', 'disk_write_iops_sec',
> >             'disk_total_bytes_sec', 'disk_total_iops_sec']
> >         # Note(yaguang): Currently, the only tuning available is Block I/O
> >         # throttling for qemu.
> >         if self.source_type in ['file', 'block']:
> >             for key, value in extra_specs.iteritems():
> >                 scope = key.split(':')
> >                 if len(scope) > 1 and scope[0] == 'quota':
> >                     if scope[1] in tune_items:
> >                         setattr(info, scope[1], value)
> >         return info
> >
> > Is it possible to limit or establish QoS rules for network storage in
> > nova currently, or only in cinder? My source protocol is rbd, with
> > the qemu driver and raw disk type.
> >
> > _______________________________________________
> > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to     : openstack at lists.openstack.org
> > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> >
>
>
>
> --
> Best Regards,
>
> Wheat
>
>
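For reference, when these quota values are applied, the disk definition that libvirt receives carries an <iotune> element, roughly like the fragment below (the values and the rbd pool/volume name are illustrative, not taken from any real deployment):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='volumes/volume-0001'/>
  <iotune>
    <total_bytes_sec>104857600</total_bytes_sec>
    <total_iops_sec>500</total_iops_sec>
  </iotune>
  <target bus='virtio' dev='vda'/>
</disk>
```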



-- 
Tang Yaguang

