[Openstack] Issue with total_iops_sec VM Quota

Raghu K rakatti at gmail.com
Fri Nov 28 22:26:18 UTC 2014


Another thing I wanted to mention: I am running the latest Icehouse code base,
which includes this fix (https://review.openstack.org/#/c/118942/).
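
One thing that might help narrow this down (a sketch, assuming a libvirt/KVM
compute node; the domain name below is hypothetical and would come from
virsh list or the instance's OS-EXT-SRV-ATTR:instance_name attribute in
nova show) is to check whether the throttle actually made it into the
generated libvirt domain XML:

    # On the compute node hosting the instance:
    virsh list --all
    virsh dumpxml instance-0000000a | grep -A4 '<iotune>'

    # If the flavor keys were applied, the instance's <disk> element
    # should contain something like:
    #   <iotune>
    #     <total_bytes_sec>62914560</total_bytes_sec>
    #     <total_iops_sec>99</total_iops_sec>
    #   </iotune>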

On Fri, Nov 28, 2014 at 4:23 PM, Raghu K <rakatti at gmail.com> wrote:

> Here is a problem I am facing with quota. I have set total iops VM quota
> to 99.
>
> ubuntu@ceph-perftest-1:~$ nova --os-username admin --os-password admin --os-tenant-id 38a100db8cab4c89a9602ef1eb38f893 --os-auth-url http://172.30.90.89:5000/v2.0/ flavor-show m1.small
>
> +----------------------------+-------------------------------------------------------------------------------+
> | Property                   | Value                                                                         |
> +----------------------------+-------------------------------------------------------------------------------+
> | OS-FLV-DISABLED:disabled   | False                                                                         |
> | OS-FLV-EXT-DATA:ephemeral  | 0                                                                             |
> | disk                       | 20                                                                            |
> | extra_specs                | {"quota:disk_total_iops_sec": "99", "quota:disk_total_bytes_sec": "62914560"} |
> | id                         | 2                                                                             |
> | name                       | m1.small                                                                      |
> | os-flavor-access:is_public | True                                                                          |
> | ram                        | 2048                                                                          |
> | rxtx_factor                | 1.0                                                                           |
> | swap                       |                                                                               |
> | vcpus                      | 1                                                                             |
> +----------------------------+-------------------------------------------------------------------------------+
>
>
> When I run the fio tool on a VM of the m1.small flavor, I see IOPS well above 99...
> any idea why this is happening?
>
>
> root at t1:/home/ubuntu# fio randread.config
> rbd: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=2
> fio-2.1.3
> Starting 1 process
> Jobs: 1 (f=1): [r] [11.5% done] [3608KB/0KB/0KB /s] [902/0/0 iops] [eta 00m:54s]
>
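
For reference, the extra_specs shown in the flavor-show output above would
typically have been set with something like this (flavor name and values
taken from that output):

    nova flavor-key m1.small set quota:disk_total_iops_sec=99
    nova flavor-key m1.small set quota:disk_total_bytes_sec=62914560

My understanding is that these keys are only picked up when an instance is
created (or resized) after they are set, so an instance booted before the
flavor was updated would not be throttled.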

