[Openstack] Issue with total_iops_sec VM Quota
Raghu K
rakatti at gmail.com
Mon Dec 1 07:35:00 UTC 2014
I noticed QoS in general is not working. By the way, I am using the latest Icehouse
devstack and testing this on a mapped RBD device. Please advise.
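For context on where these limits are supposed to land: nova translates the
quota:disk_total_iops_sec / quota:disk_total_bytes_sec flavor extra specs into a
libvirt <iotune> element on the guest's disks, which you can check with
`virsh dumpxml <instance>` on the compute host. A sketch of what the domain XML
should contain if the limits were applied (the disk type and target names here
are illustrative and will vary by deployment):

```xml
<!-- Hypothetical libvirt domain XML excerpt; disk/target names vary.
     Nova maps quota:disk_total_iops_sec and quota:disk_total_bytes_sec
     from the flavor extra specs into an <iotune> block like this: -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <total_bytes_sec>62914560</total_bytes_sec>
    <total_iops_sec>99</total_iops_sec>
  </iotune>
</disk>
```

If `virsh dumpxml` shows no <iotune> element on the instance's disks, the
hypervisor was never told to throttle, which would explain unthrottled IOPS.
Note also that an rbd device mapped directly on the host is not managed by the
instance's libvirt domain, so these per-disk limits would not apply to it.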
On Fri, Nov 28, 2014 at 4:23 PM, Raghu K <rakatti at gmail.com> wrote:
> Here is a problem I am facing with quota. I have set total iops VM quota
> to 99.
>
> ubuntu at ceph-perftest-1:~$ nova --os-username admin --os-password admin
> --os-tenant-id 38a100db8cab4c89a9602ef1eb38f893 --os-auth-url
> http://172.30.90.89:5000/v2.0/ flavor-show m1.small
>
> +----------------------------+-------------------------------------------------------------------------------+
> | Property                   | Value                                                                         |
> +----------------------------+-------------------------------------------------------------------------------+
> | OS-FLV-DISABLED:disabled   | False                                                                         |
> | OS-FLV-EXT-DATA:ephemeral  | 0                                                                             |
> | disk                       | 20                                                                            |
> | extra_specs                | {"quota:disk_total_iops_sec": "99", "quota:disk_total_bytes_sec": "62914560"} |
> | id                         | 2                                                                             |
> | name                       | m1.small                                                                      |
> | os-flavor-access:is_public | True                                                                          |
> | ram                        | 2048                                                                          |
> | rxtx_factor                | 1.0                                                                           |
> | swap                       |                                                                               |
> | vcpus                      | 1                                                                             |
> +----------------------------+-------------------------------------------------------------------------------+
>
>
> When I run the fio tool on a VM of the m1.small flavor, I see IOPS well above 99.
> Any idea why this is happening?
>
>
> root at t1:/home/ubuntu# fio randread.config
> rbd: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=2
> fio-2.1.3
> Starting 1 process
> Jobs: 1 (f=1): [r] [11.5% done] [3608KB/0KB/0KB /s] [902/0/0 iops] [eta
> 00m:54s]
>
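As a sanity check, the fio status line quoted above already shows the cap is not
being enforced: 3608 KB/s at 4 KiB blocks is roughly 902 IOPS, about nine times
the 99 IOPS quota. A small sketch of that arithmetic, using only the numbers
that appear in this thread:

```python
# Sanity-check the fio numbers against the flavor quota.
# All values come from the fio output and flavor extra_specs above.
block_size_kb = 4            # fio randread config uses 4K blocks
observed_kb_per_sec = 3608   # [3608KB/0KB/0KB /s] from the fio status line
quota_iops = 99              # quota:disk_total_iops_sec on m1.small

observed_iops = observed_kb_per_sec / block_size_kb  # 3608 / 4 = 902
print(f"observed ~{observed_iops:.0f} IOPS vs quota {quota_iops}")
print(f"exceeds quota by a factor of ~{observed_iops / quota_iops:.1f}")
```

So the limit is not merely slightly loose; it is not taking effect at all,
which matches the earlier observation that QoS in general is not working.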