[openstack-dev] [rally] nova boot-and-delete

fdsafdsafd jazeltq at 163.com
Sat Jul 19 04:47:18 UTC 2014


Yes, this is running boot-and-delete with one compute node available. The task JSON is:

{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "m1.medium"
                },
                "image": {
                    "name": "win2008"
                }
            },
            "runner": {
                "type": "constant",
                "times": 50,
                "concurrency": 33
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                }
            }
        }
    ]
}

The output is:

================================================================================
Task 16f3b51a-fa2a-41e8-aa3f-34c449aeccde is finished.
--------------------------------------------------------------------------------

test scenario NovaServers.boot_and_delete_server
args position 0
args values:
{u'args': {u'flavor': u'5a180f8a-51e9-4621-a9f9-7332253e0b32',
           u'image': u'95be464a-ddac-4332-9c14-3c9bc4156c86'},
 u'context': {u'users': {u'concurrent': 30,
                         u'tenants': 1,
                         u'users_per_tenant': 1}},
 u'runner': {u'concurrency': 24, u'times': 50, u'type': u'constant'}}
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| total  | 52.621    | 151.872   | 198.646   | 198.23        | 198.478       | 50.0%   | 50    |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+

HINTS:
* To plot HTML graphics with this data, run:
	rally task plot2html 16f3b51a-fa2a-41e8-aa3f-34c449aeccde --out output.html

* To get raw JSON output of task results, run:
	rally task results 16f3b51a-fa2a-41e8-aa3f-34c449aeccde

The results report is in the attachment named 1.txt.
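As an aside, the task JSON as originally pasted contains trailing commas (after the "image" and "users" objects), which strict JSON parsers reject. A quick sanity check of the task definition before handing it to Rally, using only the standard library:

```python
import json

# The task definition, with the trailing commas removed.
TASK = """
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.medium"},
                "image": {"name": "win2008"}
            },
            "runner": {"type": "constant", "times": 50, "concurrency": 33},
            "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
        }
    ]
}
"""

# json.loads raises ValueError on trailing commas and other syntax
# slips that Rally's own parser will also reject.
task = json.loads(TASK)
scenario = task["NovaServers.boot_and_delete_server"][0]
print(scenario["runner"]["concurrency"])   # -> 33
```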




After some research, I found that when an instance is created, the compute node's resource usage is updated before libvirt spawns the guest, but when an instance is deleted, the usage for that compute node is not updated at the moment libvirt destroys the guest. We have to wait for the resource tracker to update the node's resources in the database. Under concurrency, as soon as one iteration finishes, the next boot begins, but by then the compute node's resources have not yet been updated, so some requests fail with 'No valid host'.
I think this explains why, with 65 concurrency and 65 times, the Rally run is fine, but with 65 concurrency and 200 times, Rally reports some failed requests with 'No valid host'.
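The race described above can be sketched with a toy model (this is not Nova code; the class, its fields, and the single end-of-burst audit are simplified assumptions): deletes free capacity at the hypervisor immediately, but the scheduler only sees usage that the periodic resource-tracker audit has reconciled, so a long enough burst of boot-and-delete iterations exhausts the *tracked* capacity even though the node is actually empty.

```python
# Toy model of the stale-resource race; names are illustrative.
class ComputeNode:
    def __init__(self, capacity):
        self.capacity = capacity   # real instance slots on the node
        self.tracked_used = 0      # usage the scheduler consults
        self.actual_used = 0       # what libvirt actually holds
        self.pending_frees = 0     # deletes not yet seen by the tracker

    def boot(self):
        # The scheduler checks the *tracked* usage, which may be stale.
        if self.tracked_used >= self.capacity:
            return False           # -> a "No valid host" failure
        self.tracked_used += 1
        self.actual_used += 1
        return True

    def delete(self):
        # libvirt destroys the guest immediately...
        self.actual_used -= 1
        # ...but the tracked usage is only corrected later, by the audit.
        self.pending_frees += 1

    def audit(self):
        # Periodic resource-tracker update reconciles tracked usage.
        self.tracked_used -= self.pending_frees
        self.pending_frees = 0

node = ComputeNode(capacity=33)
failures = 0
for _ in range(50):                # 50 boot-and-delete iterations
    if node.boot():
        node.delete()              # freed for real, but not yet tracked
    else:
        failures += 1
print(failures)                    # -> 17: rejected despite free capacity
```

If an audit runs somewhere inside the burst, the tracked usage snaps back to reality and boots succeed again, which is consistent with short runs (65 times at 65 concurrency) passing while longer runs (200 times) keep colliding with the stale view.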



-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: 1.txt
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20140719/9ed1d8dd/attachment-0001.txt>

