[Openstack-operators] high load because of openvswitch

Juan José Pavlik Salles jjpavlik@gmail.com
Mon Oct 7 03:07:56 UTC 2013


A few hours ago I was checking my compute nodes and noticed they had a load
of 5 even though they were only running a few VMs. I started looking for the
cause and found that quantum-openvswitch-agent was consuming too much CPU
time.

I checked the logs and found lots of entries like these:

root@acelga:~# tail -f /var/log/quantum/openvswitch-agent.log
Exit code: 0
Stdout: '{attached-mac="fa:16:3e:66:8a:cb",
iface-id="b8d3bb55-90bb-4a93-9daf-9d5da52171db", iface-status=active,
vm-uuid="5406eca9-3b99-473d-b4e4-3ac53568a5a8"}\n'
Stderr: ''
2013-10-07 02:58:11    DEBUG [quantum.agent.linux.utils] Running command:
['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl',
'--timeout=2', 'get', 'Interface', 'qvobe75299f-e5', 'external_ids']
2013-10-07 02:58:11    DEBUG [quantum.agent.linux.utils]
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'qvof13c912d-35',
'external_ids']
Exit code: 0
Stdout: '{attached-mac="fa:16:3e:8e:7d:54",
iface-id="f13c912d-3516-4ec2-9d60-5cc01db0b2e3", iface-status=active,
vm-uuid="de1f79cd-7289-40d8-a5cf-acf43c089727"}\n'
Stderr: ''
2013-10-07 02:58:11    DEBUG [quantum.agent.linux.utils] Running command:
['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl',
'--timeout=2', 'get', 'Interface', 'qvof4329543-aa', 'external_ids']
2013-10-07 02:58:12    DEBUG [quantum.agent.linux.utils]
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'qvobe75299f-e5',
'external_ids']
Exit code: 0
Stdout: '{attached-mac="fa:16:3e:4f:92:9f",
iface-id="be75299f-e597-4f37-85b5-e517883bbfeb", iface-status=active,
vm-uuid="dd9783c1-344a-43b7-9cc1-b969c003f838"}\n'
Stderr: ''
2013-10-07 02:58:12    DEBUG [quantum.agent.linux.utils] Running command:
['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl',
'--timeout=2', 'get', 'Interface', 'qvod00a1408-ad', 'external_ids']
2013-10-07 02:58:12    DEBUG [quantum.agent.linux.utils]
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'qvof4329543-aa',
'external_ids']
...

These look like normal interface-check messages, but I only have one
instance running on that compute node:

root@acelga:~# virsh list
setlocale: No such file or directory
 Id    Name                           State
----------------------------------------------------
 1     instance-00000077              running

root@acelga:~#

It has just one network interface attached. So I checked OVS to see if
there was something wrong, and:

root@acelga:~# ovs-vsctl show
bb36ccf6-ba02-48df-970a-4b2b39b12e84
    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-int
        Port "qvoa017fc49-48"
            tag: 1
            Interface "qvoa017fc49-48"
        Port "qvo38b6ba57-81"
            tag: 1
            Interface "qvo38b6ba57-81"
        Port "qvo27d0051a-0d"
            tag: 1
            Interface "qvo27d0051a-0d"
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port "qvof4329543-aa"
            tag: 4095
            Interface "qvof4329543-aa"
        Port "qvo6bf34ce3-25"
            tag: 1
            Interface "qvo6bf34ce3-25"
        Port "qvo0c4c9cd8-2b"
            tag: 1
            Interface "qvo0c4c9cd8-2b"
           ...
        Port "qvo54a38640-30"
            tag: 2
            Interface "qvo54a38640-30"
    ovs_version: "1.4.0+build0"

That is far too many ports considering I'm running just one instance. A few
weeks ago I ran some tests, creating and deleting many instances, so it
could be related to that: maybe the OVS ports weren't actually removed
during the delete operations, which would explain why I have so many of
them. Has anyone seen this behaviour before? How could I cleanly get rid of
them? I'm running most of the services in debug mode because the cloud is
not in production yet.
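One cleanup approach I'm considering (just a sketch, not verified on this
setup): compare the qvo ports on br-int against the tap devices of the
domains libvirt still knows about, and `ovs-vsctl del-port` whatever is left
over. The helper below only computes the set difference on fake data; the
actual ovs-vsctl/virsh plumbing is shown in comments, since the exact
interface naming (tapXXXXXXXX-XX vs qvoXXXXXXXX-XX) is an assumption about
this deployment.

```shell
#!/bin/sh
# stale_ports: print every port name in the first list that is absent from
# the second. Both arguments are newline-separated lists of port names.
stale_ports() {
    printf '%s\n' "$1" | while IFS= read -r p; do
        printf '%s\n' "$2" | grep -qxF "$p" || printf '%s\n' "$p"
    done
}

# On a compute node this could be wired up roughly like this (untested,
# assumes the usual tap<id> / qvo<id> naming convention):
#   ovs=$(ovs-vsctl list-ports br-int | grep '^qvo')
#   live=$(for d in $(virsh list --name); do
#              virsh domiflist "$d" | awk '$1 ~ /^tap/ {sub(/^tap/, "qvo"); print $1}'
#          done)
#   for p in $(stale_ports "$ovs" "$live"); do
#       ovs-vsctl del-port br-int "$p"
#   done

# Small demonstration with port names from the output above:
stale_ports "qvoa017fc49-48
qvof4329543-aa" "qvoa017fc49-48"
# prints: qvof4329543-aa
```

I'd dry-run it first (echo the del-port commands instead of executing them)
before touching br-int on a node that still has a live instance.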

-- 
Pavlik Salles Juan José

