[Openstack-operators] UDP Buffer Filling
jpetrini at coredial.com
Fri Jul 28 12:50:05 UTC 2017
Thanks for the info. The parameter is missing completely:
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
I came across the blueprint for adding the image property.
Do you know if this feature is available in Mitaka?
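For reference, the queue setting can be checked programmatically from the domain XML rather than by eyeballing it; a minimal Python sketch, assuming the XML has already been captured with `virsh dumpxml` (the sample fragment below is hypothetical):

```python
import xml.etree.ElementTree as ET

def vhost_queues(domain_xml):
    """Return the vhost queue count for each virtio interface (1 if unset)."""
    counts = []
    for iface in ET.fromstring(domain_xml).findall(".//devices/interface"):
        if iface.find("model[@type='virtio']") is None:
            continue  # multiqueue only applies to virtio NICs
        driver = iface.find("driver[@name='vhost']")
        counts.append(int(driver.get("queues", 1)) if driver is not None else 1)
    return counts

# Hypothetical fragment of `virsh dumpxml <instance>` output
sample = """<domain type='kvm'>
  <devices>
    <interface type='bridge'>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
    </interface>
  </devices>
</domain>"""
print(vhost_queues(sample))  # [4]
```

An instance where this returns 1 (or where the driver element is absent) would match the symptom Saverio describes.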
Platforms Engineer // *CoreDial, LLC* // coredial.com
751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
*P:* 215.297.4400 x232 // *F:* 215.297.4401 // *E:*
jpetrini at coredial.com
On Fri, Jul 28, 2017 at 3:59 AM, Saverio Proto <zioproto at gmail.com> wrote:
> Hello John,
> a common problem is packets being dropped as they pass from the
> hypervisor to the instance. There is a bottleneck there.
> Check the 'virsh dumpxml' output of one of the instances that is dropping
> packets. The interface section should look like this:
> <interface type='bridge'>
> <mac address='xx:xx:xx:xx:xx:xx'/>
> <source bridge='qbr5b3fc033-e2'/>
> <target dev='tap5b3fc033-e2'/>
> <model type='virtio'/>
> <driver name='vhost' queues='4'/>
> <alias name='net0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
> How many queues do you have? Having only one, or the parameter
> missing completely, is usually not good.
> In Mitaka, Nova should use one queue for every instance vCPU you
> have. It is worth checking whether this is set correctly in your setup.
> 2017-07-27 17:49 GMT+02:00 John Petrini <jpetrini at coredial.com>:
> > Hi List,
> > We are running Mitaka with VLAN provider networking. We've recently
> > encountered a problem where the UDP receive queue on instances is
> > filling up and we begin dropping packets. Moving instances out of
> > OpenStack onto metal resolves the issue completely.
> > These instances are running Asterisk, which should be pulling these
> > packets off the queue, but it appears to be falling behind no matter
> > the resources we give it.
> > We can't seem to pin down a reason why we would see this behavior in
> > KVM but not on metal. I'm hoping someone on the list might have some
> > insight or ideas.
> > Thank You,
> > John
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
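The receive-queue growth described above can also be watched from inside the guest by parsing /proc/net/udp, where the rx_queue column reports the buffered bytes per socket in hex; a rough sketch (the sample line below uses hypothetical values, with port 0x13C4 = 5060 for SIP):

```python
def rx_queue_bytes(proc_udp_text):
    """Parse /proc/net/udp text and map local UDP port -> rx_queue bytes."""
    queues = {}
    for line in proc_udp_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 5:
            continue
        port = int(fields[1].split(":")[1], 16)  # local_address is hexip:hexport
        rx = int(fields[4].split(":")[1], 16)    # tx_queue:rx_queue, both hex
        queues[port] = rx
    return queues

# Hypothetical excerpt of /proc/net/udp on an affected instance
sample = """  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid
   0: 00000000:13C4 00000000:0000 07 00000000:0001FD40 00:00000000 00000000   105
"""
print(rx_queue_bytes(sample))  # {5060: 130368}
```

Polling this (e.g. in a loop) would show whether the queue drains between bursts or grows monotonically until drops begin.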