[Openstack] Max open files limit for nova-api

Prashant Shetty prashantshetty1985 at gmail.com
Mon Dec 19 17:21:56 UTC 2016


Hi Arne,
Thanks for your reply. Currently all these services are running on an Ubuntu
controller under screen.
Is there any option to set the open files limit for the n-api service in
this case? I am not using systemd in my setup to run these services.
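
What I was considering, as a rough sketch (the values and the pgrep pattern
below are examples only, not taken from my setup): raise the soft limit in
the shell/screen window before nova-api is started, or bump it on the
already-running parent process with prlimit from util-linux:

$ # in the shell that will start nova-api (children inherit this limit)
$ ulimit -n 65536
$ # or adjust the limit of the running parent process in place
$ sudo prlimit --pid "$(pgrep -o -f nova-api)" --nofile=65536:65536

Since prlimit only touches the existing parent and its future children, a
restart from a shell with the higher ulimit is probably the safer option.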

Thanks,
Prashant

On Mon, Dec 19, 2016 at 10:19 PM, Arne Wiebalck <Arne.Wiebalck at cern.ch>
wrote:

> Prashant,
>
> If this is for systemd, how about changing the nova-api unit file?
>
> Something like
>
> —>
> [Service]
> ...
> LimitNOFILE=65536
> <—
>
> should do it.
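>
> If the unit file comes from a package, a drop-in keeps the override across
> upgrades; a minimal sketch (the drop-in path follows the usual systemd
> convention, adjust the unit name to match your installation):
>
> # /etc/systemd/system/nova-api.service.d/limits.conf
> [Service]
> LimitNOFILE=65536
>
> followed by
>
> systemctl daemon-reload
> systemctl restart nova-api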
>
> Cheers,
>  Arne
>
>
>
> On 19 Dec 2016, at 17:23, Prashant Shetty <prashantshetty1985 at gmail.com>
> wrote:
>
> Team,
>
> I have a scale setup and metadata requests from instances seem to fail.
> The main reason for the failures is the "Max open files" limit (1024) on the
> nova-api service.
> Though we have set a max open files limit of 65k on the controller (in
> limits.conf), nova-api always comes up with a 1024 limit, causing the failures.
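>
> For reference, the limits.conf entries look roughly like this (the user name
> and values are approximate, quoted from memory), and my understanding is that
> pam_limits applies them only to login sessions, which may be why a service
> started under screen does not pick them up:
>
> # /etc/security/limits.conf (approximate)
> stack  soft  nofile  65536
> stack  hard  nofile  65536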
>
> Could someone let me know how we can change the max open files limit of the
> nova-api service?
>
> Setup Details:
>
> · Single controller
> · 500 KVM computes
> · Devstack branch: stable/newton
> · We have native metadata and DHCP running on the platform
> · 3750 instances
>
>
> stack at controller:/opt/stack/logs$ ps aux | grep nova-api
> stack 14998 2.2 0.3 272104 121648 pts/8 S+ 09:53 0:14 /usr/bin/python
> /usr/local/bin/nova-api
> stack at controller:/opt/stack/logs$
> stack at controller:/opt/stack/logs$
> stack at controller:/opt/stack/logs$ cat /proc/14998/limits
> Limit                     Soft Limit   Hard Limit   Units
> Max cpu time              unlimited    unlimited    seconds
> Max file size             unlimited    unlimited    bytes
> Max data size             unlimited    unlimited    bytes
> Max stack size            8388608      unlimited    bytes
> Max core file size        unlimited    unlimited    bytes
> Max resident set          unlimited    unlimited    bytes
> Max processes             128611       128611       processes
> Max open files            1024         4096         files
> Max locked memory         65536        65536        bytes
> Max address space         unlimited    unlimited    bytes
> Max file locks            unlimited    unlimited    locks
> Max pending signals       128611       128611       signals
> Max msgqueue size         819200       819200       bytes
> Max nice priority         0            0
> Max realtime priority     0            0
> Max realtime timeout      unlimited    unlimited    us
> stack at controller:/opt/stack/logs$
>
> n-api:
>
> 2016-11-08 18:44:26.168 30069 INFO nova.metadata.wsgi.server
> [req-fb4d729b-a1cd-4df1-aaf8-3f854a739cce - -] (30069) wsgi exited,
> is_accepting=True
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py",
> line 457, in fire_timers
>     timer()
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py",
> line 58, in __call__
>     cb(*args, **kw)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line
> 168, in _do_send
>     waiter.switch(result)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py",
> line 214, in main
>     result = function(*args, **kwargs)
>   File "/opt/stack/nova/nova/utils.py", line 1066, in context_wrapper
>     return func(*args, **kwargs)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line
> 865, in server
>     client_socket = sock.accept()
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio/base.py",
> line 214, in accept
>     res = socket_accept(fd)
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenio/base.py",
> line 56, in socket_accept
>     return descriptor.accept()
>   File "/usr/lib/python2.7/socket.py", line 206, in accept
>     sock, addr = self._sock.accept()
> error: [Errno 24] Too many open files
>
> Thanks,
> Prashant
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
> --
> Arne Wiebalck
> CERN IT
>
>