From gaosong_1250 at 163.com Wed Aug 1 01:24:42 2018 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 1 Aug 2018 09:24:42 +0800 (CST) Subject: [Openstack] [Horizon] Horizon responds very slowly In-Reply-To: <20180731124026.Horde.CUL9oU0nDcVsPS2themtB7m@webmail.nde.ag> References: <6ED5A4C0760EC04A8DFC2AF5B93AE14EB4F2ABD3@MAIL01.syswin.com> <20180719084707.Horde.nJPNJA-tAWUutMcU6gz_cHh@webmail.nde.ag> <583e07eb.916a.164eb7f9e18.Coremail.gaosong_1250@163.com> <20180731124026.Horde.CUL9oU0nDcVsPS2themtB7m@webmail.nde.ag> Message-ID: <7d84fa75.1b28.164f3164bc9.Coremail.gaosong_1250@163.com> Ocata. Document is rigjht,But our platform is deployed using kolla-ansible, So,we utilize haproxy to handle memcache access through virtual IP At 2018-07-31 20:40:26, "Eugen Block" wrote: >Interesting, the HA guide [2] states that memcached should be >configured with the list of hosts: > >> Access to Memcached is not handled by HAProxy because replicated >> access is currently in an experimental state. >> Instead, OpenStack services must be supplied with the full list of >> hosts running Memcached. > >On the other hand, it would be only one of many incorrect statements >in that guide since I've dealt with it, so maybe this is just outdated >information (although the page has been modified on July 25th). Which >OpenStack version are you deploying? > >Regards, >Eugen > >[2] https://docs.openstack.org/ha-guide/controller-ha-memcached.html > >Zitat von "gao.song" : > >> Further report! >> We finally figure it out. >> It because of the original memcache_server configuration which lead >> to load key from the poweroff controller >> configuration example: >> [cache] >> backend = oslo_cache.memcache_pool >> enabled = True >> memcache_servers = controller1:11211,controller2:11211,controller3:11211 >> After change the server set to contoller_vip:11211,problem solved. >> >> >> >> >> >> >> At 2018-07-24 02:35:09, "Ivan Kolodyazhny" wrote: >> >> Hi, >> >> >> It could be a common issue between horizon and keystone. >> >> >> As a temporary workaround for this, you can apply this [1] patch to >> redirect admin user to the different page. >> >> >> [1] https://review.openstack.org/#/c/577090/ >> >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> >> On Thu, Jul 19, 2018 at 11:47 AM, Eugen Block wrote: >> Hi, >> >> we also had to deal with slow dashboard, in our case it was a >> misconfiguration of memcached [0], [1]. >> >> Check with your configuration and make sure you use oslo.cache. >> >> Hope this helps! >> >> [0] https://bugs.launchpad.net/keystone/+bug/1587777 >> [1] >> https://ask.openstack.org/en/question/102611/how-to-configure-memcache-in-openstack-ha/ >> >> >> Zitat von 高松 : >> >> >> After kill one node of a cluster which consist of three nodes, >> I found that Horizon based on keystone with provider set to fernet >> respondes very slowly. >> Admin login will cost at least 20 senconds. >> And cli verbose command return show making authentication is stuck >> about 5 senconds. >> Any help will be appreciated. >> >> >> >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -------------- next part -------------- An HTML attachment was scrubbed... 
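For reference, the working configuration described above — memcached reached through the HAProxy virtual IP rather than through the individual controllers — comes down to a [cache] section roughly like the following in each service's config file; "controller_vip" is a placeholder for whatever name or address resolves to the deployment's virtual IP:

    [cache]
    backend = oslo_cache.memcache_pool
    enabled = True
    memcache_servers = controller_vip:11211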
URL: From yupeng0921 at gmail.com Wed Aug 1 01:45:12 2018 From: yupeng0921 at gmail.com (peng yu) Date: Tue, 31 Jul 2018 18:45:12 -0700 Subject: [Openstack] [OpenStack][Cinder] Is it valuable to create a distribute block storage backend base on iscsi and lvm? Message-ID: I'm working on a project, which is a distribute block storage system base on iscsi and lvm, I hope it could be a backend of cinder. It exports block devices via iscsi, and aggregate on a host (or computer node). On the host, mirror is used to provide HA, stripe is used to provide high performance, thin provision is used to provide snapshot and thin. Depend on my test, it could provide more than 100k iops for a single volume. I hope to hear some feedbacks about my project, such as whether there are duplicate projects, is it valuable to create such a project or any improvement I could do on it. Here is the document of the project: https://dlvm.readthedocs.io/en/latest/index.html It is in an initialize stage, any feedback is appreciated. Best regards. From zufardhiyaulhaq at gmail.com Thu Aug 2 12:34:05 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Thu, 2 Aug 2018 19:34:05 +0700 Subject: [Openstack] OpenStack neutron error Message-ID: Hi, im trying to install openstack queens from sratch (manual) from openstack documentation. but i have problem in neutron. when im try to verify with `openstack netwrok agent list` there are error `HTTP exception unknown error` when im check the logs from controller in`/var/log/neutron/neutron-server.log` i have this error 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors [-] An error occurred during processing the request: GET /v2.0/extensions HTTP$ Accept: application/json Accept-Encoding: gzip, deflate Connection: keep-alive Content-Type: text/plain Host: controller:9696 User-Agent: python-neutronclient X-Auth-Token: *****: DiscoveryFailure: Could not determine a suitable URL for the plugin 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors Traceback (most recent call last): 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/oslo_middleware/catch_errors.py", lin$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors response = req.get_response(self.application) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1316, in send 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors application, catch_exc_info=False) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1280, in call$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors app_iter = application(self.environ, start_response) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors resp = self.call_func(req, *args, **self.kwargs) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 196, in call_func 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors return self.func(req, *args, **kwargs) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors response = self.process_request(req) 
2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors resp = super(AuthProtocol, self).process_request(request) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors allow_expired=allow_expired) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors Traceback (most recent call last): 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/oslo_middleware/catch_errors.py", lin$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors response = req.get_response(self.application) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1316, in send 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors application, catch_exc_info=False) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1280, in call$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors app_iter = application(self.environ, start_response) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors resp = self.call_func(req, *args, **self.kwargs) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 196, in call_func 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors return self.func(req, *args, **kwargs) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors response = self.process_request(req) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors resp = super(AuthProtocol, self).process_request(request) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors allow_expired=allow_expired) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors data = self.fetch_token(token, **kwargs) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors allow_expired=allow_expired) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors auth_ref = self._request_strategy.verify_token( 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors strategy_class = 
self._get_strategy_class() 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors if self._adapter.get_endpoint(version=klass.AUTH_VERSION): 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 223, $ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors return self.session.get_endpoint(auth or self.auth, **kwargs) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 942, $ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors return auth.get_endpoint(self, **kwargs) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors allow_version_hack=allow_version_hack, **kwargs) 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line$ 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors service_catalog = self.get_access(session).service_catalog 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors raise exceptions.DiscoveryFailure('Could not determine a suitable URL ' 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors DiscoveryFailure: Could not determine a suitable URL for the plugin 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors 2018-08-02 19:21:37.512 2486 INFO neutron.wsgi [-] 10.100.0.70 "GET /v2.0/extensions HTTP/1.1" status: 500 len: 404 time: 0.0035110 -- *Regards,* *Zufar Dhiyaulhaq* -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Thu Aug 2 12:49:24 2018 From: eblock at nde.ag (Eugen Block) Date: Thu, 02 Aug 2018 12:49:24 +0000 Subject: [Openstack] OpenStack neutron error In-Reply-To: Message-ID: <20180802124924.Horde.PQlbDbq-oSDdXdIkGGpnnNb@webmail.nde.ag> Hi, the description in [1] sounds very similar to your problem and seems to be a bug in the docs. Can you check the ports you configured for keystone and which ports you have set in neutron configs? Regards, Eugen [1] https://ask.openstack.org/en/question/114642/neutron-configuration-errot-failed-to-retrieve-extensions-list-from-network-api/ Zitat von Zufar Dhiyaulhaq : > Hi, im trying to install openstack queens from sratch (manual) from > openstack documentation. but i have problem in neutron. 
when im try to > verify with `openstack netwrok agent list` there are error `HTTP exception > unknown error` > > when im check the logs from controller > in`/var/log/neutron/neutron-server.log` i have this error > > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors [-] An > error occurred during processing the request: GET /v2.0/extensions > HTTP$ > Accept: application/json > Accept-Encoding: gzip, deflate > Connection: keep-alive > Content-Type: text/plain > Host: controller:9696 > User-Agent: python-neutronclient > X-Auth-Token: *****: DiscoveryFailure: Could not determine a suitable > URL for the plugin > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > Traceback (most recent call last): > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/oslo_middleware/catch_errors.py", > lin$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > response = req.get_response(self.application) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/webob/request.py", line 1316, in > send > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > application, catch_exc_info=False) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/webob/request.py", line 1280, in > call$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > app_iter = application(self.environ, start_response) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > resp = self.call_func(req, *args, **self.kwargs) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/webob/dec.py", line 196, in > call_func > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > return self.func(req, *args, **kwargs) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > response = self.process_request(req) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > resp = super(AuthProtocol, self).process_request(request) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > allow_expired=allow_expired) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > Traceback (most recent call last): > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/oslo_middleware/catch_errors.py", > lin$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > response = req.get_response(self.application) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/webob/request.py", line 1316, in > send > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > application, catch_exc_info=False) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/webob/request.py", line 1280, in > 
call$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > app_iter = application(self.environ, start_response) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > resp = self.call_func(req, *args, **self.kwargs) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/webob/dec.py", line 196, in > call_func > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > return self.func(req, *args, **kwargs) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > response = self.process_request(req) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > resp = super(AuthProtocol, self).process_request(request) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > allow_expired=allow_expired) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > data = self.fetch_token(token, **kwargs) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > allow_expired=allow_expired) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > auth_ref = self._request_strategy.verify_token( > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > strategy_class = self._get_strategy_class() > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors if > self._adapter.get_endpoint(version=klass.AUTH_VERSION): > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 223, > $ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > return self.session.get_endpoint(auth or self.auth, **kwargs) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 942, > $ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > return auth.get_endpoint(self, **kwargs) > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", > line$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > allow_version_hack=allow_version_hack, **kwargs) > 2018-08-02 19:21:37.511 2486 ERROR 
oslo_middleware.catch_errors File > "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", > line$ > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > service_catalog = self.get_access(session).service_catalog > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > raise exceptions.DiscoveryFailure('Could not determine a suitable URL > ' > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > DiscoveryFailure: Could not determine a suitable URL for the plugin > 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors > 2018-08-02 19:21:37.512 2486 INFO neutron.wsgi [-] 10.100.0.70 "GET > /v2.0/extensions HTTP/1.1" status: 500 len: 404 time: 0.0035110 > > > -- > > *Regards,* > *Zufar Dhiyaulhaq* From zufardhiyaulhaq at gmail.com Thu Aug 2 14:07:53 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Thu, 2 Aug 2018 21:07:53 +0700 Subject: [Openstack] OpenStack neutron error In-Reply-To: <20180802124924.Horde.PQlbDbq-oSDdXdIkGGpnnNb@webmail.nde.ag> References: <20180802124924.Horde.PQlbDbq-oSDdXdIkGGpnnNb@webmail.nde.ag> Message-ID: HI Eugen, Thanks for the solution, i think the docs was wrong. now im fix this issue. thank you. On Thu, Aug 2, 2018 at 7:49 PM, Eugen Block wrote: > Hi, > > the description in [1] sounds very similar to your problem and seems to be > a bug in the docs. Can you check the ports you configured for keystone and > which ports you have set in neutron configs? > > Regards, > Eugen > > [1] https://ask.openstack.org/en/question/114642/neutron-configu > ration-errot-failed-to-retrieve-extensions-list-from-network-api/ > > > Zitat von Zufar Dhiyaulhaq : > > Hi, im trying to install openstack queens from sratch (manual) from >> openstack documentation. but i have problem in neutron. 
when im try to >> verify with `openstack netwrok agent list` there are error `HTTP exception >> unknown error` >> >> when im check the logs from controller >> in`/var/log/neutron/neutron-server.log` i have this error >> >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors [-] An >> error occurred during processing the request: GET /v2.0/extensions >> HTTP$ >> Accept: application/json >> Accept-Encoding: gzip, deflate >> Connection: keep-alive >> Content-Type: text/plain >> Host: controller:9696 >> User-Agent: python-neutronclient >> X-Auth-Token: *****: DiscoveryFailure: Could not determine a suitable >> URL for the plugin >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> Traceback (most recent call last): >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/oslo_middleware/catch_errors.py", >> lin$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> response = req.get_response(self.application) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/webob/request.py", line 1316, in >> send >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> application, catch_exc_info=False) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/webob/request.py", line 1280, in >> call$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> app_iter = application(self.environ, start_response) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> resp = self.call_func(req, *args, **self.kwargs) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/webob/dec.py", line 196, in >> call_func >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> return self.func(req, *args, **kwargs) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> response = self.process_request(req) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> resp = super(AuthProtocol, self).process_request(request) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> allow_expired=allow_expired) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> Traceback (most recent call last): >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/oslo_middleware/catch_errors.py", >> lin$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> response = req.get_response(self.application) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/webob/request.py", line 1316, in >> send >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> application, catch_exc_info=False) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1280, in >> call$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> app_iter = application(self.environ, start_response) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/webob/dec.py", line 131, in __call__ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> resp = self.call_func(req, *args, **self.kwargs) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/webob/dec.py", line 196, in >> call_func >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> return self.func(req, *args, **kwargs) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> response = self.process_request(req) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> resp = super(AuthProtocol, self).process_request(request) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> allow_expired=allow_expired) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> data = self.fetch_token(token, **kwargs) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init_$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> allow_expired=allow_expired) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> auth_ref = self._request_strategy.verify_token( >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> strategy_class = self._get_strategy_class() >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/_identi$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors if >> self._adapter.get_endpoint(version=klass.AUTH_VERSION): >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 223, >> $ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> return self.session.get_endpoint(auth or self.auth, **kwargs) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 942, >> $ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> return auth.get_endpoint(self, **kwargs) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", >> line$ >> 2018-08-02 19:21:37.511 2486 ERROR 
oslo_middleware.catch_errors >> allow_version_hack=allow_version_hack, **kwargs) >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors File >> "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", >> line$ >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> service_catalog = self.get_access(session).service_catalog >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> raise exceptions.DiscoveryFailure('Could not determine a suitable URL >> ' >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> DiscoveryFailure: Could not determine a suitable URL for the plugin >> 2018-08-02 19:21:37.511 2486 ERROR oslo_middleware.catch_errors >> 2018-08-02 19:21:37.512 2486 INFO neutron.wsgi [-] 10.100.0.70 "GET >> /v2.0/extensions HTTP/1.1" status: 500 len: 404 time: 0.0035110 >> >> >> -- >> >> *Regards,* >> *Zufar Dhiyaulhaq* >> > > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > k > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > k > -- *Regards,* *Zufar Dhiyaulhaq* -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Fri Aug 3 18:32:36 2018 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 3 Aug 2018 14:32:36 -0400 Subject: [Openstack] Queens horizon is very slow Message-ID: Folks, I have deployed pike using openstack-ansible on 3 node (HA) and everything was good Horizon was fast enough but last week i have upgraded to queens and found horizon is painful slow, I did command line test and they are ok but GUI is just hard to watch, I have check all basic setting memcache etc.. all looks good, i am not sure how to troubleshoot this issue. 
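For readers hitting the same DiscoveryFailure: the docs problem referenced above appears to boil down to a port mismatch — the Queens keystone guide only sets the identity service up on port 5000, while the neutron guide still pointed auth_url at the old 35357 admin port, so keystonemiddleware cannot discover a usable identity endpoint. A neutron.conf [keystone_authtoken] section along these lines (all values other than the ports are the usual install-guide placeholders, NEUTRON_PASS included) is roughly what the linked workaround amounts to:

    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS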
Just wonder if this is queens issue because pike was running fast enough, is there any good guide line or tool to find out speed of GUI From satish.txt at gmail.com Fri Aug 3 18:39:04 2018 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 3 Aug 2018 14:39:04 -0400 Subject: [Openstack] Queens horizon is very slow In-Reply-To: References: Message-ID: forgot to share some result which is here [root at ostack-infra-02-utility-container-c39f9322 ~]# openstack --timing server list +--------------------------------------+--------+---------+----------------------+-----------------+----------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------+---------+----------------------+-----------------+----------+ | d5e16566-1262-4ac7-ad2b-2ad252472b18 | help-1 | ACTIVE | net-vlan31=10.31.1.5 | cirros-raw | m1.tiny | | c6f3920b-93f3-4a3a-a546-a5b575f8815d | help | SHUTOFF | net-vlan31=10.31.1.4 | Centos-7-x86_64 | m1.small | +--------------------------------------+--------+---------+----------------------+-----------------+----------+ +------------------------------------------------+----------+ | URL | Seconds | +------------------------------------------------+----------+ | GET http://172.28.0.9:5000/v3 | 0.013816 | | POST http://172.28.0.9:5000/v3/auth/tokens | 0.357006 | | POST http://172.28.0.9:5000/v3/auth/tokens | 0.547765 | | GET http://172.28.0.9:8774/v2.1/servers/detail | 0.645702 | | GET http://172.28.0.9:8774/v2.1/flavors/detail | 0.093062 | | Total | 1.657351 | +------------------------------------------------+----------+ On Fri, Aug 3, 2018 at 2:32 PM, Satish Patel wrote: > Folks, > > I have deployed pike using openstack-ansible on 3 node (HA) and > everything was good Horizon was fast enough but last week i have > upgraded to queens and found horizon is painful slow, I did command > line test and they are ok but GUI is just hard to watch, I have check > all basic setting memcache etc.. all looks good, i am not sure how to > troubleshoot this issue. > > Just wonder if this is queens issue because pike was running fast > enough, is there any good guide line or tool to find out speed of GUI From amy at demarco.com Sat Aug 4 01:15:30 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 3 Aug 2018 20:15:30 -0500 Subject: [Openstack] New AUC Criteria Message-ID: *Are you an Active User Contributor (AUC)? Well you may be and not even know it! Historically, AUCs met the following criteria: - Organizers of Official OpenStack User Groups: from the Groups Portal- Active members and contributors to functional teams and/or working groups (currently also manually calculated for WGs not using IRC): from IRC logs- Moderators of any of the operators official meet-up sessions: Currently manually calculated.- Contributors to any repository under the UC governance: from Gerrit- Track chairs for OpenStack summits: from the Track Chair tool- Contributors to Superuser (articles, interviews, user stories, etc.): from the Superuser backend- Active moderators on ask.openstack.org : from Ask OpenStackIn July, the User Committee (UC) voted to add the following criteria to becoming an AUC in order to meet the needs of the evolving OpenStack Community. 
So in addition to the above ways, you can now earn AUC status by meeting the following: - User survey participants who completed a deployment survey- Ops midcycle session moderators- OpenStack Days organizers- SIG Members nominated by SIG leaders- Active Women of OpenStack participants- Active Diversity WG participantsWell that’s great you have met the requirements to become an AUC but what does that mean? AUCs can run for open UC positions and can vote in the elections. AUCs also receive a discounted $300 ticket for OpenStack Summit as well as having the coveted AUC insignia on your badge!* And remember nominations for the User Committee open on Monday, August 6 and end on August, 17 with voting August 20 to August 24. Amy Marrich (spotz) User Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From codeology.lab at gmail.com Mon Aug 6 00:02:18 2018 From: codeology.lab at gmail.com (Cody) Date: Sun, 5 Aug 2018 20:02:18 -0400 Subject: [Openstack] [neutron]Why there is an extra router interface in the qrouter namespace in DVR mode? In-Reply-To: References: Message-ID: Hi folks, I have a DVR-enabled cluster made of a controller, network, and compute node. The *agent_mode* is set as *dvr_snat* on the network node, and *dvr* on the compute node. The north-south traffic works in both SNAT and the floating IP scenarios, but I cannot explain the extra interface shown up in the Horizon dashboard for every tenant router. More details as below: *ip netns* output on the compute node: [image: image.png] *ip addr show* from within the router namespace on the compute node: [image: image.png] *Notice there is only *one* *qr* interface, the gateway, for the tenant subnet. Here is the display from the Horizon dashboard router interface page: [image: image.png] Do you see the extra interface (*da7884d9-2ff1*)? The only place I can find it is from inside the additional routing table in the router namespace: [image: image.png] [image: image.png] What does that extra interface come from and what is it for? Why does it show up in the dashboard, but not in the command line output? Thank you all in advance. Cody -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 8067 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 68296 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 20791 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 14421 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9460 bytes Desc: not available URL: From codeology.lab at gmail.com Mon Aug 6 02:24:02 2018 From: codeology.lab at gmail.com (Cody) Date: Sun, 5 Aug 2018 22:24:02 -0400 Subject: [Openstack] [neutron]Why there is an extra router interface in the qrouter namespace in DVR mode? In-Reply-To: References: Message-ID: I didn't realize that inline screenshots would be scrubbed. Below is a re-edited version of my original post. Cody --- Hi folks, I have a DVR-enabled cluster made of a controller, network, and compute node. 
The agent_mode is set to dvr_snat on the network node and dvr on the compute node. North-south traffic works fine in both SNAT and the floating IP scenarios. The only thing I cannot explain is the extra interface shown in the Horizon dashboard for tenant routers. Here are details: 'ip netns' output on the compute node: [root at compute ~]# ip netns fip-eadd51b1-1b0a-4504-b694-4f54b7b60d3d (id: 1) qrouter-b7570af0-42f6-499d-bbf0-2139a98bc0a3 (id: 0) 'ip addr show' from within the router namespace on the compute node: [root at compute ~]# ip netns exec qrouter-b7570af0-42f6-499d-bbf0-2139a98bc0a3 ip addr show 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: rfp-b7570af0-4: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether ca:08:94:69:b9:6c brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 169.254.106.114/31 scope global rfp-b7570af0-4 valid_lft forever preferred_lft forever inet6 fe80::c808:94ff:fe69:b96c/64 scope link valid_lft forever preferred_lft forever 13: qr-bd3e180e-2d: mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:c4:a8:3c brd ff:ff:ff:ff:ff:ff inet 192.168.0.1/24 brd 192.168.0.255 scope global qr-bd3e180e-2d valid_lft forever preferred_lft forever inet6 fe80::f816:3eff:fec4:a83c/64 scope link valid_lft forever preferred_lft forever Notice there is only one qr interface in the preceding output, but the Horizon dashboard shows an extra interface for the same router: Name Fixed IPs (bd3e180e-2dd6) 192.168.0.1 (da7884d9-2ff1) 192.168.0.5 My questions are where the interface (da7884d9-2ff1) comes from and what it does. Thank you! Cody On Sun, Aug 5, 2018 at 7:55 PM Cody wrote: > > Hi folks, > > I have a DVR-enabled cluster made of a controller, network, and compute node. The agent_mode is set as dvr_snat on the network node, and dvr on the compute node. The north-south traffic works in both SNAT and the floating IP scenarios, but I cannot explain the extra interface shown up in the Horizon dashboard for every tenant router. More details as below: > > ip netns output on the compute node: > > > > ip addr show from within the router namespace on the compute node: > > > > *Notice there is only one qr interface, the gateway, for the tenant subnet. > > Here is the display from the Horizon dashboard router interface page: > > > Do you see the extra interface (da7884d9-2ff1)? The only place I can find it is from inside the additional routing table in the router namespace: > > > > > What does that extra interface come from and what is it for? Why does it show up in the dashboard, but not in the command line output? > > Thank you all in advance. > > Cody > From ed at leafe.com Mon Aug 6 16:52:38 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 6 Aug 2018 11:52:38 -0500 Subject: [Openstack] UC nomination period is now open! Message-ID: <277DC0C9-C34D-47D9-B14F-81E41F136909@leafe.com> As the subject says, the nomination period for the summer[0] User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. 
Nominations are made by sending an email to the user-committee at lists.openstack.org mailing-list, with the subject: “UC candidacy” by August 17, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. [0] Sorry, southern hemisphere people! -- Ed Leafe From d.lake at surrey.ac.uk Tue Aug 7 14:56:10 2018 From: d.lake at surrey.ac.uk (d.lake at surrey.ac.uk) Date: Tue, 7 Aug 2018 14:56:10 +0000 Subject: [Openstack] All-in-One, DPDK with multiple public interefaces Message-ID: Hello I'm trying to build a simple all-in-one system using DevStack with OVS+DPDK with 4 public interfaces. I'm using the local.conf here - https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node I have four physical networks defined here: "ML2_VLAN_RANGES=physnet1:1000:2999,physnet2:1000:2999,physnet3:1000:2999,physnet4:1000:2999" I can see this line as well, but I have no idea how to configure it: "OVS_BRIDGE_MAPPINGS="default:br-" I have DPDK installed and the interfaces are bound to the igbuio driver so they do not have a physical kernel name. Do I need to create an OVS bridge for each external interface prior to stacking? If so, what interface name do I use? Thanks David -------------- next part -------------- An HTML attachment was scrubbed... URL: From codeology.lab at gmail.com Tue Aug 7 14:57:39 2018 From: codeology.lab at gmail.com (Cody) Date: Tue, 7 Aug 2018 10:57:39 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit Message-ID: Hi everyone, I intentionally triggered an error by launching more instances than it is allowed by the 'cpu_allocation_ratio' set on a compute node. When it comes to logs, the only place contained a clue to explain the launch failure was in the nova-conductor.log on a controller node. Why there is no trace in the nova-scheduler.log (or any other logs) for this type or errors? Thank you all! Cody -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Aug 7 15:26:10 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 7 Aug 2018 11:26:10 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: References: Message-ID: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> On 08/07/2018 10:57 AM, Cody wrote: > Hi everyone, > > I intentionally triggered an error by launching more instances than it > is allowed by the 'cpu_allocation_ratio' set on a compute node. When it > comes to logs, the only place contained a clue to explain the launch > failure was in the nova-conductor.log on a controller node. Why there is > no trace in the nova-scheduler.log (or any other logs) for this type or > errors? Because it's not an error. You exceeded the capacity of your resources, that's all. Are you asking why there isn't a way to *check* to see whether a particular request to launch a VM (or multiple VMs) will exceed the capacity of your deployment? Best, -jay From codeology.lab at gmail.com Tue Aug 7 16:35:58 2018 From: codeology.lab at gmail.com (Cody) Date: Tue, 7 Aug 2018 12:35:58 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> References: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> Message-ID: Hi Jay, Thank you for getting back to my question. 
I agree that it is not an error; only a preset limit is reached. I just wonder why this incident only got recorded in the nova-conductor.log, but not in other files such as nova-scheduler.log, which would make more sense to me. :-) By the way, I am using the Queens release. Regards, Cody On Tue, Aug 7, 2018 at 11:38 AM Jay Pipes wrote: > > On 08/07/2018 10:57 AM, Cody wrote: > > Hi everyone, > > > > I intentionally triggered an error by launching more instances than it > > is allowed by the 'cpu_allocation_ratio' set on a compute node. When it > > comes to logs, the only place contained a clue to explain the launch > > failure was in the nova-conductor.log on a controller node. Why there is > > no trace in the nova-scheduler.log (or any other logs) for this type or > > errors? > > Because it's not an error. > > You exceeded the capacity of your resources, that's all. > > Are you asking why there isn't a way to *check* to see whether a > particular request to launch a VM (or multiple VMs) will exceed the > capacity of your deployment? > > Best, > -jay > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From victoria at vmartinezdelacruz.com Tue Aug 7 23:47:28 2018 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Tue, 7 Aug 2018 20:47:28 -0300 Subject: [Openstack] Stepping down as coordinator for the Outreachy internships Message-ID: Hi all, I'm reaching you out to let you know that I'll be stepping down as coordinator for OpenStack next round. I had been contributing to this effort for several rounds now and I believe is a good moment for somebody else to take the lead. You all know how important is Outreachy to me and I'm grateful for all the amazing things I've done as part of the Outreachy program and all the great people I've met in the way. I plan to keep involved with the internships but leave the coordination tasks to somebody else. If you are interested in becoming an Outreachy coordinator, let me know and I can share my experience and provide some guidance. Thanks, Victoria -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Aug 8 02:00:11 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 7 Aug 2018 22:00:11 -0400 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Hi Victoria, Thank you so much for all your wonderful work especially around Outreachy! :) Sincerely, Mohammed On Tue, Aug 7, 2018 at 7:47 PM, Victoria Martínez de la Cruz wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this effort > for several rounds now and I believe is a good moment for somebody else to > take the lead. You all know how important is Outreachy to me and I'm > grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know and > I can share my experience and provide some guidance. 
> > Thanks, > > Victoria > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jpichon at redhat.com Wed Aug 8 08:09:41 2018 From: jpichon at redhat.com (Julie Pichon) Date: Wed, 8 Aug 2018 09:09:41 +0100 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: On 8 August 2018 at 00:47, Victoria Martínez de la Cruz wrote: > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this effort > for several rounds now and I believe is a good moment for somebody else to > take the lead. You all know how important is Outreachy to me and I'm > grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. Thanks for doing such a wonderful job and keeping Outreachy going the last few years! :) Julie > If you are interested in becoming an Outreachy coordinator, let me know and > I can share my experience and provide some guidance. > > Thanks, > > Victoria From jayachander.it at gmail.com Wed Aug 8 09:08:28 2018 From: jayachander.it at gmail.com (Jay See) Date: Wed, 8 Aug 2018 11:08:28 +0200 Subject: [Openstack] Adding new Hard disk to Compute Node Message-ID: Hai, I am installing Openstack Queens on Ubuntu Server. My server has extra hard disk(s) apart from main hard disk where OS(Ubuntu) is running. ( https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html ) As suggested in cinder (above link), I have been trying to add the new hard disk but the other hard disks are not getting added. Can anyone tell me , what am i missing to add these hard disks? Other info : neutron-l3-agent on controller is not running, is it related to this issue ? I am thinking it is not related to this issue. I am new to Openstack. ~ Jayachander. -- P *SAVE PAPER – Please do not print this e-mail unless absolutely necessary.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Wed Aug 8 09:24:44 2018 From: eblock at nde.ag (Eugen Block) Date: Wed, 08 Aug 2018 09:24:44 +0000 Subject: [Openstack] Adding new Hard disk to Compute Node In-Reply-To: Message-ID: <20180808092444.Horde.Lzws_BFycOtsLcWEhEk2UHQ@webmail.nde.ag> Hi, there are a couple of questions rising up: - what do you mean by "disks are not added"? Does the server recognize them? Do you see them in the output of "lsblk"? - Do you already have existing physical volumes for cinder (assuming you deployed cinder with lvm as in the provided link)? - If the system recognizes the new disks and you deployed cinder with lvm you can create a new physical volume and extend your existing volume group to have more space for cinder. Is this a failing step or someting else? - Please describe more precisely what exactly you tried and what exactly fails. The failing neutron-l3-agent shouldn't have to do anything with your disk layout, so it's probably something else. Regards, Eugen Zitat von Jay See : > Hai, > > I am installing Openstack Queens on Ubuntu Server. > > My server has extra hard disk(s) apart from main hard disk where OS(Ubuntu) > is running. 
> > ( > https://docs.openstack.org/cinder/queens/install/cinder-storage-install-ubuntu.html > ) > As suggested in cinder (above link), I have been trying to add the new hard > disk but the other hard disks are not getting added. > > Can anyone tell me , what am i missing to add these hard disks? > > Other info : neutron-l3-agent on controller is not running, is it related > to this issue ? I am thinking it is not related to this issue. > > I am new to Openstack. > > ~ Jayachander. > -- > P *SAVE PAPER – Please do not print this e-mail unless absolutely > necessary.* From jayachander.it at gmail.com Wed Aug 8 10:06:49 2018 From: jayachander.it at gmail.com (Jay See) Date: Wed, 8 Aug 2018 12:06:49 +0200 Subject: [Openstack] Adding new Hard disk to Compute Node In-Reply-To: <20180808092444.Horde.Lzws_BFycOtsLcWEhEk2UHQ@webmail.nde.ag> References: <20180808092444.Horde.Lzws_BFycOtsLcWEhEk2UHQ@webmail.nde.ag> Message-ID: Hai, Thanks for a quick response. - what do you mean by "disks are not added"? Does the server recognize them? Do you see them in the output of "lsblk"? Server does not add them automatically, I tried to mount them. I tried they way they discussed in the page with /dev/sdb only. Other hard disks I have mounted them my self. Yes I can see them in lsblk output as below root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL NAME FSTYPE SIZE MOUNTPOINT LABEL sda 5.5T ├─sda1 vfat 500M ESP ├─sda2 vfat 100M DIAGS └─sda3 vfat 2G OS sdb 5.5T ├─sdb1 5.5T ├─cinder--volumes-cinder--volumes--pool_tmeta 84M │ └─cinder--volumes-cinder--volumes--pool 5.2T └─cinder--volumes-cinder--volumes--pool_tdata 5.2T └─cinder--volumes-cinder--volumes--pool 5.2T sdc 5.5T └─sdc1 xfs 5.5T sdd 5.5T └─sdd1 xfs 5.5T /var/lib/nova/instances/sdd1 sde 5.5T └─sde1 xfs 5.5T /var/lib/nova/instances/sde1 sdf 5.5T └─sdf1 xfs 5.5T /var/lib/nova/instances/sdf1 sdg 5.5T └─sdg1 xfs 5.5T /var/lib/nova/instances/sdg1 sdh 5.5T └─sdh1 xfs 5.5T /var/lib/nova/instances/sdh1 sdi 5.5T └─sdi1 xfs 5.5T /var/lib/nova/instances/sdi1 sdj 5.5T └─sdj1 xfs 5.5T /var/lib/nova/instances/sdj1 sdk 372G ├─sdk1 ext2 487M /boot ├─sdk2 1K └─sdk5 LVM2_member 371.5G ├─h020--vg-root ext4 370.6G / └─h020--vg-swap_1 swap 976M [SWAP] - Do you already have existing physical volumes for cinder (assuming you deployed cinder with lvm as in the provided link)? Yes, I have tried one of the HD (/dev/sdb) - If the system recognizes the new disks and you deployed cinder with lvm you can create a new physical volume and extend your existing volume group to have more space for cinder. Is this a failing step or someting else? System does not recognize the disks automatically, I have manually mounted them or added them to cinder. In Nova-Compute logs I can only see main hard disk shown in the the complete phys_disk, it was supposed to show more phys_disk available atleast 5.8 TB if only /dev/sdb is added as per my understand (May be I am thinking it in the wrong way, I want increase my compute node disk size to launch more VMs) 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] F inal resource view: name=h020 phys_ram=515767MB used_ram=512MB *phys_disk=364GB* used_disk=0GB total_vcpus= 40 used_vcpus=0 pci_stats=[] - Please describe more precisely what exactly you tried and what exactly fails. As explained in the previous point, I want to increase the phys_disk size to use the compute node more efficiently. 
So to add the HD to compute node I am installing cinder on the compute node to add all the HDs. I might be doing something wrong. Thanks and Regards, Jayachander. On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block wrote: > Hi, > > there are a couple of questions rising up: > > - what do you mean by "disks are not added"? Does the server recognize > them? Do you see them in the output of "lsblk"? > - Do you already have existing physical volumes for cinder (assuming you > deployed cinder with lvm as in the provided link)? > - If the system recognizes the new disks and you deployed cinder with lvm > you can create a new physical volume and extend your existing volume group > to have more space for cinder. Is this a failing step or someting else? > - Please describe more precisely what exactly you tried and what exactly > fails. > > The failing neutron-l3-agent shouldn't have to do anything with your disk > layout, so it's probably something else. > > Regards, > Eugen > > > Zitat von Jay See : > > Hai, >> >> I am installing Openstack Queens on Ubuntu Server. >> >> My server has extra hard disk(s) apart from main hard disk where >> OS(Ubuntu) >> is running. >> >> ( >> https://docs.openstack.org/cinder/queens/install/cinder-stor >> age-install-ubuntu.html >> ) >> As suggested in cinder (above link), I have been trying to add the new >> hard >> disk but the other hard disks are not getting added. >> >> Can anyone tell me , what am i missing to add these hard disks? >> >> Other info : neutron-l3-agent on controller is not running, is it related >> to this issue ? I am thinking it is not related to this issue. >> >> I am new to Openstack. >> >> ~ Jayachander. >> -- >> P *SAVE PAPER – Please do not print this e-mail unless absolutely >> necessary.* >> > > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > k > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > k > -- ​ P *SAVE PAPER – Please do not print this e-mail unless absolutely necessary.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Wed Aug 8 11:19:19 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Wed, 8 Aug 2018 20:19:19 +0900 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: References: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> Message-ID: <52425905-8442-20d4-9764-3b21eed7c598@gmail.com> I would think you don't even reach the scheduling stage. Why bother looking for a suitable compute node if you exceeded your quota anyway? The message is in the conductor log because it's the conductor that does most of the work. The others are just slackers (like nova-api) or wait for instructions from the conductor. The above is my guess, of course, but IMHO a very educated one. Bernd. On 8/8/2018 1:35 AM, Cody wrote: > Hi Jay, > > Thank you for getting back to my question. > > I agree that it is not an error; only a preset limit is reached. I > just wonder why this incident only got recorded in the > nova-conductor.log, but not in other files such as nova-scheduler.log, > which would make more sense to me. :-) > > By the way, I am using the Queens release. > > Regards, -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jaypipes at gmail.com Wed Aug 8 12:35:11 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 8 Aug 2018 08:35:11 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: <52425905-8442-20d4-9764-3b21eed7c598@gmail.com> References: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> <52425905-8442-20d4-9764-3b21eed7c598@gmail.com> Message-ID: On 08/08/2018 07:19 AM, Bernd Bausch wrote: > I would think you don't even reach the scheduling stage. Why bother > looking for a suitable compute node if you exceeded your quota anyway? > > The message is in the conductor log because it's the conductor that does > most of the work. The others are just slackers (like nova-api) or wait > for instructions from the conductor. > > The above is my guess, of course, but IMHO a very educated one. > > Bernd. > > On 8/8/2018 1:35 AM, Cody wrote: >> Hi Jay, >> >> Thank you for getting back to my question. >> >> I agree that it is not an error; only a preset limit is reached. I >> just wonder why this incident only got recorded in the >> nova-conductor.log, but not in other files such as nova-scheduler.log, >> which would make more sense to me. :-) I gave up trying to answer this because the original poster did not include any information about an "error" in either the original post [1] or his reply. So I have no idea what got recorded in the nova-conductor log at all. Until I get some details I have no idea how to further answer the question (or even if there *is* a question still?). [1] http://lists.openstack.org/pipermail/openstack/2018-August/046804.html >> By the way, I am using the Queens release. >> >> Regards, > > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From emilien at redhat.com Wed Aug 8 12:43:40 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 8 Aug 2018 08:43:40 -0400 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Thanks Victoria for all your efforts, highly recognized! --- Emilien Macchi On Tue, Aug 7, 2018, 7:48 PM Victoria Martínez de la Cruz, < victoria at vmartinezdelacruz.com> wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for somebody > else to take the lead. You all know how important is Outreachy to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know > and I can share my experience and provide some guidance. > > Thanks, > > Victoria > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mahati.chamarthy at gmail.com Wed Aug 8 12:59:04 2018 From: mahati.chamarthy at gmail.com (Mahati C) Date: Wed, 8 Aug 2018 18:29:04 +0530 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Thank you Victoria for the initiative and the effort all these years! On a related note, I will continue to coordinate OpenStack Outreachy for the next round and if anyone else would like to join the effort, please feel free to contact me or Victoria. Best, Mahati On Wed, Aug 8, 2018 at 5:17 AM, Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for somebody > else to take the lead. You all know how important is Outreachy to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know > and I can share my experience and provide some guidance. > > Thanks, > > Victoria > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Wed Aug 8 13:36:16 2018 From: eblock at nde.ag (Eugen Block) Date: Wed, 08 Aug 2018 13:36:16 +0000 Subject: [Openstack] Adding new Hard disk to Compute Node In-Reply-To: References: <20180808092444.Horde.Lzws_BFycOtsLcWEhEk2UHQ@webmail.nde.ag> Message-ID: <20180808133616.Horde.ZSEwaZpwVtvl3DIN-skF0Wn@webmail.nde.ag> Okay, I'm really not sure if I understand your setup correctly. > Server does not add them automatically, I tried to mount them. I tried they > way they discussed in the page with /dev/sdb only. Other hard disks I have > mounted them my self. Yes I can see them in lsblk output as below What do you mean with "tried with /dev/sdb"? I assume this is a fresh setup and Cinder didn't work yet, am I right? The new disks won't be added automatically to your cinder configuration, if that's what you expected. You'll have to create new physical volumes and then extend the existing VG to use new disks. > In Nova-Compute logs I can only see main hard disk shown in the the > complete phys_disk, it was supposed to show more phys_disk available > atleast 5.8 TB if only /dev/sdb is added as per my understand (May be I am > thinking it in the wrong way, I want increase my compute node disk size to > launch more VMs) If you plan to use cinder volumes as disks for your instances, you don't need much space in /var/lib/nova/instances but more space available for cinder, so you'll need to grow the VG. Regards Zitat von Jay See : > Hai, > > Thanks for a quick response. > > - what do you mean by "disks are not added"? Does the server recognize > them? Do you see them in the output of "lsblk"? > Server does not add them automatically, I tried to mount them. I tried they > way they discussed in the page with /dev/sdb only. Other hard disks I have > mounted them my self. 
Yes I can see them in lsblk output as below > root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL > NAME FSTYPE SIZE > MOUNTPOINT LABEL > sda 5.5T > ├─sda1 vfat 500M > ESP > ├─sda2 vfat 100M > DIAGS > └─sda3 vfat 2G > OS > sdb 5.5T > ├─sdb1 5.5T > ├─cinder--volumes-cinder--volumes--pool_tmeta 84M > │ └─cinder--volumes-cinder--volumes--pool 5.2T > └─cinder--volumes-cinder--volumes--pool_tdata 5.2T > └─cinder--volumes-cinder--volumes--pool 5.2T > sdc 5.5T > └─sdc1 xfs 5.5T > sdd 5.5T > └─sdd1 xfs 5.5T > /var/lib/nova/instances/sdd1 > sde 5.5T > └─sde1 xfs 5.5T > /var/lib/nova/instances/sde1 > sdf 5.5T > └─sdf1 xfs 5.5T > /var/lib/nova/instances/sdf1 > sdg 5.5T > └─sdg1 xfs 5.5T > /var/lib/nova/instances/sdg1 > sdh 5.5T > └─sdh1 xfs 5.5T > /var/lib/nova/instances/sdh1 > sdi 5.5T > └─sdi1 xfs 5.5T > /var/lib/nova/instances/sdi1 > sdj 5.5T > └─sdj1 xfs 5.5T > /var/lib/nova/instances/sdj1 > sdk 372G > ├─sdk1 ext2 487M /boot > ├─sdk2 1K > └─sdk5 LVM2_member 371.5G > ├─h020--vg-root ext4 370.6G / > └─h020--vg-swap_1 swap 976M [SWAP] > > - Do you already have existing physical volumes for cinder (assuming you > deployed cinder with lvm as in the provided link)? > Yes, I have tried one of the HD (/dev/sdb) > > - If the system recognizes the new disks and you deployed cinder with lvm > you can create a new physical volume and extend your existing volume group > to have more space for cinder. Is this a failing step or someting else? > System does not recognize the disks automatically, I have manually mounted > them or added them to cinder. > > In Nova-Compute logs I can only see main hard disk shown in the the > complete phys_disk, it was supposed to show more phys_disk available > atleast 5.8 TB if only /dev/sdb is added as per my understand (May be I am > thinking it in the wrong way, I want increase my compute node disk size to > launch more VMs) > > 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker > [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] F > inal resource view: name=h020 phys_ram=515767MB used_ram=512MB > *phys_disk=364GB* used_disk=0GB total_vcpus= > 40 used_vcpus=0 pci_stats=[] > > - Please describe more precisely what exactly you tried and what exactly > fails. > As explained in the previous point, I want to increase the phys_disk size > to use the compute node more efficiently. So to add the HD to compute node > I am installing cinder on the compute node to add all the HDs. > > I might be doing something wrong. > > Thanks and Regards, > Jayachander. > > On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block wrote: > >> Hi, >> >> there are a couple of questions rising up: >> >> - what do you mean by "disks are not added"? Does the server recognize >> them? Do you see them in the output of "lsblk"? >> - Do you already have existing physical volumes for cinder (assuming you >> deployed cinder with lvm as in the provided link)? >> - If the system recognizes the new disks and you deployed cinder with lvm >> you can create a new physical volume and extend your existing volume group >> to have more space for cinder. Is this a failing step or someting else? >> - Please describe more precisely what exactly you tried and what exactly >> fails. >> >> The failing neutron-l3-agent shouldn't have to do anything with your disk >> layout, so it's probably something else. >> >> Regards, >> Eugen >> >> >> Zitat von Jay See : >> >> Hai, >>> >>> I am installing Openstack Queens on Ubuntu Server. 
>>> >>> My server has extra hard disk(s) apart from main hard disk where >>> OS(Ubuntu) >>> is running. >>> >>> ( >>> https://docs.openstack.org/cinder/queens/install/cinder-stor >>> age-install-ubuntu.html >>> ) >>> As suggested in cinder (above link), I have been trying to add the new >>> hard >>> disk but the other hard disks are not getting added. >>> >>> Can anyone tell me , what am i missing to add these hard disks? >>> >>> Other info : neutron-l3-agent on controller is not running, is it related >>> to this issue ? I am thinking it is not related to this issue. >>> >>> I am new to Openstack. >>> >>> ~ Jayachander. >>> -- >>> P *SAVE PAPER – Please do not print this e-mail unless absolutely >>> necessary.* >>> >> >> >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k >> > > > > -- > ​ > P *SAVE PAPER – Please do not print this e-mail unless absolutely > necessary.* From codeology.lab at gmail.com Wed Aug 8 13:37:25 2018 From: codeology.lab at gmail.com (Cody) Date: Wed, 8 Aug 2018 09:37:25 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: References: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> <52425905-8442-20d4-9764-3b21eed7c598@gmail.com> Message-ID: > On 08/08/2018 07:19 AM, Bernd Bausch wrote: > > I would think you don't even reach the scheduling stage. Why bother > > looking for a suitable compute node if you exceeded your quota anyway? > > > > The message is in the conductor log because it's the conductor that does > > most of the work. The others are just slackers (like nova-api) or wait > > for instructions from the conductor. > > > > The above is my guess, of course, but IMHO a very educated one. > > > > Bernd. Thank you, Bernd. I didn't know the inner workflow in this case. Initially, I thought it was for the scheduler to discover that no more resource was left available, hence I expected to see something from the scheduler log. My understanding now is that the quota get checked in the database prior to the deployment. That would explain why the clue was in the nova-conductor.log, not the nova-scheduler.log. Cody > I gave up trying to answer this because the original poster did not > include any information about an "error" in either the original post [1] > or his reply. > > So I have no idea what got recorded in the nova-conductor log at all. > > Until I get some details I have no idea how to further answer the > question (or even if there *is* a question still?). > > [1] http://lists.openstack.org/pipermail/openstack/2018-August/046804.html Hi Jay, My apologies for omitting the log information. I am attaching it below for the record. Hope the format won't get too messy... >From the nova-conductor.log ### BEGIN ### 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - default default] Failed to schedule instances: NoValidHost_Remote: No valid host was found. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations raise exception.NoValidHost(reason="") NoValidHost: No valid host was found. 
2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback (most recent call last): 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1116, in schedule_and_build_instances 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager instance_uuids, return_alternates=True) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 716, in _schedule_instances 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return_alternates=return_alternates) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 726, in wrapped 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return func(*args, **kwargs) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 53, in select_destinations 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return getattr(self.instance, __name)(*args, **kwargs) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, in select_destinations 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return cctxt.call(ctxt, 'select_destinations', **msg_args) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager retry=self.retry) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager timeout=timeout, retry=retry) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager retry=retry) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 550, in _send 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise result 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found. 
2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback (most recent call last): 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return func(*args, **kwargs) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise exception.NoValidHost(reason="") 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager NoValidHost: No valid host was found. 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:36.328 1648 WARNING nova.scheduler.utils [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - default default] Failed to compute_task_build_instances: No valid host was found. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations raise exception.NoValidHost(reason="") NoValidHost: No valid host was found. : NoValidHost_Remote: No valid host was found. 2018-08-08 09:28:36.331 1648 WARNING nova.scheduler.utils [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - default default] [instance: b466a974-06ba-459b-bc04-2ccb2b3ee720] Setting instance to ERROR state.: NoValidHost_Remote: No valid host was found. ### END ### From jaypipes at gmail.com Wed Aug 8 13:45:57 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 8 Aug 2018 09:45:57 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: References: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> <52425905-8442-20d4-9764-3b21eed7c598@gmail.com> Message-ID: On 08/08/2018 09:37 AM, Cody wrote: >> On 08/08/2018 07:19 AM, Bernd Bausch wrote: >>> I would think you don't even reach the scheduling stage. Why bother >>> looking for a suitable compute node if you exceeded your quota anyway? >>> >>> The message is in the conductor log because it's the conductor that does >>> most of the work. The others are just slackers (like nova-api) or wait >>> for instructions from the conductor. >>> >>> The above is my guess, of course, but IMHO a very educated one. >>> >>> Bernd. > > Thank you, Bernd. I didn't know the inner workflow in this case. > Initially, I thought it was for the scheduler to discover that no more > resource was left available, hence I expected to see something from > the scheduler log. My understanding now is that the quota get checked > in the database prior to the deployment. That would explain why the > clue was in the nova-conductor.log, not the nova-scheduler.log. Quota is checked in the nova-api node, not the nova-conductor. As I said in my previous message, unless you paste what the logs are that you are referring to, it's not possible to know what you are referring to. 
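When the same NoValidHost shows up after the vCPU allocation limit is reached, a
few read-only client commands help tell a quota rejection (raised by nova-api
before scheduling) from a capacity rejection (raised by the scheduler and logged,
as above, by nova-conductor). A minimal sketch with python-openstackclient,
assuming admin credentials are loaded and "compute1" is a placeholder hostname:

openstack limits show --absolute     # maxTotalCores vs. totalCoresUsed for the current project
openstack hypervisor stats show      # vcpus vs. vcpus_used summed over all compute nodes
openstack hypervisor show compute1   # the same figures per host

With nova's default cpu_allocation_ratio of 16.0, a host stops receiving new
instances once vcpus_used reaches sixteen times its physical vcpus, and that is
when tracebacks like the one above start appearing in nova-conductor.log.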
Best, -jay From codeology.lab at gmail.com Wed Aug 8 13:58:38 2018 From: codeology.lab at gmail.com (Cody) Date: Wed, 8 Aug 2018 09:58:38 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: References: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> <52425905-8442-20d4-9764-3b21eed7c598@gmail.com> Message-ID: Hi Jay, Thank you for getting back. I attached the log in my previous reply, but I guess Gmail hided it from you as a quoted message. Here comes again: >From nova-conductor.log ### BEGIN ### 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - default default] Failed to schedule instances: NoValidHost_Remote: No valid host was found. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations raise exception.NoValidHost(reason="") NoValidHost: No valid host was found. 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback (most recent call last): 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 1116, in schedule_and_build_instances 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager instance_uuids, return_alternates=True) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 716, in _schedule_instances 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return_alternates=return_alternates) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 726, in wrapped 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return func(*args, **kwargs) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 53, in select_destinations 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return getattr(self.instance, __name)(*args, **kwargs) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, in select_destinations 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return cctxt.call(ctxt, 'select_destinations', **msg_args) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager retry=self.retry) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager timeout=timeout, retry=retry) 2018-08-08 09:28:35.974 1648 
ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager retry=retry) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 550, in _send 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise result 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager NoValidHost_Remote: No valid host was found. 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback (most recent call last): 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return func(*args, **kwargs) 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise exception.NoValidHost(reason="") 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager NoValidHost: No valid host was found. 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager 2018-08-08 09:28:36.328 1648 WARNING nova.scheduler.utils [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - default default] Failed to compute_task_build_instances: No valid host was found. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 226, in inner return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 139, in select_destinations raise exception.NoValidHost(reason="") NoValidHost: No valid host was found. : NoValidHost_Remote: No valid host was found. 2018-08-08 09:28:36.331 1648 WARNING nova.scheduler.utils [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - default default] [instance: b466a974-06ba-459b-bc04-2ccb2b3ee720] Setting instance to ERROR state.: NoValidHost_Remote: No valid host was found. ### END ### On Wed, Aug 8, 2018 at 9:45 AM Jay Pipes wrote: > > On 08/08/2018 09:37 AM, Cody wrote: > >> On 08/08/2018 07:19 AM, Bernd Bausch wrote: > >>> I would think you don't even reach the scheduling stage. Why bother > >>> looking for a suitable compute node if you exceeded your quota anyway? > >>> > >>> The message is in the conductor log because it's the conductor that does > >>> most of the work. The others are just slackers (like nova-api) or wait > >>> for instructions from the conductor. > >>> > >>> The above is my guess, of course, but IMHO a very educated one. > >>> > >>> Bernd. > > > > Thank you, Bernd. I didn't know the inner workflow in this case. > > Initially, I thought it was for the scheduler to discover that no more > > resource was left available, hence I expected to see something from > > the scheduler log. My understanding now is that the quota get checked > > in the database prior to the deployment. That would explain why the > > clue was in the nova-conductor.log, not the nova-scheduler.log. > > Quota is checked in the nova-api node, not the nova-conductor. 
> > As I said in my previous message, unless you paste what the logs are > that you are referring to, it's not possible to know what you are > referring to. > > Best, > -jay From amy at demarco.com Wed Aug 8 14:48:30 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 8 Aug 2018 09:48:30 -0500 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Victoria, Thank you for everything you've down with the Outreachy program! Amy (spotz) On Tue, Aug 7, 2018 at 6:47 PM, Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for somebody > else to take the lead. You all know how important is Outreachy to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know > and I can share my experience and provide some guidance. > > Thanks, > > Victoria > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Aug 8 15:36:03 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 8 Aug 2018 11:36:03 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: References: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> <52425905-8442-20d4-9764-3b21eed7c598@gmail.com> Message-ID: <635dcd48-e720-e1a6-8b15-1c33829da8ee@gmail.com> So, that is normal operation, actually. The conductor calls the scheduler to find a place for your requested instances. The scheduler responded to the conductor that, sorry, there were no hosts that were able to match the request (I don't know what the details of that request were). And so the conductor set the status of the instance(s) in your request to an ERROR state, since they were not able to be launched. Best, -jay On 08/08/2018 09:58 AM, Cody wrote: > Hi Jay, > > Thank you for getting back. I attached the log in my previous reply, > but I guess Gmail hided it from you as a quoted message. Here comes > again: > > From nova-conductor.log > ### BEGIN ### > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - > default default] Failed to schedule instances: NoValidHost_Remote: No > valid host was found. > Traceback (most recent call last): > > File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > line 226, in inner > return func(*args, **kwargs) > > File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", > line 139, in select_destinations > raise exception.NoValidHost(reason="") > > NoValidHost: No valid host was found. 
> 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback > (most recent call last): > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line > 1116, in schedule_and_build_instances > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > instance_uuids, return_alternates=True) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line > 716, in _schedule_instances > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > return_alternates=return_alternates) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 726, > in wrapped > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return > func(*args, **kwargs) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", > line 53, in select_destinations > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > instance_uuids, return_objects, return_alternates) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", > line 37, in __run_method > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return > getattr(self.instance, __name)(*args, **kwargs) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", > line 42, in select_destinations > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > instance_uuids, return_objects, return_alternates) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, > in select_destinations > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return > cctxt.call(ctxt, 'select_destinations', **msg_args) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line > 174, in call > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager retry=self.retry) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line > 131, in _send > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > timeout=timeout, retry=retry) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 559, in send > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager retry=retry) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 550, in _send > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise result > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > NoValidHost_Remote: No valid host was found. 
> 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback > (most recent call last): > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line > 226, in inner > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return > func(*args, **kwargs) > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line > 139, in select_destinations > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise > exception.NoValidHost(reason="") > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager NoValidHost: > No valid host was found. > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > 2018-08-08 09:28:36.328 1648 WARNING nova.scheduler.utils > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - > default default] Failed to compute_task_build_instances: No valid host > was found. > Traceback (most recent call last): > > File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > line 226, in inner > return func(*args, **kwargs) > > File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", > line 139, in select_destinations > raise exception.NoValidHost(reason="") > > NoValidHost: No valid host was found. > : NoValidHost_Remote: No valid host was found. > 2018-08-08 09:28:36.331 1648 WARNING nova.scheduler.utils > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - > default default] [instance: b466a974-06ba-459b-bc04-2ccb2b3ee720] > Setting instance to ERROR state.: NoValidHost_Remote: No valid host > was found. > ### END ### > On Wed, Aug 8, 2018 at 9:45 AM Jay Pipes wrote: >> >> On 08/08/2018 09:37 AM, Cody wrote: >>>> On 08/08/2018 07:19 AM, Bernd Bausch wrote: >>>>> I would think you don't even reach the scheduling stage. Why bother >>>>> looking for a suitable compute node if you exceeded your quota anyway? >>>>> >>>>> The message is in the conductor log because it's the conductor that does >>>>> most of the work. The others are just slackers (like nova-api) or wait >>>>> for instructions from the conductor. >>>>> >>>>> The above is my guess, of course, but IMHO a very educated one. >>>>> >>>>> Bernd. >>> >>> Thank you, Bernd. I didn't know the inner workflow in this case. >>> Initially, I thought it was for the scheduler to discover that no more >>> resource was left available, hence I expected to see something from >>> the scheduler log. My understanding now is that the quota get checked >>> in the database prior to the deployment. That would explain why the >>> clue was in the nova-conductor.log, not the nova-scheduler.log. >> >> Quota is checked in the nova-api node, not the nova-conductor. >> >> As I said in my previous message, unless you paste what the logs are >> that you are referring to, it's not possible to know what you are >> referring to. 
>> >> Best, >> -jay From jayachander.it at gmail.com Wed Aug 8 17:30:38 2018 From: jayachander.it at gmail.com (Jay See) Date: Wed, 8 Aug 2018 19:30:38 +0200 Subject: [Openstack] Adding new Hard disk to Compute Node In-Reply-To: <20180808133616.Horde.ZSEwaZpwVtvl3DIN-skF0Wn@webmail.nde.ag> References: <20180808092444.Horde.Lzws_BFycOtsLcWEhEk2UHQ@webmail.nde.ag> <20180808133616.Horde.ZSEwaZpwVtvl3DIN-skF0Wn@webmail.nde.ag> Message-ID: Hai Eugen, Thanks for your suggestions and I went back to find more about adding the new HD to VG. I think it was successful. (Logs are at the end of the mail) Followed this link - https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group But still on the nova-compute logs it still shows wrong phys_disk size. Even in the horizon it doesn't get updated with the new HD added to compute node. 2018-08-08 19:22:56.671 3335 INFO nova.compute.resource_tracker [req-14a2b7e2-7703-4a75-9014-180eb26876ff - - - - -] Final resource view: name=h020 phys_ram=515767MB used_ram=512MB *phys_disk=364GB *used_disk=0GB total_vcpus=40 used_vcpus=0 pci_stats=[] I understood they are not supposed to be mounted on /var/lib/nova/instances so removed them now. Thanks Jay. root at h020:~# vgdisplay --- Volume group --- *VG Name h020-vg* System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 3 VG Access read/write VG Status resizable MAX LV 0 Cur LV 2 Open LV 2 Max PV 0 Cur PV 1 Act PV 1 VG Size 371.52 GiB PE Size 4.00 MiB Total PE 95109 * Alloc PE / Size 95105 / 371.50 GiB* * Free PE / Size 4 / 16.00 MiB* VG UUID 4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U root at h020:~# pvcreate */dev/sdb1* Physical volume "/dev/sdb1" successfully created root at h020:~# pvdisplay --- Physical volume --- PV Name /dev/sdk5 VG Name h020-vg PV Size 371.52 GiB / not usable 2.00 MiB Allocatable yes PE Size 4.00 MiB Total PE 95109 Free PE 4 Allocated PE 95105 PV UUID BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR "/dev/sdb1" is a new physical volume of "5.46 TiB" --- NEW Physical volume --- PV Name /dev/sdb1 VG Name PV Size 5.46 TiB Allocatable NO PE Size 0 Total PE 0 Free PE 0 Allocated PE 0 PV UUID CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443 root at h020:~# vgextend /dev/h020-vg /dev/sdb1 Volume group "h020-vg" successfully extended root at h020:~# vgdisplay --- Volume group --- VG Name h020-vg System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 4 VG Access read/write VG Status resizable MAX LV 0 Cur LV 2 Open LV 2 Max PV 0 Cur PV 2 Act PV 2 VG Size 5.82 TiB PE Size 4.00 MiB Total PE 1525900 * Alloc PE / Size 95105 / 371.50 GiB* * Free PE / Size 1430795 / 5.46 TiB* VG UUID 4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U root at h020:~# service nova-compute restart root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL NAME FSTYPE SIZE MOUNTPOINT LABEL sda 5.5T ├─sda1 vfat 500M ESP ├─sda2 vfat 100M DIAGS └─sda3 vfat 2G OS sdb 5.5T └─sdb1 LVM2_member 5.5T sdk 372G ├─sdk1 ext2 487M /boot ├─sdk2 1K └─sdk5 LVM2_member 371.5G ├─h020--vg-root ext4 370.6G / └─h020--vg-swap_1 swap 976M [SWAP] root at h020:~# pvscan PV /dev/sdk5 VG h020-vg lvm2 [371.52 GiB / 16.00 MiB free] PV /dev/sdb1 VG h020-vg lvm2 [5.46 TiB / 5.46 TiB free] Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0 ] root at h020:~# vgs VG #PV #LV #SN Attr VSize VFree h020-vg 2 2 0 wz--n- 5.82t 5.46t root at h020:~# vi /var/log/nova/nova-compute.log root at h020:~# On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block wrote: > Okay, I'm really not sure if I understand your setup correctly. 
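The reason the reported phys_disk does not move after a successful vgextend is
that extending a volume group only enlarges the pool of unallocated extents;
until a logical volume, and a filesystem on it, actually uses those extents, df
sees nothing new, and nova reports what df reports for the instances path. If
the intent really is to grow the root filesystem that currently backs
/var/lib/nova/instances, a minimal sketch (device names taken from the lsblk
output above; ext4 can usually be grown while mounted):

lvextend -l +100%FREE /dev/h020-vg/root   # hand the free extents in the VG to the root LV
resize2fs /dev/h020-vg/root               # grow the ext4 filesystem into the new space

Keeping instance data on a separate logical volume is the cleaner layout, and
either way the new size only appears once nova-compute refreshes its resource view.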
> > Server does not add them automatically, I tried to mount them. I tried they >> way they discussed in the page with /dev/sdb only. Other hard disks I have >> mounted them my self. Yes I can see them in lsblk output as below >> > > What do you mean with "tried with /dev/sdb"? I assume this is a fresh > setup and Cinder didn't work yet, am I right? > The new disks won't be added automatically to your cinder configuration, > if that's what you expected. You'll have to create new physical volumes and > then extend the existing VG to use new disks. > > In Nova-Compute logs I can only see main hard disk shown in the the >> complete phys_disk, it was supposed to show more phys_disk available >> atleast 5.8 TB if only /dev/sdb is added as per my understand (May be I am >> thinking it in the wrong way, I want increase my compute node disk size to >> launch more VMs) >> > > If you plan to use cinder volumes as disks for your instances, you don't > need much space in /var/lib/nova/instances but more space available for > cinder, so you'll need to grow the VG. > > Regards > > > Zitat von Jay See : > > Hai, >> >> Thanks for a quick response. >> >> - what do you mean by "disks are not added"? Does the server recognize >> them? Do you see them in the output of "lsblk"? >> Server does not add them automatically, I tried to mount them. I tried >> they >> way they discussed in the page with /dev/sdb only. Other hard disks I have >> mounted them my self. Yes I can see them in lsblk output as below >> root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL >> NAME FSTYPE SIZE >> MOUNTPOINT LABEL >> sda 5.5T >> ├─sda1 vfat 500M >> ESP >> ├─sda2 vfat 100M >> DIAGS >> └─sda3 vfat 2G >> OS >> sdb 5.5T >> ├─sdb1 5.5T >> ├─cinder--volumes-cinder--volumes--pool_tmeta 84M >> │ └─cinder--volumes-cinder--volumes--pool 5.2T >> └─cinder--volumes-cinder--volumes--pool_tdata 5.2T >> └─cinder--volumes-cinder--volumes--pool 5.2T >> sdc 5.5T >> └─sdc1 xfs 5.5T >> sdd 5.5T >> └─sdd1 xfs 5.5T >> /var/lib/nova/instances/sdd1 >> sde 5.5T >> └─sde1 xfs 5.5T >> /var/lib/nova/instances/sde1 >> sdf 5.5T >> └─sdf1 xfs 5.5T >> /var/lib/nova/instances/sdf1 >> sdg 5.5T >> └─sdg1 xfs 5.5T >> /var/lib/nova/instances/sdg1 >> sdh 5.5T >> └─sdh1 xfs 5.5T >> /var/lib/nova/instances/sdh1 >> sdi 5.5T >> └─sdi1 xfs 5.5T >> /var/lib/nova/instances/sdi1 >> sdj 5.5T >> └─sdj1 xfs 5.5T >> /var/lib/nova/instances/sdj1 >> sdk 372G >> ├─sdk1 ext2 487M /boot >> ├─sdk2 1K >> └─sdk5 LVM2_member 371.5G >> ├─h020--vg-root ext4 370.6G / >> └─h020--vg-swap_1 swap 976M [SWAP] >> >> - Do you already have existing physical volumes for cinder (assuming you >> deployed cinder with lvm as in the provided link)? >> Yes, I have tried one of the HD (/dev/sdb) >> >> - If the system recognizes the new disks and you deployed cinder with lvm >> you can create a new physical volume and extend your existing volume group >> to have more space for cinder. Is this a failing step or someting else? >> System does not recognize the disks automatically, I have manually mounted >> them or added them to cinder. 
>> >> In Nova-Compute logs I can only see main hard disk shown in the the >> complete phys_disk, it was supposed to show more phys_disk available >> atleast 5.8 TB if only /dev/sdb is added as per my understand (May be I am >> thinking it in the wrong way, I want increase my compute node disk size to >> launch more VMs) >> >> 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker >> [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] F >> inal resource view: name=h020 phys_ram=515767MB used_ram=512MB >> *phys_disk=364GB* used_disk=0GB total_vcpus= >> >> 40 used_vcpus=0 pci_stats=[] >> >> - Please describe more precisely what exactly you tried and what exactly >> fails. >> As explained in the previous point, I want to increase the phys_disk size >> to use the compute node more efficiently. So to add the HD to compute node >> I am installing cinder on the compute node to add all the HDs. >> >> I might be doing something wrong. >> >> Thanks and Regards, >> Jayachander. >> >> On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block wrote: >> >> Hi, >>> >>> there are a couple of questions rising up: >>> >>> - what do you mean by "disks are not added"? Does the server recognize >>> them? Do you see them in the output of "lsblk"? >>> - Do you already have existing physical volumes for cinder (assuming you >>> deployed cinder with lvm as in the provided link)? >>> - If the system recognizes the new disks and you deployed cinder with lvm >>> you can create a new physical volume and extend your existing volume >>> group >>> to have more space for cinder. Is this a failing step or someting else? >>> - Please describe more precisely what exactly you tried and what exactly >>> fails. >>> >>> The failing neutron-l3-agent shouldn't have to do anything with your disk >>> layout, so it's probably something else. >>> >>> Regards, >>> Eugen >>> >>> >>> Zitat von Jay See : >>> >>> Hai, >>> >>>> >>>> I am installing Openstack Queens on Ubuntu Server. >>>> >>>> My server has extra hard disk(s) apart from main hard disk where >>>> OS(Ubuntu) >>>> is running. >>>> >>>> ( >>>> https://docs.openstack.org/cinder/queens/install/cinder-stor >>>> age-install-ubuntu.html >>>> ) >>>> As suggested in cinder (above link), I have been trying to add the new >>>> hard >>>> disk but the other hard disks are not getting added. >>>> >>>> Can anyone tell me , what am i missing to add these hard disks? >>>> >>>> Other info : neutron-l3-agent on controller is not running, is it >>>> related >>>> to this issue ? I am thinking it is not related to this issue. >>>> >>>> I am new to Openstack. >>>> >>>> ~ Jayachander. >>>> -- >>>> P *SAVE PAPER – Please do not print this e-mail unless absolutely >>>> necessary.* >>>> >>>> >>> >>> >>> >>> _______________________________________________ >>> Mailing list: http://lists.openstack.org/cgi >>> -bin/mailman/listinfo/openstac >>> k >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : http://lists.openstack.org/cgi >>> -bin/mailman/listinfo/openstac >>> k >>> >>> >> >> >> -- >> ​ >> P *SAVE PAPER – Please do not print this e-mail unless absolutely >> necessary.* >> > > > > -- ​ P *SAVE PAPER – Please do not print this e-mail unless absolutely necessary.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From codeology.lab at gmail.com Wed Aug 8 18:00:00 2018 From: codeology.lab at gmail.com (Cody) Date: Wed, 8 Aug 2018 14:00:00 -0400 Subject: [Openstack] [nova] Log files on exceeding cpu allocation limit In-Reply-To: <635dcd48-e720-e1a6-8b15-1c33829da8ee@gmail.com> References: <1b2ee776-404a-54c8-0ed8-5a32d9a22015@gmail.com> <52425905-8442-20d4-9764-3b21eed7c598@gmail.com> <635dcd48-e720-e1a6-8b15-1c33829da8ee@gmail.com> Message-ID: Got it! Thank you, Jay! - Cody On Wed, Aug 8, 2018 at 11:36 AM Jay Pipes wrote: > > So, that is normal operation, actually. The conductor calls the > scheduler to find a place for your requested instances. The scheduler > responded to the conductor that, sorry, there were no hosts that were > able to match the request (I don't know what the details of that request > were). > > And so the conductor set the status of the instance(s) in your request > to an ERROR state, since they were not able to be launched. > > Best, > -jay > > On 08/08/2018 09:58 AM, Cody wrote: > > Hi Jay, > > > > Thank you for getting back. I attached the log in my previous reply, > > but I guess Gmail hided it from you as a quoted message. Here comes > > again: > > > > From nova-conductor.log > > ### BEGIN ### > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 > > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - > > default default] Failed to schedule instances: NoValidHost_Remote: No > > valid host was found. > > Traceback (most recent call last): > > > > File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > > line 226, in inner > > return func(*args, **kwargs) > > > > File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", > > line 139, in select_destinations > > raise exception.NoValidHost(reason="") > > > > NoValidHost: No valid host was found. 
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback > > (most recent call last): > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line > > 1116, in schedule_and_build_instances > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > instance_uuids, return_alternates=True) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line > > 716, in _schedule_instances > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > return_alternates=return_alternates) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 726, > > in wrapped > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return > > func(*args, **kwargs) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", > > line 53, in select_destinations > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > instance_uuids, return_objects, return_alternates) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", > > line 37, in __run_method > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return > > getattr(self.instance, __name)(*args, **kwargs) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", > > line 42, in select_destinations > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > instance_uuids, return_objects, return_alternates) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 158, > > in select_destinations > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return > > cctxt.call(ctxt, 'select_destinations', **msg_args) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line > > 174, in call > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager retry=self.retry) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line > > 131, in _send > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > timeout=timeout, retry=retry) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", > > line 559, in send > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager retry=retry) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", > > line 550, in _send > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise result > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > NoValidHost_Remote: No valid host was found. 
> > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager Traceback > > (most recent call last): > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line > > 226, in inner > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager return > > func(*args, **kwargs) > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager File > > "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line > > 139, in select_destinations > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager raise > > exception.NoValidHost(reason="") > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager NoValidHost: > > No valid host was found. > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > 2018-08-08 09:28:35.974 1648 ERROR nova.conductor.manager > > 2018-08-08 09:28:36.328 1648 WARNING nova.scheduler.utils > > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 > > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - > > default default] Failed to compute_task_build_instances: No valid host > > was found. > > Traceback (most recent call last): > > > > File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > > line 226, in inner > > return func(*args, **kwargs) > > > > File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", > > line 139, in select_destinations > > raise exception.NoValidHost(reason="") > > > > NoValidHost: No valid host was found. > > : NoValidHost_Remote: No valid host was found. > > 2018-08-08 09:28:36.331 1648 WARNING nova.scheduler.utils > > [req-ef0d8ea1-e801-483e-b913-9148a6ac5d90 > > 2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 - > > default default] [instance: b466a974-06ba-459b-bc04-2ccb2b3ee720] > > Setting instance to ERROR state.: NoValidHost_Remote: No valid host > > was found. > > ### END ### > > On Wed, Aug 8, 2018 at 9:45 AM Jay Pipes wrote: > >> > >> On 08/08/2018 09:37 AM, Cody wrote: > >>>> On 08/08/2018 07:19 AM, Bernd Bausch wrote: > >>>>> I would think you don't even reach the scheduling stage. Why bother > >>>>> looking for a suitable compute node if you exceeded your quota anyway? > >>>>> > >>>>> The message is in the conductor log because it's the conductor that does > >>>>> most of the work. The others are just slackers (like nova-api) or wait > >>>>> for instructions from the conductor. > >>>>> > >>>>> The above is my guess, of course, but IMHO a very educated one. > >>>>> > >>>>> Bernd. > >>> > >>> Thank you, Bernd. I didn't know the inner workflow in this case. > >>> Initially, I thought it was for the scheduler to discover that no more > >>> resource was left available, hence I expected to see something from > >>> the scheduler log. My understanding now is that the quota get checked > >>> in the database prior to the deployment. That would explain why the > >>> clue was in the nova-conductor.log, not the nova-scheduler.log. > >> > >> Quota is checked in the nova-api node, not the nova-conductor. > >> > >> As I said in my previous message, unless you paste what the logs are > >> that you are referring to, it's not possible to know what you are > >> referring to. 
> >> > >> Best, > >> -jay From berndbausch at gmail.com Thu Aug 9 00:37:14 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 9 Aug 2018 09:37:14 +0900 Subject: [Openstack] Adding new Hard disk to Compute Node In-Reply-To: References: <20180808092444.Horde.Lzws_BFycOtsLcWEhEk2UHQ@webmail.nde.ag> <20180808133616.Horde.ZSEwaZpwVtvl3DIN-skF0Wn@webmail.nde.ag> Message-ID: <574e1679-a46a-48ab-c5d2-9e0253007962@gmail.com> Your node uses logical volume /h020--vg-root/ as its root filesystem. This logical volume has a size of 370GB: # lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL NAME                FSTYPE        SIZE MOUNTPOINT LABEL (...) └─sdk5              LVM2_member 371.5G *  ├─h020--vg-root   ext4        370.6G /*   └─h020--vg-swap_1 swap          976M [SWAP] Now you created another physical volume, //dev/sdb1/, and added it to volume group /h020-vg/. This increases the size of the *volume group*, but not the size of the *logical volume*. If you want to provide more space to instances' ephemeral storage, you could: * increase the size of root volume /h020--vg-root/ using the /lvextend/ command, then increase the size of the filesystem on it. I believe that this requires a reboot, since it's the root filesystem. or * create another logical volume, e.g. lvcreate -L1000GB -n lv-instances h020-vg for a 1000GB logical volume, and mount it under //var/lib/nova/instances/: mount /dev/h020-vg/lv-instances /var/lib/nova/instances (before mounting, create a filesystem on /lv-instances/ and transfer the data from //var/lib/nova/instances/ to the new filesystem. Also, don't forget to persist the mount by adding it to //etc/fstab/) The second option is by far better, in my opinion, as you should separate operating system files from OpenStack data. You say that you are new to OpenStack. That's fine, but you seem to be lacking the fundamentals of Linux system management as well. You can't learn OpenStack without a certain level of Linux skills. At least learn about LVM (it's not that hard) and filesystems. You will also need to have networking fundamentals and Linux networking tools under your belt. Good luck! Bernd Bausch On 8/9/2018 2:30 AM, Jay See wrote: > Hai Eugen, > > Thanks for your suggestions and I went back to find more about adding > the new HD to VG. I think it was successful. (Logs are at the end of > the mail) > > Followed this link > - https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group > > But still on the nova-compute logs it still shows wrong phys_disk > size. Even in the horizon it doesn't get updated with the new HD added > to compute node. > > 2018-08-08 19:22:56.671 3335 INFO nova.compute.resource_tracker > [req-14a2b7e2-7703-4a75-9014-180eb26876ff - - - - -] Final resource > view: name=h020 phys_ram=515767MB used_ram=512MB > *phys_disk=364GB *used_disk=0GB total_vcpus=40 used_vcpus=0 pci_stats=[] > > I understood they are not supposed to be mounted > on /var/lib/nova/instances so removed them now. > > Thanks > Jay. 
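The second option described above can be spelled out end to end. This is a
sketch only, reusing the lv-instances name and the 1000G size from the example
and assuming the node runs no instances yet (used_disk=0GB in the quoted log),
so the copy step is cheap; adjust sizes and the stop/start of the service to
your situation:

service nova-compute stop                        # keep the instances directory quiet during the copy
lvcreate -L 1000G -n lv-instances h020-vg        # new LV carved from the space added to the VG
mkfs.ext4 /dev/h020-vg/lv-instances              # filesystem for instance data
mount /dev/h020-vg/lv-instances /mnt             # temporary mount point for the copy
rsync -a /var/lib/nova/instances/ /mnt/          # preserve ownership, permissions, timestamps
umount /mnt
echo '/dev/h020-vg/lv-instances /var/lib/nova/instances ext4 defaults 0 2' >> /etc/fstab
mount /var/lib/nova/instances
chown nova:nova /var/lib/nova/instances          # the root of the fresh filesystem is owned by root
service nova-compute start

After that, df -h /var/lib/nova/instances and the phys_disk value in
nova-compute.log should agree on the new size.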
> > > root at h020:~# vgdisplay >   --- Volume group --- >   *VG Name               h020-vg* >   System ID >   Format                lvm2 >   Metadata Areas        1 >   Metadata Sequence No  3 >   VG Access             read/write >   VG Status             resizable >   MAX LV                0 >   Cur LV                2 >   Open LV               2 >   Max PV                0 >   Cur PV                1 >   Act PV                1 >   VG Size               371.52 GiB >   PE Size               4.00 MiB >   Total PE              95109 > *  Alloc PE / Size       95105 / 371.50 GiB* > *  Free  PE / Size       4 / 16.00 MiB* >   VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U > > root at h020:~# pvcreate */dev/sdb1* >   Physical volume "/dev/sdb1" successfully created > root at h020:~# pvdisplay >   --- Physical volume --- >   PV Name               /dev/sdk5 >   VG Name               h020-vg >   PV Size               371.52 GiB / not usable 2.00 MiB >   Allocatable           yes >   PE Size               4.00 MiB >   Total PE              95109 >   Free PE               4 >   Allocated PE          95105 >   PV UUID               BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR > >   "/dev/sdb1" is a new physical volume of "5.46 TiB" >   --- NEW Physical volume --- >   PV Name               /dev/sdb1 >   VG Name >   PV Size               5.46 TiB >   Allocatable           NO >   PE Size               0 >   Total PE              0 >   Free PE               0 >   Allocated PE          0 >   PV UUID               CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443 > > root at h020:~# vgextend /dev/h020-vg /dev/sdb1 >   Volume group "h020-vg" successfully extended > root at h020:~# vgdisplay >   --- Volume group --- >   VG Name               h020-vg >   System ID >   Format                lvm2 >   Metadata Areas        2 >   Metadata Sequence No  4 >   VG Access             read/write >   VG Status             resizable >   MAX LV                0 >   Cur LV                2 >   Open LV               2 >   Max PV                0 >   Cur PV                2 >   Act PV                2 >   VG Size               5.82 TiB >   PE Size               4.00 MiB >   Total PE              1525900 > *  Alloc PE / Size       95105 / 371.50 GiB* > *  Free  PE / Size       1430795 / 5.46 TiB* >   VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U > > root at h020:~# service nova-compute restart > root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL > NAME                FSTYPE        SIZE MOUNTPOINT LABEL > sda                               5.5T > ├─sda1              vfat          500M            ESP > ├─sda2              vfat          100M            DIAGS > └─sda3              vfat            2G            OS > sdb                               5.5T > └─sdb1              LVM2_member   5.5T > sdk                               372G > ├─sdk1              ext2          487M /boot > ├─sdk2                              1K > └─sdk5              LVM2_member 371.5G >   ├─h020--vg-root   ext4        370.6G / >   └─h020--vg-swap_1 swap          976M [SWAP] > root at h020:~# pvscan >   PV /dev/sdk5   VG h020-vg         lvm2 [371.52 GiB / 16.00 MiB free] >   PV /dev/sdb1   VG h020-vg         lvm2 [5.46 TiB / 5.46 TiB free] >   Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0   ] > root at h020:~# vgs >   VG      #PV #LV #SN Attr   VSize VFree >   h020-vg   2   2   0 wz--n- 5.82t 5.46t > root at h020:~# vi /var/log/nova/nova-compute.log > root at h020:~#  > > > On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block > 
wrote: > > Okay, I'm really not sure if I understand your setup correctly. > > Server does not add them automatically, I tried to mount them. > I tried they > way they discussed in the page with /dev/sdb only. Other hard > disks I have > mounted them my self. Yes I can see them in lsblk output as below > > > What do you mean with "tried with /dev/sdb"? I assume this is a > fresh setup and Cinder didn't work yet, am I right? > The new disks won't be added automatically to your cinder > configuration, if that's what you expected. You'll have to create > new physical volumes and then extend the existing VG to use new disks. > > In Nova-Compute logs I can only see main hard disk shown in > the the > complete phys_disk, it was supposed to show more  phys_disk > available > atleast 5.8 TB if only /dev/sdb is added as per my understand > (May be I am > thinking it in the wrong way, I want increase my compute node > disk size to > launch more VMs) > > > If you plan to use cinder volumes as disks for your instances, you > don't need much space in /var/lib/nova/instances but more space > available for cinder, so you'll need to grow the VG. > > Regards > > > Zitat von Jay See >: > > Hai, > > Thanks for a quick response. > > - what do you mean by "disks are not added"? Does the server > recognize > them? Do you see them in the output of "lsblk"? > Server does not add them automatically, I tried to mount them. > I tried they > way they discussed in the page with /dev/sdb only. Other hard > disks I have > mounted them my self. Yes I can see them in lsblk output as below > root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL > NAME                                          FSTYPE        SIZE > MOUNTPOINT                   LABEL > sda                                                         5.5T > ├─sda1                                        vfat          500M >                   ESP > ├─sda2                                        vfat          100M >                   DIAGS > └─sda3                                        vfat            2G >                   OS > sdb                                                         5.5T > ├─sdb1                                                      5.5T > ├─cinder--volumes-cinder--volumes--pool_tmeta                84M > │ └─cinder--volumes-cinder--volumes--pool                   5.2T > └─cinder--volumes-cinder--volumes--pool_tdata               5.2T >   └─cinder--volumes-cinder--volumes--pool                   5.2T > sdc                                                         5.5T > └─sdc1                                        xfs           5.5T > sdd                                                         5.5T > └─sdd1                                        xfs           5.5T > /var/lib/nova/instances/sdd1 > sde                                                         5.5T > └─sde1                                        xfs           5.5T > /var/lib/nova/instances/sde1 > sdf                                                         5.5T > └─sdf1                                        xfs           5.5T > /var/lib/nova/instances/sdf1 > sdg                                                         5.5T > └─sdg1                                        xfs           5.5T > /var/lib/nova/instances/sdg1 > sdh                                                         5.5T > └─sdh1                                        xfs           5.5T > /var/lib/nova/instances/sdh1 > sdi                                                         5.5T > └─sdi1                              
          xfs           5.5T > /var/lib/nova/instances/sdi1 > sdj                                                         5.5T > └─sdj1                                        xfs           5.5T > /var/lib/nova/instances/sdj1 > sdk                                                         372G > ├─sdk1                                        ext2          > 487M /boot > ├─sdk2                                                        1K > └─sdk5                                        LVM2_member 371.5G >   ├─h020--vg-root                             ext4        370.6G / >   └─h020--vg-swap_1                           swap          > 976M [SWAP] > > - Do you already have existing physical volumes for cinder > (assuming you > deployed cinder with lvm as in the provided link)? > Yes, I have tried one of the HD (/dev/sdb) > > - If the system recognizes the new disks and you deployed > cinder with lvm > you can create a new physical volume and extend your existing > volume group > to have more space for cinder. Is this a failing step or > someting else? > System does not recognize the disks automatically, I have > manually mounted > them or added them to cinder. > > In Nova-Compute logs I can only see main hard disk shown in > the the > complete phys_disk, it was supposed to show more  phys_disk > available > atleast 5.8 TB if only /dev/sdb is added as per my understand > (May be I am > thinking it in the wrong way, I want increase my compute node > disk size to > launch more VMs) > > 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker > [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] F > inal resource view: name=h020 phys_ram=515767MB used_ram=512MB > *phys_disk=364GB* used_disk=0GB total_vcpus= > > 40 used_vcpus=0 pci_stats=[] > > - Please describe more precisely what exactly you tried and > what exactly > fails. > As explained in the previous point, I want to increase the  > phys_disk size > to use the compute node more efficiently. So to add the HD to > compute node > I am installing cinder on the compute node to add all the HDs. > > I might be doing something wrong. > > Thanks and Regards, > Jayachander. > > On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block > wrote: > > Hi, > > there are a couple of questions rising up: > > - what do you mean by "disks are not added"? Does the > server recognize > them? Do you see them in the output of "lsblk"? > - Do you already have existing physical volumes for cinder > (assuming you > deployed cinder with lvm as in the provided link)? > - If the system recognizes the new disks and you deployed > cinder with lvm > you can create a new physical volume and extend your > existing volume group > to have more space for cinder. Is this a failing step or > someting else? > - Please describe more precisely what exactly you tried > and what exactly > fails. > > The failing neutron-l3-agent shouldn't have to do anything > with your disk > layout, so it's probably something else. > > Regards, > Eugen > > > Zitat von Jay See >: > > Hai, > > > I am installing Openstack Queens on Ubuntu Server. > > My server has extra hard disk(s) apart from main hard > disk where > OS(Ubuntu) > is running. > > ( > https://docs.openstack.org/cinder/queens/install/cinder-stor > > age-install-ubuntu.html > ) > As suggested in cinder (above link), I have been > trying to add the new > hard > disk but the other hard disks are not getting added. > > Can anyone tell me , what am i missing to add these > hard disks? 
> > Other info : neutron-l3-agent on controller is not > running, is it related > to this issue ? I am thinking it is not related to > this issue. > > I am new to Openstack. > > ~ Jayachander. > -- > P  *SAVE PAPER – Please do not print this e-mail > unless absolutely > necessary.* > > > > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > > k > Post to     : openstack at lists.openstack.org > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac > > k > > > > > -- > ​ > P  *SAVE PAPER – Please do not print this e-mail unless absolutely > necessary.* > > > > > > > > -- > ​ > P  *SAVE PAPER – Please do not print this e-mail unless absolutely > necessary.* > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From eblock at nde.ag Thu Aug 9 07:05:22 2018 From: eblock at nde.ag (Eugen Block) Date: Thu, 09 Aug 2018 07:05:22 +0000 Subject: [Openstack] Adding new Hard disk to Compute Node In-Reply-To: <574e1679-a46a-48ab-c5d2-9e0253007962@gmail.com> References: <20180808092444.Horde.Lzws_BFycOtsLcWEhEk2UHQ@webmail.nde.ag> <20180808133616.Horde.ZSEwaZpwVtvl3DIN-skF0Wn@webmail.nde.ag> <574e1679-a46a-48ab-c5d2-9e0253007962@gmail.com> Message-ID: <20180809070522.Horde.grvR7G2FFSzvbJi6wE0aXIM@webmail.nde.ag> Maybe I should point out more clearly that there are several ways of providing disk space for your instances. If you choose file based storage for your instances (e.g. ephemeral disks as qcow images), you'll need a lot of space in /var/lib/nova/instances as ephemeral storage. If you delete an instance its disk is also gone. Then there's cinder that can provide persistant storage to your instances or for additional volumes to existing VMs. If you delete an instance its disk will not be deleted (if you choose so). Cinder can be configured with different backends, e.g. LVM or Ceph. The short description in [1] scratches only the top of this but maybe this helps understanding the basics. For now you can ignore the HA references. So in conclusion you'll need to make a choice (for now) how to provide disk space for your instances (ephemeral or persistant). You'll see "phys_disk" grow if you provide more space to /var/lib/nova/instances, e.g. we use Ceph as backend and have /var/lib/nova/instances mounted on shared storage which gives us 22 TB of space: Final resource view: name=compute2 phys_ram=64395MB used_ram=68048MB phys_disk=22877GB used_disk=490GB If you use cinder with LVM these statistics will differ, of course. I hope this clears it up a little bit. Regards [1] https://docs.openstack.org/ha-guide/storage-ha-backend.html Zitat von Bernd Bausch : > Your node uses logical volume /h020--vg-root/ as its root filesystem. > This logical volume has a size of 370GB: > > # lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL > NAME                FSTYPE        SIZE MOUNTPOINT LABEL > (...) 
> └─sdk5              LVM2_member 371.5G > *  ├─h020--vg-root   ext4        370.6G /* >   └─h020--vg-swap_1 swap          976M [SWAP] > > Now you created another physical volume, //dev/sdb1/, and added it to > volume group /h020-vg/. This increases the size of the *volume group*, > but not the size of the *logical volume*. > > If you want to provide more space to instances' ephemeral storage, you > could: > > * increase the size of root volume /h020--vg-root/ using the > /lvextend/ command, then increase the size of the filesystem on it. > I believe that this requires a reboot, since it's the root filesystem. > > or > > * create another logical volume, e.g. lvcreate -L1000GB -n > lv-instances h020-vg for a 1000GB logical volume, and mount it under > //var/lib/nova/instances/: mount /dev/h020-vg/lv-instances > /var/lib/nova/instances > (before mounting, create a filesystem on /lv-instances/ and transfer > the data from //var/lib/nova/instances/ to the new filesystem. Also, > don't forget to persist the mount by adding it to //etc/fstab/) > > The second option is by far better, in my opinion, as you should > separate operating system files from OpenStack data. > > You say that you are new to OpenStack. That's fine, but you seem to be > lacking the fundamentals of Linux system management as well. You can't > learn OpenStack without a certain level of Linux skills. At least learn > about LVM (it's not that hard) and filesystems. You will also need to > have networking fundamentals and Linux networking tools under your belt. > > Good luck! > > Bernd Bausch > > On 8/9/2018 2:30 AM, Jay See wrote: >> Hai Eugen, >> >> Thanks for your suggestions and I went back to find more about adding >> the new HD to VG. I think it was successful. (Logs are at the end of >> the mail) >> >> Followed this link >> - https://www.howtoforge.com/logical-volume-manager-how-can-i-extend-a-volume-group >> >> But still on the nova-compute logs it still shows wrong phys_disk >> size. Even in the horizon it doesn't get updated with the new HD added >> to compute node. >> >> 2018-08-08 19:22:56.671 3335 INFO nova.compute.resource_tracker >> [req-14a2b7e2-7703-4a75-9014-180eb26876ff - - - - -] Final resource >> view: name=h020 phys_ram=515767MB used_ram=512MB >> *phys_disk=364GB *used_disk=0GB total_vcpus=40 used_vcpus=0 pci_stats=[] >> >> I understood they are not supposed to be mounted >> on /var/lib/nova/instances so removed them now. >> >> Thanks >> Jay. 
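To make the cinder-with-LVM option above a bit more concrete: on the storage node the Queens install guide linked earlier in this thread essentially wires up a stanza like the one below in /etc/cinder/cinder.conf (double-check the exact option names against that page; cinder-volumes is the volume group already visible in the lsblk output quoted in this thread):

[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

With that in place, cinder's available capacity follows the size of the volume group, while the phys_disk figure in the nova-compute log only reflects the filesystem backing /var/lib/nova/instances.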
>> >> >> root at h020:~# vgdisplay >>   --- Volume group --- >>   *VG Name               h020-vg* >>   System ID >>   Format                lvm2 >>   Metadata Areas        1 >>   Metadata Sequence No  3 >>   VG Access             read/write >>   VG Status             resizable >>   MAX LV                0 >>   Cur LV                2 >>   Open LV               2 >>   Max PV                0 >>   Cur PV                1 >>   Act PV                1 >>   VG Size               371.52 GiB >>   PE Size               4.00 MiB >>   Total PE              95109 >> *  Alloc PE / Size       95105 / 371.50 GiB* >> *  Free  PE / Size       4 / 16.00 MiB* >>   VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U >> >> root at h020:~# pvcreate */dev/sdb1* >>   Physical volume "/dev/sdb1" successfully created >> root at h020:~# pvdisplay >>   --- Physical volume --- >>   PV Name               /dev/sdk5 >>   VG Name               h020-vg >>   PV Size               371.52 GiB / not usable 2.00 MiB >>   Allocatable           yes >>   PE Size               4.00 MiB >>   Total PE              95109 >>   Free PE               4 >>   Allocated PE          95105 >>   PV UUID               BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR >> >>   "/dev/sdb1" is a new physical volume of "5.46 TiB" >>   --- NEW Physical volume --- >>   PV Name               /dev/sdb1 >>   VG Name >>   PV Size               5.46 TiB >>   Allocatable           NO >>   PE Size               0 >>   Total PE              0 >>   Free PE               0 >>   Allocated PE          0 >>   PV UUID               CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443 >> >> root at h020:~# vgextend /dev/h020-vg /dev/sdb1 >>   Volume group "h020-vg" successfully extended >> root at h020:~# vgdisplay >>   --- Volume group --- >>   VG Name               h020-vg >>   System ID >>   Format                lvm2 >>   Metadata Areas        2 >>   Metadata Sequence No  4 >>   VG Access             read/write >>   VG Status             resizable >>   MAX LV                0 >>   Cur LV                2 >>   Open LV               2 >>   Max PV                0 >>   Cur PV                2 >>   Act PV                2 >>   VG Size               5.82 TiB >>   PE Size               4.00 MiB >>   Total PE              1525900 >> *  Alloc PE / Size       95105 / 371.50 GiB* >> *  Free  PE / Size       1430795 / 5.46 TiB* >>   VG UUID               4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U >> >> root at h020:~# service nova-compute restart >> root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL >> NAME                FSTYPE        SIZE MOUNTPOINT LABEL >> sda                               5.5T >> ├─sda1              vfat          500M            ESP >> ├─sda2              vfat          100M            DIAGS >> └─sda3              vfat            2G            OS >> sdb                               5.5T >> └─sdb1              LVM2_member   5.5T >> sdk                               372G >> ├─sdk1              ext2          487M /boot >> ├─sdk2                              1K >> └─sdk5              LVM2_member 371.5G >>   ├─h020--vg-root   ext4        370.6G / >>   └─h020--vg-swap_1 swap          976M [SWAP] >> root at h020:~# pvscan >>   PV /dev/sdk5   VG h020-vg         lvm2 [371.52 GiB / 16.00 MiB free] >>   PV /dev/sdb1   VG h020-vg         lvm2 [5.46 TiB / 5.46 TiB free] >>   Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0   ] >> root at h020:~# vgs >>   VG      #PV #LV #SN Attr   VSize VFree >>   h020-vg   2   2   0 wz--n- 5.82t 5.46t >> root at h020:~# vi 
/var/log/nova/nova-compute.log >> root at h020:~#  >> >> >> On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block > > wrote: >> >> Okay, I'm really not sure if I understand your setup correctly. >> >> Server does not add them automatically, I tried to mount them. >> I tried they >> way they discussed in the page with /dev/sdb only. Other hard >> disks I have >> mounted them my self. Yes I can see them in lsblk output as below >> >> >> What do you mean with "tried with /dev/sdb"? I assume this is a >> fresh setup and Cinder didn't work yet, am I right? >> The new disks won't be added automatically to your cinder >> configuration, if that's what you expected. You'll have to create >> new physical volumes and then extend the existing VG to use new disks. >> >> In Nova-Compute logs I can only see main hard disk shown in >> the the >> complete phys_disk, it was supposed to show more  phys_disk >> available >> atleast 5.8 TB if only /dev/sdb is added as per my understand >> (May be I am >> thinking it in the wrong way, I want increase my compute node >> disk size to >> launch more VMs) >> >> >> If you plan to use cinder volumes as disks for your instances, you >> don't need much space in /var/lib/nova/instances but more space >> available for cinder, so you'll need to grow the VG. >> >> Regards >> >> >> Zitat von Jay See > >: >> >> Hai, >> >> Thanks for a quick response. >> >> - what do you mean by "disks are not added"? Does the server >> recognize >> them? Do you see them in the output of "lsblk"? >> Server does not add them automatically, I tried to mount them. >> I tried they >> way they discussed in the page with /dev/sdb only. Other hard >> disks I have >> mounted them my self. Yes I can see them in lsblk output as below >> root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL >> NAME                                          FSTYPE        SIZE >> MOUNTPOINT                   LABEL >> sda                                                         5.5T >> ├─sda1                                        vfat          500M >>                   ESP >> ├─sda2                                        vfat          100M >>                   DIAGS >> └─sda3                                        vfat            2G >>                   OS >> sdb                                                         5.5T >> ├─sdb1                                                      5.5T >> ├─cinder--volumes-cinder--volumes--pool_tmeta                84M >> │ └─cinder--volumes-cinder--volumes--pool                   5.2T >> └─cinder--volumes-cinder--volumes--pool_tdata               5.2T >>   └─cinder--volumes-cinder--volumes--pool                   5.2T >> sdc                                                         5.5T >> └─sdc1                                        xfs           5.5T >> sdd                                                         5.5T >> └─sdd1                                        xfs           5.5T >> /var/lib/nova/instances/sdd1 >> sde                                                         5.5T >> └─sde1                                        xfs           5.5T >> /var/lib/nova/instances/sde1 >> sdf                                                         5.5T >> └─sdf1                                        xfs           5.5T >> /var/lib/nova/instances/sdf1 >> sdg                                                         5.5T >> └─sdg1                                        xfs           5.5T >> /var/lib/nova/instances/sdg1 >> sdh                                                         5.5T >> └─sdh1     
                                   xfs           5.5T >> /var/lib/nova/instances/sdh1 >> sdi                                                         5.5T >> └─sdi1                                        xfs           5.5T >> /var/lib/nova/instances/sdi1 >> sdj                                                         5.5T >> └─sdj1                                        xfs           5.5T >> /var/lib/nova/instances/sdj1 >> sdk                                                         372G >> ├─sdk1                                        ext2          >> 487M /boot >> ├─sdk2                                                        1K >> └─sdk5                                        LVM2_member 371.5G >>   ├─h020--vg-root                             ext4        370.6G / >>   └─h020--vg-swap_1                           swap          >> 976M [SWAP] >> >> - Do you already have existing physical volumes for cinder >> (assuming you >> deployed cinder with lvm as in the provided link)? >> Yes, I have tried one of the HD (/dev/sdb) >> >> - If the system recognizes the new disks and you deployed >> cinder with lvm >> you can create a new physical volume and extend your existing >> volume group >> to have more space for cinder. Is this a failing step or >> someting else? >> System does not recognize the disks automatically, I have >> manually mounted >> them or added them to cinder. >> >> In Nova-Compute logs I can only see main hard disk shown in >> the the >> complete phys_disk, it was supposed to show more  phys_disk >> available >> atleast 5.8 TB if only /dev/sdb is added as per my understand >> (May be I am >> thinking it in the wrong way, I want increase my compute node >> disk size to >> launch more VMs) >> >> 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker >> [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] F >> inal resource view: name=h020 phys_ram=515767MB used_ram=512MB >> *phys_disk=364GB* used_disk=0GB total_vcpus= >> >> 40 used_vcpus=0 pci_stats=[] >> >> - Please describe more precisely what exactly you tried and >> what exactly >> fails. >> As explained in the previous point, I want to increase the  >> phys_disk size >> to use the compute node more efficiently. So to add the HD to >> compute node >> I am installing cinder on the compute node to add all the HDs. >> >> I might be doing something wrong. >> >> Thanks and Regards, >> Jayachander. >> >> On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block > > wrote: >> >> Hi, >> >> there are a couple of questions rising up: >> >> - what do you mean by "disks are not added"? Does the >> server recognize >> them? Do you see them in the output of "lsblk"? >> - Do you already have existing physical volumes for cinder >> (assuming you >> deployed cinder with lvm as in the provided link)? >> - If the system recognizes the new disks and you deployed >> cinder with lvm >> you can create a new physical volume and extend your >> existing volume group >> to have more space for cinder. Is this a failing step or >> someting else? >> - Please describe more precisely what exactly you tried >> and what exactly >> fails. >> >> The failing neutron-l3-agent shouldn't have to do anything >> with your disk >> layout, so it's probably something else. >> >> Regards, >> Eugen >> >> >> Zitat von Jay See > >: >> >> Hai, >> >> >> I am installing Openstack Queens on Ubuntu Server. >> >> My server has extra hard disk(s) apart from main hard >> disk where >> OS(Ubuntu) >> is running. 
>> >> ( >> https://docs.openstack.org/cinder/queens/install/cinder-stor >> >> >> age-install-ubuntu.html >> ) >> As suggested in cinder (above link), I have been >> trying to add the new >> hard >> disk but the other hard disks are not getting added. >> >> Can anyone tell me , what am i missing to add these >> hard disks? >> >> Other info : neutron-l3-agent on controller is not >> running, is it related >> to this issue ? I am thinking it is not related to >> this issue. >> >> I am new to Openstack. >> >> ~ Jayachander. >> -- >> P  *SAVE PAPER – Please do not print this e-mail >> unless absolutely >> necessary.* >> >> >> >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> >> k >> Post to     : openstack at lists.openstack.org >> >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> >> k >> >> >> >> >> -- >> ​ >> P  *SAVE PAPER – Please do not print this e-mail unless absolutely >> necessary.* >> >> >> >> >> >> >> >> -- >> ​ >> P  *SAVE PAPER – Please do not print this e-mail unless absolutely >> necessary.* >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From majopela at redhat.com Thu Aug 9 08:39:18 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Thu, 9 Aug 2018 10:39:18 +0200 Subject: [Openstack] Queens horizon is very slow In-Reply-To: References: Message-ID: Hi Satish, Can you try listing the resources (instances, and ports) as non-admin user and share your results? We posted a patch recently because of an issue in oslo.policy which could potentially make things better . 
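(The review link follows right below.) The timing comparison being asked for would look something like this, run once with admin credentials and once with a plain member account; the openrc file names are only examples:

source admin-openrc
openstack --timing server list
openstack --timing port list

source demo-openrc      # any non-admin member of the same project
openstack --timing server list
openstack --timing port list

If the non-admin runs are dramatically slower, that points at the policy-evaluation issue the patch below addresses.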
https://review.openstack.org/#/q/909a1ea3a7aceb6e0637058b9c6a53d14043d6d1 On 3 August 2018 at 20:47:58, Satish Patel (satish.txt at gmail.com) wrote: forgot to share some result which is here [root at ostack-infra-02-utility-container-c39f9322 ~]# openstack --timing server list +--------------------------------------+--------+---------+----------------------+-----------------+----------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------+---------+----------------------+-----------------+----------+ | d5e16566-1262-4ac7-ad2b-2ad252472b18 | help-1 | ACTIVE | net-vlan31=10.31.1.5 | cirros-raw | m1.tiny | | c6f3920b-93f3-4a3a-a546-a5b575f8815d | help | SHUTOFF | net-vlan31=10.31.1.4 | Centos-7-x86_64 | m1.small | +--------------------------------------+--------+---------+----------------------+-----------------+----------+ +------------------------------------------------+----------+ | URL | Seconds | +------------------------------------------------+----------+ | GET http://172.28.0.9:5000/v3 | 0.013816 | | POST http://172.28.0.9:5000/v3/auth/tokens | 0.357006 | | POST http://172.28.0.9:5000/v3/auth/tokens | 0.547765 | | GET http://172.28.0.9:8774/v2.1/servers/detail | 0.645702 | | GET http://172.28.0.9:8774/v2.1/flavors/detail | 0.093062 | | Total | 1.657351 | +------------------------------------------------+----------+ On Fri, Aug 3, 2018 at 2:32 PM, Satish Patel wrote: > Folks, > > I have deployed pike using openstack-ansible on 3 node (HA) and > everything was good Horizon was fast enough but last week i have > upgraded to queens and found horizon is painful slow, I did command > line test and they are ok but GUI is just hard to watch, I have check > all basic setting memcache etc.. all looks good, i am not sure how to > troubleshoot this issue. > > Just wonder if this is queens issue because pike was running fast > enough, is there any good guide line or tool to find out speed of GUI _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcioprado at marcioprado.eti.br Thu Aug 9 14:28:27 2018 From: marcioprado at marcioprado.eti.br (Marcio Prado) Date: Thu, 09 Aug 2018 11:28:27 -0300 Subject: [Openstack] Error Neutron: RTNETLINK answers: File exists In-Reply-To: <99001d52739f470ba2abf7d951600436@marcioprado.eti.br> References: <20180727080536.Horde.GzHVtxaoCBgtq_alRDaYS0d@webmail.nde.ag> <99001d52739f470ba2abf7d951600436@marcioprado.eti.br> Message-ID: <857825ade79845e5e31f9bdded3f1733@marcioprado.eti.br> Guys, I figured out part of the problem. The problem is a wireless TP-link router with the OpenWRT firmware configured with bridge. When I connect this wireless router to the switch with the OpenStack cloud servers, the Linux bridge agent starts to make an error and I lose access to the VMs. It is not duplicate IP or DHCP. Does anyone have any idea what it is? Em 27-07-2018 08:32, Marcio Prado escreveu: > Thanks for the help Eugen, > > This log is from the linuxbridge of the controller node. Compute nodes > are not logging errors. 
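Coming back to the TP-Link/OpenWRT bridge mentioned above: when an external device bridges traffic back into the provider network, the usual suspects are a layer-2 loop or MAC addresses flapping between ports on the brq* bridges, which could explain the agent errors without any duplicate IP or DHCP. A few checks worth running on the affected node (the brq name below is a placeholder derived from the network UUID, and eno3 is the provider interface shown in the linuxbridge-agent log further down):

brctl show                              # list the brq* bridges and their member ports
brctl showstp brqXXXXXXXX-XX            # see whether spanning tree is enabled on the bridge
bridge fdb show br brqXXXXXXXX-XX       # watch for MAC entries jumping between ports
tcpdump -eni eno3 'arp or stp'          # look for looped or duplicated frames coming back in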
> > Follows the output of the "openstack network agent list" > > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > | ID | Agent Type | Host > | Availability Zone | Alive | State | Binary | > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > | 590f5a6d-379b-4e8d-87ec-f1060cecf230 | Linux bridge agent | > controller | None | True | UP | > neutron-linuxbridge-agent | > | 88fb87c9-4c03-4faa-8286-95be3586fc94 | DHCP agent | > controller | nova | True | UP | neutron-dhcp-agent > | > | b982382e-438c-46a9-8d4e-d58d554150fd | Linux bridge agent | compute1 > | None | True | UP | neutron-linuxbridge-agent | > | c7a9ba41-1fae-46cd-b61f-30bcacb0a4e8 | L3 agent | > controller | nova | True | UP | neutron-l3-agent > | > | c9a1ea4b-2d5d-4bda-9849-cd6e302a2917 | Metadata agent | > controller | None | True | UP | > neutron-metadata-agent | > | e690d4b9-9285-4ddd-a87a-f28ea99d9a73 | Linux bridge agent | compute3 > | None | False | UP | neutron-linuxbridge-agent | > | fdd8f615-f5d6-4100-826e-59f8270df715 | Linux bridge agent | compute2 > | None | False | UP | neutron-linuxbridge-agent | > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > > compute2 and compute3 are turned off intentionally. > > Log compute1 > > /var/log/neutron/neutron-linuxbridge-agent.log > > 2018-07-27 07:43:57.242 1895 INFO neutron.common.config [-] > /usr/bin/neutron-linuxbridge-agent version 10.0.0 > 2018-07-27 07:43:57.243 1895 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent > [-] Interface mappings: {'provider': 'eno3'} > 2018-07-27 07:43:57.243 1895 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent > [-] Bridge mappings: {} > 2018-07-27 07:44:00.954 1895 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent > [-] Agent initialized successfully, now running... > 2018-07-27 07:44:01.582 1895 INFO > neutron.plugins.ml2.drivers.agent._common_agent > [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] RPC agent_id: > lb525400d52f59 > 2018-07-27 07:44:01.589 1895 INFO > neutron.agent.agent_extensions_manager > [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Loaded agent > extensions: [] > 2018-07-27 07:44:01.716 1895 INFO > neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent > Agent has just been revived. Doing a full sync. > 2018-07-27 07:44:01.778 1895 INFO > neutron.plugins.ml2.drivers.agent._common_agent > [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Linux bridge > agent Agent RPC Daemon Started! > 2018-07-27 07:44:01.779 1895 INFO > neutron.plugins.ml2.drivers.agent._common_agent > [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Linux bridge > agent Agent out of sync with plugin! > 2018-07-27 07:44:02.418 1895 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect > [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Clearing orphaned > ARP spoofing entries for devices [] > > > I'm using this OpenStack cloud to run my master's experiment. I turned > off all nodes, and after a few days I called again and from that the > VMs were not remotely accessible. > > So I delete existing networks and re-create. It was in an attempt to > solve the problem. > > Here is an attached image. 
Neutron is creating multiple interfaces on > the 10.0.0.0 network on the router. > > > Em 27-07-2018 05:05, Eugen Block escreveu: >> Hi, >> >> is there anything in the linuxbridge-agent logs on control and/or >> compute node(s)? >> Which neutron services don't start? Can you paste "openstack network >> agent list" output? >> >> The important question is: what was the cause of "neutron stopped >> working" and why did you delete the existing networks? It probably >> would be helpful knowing the reaseon to be able to prevent such >> problemes in the future. Or are the provided logs from before? >> >> We experience network/neutron troubles from time to time, and >> sometimes the only way to fix it is a reboot. >> >> Regards, >> Eugen >> >> >> Zitat von Marcio Prado : >> >>> Good afternoon, >>> >>> For no apparent reason my Neutron stopped working. >>> >>> I deleted the networks, subnets and routers, created everything >>> again. >>> >>> But it does not work. The logs are: >>> >>> >>> 2018-07-26 11:29:16.101 3272 INFO >>> neutron.plugins.ml2.drivers.agent._common_agent >>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Linux bridge >>> agent Agent out of sync with plugin! >>> 2018-07-26 11:29:16.101 3272 INFO neutron.agent.securitygroups_rpc >>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Preparing >>> filters for devices set(['tap69feb7be-2b', 'tap0efd5228-b0', >>> 'tap83a57ce5-a8', 'tapd50d137f-f6']) >>> 2018-07-26 11:29:18.218 3272 INFO >>> neutron.plugins.ml2.drivers.agent._common_agent >>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Port >>> tap69feb7be-2b updated. Details: {u'profile': {}, >>> u'network_qos_policy_id': None, u'qos_policy_id': None, >>> u'allowed_address_pairs': [], u'admin_state_up': True, >>> u'network_id': u'0f293447-ad01-465e-a034-fdaa136a4488', >>> u'segmentation_id': None, u'device_owner': >>> u'network:router_gateway', u'physical_network': u'provider', >>> u'mac_address': u'fa:16:3e:a3:be:5c', u'device': u'tap69feb7be-2b', >>> u'port_security_enabled': False, u'port_id': >>> u'69feb7be-2b9c-4604-a078-32c984d7075a', u'fixed_ips': >>> [{u'subnet_id': u'5ef3df97-d88a-4c60-969c-5a862f04c1e0', >>> u'ip_address': u'192.168.0.14'}], u'network_type': u'flat'} >>> 2018-07-26 11:29:18.871 3272 INFO >>> neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect >>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Skipping ARP >>> spoofing rules for port 'tap69feb7be-2b' because it has port >>> security disabled >>> 2018-07-26 11:29:20.208 3272 ERROR neutron.agent.linux.utils >>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Exit code: 2; >>> Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists >>> >>> 2018-07-26 11:29:20.219 3272 ERROR >>> neutron.plugins.ml2.drivers.agent._common_agent >>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Error in agent >>> loop. 
Devices info: {'current': set(['tap69feb7be-2b', >>> 'tap0efd5228-b0', 'tap83a57ce5-a8', 'tapd50d137f-f6']), >>> 'timestamps': {'tap0efd5228-b0': 9, 'tap69feb7be-2b': 13, >>> 'tap83a57ce5-a8': 10, 'tapd50d137f-f6': 8}, 'removed': set([]), >>> 'added': set(['tap69feb7be-2b', 'tap0efd5228-b0', 'tap83a57ce5-a8', >>> 'tapd50d137f-f6']), 'updated': set([])} >>> 2018-07-26 11:29:20.219 3272 ERROR >>> neutron.plugins.ml2.drivers.agent._common_agent Traceback (most >>> recent call last): >>> 2018-07-26 11:29:20.219 3272 ERROR >>> neutron.plugins.ml2.drivers.agent._common_agent File >>> "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", >>> line 453, in daemon_loop >>> 2018-07-26 11:29:20.219 3272 ERROR >>> neutron.plugins.ml2.drivers.agent._common_agent sync = >>> self.process_network_devices(device_info) >>> 2018-07-26 11:29:20.219 3272 ERROR >>> neutron.plugins.ml2.drivers.agent._common_agent File >>> "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 153, >>> in wrapper >>> 2018-07-26 11:29:20.219 3272 ERROR >>> neutron.plugins.ml2.drivers.agent._common_agent return f(*args, >>> **kwargs) >>> 2018-07-26 11:29:20.219 3272 ERROR >>> neutron.plugins.ml2.drivers.agent._common_agent File >>> "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", >>> line 210, in process_network_devices >>> >>> Has anyone had similar experience? >>> >>> -- Marcio Prado >>> Analista de TI - Infraestrutura e Redes >>> Fone: (35) 9.9821-3561 >>> www.marcioprado.eti.br >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Marcio Prado Analista de TI - Infraestrutura e Redes Fone: (35) 9.9821-3561 www.marcioprado.eti.br From eblock at nde.ag Thu Aug 9 15:16:45 2018 From: eblock at nde.ag (Eugen Block) Date: Thu, 09 Aug 2018 15:16:45 +0000 Subject: [Openstack] Error Neutron: RTNETLINK answers: File exists In-Reply-To: <857825ade79845e5e31f9bdded3f1733@marcioprado.eti.br> References: <20180727080536.Horde.GzHVtxaoCBgtq_alRDaYS0d@webmail.nde.ag> <99001d52739f470ba2abf7d951600436@marcioprado.eti.br> <857825ade79845e5e31f9bdded3f1733@marcioprado.eti.br> Message-ID: <20180809151645.Horde.WVkep8Gc9YxZbMGvKPGkkUv@webmail.nde.ag> Sorry, somehow I didn't notice your answer and forgot the thread. > The problem is a wireless TP-link router with the OpenWRT firmware > configured with bridge. > When I connect this wireless router to the switch with the OpenStack > cloud servers, the Linux bridge agent starts to make an error and I > lose access to the VMs. It's good you have a hint to the cause, but I'm afraid I can't help you with this. Hopefully someone with more expertise will be able to point you to the right direction. Regards Zitat von Marcio Prado : > Guys, I figured out part of the problem. > > The problem is a wireless TP-link router with the OpenWRT firmware > configured with bridge. > > When I connect this wireless router to the switch with the OpenStack > cloud servers, the Linux bridge agent starts to make an error and I > lose access to the VMs. 
> > It is not duplicate IP or DHCP. > > Does anyone have any idea what it is? > > > > > Em 27-07-2018 08:32, Marcio Prado escreveu: >> Thanks for the help Eugen, >> >> This log is from the linuxbridge of the controller node. Compute nodes >> are not logging errors. >> >> Follows the output of the "openstack network agent list" >> >> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >> | ID | Agent Type | Host >> | Availability Zone | Alive | State | Binary | >> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >> | 590f5a6d-379b-4e8d-87ec-f1060cecf230 | Linux bridge agent | >> controller | None | True | UP | >> neutron-linuxbridge-agent | >> | 88fb87c9-4c03-4faa-8286-95be3586fc94 | DHCP agent | >> controller | nova | True | UP | neutron-dhcp-agent >> | >> | b982382e-438c-46a9-8d4e-d58d554150fd | Linux bridge agent | compute1 >> | None | True | UP | neutron-linuxbridge-agent | >> | c7a9ba41-1fae-46cd-b61f-30bcacb0a4e8 | L3 agent | >> controller | nova | True | UP | neutron-l3-agent >> | >> | c9a1ea4b-2d5d-4bda-9849-cd6e302a2917 | Metadata agent | >> controller | None | True | UP | >> neutron-metadata-agent | >> | e690d4b9-9285-4ddd-a87a-f28ea99d9a73 | Linux bridge agent | compute3 >> | None | False | UP | neutron-linuxbridge-agent | >> | fdd8f615-f5d6-4100-826e-59f8270df715 | Linux bridge agent | compute2 >> | None | False | UP | neutron-linuxbridge-agent | >> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >> >> compute2 and compute3 are turned off intentionally. >> >> Log compute1 >> >> /var/log/neutron/neutron-linuxbridge-agent.log >> >> 2018-07-27 07:43:57.242 1895 INFO neutron.common.config [-] >> /usr/bin/neutron-linuxbridge-agent version 10.0.0 >> 2018-07-27 07:43:57.243 1895 INFO >> neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent >> [-] Interface mappings: {'provider': 'eno3'} >> 2018-07-27 07:43:57.243 1895 INFO >> neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent >> [-] Bridge mappings: {} >> 2018-07-27 07:44:00.954 1895 INFO >> neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent >> [-] Agent initialized successfully, now running... >> 2018-07-27 07:44:01.582 1895 INFO >> neutron.plugins.ml2.drivers.agent._common_agent >> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] RPC agent_id: >> lb525400d52f59 >> 2018-07-27 07:44:01.589 1895 INFO >> neutron.agent.agent_extensions_manager >> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Loaded agent >> extensions: [] >> 2018-07-27 07:44:01.716 1895 INFO >> neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent >> Agent has just been revived. Doing a full sync. >> 2018-07-27 07:44:01.778 1895 INFO >> neutron.plugins.ml2.drivers.agent._common_agent >> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Linux bridge >> agent Agent RPC Daemon Started! >> 2018-07-27 07:44:01.779 1895 INFO >> neutron.plugins.ml2.drivers.agent._common_agent >> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Linux bridge >> agent Agent out of sync with plugin! 
>> 2018-07-27 07:44:02.418 1895 INFO >> neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect >> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Clearing orphaned >> ARP spoofing entries for devices [] >> >> >> I'm using this OpenStack cloud to run my master's experiment. I turned >> off all nodes, and after a few days I called again and from that the >> VMs were not remotely accessible. >> >> So I delete existing networks and re-create. It was in an attempt to >> solve the problem. >> >> Here is an attached image. Neutron is creating multiple interfaces on >> the 10.0.0.0 network on the router. >> >> >> Em 27-07-2018 05:05, Eugen Block escreveu: >>> Hi, >>> >>> is there anything in the linuxbridge-agent logs on control and/or >>> compute node(s)? >>> Which neutron services don't start? Can you paste "openstack network >>> agent list" output? >>> >>> The important question is: what was the cause of "neutron stopped >>> working" and why did you delete the existing networks? It probably >>> would be helpful knowing the reaseon to be able to prevent such >>> problemes in the future. Or are the provided logs from before? >>> >>> We experience network/neutron troubles from time to time, and >>> sometimes the only way to fix it is a reboot. >>> >>> Regards, >>> Eugen >>> >>> >>> Zitat von Marcio Prado : >>> >>>> Good afternoon, >>>> >>>> For no apparent reason my Neutron stopped working. >>>> >>>> I deleted the networks, subnets and routers, created everything again. >>>> >>>> But it does not work. The logs are: >>>> >>>> >>>> 2018-07-26 11:29:16.101 3272 INFO >>>> neutron.plugins.ml2.drivers.agent._common_agent >>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Linux bridge >>>> agent Agent out of sync with plugin! >>>> 2018-07-26 11:29:16.101 3272 INFO >>>> neutron.agent.securitygroups_rpc >>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Preparing >>>> filters for devices set(['tap69feb7be-2b', 'tap0efd5228-b0', >>>> 'tap83a57ce5-a8', 'tapd50d137f-f6']) >>>> 2018-07-26 11:29:18.218 3272 INFO >>>> neutron.plugins.ml2.drivers.agent._common_agent >>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Port >>>> tap69feb7be-2b updated. Details: {u'profile': {}, >>>> u'network_qos_policy_id': None, u'qos_policy_id': None, >>>> u'allowed_address_pairs': [], u'admin_state_up': True, >>>> u'network_id': u'0f293447-ad01-465e-a034-fdaa136a4488', >>>> u'segmentation_id': None, u'device_owner': >>>> u'network:router_gateway', u'physical_network': u'provider', >>>> u'mac_address': u'fa:16:3e:a3:be:5c', u'device': >>>> u'tap69feb7be-2b', u'port_security_enabled': False, u'port_id': >>>> u'69feb7be-2b9c-4604-a078-32c984d7075a', u'fixed_ips': >>>> [{u'subnet_id': u'5ef3df97-d88a-4c60-969c-5a862f04c1e0', >>>> u'ip_address': u'192.168.0.14'}], u'network_type': u'flat'} >>>> 2018-07-26 11:29:18.871 3272 INFO >>>> neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect >>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Skipping ARP >>>> spoofing rules for port 'tap69feb7be-2b' because it has port >>>> security disabled >>>> 2018-07-26 11:29:20.208 3272 ERROR neutron.agent.linux.utils >>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Exit code: >>>> 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists >>>> >>>> 2018-07-26 11:29:20.219 3272 ERROR >>>> neutron.plugins.ml2.drivers.agent._common_agent >>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Error in >>>> agent loop. 
Devices info: {'current': set(['tap69feb7be-2b', >>>> 'tap0efd5228-b0', 'tap83a57ce5-a8', 'tapd50d137f-f6']), >>>> 'timestamps': {'tap0efd5228-b0': 9, 'tap69feb7be-2b': 13, >>>> 'tap83a57ce5-a8': 10, 'tapd50d137f-f6': 8}, 'removed': set([]), >>>> 'added': set(['tap69feb7be-2b', 'tap0efd5228-b0', >>>> 'tap83a57ce5-a8', 'tapd50d137f-f6']), 'updated': set([])} >>>> 2018-07-26 11:29:20.219 3272 ERROR >>>> neutron.plugins.ml2.drivers.agent._common_agent Traceback (most >>>> recent call last): >>>> 2018-07-26 11:29:20.219 3272 ERROR >>>> neutron.plugins.ml2.drivers.agent._common_agent File >>>> "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 453, in >>>> daemon_loop >>>> 2018-07-26 11:29:20.219 3272 ERROR >>>> neutron.plugins.ml2.drivers.agent._common_agent sync = >>>> self.process_network_devices(device_info) >>>> 2018-07-26 11:29:20.219 3272 ERROR >>>> neutron.plugins.ml2.drivers.agent._common_agent File >>>> "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line >>>> 153, in wrapper >>>> 2018-07-26 11:29:20.219 3272 ERROR >>>> neutron.plugins.ml2.drivers.agent._common_agent return >>>> f(*args, **kwargs) >>>> 2018-07-26 11:29:20.219 3272 ERROR >>>> neutron.plugins.ml2.drivers.agent._common_agent File >>>> "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", line 210, in >>>> process_network_devices >>>> >>>> Has anyone had similar experience? >>>> >>>> -- Marcio Prado >>>> Analista de TI - Infraestrutura e Redes >>>> Fone: (35) 9.9821-3561 >>>> www.marcioprado.eti.br >>>> >>>> _______________________________________________ >>>> Mailing list: >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> Post to : openstack at lists.openstack.org >>>> Unsubscribe : >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >>> >>> >>> >>> _______________________________________________ >>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -- > Marcio Prado > Analista de TI - Infraestrutura e Redes > Fone: (35) 9.9821-3561 > www.marcioprado.eti.br From marcioprado at marcioprado.eti.br Thu Aug 9 16:00:20 2018 From: marcioprado at marcioprado.eti.br (Marcio Prado) Date: Thu, 09 Aug 2018 13:00:20 -0300 Subject: [Openstack] Error Neutron: RTNETLINK answers: File exists In-Reply-To: <20180809151645.Horde.WVkep8Gc9YxZbMGvKPGkkUv@webmail.nde.ag> References: <20180727080536.Horde.GzHVtxaoCBgtq_alRDaYS0d@webmail.nde.ag> <99001d52739f470ba2abf7d951600436@marcioprado.eti.br> <857825ade79845e5e31f9bdded3f1733@marcioprado.eti.br> <20180809151645.Horde.WVkep8Gc9YxZbMGvKPGkkUv@webmail.nde.ag> Message-ID: Thanks Eugen. Em 09-08-2018 12:16, Eugen Block escreveu: > Sorry, somehow I didn't notice your answer and forgot the thread. > >> The problem is a wireless TP-link router with the OpenWRT firmware >> configured with bridge. >> When I connect this wireless router to the switch with the OpenStack >> cloud servers, the Linux bridge agent starts to make an error and I >> lose access to the VMs. > > It's good you have a hint to the cause, but I'm afraid I can't help > you with this. Hopefully someone with more expertise will be able to > point you to the right direction. > > Regards > > > Zitat von Marcio Prado : > >> Guys, I figured out part of the problem. 
>> >> The problem is a wireless TP-link router with the OpenWRT firmware >> configured with bridge. >> >> When I connect this wireless router to the switch with the OpenStack >> cloud servers, the Linux bridge agent starts to make an error and I >> lose access to the VMs. >> >> It is not duplicate IP or DHCP. >> >> Does anyone have any idea what it is? >> >> >> >> >> Em 27-07-2018 08:32, Marcio Prado escreveu: >>> Thanks for the help Eugen, >>> >>> This log is from the linuxbridge of the controller node. Compute >>> nodes >>> are not logging errors. >>> >>> Follows the output of the "openstack network agent list" >>> >>> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >>> | ID | Agent Type | Host >>> | Availability Zone | Alive | State | Binary | >>> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >>> | 590f5a6d-379b-4e8d-87ec-f1060cecf230 | Linux bridge agent | >>> controller | None | True | UP | >>> neutron-linuxbridge-agent | >>> | 88fb87c9-4c03-4faa-8286-95be3586fc94 | DHCP agent | >>> controller | nova | True | UP | neutron-dhcp-agent >>> | >>> | b982382e-438c-46a9-8d4e-d58d554150fd | Linux bridge agent | >>> compute1 >>> | None | True | UP | neutron-linuxbridge-agent | >>> | c7a9ba41-1fae-46cd-b61f-30bcacb0a4e8 | L3 agent | >>> controller | nova | True | UP | neutron-l3-agent >>> | >>> | c9a1ea4b-2d5d-4bda-9849-cd6e302a2917 | Metadata agent | >>> controller | None | True | UP | >>> neutron-metadata-agent | >>> | e690d4b9-9285-4ddd-a87a-f28ea99d9a73 | Linux bridge agent | >>> compute3 >>> | None | False | UP | neutron-linuxbridge-agent | >>> | fdd8f615-f5d6-4100-826e-59f8270df715 | Linux bridge agent | >>> compute2 >>> | None | False | UP | neutron-linuxbridge-agent | >>> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >>> >>> compute2 and compute3 are turned off intentionally. >>> >>> Log compute1 >>> >>> /var/log/neutron/neutron-linuxbridge-agent.log >>> >>> 2018-07-27 07:43:57.242 1895 INFO neutron.common.config [-] >>> /usr/bin/neutron-linuxbridge-agent version 10.0.0 >>> 2018-07-27 07:43:57.243 1895 INFO >>> neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent >>> [-] Interface mappings: {'provider': 'eno3'} >>> 2018-07-27 07:43:57.243 1895 INFO >>> neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent >>> [-] Bridge mappings: {} >>> 2018-07-27 07:44:00.954 1895 INFO >>> neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent >>> [-] Agent initialized successfully, now running... >>> 2018-07-27 07:44:01.582 1895 INFO >>> neutron.plugins.ml2.drivers.agent._common_agent >>> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] RPC agent_id: >>> lb525400d52f59 >>> 2018-07-27 07:44:01.589 1895 INFO >>> neutron.agent.agent_extensions_manager >>> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Loaded agent >>> extensions: [] >>> 2018-07-27 07:44:01.716 1895 INFO >>> neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge >>> agent >>> Agent has just been revived. Doing a full sync. >>> 2018-07-27 07:44:01.778 1895 INFO >>> neutron.plugins.ml2.drivers.agent._common_agent >>> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Linux bridge >>> agent Agent RPC Daemon Started! 
>>> 2018-07-27 07:44:01.779 1895 INFO >>> neutron.plugins.ml2.drivers.agent._common_agent >>> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Linux bridge >>> agent Agent out of sync with plugin! >>> 2018-07-27 07:44:02.418 1895 INFO >>> neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect >>> [req-3a8a42dc-32fc-40fc-8a4f-ddbb4d8c5f5b - - - - -] Clearing >>> orphaned >>> ARP spoofing entries for devices [] >>> >>> >>> I'm using this OpenStack cloud to run my master's experiment. I >>> turned >>> off all nodes, and after a few days I called again and from that the >>> VMs were not remotely accessible. >>> >>> So I delete existing networks and re-create. It was in an attempt to >>> solve the problem. >>> >>> Here is an attached image. Neutron is creating multiple interfaces on >>> the 10.0.0.0 network on the router. >>> >>> >>> Em 27-07-2018 05:05, Eugen Block escreveu: >>>> Hi, >>>> >>>> is there anything in the linuxbridge-agent logs on control and/or >>>> compute node(s)? >>>> Which neutron services don't start? Can you paste "openstack network >>>> agent list" output? >>>> >>>> The important question is: what was the cause of "neutron stopped >>>> working" and why did you delete the existing networks? It probably >>>> would be helpful knowing the reaseon to be able to prevent such >>>> problemes in the future. Or are the provided logs from before? >>>> >>>> We experience network/neutron troubles from time to time, and >>>> sometimes the only way to fix it is a reboot. >>>> >>>> Regards, >>>> Eugen >>>> >>>> >>>> Zitat von Marcio Prado : >>>> >>>>> Good afternoon, >>>>> >>>>> For no apparent reason my Neutron stopped working. >>>>> >>>>> I deleted the networks, subnets and routers, created everything >>>>> again. >>>>> >>>>> But it does not work. The logs are: >>>>> >>>>> >>>>> 2018-07-26 11:29:16.101 3272 INFO >>>>> neutron.plugins.ml2.drivers.agent._common_agent >>>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Linux bridge >>>>> agent Agent out of sync with plugin! >>>>> 2018-07-26 11:29:16.101 3272 INFO neutron.agent.securitygroups_rpc >>>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Preparing >>>>> filters for devices set(['tap69feb7be-2b', 'tap0efd5228-b0', >>>>> 'tap83a57ce5-a8', 'tapd50d137f-f6']) >>>>> 2018-07-26 11:29:18.218 3272 INFO >>>>> neutron.plugins.ml2.drivers.agent._common_agent >>>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Port >>>>> tap69feb7be-2b updated. 
Details: {u'profile': {}, >>>>> u'network_qos_policy_id': None, u'qos_policy_id': None, >>>>> u'allowed_address_pairs': [], u'admin_state_up': True, >>>>> u'network_id': u'0f293447-ad01-465e-a034-fdaa136a4488', >>>>> u'segmentation_id': None, u'device_owner': >>>>> u'network:router_gateway', u'physical_network': u'provider', >>>>> u'mac_address': u'fa:16:3e:a3:be:5c', u'device': >>>>> u'tap69feb7be-2b', u'port_security_enabled': False, u'port_id': >>>>> u'69feb7be-2b9c-4604-a078-32c984d7075a', u'fixed_ips': >>>>> [{u'subnet_id': u'5ef3df97-d88a-4c60-969c-5a862f04c1e0', >>>>> u'ip_address': u'192.168.0.14'}], u'network_type': u'flat'} >>>>> 2018-07-26 11:29:18.871 3272 INFO >>>>> neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect >>>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Skipping ARP >>>>> spoofing rules for port 'tap69feb7be-2b' because it has port >>>>> security disabled >>>>> 2018-07-26 11:29:20.208 3272 ERROR neutron.agent.linux.utils >>>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Exit code: 2; >>>>> Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists >>>>> >>>>> 2018-07-26 11:29:20.219 3272 ERROR >>>>> neutron.plugins.ml2.drivers.agent._common_agent >>>>> [req-9ba0ca9f-aeaf-44b2-ba24-c08556aae0ac - - - - -] Error in >>>>> agent loop. Devices info: {'current': set(['tap69feb7be-2b', >>>>> 'tap0efd5228-b0', 'tap83a57ce5-a8', 'tapd50d137f-f6']), >>>>> 'timestamps': {'tap0efd5228-b0': 9, 'tap69feb7be-2b': 13, >>>>> 'tap83a57ce5-a8': 10, 'tapd50d137f-f6': 8}, 'removed': set([]), >>>>> 'added': set(['tap69feb7be-2b', 'tap0efd5228-b0', >>>>> 'tap83a57ce5-a8', 'tapd50d137f-f6']), 'updated': set([])} >>>>> 2018-07-26 11:29:20.219 3272 ERROR >>>>> neutron.plugins.ml2.drivers.agent._common_agent Traceback (most >>>>> recent call last): >>>>> 2018-07-26 11:29:20.219 3272 ERROR >>>>> neutron.plugins.ml2.drivers.agent._common_agent File >>>>> "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", >>>>> line 453, in daemon_loop >>>>> 2018-07-26 11:29:20.219 3272 ERROR >>>>> neutron.plugins.ml2.drivers.agent._common_agent sync = >>>>> self.process_network_devices(device_info) >>>>> 2018-07-26 11:29:20.219 3272 ERROR >>>>> neutron.plugins.ml2.drivers.agent._common_agent File >>>>> "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line >>>>> 153, in wrapper >>>>> 2018-07-26 11:29:20.219 3272 ERROR >>>>> neutron.plugins.ml2.drivers.agent._common_agent return >>>>> f(*args, **kwargs) >>>>> 2018-07-26 11:29:20.219 3272 ERROR >>>>> neutron.plugins.ml2.drivers.agent._common_agent File >>>>> "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", >>>>> line 210, in process_network_devices >>>>> >>>>> Has anyone had similar experience? 
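(A generic first check after this kind of "RTNETLINK answers: File exists" error from the Linux bridge agent is to look for leftover brq/tap devices belonging to networks that were deleted and re-created; this is only a sketch, and the bridge name below is a placeholder that has to be taken from the output of the first two commands:)

  brctl show
  ip -o link show | grep -E 'brq|tap'
  # only after confirming a bridge no longer maps to any existing Neutron network:
  ip link set brqXXXXXXXX down
  brctl delbr brqXXXXXXXX
  systemctl restart neutron-linuxbridge-agent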
>>>>> >>>>> -- Marcio Prado >>>>> Analista de TI - Infraestrutura e Redes >>>>> Fone: (35) 9.9821-3561 >>>>> www.marcioprado.eti.br >>>>> >>>>> _______________________________________________ >>>>> Mailing list: >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>>> Post to : openstack at lists.openstack.org >>>>> Unsubscribe : >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Mailing list: >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>>> Post to : openstack at lists.openstack.org >>>> Unsubscribe : >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> -- Marcio Prado >> Analista de TI - Infraestrutura e Redes >> Fone: (35) 9.9821-3561 >> www.marcioprado.eti.br -- Marcio Prado Analista de TI - Infraestrutura e Redes Fone: (35) 9.9821-3561 www.marcioprado.eti.br From jayachander.it at gmail.com Thu Aug 9 18:34:17 2018 From: jayachander.it at gmail.com (Jay See) Date: Thu, 9 Aug 2018 20:34:17 +0200 Subject: [Openstack] Adding new Hard disk to Compute Node In-Reply-To: <574e1679-a46a-48ab-c5d2-9e0253007962@gmail.com> References: <20180808092444.Horde.Lzws_BFycOtsLcWEhEk2UHQ@webmail.nde.ag> <20180808133616.Horde.ZSEwaZpwVtvl3DIN-skF0Wn@webmail.nde.ag> <574e1679-a46a-48ab-c5d2-9e0253007962@gmail.com> Message-ID: Hai Bernd Bausch, Thanks for your help. As you said , I am not completely familiar with all the underlying concepts. But I am trying to learn thanks for pointing me in the right direction. Now, I have achieved what I wanted. I have followed your second suggestion with some more reading in to LVM (as I am not complete aware of things in linux yet). Regarding your other suggestion with more Linux concepts, I need to do work on them as well (not at the moment). Thanks. Jay. On Thu, Aug 9, 2018 at 2:37 AM, Bernd Bausch wrote: > Your node uses logical volume *h020--vg-root* as its root filesystem. > This logical volume has a size of 370GB: > > # lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL > NAME FSTYPE SIZE MOUNTPOINT LABEL > (...) > └─sdk5 LVM2_member 371.5G > * ├─h020--vg-root ext4 370.6G /* > └─h020--vg-swap_1 swap 976M [SWAP] > > Now you created another physical volume, */dev/sdb1*, and added it to > volume group *h020-vg*. This increases the size of the *volume group*, > but not the size of the *logical volume*. > > If you want to provide more space to instances' ephemeral storage, you > could: > > - increase the size of root volume *h020--vg-root* using the *lvextend* > command, then increase the size of the filesystem on it. I believe that > this requires a reboot, since it's the root filesystem. > > or > > - create another logical volume, e.g. lvcreate -L1000GB -n > lv-instances h020-vg for a 1000GB logical volume, and mount it under > */var/lib/nova/instances*: mount /dev/h020-vg/lv-instances > /var/lib/nova/instances > (before mounting, create a filesystem on *lv-instances* and transfer > the data from */var/lib/nova/instances* to the new filesystem. Also, > don't forget to persist the mount by adding it to */etc/fstab*) > > The second option is by far better, in my opinion, as you should separate > operating system files from OpenStack data. > > You say that you are new to OpenStack. That's fine, but you seem to be > lacking the fundamentals of Linux system management as well. You can't > learn OpenStack without a certain level of Linux skills. At least learn > about LVM (it's not that hard) and filesystems. 
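(Spelled out end to end, the second option above might look roughly like the sketch below, with the size, volume group and mount point taken from this thread; treat it as an outline rather than a recipe:)

  lvcreate -L 1000G -n lv-instances h020-vg
  mkfs.ext4 /dev/h020-vg/lv-instances
  service nova-compute stop
  # copy the existing ephemeral data, then mount the new volume in its place
  mount /dev/h020-vg/lv-instances /mnt
  cp -a /var/lib/nova/instances/. /mnt/
  umount /mnt
  mount /dev/h020-vg/lv-instances /var/lib/nova/instances
  echo '/dev/h020-vg/lv-instances /var/lib/nova/instances ext4 defaults 0 2' >> /etc/fstab
  service nova-compute start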
You will also need to have > networking fundamentals and Linux networking tools under your belt. > > Good luck! > > Bernd Bausch > > > On 8/9/2018 2:30 AM, Jay See wrote: > > Hai Eugen, > > Thanks for your suggestions and I went back to find more about adding the > new HD to VG. I think it was successful. (Logs are at the end of the mail) > > Followed this link - https://www.howtoforge.com/ > logical-volume-manager-how-can-i-extend-a-volume-group > > But still on the nova-compute logs it still shows wrong phys_disk size. > Even in the horizon it doesn't get updated with the new HD added to compute > node. > > 2018-08-08 19:22:56.671 3335 INFO nova.compute.resource_tracker > [req-14a2b7e2-7703-4a75-9014-180eb26876ff - - - - -] Final resource view: > name=h020 phys_ram=515767MB used_ram=512MB *phys_disk=364GB *used_disk=0GB > total_vcpus=40 used_vcpus=0 pci_stats=[] > > I understood they are not supposed to be mounted on /var/lib/nova/instances > so removed them now. > > Thanks > Jay. > > > root at h020:~# vgdisplay > --- Volume group --- > *VG Name h020-vg* > System ID > Format lvm2 > Metadata Areas 1 > Metadata Sequence No 3 > VG Access read/write > VG Status resizable > MAX LV 0 > Cur LV 2 > Open LV 2 > Max PV 0 > Cur PV 1 > Act PV 1 > VG Size 371.52 GiB > PE Size 4.00 MiB > Total PE 95109 > * Alloc PE / Size 95105 / 371.50 GiB* > * Free PE / Size 4 / 16.00 MiB* > VG UUID 4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U > > root at h020:~# pvcreate */dev/sdb1* > Physical volume "/dev/sdb1" successfully created > root at h020:~# pvdisplay > --- Physical volume --- > PV Name /dev/sdk5 > VG Name h020-vg > PV Size 371.52 GiB / not usable 2.00 MiB > Allocatable yes > PE Size 4.00 MiB > Total PE 95109 > Free PE 4 > Allocated PE 95105 > PV UUID BjGeac-TRkC-0gi8-GKX8-2Ivc-7awz-DTK2nR > > "/dev/sdb1" is a new physical volume of "5.46 TiB" > --- NEW Physical volume --- > PV Name /dev/sdb1 > VG Name > PV Size 5.46 TiB > Allocatable NO > PE Size 0 > Total PE 0 > Free PE 0 > Allocated PE 0 > PV UUID CPp369-3MwJ-ic3I-Keh1-dJJY-Gcrc-CpC443 > > root at h020:~# vgextend /dev/h020-vg /dev/sdb1 > Volume group "h020-vg" successfully extended > root at h020:~# vgdisplay > --- Volume group --- > VG Name h020-vg > System ID > Format lvm2 > Metadata Areas 2 > Metadata Sequence No 4 > VG Access read/write > VG Status resizable > MAX LV 0 > Cur LV 2 > Open LV 2 > Max PV 0 > Cur PV 2 > Act PV 2 > VG Size 5.82 TiB > PE Size 4.00 MiB > Total PE 1525900 > * Alloc PE / Size 95105 / 371.50 GiB* > * Free PE / Size 1430795 / 5.46 TiB* > VG UUID 4EoW4w-x2cw-xDmC-XrrX-SXBG-RePM-XmWA2U > > root at h020:~# service nova-compute restart > root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL > NAME FSTYPE SIZE MOUNTPOINT LABEL > sda 5.5T > ├─sda1 vfat 500M ESP > ├─sda2 vfat 100M DIAGS > └─sda3 vfat 2G OS > sdb 5.5T > └─sdb1 LVM2_member 5.5T > sdk 372G > ├─sdk1 ext2 487M /boot > ├─sdk2 1K > └─sdk5 LVM2_member 371.5G > ├─h020--vg-root ext4 370.6G / > └─h020--vg-swap_1 swap 976M [SWAP] > root at h020:~# pvscan > PV /dev/sdk5 VG h020-vg lvm2 [371.52 GiB / 16.00 MiB free] > PV /dev/sdb1 VG h020-vg lvm2 [5.46 TiB / 5.46 TiB free] > Total: 2 [5.82 TiB] / in use: 2 [5.82 TiB] / in no VG: 0 [0 ] > root at h020:~# vgs > VG #PV #LV #SN Attr VSize VFree > h020-vg 2 2 0 wz--n- 5.82t 5.46t > root at h020:~# vi /var/log/nova/nova-compute.log > root at h020:~# > > > On Wed, Aug 8, 2018 at 3:36 PM, Eugen Block wrote: > >> Okay, I'm really not sure if I understand your setup correctly. 
>> >> Server does not add them automatically, I tried to mount them. I tried >>> they >>> way they discussed in the page with /dev/sdb only. Other hard disks I >>> have >>> mounted them my self. Yes I can see them in lsblk output as below >>> >> >> What do you mean with "tried with /dev/sdb"? I assume this is a fresh >> setup and Cinder didn't work yet, am I right? >> The new disks won't be added automatically to your cinder configuration, >> if that's what you expected. You'll have to create new physical volumes and >> then extend the existing VG to use new disks. >> >> In Nova-Compute logs I can only see main hard disk shown in the the >>> complete phys_disk, it was supposed to show more phys_disk available >>> atleast 5.8 TB if only /dev/sdb is added as per my understand (May be I >>> am >>> thinking it in the wrong way, I want increase my compute node disk size >>> to >>> launch more VMs) >>> >> >> If you plan to use cinder volumes as disks for your instances, you don't >> need much space in /var/lib/nova/instances but more space available for >> cinder, so you'll need to grow the VG. >> >> Regards >> >> >> Zitat von Jay See : >> >> Hai, >>> >>> Thanks for a quick response. >>> >>> - what do you mean by "disks are not added"? Does the server recognize >>> them? Do you see them in the output of "lsblk"? >>> Server does not add them automatically, I tried to mount them. I tried >>> they >>> way they discussed in the page with /dev/sdb only. Other hard disks I >>> have >>> mounted them my self. Yes I can see them in lsblk output as below >>> root at h020:~# lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL >>> NAME FSTYPE SIZE >>> MOUNTPOINT LABEL >>> sda 5.5T >>> ├─sda1 vfat 500M >>> ESP >>> ├─sda2 vfat 100M >>> DIAGS >>> └─sda3 vfat 2G >>> OS >>> sdb 5.5T >>> ├─sdb1 5.5T >>> ├─cinder--volumes-cinder--volumes--pool_tmeta 84M >>> │ └─cinder--volumes-cinder--volumes--pool 5.2T >>> └─cinder--volumes-cinder--volumes--pool_tdata 5.2T >>> └─cinder--volumes-cinder--volumes--pool 5.2T >>> sdc 5.5T >>> └─sdc1 xfs 5.5T >>> sdd 5.5T >>> └─sdd1 xfs 5.5T >>> /var/lib/nova/instances/sdd1 >>> sde 5.5T >>> └─sde1 xfs 5.5T >>> /var/lib/nova/instances/sde1 >>> sdf 5.5T >>> └─sdf1 xfs 5.5T >>> /var/lib/nova/instances/sdf1 >>> sdg 5.5T >>> └─sdg1 xfs 5.5T >>> /var/lib/nova/instances/sdg1 >>> sdh 5.5T >>> └─sdh1 xfs 5.5T >>> /var/lib/nova/instances/sdh1 >>> sdi 5.5T >>> └─sdi1 xfs 5.5T >>> /var/lib/nova/instances/sdi1 >>> sdj 5.5T >>> └─sdj1 xfs 5.5T >>> /var/lib/nova/instances/sdj1 >>> sdk 372G >>> ├─sdk1 ext2 487M /boot >>> ├─sdk2 1K >>> └─sdk5 LVM2_member 371.5G >>> ├─h020--vg-root ext4 370.6G / >>> └─h020--vg-swap_1 swap 976M [SWAP] >>> >>> - Do you already have existing physical volumes for cinder (assuming you >>> deployed cinder with lvm as in the provided link)? >>> Yes, I have tried one of the HD (/dev/sdb) >>> >>> - If the system recognizes the new disks and you deployed cinder with lvm >>> you can create a new physical volume and extend your existing volume >>> group >>> to have more space for cinder. Is this a failing step or someting else? >>> System does not recognize the disks automatically, I have manually >>> mounted >>> them or added them to cinder. 
>>> >>> In Nova-Compute logs I can only see main hard disk shown in the the >>> complete phys_disk, it was supposed to show more phys_disk available >>> atleast 5.8 TB if only /dev/sdb is added as per my understand (May be I >>> am >>> thinking it in the wrong way, I want increase my compute node disk size >>> to >>> launch more VMs) >>> >>> 2018-08-08 11:58:41.722 34111 INFO nova.compute.resource_tracker >>> [req-a180079f-d7c0-4430-9c14-314ac4d0832b - - - - -] F >>> inal resource view: name=h020 phys_ram=515767MB used_ram=512MB >>> *phys_disk=364GB* used_disk=0GB total_vcpus= >>> >>> 40 used_vcpus=0 pci_stats=[] >>> >>> - Please describe more precisely what exactly you tried and what exactly >>> fails. >>> As explained in the previous point, I want to increase the phys_disk >>> size >>> to use the compute node more efficiently. So to add the HD to compute >>> node >>> I am installing cinder on the compute node to add all the HDs. >>> >>> I might be doing something wrong. >>> >>> Thanks and Regards, >>> Jayachander. >>> >>> On Wed, Aug 8, 2018 at 11:24 AM, Eugen Block wrote: >>> >>> Hi, >>>> >>>> there are a couple of questions rising up: >>>> >>>> - what do you mean by "disks are not added"? Does the server recognize >>>> them? Do you see them in the output of "lsblk"? >>>> - Do you already have existing physical volumes for cinder (assuming you >>>> deployed cinder with lvm as in the provided link)? >>>> - If the system recognizes the new disks and you deployed cinder with >>>> lvm >>>> you can create a new physical volume and extend your existing volume >>>> group >>>> to have more space for cinder. Is this a failing step or someting else? >>>> - Please describe more precisely what exactly you tried and what exactly >>>> fails. >>>> >>>> The failing neutron-l3-agent shouldn't have to do anything with your >>>> disk >>>> layout, so it's probably something else. >>>> >>>> Regards, >>>> Eugen >>>> >>>> >>>> Zitat von Jay See : >>>> >>>> Hai, >>>> >>>>> >>>>> I am installing Openstack Queens on Ubuntu Server. >>>>> >>>>> My server has extra hard disk(s) apart from main hard disk where >>>>> OS(Ubuntu) >>>>> is running. >>>>> >>>>> ( >>>>> https://docs.openstack.org/cinder/queens/install/cinder-stor >>>>> age-install-ubuntu.html >>>>> ) >>>>> As suggested in cinder (above link), I have been trying to add the new >>>>> hard >>>>> disk but the other hard disks are not getting added. >>>>> >>>>> Can anyone tell me , what am i missing to add these hard disks? >>>>> >>>>> Other info : neutron-l3-agent on controller is not running, is it >>>>> related >>>>> to this issue ? I am thinking it is not related to this issue. >>>>> >>>>> I am new to Openstack. >>>>> >>>>> ~ Jayachander. 
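(For reference, the storage-node preparation that the cinder install guide linked above expects boils down to roughly the following; a sketch only, assuming /dev/sdb is the disk being handed over to Cinder:)

  pvcreate /dev/sdb
  vgcreate cinder-volumes /dev/sdb
  # the [lvm] backend section of /etc/cinder/cinder.conf then points at the group:
  # volume_group = cinder-volumes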
>>>>> -- >>>>> P *SAVE PAPER – Please do not print this e-mail unless absolutely >>>>> necessary.* >>>>> >>>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Mailing list: http://lists.openstack.org/cgi >>>> -bin/mailman/listinfo/openstac >>>> k >>>> Post to : openstack at lists.openstack.org >>>> Unsubscribe : http://lists.openstack.org/cgi >>>> -bin/mailman/listinfo/openstac >>>> k >>>> >>>> >>> >>> >>> -- >>> ​ >>> P *SAVE PAPER – Please do not print this e-mail unless absolutely >>> necessary.* >>> >> >> >> >> > > > -- > ​ > P *SAVE PAPER – Please do not print this e-mail unless absolutely > necessary.* > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > -- ​ P *SAVE PAPER – Please do not print this e-mail unless absolutely necessary.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Aug 9 21:50:42 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 9 Aug 2018 14:50:42 -0700 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: You have done such amazing things with the program! We appreciate everything you do :) Enjoy the little extra spare time. -Kendall (daiblo_rojo) On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Hi all, > > I'm reaching you out to let you know that I'll be stepping down as > coordinator for OpenStack next round. I had been contributing to this > effort for several rounds now and I believe is a good moment for somebody > else to take the lead. You all know how important is Outreachy to me and > I'm grateful for all the amazing things I've done as part of the Outreachy > program and all the great people I've met in the way. I plan to keep > involved with the internships but leave the coordination tasks to somebody > else. > > If you are interested in becoming an Outreachy coordinator, let me know > and I can share my experience and provide some guidance. > > Thanks, > > Victoria > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Aug 11 18:31:46 2018 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 11 Aug 2018 14:31:46 -0400 Subject: [Openstack] changing novalocal hostname suffix issue Message-ID: I have deployed openstack-ansible (queens) and everything working great but now i want to change "novalocal" suffix when i build instance. This is what i have tired and none of them work. 1. put dhcp_domain = example.com in /etc/nova/nova.conf 2. put dns_domain = example.com in neutron server 3. put dhcp_domain = example.com in dhcp_agent.ini In short i have tried all possible option but still my instance picking foo.novalocal in /etc/hostname I can use #cloud-config to make it change but i want to make it default alway use my own domain name. what i am missing here. 
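(For reference, the neutron-side DNS settings that usually control this suffix look roughly like the sketch below; the domain value is an example, the exact option placement can differ between releases and deployment tools, and the server and DHCP agent have to be restarted before the new domain is handed out:)

  # /etc/neutron/neutron.conf
  [DEFAULT]
  dns_domain = example.com.

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  extension_drivers = port_security,dns

  # restart so the change is actually picked up
  systemctl restart neutron-server neutron-dhcp-agent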
From satish.txt at gmail.com Mon Aug 13 13:51:15 2018 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 13 Aug 2018 09:51:15 -0400 Subject: [Openstack] Horizon customize IP Address column Message-ID: Folks, Quick question is there a way in horizon i remove network information from "IP address" column in instance tab when we have multiple interface, because its fonts are so big and looks ugly when you have many instance. Find attached screenshot that is what i am talking about, i don't want network name in "IP address" column, just IP address is enough Any idea how to get rid of that field? -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-08-12 at 12.37.24 AM.png Type: image/png Size: 186603 bytes Desc: not available URL: From terje.lundin at evolved-intelligence.com Mon Aug 13 19:14:25 2018 From: terje.lundin at evolved-intelligence.com (terje.lundin at evolved-intelligence.com) Date: Mon, 13 Aug 2018 20:14:25 +0100 Subject: [Openstack] changing novalocal hostname suffix issue In-Reply-To: References: Message-ID: <005301d43339$da8f3dc0$8fadb940$@evolved-intelligence.com> Hi Satish, Did you restart the dhcp agent service? (service neutron-dhcp-agent restart) It won't pick up on your changes without restart. I followed the steps here and got it working on Queens. https://docs.openstack.org/mitaka/networking-guide/config-dns-int.html Kind regards Terje Lundin -----Original Message----- From: Satish Patel Sent: Saturday, August 11, 2018 7:32 PM To: openstack Subject: [Openstack] changing novalocal hostname suffix issue I have deployed openstack-ansible (queens) and everything working great but now i want to change "novalocal" suffix when i build instance. This is what i have tired and none of them work. 1. put dhcp_domain = example.com in /etc/nova/nova.conf 2. put dns_domain = example.com in neutron server 3. put dhcp_domain = example.com in dhcp_agent.ini In short i have tried all possible option but still my instance picking foo.novalocal in /etc/hostname I can use #cloud-config to make it change but i want to make it default alway use my own domain name. what i am missing here. _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From d.lake at surrey.ac.uk Mon Aug 13 23:08:33 2018 From: d.lake at surrey.ac.uk (d.lake at surrey.ac.uk) Date: Mon, 13 Aug 2018 23:08:33 +0000 Subject: [Openstack] OVS-DPDK with NetVirt In-Reply-To: References: , Message-ID: I'm really getting nowhere fast with this. The latest in set of issues appears to be related to the "Permission denied" on the socket for qemu. Just to reprise - this is OVS with DPDK, All-In-One with Intel NICs and ODL NetVirt. Can ANYONE shed any light on this please - I can't believe that this isn't a very standard deployment and given that it works without DPDK on OVS I can't believe that it hasn't been seen hundreds of times beore. Thanks David From: Lake D Mr (PG/R - Elec Electronic Eng) Sent: 13 August 2018 16:35 To: 'Venkatrangan G - ERS, HCL Tech' ; dayavanti.gopal.kamath at ericsson.com; netvirt-dev at lists.opendaylight.org Subject: RE: OVS-DPDK with NetVirt Hi OK - I found some more guides which told me I needed to add: [ovs] datapath_type=netdev to ML2_conf which I have done with an extra line in local.conf. Now I am seeing the ports trying to be added as vhost-user ports. BUT. 
I am seeing this issue in the log: qemu-kvm: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhuab608c58-ae: Failed to connect socket /var/run/openvswitch/vhuab608c58-ae: Permission denied\n']#033[00m Any ideas? This is on an all-in-one system using CentOS 7.5 Thanks David From: Venkatrangan G - ERS, HCL Tech > Sent: 13 August 2018 10:36 To: Lake D Mr (PG/R - Elec Electronic Eng) >; dayavanti.gopal.kamath at ericsson.com; netvirt-dev at lists.opendaylight.org Subject: RE: OVS-DPDK with NetVirt Hi David, I think you can run this ommand on your control node sudo neutron-odl-ovs-hostconfig --config-file=/etc/neutron/neutron.conf --debug --ovs_dpdk --bridge_mappings=physnet1:br-physnet1 (Not exactly sure of all the arguments, Please run this command in the control node with dpdk option, I think that should help) Regards, Venkat G (When there is no wind....row!!!) From: netvirt-dev-bounces at lists.opendaylight.org > On Behalf Of d.lake at surrey.ac.uk Sent: 13 August 2018 14:01 To: dayavanti.gopal.kamath at ericsson.com; netvirt-dev at lists.opendaylight.org Subject: Re: [netvirt-dev] OVS-DPDK with NetVirt Good morning all I wonder if someone could help with this please. I don't know whether I need to add anything into ML2 to have the br-int installed in netdev mode or whether something else is wrong. Thank you in advance David Sent from my iPhone ________________________________ From: Lake D Mr (PG/R - Elec Electronic Eng) Sent: Friday, August 10, 2018 10:57:02 PM To: Dayavanti Gopal Kamath; netvirt-dev at lists.opendaylight.org Subject: RE: OVS-DPDK with NetVirt Hi The first link you sent doesn't work? I've no idea what a pseudoagent binding driver is.... All I've done is to follow the instructions for moving to DPDK on my existing ODL+OpenStack system which uses Devstack to install. My understanding is that I needed to enable DPDK in OVS. I do that with the following command: ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true I then unbound the DPDK NICs from the kernel mode driver and bound them to vfio-pci using "dpdk-devbind." Once that is done, I created 4 bridges in OVS which all use the netdev datapath: ovs-vsctl add-br br-dpdk1 -- set bridge br-dpdk1 datapath_type=netdev ovs-vsctl add-br br-dpdk2 -- set bridge br-dpdk2 datapath_type=netdev ovs-vsctl add-br br-dpdk3 -- set bridge br-dpdk3 datapath_type=netdev ovs-vsctl add-br br-dpdk4 -- set bridge br-dpdk4 datapath_type=netdev Then I added the ports for the NICs to each bridge: sudo ovs-vsctl add-port br-dpdk1 dpdk-p1 -- set Interface dpdk-p1 type=dpdk options:dpdk-devargs=0000:04:00.0 sudo ovs-vsctl add-port br-dpdk2 dpdk-p2 -- set Interface dpdk-p2 type=dpdk options:dpdk-devargs=0000:04:00.1 sudo ovs-vsctl add-port br-dpdk3 dpdk-p3 -- set Interface dpdk-p3 type=dpdk options:dpdk-devargs=0000:05:00.0 sudo ovs-vsctl add-port br-dpdk4 dpdk-p4 -- set Interface dpdk-p4 type=dpdk options:dpdk-devargs=0000:05:00.1 Having done that, I can verify that I can see traffic in the bridge using ovs-tcpdump so I know that the data is reaching OVS from the wire. Then I run Devstack stack.sh and I get a working system with four physical networks. However, this blog - https://joshhershberg.wordpress.com/2017/03/07/opendaylight-netvirt-dpdk-plumbing-how-it-all-works-together/ seems to indicate that the br-int should be automatically created by ODL as part of the installation process in netdev mode by virtue of the fact that it has read the datapath type from OVSDB and would therefore ensure that all ports are created with netdev. 
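(As a quick sanity check, assuming the bridge names used above, the following shows which datapath a bridge actually ended up with, which ports are attached to it, and whether DPDK initialisation is enabled; a sketch only:)

  ovs-vsctl get Bridge br-int datapath_type
  ovs-vsctl list-ports br-int
  ovs-vsctl get Open_vSwitch . other_config:dpdk-init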
But this doesn't appear to be happening because I see messages in karaf.log telling me that the ports are NOT in dpdk mode. The symptom is that when I create a VM, a TAP interface is built and I can see traffic into OVS and to/from the netns qdhcp, but traffic is not crossing between the br-dpdk ports and the ports associated with the VMs. I've also read this note https://software.intel.com/en-us/forums/networking/topic/704506 which seems to indicate some additional ML2 configuration is required but that would seem to run counter to the instructions given in the blog referenced earlier! I'm loathed to start manually changing anything in the OVS table because last time I asked a question about adding OVS rules to do routing across OVS I was told that really one should not touch the OVS tables manually if integrated with ODL and NetVirt. This is all rather confusing. David From: Dayavanti Gopal Kamath > Sent: 10 August 2018 19:03 To: Lake D Mr (PG/R - Elec Electronic Eng) >; netvirt-dev at lists.opendaylight.org Subject: RE: OVS-DPDK with NetVirt Hi david, Are you using the pseudoagent binding driver for binding the vif? In that case, ovsdb openvswitch table needs to be populated with host config information- https:/github/.com/openstack/networking-odl/blob/master/doc/source/devref/hostconfig.rst https://blueprints.launchpad.net/networking-odl/+spec/pseudo-agentdb-binding for netdev, your openvswitch table could look like this - external_ids: odl_os_hostconfig_hostid= external_ids: host_type= ODL_L2 external_ids: odl_os_hostconfig_config_odl_l2 = "{"supported_vnic_types": [{"vnic_type": ["normal"], "vif_type": "vhostuser", "vif_details": {"uuid": "TEST_UUID", "has_datapath_type_netdev": True, "support_vhost_user": True, "port_prefix": "vhu", "vhostuser_socket_dir": "/var/run/openvswitch", "vhostuser_ovs_plug": True, "vhostuser_mode": "server", "vhostuser_socket": "/var/run/openvswitch/vhu$PORT_ID"} }], "allowed_network_types": ["vlan", "vxlan"], "bridge_mappings": {" physnet1":"br-ex"}}" From: netvirt-dev-bounces at lists.opendaylight.org [mailto:netvirt-dev-bounces at lists.opendaylight.org] On Behalf Of d.lake at surrey.ac.uk Sent: Friday, August 10, 2018 7:59 PM To: netvirt-dev at lists.opendaylight.org Subject: [netvirt-dev] OVS-DPDK with NetVirt Hello I have installed OVS with DPDK support and created bridges to map my DPDK-mode interfaces to provider networks as below: ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true ovs-vsctl add-br br-dpdk1 -- set bridge br-dpdk1 datapath_type=netdev ovs-vsctl add-br br-dpdk2 -- set bridge br-dpdk2 datapath_type=netdev ovs-vsctl add-br br-dpdk3 -- set bridge br-dpdk3 datapath_type=netdev ovs-vsctl add-br br-dpdk4 -- set bridge br-dpdk4 datapath_type=netdev sudo ovs-vsctl add-port br-dpdk1 dpdk-p1 -- set Interface dpdk-p1 type=dpdk options:dpdk-devargs=0000:04:00.0 sudo ovs-vsctl add-port br-dpdk2 dpdk-p2 -- set Interface dpdk-p2 type=dpdk options:dpdk-devargs=0000:04:00.1 sudo ovs-vsctl add-port br-dpdk3 dpdk-p3 -- set Interface dpdk-p3 type=dpdk options:dpdk-devargs=0000:05:00.0 sudo ovs-vsctl add-port br-dpdk4 dpdk-p4 -- set Interface dpdk-p4 type=dpdk options:dpdk-devargs=0000:05:00.1 I have ODL provider mappings between physnet1:br-dpdk1 etc and I can create flat networks using the provider network names. BUT. I am still seeing the tap interfaces in the ovs-vsctl show and in karaf.log it appears that the VM interfaces are NOT being created as type vhostuser. 
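(For completeness, the host config that the pseudo-agent binding reads lives in the OVSDB external_ids; a minimal sketch of setting it by hand with ovs-vsctl, reusing the keys and values quoted earlier in this thread, would be the following. The host name is a placeholder, the bridge mapping follows the physnet1:br-dpdk1 mapping mentioned above, and vhu$PORT_ID is a literal template kept inside the single quotes:)

  ovs-vsctl set Open_vSwitch . external_ids:odl_os_hostconfig_hostid="compute1"
  ovs-vsctl set Open_vSwitch . external_ids:odl_os_hostconfig_config_odl_l2='{
      "supported_vnic_types": [{
          "vnic_type": ["normal"],
          "vif_type": "vhostuser",
          "vif_details": {
              "has_datapath_type_netdev": true,
              "support_vhost_user": true,
              "port_prefix": "vhu",
              "vhostuser_socket_dir": "/var/run/openvswitch",
              "vhostuser_ovs_plug": true,
              "vhostuser_mode": "server",
              "vhostuser_socket": "/var/run/openvswitch/vhu$PORT_ID"}}],
      "allowed_network_types": ["vlan", "vxlan"],
      "bridge_mappings": {"physnet1": "br-dpdk1"}}'

(The neutron-odl-ovs-hostconfig command suggested earlier in the thread is meant to generate essentially this entry automatically, as far as I understand.)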
This blog - https://joshhershberg.wordpress.com/2017/03/07/opendaylight-netvirt-dpdk-plumbing-how-it-all-works-together/ - seems to suggest that the br-int should be created as a netdev but I don't think this is happening. Is there any config change I need to make to ML2 to make br-int into a netdev datapath? Thanks David ::DISCLAIMER:: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. E-mail transmission is not guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or may contain viruses in transmission. The e mail and its contents (with or without referred errors) shall therefore not attach any liability on the originator or HCL or its affiliates. Views or opinions, if any, presented in this email are solely those of the author and may not necessarily reflect the views or opinions of HCL or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of authorized representative of HCL is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. Before opening any email and/or attachments, please check them for viruses and other defects. -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From martialmichel at datamachines.io Tue Aug 14 20:18:16 2018 From: martialmichel at datamachines.io (Martial Michel) Date: Tue, 14 Aug 2018 16:18:16 -0400 Subject: [Openstack] [Scientific] Scientific SIG meeting Aug 15 1100UTC Message-ID: The next Scientific SIG meeting will an IRC on Meeting August 15th 2018: 2018-08-15 1100 UTC in channel #openstack-meeting Agenda is as follow: 1. PTG Topics 2. CFP: HPC Advisory Council Spain Conference - 21st September http://hpcadvisorycouncil.com/events/2018/spain-conference/ 3. Ceph day Berlin - November 12th (the day before the summit) https://ceph.com/cephdays/ceph-day-berlin/ 4. AOB All are welcome to attend https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_15th_2018 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bmc20 at kent.ac.uk Wed Aug 15 15:29:22 2018 From: bmc20 at kent.ac.uk (B.M.Canning) Date: Wed, 15 Aug 2018 15:29:22 +0000 Subject: [Openstack] [OpenStack][Keystone][new_service] Message-ID: Dear OpenStackers, Hello, I'm new to the list. I would like to know what support is available for creating a new OpenStack service that contains role-based access control components, such as a Policy Decision Point (PDP), inside the new service. I have come across oslo.policy in my research, is this what other OpenStack components use for their PEP, PDP, PAP and PIP? If so, what resources are available to help developers use this framework in their projects? 
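(For illustration, an oslo.policy rules file for a service like the one described here could look roughly like the sketch below; the action names are invented for this example, and each protected API handler would ask the library's Enforcer whether the named rule passes for the caller's credentials:)

  {
      "player": "role:player",
      "game:roll_dice": "rule:player",
      "game:move_token": "rule:player",
      "game:reset_board": "role:admin"
  }

(The "role:..." and "rule:..." strings are standard oslo.policy rule syntax, and the JSON file itself acts as the policy store that an operator edits.)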
Background: As part of my MSc degree in computer science, I am conducting a research project into the application of self-adaptation in authorisation infrastructures as a means of mitigation against insider threats towards cloud computing infrastructures. I'm using Keystone as a role-based access control system to protect access to a web-based game, and actions that a player can perform in the game, which represents computing resources, here snakes and ladders. Cheating in the game represents the malicious behaviour of an insider threat, to which the authorisation infrastructure responds by reducing/removing the user's privileges. The intention is to have the game represent an OpenStack service, like Swift. I am currently using the Queens release of Keystone and v3 of the API for both service-level and infrastructure-level policy decisions. Best wishes, Bruno Canning School of Computing, University of Kent From amy at demarco.com Wed Aug 15 20:00:34 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 15 Aug 2018 15:00:34 -0500 Subject: [Openstack] OpenStack Diversity and Inclusion Survey Message-ID: The Diversity and Inclusion WG is asking for your assistance. We have revised the Diversity Survey that was originally distributed to the Community in the Fall of 2015 and are looking to update our view of the OpenStack community and it's diversity. We are pleased to be working with members of the CHAOSS project who have signed confidentiality agreements in order to assist us in the following ways: 1) Assistance in analyzing the results 2) And feeding the results into the CHAOSS software and metrics development work so that we can help other Open Source projects Please take the time to fill out the survey and share it with others in the community. The survey can be found at: https://www.surveymonkey.com/r/OpenStackDiversity Thank you for assisting us in this important task! Amy Marrich (spotz) Diversity and Inclusion Working Group Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Thu Aug 16 21:17:56 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Thu, 16 Aug 2018 14:17:56 -0700 Subject: [Openstack] [OpenStack][Keystone][new_service] In-Reply-To: References: Message-ID: Hi Bruno! What is the new service you're looking to develop? I think the answer depends on your needs. Most openstack projects use the oslo policy library as a PDP to protect API access [1]. On the other hand, if you want dynamic rules and very fine-grained access control, you may also consider Openstack Congress [2] which offers a general and flexible rule framework. Either way, here is how it typically works in an openstack service: Policy rules are written and stored in the chosen policy framework. For oslo policy, this is typically the json file containing policy rules. In Congress, the policy store is managed by Congress service and accessed via Congress API. When an API is accessed, the service serving the API acts as the PEP. It consults the PDP to see whether something is allowed, and enforces that decision. For oslo policy, this is a library call [3]. For Congress, this is an API call to Congress service to query the result of rule evaluation [4][5]. For oslo policy, the main PAP is the json file containing the policy rules. For congress, the policies and rules are managed through the Congress API/GUI/client. Hope that helps. Happy to talk further! 
Eric OpenStack Congress contributor [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html# [2] https://docs.openstack.org/congress/latest/user/policy.html# [3] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#generic-checks [4] https://docs.openstack.org/congress/latest/user/api.html#policy-table-rows-v1-policies-policy-id-tables-table-id [5] https://github.com/openstack/python-congressclient/blob/master/congressclient/v1/client.py#L113 On Wed, Aug 15, 2018 at 8:29 AM, B.M.Canning wrote: > Dear OpenStackers, > > Hello, I'm new to the list. > > I would like to know what support is available for creating a new > OpenStack service that contains role-based access control components, > such as a Policy Decision Point (PDP), inside the new service. > > I have come across oslo.policy in my research, is this what other OpenStack > components use for their PEP, PDP, PAP and PIP? If so, what resources are > available to help developers use this framework in their projects? > > Background: > As part of my MSc degree in computer science, I am conducting a research > project into the application of self-adaptation in authorisation > infrastructures as a means of mitigation against insider threats towards > cloud computing infrastructures. I'm using Keystone as a role-based > access control system to protect access to a web-based game, and actions > that a player can perform in the game, which represents computing > resources, here snakes and ladders. Cheating in the game represents the > malicious behaviour of an insider threat, to which the authorisation > infrastructure responds by reducing/removing the user's privileges. The > intention is to have the game represent an OpenStack service, like > Swift. I am currently using the Queens release of Keystone and v3 of the > API for both service-level and infrastructure-level policy decisions. > > Best wishes, > Bruno Canning > > School of Computing, University of Kent > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From satish.txt at gmail.com Fri Aug 17 04:21:49 2018 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 17 Aug 2018 00:21:49 -0400 Subject: [Openstack] MySQL server has gone away Message-ID: I have deployed openstack-ansible and somehow i am frequently seeing following error, I have no packet loss in network and max_packet size is also 16MB in mysql does any one know about this issue? 
nova-placement-api.log ==> ostack-infra-03-nova-api-container-543a1e2a/nova-placement-api.log <== Aug 17 00:18:48 ostack-infra-03-nova-api-container-543a1e2a nova-placement-api: 2018-08-17 00:18:41.497 14880 ERROR oslo_db.sqlalchemy.engines [req-bfc9f182-7b91-4de5-8b99-f353fda4487f 8ec61b0530b94a699c4dcf164115f365 328fc75d4f944a64ad1b8699c02350ca - default default] Database connection was found disconnected; reconnecting: DBConnectionError: (pymysql.err.OperationalError) (2006, "MySQL server has gone away (error(104, 'Connection reset by peer'))") [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8) Aug 17 00:18:48 ostack-infra-03-nova-api-container-543a1e2a nova-placement-api: 2018-08-17 00:18:41.497 14880 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last): Aug 17 00:18:48 ostack-infra-03-nova-api-container-543a1e2a nova-placement-api: 2018-08-17 00:18:41.497 14880 ERROR oslo_db.sqlalchemy.engines File "/openstack/venvs/nova-17.0.8/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73, in _connect_ping_listener From armedguy at ludd.ltu.se Fri Aug 17 10:09:06 2018 From: armedguy at ludd.ltu.se (Johan Jatko) Date: Fri, 17 Aug 2018 12:09:06 +0200 Subject: [Openstack] Help with hardcoded project_id query_filter in neutron when not admin Message-ID: <682b74ca6bf72c58888dbc4c247dcc7a@ludd.ltu.se> Hi! I got a problem with neutron-server, and I am not sure if I should consider it a bug, a platform limitation, or a future improvement. My scenario is that I want to allocate floating ips from project "admin" to project "project1". On project admin, I have an external network, and a router connecting the external network "external" and a internal network "access-network" "access-network" is shared to "user1". When a user in "project1" (non-admin) tries to assign floating ip to an instance that is connected to "access-network", the returned error is "Router {ID} could not be found". Letting our projects create their own routers on the external network wastes a lot of IPs for us, so we would like to use a shared router. After debugging I have found out that this is due to a check in neutron/_model_query.py in query_with_hooks that checks if the current context is service or admin, and IF NOT, adds a query_filter that limits the query to the current project. This seems by design but I cannot for the life of me understand why the policy system cannot enforce this instead (or the rbac system?). For now I have decided to just patch it myself and push the change to my cluster, but it would be interesting to hear if there are any design decisions for it. Regards Johan Jatko Luleå Academic Computer Society From bmc20 at kent.ac.uk Fri Aug 17 16:34:06 2018 From: bmc20 at kent.ac.uk (B.M.Canning) Date: Fri, 17 Aug 2018 16:34:06 +0000 Subject: [Openstack] [OpenStack][Keystone][new_service] In-Reply-To: References: , Message-ID: Hi Eric, Thanks for getting back to me. I'm not looking to develop a real, useful, new service for OpenStack but develop a dummy service that plugs into OpenStack's authorisation infrastructure in a way that it looks like an OpenStack service which integrates with Keystone, like, say the Swift service. See picture attached, where the swift object represents a resource in the dummy service. The dummy service itself is a web-based game of snakes and ladders written in JavaScript/jQuery which makes Ajax calls to its PEP, written in PHP. 
The PHP code interacts with Keystone via the PHP cURL library and also logs all game actions in a MariaDB database. The game has been written in a way that it can be exploited by malicious users who already have access to the system, e.g players can travel up the snakes or simply ignore the snakes. The idea is that an autonomic controller is recording the user's actions, analysing them, planning a response (if necessary) and executing a change. This change could be inserting a policy line into policy.json or via the congress API. It could also be removing a role from a user which denies them further access to the resource in Keystone. The aim of this research is to produce an effective and efficient means of mitigating against insider threats directed at computing resources and information systems. This idea has been previously examined with LDAP serving as an authentication service and PERMIS serving as an authorisation service [1]. What is of interest here is porting the setup to an authorisation infrastructure that is relevant to cloud computing. I've had a look at congress, I have it running on my game server and it is registered as a service in Keystone after following [2] (except I installed the software from CentOS 7 "cloud" repo, "openstack-queens" [3] but at the moment, calls to the API are returning "Service Unavailable (HTTP 503)". This may be because there are no datasources configured. I started to write a driver for the dummy service [4] but as the game itself does not have a RESTful API, I'm not sure what approach to take here. I note that this distinction may favour a driver which is a subclass of PushedDataSourceDriver, rather than PollingDataSourceDriver. Failing that, I might pursue the Oslo policy library route, but again, I'm having difficulty in finding where to start. How might you suggest going about making a new, dummy service, such as that which I have described? Best wishes, Bruno [1] https://core.ac.uk/download/pdf/30710337.pdf - Chapter 6 [2] https://docs.openstack.org/congress/latest/install/index.html [3] http://www.mirrorservice.org/sites/mirror.centos.org/7/cloud/x86_64/openstack-queens [4] https://docs.openstack.org/congress/latest/user/cloudservices.html#drivers From: Eric K Sent: 16 August 2018 22:17 To: openstack at lists.openstack.org Cc: B.M.Canning Subject: Re: [Openstack] [OpenStack][Keystone][new_service]   Hi Bruno! What is the new service you're looking to develop? I think the answer depends on your needs. Most openstack projects use the oslo policy library as a PDP to protect API access [1]. On the other hand, if you want dynamic rules and very fine-grained access control, you may also consider Openstack Congress [2] which offers a general and flexible rule framework. Either way, here is how it typically works in an openstack service: Policy rules are written and stored in the chosen policy framework. For oslo policy, this is typically the json file containing policy rules. In Congress, the policy store is managed by Congress service and accessed via Congress API. When an API is accessed, the service serving the API acts as the PEP. It consults the PDP to see whether something is allowed, and enforces that decision. For oslo policy, this is a library call [3]. For Congress, this is an API call to Congress service to query the result of rule evaluation [4][5]. For oslo policy, the main PAP is the json file containing the policy rules. For congress, the policies and rules are managed through the Congress API/GUI/client. Hope that helps. 
Happy to talk further! Eric OpenStack Congress contributor [1] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html# [2] https://docs.openstack.org/congress/latest/user/policy.html# [3] https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#generic-checks [4] https://docs.openstack.org/congress/latest/user/api.html#policy-table-rows-v1-policies-policy-id-tables-table-id [5] https://github.com/openstack/python-congressclient/blob/master/congressclient/v1/client.py#L113 On Wed, Aug 15, 2018 at 8:29 AM, B.M.Canning wrote: > Dear OpenStackers, > > Hello, I'm new to the list. > > I would like to know what support is available for creating a new > OpenStack service that contains role-based access control components, > such as a Policy Decision Point (PDP), inside the new service. > > I have come across oslo.policy in my research, is this what other OpenStack > components use for their PEP, PDP, PAP and PIP? If so, what resources are > available to help developers use this framework in their projects? > > Background: > As part of my MSc degree in computer science, I am conducting a research > project into the application of self-adaptation in authorisation > infrastructures as a means of mitigation against insider threats towards > cloud computing infrastructures. I'm using Keystone as a role-based > access control system to protect access to a web-based game, and actions > that a player can perform in the game, which represents computing > resources, here snakes and ladders. Cheating in the game represents the > malicious behaviour of an insider threat, to which the authorisation > infrastructure responds by reducing/removing the user's privileges. The > intention is to have the game represent an OpenStack service, like > Swift. I am currently using the Queens release of Keystone and v3 of the > API for both service-level and infrastructure-level policy decisions. > > Best wishes, > Bruno Canning > > School of Computing, University of Kent > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to     : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- A non-text attachment was scrubbed... Name: os-keystone-swift-auth-arch.png Type: image/png Size: 128006 bytes Desc: os-keystone-swift-auth-arch.png URL: From samueldmq at gmail.com Fri Aug 17 17:56:56 2018 From: samueldmq at gmail.com (Samuel de Medeiros Queiroz) Date: Fri, 17 Aug 2018 14:56:56 -0300 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Hi all, As someone who cares for this cause and participated twice in this program as a mentor, I'd like to candidate as program coordinator. Victoria, thanks for all your lovely work. You are awesome! Best regards, Samuel On Thu, Aug 9, 2018 at 6:51 PM Kendall Nelson wrote: > You have done such amazing things with the program! We appreciate > everything you do :) Enjoy the little extra spare time. > > -Kendall (daiblo_rojo) > > > On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < > victoria at vmartinezdelacruz.com> wrote: > >> Hi all, >> >> I'm reaching you out to let you know that I'll be stepping down as >> coordinator for OpenStack next round. 
I had been contributing to this >> effort for several rounds now and I believe is a good moment for somebody >> else to take the lead. You all know how important is Outreachy to me and >> I'm grateful for all the amazing things I've done as part of the Outreachy >> program and all the great people I've met in the way. I plan to keep >> involved with the internships but leave the coordination tasks to somebody >> else. >> >> If you are interested in becoming an Outreachy coordinator, let me know >> and I can share my experience and provide some guidance. >> >> Thanks, >> >> Victoria >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victoria at vmartinezdelacruz.com Fri Aug 17 20:07:00 2018 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Fri, 17 Aug 2018 17:07:00 -0300 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Thanks everyone for your words! I really love the OpenStack community and I'm glad I could contribute back with this. Samuel has been a great mentor for Outreachy in several rounds and I believe he will excel as coordinator along with Mahati. Thanks for volunteer for this Samuel! All the best, Victoria 2018-08-17 14:56 GMT-03:00 Samuel de Medeiros Queiroz : > Hi all, > > As someone who cares for this cause and participated twice in this program > as a mentor, I'd like to candidate as program coordinator. > > Victoria, thanks for all your lovely work. You are awesome! > > Best regards, > Samuel > > > On Thu, Aug 9, 2018 at 6:51 PM Kendall Nelson > wrote: > >> You have done such amazing things with the program! We appreciate >> everything you do :) Enjoy the little extra spare time. >> >> -Kendall (daiblo_rojo) >> >> >> On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < >> victoria at vmartinezdelacruz.com> wrote: >> >>> Hi all, >>> >>> I'm reaching you out to let you know that I'll be stepping down as >>> coordinator for OpenStack next round. I had been contributing to this >>> effort for several rounds now and I believe is a good moment for somebody >>> else to take the lead. You all know how important is Outreachy to me and >>> I'm grateful for all the amazing things I've done as part of the Outreachy >>> program and all the great people I've met in the way. I plan to keep >>> involved with the internships but leave the coordination tasks to somebody >>> else. >>> >>> If you are interested in becoming an Outreachy coordinator, let me know >>> and I can share my experience and provide some guidance. 
>>> >>> Thanks, >>> >>> Victoria >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at benton.pub Fri Aug 17 20:34:47 2018 From: kevin at benton.pub (Kevin Benton) Date: Fri, 17 Aug 2018 16:34:47 -0400 Subject: [Openstack] Help with hardcoded project_id query_filter in neutron when not admin In-Reply-To: <682b74ca6bf72c58888dbc4c247dcc7a@ludd.ltu.se> References: <682b74ca6bf72c58888dbc4c247dcc7a@ludd.ltu.se> Message-ID: This isn't a direct answer to your question, but you can use service subnets to avoid the routers burning public IP addresses to make per-tenant routers feasible: https://docs.openstack.org/neutron/pike/admin/config-service-subnets.html As for enabling the shared router use case, I recommend filing a request for enhancement (RFE) bug because it seems reasonable to allow tenant floating IP allocations via a router attached to a subnet owned by the tenant. On Aug 17, 2018 06:17, "Johan Jatko" wrote: Hi! I got a problem with neutron-server, and I am not sure if I should consider it a bug, a platform limitation, or a future improvement. My scenario is that I want to allocate floating ips from project "admin" to project "project1". On project admin, I have an external network, and a router connecting the external network "external" and a internal network "access-network" "access-network" is shared to "user1". When a user in "project1" (non-admin) tries to assign floating ip to an instance that is connected to "access-network", the returned error is "Router {ID} could not be found". Letting our projects create their own routers on the external network wastes a lot of IPs for us, so we would like to use a shared router. After debugging I have found out that this is due to a check in neutron/_model_query.py in query_with_hooks that checks if the current context is service or admin, and IF NOT, adds a query_filter that limits the query to the current project. This seems by design but I cannot for the life of me understand why the policy system cannot enforce this instead (or the rbac system?). For now I have decided to just patch it myself and push the change to my cluster, but it would be interesting to hear if there are any design decisions for it. Regards Johan Jatko Luleå Academic Computer Society _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ynisha11 at gmail.com Sat Aug 18 04:54:30 2018 From: ynisha11 at gmail.com (Nisha Yadav) Date: Sat, 18 Aug 2018 10:24:30 +0530 Subject: [Openstack] [openstack-dev] Stepping down as coordinator for the Outreachy internships In-Reply-To: References: Message-ID: Hey all, Victoria you are an inspiration! Going through your blog when I embarked on the OpenStack journey gave me a lot of motivation. It was a pleasure working with you. Thanks for all your support and hard work. Good luck Samuel, great to hear. Cheers to Outreachy and OpenStack! Best regards, Nisha On Sat, Aug 18, 2018 at 1:37 AM, Victoria Martínez de la Cruz < victoria at vmartinezdelacruz.com> wrote: > Thanks everyone for your words! > > I really love the OpenStack community and I'm glad I could contribute back > with this. > > Samuel has been a great mentor for Outreachy in several rounds and I > believe he will excel as coordinator along with Mahati. Thanks for > volunteer for this Samuel! > > All the best, > > Victoria > > 2018-08-17 14:56 GMT-03:00 Samuel de Medeiros Queiroz >: > >> Hi all, >> >> As someone who cares for this cause and participated twice in this >> program as a mentor, I'd like to candidate as program coordinator. >> >> Victoria, thanks for all your lovely work. You are awesome! >> >> Best regards, >> Samuel >> >> >> On Thu, Aug 9, 2018 at 6:51 PM Kendall Nelson >> wrote: >> >>> You have done such amazing things with the program! We appreciate >>> everything you do :) Enjoy the little extra spare time. >>> >>> -Kendall (daiblo_rojo) >>> >>> >>> On Tue, Aug 7, 2018 at 4:48 PM Victoria Martínez de la Cruz < >>> victoria at vmartinezdelacruz.com> wrote: >>> >>>> Hi all, >>>> >>>> I'm reaching you out to let you know that I'll be stepping down as >>>> coordinator for OpenStack next round. I had been contributing to this >>>> effort for several rounds now and I believe is a good moment for somebody >>>> else to take the lead. You all know how important is Outreachy to me and >>>> I'm grateful for all the amazing things I've done as part of the Outreachy >>>> program and all the great people I've met in the way. I plan to keep >>>> involved with the internships but leave the coordination tasks to somebody >>>> else. >>>> >>>> If you are interested in becoming an Outreachy coordinator, let me know >>>> and I can share my experience and provide some guidance. 
>>>> >>>> Thanks, >>>> >>>> Victoria >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jayamiact at gmail.com Sat Aug 18 10:59:02 2018 From: jayamiact at gmail.com (Hhhtyh ByNhb) Date: Sat, 18 Aug 2018 17:59:02 +0700 Subject: [Openstack] [kolla-ansible] unable to install kolla-ansible Message-ID: Hi All, I tried to install openstack kolla by following kolla documentation: https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html When doing this command: "kolla-ansible -i ./multinode bootstrap-servers", I observed following error: Error message is "fatal: [control01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ipv4'\n\nThe error appears to have been in '/usr/local/share/kolla-ansible/ansible/roles/baremetal/tasks/pre-install.yml': line 19, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generate /etc/hosts for all of the nodes\n ^ here\n"} Command failed ansible-playbook -i ./multinode -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla -e action=bootstrap-servers /usr/local/share/kolla-ansible/ansible/kolla-host.yml any suggestion? BR//jaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Sat Aug 18 11:10:48 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Sat, 18 Aug 2018 13:10:48 +0200 Subject: [Openstack] [openstack-dev] [kolla-ansible] unable to install kolla-ansible In-Reply-To: References: Message-ID: Hi, the interface name must be the same for all nodes including localhost (deployment host). If the iface names are not the same along all the hosts will have to: - Comment network_interface (or the interface var which name differs) - Set the variable with an appropriate value at inventory file on each host. 
In example: [compute] node1 network_interface=eth1 node2 network_interface=eno1 Regards On Sat, Aug 18, 2018, 12:59 PM Hhhtyh ByNhb wrote: > Hi All, > I tried to install openstack kolla by following kolla documentation: > https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html > > When doing this command: "kolla-ansible -i ./multinode bootstrap-servers", > I observed following error: > Error message is "fatal: [control01]: FAILED! => {"msg": "The task > includes an option with an undefined variable. The error was: 'dict object' > has no attribute 'ipv4'\n\nThe error appears to have been in > '/usr/local/share/kolla-ansible/ansible/roles/baremetal/tasks/pre-install.yml': > line 19, column 3, but may\nbe elsewhere in the file depending on the exact > syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generate > /etc/hosts for all of the nodes\n ^ here\n"} > Command failed ansible-playbook -i ./multinode -e @/etc/kolla/globals.yml > -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla -e > action=bootstrap-servers > /usr/local/share/kolla-ansible/ansible/kolla-host.yml > > any suggestion? > > BR//jaya > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jayamiact at gmail.com Sat Aug 18 17:47:45 2018 From: jayamiact at gmail.com (Hhhtyh ByNhb) Date: Sun, 19 Aug 2018 00:47:45 +0700 Subject: [Openstack] [openstack-dev] [kolla-ansible] unable to install kolla-ansible In-Reply-To: References: Message-ID: Hi Eduardo, Thanks for your suggestion which is very helpful. Indeed, network_interface is not same in localhost and other nodes. Furthermore, the reason it failed is "network_interface" must have configured IPv4 address and up. This is not mentioned *explicitly *in the quick start documentation. To help someone like me (if any in the future), i've created bug report in the following url https://bugs.launchpad.net/kolla-ansible/+bug/1787750 Thanks again! Regards, J On Sat, Aug 18, 2018 at 6:25 PM Eduardo Gonzalez wrote: > Hi, the interface name must be the same for all nodes including localhost > (deployment host). If the iface names are not the same along all the hosts > will have to: > > - Comment network_interface (or the interface var which name differs) > - Set the variable with an appropriate value at inventory file on each > host. In example: > [compute] > node1 network_interface=eth1 > node2 network_interface=eno1 > > Regards > > On Sat, Aug 18, 2018, 12:59 PM Hhhtyh ByNhb wrote: > >> Hi All, >> I tried to install openstack kolla by following kolla documentation: >> https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html >> >> When doing this command: "kolla-ansible -i ./multinode >> bootstrap-servers", >> I observed following error: >> Error message is "fatal: [control01]: FAILED! => {"msg": "The task >> includes an option with an undefined variable. 
The error was: 'dict object' >> has no attribute 'ipv4'\n\nThe error appears to have been in >> '/usr/local/share/kolla-ansible/ansible/roles/baremetal/tasks/pre-install.yml': >> line 19, column 3, but may\nbe elsewhere in the file depending on the exact >> syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generate >> /etc/hosts for all of the nodes\n ^ here\n"} >> Command failed ansible-playbook -i ./multinode -e @/etc/kolla/globals.yml >> -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla -e >> action=bootstrap-servers >> /usr/local/share/kolla-ansible/ansible/kolla-host.yml >> >> any suggestion? >> >> BR//jaya >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.budai at gmail.com Mon Aug 20 07:17:48 2018 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Mon, 20 Aug 2018 10:17:48 +0300 Subject: [Openstack] [openstack-ansible] configuration file override Message-ID: <380a3446-0f95-c63c-9224-230fa85c77f7@gmail.com> Dear all, Openstack-ansible (OSA) allows us to override parameters in the configuration files as described here: https://docs.openstack.org/project-deploy-guide/openstack-ansible/draft/app-advanced-config-override.html there is the following statement: "You can also apply overrides on a per-host basis with the following configuration in the /etc/openstack_deploy/openstack_user_config.yml file: compute_hosts: 900089-compute001: ip: 192.0.2.10 host_vars: nova_nova_conf_overrides: DEFAULT: remove_unused_original_minimum_age_seconds: 43200 libvirt: cpu_mode: host-model disk_cachemodes: file=directsync,block=none database: idle_timeout: 300 max_pool_size: 10 " In this example the override is part of a compute host definition and there it is in the host_vars section (compute_hosts -> 900089-compute001 -> host_vars -> override). Is it possible to apply such an override for all the compute hosts by not using the hostname? For instance something like: " compute_hosts: nova_nova_conf_overrides: DEFAULT: remove_unused_original_minimum_age_seconds: 43200 " would this be correct? Thank you, Laszlo From ekcs.openstack at gmail.com Tue Aug 21 02:11:22 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 20 Aug 2018 19:11:22 -0700 Subject: [Openstack] [OpenStack][Keystone][new_service] In-Reply-To: References: Message-ID: On Fri, Aug 17, 2018 at 9:34 AM, B.M.Canning wrote: > Hi Eric, > > Thanks for getting back to me. > > I'm not looking to develop a real, useful, new service for OpenStack but > develop a dummy service that plugs into OpenStack's authorisation > infrastructure in a way that it looks like an OpenStack service which > integrates with Keystone, like, say the Swift service. See picture > attached, where the swift object represents a resource in the dummy > service. > > The dummy service itself is a web-based game of snakes and ladders > written in JavaScript/jQuery which makes Ajax calls to its PEP, written > in PHP. 
The PHP code interacts with Keystone via the PHP cURL library > and also logs all game actions in a MariaDB database. > > The game has been written in a way that it can be exploited by malicious > users who already have access to the system, e.g players can travel up > the snakes or simply ignore the snakes. The idea is that an autonomic > controller is recording the user's actions, analysing them, planning a > response (if necessary) and executing a change. This change could be > inserting a policy line into policy.json or via the congress API. It > could also be removing a role from a user which denies them further > access to the resource in Keystone. > > The aim of this research is to produce an effective and efficient means > of mitigating against insider threats directed at computing resources > and information systems. This idea has been previously examined with > LDAP serving as an authentication service and PERMIS serving as an > authorisation service [1]. What is of interest here is porting the setup > to an authorisation infrastructure that is relevant to cloud computing. > > I've had a look at congress, I have it running on my game server and it > is registered as a service in Keystone after following [2] (except I > installed the software from CentOS 7 "cloud" repo, "openstack-queens" > [3] but at the moment, calls to the API are returning "Service > Unavailable (HTTP 503)". This may be because there are no datasources > configured. Ah I think the issue is that there is no rabbitmq server running. We should probably make that clear in docs. https://www.rabbitmq.com/install-rpm.html > I started to write a driver for the dummy service [4] but as > the game itself does not have a RESTful API, I'm not sure what approach > to take here. I note that this distinction may favour a driver which is > a subclass of PushedDataSourceDriver, rather than > PollingDataSourceDriver. I think there is no need to make a driver. Rather, your service can simply make API calls to Congress the same way it calls Keystone. > Failing that, I might pursue the Oslo policy > library route, but again, I'm having difficulty in finding where to > start. How might you suggest going about making a new, dummy service, > such as that which I have described? oslo policy is the stardard used by most openstack services. So if your goal is to demonstrate doing something using the standard framework, then that's the way to go. Though since it's a python library you'd need some kind of bridge between your PHP web service and oslo policy. unfortunately it's not the most obvious how to get started. Here's a simple example (from congress code): step 1: define enforcement function using oslo policy library https://github.com/openstack/congress/blob/master/congress/common/policy.py#L74 step 2: call the enforcement function to check for valid authorization before taking action https://github.com/openstack/congress/blob/master/congress/api/webservice.py#L417 More api reference here: https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#oslo_policy.policy.Enforcer.enforce On the other hand, if you don't want to involve python, you can use directly make API calls to Congress service using PHP. From satish.txt at gmail.com Wed Aug 22 04:27:08 2018 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 22 Aug 2018 00:27:08 -0400 Subject: [Openstack] live_migration only using 8 Mb speed Message-ID: Folks, I am running openstack queens and hypervisor is kvm, my live migration working fine. 
but somehow it stuck to 8 Mb network speed and taking long time to migrate 1G instance. I have 10Gbps network and i have tried to copy 10G file between two compute node and it did copy in 2 minute, so i am not seeing any network issue also. it seem live_migration has some bandwidth limit, I have tried following option in nova.conf but it didn't work live_migration_bandwidth = 500 My nova.conf look like following: live_migration_uri = "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" live_migration_tunnelled = True live_migration_bandwidth = 500 hw_disk_discard = unmap disk_cachemodes = network=writeback From prometheanfire at gentoo.org Wed Aug 22 04:42:52 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 21 Aug 2018 23:42:52 -0500 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: Message-ID: <20180822044252.cylns5dflirhhotr@gentoo.org> On 18-08-22 00:27:08, Satish Patel wrote: > Folks, > > I am running openstack queens and hypervisor is kvm, my live migration > working fine. but somehow it stuck to 8 Mb network speed and taking > long time to migrate 1G instance. I have 10Gbps network and i have > tried to copy 10G file between two compute node and it did copy in 2 > minute, so i am not seeing any network issue also. > > it seem live_migration has some bandwidth limit, I have tried > following option in nova.conf but it didn't work > > live_migration_bandwidth = 500 > > My nova.conf look like following: > > live_migration_uri = > "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > live_migration_tunnelled = True > live_migration_bandwidth = 500 > hw_disk_discard = unmap > disk_cachemodes = network=writeback > Do you have a this patch (and a couple of patches up to it)? https://bugs.launchpad.net/nova/+bug/1786346 -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From satish.txt at gmail.com Wed Aug 22 05:02:53 2018 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 22 Aug 2018 01:02:53 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: <20180822044252.cylns5dflirhhotr@gentoo.org> References: <20180822044252.cylns5dflirhhotr@gentoo.org> Message-ID: Matthew, Thanks for reply, Look like i don't have this patch https://review.openstack.org/#/c/591761/ So i have to patch following 3 file manually? nova/tests/unit/virt/libvirt/test_driver.py213 nova/tests/unit/virt/test_virt_drivers.py2 nova/virt/libvirt/driver.py On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode wrote: > On 18-08-22 00:27:08, Satish Patel wrote: >> Folks, >> >> I am running openstack queens and hypervisor is kvm, my live migration >> working fine. but somehow it stuck to 8 Mb network speed and taking >> long time to migrate 1G instance. I have 10Gbps network and i have >> tried to copy 10G file between two compute node and it did copy in 2 >> minute, so i am not seeing any network issue also. 
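For anyone chasing the same symptom before upgrading, it can help to confirm what cap libvirt is actually applying while a migration is running. A rough sketch, run on the source compute node (the domain name below is a placeholder, and the exact output fields vary by libvirt version):

  virsh list --all                          # find the instance's libvirt domain name
  virsh domjobinfo instance-0000000a        # stats for the running migration job
  virsh migrate-getspeed instance-0000000a  # configured migration bandwidth cap, in MiB/s

If the reported cap stays tiny no matter what live_migration_bandwidth is set to in nova.conf, that is consistent with the bug Matthew points at above rather than with a network problem.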
>> >> it seem live_migration has some bandwidth limit, I have tried >> following option in nova.conf but it didn't work >> >> live_migration_bandwidth = 500 >> >> My nova.conf look like following: >> >> live_migration_uri = >> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" >> live_migration_tunnelled = True >> live_migration_bandwidth = 500 >> hw_disk_discard = unmap >> disk_cachemodes = network=writeback >> > > Do you have a this patch (and a couple of patches up to it)? > https://bugs.launchpad.net/nova/+bug/1786346 > > -- > Matthew Thode (prometheanfire) > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From prometheanfire at gentoo.org Wed Aug 22 05:06:09 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 22 Aug 2018 00:06:09 -0500 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822044252.cylns5dflirhhotr@gentoo.org> Message-ID: <20180822050609.zdhrraftfmimmhvc@gentoo.org> On 18-08-22 01:02:53, Satish Patel wrote: > Matthew, > > Thanks for reply, Look like i don't have this patch > https://review.openstack.org/#/c/591761/ > > So i have to patch following 3 file manually? > > nova/tests/unit/virt/libvirt/test_driver.py213 > nova/tests/unit/virt/test_virt_drivers.py2 > nova/virt/libvirt/driver.py > > > On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > wrote: > > On 18-08-22 00:27:08, Satish Patel wrote: > >> Folks, > >> > >> I am running openstack queens and hypervisor is kvm, my live migration > >> working fine. but somehow it stuck to 8 Mb network speed and taking > >> long time to migrate 1G instance. I have 10Gbps network and i have > >> tried to copy 10G file between two compute node and it did copy in 2 > >> minute, so i am not seeing any network issue also. > >> > >> it seem live_migration has some bandwidth limit, I have tried > >> following option in nova.conf but it didn't work > >> > >> live_migration_bandwidth = 500 > >> > >> My nova.conf look like following: > >> > >> live_migration_uri = > >> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > >> live_migration_tunnelled = True > >> live_migration_bandwidth = 500 > >> hw_disk_discard = unmap > >> disk_cachemodes = network=writeback > >> > > > > Do you have a this patch (and a couple of patches up to it)? > > https://bugs.launchpad.net/nova/+bug/1786346 > > I don't know if that would cleanly apply (there are other patches that changed those functions within the last month and a half. It'd be best to upgrade and not do just one patch (which would be an untested process). -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From satish.txt at gmail.com Wed Aug 22 05:57:17 2018 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 22 Aug 2018 01:57:17 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: <20180822050609.zdhrraftfmimmhvc@gentoo.org> References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> Message-ID: What I need to upgrade, any specific component? 
I have deployed openstack-ansible Sent from my iPhone > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > >> On 18-08-22 01:02:53, Satish Patel wrote: >> Matthew, >> >> Thanks for reply, Look like i don't have this patch >> https://review.openstack.org/#/c/591761/ >> >> So i have to patch following 3 file manually? >> >> nova/tests/unit/virt/libvirt/test_driver.py213 >> nova/tests/unit/virt/test_virt_drivers.py2 >> nova/virt/libvirt/driver.py >> >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode >> wrote: >>> On 18-08-22 00:27:08, Satish Patel wrote: >>>> Folks, >>>> >>>> I am running openstack queens and hypervisor is kvm, my live migration >>>> working fine. but somehow it stuck to 8 Mb network speed and taking >>>> long time to migrate 1G instance. I have 10Gbps network and i have >>>> tried to copy 10G file between two compute node and it did copy in 2 >>>> minute, so i am not seeing any network issue also. >>>> >>>> it seem live_migration has some bandwidth limit, I have tried >>>> following option in nova.conf but it didn't work >>>> >>>> live_migration_bandwidth = 500 >>>> >>>> My nova.conf look like following: >>>> >>>> live_migration_uri = >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" >>>> live_migration_tunnelled = True >>>> live_migration_bandwidth = 500 >>>> hw_disk_discard = unmap >>>> disk_cachemodes = network=writeback >>>> >>> >>> Do you have a this patch (and a couple of patches up to it)? >>> https://bugs.launchpad.net/nova/+bug/1786346 >>> > > I don't know if that would cleanly apply (there are other patches that > changed those functions within the last month and a half. It'd be best > to upgrade and not do just one patch (which would be an untested > process). > > -- > Matthew Thode (prometheanfire) From prometheanfire at gentoo.org Wed Aug 22 06:02:44 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 22 Aug 2018 01:02:44 -0500 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> Message-ID: <20180822060244.5fxrobrtthuow5ug@gentoo.org> On 18-08-22 01:57:17, Satish Patel wrote: > What I need to upgrade, any specific component? > > I have deployed openstack-ansible > > Sent from my iPhone > > > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > > > >> On 18-08-22 01:02:53, Satish Patel wrote: > >> Matthew, > >> > >> Thanks for reply, Look like i don't have this patch > >> https://review.openstack.org/#/c/591761/ > >> > >> So i have to patch following 3 file manually? > >> > >> nova/tests/unit/virt/libvirt/test_driver.py213 > >> nova/tests/unit/virt/test_virt_drivers.py2 > >> nova/virt/libvirt/driver.py > >> > >> > >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > >> wrote: > >>> On 18-08-22 00:27:08, Satish Patel wrote: > >>>> Folks, > >>>> > >>>> I am running openstack queens and hypervisor is kvm, my live migration > >>>> working fine. but somehow it stuck to 8 Mb network speed and taking > >>>> long time to migrate 1G instance. I have 10Gbps network and i have > >>>> tried to copy 10G file between two compute node and it did copy in 2 > >>>> minute, so i am not seeing any network issue also. 
> >>>> > >>>> it seem live_migration has some bandwidth limit, I have tried > >>>> following option in nova.conf but it didn't work > >>>> > >>>> live_migration_bandwidth = 500 > >>>> > >>>> My nova.conf look like following: > >>>> > >>>> live_migration_uri = > >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > >>>> live_migration_tunnelled = True > >>>> live_migration_bandwidth = 500 > >>>> hw_disk_discard = unmap > >>>> disk_cachemodes = network=writeback > >>>> > >>> > >>> Do you have a this patch (and a couple of patches up to it)? > >>> https://bugs.launchpad.net/nova/+bug/1786346 > >>> > > > > I don't know if that would cleanly apply (there are other patches that > > changed those functions within the last month and a half. It'd be best > > to upgrade and not do just one patch (which would be an untested > > process). > > The sha for nova has not been updated yet (next update is 24-48 hours away iirc), once that's done you can use the head of stable/queens from OSA and run a inter-series upgrade (but the minimal thing to do would be to run repo-build and os-nova plays). I'm not sure when that sha bump will be tagged in a full release if you would rather wait on that. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From satish.txt at gmail.com Wed Aug 22 12:35:09 2018 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 22 Aug 2018 08:35:09 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: <20180822060244.5fxrobrtthuow5ug@gentoo.org> References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> Message-ID: Currently in stable/queens i am seeing this sha https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode wrote: > On 18-08-22 01:57:17, Satish Patel wrote: >> What I need to upgrade, any specific component? >> >> I have deployed openstack-ansible >> >> Sent from my iPhone >> >> > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: >> > >> >> On 18-08-22 01:02:53, Satish Patel wrote: >> >> Matthew, >> >> >> >> Thanks for reply, Look like i don't have this patch >> >> https://review.openstack.org/#/c/591761/ >> >> >> >> So i have to patch following 3 file manually? >> >> >> >> nova/tests/unit/virt/libvirt/test_driver.py213 >> >> nova/tests/unit/virt/test_virt_drivers.py2 >> >> nova/virt/libvirt/driver.py >> >> >> >> >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode >> >> wrote: >> >>> On 18-08-22 00:27:08, Satish Patel wrote: >> >>>> Folks, >> >>>> >> >>>> I am running openstack queens and hypervisor is kvm, my live migration >> >>>> working fine. but somehow it stuck to 8 Mb network speed and taking >> >>>> long time to migrate 1G instance. I have 10Gbps network and i have >> >>>> tried to copy 10G file between two compute node and it did copy in 2 >> >>>> minute, so i am not seeing any network issue also. 
>> >>>> >> >>>> it seem live_migration has some bandwidth limit, I have tried >> >>>> following option in nova.conf but it didn't work >> >>>> >> >>>> live_migration_bandwidth = 500 >> >>>> >> >>>> My nova.conf look like following: >> >>>> >> >>>> live_migration_uri = >> >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" >> >>>> live_migration_tunnelled = True >> >>>> live_migration_bandwidth = 500 >> >>>> hw_disk_discard = unmap >> >>>> disk_cachemodes = network=writeback >> >>>> >> >>> >> >>> Do you have a this patch (and a couple of patches up to it)? >> >>> https://bugs.launchpad.net/nova/+bug/1786346 >> >>> >> > >> > I don't know if that would cleanly apply (there are other patches that >> > changed those functions within the last month and a half. It'd be best >> > to upgrade and not do just one patch (which would be an untested >> > process). >> > > > The sha for nova has not been updated yet (next update is 24-48 hours > away iirc), once that's done you can use the head of stable/queens from > OSA and run a inter-series upgrade (but the minimal thing to do would be > to run repo-build and os-nova plays). I'm not sure when that sha bump > will be tagged in a full release if you would rather wait on that. > > -- > Matthew Thode (prometheanfire) From prometheanfire at gentoo.org Wed Aug 22 14:24:51 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 22 Aug 2018 09:24:51 -0500 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> Message-ID: <20180822142451.rni6ivioqyuyyzge@gentoo.org> On 18-08-22 08:35:09, Satish Patel wrote: > Currently in stable/queens i am seeing this sha > https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > > On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > wrote: > > On 18-08-22 01:57:17, Satish Patel wrote: > >> What I need to upgrade, any specific component? > >> > >> I have deployed openstack-ansible > >> > >> Sent from my iPhone > >> > >> > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > >> > > >> >> On 18-08-22 01:02:53, Satish Patel wrote: > >> >> Matthew, > >> >> > >> >> Thanks for reply, Look like i don't have this patch > >> >> https://review.openstack.org/#/c/591761/ > >> >> > >> >> So i have to patch following 3 file manually? > >> >> > >> >> nova/tests/unit/virt/libvirt/test_driver.py213 > >> >> nova/tests/unit/virt/test_virt_drivers.py2 > >> >> nova/virt/libvirt/driver.py > >> >> > >> >> > >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > >> >> wrote: > >> >>> On 18-08-22 00:27:08, Satish Patel wrote: > >> >>>> Folks, > >> >>>> > >> >>>> I am running openstack queens and hypervisor is kvm, my live migration > >> >>>> working fine. but somehow it stuck to 8 Mb network speed and taking > >> >>>> long time to migrate 1G instance. I have 10Gbps network and i have > >> >>>> tried to copy 10G file between two compute node and it did copy in 2 > >> >>>> minute, so i am not seeing any network issue also. 
> >> >>>> > >> >>>> it seem live_migration has some bandwidth limit, I have tried > >> >>>> following option in nova.conf but it didn't work > >> >>>> > >> >>>> live_migration_bandwidth = 500 > >> >>>> > >> >>>> My nova.conf look like following: > >> >>>> > >> >>>> live_migration_uri = > >> >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > >> >>>> live_migration_tunnelled = True > >> >>>> live_migration_bandwidth = 500 > >> >>>> hw_disk_discard = unmap > >> >>>> disk_cachemodes = network=writeback > >> >>>> > >> >>> > >> >>> Do you have a this patch (and a couple of patches up to it)? > >> >>> https://bugs.launchpad.net/nova/+bug/1786346 > >> >>> > >> > > >> > I don't know if that would cleanly apply (there are other patches that > >> > changed those functions within the last month and a half. It'd be best > >> > to upgrade and not do just one patch (which would be an untested > >> > process). > >> > > > > > The sha for nova has not been updated yet (next update is 24-48 hours > > away iirc), once that's done you can use the head of stable/queens from > > OSA and run a inter-series upgrade (but the minimal thing to do would be > > to run repo-build and os-nova plays). I'm not sure when that sha bump > > will be tagged in a full release if you would rather wait on that. it's this sha that needs updating. https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From satish.txt at gmail.com Wed Aug 22 14:33:11 2018 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 22 Aug 2018 10:33:11 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: <20180822142451.rni6ivioqyuyyzge@gentoo.org> References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> <20180822142451.rni6ivioqyuyyzge@gentoo.org> Message-ID: Thanks Matthew, Can i put that sha in my OSA at playbooks/defaults/repo_packages/openstack_services.yml by hand and run playbooks [repo/nova] ? On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode wrote: > On 18-08-22 08:35:09, Satish Patel wrote: >> Currently in stable/queens i am seeing this sha >> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 >> >> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode >> wrote: >> > On 18-08-22 01:57:17, Satish Patel wrote: >> >> What I need to upgrade, any specific component? >> >> >> >> I have deployed openstack-ansible >> >> >> >> Sent from my iPhone >> >> >> >> > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: >> >> > >> >> >> On 18-08-22 01:02:53, Satish Patel wrote: >> >> >> Matthew, >> >> >> >> >> >> Thanks for reply, Look like i don't have this patch >> >> >> https://review.openstack.org/#/c/591761/ >> >> >> >> >> >> So i have to patch following 3 file manually? >> >> >> >> >> >> nova/tests/unit/virt/libvirt/test_driver.py213 >> >> >> nova/tests/unit/virt/test_virt_drivers.py2 >> >> >> nova/virt/libvirt/driver.py >> >> >> >> >> >> >> >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode >> >> >> wrote: >> >> >>> On 18-08-22 00:27:08, Satish Patel wrote: >> >> >>>> Folks, >> >> >>>> >> >> >>>> I am running openstack queens and hypervisor is kvm, my live migration >> >> >>>> working fine. 
but somehow it stuck to 8 Mb network speed and taking >> >> >>>> long time to migrate 1G instance. I have 10Gbps network and i have >> >> >>>> tried to copy 10G file between two compute node and it did copy in 2 >> >> >>>> minute, so i am not seeing any network issue also. >> >> >>>> >> >> >>>> it seem live_migration has some bandwidth limit, I have tried >> >> >>>> following option in nova.conf but it didn't work >> >> >>>> >> >> >>>> live_migration_bandwidth = 500 >> >> >>>> >> >> >>>> My nova.conf look like following: >> >> >>>> >> >> >>>> live_migration_uri = >> >> >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" >> >> >>>> live_migration_tunnelled = True >> >> >>>> live_migration_bandwidth = 500 >> >> >>>> hw_disk_discard = unmap >> >> >>>> disk_cachemodes = network=writeback >> >> >>>> >> >> >>> >> >> >>> Do you have a this patch (and a couple of patches up to it)? >> >> >>> https://bugs.launchpad.net/nova/+bug/1786346 >> >> >>> >> >> > >> >> > I don't know if that would cleanly apply (there are other patches that >> >> > changed those functions within the last month and a half. It'd be best >> >> > to upgrade and not do just one patch (which would be an untested >> >> > process). >> >> > >> > >> > The sha for nova has not been updated yet (next update is 24-48 hours >> > away iirc), once that's done you can use the head of stable/queens from >> > OSA and run a inter-series upgrade (but the minimal thing to do would be >> > to run repo-build and os-nova plays). I'm not sure when that sha bump >> > will be tagged in a full release if you would rather wait on that. > > it's this sha that needs updating. > https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > > -- > Matthew Thode (prometheanfire) From prometheanfire at gentoo.org Wed Aug 22 14:46:26 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 22 Aug 2018 09:46:26 -0500 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> <20180822142451.rni6ivioqyuyyzge@gentoo.org> Message-ID: <20180822144626.nvsodno3vhjhlhmd@gentoo.org> On 18-08-22 10:33:11, Satish Patel wrote: > Thanks Matthew, > > Can i put that sha in my OSA at > playbooks/defaults/repo_packages/openstack_services.yml by hand and > run playbooks [repo/nova] ? > > On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > wrote: > > On 18-08-22 08:35:09, Satish Patel wrote: > >> Currently in stable/queens i am seeing this sha > >> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > >> > >> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > >> wrote: > >> > On 18-08-22 01:57:17, Satish Patel wrote: > >> >> What I need to upgrade, any specific component? > >> >> > >> >> I have deployed openstack-ansible > >> >> > >> >> Sent from my iPhone > >> >> > >> >> > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > >> >> > > >> >> >> On 18-08-22 01:02:53, Satish Patel wrote: > >> >> >> Matthew, > >> >> >> > >> >> >> Thanks for reply, Look like i don't have this patch > >> >> >> https://review.openstack.org/#/c/591761/ > >> >> >> > >> >> >> So i have to patch following 3 file manually? 
> >> >> >> > >> >> >> nova/tests/unit/virt/libvirt/test_driver.py213 > >> >> >> nova/tests/unit/virt/test_virt_drivers.py2 > >> >> >> nova/virt/libvirt/driver.py > >> >> >> > >> >> >> > >> >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > >> >> >> wrote: > >> >> >>> On 18-08-22 00:27:08, Satish Patel wrote: > >> >> >>>> Folks, > >> >> >>>> > >> >> >>>> I am running openstack queens and hypervisor is kvm, my live migration > >> >> >>>> working fine. but somehow it stuck to 8 Mb network speed and taking > >> >> >>>> long time to migrate 1G instance. I have 10Gbps network and i have > >> >> >>>> tried to copy 10G file between two compute node and it did copy in 2 > >> >> >>>> minute, so i am not seeing any network issue also. > >> >> >>>> > >> >> >>>> it seem live_migration has some bandwidth limit, I have tried > >> >> >>>> following option in nova.conf but it didn't work > >> >> >>>> > >> >> >>>> live_migration_bandwidth = 500 > >> >> >>>> > >> >> >>>> My nova.conf look like following: > >> >> >>>> > >> >> >>>> live_migration_uri = > >> >> >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > >> >> >>>> live_migration_tunnelled = True > >> >> >>>> live_migration_bandwidth = 500 > >> >> >>>> hw_disk_discard = unmap > >> >> >>>> disk_cachemodes = network=writeback > >> >> >>>> > >> >> >>> > >> >> >>> Do you have a this patch (and a couple of patches up to it)? > >> >> >>> https://bugs.launchpad.net/nova/+bug/1786346 > >> >> >>> > >> >> > > >> >> > I don't know if that would cleanly apply (there are other patches that > >> >> > changed those functions within the last month and a half. It'd be best > >> >> > to upgrade and not do just one patch (which would be an untested > >> >> > process). > >> >> > > >> > > >> > The sha for nova has not been updated yet (next update is 24-48 hours > >> > away iirc), once that's done you can use the head of stable/queens from > >> > OSA and run a inter-series upgrade (but the minimal thing to do would be > >> > to run repo-build and os-nova plays). I'm not sure when that sha bump > >> > will be tagged in a full release if you would rather wait on that. > > > > it's this sha that needs updating. > > https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > > I'm not sure how you are doing overrides, but set the following as an override, then rerun the repo-build playbook (to rebuild the nova venv) then rerun the nova playbook to install it. nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From satish.txt at gmail.com Wed Aug 22 14:58:48 2018 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 22 Aug 2018 10:58:48 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: <20180822144626.nvsodno3vhjhlhmd@gentoo.org> References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> Message-ID: Matthew, I have two option looks like, correct me if i am wrong. 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to 17.0.8-23-g0aff517 and upgrade full OSA 2. 
Just do override as you said "nova_git_install_branch:" in my /etc/openstack_deploy/user_variables.yml file, and run playbooks. I think option [2] is safe to just touch specific component, also am i correct about override in /etc/openstack_deploy/user_variables.yml file? You mentioned "nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode wrote: > On 18-08-22 10:33:11, Satish Patel wrote: >> Thanks Matthew, >> >> Can i put that sha in my OSA at >> playbooks/defaults/repo_packages/openstack_services.yml by hand and >> run playbooks [repo/nova] ? >> >> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode >> wrote: >> > On 18-08-22 08:35:09, Satish Patel wrote: >> >> Currently in stable/queens i am seeing this sha >> >> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 >> >> >> >> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode >> >> wrote: >> >> > On 18-08-22 01:57:17, Satish Patel wrote: >> >> >> What I need to upgrade, any specific component? >> >> >> >> >> >> I have deployed openstack-ansible >> >> >> >> >> >> Sent from my iPhone >> >> >> >> >> >> > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: >> >> >> > >> >> >> >> On 18-08-22 01:02:53, Satish Patel wrote: >> >> >> >> Matthew, >> >> >> >> >> >> >> >> Thanks for reply, Look like i don't have this patch >> >> >> >> https://review.openstack.org/#/c/591761/ >> >> >> >> >> >> >> >> So i have to patch following 3 file manually? >> >> >> >> >> >> >> >> nova/tests/unit/virt/libvirt/test_driver.py213 >> >> >> >> nova/tests/unit/virt/test_virt_drivers.py2 >> >> >> >> nova/virt/libvirt/driver.py >> >> >> >> >> >> >> >> >> >> >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode >> >> >> >> wrote: >> >> >> >>> On 18-08-22 00:27:08, Satish Patel wrote: >> >> >> >>>> Folks, >> >> >> >>>> >> >> >> >>>> I am running openstack queens and hypervisor is kvm, my live migration >> >> >> >>>> working fine. but somehow it stuck to 8 Mb network speed and taking >> >> >> >>>> long time to migrate 1G instance. I have 10Gbps network and i have >> >> >> >>>> tried to copy 10G file between two compute node and it did copy in 2 >> >> >> >>>> minute, so i am not seeing any network issue also. >> >> >> >>>> >> >> >> >>>> it seem live_migration has some bandwidth limit, I have tried >> >> >> >>>> following option in nova.conf but it didn't work >> >> >> >>>> >> >> >> >>>> live_migration_bandwidth = 500 >> >> >> >>>> >> >> >> >>>> My nova.conf look like following: >> >> >> >>>> >> >> >> >>>> live_migration_uri = >> >> >> >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" >> >> >> >>>> live_migration_tunnelled = True >> >> >> >>>> live_migration_bandwidth = 500 >> >> >> >>>> hw_disk_discard = unmap >> >> >> >>>> disk_cachemodes = network=writeback >> >> >> >>>> >> >> >> >>> >> >> >> >>> Do you have a this patch (and a couple of patches up to it)? >> >> >> >>> https://bugs.launchpad.net/nova/+bug/1786346 >> >> >> >>> >> >> >> > >> >> >> > I don't know if that would cleanly apply (there are other patches that >> >> >> > changed those functions within the last month and a half. It'd be best >> >> >> > to upgrade and not do just one patch (which would be an untested >> >> >> > process). 
>> >> >> > >> >> > >> >> > The sha for nova has not been updated yet (next update is 24-48 hours >> >> > away iirc), once that's done you can use the head of stable/queens from >> >> > OSA and run a inter-series upgrade (but the minimal thing to do would be >> >> > to run repo-build and os-nova plays). I'm not sure when that sha bump >> >> > will be tagged in a full release if you would rather wait on that. >> > >> > it's this sha that needs updating. >> > https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 >> > > > I'm not sure how you are doing overrides, but set the following as an > override, then rerun the repo-build playbook (to rebuild the nova venv) > then rerun the nova playbook to install it. > > nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > > -- > Matthew Thode (prometheanfire) From prometheanfire at gentoo.org Wed Aug 22 15:28:05 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 22 Aug 2018 10:28:05 -0500 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> Message-ID: <20180822152805.c4gl55yz4jibtqre@gentoo.org> On 18-08-22 10:58:48, Satish Patel wrote: > Matthew, > > I have two option looks like, correct me if i am wrong. > > 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to > 17.0.8-23-g0aff517 and upgrade full OSA > > 2. Just do override as you said "nova_git_install_branch:" in my > /etc/openstack_deploy/user_variables.yml file, and run playbooks. > > > I think option [2] is safe to just touch specific component, also am i > correct about override in /etc/openstack_deploy/user_variables.yml > file? > > You mentioned "nova_git_install_branch: > dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be > "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? > > > > On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode > wrote: > > On 18-08-22 10:33:11, Satish Patel wrote: > >> Thanks Matthew, > >> > >> Can i put that sha in my OSA at > >> playbooks/defaults/repo_packages/openstack_services.yml by hand and > >> run playbooks [repo/nova] ? > >> > >> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > >> wrote: > >> > On 18-08-22 08:35:09, Satish Patel wrote: > >> >> Currently in stable/queens i am seeing this sha > >> >> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > >> >> > >> >> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > >> >> wrote: > >> >> > On 18-08-22 01:57:17, Satish Patel wrote: > >> >> >> What I need to upgrade, any specific component? > >> >> >> > >> >> >> I have deployed openstack-ansible > >> >> >> > >> >> >> Sent from my iPhone > >> >> >> > >> >> >> > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > >> >> >> > > >> >> >> >> On 18-08-22 01:02:53, Satish Patel wrote: > >> >> >> >> Matthew, > >> >> >> >> > >> >> >> >> Thanks for reply, Look like i don't have this patch > >> >> >> >> https://review.openstack.org/#/c/591761/ > >> >> >> >> > >> >> >> >> So i have to patch following 3 file manually? 
> >> >> >> >> > >> >> >> >> nova/tests/unit/virt/libvirt/test_driver.py213 > >> >> >> >> nova/tests/unit/virt/test_virt_drivers.py2 > >> >> >> >> nova/virt/libvirt/driver.py > >> >> >> >> > >> >> >> >> > >> >> >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > >> >> >> >> wrote: > >> >> >> >>> On 18-08-22 00:27:08, Satish Patel wrote: > >> >> >> >>>> Folks, > >> >> >> >>>> > >> >> >> >>>> I am running openstack queens and hypervisor is kvm, my live migration > >> >> >> >>>> working fine. but somehow it stuck to 8 Mb network speed and taking > >> >> >> >>>> long time to migrate 1G instance. I have 10Gbps network and i have > >> >> >> >>>> tried to copy 10G file between two compute node and it did copy in 2 > >> >> >> >>>> minute, so i am not seeing any network issue also. > >> >> >> >>>> > >> >> >> >>>> it seem live_migration has some bandwidth limit, I have tried > >> >> >> >>>> following option in nova.conf but it didn't work > >> >> >> >>>> > >> >> >> >>>> live_migration_bandwidth = 500 > >> >> >> >>>> > >> >> >> >>>> My nova.conf look like following: > >> >> >> >>>> > >> >> >> >>>> live_migration_uri = > >> >> >> >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > >> >> >> >>>> live_migration_tunnelled = True > >> >> >> >>>> live_migration_bandwidth = 500 > >> >> >> >>>> hw_disk_discard = unmap > >> >> >> >>>> disk_cachemodes = network=writeback > >> >> >> >>>> > >> >> >> >>> > >> >> >> >>> Do you have a this patch (and a couple of patches up to it)? > >> >> >> >>> https://bugs.launchpad.net/nova/+bug/1786346 > >> >> >> >>> > >> >> >> > > >> >> >> > I don't know if that would cleanly apply (there are other patches that > >> >> >> > changed those functions within the last month and a half. It'd be best > >> >> >> > to upgrade and not do just one patch (which would be an untested > >> >> >> > process). > >> >> >> > > >> >> > > >> >> > The sha for nova has not been updated yet (next update is 24-48 hours > >> >> > away iirc), once that's done you can use the head of stable/queens from > >> >> > OSA and run a inter-series upgrade (but the minimal thing to do would be > >> >> > to run repo-build and os-nova plays). I'm not sure when that sha bump > >> >> > will be tagged in a full release if you would rather wait on that. > >> > > >> > it's this sha that needs updating. > >> > https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > >> > > > > > I'm not sure how you are doing overrides, but set the following as an > > override, then rerun the repo-build playbook (to rebuild the nova venv) > > then rerun the nova playbook to install it. > > > > nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > > The sha I gave was head of the queens branch of openstack/nova. It's also the commit in that branch that containst the fix. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From bokke at google.com Wed Aug 22 15:53:26 2018 From: bokke at google.com (David van der Bokke) Date: Wed, 22 Aug 2018 08:53:26 -0700 Subject: [Openstack] git-review tagging schedule Message-ID: Hi, We are curious about when the next version of git-review will be tagged so that we can create a debian package release for it. 
Specifically we want to pick up the change in https://git.openstack.org/cgit/openstack-infra/git-review/commit/?id=694f532ca803882d7b3446c31f5fc690e9669042 before refs/publish is completely removed. Thanks, David van der Bokke -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Aug 22 16:26:14 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 22 Aug 2018 16:26:14 +0000 Subject: [Openstack] git-review tagging schedule In-Reply-To: References: Message-ID: <20180822162614.sm5nedavyslanmug@yuggoth.org> On 2018-08-22 08:53:26 -0700 (-0700), David van der Bokke wrote: > We are curious about when the next version of git-review will be tagged so > that we can create a debian package release for it. Specifically we want > to pick up the change in > https://git.openstack.org/cgit/openstack-infra/git-review/commit/?id=694f532ca803882d7b3446c31f5fc690e9669042 > before refs/publish is completely removed. I've redirected this to the openstack-infra at lists.openstack.org mailing list where it's more on topic; please continue corresponding there instead and drop openstack at lists.openstack.org from any further replies. Have the Gerrit maintainers indicated what version will drop support for the refs/publish path? Or is the urgency more about silencing the deprecation warning? Looking at the currently merged commits since 1.26.0, I see some which are feature additions so we're likely talking about tagging this as 1.27.0 rather than 1.26.1 (depending on whether dropping the vestigial -c command line option counts as a reason to make it 2.0.0 instead, but I feel like it's probably not warranted). Are there other outstanding changes which are important to get into 1.27.0? At a minimum I think we'll want to get https://review.openstack.org/593670 and its parent change merged so that the release note about -c going away will be included in the release. Anything else? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sterdnotshaken at gmail.com Wed Aug 22 17:56:05 2018 From: sterdnotshaken at gmail.com (Sterdnot Shaken) Date: Wed, 22 Aug 2018 11:56:05 -0600 Subject: [Openstack] Lose 30+ seconds of packets to instance during Live-Migration Message-ID: Version: Pike OVS version: 2.9 VM-A (On Compute A) ----- (On Compute B) VM-B What is it in Neutron that might delay vxlan tunnel construction on the destination compute node during live-migration? As the VM is live-migrated, I'm watch the flows and the vxlan tunnel interfaces on br-tun on the Compute node where the VM is moving too and they don't appear until 30+ seconds into the migration. I'm wondering if this is the cause of packet loss during this migration that's around ~35 seconds or so. The strange thing is, if I start a continuous ping from VM B on compute B to VM A on compute A and then initiate a live-migration of VM A to move to Compute B, I only lose ~1 second of traffic, which leads me to suspect this issue is related to said tunnels or flows on br-tun... Any help would be greatly appreciated! Thanks! Steve -------------- next part -------------- An HTML attachment was scrubbed... 
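One way to narrow down where the 30+ seconds go is to watch br-tun on the destination node while the migration runs. A rough sketch, assuming a stock ml2/OVS layout (bridge name br-tun, and table 22 being the flood-to-tunnel table in the neutron OVS agent pipeline):

  # on the destination compute node, repeat while the migration is in flight
  ovs-vsctl list-ports br-tun           # tunnel ports to remote peers (vxlan-... entries)
  ovs-ofctl dump-flows br-tun table=22  # flood/BUM entries for the network's local vlan

With l2_population enabled, these entries are driven by the l2pop notifications tied to the port's binding on the new host, so comparing timestamps from the above against the nova migration events can show whether the delay is on the neutron side.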
URL: From sterdnotshaken at gmail.com Wed Aug 22 22:20:40 2018 From: sterdnotshaken at gmail.com (Sterdnot Shaken) Date: Wed, 22 Aug 2018 16:20:40 -0600 Subject: [Openstack] Lose 30+ seconds of packets to instance during Live-Migration In-Reply-To: References: Message-ID: After turning off L2 population on the compute and network nodes, the packet loss during live migration diminished from 30+ to about 3 seconds... Does anyone have an explanation for this? I'd really like to be able to use L2 pop and ARP responder if I can, but not at the cost of that large of a hit when I live migrate. Thanks in advance! Steve On Wed, Aug 22, 2018 at 11:56 AM Sterdnot Shaken wrote: > Version: Pike > OVS version: 2.9 > > VM-A (On Compute A) ----- (On Compute B) VM-B > > What is it in Neutron that might delay vxlan tunnel construction on the > destination compute node during live-migration? As the VM is live-migrated, > I'm watch the flows and the vxlan tunnel interfaces on br-tun on the > Compute node where the VM is moving too and they don't appear until 30+ > seconds into the migration. I'm wondering if this is the cause of packet > loss during this migration that's around ~35 seconds or so. > > The strange thing is, if I start a continuous ping from VM B on compute B > to VM A on compute A and then initiate a live-migration of VM A to move to > Compute B, I only lose ~1 second of traffic, which leads me to suspect this > issue is related to said tunnels or flows on br-tun... > > Any help would be greatly appreciated! > > Thanks! > > Steve > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Aug 23 03:04:57 2018 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 22 Aug 2018 23:04:57 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: <20180822152805.c4gl55yz4jibtqre@gentoo.org> References: <20180822044252.cylns5dflirhhotr@gentoo.org> <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> Message-ID: Mathew, I haven't applied any patch yet but i am noticing in cluster some host migrating VM super fast and some host migrating very slow. Is this known behavior? On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode wrote: > On 18-08-22 10:58:48, Satish Patel wrote: >> Matthew, >> >> I have two option looks like, correct me if i am wrong. >> >> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to >> 17.0.8-23-g0aff517 and upgrade full OSA >> >> 2. Just do override as you said "nova_git_install_branch:" in my >> /etc/openstack_deploy/user_variables.yml file, and run playbooks. >> >> >> I think option [2] is safe to just touch specific component, also am i >> correct about override in /etc/openstack_deploy/user_variables.yml >> file? >> >> You mentioned "nova_git_install_branch: >> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be >> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? >> >> >> >> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode >> wrote: >> > On 18-08-22 10:33:11, Satish Patel wrote: >> >> Thanks Matthew, >> >> >> >> Can i put that sha in my OSA at >> >> playbooks/defaults/repo_packages/openstack_services.yml by hand and >> >> run playbooks [repo/nova] ? 
>> >> >> >> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode >> >> wrote: >> >> > On 18-08-22 08:35:09, Satish Patel wrote: >> >> >> Currently in stable/queens i am seeing this sha >> >> >> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 >> >> >> >> >> >> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode >> >> >> wrote: >> >> >> > On 18-08-22 01:57:17, Satish Patel wrote: >> >> >> >> What I need to upgrade, any specific component? >> >> >> >> >> >> >> >> I have deployed openstack-ansible >> >> >> >> >> >> >> >> Sent from my iPhone >> >> >> >> >> >> >> >> > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: >> >> >> >> > >> >> >> >> >> On 18-08-22 01:02:53, Satish Patel wrote: >> >> >> >> >> Matthew, >> >> >> >> >> >> >> >> >> >> Thanks for reply, Look like i don't have this patch >> >> >> >> >> https://review.openstack.org/#/c/591761/ >> >> >> >> >> >> >> >> >> >> So i have to patch following 3 file manually? >> >> >> >> >> >> >> >> >> >> nova/tests/unit/virt/libvirt/test_driver.py213 >> >> >> >> >> nova/tests/unit/virt/test_virt_drivers.py2 >> >> >> >> >> nova/virt/libvirt/driver.py >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode >> >> >> >> >> wrote: >> >> >> >> >>> On 18-08-22 00:27:08, Satish Patel wrote: >> >> >> >> >>>> Folks, >> >> >> >> >>>> >> >> >> >> >>>> I am running openstack queens and hypervisor is kvm, my live migration >> >> >> >> >>>> working fine. but somehow it stuck to 8 Mb network speed and taking >> >> >> >> >>>> long time to migrate 1G instance. I have 10Gbps network and i have >> >> >> >> >>>> tried to copy 10G file between two compute node and it did copy in 2 >> >> >> >> >>>> minute, so i am not seeing any network issue also. >> >> >> >> >>>> >> >> >> >> >>>> it seem live_migration has some bandwidth limit, I have tried >> >> >> >> >>>> following option in nova.conf but it didn't work >> >> >> >> >>>> >> >> >> >> >>>> live_migration_bandwidth = 500 >> >> >> >> >>>> >> >> >> >> >>>> My nova.conf look like following: >> >> >> >> >>>> >> >> >> >> >>>> live_migration_uri = >> >> >> >> >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" >> >> >> >> >>>> live_migration_tunnelled = True >> >> >> >> >>>> live_migration_bandwidth = 500 >> >> >> >> >>>> hw_disk_discard = unmap >> >> >> >> >>>> disk_cachemodes = network=writeback >> >> >> >> >>>> >> >> >> >> >>> >> >> >> >> >>> Do you have a this patch (and a couple of patches up to it)? >> >> >> >> >>> https://bugs.launchpad.net/nova/+bug/1786346 >> >> >> >> >>> >> >> >> >> > >> >> >> >> > I don't know if that would cleanly apply (there are other patches that >> >> >> >> > changed those functions within the last month and a half. It'd be best >> >> >> >> > to upgrade and not do just one patch (which would be an untested >> >> >> >> > process). >> >> >> >> > >> >> >> > >> >> >> > The sha for nova has not been updated yet (next update is 24-48 hours >> >> >> > away iirc), once that's done you can use the head of stable/queens from >> >> >> > OSA and run a inter-series upgrade (but the minimal thing to do would be >> >> >> > to run repo-build and os-nova plays). I'm not sure when that sha bump >> >> >> > will be tagged in a full release if you would rather wait on that. >> >> > >> >> > it's this sha that needs updating. 
>> >> > https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 >> >> > >> > >> > I'm not sure how you are doing overrides, but set the following as an >> > override, then rerun the repo-build playbook (to rebuild the nova venv) >> > then rerun the nova playbook to install it. >> > >> > nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b >> > > > The sha I gave was head of the queens branch of openstack/nova. It's > also the commit in that branch that containst the fix. > > -- > Matthew Thode (prometheanfire) From prometheanfire at gentoo.org Thu Aug 23 06:30:32 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 23 Aug 2018 01:30:32 -0500 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> Message-ID: <20180823063032.jcf2xsfpatws7y3a@gentoo.org> On 18-08-22 23:04:57, Satish Patel wrote: > Mathew, > > I haven't applied any patch yet but i am noticing in cluster some host > migrating VM super fast and some host migrating very slow. Is this > known behavior? > > On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode > wrote: > > On 18-08-22 10:58:48, Satish Patel wrote: > >> Matthew, > >> > >> I have two option looks like, correct me if i am wrong. > >> > >> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to > >> 17.0.8-23-g0aff517 and upgrade full OSA > >> > >> 2. Just do override as you said "nova_git_install_branch:" in my > >> /etc/openstack_deploy/user_variables.yml file, and run playbooks. > >> > >> > >> I think option [2] is safe to just touch specific component, also am i > >> correct about override in /etc/openstack_deploy/user_variables.yml > >> file? > >> > >> You mentioned "nova_git_install_branch: > >> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be > >> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? > >> > >> > >> > >> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode > >> wrote: > >> > On 18-08-22 10:33:11, Satish Patel wrote: > >> >> Thanks Matthew, > >> >> > >> >> Can i put that sha in my OSA at > >> >> playbooks/defaults/repo_packages/openstack_services.yml by hand and > >> >> run playbooks [repo/nova] ? > >> >> > >> >> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > >> >> wrote: > >> >> > On 18-08-22 08:35:09, Satish Patel wrote: > >> >> >> Currently in stable/queens i am seeing this sha > >> >> >> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > >> >> >> > >> >> >> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > >> >> >> wrote: > >> >> >> > On 18-08-22 01:57:17, Satish Patel wrote: > >> >> >> >> What I need to upgrade, any specific component? > >> >> >> >> > >> >> >> >> I have deployed openstack-ansible > >> >> >> >> > >> >> >> >> Sent from my iPhone > >> >> >> >> > >> >> >> >> > On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > >> >> >> >> > > >> >> >> >> >> On 18-08-22 01:02:53, Satish Patel wrote: > >> >> >> >> >> Matthew, > >> >> >> >> >> > >> >> >> >> >> Thanks for reply, Look like i don't have this patch > >> >> >> >> >> https://review.openstack.org/#/c/591761/ > >> >> >> >> >> > >> >> >> >> >> So i have to patch following 3 file manually? 
> >> >> >> >> >> > >> >> >> >> >> nova/tests/unit/virt/libvirt/test_driver.py213 > >> >> >> >> >> nova/tests/unit/virt/test_virt_drivers.py2 > >> >> >> >> >> nova/virt/libvirt/driver.py > >> >> >> >> >> > >> >> >> >> >> > >> >> >> >> >> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > >> >> >> >> >> wrote: > >> >> >> >> >>> On 18-08-22 00:27:08, Satish Patel wrote: > >> >> >> >> >>>> Folks, > >> >> >> >> >>>> > >> >> >> >> >>>> I am running openstack queens and hypervisor is kvm, my live migration > >> >> >> >> >>>> working fine. but somehow it stuck to 8 Mb network speed and taking > >> >> >> >> >>>> long time to migrate 1G instance. I have 10Gbps network and i have > >> >> >> >> >>>> tried to copy 10G file between two compute node and it did copy in 2 > >> >> >> >> >>>> minute, so i am not seeing any network issue also. > >> >> >> >> >>>> > >> >> >> >> >>>> it seem live_migration has some bandwidth limit, I have tried > >> >> >> >> >>>> following option in nova.conf but it didn't work > >> >> >> >> >>>> > >> >> >> >> >>>> live_migration_bandwidth = 500 > >> >> >> >> >>>> > >> >> >> >> >>>> My nova.conf look like following: > >> >> >> >> >>>> > >> >> >> >> >>>> live_migration_uri = > >> >> >> >> >>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > >> >> >> >> >>>> live_migration_tunnelled = True > >> >> >> >> >>>> live_migration_bandwidth = 500 > >> >> >> >> >>>> hw_disk_discard = unmap > >> >> >> >> >>>> disk_cachemodes = network=writeback > >> >> >> >> >>>> > >> >> >> >> >>> > >> >> >> >> >>> Do you have a this patch (and a couple of patches up to it)? > >> >> >> >> >>> https://bugs.launchpad.net/nova/+bug/1786346 > >> >> >> >> >>> > >> >> >> >> > > >> >> >> >> > I don't know if that would cleanly apply (there are other patches that > >> >> >> >> > changed those functions within the last month and a half. It'd be best > >> >> >> >> > to upgrade and not do just one patch (which would be an untested > >> >> >> >> > process). > >> >> >> >> > > >> >> >> > > >> >> >> > The sha for nova has not been updated yet (next update is 24-48 hours > >> >> >> > away iirc), once that's done you can use the head of stable/queens from > >> >> >> > OSA and run a inter-series upgrade (but the minimal thing to do would be > >> >> >> > to run repo-build and os-nova plays). I'm not sure when that sha bump > >> >> >> > will be tagged in a full release if you would rather wait on that. > >> >> > > >> >> > it's this sha that needs updating. > >> >> > https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > >> >> > > >> > > >> > I'm not sure how you are doing overrides, but set the following as an > >> > override, then rerun the repo-build playbook (to rebuild the nova venv) > >> > then rerun the nova playbook to install it. > >> > > >> > nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > >> > > > > > The sha I gave was head of the queens branch of openstack/nova. It's > > also the commit in that branch that containst the fix. > > I don't think that is known behavior, different memory pressure causing the difference maybe? -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From satish.txt at gmail.com Thu Aug 23 11:18:24 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 23 Aug 2018 07:18:24 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: <20180823063032.jcf2xsfpatws7y3a@gentoo.org> References: <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> <20180823063032.jcf2xsfpatws7y3a@gentoo.org> Message-ID: I'm testing this in lab, no load yet Sent from my iPhone > On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote: > >> On 18-08-22 23:04:57, Satish Patel wrote: >> Mathew, >> >> I haven't applied any patch yet but i am noticing in cluster some host >> migrating VM super fast and some host migrating very slow. Is this >> known behavior? >> >> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode >> wrote: >>> On 18-08-22 10:58:48, Satish Patel wrote: >>>> Matthew, >>>> >>>> I have two option looks like, correct me if i am wrong. >>>> >>>> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to >>>> 17.0.8-23-g0aff517 and upgrade full OSA >>>> >>>> 2. Just do override as you said "nova_git_install_branch:" in my >>>> /etc/openstack_deploy/user_variables.yml file, and run playbooks. >>>> >>>> >>>> I think option [2] is safe to just touch specific component, also am i >>>> correct about override in /etc/openstack_deploy/user_variables.yml >>>> file? >>>> >>>> You mentioned "nova_git_install_branch: >>>> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be >>>> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? >>>> >>>> >>>> >>>> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode >>>> wrote: >>>>> On 18-08-22 10:33:11, Satish Patel wrote: >>>>>> Thanks Matthew, >>>>>> >>>>>> Can i put that sha in my OSA at >>>>>> playbooks/defaults/repo_packages/openstack_services.yml by hand and >>>>>> run playbooks [repo/nova] ? >>>>>> >>>>>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode >>>>>> wrote: >>>>>>> On 18-08-22 08:35:09, Satish Patel wrote: >>>>>>>> Currently in stable/queens i am seeing this sha >>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 >>>>>>>> >>>>>>>> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode >>>>>>>> wrote: >>>>>>>>> On 18-08-22 01:57:17, Satish Patel wrote: >>>>>>>>>> What I need to upgrade, any specific component? >>>>>>>>>> >>>>>>>>>> I have deployed openstack-ansible >>>>>>>>>> >>>>>>>>>> Sent from my iPhone >>>>>>>>>> >>>>>>>>>>>> On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: >>>>>>>>>>>> >>>>>>>>>>>> On 18-08-22 01:02:53, Satish Patel wrote: >>>>>>>>>>>> Matthew, >>>>>>>>>>>> >>>>>>>>>>>> Thanks for reply, Look like i don't have this patch >>>>>>>>>>>> https://review.openstack.org/#/c/591761/ >>>>>>>>>>>> >>>>>>>>>>>> So i have to patch following 3 file manually? >>>>>>>>>>>> >>>>>>>>>>>> nova/tests/unit/virt/libvirt/test_driver.py213 >>>>>>>>>>>> nova/tests/unit/virt/test_virt_drivers.py2 >>>>>>>>>>>> nova/virt/libvirt/driver.py >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode >>>>>>>>>>>> wrote: >>>>>>>>>>>>> On 18-08-22 00:27:08, Satish Patel wrote: >>>>>>>>>>>>>> Folks, >>>>>>>>>>>>>> >>>>>>>>>>>>>> I am running openstack queens and hypervisor is kvm, my live migration >>>>>>>>>>>>>> working fine. 
but somehow it stuck to 8 Mb network speed and taking >>>>>>>>>>>>>> long time to migrate 1G instance. I have 10Gbps network and i have >>>>>>>>>>>>>> tried to copy 10G file between two compute node and it did copy in 2 >>>>>>>>>>>>>> minute, so i am not seeing any network issue also. >>>>>>>>>>>>>> >>>>>>>>>>>>>> it seem live_migration has some bandwidth limit, I have tried >>>>>>>>>>>>>> following option in nova.conf but it didn't work >>>>>>>>>>>>>> >>>>>>>>>>>>>> live_migration_bandwidth = 500 >>>>>>>>>>>>>> >>>>>>>>>>>>>> My nova.conf look like following: >>>>>>>>>>>>>> >>>>>>>>>>>>>> live_migration_uri = >>>>>>>>>>>>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" >>>>>>>>>>>>>> live_migration_tunnelled = True >>>>>>>>>>>>>> live_migration_bandwidth = 500 >>>>>>>>>>>>>> hw_disk_discard = unmap >>>>>>>>>>>>>> disk_cachemodes = network=writeback >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Do you have a this patch (and a couple of patches up to it)? >>>>>>>>>>>>> https://bugs.launchpad.net/nova/+bug/1786346 >>>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I don't know if that would cleanly apply (there are other patches that >>>>>>>>>>> changed those functions within the last month and a half. It'd be best >>>>>>>>>>> to upgrade and not do just one patch (which would be an untested >>>>>>>>>>> process). >>>>>>>>>>> >>>>>>>>> >>>>>>>>> The sha for nova has not been updated yet (next update is 24-48 hours >>>>>>>>> away iirc), once that's done you can use the head of stable/queens from >>>>>>>>> OSA and run a inter-series upgrade (but the minimal thing to do would be >>>>>>>>> to run repo-build and os-nova plays). I'm not sure when that sha bump >>>>>>>>> will be tagged in a full release if you would rather wait on that. >>>>>>> >>>>>>> it's this sha that needs updating. >>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 >>>>>>> >>>>> >>>>> I'm not sure how you are doing overrides, but set the following as an >>>>> override, then rerun the repo-build playbook (to rebuild the nova venv) >>>>> then rerun the nova playbook to install it. >>>>> >>>>> nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b >>>>> >>> >>> The sha I gave was head of the queens branch of openstack/nova. It's >>> also the commit in that branch that containst the fix. >>> > > I don't think that is known behavior, different memory pressure causing > the difference maybe? > > -- > Matthew Thode (prometheanfire) From correajl at gmail.com Thu Aug 23 16:53:03 2018 From: correajl at gmail.com (Jorge Luiz Correa) Date: Thu, 23 Aug 2018 13:53:03 -0300 Subject: [Openstack] Help with ipv6 self-service and ip6tables rule on mangle chain Message-ID: Hi all I'm deploying a Queens on Ubuntu 18.04 with one controller, one network controller e for now one compute node. I'm using ML2 with linuxbridge mechanism driver and a self-service type of network. This is is a dual stack environment (v4 and v6). IPv4 is working fine, NATs oks and packets flowing. With IPv6 I'm having a problem. Packets from external networks to a project network are stopping on qrouter namespace firewall. I've a project with one network, one v4 subnet and one v6 subnet. Adressing are all ok, virtual machines are getting their IPs and can ping the network gateway. However, from external to project network, using ipv6, the packets stop in a DROP rule inside de qrouter namespace. 
The ip6tables path is: mangle prerouting -> neutron-l3-agent-PREROUTING -> neutron-l3-agent-scope -> here we have a MARK rule: pkts bytes target prot opt in out source destination 3 296 MARK all qr-7f2944e7-cc * ::/0 ::/0 MARK xset 0x4000000/0xffff0000 qr interface is the internal network interface of the project (subnet gateway). So, packets from this interface are marked. But, the returning is the problem. The packets doesn't returns. I've rules from the nexthop firewall and packets arrive on the external bridge (network node). But, when they arrive on external interface of the qrouter namespace, they are filtered. Inside qrouter namespace this is the rule: ip netns exec qrouter-5689783d-52c0-4d2f-bef5-99b111f8ef5f ip6tables -t mangle -L -n -v ... Chain neutron-l3-agent-scope (1 references) pkts bytes target prot opt in out source destination 0 0 DROP all * qr-7f2944e7-cc ::/0 ::/0 mark match ! 0x4000000/0xffff0000 ... If I create the following rule everything works great: ip netns exec qrouter-5689783d-52c0-4d2f-bef5-99b111f8ef5f ip6tables -t mangle -I neutron-l3-agent-scope -i qg-b6757bfe-c1 -j MARK --set-xmark 0x4000000/0xffff0000 where qg is the external interface of virtual router. So, if I mark packets from external interface on mangle, they are not filtered. Is this normal? I've to manually add a rule to do that? How to use the "external_ingress_mark" option on l3-agent.ini ? Can I use it to mark packets using a configuration parameter instead of manually inserted ip6tables rule? Thanks a lot! - JLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Aug 23 18:33:44 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 23 Aug 2018 14:33:44 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822050609.zdhrraftfmimmhvc@gentoo.org> <20180822060244.5fxrobrtthuow5ug@gentoo.org> <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> <20180823063032.jcf2xsfpatws7y3a@gentoo.org> Message-ID: Matt, I am going to override following in user_variable.yml file in that case do i need to run ./bootstrap-ansible.sh script? ## Nova service nova_git_repo: https://git.openstack.org/openstack/nova nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # HEAD of "stable/queens" as of 06.08.2018 nova_git_project_group: nova_all On Thu, Aug 23, 2018 at 7:18 AM, Satish Patel wrote: > I'm testing this in lab, no load yet > > Sent from my iPhone > >> On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote: >> >>> On 18-08-22 23:04:57, Satish Patel wrote: >>> Mathew, >>> >>> I haven't applied any patch yet but i am noticing in cluster some host >>> migrating VM super fast and some host migrating very slow. Is this >>> known behavior? >>> >>> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode >>> wrote: >>>> On 18-08-22 10:58:48, Satish Patel wrote: >>>>> Matthew, >>>>> >>>>> I have two option looks like, correct me if i am wrong. >>>>> >>>>> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to >>>>> 17.0.8-23-g0aff517 and upgrade full OSA >>>>> >>>>> 2. Just do override as you said "nova_git_install_branch:" in my >>>>> /etc/openstack_deploy/user_variables.yml file, and run playbooks. >>>>> >>>>> >>>>> I think option [2] is safe to just touch specific component, also am i >>>>> correct about override in /etc/openstack_deploy/user_variables.yml >>>>> file? 
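On the ip6tables/mangle question above: the 0x4000000/0xffff0000 value is neutron's address-scope mark, and the DROP in neutron-l3-agent-scope is the router refusing to forward between subnets whose address scopes differ (or are unset) -- with IPv6 there is no SNAT to fall back on, so the traffic is simply dropped. Rather than re-adding the MARK rule by hand (the l3 agent does not know about that rule and may not preserve it), one option is to put the external and the project IPv6 subnets into the same address scope via subnet pools, so the agent installs the matching mark for the qg- interface itself. A rough sketch with placeholder names and documentation prefixes -- note the external subnet normally has to be recreated from a pool in the same scope:

    openstack address scope create --share --ip-version 6 scope-v6
    openstack subnet pool create --address-scope scope-v6 \
        --pool-prefix 2001:db8:ee::/48 pool-v6-ext
    openstack subnet pool create --address-scope scope-v6 \
        --pool-prefix 2001:db8:aa::/48 pool-v6-proj
    openstack subnet create --network project-net --subnet-pool pool-v6-proj \
        --ip-version 6 --prefix-length 64 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac project-v6

As far as I can tell, external_ingress_mark in l3_agent.ini sets a different, low-16-bit mark and is not meant to satisfy the address-scope check, so aligning the scopes (or carrying the manual rule in your own tooling) looks like the more reliable route.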
>>>>> >>>>> You mentioned "nova_git_install_branch: >>>>> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be >>>>> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? >>>>> >>>>> >>>>> >>>>> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode >>>>> wrote: >>>>>> On 18-08-22 10:33:11, Satish Patel wrote: >>>>>>> Thanks Matthew, >>>>>>> >>>>>>> Can i put that sha in my OSA at >>>>>>> playbooks/defaults/repo_packages/openstack_services.yml by hand and >>>>>>> run playbooks [repo/nova] ? >>>>>>> >>>>>>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode >>>>>>> wrote: >>>>>>>> On 18-08-22 08:35:09, Satish Patel wrote: >>>>>>>>> Currently in stable/queens i am seeing this sha >>>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 >>>>>>>>> >>>>>>>>> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode >>>>>>>>> wrote: >>>>>>>>>> On 18-08-22 01:57:17, Satish Patel wrote: >>>>>>>>>>> What I need to upgrade, any specific component? >>>>>>>>>>> >>>>>>>>>>> I have deployed openstack-ansible >>>>>>>>>>> >>>>>>>>>>> Sent from my iPhone >>>>>>>>>>> >>>>>>>>>>>>> On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> On 18-08-22 01:02:53, Satish Patel wrote: >>>>>>>>>>>>> Matthew, >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks for reply, Look like i don't have this patch >>>>>>>>>>>>> https://review.openstack.org/#/c/591761/ >>>>>>>>>>>>> >>>>>>>>>>>>> So i have to patch following 3 file manually? >>>>>>>>>>>>> >>>>>>>>>>>>> nova/tests/unit/virt/libvirt/test_driver.py213 >>>>>>>>>>>>> nova/tests/unit/virt/test_virt_drivers.py2 >>>>>>>>>>>>> nova/virt/libvirt/driver.py >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode >>>>>>>>>>>>> wrote: >>>>>>>>>>>>>> On 18-08-22 00:27:08, Satish Patel wrote: >>>>>>>>>>>>>>> Folks, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I am running openstack queens and hypervisor is kvm, my live migration >>>>>>>>>>>>>>> working fine. but somehow it stuck to 8 Mb network speed and taking >>>>>>>>>>>>>>> long time to migrate 1G instance. I have 10Gbps network and i have >>>>>>>>>>>>>>> tried to copy 10G file between two compute node and it did copy in 2 >>>>>>>>>>>>>>> minute, so i am not seeing any network issue also. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> it seem live_migration has some bandwidth limit, I have tried >>>>>>>>>>>>>>> following option in nova.conf but it didn't work >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> live_migration_bandwidth = 500 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> My nova.conf look like following: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> live_migration_uri = >>>>>>>>>>>>>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" >>>>>>>>>>>>>>> live_migration_tunnelled = True >>>>>>>>>>>>>>> live_migration_bandwidth = 500 >>>>>>>>>>>>>>> hw_disk_discard = unmap >>>>>>>>>>>>>>> disk_cachemodes = network=writeback >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Do you have a this patch (and a couple of patches up to it)? >>>>>>>>>>>>>> https://bugs.launchpad.net/nova/+bug/1786346 >>>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I don't know if that would cleanly apply (there are other patches that >>>>>>>>>>>> changed those functions within the last month and a half. It'd be best >>>>>>>>>>>> to upgrade and not do just one patch (which would be an untested >>>>>>>>>>>> process). 
>>>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> The sha for nova has not been updated yet (next update is 24-48 hours >>>>>>>>>> away iirc), once that's done you can use the head of stable/queens from >>>>>>>>>> OSA and run a inter-series upgrade (but the minimal thing to do would be >>>>>>>>>> to run repo-build and os-nova plays). I'm not sure when that sha bump >>>>>>>>>> will be tagged in a full release if you would rather wait on that. >>>>>>>> >>>>>>>> it's this sha that needs updating. >>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 >>>>>>>> >>>>>> >>>>>> I'm not sure how you are doing overrides, but set the following as an >>>>>> override, then rerun the repo-build playbook (to rebuild the nova venv) >>>>>> then rerun the nova playbook to install it. >>>>>> >>>>>> nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b >>>>>> >>>> >>>> The sha I gave was head of the queens branch of openstack/nova. It's >>>> also the commit in that branch that containst the fix. >>>> >> >> I don't think that is known behavior, different memory pressure causing >> the difference maybe? >> >> -- >> Matthew Thode (prometheanfire) From prometheanfire at gentoo.org Thu Aug 23 18:47:20 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 23 Aug 2018 13:47:20 -0500 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> <20180823063032.jcf2xsfpatws7y3a@gentoo.org> Message-ID: <20180823184720.zp4jdf7nd7mbgams@gentoo.org> On 18-08-23 14:33:44, Satish Patel wrote: > Matt, > > I am going to override following in user_variable.yml file in that > case do i need to run ./bootstrap-ansible.sh script? > > ## Nova service > nova_git_repo: https://git.openstack.org/openstack/nova > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > HEAD of "stable/queens" as of 06.08.2018 > nova_git_project_group: nova_all > > > > On Thu, Aug 23, 2018 at 7:18 AM, Satish Patel wrote: > > I'm testing this in lab, no load yet > > > > Sent from my iPhone > > > >> On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote: > >> > >>> On 18-08-22 23:04:57, Satish Patel wrote: > >>> Mathew, > >>> > >>> I haven't applied any patch yet but i am noticing in cluster some host > >>> migrating VM super fast and some host migrating very slow. Is this > >>> known behavior? > >>> > >>> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode > >>> wrote: > >>>> On 18-08-22 10:58:48, Satish Patel wrote: > >>>>> Matthew, > >>>>> > >>>>> I have two option looks like, correct me if i am wrong. > >>>>> > >>>>> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to > >>>>> 17.0.8-23-g0aff517 and upgrade full OSA > >>>>> > >>>>> 2. Just do override as you said "nova_git_install_branch:" in my > >>>>> /etc/openstack_deploy/user_variables.yml file, and run playbooks. > >>>>> > >>>>> > >>>>> I think option [2] is safe to just touch specific component, also am i > >>>>> correct about override in /etc/openstack_deploy/user_variables.yml > >>>>> file? > >>>>> > >>>>> You mentioned "nova_git_install_branch: > >>>>> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be > >>>>> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? 
> >>>>> > >>>>> > >>>>> > >>>>> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode > >>>>> wrote: > >>>>>> On 18-08-22 10:33:11, Satish Patel wrote: > >>>>>>> Thanks Matthew, > >>>>>>> > >>>>>>> Can i put that sha in my OSA at > >>>>>>> playbooks/defaults/repo_packages/openstack_services.yml by hand and > >>>>>>> run playbooks [repo/nova] ? > >>>>>>> > >>>>>>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > >>>>>>> wrote: > >>>>>>>> On 18-08-22 08:35:09, Satish Patel wrote: > >>>>>>>>> Currently in stable/queens i am seeing this sha > >>>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > >>>>>>>>> > >>>>>>>>> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > >>>>>>>>> wrote: > >>>>>>>>>> On 18-08-22 01:57:17, Satish Patel wrote: > >>>>>>>>>>> What I need to upgrade, any specific component? > >>>>>>>>>>> > >>>>>>>>>>> I have deployed openstack-ansible > >>>>>>>>>>> > >>>>>>>>>>> Sent from my iPhone > >>>>>>>>>>> > >>>>>>>>>>>>> On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > >>>>>>>>>>>>> > >>>>>>>>>>>>> On 18-08-22 01:02:53, Satish Patel wrote: > >>>>>>>>>>>>> Matthew, > >>>>>>>>>>>>> > >>>>>>>>>>>>> Thanks for reply, Look like i don't have this patch > >>>>>>>>>>>>> https://review.openstack.org/#/c/591761/ > >>>>>>>>>>>>> > >>>>>>>>>>>>> So i have to patch following 3 file manually? > >>>>>>>>>>>>> > >>>>>>>>>>>>> nova/tests/unit/virt/libvirt/test_driver.py213 > >>>>>>>>>>>>> nova/tests/unit/virt/test_virt_drivers.py2 > >>>>>>>>>>>>> nova/virt/libvirt/driver.py > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > >>>>>>>>>>>>> wrote: > >>>>>>>>>>>>>> On 18-08-22 00:27:08, Satish Patel wrote: > >>>>>>>>>>>>>>> Folks, > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> I am running openstack queens and hypervisor is kvm, my live migration > >>>>>>>>>>>>>>> working fine. but somehow it stuck to 8 Mb network speed and taking > >>>>>>>>>>>>>>> long time to migrate 1G instance. I have 10Gbps network and i have > >>>>>>>>>>>>>>> tried to copy 10G file between two compute node and it did copy in 2 > >>>>>>>>>>>>>>> minute, so i am not seeing any network issue also. > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> it seem live_migration has some bandwidth limit, I have tried > >>>>>>>>>>>>>>> following option in nova.conf but it didn't work > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> My nova.conf look like following: > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> live_migration_uri = > >>>>>>>>>>>>>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > >>>>>>>>>>>>>>> live_migration_tunnelled = True > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > >>>>>>>>>>>>>>> hw_disk_discard = unmap > >>>>>>>>>>>>>>> disk_cachemodes = network=writeback > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Do you have a this patch (and a couple of patches up to it)? > >>>>>>>>>>>>>> https://bugs.launchpad.net/nova/+bug/1786346 > >>>>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> I don't know if that would cleanly apply (there are other patches that > >>>>>>>>>>>> changed those functions within the last month and a half. It'd be best > >>>>>>>>>>>> to upgrade and not do just one patch (which would be an untested > >>>>>>>>>>>> process). 
> >>>>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> The sha for nova has not been updated yet (next update is 24-48 hours > >>>>>>>>>> away iirc), once that's done you can use the head of stable/queens from > >>>>>>>>>> OSA and run a inter-series upgrade (but the minimal thing to do would be > >>>>>>>>>> to run repo-build and os-nova plays). I'm not sure when that sha bump > >>>>>>>>>> will be tagged in a full release if you would rather wait on that. > >>>>>>>> > >>>>>>>> it's this sha that needs updating. > >>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > >>>>>>>> > >>>>>> > >>>>>> I'm not sure how you are doing overrides, but set the following as an > >>>>>> override, then rerun the repo-build playbook (to rebuild the nova venv) > >>>>>> then rerun the nova playbook to install it. > >>>>>> > >>>>>> nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > >>>>>> > >>>> > >>>> The sha I gave was head of the queens branch of openstack/nova. It's > >>>> also the commit in that branch that containst the fix. > >>>> > >> > >> I don't think that is known behavior, different memory pressure causing > >> the difference maybe? > >> You just need the following var. nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c Once defined you'll need to `cd` into the playbooks directory within openstack-ansible and run `openstack-ansible repo-build.yml` and `openstack-ansible os-nova-install.yml`. That should get you updated. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From satish.txt at gmail.com Thu Aug 23 19:12:59 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 23 Aug 2018 15:12:59 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> <20180823063032.jcf2xsfpatws7y3a@gentoo.org> <20180823184720.zp4jdf7nd7mbgams@gentoo.org> Message-ID: Matt, I've added "nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c" in user_variables.yml and run repo-build.yml playbook but it didn't change anything I am inside the repo container and still its showing old timestamp on all nova file and i check all file they seems didn't change at this path in repo container /var/www/repo/openstackgit/nova/nova repo-build.yml should update that dir right? On Thu, Aug 23, 2018 at 2:58 PM Satish Patel wrote: > > Thanks Matthew, > > Going to do that and will update you in few min. > On Thu, Aug 23, 2018 at 2:47 PM Matthew Thode wrote: > > > > On 18-08-23 14:33:44, Satish Patel wrote: > > > Matt, > > > > > > I am going to override following in user_variable.yml file in that > > > case do i need to run ./bootstrap-ansible.sh script? 
> > > > > > ## Nova service > > > nova_git_repo: https://git.openstack.org/openstack/nova > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > > > HEAD of "stable/queens" as of 06.08.2018 > > > nova_git_project_group: nova_all > > > > > > > > > > > > On Thu, Aug 23, 2018 at 7:18 AM, Satish Patel wrote: > > > > I'm testing this in lab, no load yet > > > > > > > > Sent from my iPhone > > > > > > > >> On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote: > > > >> > > > >>> On 18-08-22 23:04:57, Satish Patel wrote: > > > >>> Mathew, > > > >>> > > > >>> I haven't applied any patch yet but i am noticing in cluster some host > > > >>> migrating VM super fast and some host migrating very slow. Is this > > > >>> known behavior? > > > >>> > > > >>> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode > > > >>> wrote: > > > >>>> On 18-08-22 10:58:48, Satish Patel wrote: > > > >>>>> Matthew, > > > >>>>> > > > >>>>> I have two option looks like, correct me if i am wrong. > > > >>>>> > > > >>>>> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to > > > >>>>> 17.0.8-23-g0aff517 and upgrade full OSA > > > >>>>> > > > >>>>> 2. Just do override as you said "nova_git_install_branch:" in my > > > >>>>> /etc/openstack_deploy/user_variables.yml file, and run playbooks. > > > >>>>> > > > >>>>> > > > >>>>> I think option [2] is safe to just touch specific component, also am i > > > >>>>> correct about override in /etc/openstack_deploy/user_variables.yml > > > >>>>> file? > > > >>>>> > > > >>>>> You mentioned "nova_git_install_branch: > > > >>>>> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be > > > >>>>> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? > > > >>>>> > > > >>>>> > > > >>>>> > > > >>>>> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode > > > >>>>> wrote: > > > >>>>>> On 18-08-22 10:33:11, Satish Patel wrote: > > > >>>>>>> Thanks Matthew, > > > >>>>>>> > > > >>>>>>> Can i put that sha in my OSA at > > > >>>>>>> playbooks/defaults/repo_packages/openstack_services.yml by hand and > > > >>>>>>> run playbooks [repo/nova] ? > > > >>>>>>> > > > >>>>>>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > > > >>>>>>> wrote: > > > >>>>>>>> On 18-08-22 08:35:09, Satish Patel wrote: > > > >>>>>>>>> Currently in stable/queens i am seeing this sha > > > >>>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > > > >>>>>>>>> > > > >>>>>>>>> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > > > >>>>>>>>> wrote: > > > >>>>>>>>>> On 18-08-22 01:57:17, Satish Patel wrote: > > > >>>>>>>>>>> What I need to upgrade, any specific component? > > > >>>>>>>>>>> > > > >>>>>>>>>>> I have deployed openstack-ansible > > > >>>>>>>>>>> > > > >>>>>>>>>>> Sent from my iPhone > > > >>>>>>>>>>> > > > >>>>>>>>>>>>> On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > > > >>>>>>>>>>>>> > > > >>>>>>>>>>>>> On 18-08-22 01:02:53, Satish Patel wrote: > > > >>>>>>>>>>>>> Matthew, > > > >>>>>>>>>>>>> > > > >>>>>>>>>>>>> Thanks for reply, Look like i don't have this patch > > > >>>>>>>>>>>>> https://review.openstack.org/#/c/591761/ > > > >>>>>>>>>>>>> > > > >>>>>>>>>>>>> So i have to patch following 3 file manually? 
> > > >>>>>>>>>>>>> > > > >>>>>>>>>>>>> nova/tests/unit/virt/libvirt/test_driver.py213 > > > >>>>>>>>>>>>> nova/tests/unit/virt/test_virt_drivers.py2 > > > >>>>>>>>>>>>> nova/virt/libvirt/driver.py > > > >>>>>>>>>>>>> > > > >>>>>>>>>>>>> > > > >>>>>>>>>>>>> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > > > >>>>>>>>>>>>> wrote: > > > >>>>>>>>>>>>>> On 18-08-22 00:27:08, Satish Patel wrote: > > > >>>>>>>>>>>>>>> Folks, > > > >>>>>>>>>>>>>>> > > > >>>>>>>>>>>>>>> I am running openstack queens and hypervisor is kvm, my live migration > > > >>>>>>>>>>>>>>> working fine. but somehow it stuck to 8 Mb network speed and taking > > > >>>>>>>>>>>>>>> long time to migrate 1G instance. I have 10Gbps network and i have > > > >>>>>>>>>>>>>>> tried to copy 10G file between two compute node and it did copy in 2 > > > >>>>>>>>>>>>>>> minute, so i am not seeing any network issue also. > > > >>>>>>>>>>>>>>> > > > >>>>>>>>>>>>>>> it seem live_migration has some bandwidth limit, I have tried > > > >>>>>>>>>>>>>>> following option in nova.conf but it didn't work > > > >>>>>>>>>>>>>>> > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > >>>>>>>>>>>>>>> > > > >>>>>>>>>>>>>>> My nova.conf look like following: > > > >>>>>>>>>>>>>>> > > > >>>>>>>>>>>>>>> live_migration_uri = > > > >>>>>>>>>>>>>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > > > >>>>>>>>>>>>>>> live_migration_tunnelled = True > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > >>>>>>>>>>>>>>> hw_disk_discard = unmap > > > >>>>>>>>>>>>>>> disk_cachemodes = network=writeback > > > >>>>>>>>>>>>>>> > > > >>>>>>>>>>>>>> > > > >>>>>>>>>>>>>> Do you have a this patch (and a couple of patches up to it)? > > > >>>>>>>>>>>>>> https://bugs.launchpad.net/nova/+bug/1786346 > > > >>>>>>>>>>>>>> > > > >>>>>>>>>>>> > > > >>>>>>>>>>>> I don't know if that would cleanly apply (there are other patches that > > > >>>>>>>>>>>> changed those functions within the last month and a half. It'd be best > > > >>>>>>>>>>>> to upgrade and not do just one patch (which would be an untested > > > >>>>>>>>>>>> process). > > > >>>>>>>>>>>> > > > >>>>>>>>>> > > > >>>>>>>>>> The sha for nova has not been updated yet (next update is 24-48 hours > > > >>>>>>>>>> away iirc), once that's done you can use the head of stable/queens from > > > >>>>>>>>>> OSA and run a inter-series upgrade (but the minimal thing to do would be > > > >>>>>>>>>> to run repo-build and os-nova plays). I'm not sure when that sha bump > > > >>>>>>>>>> will be tagged in a full release if you would rather wait on that. > > > >>>>>>>> > > > >>>>>>>> it's this sha that needs updating. > > > >>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > > > >>>>>>>> > > > >>>>>> > > > >>>>>> I'm not sure how you are doing overrides, but set the following as an > > > >>>>>> override, then rerun the repo-build playbook (to rebuild the nova venv) > > > >>>>>> then rerun the nova playbook to install it. > > > >>>>>> > > > >>>>>> nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > > > >>>>>> > > > >>>> > > > >>>> The sha I gave was head of the queens branch of openstack/nova. It's > > > >>>> also the commit in that branch that containst the fix. > > > >>>> > > > >> > > > >> I don't think that is known behavior, different memory pressure causing > > > >> the difference maybe? > > > >> > > > > You just need the following var. 
> > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c > > > > Once defined you'll need to `cd` into the playbooks directory within > > openstack-ansible and run `openstack-ansible repo-build.yml` and > > `openstack-ansible os-nova-install.yml`. That should get you updated. > > > > -- > > Matthew Thode (prometheanfire) From satish.txt at gmail.com Thu Aug 23 19:26:49 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 23 Aug 2018 15:26:49 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> <20180823063032.jcf2xsfpatws7y3a@gentoo.org> <20180823184720.zp4jdf7nd7mbgams@gentoo.org> Message-ID: Look like it need all 3 line in user_variables.yml file.. after putting all 3 lines it works!! ## Nova service nova_git_repo: https://git.openstack.org/openstack/nova nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # HEAD of "stable/queens" as of 06.08.2018 nova_git_project_group: nova_all On Thu, Aug 23, 2018 at 3:12 PM Satish Patel wrote: > > Matt, > > I've added "nova_git_install_branch: > a9c9285a5a68ab89a6543d143c364d90a01cd51c" in user_variables.yml and > run repo-build.yml playbook but it didn't change anything > > I am inside the repo container and still its showing old timestamp on > all nova file and i check all file they seems didn't change > > at this path in repo container /var/www/repo/openstackgit/nova/nova > > repo-build.yml should update that dir right? > On Thu, Aug 23, 2018 at 2:58 PM Satish Patel wrote: > > > > Thanks Matthew, > > > > Going to do that and will update you in few min. > > On Thu, Aug 23, 2018 at 2:47 PM Matthew Thode wrote: > > > > > > On 18-08-23 14:33:44, Satish Patel wrote: > > > > Matt, > > > > > > > > I am going to override following in user_variable.yml file in that > > > > case do i need to run ./bootstrap-ansible.sh script? > > > > > > > > ## Nova service > > > > nova_git_repo: https://git.openstack.org/openstack/nova > > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > > > > HEAD of "stable/queens" as of 06.08.2018 > > > > nova_git_project_group: nova_all > > > > > > > > > > > > > > > > On Thu, Aug 23, 2018 at 7:18 AM, Satish Patel wrote: > > > > > I'm testing this in lab, no load yet > > > > > > > > > > Sent from my iPhone > > > > > > > > > >> On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote: > > > > >> > > > > >>> On 18-08-22 23:04:57, Satish Patel wrote: > > > > >>> Mathew, > > > > >>> > > > > >>> I haven't applied any patch yet but i am noticing in cluster some host > > > > >>> migrating VM super fast and some host migrating very slow. Is this > > > > >>> known behavior? > > > > >>> > > > > >>> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode > > > > >>> wrote: > > > > >>>> On 18-08-22 10:58:48, Satish Patel wrote: > > > > >>>>> Matthew, > > > > >>>>> > > > > >>>>> I have two option looks like, correct me if i am wrong. > > > > >>>>> > > > > >>>>> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to > > > > >>>>> 17.0.8-23-g0aff517 and upgrade full OSA > > > > >>>>> > > > > >>>>> 2. Just do override as you said "nova_git_install_branch:" in my > > > > >>>>> /etc/openstack_deploy/user_variables.yml file, and run playbooks. 
> > > > >>>>> > > > > >>>>> > > > > >>>>> I think option [2] is safe to just touch specific component, also am i > > > > >>>>> correct about override in /etc/openstack_deploy/user_variables.yml > > > > >>>>> file? > > > > >>>>> > > > > >>>>> You mentioned "nova_git_install_branch: > > > > >>>>> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be > > > > >>>>> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? > > > > >>>>> > > > > >>>>> > > > > >>>>> > > > > >>>>> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode > > > > >>>>> wrote: > > > > >>>>>> On 18-08-22 10:33:11, Satish Patel wrote: > > > > >>>>>>> Thanks Matthew, > > > > >>>>>>> > > > > >>>>>>> Can i put that sha in my OSA at > > > > >>>>>>> playbooks/defaults/repo_packages/openstack_services.yml by hand and > > > > >>>>>>> run playbooks [repo/nova] ? > > > > >>>>>>> > > > > >>>>>>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > > > > >>>>>>> wrote: > > > > >>>>>>>> On 18-08-22 08:35:09, Satish Patel wrote: > > > > >>>>>>>>> Currently in stable/queens i am seeing this sha > > > > >>>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > > > > >>>>>>>>> > > > > >>>>>>>>> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > > > > >>>>>>>>> wrote: > > > > >>>>>>>>>> On 18-08-22 01:57:17, Satish Patel wrote: > > > > >>>>>>>>>>> What I need to upgrade, any specific component? > > > > >>>>>>>>>>> > > > > >>>>>>>>>>> I have deployed openstack-ansible > > > > >>>>>>>>>>> > > > > >>>>>>>>>>> Sent from my iPhone > > > > >>>>>>>>>>> > > > > >>>>>>>>>>>>> On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > > > > >>>>>>>>>>>>> > > > > >>>>>>>>>>>>> On 18-08-22 01:02:53, Satish Patel wrote: > > > > >>>>>>>>>>>>> Matthew, > > > > >>>>>>>>>>>>> > > > > >>>>>>>>>>>>> Thanks for reply, Look like i don't have this patch > > > > >>>>>>>>>>>>> https://review.openstack.org/#/c/591761/ > > > > >>>>>>>>>>>>> > > > > >>>>>>>>>>>>> So i have to patch following 3 file manually? > > > > >>>>>>>>>>>>> > > > > >>>>>>>>>>>>> nova/tests/unit/virt/libvirt/test_driver.py213 > > > > >>>>>>>>>>>>> nova/tests/unit/virt/test_virt_drivers.py2 > > > > >>>>>>>>>>>>> nova/virt/libvirt/driver.py > > > > >>>>>>>>>>>>> > > > > >>>>>>>>>>>>> > > > > >>>>>>>>>>>>> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > > > > >>>>>>>>>>>>> wrote: > > > > >>>>>>>>>>>>>> On 18-08-22 00:27:08, Satish Patel wrote: > > > > >>>>>>>>>>>>>>> Folks, > > > > >>>>>>>>>>>>>>> > > > > >>>>>>>>>>>>>>> I am running openstack queens and hypervisor is kvm, my live migration > > > > >>>>>>>>>>>>>>> working fine. but somehow it stuck to 8 Mb network speed and taking > > > > >>>>>>>>>>>>>>> long time to migrate 1G instance. I have 10Gbps network and i have > > > > >>>>>>>>>>>>>>> tried to copy 10G file between two compute node and it did copy in 2 > > > > >>>>>>>>>>>>>>> minute, so i am not seeing any network issue also. 
> > > > >>>>>>>>>>>>>>> > > > > >>>>>>>>>>>>>>> it seem live_migration has some bandwidth limit, I have tried > > > > >>>>>>>>>>>>>>> following option in nova.conf but it didn't work > > > > >>>>>>>>>>>>>>> > > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > > >>>>>>>>>>>>>>> > > > > >>>>>>>>>>>>>>> My nova.conf look like following: > > > > >>>>>>>>>>>>>>> > > > > >>>>>>>>>>>>>>> live_migration_uri = > > > > >>>>>>>>>>>>>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > > > > >>>>>>>>>>>>>>> live_migration_tunnelled = True > > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > > >>>>>>>>>>>>>>> hw_disk_discard = unmap > > > > >>>>>>>>>>>>>>> disk_cachemodes = network=writeback > > > > >>>>>>>>>>>>>>> > > > > >>>>>>>>>>>>>> > > > > >>>>>>>>>>>>>> Do you have a this patch (and a couple of patches up to it)? > > > > >>>>>>>>>>>>>> https://bugs.launchpad.net/nova/+bug/1786346 > > > > >>>>>>>>>>>>>> > > > > >>>>>>>>>>>> > > > > >>>>>>>>>>>> I don't know if that would cleanly apply (there are other patches that > > > > >>>>>>>>>>>> changed those functions within the last month and a half. It'd be best > > > > >>>>>>>>>>>> to upgrade and not do just one patch (which would be an untested > > > > >>>>>>>>>>>> process). > > > > >>>>>>>>>>>> > > > > >>>>>>>>>> > > > > >>>>>>>>>> The sha for nova has not been updated yet (next update is 24-48 hours > > > > >>>>>>>>>> away iirc), once that's done you can use the head of stable/queens from > > > > >>>>>>>>>> OSA and run a inter-series upgrade (but the minimal thing to do would be > > > > >>>>>>>>>> to run repo-build and os-nova plays). I'm not sure when that sha bump > > > > >>>>>>>>>> will be tagged in a full release if you would rather wait on that. > > > > >>>>>>>> > > > > >>>>>>>> it's this sha that needs updating. > > > > >>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > > > > >>>>>>>> > > > > >>>>>> > > > > >>>>>> I'm not sure how you are doing overrides, but set the following as an > > > > >>>>>> override, then rerun the repo-build playbook (to rebuild the nova venv) > > > > >>>>>> then rerun the nova playbook to install it. > > > > >>>>>> > > > > >>>>>> nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > > > > >>>>>> > > > > >>>> > > > > >>>> The sha I gave was head of the queens branch of openstack/nova. It's > > > > >>>> also the commit in that branch that containst the fix. > > > > >>>> > > > > >> > > > > >> I don't think that is known behavior, different memory pressure causing > > > > >> the difference maybe? > > > > >> > > > > > > You just need the following var. > > > > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c > > > > > > Once defined you'll need to `cd` into the playbooks directory within > > > openstack-ansible and run `openstack-ansible repo-build.yml` and > > > `openstack-ansible os-nova-install.yml`. That should get you updated. 
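For reference, the override that ended up taking effect in this thread was the three-line block plus the two playbooks -- roughly as follows, with the SHA being the stable/queens commit quoted above (pin whatever commit you actually need):

    # /etc/openstack_deploy/user_variables.yml
    ## Nova service
    nova_git_repo: https://git.openstack.org/openstack/nova
    nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c  # HEAD of "stable/queens" as of 06.08.2018
    nova_git_project_group: nova_all

    # then, from the openstack-ansible checkout (typically /opt/openstack-ansible/playbooks):
    openstack-ansible repo-build.yml       # rebuild the nova wheels/venv in the repo container
    openstack-ansible os-nova-install.yml  # roll the new venv out to the nova hosts

Re-running bootstrap-ansible.sh is not needed for a simple SHA override; that script refreshes the Ansible roles, not the service checkouts that repo-build builds.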
> > > > > > -- > > > Matthew Thode (prometheanfire) From satish.txt at gmail.com Thu Aug 23 20:36:15 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 23 Aug 2018 16:36:15 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> <20180823063032.jcf2xsfpatws7y3a@gentoo.org> <20180823184720.zp4jdf7nd7mbgams@gentoo.org> Message-ID: I have upgraded my nova and all nova component got upgrade but still my live_migration running on 8Mbps speed.. what else is wrong here? I am using CentOS 7.5 On Thu, Aug 23, 2018 at 3:26 PM Satish Patel wrote: > > Look like it need all 3 line in user_variables.yml file.. after > putting all 3 lines it works!! > > ## Nova service > nova_git_repo: https://git.openstack.org/openstack/nova > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > HEAD of "stable/queens" as of 06.08.2018 > nova_git_project_group: nova_all > On Thu, Aug 23, 2018 at 3:12 PM Satish Patel wrote: > > > > Matt, > > > > I've added "nova_git_install_branch: > > a9c9285a5a68ab89a6543d143c364d90a01cd51c" in user_variables.yml and > > run repo-build.yml playbook but it didn't change anything > > > > I am inside the repo container and still its showing old timestamp on > > all nova file and i check all file they seems didn't change > > > > at this path in repo container /var/www/repo/openstackgit/nova/nova > > > > repo-build.yml should update that dir right? > > On Thu, Aug 23, 2018 at 2:58 PM Satish Patel wrote: > > > > > > Thanks Matthew, > > > > > > Going to do that and will update you in few min. > > > On Thu, Aug 23, 2018 at 2:47 PM Matthew Thode wrote: > > > > > > > > On 18-08-23 14:33:44, Satish Patel wrote: > > > > > Matt, > > > > > > > > > > I am going to override following in user_variable.yml file in that > > > > > case do i need to run ./bootstrap-ansible.sh script? > > > > > > > > > > ## Nova service > > > > > nova_git_repo: https://git.openstack.org/openstack/nova > > > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > > > > > HEAD of "stable/queens" as of 06.08.2018 > > > > > nova_git_project_group: nova_all > > > > > > > > > > > > > > > > > > > > On Thu, Aug 23, 2018 at 7:18 AM, Satish Patel wrote: > > > > > > I'm testing this in lab, no load yet > > > > > > > > > > > > Sent from my iPhone > > > > > > > > > > > >> On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote: > > > > > >> > > > > > >>> On 18-08-22 23:04:57, Satish Patel wrote: > > > > > >>> Mathew, > > > > > >>> > > > > > >>> I haven't applied any patch yet but i am noticing in cluster some host > > > > > >>> migrating VM super fast and some host migrating very slow. Is this > > > > > >>> known behavior? > > > > > >>> > > > > > >>> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode > > > > > >>> wrote: > > > > > >>>> On 18-08-22 10:58:48, Satish Patel wrote: > > > > > >>>>> Matthew, > > > > > >>>>> > > > > > >>>>> I have two option looks like, correct me if i am wrong. > > > > > >>>>> > > > > > >>>>> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to > > > > > >>>>> 17.0.8-23-g0aff517 and upgrade full OSA > > > > > >>>>> > > > > > >>>>> 2. Just do override as you said "nova_git_install_branch:" in my > > > > > >>>>> /etc/openstack_deploy/user_variables.yml file, and run playbooks. 
> > > > > >>>>> > > > > > >>>>> > > > > > >>>>> I think option [2] is safe to just touch specific component, also am i > > > > > >>>>> correct about override in /etc/openstack_deploy/user_variables.yml > > > > > >>>>> file? > > > > > >>>>> > > > > > >>>>> You mentioned "nova_git_install_branch: > > > > > >>>>> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be > > > > > >>>>> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> > > > > > >>>>> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode > > > > > >>>>> wrote: > > > > > >>>>>> On 18-08-22 10:33:11, Satish Patel wrote: > > > > > >>>>>>> Thanks Matthew, > > > > > >>>>>>> > > > > > >>>>>>> Can i put that sha in my OSA at > > > > > >>>>>>> playbooks/defaults/repo_packages/openstack_services.yml by hand and > > > > > >>>>>>> run playbooks [repo/nova] ? > > > > > >>>>>>> > > > > > >>>>>>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > > > > > >>>>>>> wrote: > > > > > >>>>>>>> On 18-08-22 08:35:09, Satish Patel wrote: > > > > > >>>>>>>>> Currently in stable/queens i am seeing this sha > > > > > >>>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > > > > > >>>>>>>>> > > > > > >>>>>>>>> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > > > > > >>>>>>>>> wrote: > > > > > >>>>>>>>>> On 18-08-22 01:57:17, Satish Patel wrote: > > > > > >>>>>>>>>>> What I need to upgrade, any specific component? > > > > > >>>>>>>>>>> > > > > > >>>>>>>>>>> I have deployed openstack-ansible > > > > > >>>>>>>>>>> > > > > > >>>>>>>>>>> Sent from my iPhone > > > > > >>>>>>>>>>> > > > > > >>>>>>>>>>>>> On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > > > > > >>>>>>>>>>>>> > > > > > >>>>>>>>>>>>> On 18-08-22 01:02:53, Satish Patel wrote: > > > > > >>>>>>>>>>>>> Matthew, > > > > > >>>>>>>>>>>>> > > > > > >>>>>>>>>>>>> Thanks for reply, Look like i don't have this patch > > > > > >>>>>>>>>>>>> https://review.openstack.org/#/c/591761/ > > > > > >>>>>>>>>>>>> > > > > > >>>>>>>>>>>>> So i have to patch following 3 file manually? > > > > > >>>>>>>>>>>>> > > > > > >>>>>>>>>>>>> nova/tests/unit/virt/libvirt/test_driver.py213 > > > > > >>>>>>>>>>>>> nova/tests/unit/virt/test_virt_drivers.py2 > > > > > >>>>>>>>>>>>> nova/virt/libvirt/driver.py > > > > > >>>>>>>>>>>>> > > > > > >>>>>>>>>>>>> > > > > > >>>>>>>>>>>>> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > > > > > >>>>>>>>>>>>> wrote: > > > > > >>>>>>>>>>>>>> On 18-08-22 00:27:08, Satish Patel wrote: > > > > > >>>>>>>>>>>>>>> Folks, > > > > > >>>>>>>>>>>>>>> > > > > > >>>>>>>>>>>>>>> I am running openstack queens and hypervisor is kvm, my live migration > > > > > >>>>>>>>>>>>>>> working fine. but somehow it stuck to 8 Mb network speed and taking > > > > > >>>>>>>>>>>>>>> long time to migrate 1G instance. I have 10Gbps network and i have > > > > > >>>>>>>>>>>>>>> tried to copy 10G file between two compute node and it did copy in 2 > > > > > >>>>>>>>>>>>>>> minute, so i am not seeing any network issue also. 
> > > > > >>>>>>>>>>>>>>> > > > > > >>>>>>>>>>>>>>> it seem live_migration has some bandwidth limit, I have tried > > > > > >>>>>>>>>>>>>>> following option in nova.conf but it didn't work > > > > > >>>>>>>>>>>>>>> > > > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > > > >>>>>>>>>>>>>>> > > > > > >>>>>>>>>>>>>>> My nova.conf look like following: > > > > > >>>>>>>>>>>>>>> > > > > > >>>>>>>>>>>>>>> live_migration_uri = > > > > > >>>>>>>>>>>>>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > > > > > >>>>>>>>>>>>>>> live_migration_tunnelled = True > > > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > > > >>>>>>>>>>>>>>> hw_disk_discard = unmap > > > > > >>>>>>>>>>>>>>> disk_cachemodes = network=writeback > > > > > >>>>>>>>>>>>>>> > > > > > >>>>>>>>>>>>>> > > > > > >>>>>>>>>>>>>> Do you have a this patch (and a couple of patches up to it)? > > > > > >>>>>>>>>>>>>> https://bugs.launchpad.net/nova/+bug/1786346 > > > > > >>>>>>>>>>>>>> > > > > > >>>>>>>>>>>> > > > > > >>>>>>>>>>>> I don't know if that would cleanly apply (there are other patches that > > > > > >>>>>>>>>>>> changed those functions within the last month and a half. It'd be best > > > > > >>>>>>>>>>>> to upgrade and not do just one patch (which would be an untested > > > > > >>>>>>>>>>>> process). > > > > > >>>>>>>>>>>> > > > > > >>>>>>>>>> > > > > > >>>>>>>>>> The sha for nova has not been updated yet (next update is 24-48 hours > > > > > >>>>>>>>>> away iirc), once that's done you can use the head of stable/queens from > > > > > >>>>>>>>>> OSA and run a inter-series upgrade (but the minimal thing to do would be > > > > > >>>>>>>>>> to run repo-build and os-nova plays). I'm not sure when that sha bump > > > > > >>>>>>>>>> will be tagged in a full release if you would rather wait on that. > > > > > >>>>>>>> > > > > > >>>>>>>> it's this sha that needs updating. > > > > > >>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > > > > > >>>>>>>> > > > > > >>>>>> > > > > > >>>>>> I'm not sure how you are doing overrides, but set the following as an > > > > > >>>>>> override, then rerun the repo-build playbook (to rebuild the nova venv) > > > > > >>>>>> then rerun the nova playbook to install it. > > > > > >>>>>> > > > > > >>>>>> nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > > > > > >>>>>> > > > > > >>>> > > > > > >>>> The sha I gave was head of the queens branch of openstack/nova. It's > > > > > >>>> also the commit in that branch that containst the fix. > > > > > >>>> > > > > > >> > > > > > >> I don't think that is known behavior, different memory pressure causing > > > > > >> the difference maybe? > > > > > >> > > > > > > > > You just need the following var. > > > > > > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c > > > > > > > > Once defined you'll need to `cd` into the playbooks directory within > > > > openstack-ansible and run `openstack-ansible repo-build.yml` and > > > > `openstack-ansible os-nova-install.yml`. That should get you updated. > > > > > > > > -- > > > > Matthew Thode (prometheanfire) From cmart at cyverse.org Thu Aug 23 21:26:34 2018 From: cmart at cyverse.org (Chris Martin) Date: Thu, 23 Aug 2018 17:26:34 -0400 Subject: [Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend Message-ID: I back up my volumes daily, using incremental backups to minimize network traffic and storage consumption. 
I want to periodically remove old backups, and during this pruning operation, avoid entering a state where a volume has no recent backups. Ceph RBD appears to support this workflow, but unfortunately, Cinder does not. I can only delete the *latest* backup of a given volume, and this precludes any reasonable way to prune backups. Here, I'll show you. Let's make three backups of the same volume: ``` openstack volume backup create --name backup-1 --force volume-foo openstack volume backup create --name backup-2 --force volume-foo openstack volume backup create --name backup-3 --force volume-foo ``` Cinder reports the following via `volume backup show`: - backup-1 is not an incremental backup, but backup-2 and backup-3 are (`is_incremental`). - All but the latest backup have dependent backups (`has_dependent_backups`). We take a backup every day, and after a week we're on backup-7. We want to start deleting older backups so that we don't keep accumulating backups forever! What happens when we try? ``` # openstack volume backup delete backup-1 Failed to delete backup with name or ID 'backup-1': Invalid backup: Incremental backups exist for this backup. (HTTP 400) ``` We can't delete backup-1 because Cinder considers it a "base" backup which `has_dependent_backups`. What about backup-2? Same story. Adding the `--force` flag just gives a slightly different error message. The *only* backup that Cinder will delete is backup-7 -- the very latest one. This means that if we want to remove the oldest backups of a volume, *we must first remove all newer backups of the same volume*, i.e. delete literally all of our backups. Also, we cannot force creation of another *full* (non-incrmental) backup in order to free all of the earlier backups for removal. (Omitting the `--incremental` flag has no effect; you still get an incremental backup.) Can we hope for better? Let's reach behind Cinder to the Ceph backend. Volume backups are represented as a "base" RBD image with a snapshot for each incremental backup: ``` # rbd snap ls volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base SNAPID NAME SIZE TIMESTAMP 577 backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 10240 MB Thu Aug 23 10:57:48 2018 578 backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 10240 MB Thu Aug 23 11:05:43 2018 579 backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 10240 MB Thu Aug 23 11:06:47 2018 580 backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 10240 MB Thu Aug 23 11:22:23 2018 581 backup.8cd035b9-63bf-4920-a8ec-c07ba370fb94.snap.1535048538.72 10240 MB Thu Aug 23 11:22:47 2018 582 backup.cb7b6920-a79e-408e-b84f-5269d80235b2.snap.1535048559.82 10240 MB Thu Aug 23 11:23:04 2018 583 backup.a7871768-1863-435f-be9d-b50af47c905a.snap.1535048588.26 10240 MB Thu Aug 23 11:23:31 2018 584 backup.b18522e4-d237-4ee5-8786-78eac3d590de.snap.1535052729.52 10240 MB Thu Aug 23 12:32:43 2018 ``` It seems that each snapshot stands alone and doesn't depend on others. Ceph lets me delete the older snapshots. ``` # rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base at backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 Removing snap: 100% complete...done. # rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base at backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 Removing snap: 100% complete...done. ``` Now that we nuked backup-1 and backup-4, can we still restore from backup-7 and launch an instance with it? 
``` openstack volume create --size 10 --bootable volume-foo-restored openstack volume backup restore backup-7 volume-foo-restored openstack server create --volume volume-foo-restored --flavor medium1 instance-restored-from-backup-7 ``` Yes! We can SSH to the instance and it appears intact. Perhaps each snapshot in Ceph stores a complete diff from the base RBD image (rather than each successive snapshot depending on the last). If this is true, then Cinder is unnecessarily protective of older backups. Cinder represents these as "with dependents" and doesn't let us touch them, even though Ceph will let us delete older RBD snapshots, apparently without disrupting newer snapshots of the same volume. If we could remove this limitation, Cinder backups would be significantly more useful for us. We mostly host servers with non-cloud-native workloads (IaaS for research scientists). For these, full-disk backups at the infrastructure level are an important supplement to file-level or application-level backups. It would be great if someone else could confirm or disprove what I'm seeing here. I'd also love to hear from anyone else using Cinder backups this way. Regards, Chris Martin at CyVerse From satish.txt at gmail.com Thu Aug 23 22:36:15 2018 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 23 Aug 2018 18:36:15 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> <20180823063032.jcf2xsfpatws7y3a@gentoo.org> <20180823184720.zp4jdf7nd7mbgams@gentoo.org> Message-ID: I have updated this bug here something is wrong: https://bugs.launchpad.net/nova/+bug/1786346 After nova upgrade i have compared these 3 files https://review.openstack.org/#/c/591761/ and i am not seeing any change here so look like this is not a complete patch. Are you sure they push this changes in nova repo? On Thu, Aug 23, 2018 at 4:36 PM Satish Patel wrote: > > I have upgraded my nova and all nova component got upgrade but still > my live_migration running on 8Mbps speed.. what else is wrong here? > > I am using CentOS 7.5 > > On Thu, Aug 23, 2018 at 3:26 PM Satish Patel wrote: > > > > Look like it need all 3 line in user_variables.yml file.. after > > putting all 3 lines it works!! > > > > ## Nova service > > nova_git_repo: https://git.openstack.org/openstack/nova > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > > HEAD of "stable/queens" as of 06.08.2018 > > nova_git_project_group: nova_all > > On Thu, Aug 23, 2018 at 3:12 PM Satish Patel wrote: > > > > > > Matt, > > > > > > I've added "nova_git_install_branch: > > > a9c9285a5a68ab89a6543d143c364d90a01cd51c" in user_variables.yml and > > > run repo-build.yml playbook but it didn't change anything > > > > > > I am inside the repo container and still its showing old timestamp on > > > all nova file and i check all file they seems didn't change > > > > > > at this path in repo container /var/www/repo/openstackgit/nova/nova > > > > > > repo-build.yml should update that dir right? > > > On Thu, Aug 23, 2018 at 2:58 PM Satish Patel wrote: > > > > > > > > Thanks Matthew, > > > > > > > > Going to do that and will update you in few min. 
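For reference, the fix being worked out in this thread boils down to overriding the nova SHA that openstack-ansible deploys and then rebuilding and redeploying nova. A condensed sketch follows; the variable names, SHA and playbook names are taken from the messages in this thread, while the /opt/openstack-ansible path is an assumption based on a default OSA checkout:

```
# /etc/openstack_deploy/user_variables.yml -- all three nova_git_* overrides:
#   nova_git_repo: https://git.openstack.org/openstack/nova
#   nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c  # head of stable/queens at the time
#   nova_git_project_group: nova_all

# Rebuild the nova venv in the repo container, then redeploy nova:
cd /opt/openstack-ansible/playbooks
openstack-ansible repo-build.yml
openstack-ansible os-nova-install.yml
```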
> > > > On Thu, Aug 23, 2018 at 2:47 PM Matthew Thode wrote: > > > > > > > > > > On 18-08-23 14:33:44, Satish Patel wrote: > > > > > > Matt, > > > > > > > > > > > > I am going to override following in user_variable.yml file in that > > > > > > case do i need to run ./bootstrap-ansible.sh script? > > > > > > > > > > > > ## Nova service > > > > > > nova_git_repo: https://git.openstack.org/openstack/nova > > > > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > > > > > > HEAD of "stable/queens" as of 06.08.2018 > > > > > > nova_git_project_group: nova_all > > > > > > > > > > > > > > > > > > > > > > > > On Thu, Aug 23, 2018 at 7:18 AM, Satish Patel wrote: > > > > > > > I'm testing this in lab, no load yet > > > > > > > > > > > > > > Sent from my iPhone > > > > > > > > > > > > > >> On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote: > > > > > > >> > > > > > > >>> On 18-08-22 23:04:57, Satish Patel wrote: > > > > > > >>> Mathew, > > > > > > >>> > > > > > > >>> I haven't applied any patch yet but i am noticing in cluster some host > > > > > > >>> migrating VM super fast and some host migrating very slow. Is this > > > > > > >>> known behavior? > > > > > > >>> > > > > > > >>> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode > > > > > > >>> wrote: > > > > > > >>>> On 18-08-22 10:58:48, Satish Patel wrote: > > > > > > >>>>> Matthew, > > > > > > >>>>> > > > > > > >>>>> I have two option looks like, correct me if i am wrong. > > > > > > >>>>> > > > > > > >>>>> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to > > > > > > >>>>> 17.0.8-23-g0aff517 and upgrade full OSA > > > > > > >>>>> > > > > > > >>>>> 2. Just do override as you said "nova_git_install_branch:" in my > > > > > > >>>>> /etc/openstack_deploy/user_variables.yml file, and run playbooks. > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> I think option [2] is safe to just touch specific component, also am i > > > > > > >>>>> correct about override in /etc/openstack_deploy/user_variables.yml > > > > > > >>>>> file? > > > > > > >>>>> > > > > > > >>>>> You mentioned "nova_git_install_branch: > > > > > > >>>>> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be > > > > > > >>>>> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> > > > > > > >>>>> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode > > > > > > >>>>> wrote: > > > > > > >>>>>> On 18-08-22 10:33:11, Satish Patel wrote: > > > > > > >>>>>>> Thanks Matthew, > > > > > > >>>>>>> > > > > > > >>>>>>> Can i put that sha in my OSA at > > > > > > >>>>>>> playbooks/defaults/repo_packages/openstack_services.yml by hand and > > > > > > >>>>>>> run playbooks [repo/nova] ? > > > > > > >>>>>>> > > > > > > >>>>>>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > > > > > > >>>>>>> wrote: > > > > > > >>>>>>>> On 18-08-22 08:35:09, Satish Patel wrote: > > > > > > >>>>>>>>> Currently in stable/queens i am seeing this sha > > > > > > >>>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > > > > > > >>>>>>>>> > > > > > > >>>>>>>>> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > > > > > > >>>>>>>>> wrote: > > > > > > >>>>>>>>>> On 18-08-22 01:57:17, Satish Patel wrote: > > > > > > >>>>>>>>>>> What I need to upgrade, any specific component? 
> > > > > > >>>>>>>>>>> > > > > > > >>>>>>>>>>> I have deployed openstack-ansible > > > > > > >>>>>>>>>>> > > > > > > >>>>>>>>>>> Sent from my iPhone > > > > > > >>>>>>>>>>> > > > > > > >>>>>>>>>>>>> On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > > > > > > >>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>> On 18-08-22 01:02:53, Satish Patel wrote: > > > > > > >>>>>>>>>>>>> Matthew, > > > > > > >>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>> Thanks for reply, Look like i don't have this patch > > > > > > >>>>>>>>>>>>> https://review.openstack.org/#/c/591761/ > > > > > > >>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>> So i have to patch following 3 file manually? > > > > > > >>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>> nova/tests/unit/virt/libvirt/test_driver.py213 > > > > > > >>>>>>>>>>>>> nova/tests/unit/virt/test_virt_drivers.py2 > > > > > > >>>>>>>>>>>>> nova/virt/libvirt/driver.py > > > > > > >>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > > > > > > >>>>>>>>>>>>> wrote: > > > > > > >>>>>>>>>>>>>> On 18-08-22 00:27:08, Satish Patel wrote: > > > > > > >>>>>>>>>>>>>>> Folks, > > > > > > >>>>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>>>> I am running openstack queens and hypervisor is kvm, my live migration > > > > > > >>>>>>>>>>>>>>> working fine. but somehow it stuck to 8 Mb network speed and taking > > > > > > >>>>>>>>>>>>>>> long time to migrate 1G instance. I have 10Gbps network and i have > > > > > > >>>>>>>>>>>>>>> tried to copy 10G file between two compute node and it did copy in 2 > > > > > > >>>>>>>>>>>>>>> minute, so i am not seeing any network issue also. > > > > > > >>>>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>>>> it seem live_migration has some bandwidth limit, I have tried > > > > > > >>>>>>>>>>>>>>> following option in nova.conf but it didn't work > > > > > > >>>>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > > > > >>>>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>>>> My nova.conf look like following: > > > > > > >>>>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>>>> live_migration_uri = > > > > > > >>>>>>>>>>>>>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > > > > > > >>>>>>>>>>>>>>> live_migration_tunnelled = True > > > > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > > > > >>>>>>>>>>>>>>> hw_disk_discard = unmap > > > > > > >>>>>>>>>>>>>>> disk_cachemodes = network=writeback > > > > > > >>>>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>>> > > > > > > >>>>>>>>>>>>>> Do you have a this patch (and a couple of patches up to it)? > > > > > > >>>>>>>>>>>>>> https://bugs.launchpad.net/nova/+bug/1786346 > > > > > > >>>>>>>>>>>>>> > > > > > > >>>>>>>>>>>> > > > > > > >>>>>>>>>>>> I don't know if that would cleanly apply (there are other patches that > > > > > > >>>>>>>>>>>> changed those functions within the last month and a half. It'd be best > > > > > > >>>>>>>>>>>> to upgrade and not do just one patch (which would be an untested > > > > > > >>>>>>>>>>>> process). > > > > > > >>>>>>>>>>>> > > > > > > >>>>>>>>>> > > > > > > >>>>>>>>>> The sha for nova has not been updated yet (next update is 24-48 hours > > > > > > >>>>>>>>>> away iirc), once that's done you can use the head of stable/queens from > > > > > > >>>>>>>>>> OSA and run a inter-series upgrade (but the minimal thing to do would be > > > > > > >>>>>>>>>> to run repo-build and os-nova plays). I'm not sure when that sha bump > > > > > > >>>>>>>>>> will be tagged in a full release if you would rather wait on that. 
> > > > > > >>>>>>>> > > > > > > >>>>>>>> it's this sha that needs updating. > > > > > > >>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > > > > > > >>>>>>>> > > > > > > >>>>>> > > > > > > >>>>>> I'm not sure how you are doing overrides, but set the following as an > > > > > > >>>>>> override, then rerun the repo-build playbook (to rebuild the nova venv) > > > > > > >>>>>> then rerun the nova playbook to install it. > > > > > > >>>>>> > > > > > > >>>>>> nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > > > > > > >>>>>> > > > > > > >>>> > > > > > > >>>> The sha I gave was head of the queens branch of openstack/nova. It's > > > > > > >>>> also the commit in that branch that containst the fix. > > > > > > >>>> > > > > > > >> > > > > > > >> I don't think that is known behavior, different memory pressure causing > > > > > > >> the difference maybe? > > > > > > >> > > > > > > > > > > You just need the following var. > > > > > > > > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c > > > > > > > > > > Once defined you'll need to `cd` into the playbooks directory within > > > > > openstack-ansible and run `openstack-ansible repo-build.yml` and > > > > > `openstack-ansible os-nova-install.yml`. That should get you updated. > > > > > > > > > > -- > > > > > Matthew Thode (prometheanfire) From openstack at medberry.net Fri Aug 24 00:27:07 2018 From: openstack at medberry.net (David Medberry) Date: Thu, 23 Aug 2018 18:27:07 -0600 Subject: [Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend In-Reply-To: References: Message-ID: Hi Chris, Unless I overlooked something, I don't see Cinder or Ceph versions posted. Feel free to just post the codenames but give us some inkling. On Thu, Aug 23, 2018 at 3:26 PM, Chris Martin wrote: > I back up my volumes daily, using incremental backups to minimize > network traffic and storage consumption. I want to periodically remove > old backups, and during this pruning operation, avoid entering a state > where a volume has no recent backups. Ceph RBD appears to support this > workflow, but unfortunately, Cinder does not. I can only delete the > *latest* backup of a given volume, and this precludes any reasonable > way to prune backups. Here, I'll show you. > > Let's make three backups of the same volume: > ``` > openstack volume backup create --name backup-1 --force volume-foo > openstack volume backup create --name backup-2 --force volume-foo > openstack volume backup create --name backup-3 --force volume-foo > ``` > > Cinder reports the following via `volume backup show`: > - backup-1 is not an incremental backup, but backup-2 and backup-3 are > (`is_incremental`). > - All but the latest backup have dependent backups > (`has_dependent_backups`). > > We take a backup every day, and after a week we're on backup-7. We > want to start deleting older backups so that we don't keep > accumulating backups forever! What happens when we try? > > ``` > # openstack volume backup delete backup-1 > Failed to delete backup with name or ID 'backup-1': Invalid backup: > Incremental backups exist for this backup. (HTTP 400) > ``` > > We can't delete backup-1 because Cinder considers it a "base" backup > which `has_dependent_backups`. What about backup-2? Same story. Adding > the `--force` flag just gives a slightly different error message. The > *only* backup that Cinder will delete is backup-7 -- the very latest > one. 
This means that if we want to remove the oldest backups of a > volume, *we must first remove all newer backups of the same volume*, > i.e. delete literally all of our backups. > > Also, we cannot force creation of another *full* (non-incrmental) > backup in order to free all of the earlier backups for removal. > (Omitting the `--incremental` flag has no effect; you still get an > incremental backup.) > > Can we hope for better? Let's reach behind Cinder to the Ceph backend. > Volume backups are represented as a "base" RBD image with a snapshot > for each incremental backup: > > ``` > # rbd snap ls volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base > SNAPID NAME > SIZE TIMESTAMP > 577 backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 > 10240 MB Thu Aug 23 10:57:48 2018 > 578 backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 > 10240 MB Thu Aug 23 11:05:43 2018 > 579 backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 > 10240 MB Thu Aug 23 11:06:47 2018 > 580 backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 > 10240 MB Thu Aug 23 11:22:23 2018 > 581 backup.8cd035b9-63bf-4920-a8ec-c07ba370fb94.snap.1535048538.72 > 10240 MB Thu Aug 23 11:22:47 2018 > 582 backup.cb7b6920-a79e-408e-b84f-5269d80235b2.snap.1535048559.82 > 10240 MB Thu Aug 23 11:23:04 2018 > 583 backup.a7871768-1863-435f-be9d-b50af47c905a.snap.1535048588.26 > 10240 MB Thu Aug 23 11:23:31 2018 > 584 backup.b18522e4-d237-4ee5-8786-78eac3d590de.snap.1535052729.52 > 10240 MB Thu Aug 23 12:32:43 2018 > ``` > > It seems that each snapshot stands alone and doesn't depend on others. > Ceph lets me delete the older snapshots. > > ``` > # rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@ > backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 > Removing snap: 100% complete...done. > # rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@ > backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 > Removing snap: 100% complete...done. > ``` > > Now that we nuked backup-1 and backup-4, can we still restore from > backup-7 and launch an instance with it? > > ``` > openstack volume create --size 10 --bootable volume-foo-restored > openstack volume backup restore backup-7 volume-foo-restored > openstack server create --volume volume-foo-restored --flavor medium1 > instance-restored-from-backup-7 > ``` > > Yes! We can SSH to the instance and it appears intact. > > Perhaps each snapshot in Ceph stores a complete diff from the base RBD > image (rather than each successive snapshot depending on the last). If > this is true, then Cinder is unnecessarily protective of older > backups. Cinder represents these as "with dependents" and doesn't let > us touch them, even though Ceph will let us delete older RBD > snapshots, apparently without disrupting newer snapshots of the same > volume. If we could remove this limitation, Cinder backups would be > significantly more useful for us. We mostly host servers with > non-cloud-native workloads (IaaS for research scientists). For these, > full-disk backups at the infrastructure level are an important > supplement to file-level or application-level backups. > > It would be great if someone else could confirm or disprove what I'm > seeing here. I'd also love to hear from anyone else using Cinder > backups this way. 
> > Regards, > > Chris Martin at CyVerse > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmart at cyverse.org Fri Aug 24 00:54:32 2018 From: cmart at cyverse.org (Chris Martin) Date: Thu, 23 Aug 2018 20:54:32 -0400 Subject: [Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend In-Reply-To: References: Message-ID: Apologies -- I'm running Pike release of Cinder, Luminous release of Ceph. Deployed with OpenStack-Ansible and Ceph-Ansible respectively. On Thu, Aug 23, 2018 at 8:27 PM, David Medberry wrote: > Hi Chris, > > Unless I overlooked something, I don't see Cinder or Ceph versions posted. > > Feel free to just post the codenames but give us some inkling. > > On Thu, Aug 23, 2018 at 3:26 PM, Chris Martin wrote: >> >> I back up my volumes daily, using incremental backups to minimize >> network traffic and storage consumption. I want to periodically remove >> old backups, and during this pruning operation, avoid entering a state >> where a volume has no recent backups. Ceph RBD appears to support this >> workflow, but unfortunately, Cinder does not. I can only delete the >> *latest* backup of a given volume, and this precludes any reasonable >> way to prune backups. Here, I'll show you. >> >> Let's make three backups of the same volume: >> ``` >> openstack volume backup create --name backup-1 --force volume-foo >> openstack volume backup create --name backup-2 --force volume-foo >> openstack volume backup create --name backup-3 --force volume-foo >> ``` >> >> Cinder reports the following via `volume backup show`: >> - backup-1 is not an incremental backup, but backup-2 and backup-3 are >> (`is_incremental`). >> - All but the latest backup have dependent backups >> (`has_dependent_backups`). >> >> We take a backup every day, and after a week we're on backup-7. We >> want to start deleting older backups so that we don't keep >> accumulating backups forever! What happens when we try? >> >> ``` >> # openstack volume backup delete backup-1 >> Failed to delete backup with name or ID 'backup-1': Invalid backup: >> Incremental backups exist for this backup. (HTTP 400) >> ``` >> >> We can't delete backup-1 because Cinder considers it a "base" backup >> which `has_dependent_backups`. What about backup-2? Same story. Adding >> the `--force` flag just gives a slightly different error message. The >> *only* backup that Cinder will delete is backup-7 -- the very latest >> one. This means that if we want to remove the oldest backups of a >> volume, *we must first remove all newer backups of the same volume*, >> i.e. delete literally all of our backups. >> >> Also, we cannot force creation of another *full* (non-incrmental) >> backup in order to free all of the earlier backups for removal. >> (Omitting the `--incremental` flag has no effect; you still get an >> incremental backup.) >> >> Can we hope for better? Let's reach behind Cinder to the Ceph backend. 
>> Volume backups are represented as a "base" RBD image with a snapshot >> for each incremental backup: >> >> ``` >> # rbd snap ls volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base >> SNAPID NAME >> SIZE TIMESTAMP >> 577 backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 >> 10240 MB Thu Aug 23 10:57:48 2018 >> 578 backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 >> 10240 MB Thu Aug 23 11:05:43 2018 >> 579 backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 >> 10240 MB Thu Aug 23 11:06:47 2018 >> 580 backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 >> 10240 MB Thu Aug 23 11:22:23 2018 >> 581 backup.8cd035b9-63bf-4920-a8ec-c07ba370fb94.snap.1535048538.72 >> 10240 MB Thu Aug 23 11:22:47 2018 >> 582 backup.cb7b6920-a79e-408e-b84f-5269d80235b2.snap.1535048559.82 >> 10240 MB Thu Aug 23 11:23:04 2018 >> 583 backup.a7871768-1863-435f-be9d-b50af47c905a.snap.1535048588.26 >> 10240 MB Thu Aug 23 11:23:31 2018 >> 584 backup.b18522e4-d237-4ee5-8786-78eac3d590de.snap.1535052729.52 >> 10240 MB Thu Aug 23 12:32:43 2018 >> ``` >> >> It seems that each snapshot stands alone and doesn't depend on others. >> Ceph lets me delete the older snapshots. >> >> ``` >> # rbd snap rm >> volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base at backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 >> Removing snap: 100% complete...done. >> # rbd snap rm >> volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base at backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 >> Removing snap: 100% complete...done. >> ``` >> >> Now that we nuked backup-1 and backup-4, can we still restore from >> backup-7 and launch an instance with it? >> >> ``` >> openstack volume create --size 10 --bootable volume-foo-restored >> openstack volume backup restore backup-7 volume-foo-restored >> openstack server create --volume volume-foo-restored --flavor medium1 >> instance-restored-from-backup-7 >> ``` >> >> Yes! We can SSH to the instance and it appears intact. >> >> Perhaps each snapshot in Ceph stores a complete diff from the base RBD >> image (rather than each successive snapshot depending on the last). If >> this is true, then Cinder is unnecessarily protective of older >> backups. Cinder represents these as "with dependents" and doesn't let >> us touch them, even though Ceph will let us delete older RBD >> snapshots, apparently without disrupting newer snapshots of the same >> volume. If we could remove this limitation, Cinder backups would be >> significantly more useful for us. We mostly host servers with >> non-cloud-native workloads (IaaS for research scientists). For these, >> full-disk backups at the infrastructure level are an important >> supplement to file-level or application-level backups. >> >> It would be great if someone else could confirm or disprove what I'm >> seeing here. I'd also love to hear from anyone else using Cinder >> backups this way. >> >> Regards, >> >> Chris Martin at CyVerse >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > From yu_qearl at 163.com Fri Aug 24 03:01:40 2018 From: yu_qearl at 163.com (=?GBK?B?0+Dmw+bD?=) Date: Fri, 24 Aug 2018 11:01:40 +0800 (CST) Subject: [Openstack] The problem of how to update resouce allocation ratio dynamically. 
Message-ID: <47ad3358.68de.16569e15778.Coremail.yu_qearl@163.com> Hi: Sorry fo bothering everyone. Now I update my openstack to queen,and use the nova-placement-api to provider resource. When I use "/resource_providers/{uuid}/inventories/MEMORY_MB" to update memory_mb allocation_ratio, and it success.But after some minutes,it recove to old value automatically. Then I find it report the value from compute_node in nova-compute automatically. But the allocation_ratio of compute_node was came from the nova.conf.So that means,We can't update the allocation_ratio until we update the nova.conf? But I wish to update the allocation_ratio dynamically other to update the nova.conf. I don't known how to update resouce allocation ratio dynamically. -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Fri Aug 24 04:07:13 2018 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 24 Aug 2018 00:07:13 -0400 Subject: [Openstack] live_migration only using 8 Mb speed In-Reply-To: References: <20180822142451.rni6ivioqyuyyzge@gentoo.org> <20180822144626.nvsodno3vhjhlhmd@gentoo.org> <20180822152805.c4gl55yz4jibtqre@gentoo.org> <20180823063032.jcf2xsfpatws7y3a@gentoo.org> <20180823184720.zp4jdf7nd7mbgams@gentoo.org> Message-ID: Forgive me, by mistake i grab wrong commit and that was the reason i didn't see any changer after applying patch. It works after applying correct version :) Thanks On Thu, Aug 23, 2018 at 6:36 PM Satish Patel wrote: > > I have updated this bug here something is wrong: > https://bugs.launchpad.net/nova/+bug/1786346 > > After nova upgrade i have compared these 3 files > https://review.openstack.org/#/c/591761/ and i am not seeing any > change here so look like this is not a complete patch. > > Are you sure they push this changes in nova repo? > On Thu, Aug 23, 2018 at 4:36 PM Satish Patel wrote: > > > > I have upgraded my nova and all nova component got upgrade but still > > my live_migration running on 8Mbps speed.. what else is wrong here? > > > > I am using CentOS 7.5 > > > > On Thu, Aug 23, 2018 at 3:26 PM Satish Patel wrote: > > > > > > Look like it need all 3 line in user_variables.yml file.. after > > > putting all 3 lines it works!! > > > > > > ## Nova service > > > nova_git_repo: https://git.openstack.org/openstack/nova > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > > > HEAD of "stable/queens" as of 06.08.2018 > > > nova_git_project_group: nova_all > > > On Thu, Aug 23, 2018 at 3:12 PM Satish Patel wrote: > > > > > > > > Matt, > > > > > > > > I've added "nova_git_install_branch: > > > > a9c9285a5a68ab89a6543d143c364d90a01cd51c" in user_variables.yml and > > > > run repo-build.yml playbook but it didn't change anything > > > > > > > > I am inside the repo container and still its showing old timestamp on > > > > all nova file and i check all file they seems didn't change > > > > > > > > at this path in repo container /var/www/repo/openstackgit/nova/nova > > > > > > > > repo-build.yml should update that dir right? > > > > On Thu, Aug 23, 2018 at 2:58 PM Satish Patel wrote: > > > > > > > > > > Thanks Matthew, > > > > > > > > > > Going to do that and will update you in few min. > > > > > On Thu, Aug 23, 2018 at 2:47 PM Matthew Thode wrote: > > > > > > > > > > > > On 18-08-23 14:33:44, Satish Patel wrote: > > > > > > > Matt, > > > > > > > > > > > > > > I am going to override following in user_variable.yml file in that > > > > > > > case do i need to run ./bootstrap-ansible.sh script? 
> > > > > > > > > > > > > > ## Nova service > > > > > > > nova_git_repo: https://git.openstack.org/openstack/nova > > > > > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c # > > > > > > > HEAD of "stable/queens" as of 06.08.2018 > > > > > > > nova_git_project_group: nova_all > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Thu, Aug 23, 2018 at 7:18 AM, Satish Patel wrote: > > > > > > > > I'm testing this in lab, no load yet > > > > > > > > > > > > > > > > Sent from my iPhone > > > > > > > > > > > > > > > >> On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote: > > > > > > > >> > > > > > > > >>> On 18-08-22 23:04:57, Satish Patel wrote: > > > > > > > >>> Mathew, > > > > > > > >>> > > > > > > > >>> I haven't applied any patch yet but i am noticing in cluster some host > > > > > > > >>> migrating VM super fast and some host migrating very slow. Is this > > > > > > > >>> known behavior? > > > > > > > >>> > > > > > > > >>> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode > > > > > > > >>> wrote: > > > > > > > >>>> On 18-08-22 10:58:48, Satish Patel wrote: > > > > > > > >>>>> Matthew, > > > > > > > >>>>> > > > > > > > >>>>> I have two option looks like, correct me if i am wrong. > > > > > > > >>>>> > > > > > > > >>>>> 1. I have two option, upgrade minor release from 17.0.7-6-g9187bb1 to > > > > > > > >>>>> 17.0.8-23-g0aff517 and upgrade full OSA > > > > > > > >>>>> > > > > > > > >>>>> 2. Just do override as you said "nova_git_install_branch:" in my > > > > > > > >>>>> /etc/openstack_deploy/user_variables.yml file, and run playbooks. > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> I think option [2] is safe to just touch specific component, also am i > > > > > > > >>>>> correct about override in /etc/openstack_deploy/user_variables.yml > > > > > > > >>>>> file? > > > > > > > >>>>> > > > > > > > >>>>> You mentioned "nova_git_install_branch: > > > > > > > >>>>> dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but i believe it should be > > > > > > > >>>>> "a9c9285a5a68ab89a6543d143c364d90a01cd51c" am i correct? > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> > > > > > > > >>>>> On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode > > > > > > > >>>>> wrote: > > > > > > > >>>>>> On 18-08-22 10:33:11, Satish Patel wrote: > > > > > > > >>>>>>> Thanks Matthew, > > > > > > > >>>>>>> > > > > > > > >>>>>>> Can i put that sha in my OSA at > > > > > > > >>>>>>> playbooks/defaults/repo_packages/openstack_services.yml by hand and > > > > > > > >>>>>>> run playbooks [repo/nova] ? > > > > > > > >>>>>>> > > > > > > > >>>>>>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode > > > > > > > >>>>>>> wrote: > > > > > > > >>>>>>>> On 18-08-22 08:35:09, Satish Patel wrote: > > > > > > > >>>>>>>>> Currently in stable/queens i am seeing this sha > > > > > > > >>>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112 > > > > > > > >>>>>>>>> > > > > > > > >>>>>>>>> On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode > > > > > > > >>>>>>>>> wrote: > > > > > > > >>>>>>>>>> On 18-08-22 01:57:17, Satish Patel wrote: > > > > > > > >>>>>>>>>>> What I need to upgrade, any specific component? 
> > > > > > > >>>>>>>>>>> > > > > > > > >>>>>>>>>>> I have deployed openstack-ansible > > > > > > > >>>>>>>>>>> > > > > > > > >>>>>>>>>>> Sent from my iPhone > > > > > > > >>>>>>>>>>> > > > > > > > >>>>>>>>>>>>> On Aug 22, 2018, at 1:06 AM, Matthew Thode wrote: > > > > > > > >>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>> On 18-08-22 01:02:53, Satish Patel wrote: > > > > > > > >>>>>>>>>>>>> Matthew, > > > > > > > >>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>> Thanks for reply, Look like i don't have this patch > > > > > > > >>>>>>>>>>>>> https://review.openstack.org/#/c/591761/ > > > > > > > >>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>> So i have to patch following 3 file manually? > > > > > > > >>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>> nova/tests/unit/virt/libvirt/test_driver.py213 > > > > > > > >>>>>>>>>>>>> nova/tests/unit/virt/test_virt_drivers.py2 > > > > > > > >>>>>>>>>>>>> nova/virt/libvirt/driver.py > > > > > > > >>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>> On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode > > > > > > > >>>>>>>>>>>>> wrote: > > > > > > > >>>>>>>>>>>>>> On 18-08-22 00:27:08, Satish Patel wrote: > > > > > > > >>>>>>>>>>>>>>> Folks, > > > > > > > >>>>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>>>> I am running openstack queens and hypervisor is kvm, my live migration > > > > > > > >>>>>>>>>>>>>>> working fine. but somehow it stuck to 8 Mb network speed and taking > > > > > > > >>>>>>>>>>>>>>> long time to migrate 1G instance. I have 10Gbps network and i have > > > > > > > >>>>>>>>>>>>>>> tried to copy 10G file between two compute node and it did copy in 2 > > > > > > > >>>>>>>>>>>>>>> minute, so i am not seeing any network issue also. > > > > > > > >>>>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>>>> it seem live_migration has some bandwidth limit, I have tried > > > > > > > >>>>>>>>>>>>>>> following option in nova.conf but it didn't work > > > > > > > >>>>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > > > > > >>>>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>>>> My nova.conf look like following: > > > > > > > >>>>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>>>> live_migration_uri = > > > > > > > >>>>>>>>>>>>>>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" > > > > > > > >>>>>>>>>>>>>>> live_migration_tunnelled = True > > > > > > > >>>>>>>>>>>>>>> live_migration_bandwidth = 500 > > > > > > > >>>>>>>>>>>>>>> hw_disk_discard = unmap > > > > > > > >>>>>>>>>>>>>>> disk_cachemodes = network=writeback > > > > > > > >>>>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>>>> Do you have a this patch (and a couple of patches up to it)? > > > > > > > >>>>>>>>>>>>>> https://bugs.launchpad.net/nova/+bug/1786346 > > > > > > > >>>>>>>>>>>>>> > > > > > > > >>>>>>>>>>>> > > > > > > > >>>>>>>>>>>> I don't know if that would cleanly apply (there are other patches that > > > > > > > >>>>>>>>>>>> changed those functions within the last month and a half. It'd be best > > > > > > > >>>>>>>>>>>> to upgrade and not do just one patch (which would be an untested > > > > > > > >>>>>>>>>>>> process). > > > > > > > >>>>>>>>>>>> > > > > > > > >>>>>>>>>> > > > > > > > >>>>>>>>>> The sha for nova has not been updated yet (next update is 24-48 hours > > > > > > > >>>>>>>>>> away iirc), once that's done you can use the head of stable/queens from > > > > > > > >>>>>>>>>> OSA and run a inter-series upgrade (but the minimal thing to do would be > > > > > > > >>>>>>>>>> to run repo-build and os-nova plays). 
I'm not sure when that sha bump > > > > > > > >>>>>>>>>> will be tagged in a full release if you would rather wait on that. > > > > > > > >>>>>>>> > > > > > > > >>>>>>>> it's this sha that needs updating. > > > > > > > >>>>>>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/openstack_services.yml#L173 > > > > > > > >>>>>>>> > > > > > > > >>>>>> > > > > > > > >>>>>> I'm not sure how you are doing overrides, but set the following as an > > > > > > > >>>>>> override, then rerun the repo-build playbook (to rebuild the nova venv) > > > > > > > >>>>>> then rerun the nova playbook to install it. > > > > > > > >>>>>> > > > > > > > >>>>>> nova_git_install_branch: dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b > > > > > > > >>>>>> > > > > > > > >>>> > > > > > > > >>>> The sha I gave was head of the queens branch of openstack/nova. It's > > > > > > > >>>> also the commit in that branch that containst the fix. > > > > > > > >>>> > > > > > > > >> > > > > > > > >> I don't think that is known behavior, different memory pressure causing > > > > > > > >> the difference maybe? > > > > > > > >> > > > > > > > > > > > > You just need the following var. > > > > > > > > > > > > nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c > > > > > > > > > > > > Once defined you'll need to `cd` into the playbooks directory within > > > > > > openstack-ansible and run `openstack-ansible repo-build.yml` and > > > > > > `openstack-ansible os-nova-install.yml`. That should get you updated. > > > > > > > > > > > > -- > > > > > > Matthew Thode (prometheanfire) From satish.txt at gmail.com Fri Aug 24 05:13:55 2018 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 24 Aug 2018 01:13:55 -0400 Subject: [Openstack] live migration dedicated network issue Message-ID: I am trying to set dedicated network for live migration and for that i did following in nova.conf My dedicated network is 172.29.0.0/24 live_migration_uri = "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa" live_migration_tunnelled = False live_migration_inbound_addr = "172.29.0.25" When i am trying to migrate VM i am getting following error, despite error i am able to ping remote machine and ssh also. why i am getting this error? 2018-08-24 01:07:55.608 61304 ERROR nova.virt.libvirt.driver [req-26561823-4ae0-43ca-b6fe-5dd9609e796b eebe97b4bc714b8f814af8a44d08c2a4 2927a06cf30f4f7e938fdda2cc05aed2 - default default] [instance: a61e7e6f-f819-4ddf-9314-8a142515f3d6] Live Migration failure: unable to connect to server at '172.29.0.24:49152': No route to host: libvirtError: unable to connect to server at '172.29.0.24:49152': No route to host From kennelson11 at gmail.com Fri Aug 24 18:15:26 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 24 Aug 2018 11:15:26 -0700 Subject: [Openstack] Berlin Community Contributor Awards Message-ID: Hello Everyone! As we approach the Summit (still a ways away thankfully), I thought I would kick off the Community Contributor Award nominations early this round. For those of you that already know what they are, here is the form[1]. For those of you that have never heard of the CCA, I'll briefly explain what they are :) We all know people in the community that do the dirty jobs, we all know people that will bend over backwards trying to help someone new, we all know someone that is a savant in some area of the code we could never hope to understand. 
These people rarely get the thanks they deserve and the Community Contributor Awards are a chance to make sure they know that they are appreciated for the amazing work they do and skills they have. So go forth and nominate these amazing community members[1]! Nominations will close on October 21st at 7:00 UTC and winners will be announced at the OpenStack Summit in Berlin. -Kendall (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas -------------- next part -------------- An HTML attachment was scrubbed... URL: From Remo at italy1.com Fri Aug 24 19:02:10 2018 From: Remo at italy1.com (Remo Mattei) Date: Fri, 24 Aug 2018 12:02:10 -0700 Subject: [Openstack] OVS and VXLan Message-ID: <271EBD35-D351-43CD-940F-844D08C13DA3@italy1.com> Hello guys, The company I am working for is looking for some use cases, which shows the usage of OVS with VXLan. We are going to use Tripleo, and keep the basic networking options. I heard some people but wanted to check with the list. Thanks, Remo -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From ristovvtt at gmail.com Mon Aug 27 13:40:06 2018 From: ristovvtt at gmail.com (Risto Vaaraniemi) Date: Mon, 27 Aug 2018 16:40:06 +0300 Subject: [Openstack] [nova]Capacity discrepancy between command line and MySQL query Message-ID: Hi, I tried to migrate a guest to another host but it failed with a message saying there's not enough capacity on the target host even though the server should me nearly empty. The guest I'm trying to move needs 4 cores, 4 GB of memory and 50 GB of disk. Each compute node should have 20 cores, 128 GB RAM & 260 GB HD space. When I check it with "openstack host show compute1" I see that there's plenty of free resources. However, when I check it directly in MariaDB nova_api or using Placement API calls I see different results i.e. not enough cores & disk. Is there a safe way to make the different registries / databases to match? Can I just overwrite it using the Placement API? I'm using Pike. BR, Risto PS I did make a few attempts to resize the guest that now runs on compute1 but for some reason they failed and by default the resize tries to restart the resized guest on a different host (compute1). In the end I was able to do the resize on the same host (compute2). I was wondering if the resize attempts messed up the compute1 resource management. PPS I wrote a question about this on ask.openstack.org in mid-July but the message is still in moderation: https://ask.openstack.org/en/question/115200/ From satish.txt at gmail.com Mon Aug 27 17:49:26 2018 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 27 Aug 2018 13:49:26 -0400 Subject: [Openstack] Horizon customize IP Address column In-Reply-To: References: Message-ID: Folks, any idea or clue ? On Mon, Aug 13, 2018 at 9:51 AM Satish Patel wrote: > > Folks, > > Quick question is there a way in horizon i remove network information > from "IP address" column in instance tab when we have multiple > interface, because its fonts are so big and looks ugly when you have > many instance. > > Find attached screenshot that is what i am talking about, i don't want > network name in "IP address" column, just IP address is enough > > Any idea how to get rid of that field? 
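On the Horizon question just above (dropping the network names from the IP Address column): one possible approach, rather than patching Horizon itself, is the HORIZON_CONFIG customization_module hook, which lets a deployment swap the transform used by the instances table's IP column. The sketch below is untested; the module path is hypothetical, and the base_columns['ip'] key and the shape of instance.addresses should be verified against your Horizon release.

```
# local_settings.py (hypothetical module path):
#   HORIZON_CONFIG["customization_module"] = "my_overrides.overrides"

# my_overrides/overrides.py -- untested sketch, verify names against your release
from django.utils.translation import ugettext_lazy as _

from horizon import tables
from openstack_dashboard.dashboards.project.instances import tables as instance_tables


def get_ips_only(instance):
    """Join all addresses of the server, dropping the network names."""
    addresses = getattr(instance, 'addresses', None) or {}
    ips = []
    for addr_list in addresses.values():
        ips.extend(a.get('addr') for a in addr_list if a.get('addr'))
    return ", ".join(ips)


# Replace the stock 'ip' column (which renders "network: address" pairs)
# with one that shows the bare addresses only.
instance_tables.InstancesTable.base_columns['ip'] = tables.Column(
    get_ips_only, verbose_name=_("IP Address"))
```

A pure-CSS route (a custom theme that shrinks the font in that column) is also possible, but the override above is closer to what was asked for.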
From haleyb.dev at gmail.com Mon Aug 27 18:33:53 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 27 Aug 2018 14:33:53 -0400 Subject: [Openstack] Help with ipv6 self-service and ip6tables rule on mangle chain In-Reply-To: References: Message-ID: On 08/23/2018 12:53 PM, Jorge Luiz Correa wrote: > Hi all > > I'm deploying a Queens on Ubuntu 18.04 with one controller, one network > controller e for now one compute node. I'm using ML2 with linuxbridge > mechanism driver and a self-service type of network. This is is a dual > stack environment (v4 and v6). > > IPv4 is working fine, NATs oks and packets flowing. > > With IPv6 I'm having a problem. Packets from external networks to a > project network are stopping on qrouter namespace firewall. I've a > project with one network, one v4 subnet and one v6 subnet. Adressing are > all ok, virtual machines are getting their IPs and can ping the network > gateway. > > However, from external to project network, using ipv6, the packets stop > in a DROP rule inside de qrouter namespace. This looks like the address scopes of the subnets are different, so the rule to mark packets is not being inserted. How are you assigning the subnet addresses on the external and internal networks? Typically you would define a subnet pool and allocate from that, which should work. Perhaps this guide would help with that: https://docs.openstack.org/neutron/queens/admin/config-address-scopes.html The last sentence there seems to describe the problem you're having: "If the address scopes match between networks then pings and other traffic route directly through. If the scopes do not match between networks, the router either drops the traffic or applies NAT to cross scope boundaries." IPv6 in neutron does not use NAT... -Brian > The ip6tables path is: > > mangle prerouting -> neutron-l3-agent-PREROUTING -> > neutron-l3-agent-scope -> here we have a MARK rule: > > pkts bytes target     prot opt in     out     source > destination >     3   296 MARK       all      qr-7f2944e7-cc * > ::/0                 ::/0                 MARK xset 0x4000000/0xffff0000 > > qr interface is the internal network interface of the project (subnet > gateway). So, packets from this interface are marked. > > But, the returning is the problem. The packets doesn't returns. I've > rules from the nexthop firewall and packets arrive on the external > bridge (network node). But, when they arrive on external interface of > the qrouter namespace, they are filtered. > > Inside qrouter namespace this is the rule: > > ip netns exec qrouter-5689783d-52c0-4d2f-bef5-99b111f8ef5f ip6tables -t > mangle -L -n -v > > ... > Chain neutron-l3-agent-scope (1 references) >  pkts bytes target     prot opt in     out     source > destination >     0     0 DROP       all      *      qr-7f2944e7-cc > ::/0                 ::/0                 mark match ! 0x4000000/0xffff0000 > ... > > If I create the following rule everything works great: > > ip netns exec qrouter-5689783d-52c0-4d2f-bef5-99b111f8ef5f ip6tables -t > mangle -I neutron-l3-agent-scope -i qg-b6757bfe-c1 -j MARK --set-xmark > 0x4000000/0xffff0000 > > where qg is the external interface of virtual router. So, if I mark > packets from external interface on mangle, they are not filtered. > > Is this normal? I've to manually add a rule to do that? > > How to use the "external_ingress_mark" option on l3-agent.ini ? Can I > use it to mark packets using a configuration parameter instead of > manually inserted ip6tables rule? > > Thanks a lot! 
> > - JLC > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From qianmy.fnst at cn.fujitsu.com Tue Aug 28 12:52:15 2018 From: qianmy.fnst at cn.fujitsu.com (Qian, Mingyue) Date: Tue, 28 Aug 2018 12:52:15 +0000 Subject: [Openstack] BluePrint Approve Request (by FUJITSU) Message-ID: <0ED38455DAB40640AF3FBB0FB4D85A49661CCF42@G08CNEXMBPEKD02.g08.fujitsu.local> Dear Sean McGinnis I am FUJITSU employee. My Name is Qian Mingyue. I am dealling update of FUJITSU OpenStack Cinder Volume Driver, and have registered a blueprint on Launchpad, which URL is https://blueprints.launchpad.net/cinder/+spec/fujitsu-eternus-dx-driver-update Could you please check the blueprint, and approve it If it is convenient. Your Sincerely, Qian Mingyue From jaypipes at gmail.com Tue Aug 28 14:06:18 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 28 Aug 2018 10:06:18 -0400 Subject: [Openstack] [nova]Capacity discrepancy between command line and MySQL query In-Reply-To: References: Message-ID: <6d211944-6001-722a-60ea-c17dec096914@gmail.com> On 08/27/2018 09:40 AM, Risto Vaaraniemi wrote: > Hi, > > I tried to migrate a guest to another host but it failed with a > message saying there's not enough capacity on the target host even > though the server should me nearly empty. The guest I'm trying to > move needs 4 cores, 4 GB of memory and 50 GB of disk. Each compute > node should have 20 cores, 128 GB RAM & 260 GB HD space. > > When I check it with "openstack host show compute1" I see that there's > plenty of free resources. However, when I check it directly in MariaDB > nova_api or using Placement API calls I see different results i.e. not > enough cores & disk. > > Is there a safe way to make the different registries / databases to > match? Can I just overwrite it using the Placement API? > > I'm using Pike. > > BR, > Risto > > PS > I did make a few attempts to resize the guest that now runs on > compute1 but for some reason they failed and by default the resize > tries to restart the resized guest on a different host (compute1). > In the end I was able to do the resize on the same host (compute2). > I was wondering if the resize attempts messed up the compute1 resource > management. Very likely, yes. It's tough to say what exact sequence of resize and migrate commands have caused your inventory and allocation records in placement to become corrupted. Have you tried restarting the nova-compute services on both compute nodes and seeing whether the placement service tries to adjust allocations upon restart? Also, please check the logs on the nova-compute workers looking for any warnings or errors related to communication with placement. Best, -jay From lance at haigmail.com Tue Aug 28 18:44:36 2018 From: lance at haigmail.com (Lance Haig) Date: Tue, 28 Aug 2018 20:44:36 +0200 Subject: [Openstack] Usery Portal for Cloud Sandboxes Message-ID: <7431f86d-9c91-4cb6-777f-3ff6759e2440@haigmail.com> Hi, I have written a Sandbox portal for cloud users who want to test Openstack clouds. I released a beta here**https://github.com/lhaig/usery/tree/v0.1-beta.1* * I would appreciate some feedback on it if anyone has the time to look at it.* * Thanks Lance* * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eblock at nde.ag Wed Aug 29 10:17:08 2018 From: eblock at nde.ag (Eugen Block) Date: Wed, 29 Aug 2018 10:17:08 +0000 Subject: [Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend Message-ID: <20180829101708.Horde.PeGhBqtR7mnqVhik5PUtfUE@webmail.nde.ag> Hi Chris, I can't seem to reproduce your issue. What OpenStack release are you using? > openstack volume backup create --name backup-1 --force volume-foo > openstack volume backup create --name backup-2 --force volume-foo > openstack volume backup create --name backup-3 --force volume-foo > ``` > Cinder reports the following via `volume backup show`: > - backup-1 is not an incremental backup, but backup-2 and backup-3 are > (`is_incremental`). > - All but the latest backup have dependent backups (`has_dependent_backups`). If I don't create the backups with the --incremental flag, they're all indepentent and don't have dependent backups: ---cut here--- (openstack) volume backup create --name backup1 --force 51c18b65-db03-485e-98fd-ccb0f0c2422d (openstack) volume backup create --name backup2 --force 51c18b65-db03-485e-98fd-ccb0f0c2422d (openstack) volume backup show backup1 +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | availability_zone | nova | | container | images | | created_at | 2018-08-29T09:33:42.000000 | | data_timestamp | 2018-08-29T09:33:42.000000 | | description | None | | fail_reason | None | | has_dependent_backups | False | | id | 8c9b20a5-bf31-4771-b8db-b828664bb810 | | is_incremental | False | | name | backup1 | | object_count | 0 | | size | 2 | | snapshot_id | None | | status | available | | updated_at | 2018-08-29T09:34:14.000000 | | volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d | +-----------------------+--------------------------------------+ (openstack) volume backup show backup2 +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | availability_zone | nova | | container | images | | created_at | 2018-08-29T09:34:20.000000 | | data_timestamp | 2018-08-29T09:34:20.000000 | | description | None | | fail_reason | None | | has_dependent_backups | False | | id | 9de60042-b4b6-478a-ac4d-49bf1b00d297 | | is_incremental | False | | name | backup2 | | object_count | 0 | | size | 2 | | snapshot_id | None | | status | available | | updated_at | 2018-08-29T09:34:52.000000 | | volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d | +-----------------------+--------------------------------------+ (openstack) volume backup delete backup1 (openstack) volume backup list +--------------------------------------+---------+-------------+-----------+------+ | ID | Name | Description | Status | Size | +--------------------------------------+---------+-------------+-----------+------+ | 9de60042-b4b6-478a-ac4d-49bf1b00d297 | backup2 | None | available | 2 | +--------------------------------------+---------+-------------+-----------+------+ (openstack) volume backup create --name backup-inc1 --incremental --force 51c18b65-db03-485e-98fd-ccb0f0c2422d +-------+--------------------------------------+ | Field | Value | +-------+--------------------------------------+ | id | 79e2f71b-1c3b-42d1-8582-4934568fea80 | | name | backup-inc1 | +-------+--------------------------------------+ (openstack) volume backup create --name backup-inc2 --incremental --force 51c18b65-db03-485e-98fd-ccb0f0c2422d 
+-------+--------------------------------------+ | Field | Value | +-------+--------------------------------------+ | id | e1033d2a-f2c2-409a-880a-9630e45f1312 | | name | backup-inc2 | +-------+--------------------------------------+ # Now backup2 ist the base backup, it has dependents now (openstack) volume backup show 9de60042-b4b6-478a-ac4d-49bf1b00d297 +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | has_dependent_backups | True | | id | 9de60042-b4b6-478a-ac4d-49bf1b00d297 | | is_incremental | False | | name | backup2 | | volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d | +-----------------------+--------------------------------------+ # backup-inc1 is incremental and has dependent backups (openstack) volume backup show 79e2f71b-1c3b-42d1-8582-4934568fea80 +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | has_dependent_backups | True | | id | 79e2f71b-1c3b-42d1-8582-4934568fea80 | | is_incremental | True | | name | backup-inc1 | | volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d | +-----------------------+--------------------------------------+ # backup-inc2 has no dependent backups (openstack) volume backup show e1033d2a-f2c2-409a-880a-9630e45f1312 +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | has_dependent_backups | False | | id | e1033d2a-f2c2-409a-880a-9630e45f1312 | | is_incremental | True | | name | backup-inc2 | | volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d | +-----------------------+--------------------------------------+ # But I don't see a base backup like your output shows control:~ # rbd -p images ls | grep 51c18b65-db03-485e-98fd-ccb0f0c2422d volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.79e2f71b-1c3b-42d1-8582-4934568fea80 volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.9de60042-b4b6-478a-ac4d-49bf1b00d297 volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.e1033d2a-f2c2-409a-880a-9630e45f1312 # The snapshots in Ceph are only visible during incremental backup: control:~ # for vol in $(rbd -p images ls | grep 51c18b65-db03-485e-98fd-ccb0f0c2422d); do echo "VOLUME: $vol"; rbd snap ls images/$vol; done VOLUME: volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.79e2f71b-1c3b-42d1-8582-4934568fea80 VOLUME: volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.9de60042-b4b6-478a-ac4d-49bf1b00d297 VOLUME: volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.e1033d2a-f2c2-409a-880a-9630e45f1312 # Of course, I can't delete the first two (openstack) volume backup delete 9de60042-b4b6-478a-ac4d-49bf1b00d297 Failed to delete backup with name or ID '9de60042-b4b6-478a-ac4d-49bf1b00d297': Invalid backup: Incremental backups exist for this backup. (HTTP 400) (Request-ID: req-7720fd51-3f63-484d-b421-d5b957d5fa83) 1 of 1 backups failed to delete. (openstack) volume backup delete 79e2f71b-1c3b-42d1-8582-4934568fea80 Failed to delete backup with name or ID '79e2f71b-1c3b-42d1-8582-4934568fea80': Invalid backup: Incremental backups exist for this backup. (HTTP 400) (Request-ID: req-a0ea1e8f-13b8-43cb-9f71-8b395d181438) 1 of 1 backups failed to delete. ---cut here--- This is Ocata we're running on. Do you have any setting configured to automatically create incremental backups? But this would fail in case there is no previous full backup. 
If I delete all existing backups and run it again starting with an incremental backup, it fails: (openstack) volume backup create --name backup-inc1 --incremental --force 51c18b65-db03-485e-98fd-ccb0f0c2422d Invalid backup: No backups available to do an incremental backup. We backup our volumes on RBD level, not with cinder, so they aren't incremental. Maybe someone else is able to reproduce your issue. Regards, Eugen Zitat von Chris Martin : > I back up my volumes daily, using incremental backups to minimize > network traffic and storage consumption. I want to periodically remove > old backups, and during this pruning operation, avoid entering a state > where a volume has no recent backups. Ceph RBD appears to support this > workflow, but unfortunately, Cinder does not. I can only delete the > *latest* backup of a given volume, and this precludes any reasonable > way to prune backups. Here, I'll show you. > > Let's make three backups of the same volume: > ``` > openstack volume backup create --name backup-1 --force volume-foo > openstack volume backup create --name backup-2 --force volume-foo > openstack volume backup create --name backup-3 --force volume-foo > ``` > > Cinder reports the following via `volume backup show`: > - backup-1 is not an incremental backup, but backup-2 and backup-3 are > (`is_incremental`). > - All but the latest backup have dependent backups (`has_dependent_backups`). > > We take a backup every day, and after a week we're on backup-7. We > want to start deleting older backups so that we don't keep > accumulating backups forever! What happens when we try? > > ``` > # openstack volume backup delete backup-1 > Failed to delete backup with name or ID 'backup-1': Invalid backup: > Incremental backups exist for this backup. (HTTP 400) > ``` > > We can't delete backup-1 because Cinder considers it a "base" backup > which `has_dependent_backups`. What about backup-2? Same story. Adding > the `--force` flag just gives a slightly different error message. The > *only* backup that Cinder will delete is backup-7 -- the very latest > one. This means that if we want to remove the oldest backups of a > volume, *we must first remove all newer backups of the same volume*, > i.e. delete literally all of our backups. > > Also, we cannot force creation of another *full* (non-incrmental) > backup in order to free all of the earlier backups for removal. > (Omitting the `--incremental` flag has no effect; you still get an > incremental backup.) > > Can we hope for better? Let's reach behind Cinder to the Ceph backend. 
> Volume backups are represented as a "base" RBD image with a snapshot > for each incremental backup: > > ``` > # rbd snap ls volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base > SNAPID NAME > SIZE TIMESTAMP > 577 backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 > 10240 MB Thu Aug 23 10:57:48 2018 > 578 backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 > 10240 MB Thu Aug 23 11:05:43 2018 > 579 backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 > 10240 MB Thu Aug 23 11:06:47 2018 > 580 backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 > 10240 MB Thu Aug 23 11:22:23 2018 > 581 backup.8cd035b9-63bf-4920-a8ec-c07ba370fb94.snap.1535048538.72 > 10240 MB Thu Aug 23 11:22:47 2018 > 582 backup.cb7b6920-a79e-408e-b84f-5269d80235b2.snap.1535048559.82 > 10240 MB Thu Aug 23 11:23:04 2018 > 583 backup.a7871768-1863-435f-be9d-b50af47c905a.snap.1535048588.26 > 10240 MB Thu Aug 23 11:23:31 2018 > 584 backup.b18522e4-d237-4ee5-8786-78eac3d590de.snap.1535052729.52 > 10240 MB Thu Aug 23 12:32:43 2018 > ``` > > It seems that each snapshot stands alone and doesn't depend on others. > Ceph lets me delete the older snapshots. > > ``` > # rbd snap rm > volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base at backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 > Removing snap: 100% complete...done. > # rbd snap rm > volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base at backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 > Removing snap: 100% complete...done. > ``` > > Now that we nuked backup-1 and backup-4, can we still restore from > backup-7 and launch an instance with it? > > ``` > openstack volume create --size 10 --bootable volume-foo-restored > openstack volume backup restore backup-7 volume-foo-restored > openstack server create --volume volume-foo-restored --flavor medium1 > instance-restored-from-backup-7 > ``` > > Yes! We can SSH to the instance and it appears intact. > > Perhaps each snapshot in Ceph stores a complete diff from the base RBD > image (rather than each successive snapshot depending on the last). If > this is true, then Cinder is unnecessarily protective of older > backups. Cinder represents these as "with dependents" and doesn't let > us touch them, even though Ceph will let us delete older RBD > snapshots, apparently without disrupting newer snapshots of the same > volume. If we could remove this limitation, Cinder backups would be > significantly more useful for us. We mostly host servers with > non-cloud-native workloads (IaaS for research scientists). For these, > full-disk backups at the infrastructure level are an important > supplement to file-level or application-level backups. > > It would be great if someone else could confirm or disprove what I'm > seeing here. I'd also love to hear from anyone else using Cinder > backups this way. > > Regards, > > Chris Martin at CyVerse > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From eblock at nde.ag Thu Aug 30 14:19:38 2018 From: eblock at nde.ag (Eugen Block) Date: Thu, 30 Aug 2018 14:19:38 +0000 Subject: [Openstack] [nova] In-Reply-To: <20180829101708.Horde.PeGhBqtR7mnqVhik5PUtfUE@webmail.nde.ag> Message-ID: <20180830141938.Horde.oWg04EkYxTBMGLQrn__TgQg@webmail.nde.ag> Hi *, I posted my question in [1] a week ago, but no answer yet. 
When does Nova apply its filters (Ram, CPU, etc.)? Of course at instance creation and (live-)migration of existing instances. But what about existing instances that have been shutdown and in the meantime more instances on the same hypervisor have been launched? When you start one of the pre-existing instances and even with RAM overcommitment you can end up with an OOM-Killer resulting in forceful shutdowns if you reach the limits. Is there something I've been missing or maybe a bad configuration of my scheduler filters? Or is it the admin's task to keep an eye on the load? I'd appreciate any insights or pointers to something I've missed. Regards, Eugen [1] https://ask.openstack.org/en/question/115812/nova-scheduler-when-are-filters-applied/ From eblock at nde.ag Thu Aug 30 14:21:11 2018 From: eblock at nde.ag (Eugen Block) Date: Thu, 30 Aug 2018 14:21:11 +0000 Subject: [Openstack] [nova] Nova-scheduler: when are filters applied? In-Reply-To: <20180830141938.Horde.oWg04EkYxTBMGLQrn__TgQg@webmail.nde.ag> References: <20180829101708.Horde.PeGhBqtR7mnqVhik5PUtfUE@webmail.nde.ag> <20180830141938.Horde.oWg04EkYxTBMGLQrn__TgQg@webmail.nde.ag> Message-ID: <20180830142111.Horde.qciumbbXM4IzF87DQDc7apd@webmail.nde.ag> Sorry. I was to quick with the send button... Hi *, I posted my question in [1] a week ago, but no answer yet. When does Nova apply its filters (Ram, CPU, etc.)? Of course at instance creation and (live-)migration of existing instances. But what about existing instances that have been shutdown and in the meantime more instances on the same hypervisor have been launched? When you start one of the pre-existing instances and even with RAM overcommitment you can end up with an OOM-Killer resulting in forceful shutdowns if you reach the limits. Is there something I've been missing or maybe a bad configuration of my scheduler filters? Or is it the admin's task to keep an eye on the load? I'd appreciate any insights or pointers to something I've missed. Regards, Eugen [1] https://ask.openstack.org/en/question/115812/nova-scheduler-when-are-filters-applied/ From jaypipes at gmail.com Thu Aug 30 14:35:00 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 30 Aug 2018 10:35:00 -0400 Subject: [Openstack] [nova] In-Reply-To: <20180830141938.Horde.oWg04EkYxTBMGLQrn__TgQg@webmail.nde.ag> References: <20180830141938.Horde.oWg04EkYxTBMGLQrn__TgQg@webmail.nde.ag> Message-ID: <9996fe76-a744-d3b8-baab-9efbb6389ffe@gmail.com> On 08/30/2018 10:19 AM, Eugen Block wrote: > When does Nova apply its filters (Ram, CPU, etc.)? > Of course at instance creation and (live-)migration of existing > instances. But what about existing instances that have been shutdown and > in the meantime more instances on the same hypervisor have been launched? > > When you start one of the pre-existing instances and even with RAM > overcommitment you can end up with an OOM-Killer resulting in forceful > shutdowns if you reach the limits. Is there something I've been missing > or maybe a bad configuration of my scheduler filters? Or is it the > admin's task to keep an eye on the load? > > I'd appreciate any insights or pointers to something I've missed. You need to set your ram_allocation_ratio nova.CONF option to 1.0 if you're running into OOM issues. This will prevent overcommit of memory on your compute nodes. 
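For reference, that is this knob in nova.conf on each compute node (illustrative snippet; restart nova-compute after changing it so the resource tracker picks it up):

```
[DEFAULT]
# 1.0 = no RAM overcommit: the scheduler will never place more instance
# memory on a host than it physically has.
ram_allocation_ratio = 1.0
```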
Best, -jay From pablobrunetti at hotmail.com Thu Aug 30 14:50:08 2018 From: pablobrunetti at hotmail.com (pablo brunetti) Date: Thu, 30 Aug 2018 14:50:08 +0000 Subject: [Openstack] [TACKER] VIM STATUS PENDING IN OPENSTACK ROCKY Message-ID: Hi, I installed OpenStack Rocky Multinode with Kolla-Ansible. In the Tacker module, I am having trouble creating the VIM, the status is pending and the VNFFG does not work because of this. I've activated all the modules that Tacker needs (Barbican, Mistral, Networking-SFC, Ceilometer, Heat). Has anyone had this problem yet? VIM0 70e75906-d9d5-42b1-9835-65bab0c23671 False http://192.168.0.64:5000/v3 RegionOne admin admin PENDING openstack LOGS: TACKER-SERVER 2018-08-30 11:45:10.611 27 INFO tacker.wsgi [req-7c57c13c-e480-462f-9b1a-5a239c671e8f d096a0e324274a8abf8c326f6ac9e9a5 8a11667d9ac8477eb5ac50f4020add81 - - -] 192.168.0.64 - - [30/Aug/2018 11:45:10] "GET /v1.0/vnfds.json?template_source=onboarded HTTP/1.1" 200 211 0.010963 2018-08-30 11:45:12.827 41 INFO tacker.wsgi [req-8daf5c47-de62-4a1e-ab5d-a2abb062965b d096a0e324274a8abf8c326f6ac9e9a5 8a11667d9ac8477eb5ac50f4020add81 - - -] 192.168.0.64 - - [30/Aug/2018 11:45:12] "GET /v1.0/vims.json HTTP/1.1" 200 899 0.118279 MISTRAL-ENGINE 2018-08-30 11:45:01.852 8 INFO workflow_trace [req-90f08fff-96a7-4d2a-9c57-1c2f9f9f408c d096a0e324274a8abf8c326f6ac9e9a5 8a11667d9ac8477eb5ac50f4020add81 - default default] Task 'monitor_ping_vimPingVIMTASK' (caebddf1-7c1c-414f-9e9d-4f8de15349e0) [RUNNING -> SUCCESS, msg=None] (execution_id=2780e3eb-2862-4d5f-9990-343fceeecd3d) 2018-08-30 11:45:02.719 8 INFO workflow_trace [req-9f001438-bb30-4f35-b448-1fa4a57e23c8 - - - - -] Workflow 'vim_id_70e75906-d9d5-42b1-9835-65bab0c23671' [RUNNING -> SUCCESS, msg=None] (execution_id=2780e3eb-2862-4d5f-9990-343fceeecd3d) MISTRAL-SERVER 2018-08-30 11:44:37.898 24 WARNING oslo_db.sqlalchemy.utils [req-06cf535b-b8a5-475c-b758-c8496d0ae48e d096a0e324274a8abf8c326f6ac9e9a5 8a11667d9ac8477eb5ac50f4020add81 - default default] Unique keys not in sort_keys. The sorting order may be unstable. 
MISTRAL-EXECUTOR 2018-08-30 11:45:01.268 8 INFO mistral.executors.executor_server [req-90f08fff-96a7-4d2a-9c57-1c2f9f9f408c d096a0e324274a8abf8c326f6ac9e9a5 8a11667d9ac8477eb5ac50f4020add81 - default default] Received RPC request 'run_action'[action_ex_id=b9f18b10-b4ae-49a4-bd94-4233a2a625c7, action_cls_str=tacker.nfvo.workflows.vim_monitor.vim_ping_action.PingVimAction, action_cls_attrs={}, params={count: 1, targetip: 192.168.0.64, vim_id: 70e75906-d9d5-42b1-9835-65bab0c23671, interval: 1, tim..., timeout=None] 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action [req-90f08fff-96a7-4d2a-9c57-1c2f9f9f408c d096a0e324274a8abf8c326f6ac9e9a5 8a11667d9ac8477eb5ac50f4020add81 - default default] failed to run mistral action for vim 70e75906-d9d5-42b1-9835-65bab0c23671: OSError: [Errno 2] No such file or directory 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action Traceback (most recent call last): 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/nfvo/workflows/vim_monitor/vim_ping_action.py", line 88, in run 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action status = self._ping() 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/nfvo/workflows/vim_monitor/vim_ping_action.py", line 60, in _ping 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action debuglog=False) 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/agent/linux/utils.py", line 66, in execute 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action addl_env=addl_env, debuglog=debuglog) 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/agent/linux/utils.py", line 55, in create_process 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action env=env) 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/common/utils.py", line 128, in subprocess_popen 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action close_fds=True, env=env) 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/green/subprocess.py", line 55, in __init__ 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds) 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action File "/usr/lib/python2.7/subprocess.py", line 394, in __init__ 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action errread, errwrite) 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action raise child_exception 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action OSError: [Errno 2] No such file or directory 2018-08-30 11:45:01.832 8 ERROR tacker.nfvo.workflows.vim_monitor.vim_ping_action 
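The OSError: [Errno 2] No such file or directory comes from subprocess trying to spawn the ping command, which suggests the binary is simply missing inside the executor image. A quick sanity check from the host (container name assumed from kolla-ansible defaults, target IP as in the logs above; adjust both if yours differ):

```
# Is there a ping binary inside the mistral executor container at all?
docker exec mistral_executor sh -c 'command -v ping || echo "ping not found"'
# And can the container reach the VIM endpoint the monitor action pings?
docker exec mistral_executor sh -c 'ping -c 1 192.168.0.64 || true'
```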
Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Thu Aug 30 14:54:35 2018 From: eblock at nde.ag (Eugen Block) Date: Thu, 30 Aug 2018 14:54:35 +0000 Subject: [Openstack] [nova] Nova-scheduler: when are filters applied? In-Reply-To: <9996fe76-a744-d3b8-baab-9efbb6389ffe@gmail.com> References: <20180830141938.Horde.oWg04EkYxTBMGLQrn__TgQg@webmail.nde.ag> <9996fe76-a744-d3b8-baab-9efbb6389ffe@gmail.com> Message-ID: <20180830145435.Horde.qGpUxaiNIIbQGcCo43g-PRn@webmail.nde.ag> Hi Jay, > You need to set your ram_allocation_ratio nova.CONF option to 1.0 if > you're running into OOM issues. This will prevent overcommit of > memory on your compute nodes. I understand that, the overcommitment works quite well most of the time. It just has been an issue twice when I booted an instance that had been shutdown a while ago. In the meantime there were new instances created on that hypervisor, and this old instance caused the OOM. I would expect that with a ratio of 1.0 I would experience the same issue, wouldn't I? As far as I understand the scheduler only checks at instance creation, not when booting existing instances. Is that a correct assumption? Regards, Eugen Zitat von Jay Pipes : > On 08/30/2018 10:19 AM, Eugen Block wrote: >> When does Nova apply its filters (Ram, CPU, etc.)? >> Of course at instance creation and (live-)migration of existing >> instances. But what about existing instances that have been >> shutdown and in the meantime more instances on the same hypervisor >> have been launched? >> >> When you start one of the pre-existing instances and even with RAM >> overcommitment you can end up with an OOM-Killer resulting in >> forceful shutdowns if you reach the limits. Is there something I've >> been missing or maybe a bad configuration of my scheduler filters? >> Or is it the admin's task to keep an eye on the load? >> >> I'd appreciate any insights or pointers to something I've missed. > > You need to set your ram_allocation_ratio nova.CONF option to 1.0 if > you're running into OOM issues. This will prevent overcommit of > memory on your compute nodes. > > Best, > -jay > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From dabarren at gmail.com Thu Aug 30 15:37:15 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Thu, 30 Aug 2018 17:37:15 +0200 Subject: [Openstack] [TACKER] VIM STATUS PENDING IN OPENSTACK ROCKY In-Reply-To: References: Message-ID: Hi, from the logs I think is something missing in mistral executor images from tacker. What distribution and install type are you using, latest kolla and kolla ansible from git? Regards On Thu, Aug 30, 2018, 5:09 PM pablo brunetti wrote: > Hi, > > > I installed OpenStack Rocky Multinode with Kolla-Ansible. In the Tacker > module, I am having trouble creating the VIM, the status is pending and the > VNFFG does not work because of this. I've activated all the modules that > Tacker needs (Barbican, Mistral, Networking-SFC, Ceilometer, Heat). > > > Has anyone had this problem yet? 
> > > VIM0 > 70e75906-d9d5-42b1-9835-65bab0c23671 False > http://192.168.0.64:5000/v3 RegionOne admin admin > PENDING openstack > > > LOGS: > > TACKER-SERVER > > > 2018-08-30 11:45:10.611 27 INFO tacker.wsgi > [req-7c57c13c-e480-462f-9b1a-5a239c671e8f d096a0e324274a8abf8c326f6ac9e9a5 > 8a11667d9ac8477eb5ac50f4020add81 - - -] 192.168.0.64 - - [30/Aug/2018 > 11:45:10] "GET /v1.0/vnfds.json?template_source=onboarded HTTP/1.1" 200 211 > 0.010963 > 2018-08-30 11:45:12.827 41 INFO tacker.wsgi > [req-8daf5c47-de62-4a1e-ab5d-a2abb062965b d096a0e324274a8abf8c326f6ac9e9a5 > 8a11667d9ac8477eb5ac50f4020add81 - - -] 192.168.0.64 - - [30/Aug/2018 > 11:45:12] "GET /v1.0/vims.json HTTP/1.1" 200 899 0.118279 > > MISTRAL-ENGINE > > 2018-08-30 11:45:01.852 8 INFO workflow_trace > [req-90f08fff-96a7-4d2a-9c57-1c2f9f9f408c d096a0e324274a8abf8c326f6ac9e9a5 > 8a11667d9ac8477eb5ac50f4020add81 - default default] Task > 'monitor_ping_vimPingVIMTASK' (caebddf1-7c1c-414f-9e9d-4f8de15349e0) > [RUNNING -> SUCCESS, msg=None] > (execution_id=2780e3eb-2862-4d5f-9990-343fceeecd3d) > 2018-08-30 11:45:02.719 8 INFO workflow_trace > [req-9f001438-bb30-4f35-b448-1fa4a57e23c8 - - - - -] Workflow > 'vim_id_70e75906-d9d5-42b1-9835-65bab0c23671' [RUNNING -> SUCCESS, > msg=None] (execution_id=2780e3eb-2862-4d5f-9990-343fceeecd3d) > > MISTRAL-SERVER > > 2018-08-30 11:44:37.898 24 WARNING oslo_db.sqlalchemy.utils > [req-06cf535b-b8a5-475c-b758-c8496d0ae48e d096a0e324274a8abf8c326f6ac9e9a5 > 8a11667d9ac8477eb5ac50f4020add81 - default default] Unique keys not in > sort_keys. The sorting order may be unstable. > > MISTRAL-EXECUTOR > > 2018-08-30 11:45:01.268 8 INFO mistral.executors.executor_server > [req-90f08fff-96a7-4d2a-9c57-1c2f9f9f408c d096a0e324274a8abf8c326f6ac9e9a5 > 8a11667d9ac8477eb5ac50f4020add81 - default default] Received RPC request > 'run_action'[action_ex_id=b9f18b10-b4ae-49a4-bd94-4233a2a625c7, > action_cls_str=tacker.nfvo.workflows.vim_monitor.vim_ping_action.PingVimAction, > action_cls_attrs={}, params={count: 1, targetip: 192.168.0.64, vim_id: > 70e75906-d9d5-42b1-9835-65bab0c23671, interval: 1, tim..., timeout=None] > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action > [req-90f08fff-96a7-4d2a-9c57-1c2f9f9f408c d096a0e324274a8abf8c326f6ac9e9a5 > 8a11667d9ac8477eb5ac50f4020add81 - default default] failed to run mistral > action for vim 70e75906-d9d5-42b1-9835-65bab0c23671: OSError: [Errno 2] No > such file or directory > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action Traceback (most recent > call last): > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/nfvo/workflows/vim_monitor/vim_ping_action.py", > line 88, in run > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action status = self._ping() > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/nfvo/workflows/vim_monitor/vim_ping_action.py", > line 60, in _ping > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action debuglog=False) > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/agent/linux/utils.py", > line 66, in execute > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action addl_env=addl_env, > 
debuglog=debuglog) > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/agent/linux/utils.py", > line 55, in create_process > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action env=env) > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/tacker/common/utils.py", > line 128, in subprocess_popen > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action close_fds=True, > env=env) > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action File > "/var/lib/kolla/venv/local/lib/python2.7/site-packages/eventlet/green/subprocess.py", > line 55, in __init__ > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action > subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds) > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action File > "/usr/lib/python2.7/subprocess.py", line 394, in __init__ > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action errread, errwrite) > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action File > "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action raise child_exception > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action OSError: [Errno 2] No > such file or directory > 2018-08-30 11:45:01.832 8 ERROR > tacker.nfvo.workflows.vim_monitor.vim_ping_action > > > > > Thanks. > > > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Thu Aug 30 17:02:46 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 30 Aug 2018 11:02:46 -0600 Subject: [Openstack] [nova] Nova-scheduler: when are filters applied? In-Reply-To: <20180830145435.Horde.qGpUxaiNIIbQGcCo43g-PRn@webmail.nde.ag> References: <20180830141938.Horde.oWg04EkYxTBMGLQrn__TgQg@webmail.nde.ag> <9996fe76-a744-d3b8-baab-9efbb6389ffe@gmail.com> <20180830145435.Horde.qGpUxaiNIIbQGcCo43g-PRn@webmail.nde.ag> Message-ID: <5B882336.9080106@windriver.com> On 08/30/2018 08:54 AM, Eugen Block wrote: > Hi Jay, > >> You need to set your ram_allocation_ratio nova.CONF option to 1.0 if you're >> running into OOM issues. This will prevent overcommit of memory on your >> compute nodes. > > I understand that, the overcommitment works quite well most of the time. > > It just has been an issue twice when I booted an instance that had been shutdown > a while ago. In the meantime there were new instances created on that > hypervisor, and this old instance caused the OOM. > > I would expect that with a ratio of 1.0 I would experience the same issue, > wouldn't I? As far as I understand the scheduler only checks at instance > creation, not when booting existing instances. Is that a correct assumption? The system keeps track of how much memory is available and how much has been assigned to instances on each compute node. 
With a ratio of 1.0 it shouldn't let you consume more RAM than is available even if the instances have been shut down. Chris From fungi at yuggoth.org Thu Aug 30 17:03:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 17:03:50 +0000 Subject: [Openstack] [all] Bringing the community together (combine the lists!) Message-ID: <20180830170350.wrz4wlanb276kncb@yuggoth.org> The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists on lists.openstack.org see an increasing amount of cross-posting and thread fragmentation as conversants attempt to reach various corners of our community with topics of interest to one or more (and sometimes all) of those overlapping groups of subscribers. For some time we've been discussing and trying ways to bring our developers, distributors, operators and end users together into a less isolated, more cohesive community. An option which keeps coming up is to combine these different but overlapping mailing lists into one single discussion list. As we covered[1] in Vancouver at the last Forum there are a lot of potential up-sides: 1. People with questions are no longer asking them in a different place than many of the people who have the answers to those questions (the "not for usage questions" in the openstack-dev ML title only serves to drive the wedge between developers and users deeper). 2. The openstack-sigs mailing list hasn't seem much uptake (an order of magnitude fewer subscribers and posts) compared to the other three lists, yet it was intended to bridge the communication gap between them; combining those lists would have been a better solution to the problem than adding yet another turned out to be. 3. At least one out of every ten messages to any of these lists is cross-posted to one or more of the others, because we have topics that span across these divided groups yet nobody is quite sure which one is the best venue for them; combining would eliminate the fragmented/duplicative/divergent discussion which results from participants following up on the different subsets of lists to which they're subscribed, 4. Half of the people who are actively posting to at least one of the four lists subscribe to two or more, and a quarter to three if not all four; they would no longer be receiving multiple copies of the various cross-posts if these lists were combined. The proposal is simple: create a new openstack-discuss mailing list to cover all the above sorts of discussion and stop using the other four. As the OpenStack ecosystem continues to mature and its software and services stabilize, the nature of our discourse is changing (becoming increasingly focused with fewer heated debates, distilling to a more manageable volume), so this option is looking much more attractive than in the past. That's not to say it's quiet (we're looking at roughly 40 messages a day across them on average, after deduplicating the cross-posts), but we've grown accustomed to tagging the subjects of these messages to make it easier for other participants to quickly filter topics which are relevant to them and so would want a good set of guidelines on how to do so for the combined list (a suggested set is already being brainstormed[2]). None of this is set in stone of course, and I expect a lot of continued discussion across these lists (oh, the irony) while we try to settle on a plan, so definitely please follow up with your questions, concerns, ideas, et cetera. 
As an aside, some of you have probably also seen me talking about experiments I've been doing with Mailman 3... I'm hoping new features in its Hyperkitty and Postorius WebUIs make some of this easier or more accessible to casual participants (particularly in light of the combined list scenario), but none of the plan above hinges on MM3 and should be entirely doable with the MM2 version we're currently using. Also, in case you were wondering, no the irony of cross-posting this message to four mailing lists is not lost on me. ;) [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community [2] https://etherpad.openstack.org/p/common-openstack-ml-topics -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Thu Aug 30 17:17:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 30 Aug 2018 13:17:14 -0400 Subject: [Openstack] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <1535649366-sup-1027@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-08-30 17:03:50 +0000: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. For some time we've been discussing and trying ways to > bring our developers, distributors, operators and end users together > into a less isolated, more cohesive community. An option which keeps > coming up is to combine these different but overlapping mailing > lists into one single discussion list. As we covered[1] in Vancouver > at the last Forum there are a lot of potential up-sides: > > 1. People with questions are no longer asking them in a different > place than many of the people who have the answers to those > questions (the "not for usage questions" in the openstack-dev ML > title only serves to drive the wedge between developers and users > deeper). > > 2. The openstack-sigs mailing list hasn't seem much uptake (an order > of magnitude fewer subscribers and posts) compared to the other > three lists, yet it was intended to bridge the communication gap > between them; combining those lists would have been a better > solution to the problem than adding yet another turned out to be. > > 3. At least one out of every ten messages to any of these lists is > cross-posted to one or more of the others, because we have topics > that span across these divided groups yet nobody is quite sure which > one is the best venue for them; combining would eliminate the > fragmented/duplicative/divergent discussion which results from > participants following up on the different subsets of lists to which > they're subscribed, > > 4. Half of the people who are actively posting to at least one of > the four lists subscribe to two or more, and a quarter to three if > not all four; they would no longer be receiving multiple copies of > the various cross-posts if these lists were combined. > > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. 
As the OpenStack ecosystem continues to mature and its > software and services stabilize, the nature of our discourse is > changing (becoming increasingly focused with fewer heated debates, > distilling to a more manageable volume), so this option is looking > much more attractive than in the past. That's not to say it's quiet > (we're looking at roughly 40 messages a day across them on average, > after deduplicating the cross-posts), but we've grown accustomed to > tagging the subjects of these messages to make it easier for other > participants to quickly filter topics which are relevant to them and > so would want a good set of guidelines on how to do so for the > combined list (a suggested set is already being brainstormed[2]). > None of this is set in stone of course, and I expect a lot of > continued discussion across these lists (oh, the irony) while we try > to settle on a plan, so definitely please follow up with your > questions, concerns, ideas, et cetera. > > As an aside, some of you have probably also seen me talking about > experiments I've been doing with Mailman 3... I'm hoping new > features in its Hyperkitty and Postorius WebUIs make some of this > easier or more accessible to casual participants (particularly in > light of the combined list scenario), but none of the plan above > hinges on MM3 and should be entirely doable with the MM2 version > we're currently using. > > Also, in case you were wondering, no the irony of cross-posting this > message to four mailing lists is not lost on me. ;) > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics I fully support the idea of merging the lists. Doug From jimmy at openstack.org Thu Aug 30 17:19:55 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 30 Aug 2018 12:19:55 -0500 Subject: [Openstack] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <5B88273B.3000206@openstack.org> Absolutely support merging. Jeremy Stanley wrote: > The openstack, openstack-dev, openstack-sigs and openstack-operators > mailing lists on lists.openstack.org see an increasing amount of > cross-posting and thread fragmentation as conversants attempt to > reach various corners of our community with topics of interest to > one or more (and sometimes all) of those overlapping groups of > subscribers. For some time we've been discussing and trying ways to > bring our developers, distributors, operators and end users together > into a less isolated, more cohesive community. An option which keeps > coming up is to combine these different but overlapping mailing > lists into one single discussion list. As we covered[1] in Vancouver > at the last Forum there are a lot of potential up-sides: > > 1. People with questions are no longer asking them in a different > place than many of the people who have the answers to those > questions (the "not for usage questions" in the openstack-dev ML > title only serves to drive the wedge between developers and users > deeper). > > 2. The openstack-sigs mailing list hasn't seem much uptake (an order > of magnitude fewer subscribers and posts) compared to the other > three lists, yet it was intended to bridge the communication gap > between them; combining those lists would have been a better > solution to the problem than adding yet another turned out to be. > > 3. 
At least one out of every ten messages to any of these lists is > cross-posted to one or more of the others, because we have topics > that span across these divided groups yet nobody is quite sure which > one is the best venue for them; combining would eliminate the > fragmented/duplicative/divergent discussion which results from > participants following up on the different subsets of lists to which > they're subscribed, > > 4. Half of the people who are actively posting to at least one of > the four lists subscribe to two or more, and a quarter to three if > not all four; they would no longer be receiving multiple copies of > the various cross-posts if these lists were combined. > > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. As the OpenStack ecosystem continues to mature and its > software and services stabilize, the nature of our discourse is > changing (becoming increasingly focused with fewer heated debates, > distilling to a more manageable volume), so this option is looking > much more attractive than in the past. That's not to say it's quiet > (we're looking at roughly 40 messages a day across them on average, > after deduplicating the cross-posts), but we've grown accustomed to > tagging the subjects of these messages to make it easier for other > participants to quickly filter topics which are relevant to them and > so would want a good set of guidelines on how to do so for the > combined list (a suggested set is already being brainstormed[2]). > None of this is set in stone of course, and I expect a lot of > continued discussion across these lists (oh, the irony) while we try > to settle on a plan, so definitely please follow up with your > questions, concerns, ideas, et cetera. > > As an aside, some of you have probably also seen me talking about > experiments I've been doing with Mailman 3... I'm hoping new > features in its Hyperkitty and Postorius WebUIs make some of this > easier or more accessible to casual participants (particularly in > light of the combined list scenario), but none of the plan above > hinges on MM3 and should be entirely doable with the MM2 version > we're currently using. > > Also, in case you were wondering, no the irony of cross-posting this > message to four mailing lists is not lost on me. ;) > > [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community > [2] https://etherpad.openstack.org/p/common-openstack-ml-topics > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From correajl at gmail.com Thu Aug 30 17:25:53 2018 From: correajl at gmail.com (Jorge Luiz Correa) Date: Thu, 30 Aug 2018 14:25:53 -0300 Subject: [Openstack] Help with ipv6 self-service and ip6tables rule on mangle chain In-Reply-To: References: Message-ID: Thank you so much Brian. I was not using "address scope". After your indication I've read about this feature working together with "subnet pool". However the official documentation is not so clear. 
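The core of it is putting the provider and project IPv6 subnets into the same address scope through subnet pools, so the router forwards traffic instead of dropping it at the scope-mark rule. A rough sketch of the commands involved (names and prefixes here are just placeholders, not my real ones):

```
openstack address scope create --share --ip-version 6 scope-v6

openstack subnet pool create --address-scope scope-v6 \
  --pool-prefix 2001:db8:1::/48 --default-prefix-length 64 pool-v6-provider

openstack subnet pool create --address-scope scope-v6 \
  --pool-prefix 2001:db8:2::/48 --default-prefix-length 64 pool-v6-projects

# Subnets allocated from these pools land in the same address scope, so the
# L3 agent marks traffic on both sides and nothing hits the DROP rule.
openstack subnet create --network project-net --subnet-pool pool-v6-projects \
  --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac project-v6-subnet
```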
For those looking for something else about usage of address scope and subnet pool, I recommend this tutorial: https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/11/neutron-pike-ipv6.html Now I have 3 address scopes: 1 for IPv6 (this has 2 subnet pools, one for provider network and one for projects networks, so IPv6 is routed), 1 for IPv4 provider subnet and 1 for IPv4 projects networks (so, IPv4 has 2 address scopes and are NATed). One thing I've noticed is that when creating subnets using openstack command line client, using a subnet pool, I can't specify allocation pools neither gateway. I've CARP and my gateway address is not the first IP, so I've to change that. But, using Horizon web interface I can change these configurations. Now the environment is dual stack. Thank you! - JLC On Mon, Aug 27, 2018 at 3:33 PM Brian Haley wrote: > On 08/23/2018 12:53 PM, Jorge Luiz Correa wrote: > > Hi all > > > > I'm deploying a Queens on Ubuntu 18.04 with one controller, one network > > controller e for now one compute node. I'm using ML2 with linuxbridge > > mechanism driver and a self-service type of network. This is is a dual > > stack environment (v4 and v6). > > > > IPv4 is working fine, NATs oks and packets flowing. > > > > With IPv6 I'm having a problem. Packets from external networks to a > > project network are stopping on qrouter namespace firewall. I've a > > project with one network, one v4 subnet and one v6 subnet. Adressing are > > all ok, virtual machines are getting their IPs and can ping the network > > gateway. > > > > However, from external to project network, using ipv6, the packets stop > > in a DROP rule inside de qrouter namespace. > > This looks like the address scopes of the subnets are different, so the > rule to mark packets is not being inserted. How are you assigning the > subnet addresses on the external and internal networks? Typically you > would define a subnet pool and allocate from that, which should work. > Perhaps this guide would help with that: > > https://docs.openstack.org/neutron/queens/admin/config-address-scopes.html > > The last sentence there seems to describe the problem you're having: > > "If the address scopes match between networks then pings and other > traffic route directly through. If the scopes do not match between > networks, the router either drops the traffic or applies NAT to cross > scope boundaries." > > IPv6 in neutron does not use NAT... > > -Brian > > > > The ip6tables path is: > > > > mangle prerouting -> neutron-l3-agent-PREROUTING -> > > neutron-l3-agent-scope -> here we have a MARK rule: > > > > pkts bytes target prot opt in out source > > destination > > 3 296 MARK all qr-7f2944e7-cc * > > ::/0 ::/0 MARK xset 0x4000000/0xffff0000 > > > > qr interface is the internal network interface of the project (subnet > > gateway). So, packets from this interface are marked. > > > > But, the returning is the problem. The packets doesn't returns. I've > > rules from the nexthop firewall and packets arrive on the external > > bridge (network node). But, when they arrive on external interface of > > the qrouter namespace, they are filtered. > > > > Inside qrouter namespace this is the rule: > > > > ip netns exec qrouter-5689783d-52c0-4d2f-bef5-99b111f8ef5f ip6tables -t > > mangle -L -n -v > > > > ... > > Chain neutron-l3-agent-scope (1 references) > > pkts bytes target prot opt in out source > > destination > > 0 0 DROP all * qr-7f2944e7-cc > > ::/0 ::/0 mark match ! > 0x4000000/0xffff0000 > > ... 
> > > > If I create the following rule everything works great: > > > > ip netns exec qrouter-5689783d-52c0-4d2f-bef5-99b111f8ef5f ip6tables -t > > mangle -I neutron-l3-agent-scope -i qg-b6757bfe-c1 -j MARK --set-xmark > > 0x4000000/0xffff0000 > > > > where qg is the external interface of virtual router. So, if I mark > > packets from external interface on mangle, they are not filtered. > > > > Is this normal? I've to manually add a rule to do that? > > > > How to use the "external_ingress_mark" option on l3-agent.ini ? Can I > > use it to mark packets using a configuration parameter instead of > > manually inserted ip6tables rule? > > > > Thanks a lot! > > > > - JLC > > > > > > _______________________________________________ > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Thu Aug 30 18:57:31 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 30 Aug 2018 12:57:31 -0600 Subject: [Openstack] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830170350.wrz4wlanb276kncb@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <5B883E1B.2070101@windriver.com> On 08/30/2018 11:03 AM, Jeremy Stanley wrote: > The proposal is simple: create a new openstack-discuss mailing list > to cover all the above sorts of discussion and stop using the other > four. Do we want to merge usage and development onto one list? That could be a busy list for someone who's just asking a simple usage question. Alternately, if we are going to merge everything then why not just use the "openstack" mailing list since it already exists and there are references to it on the web. (Or do you want to force people to move to something new to make them recognize that something has changed?) Chris From jaypipes at gmail.com Thu Aug 30 20:33:33 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 30 Aug 2018 16:33:33 -0400 Subject: [Openstack] [nova] Nova-scheduler: when are filters applied? In-Reply-To: <20180830145435.Horde.qGpUxaiNIIbQGcCo43g-PRn@webmail.nde.ag> References: <20180830141938.Horde.oWg04EkYxTBMGLQrn__TgQg@webmail.nde.ag> <9996fe76-a744-d3b8-baab-9efbb6389ffe@gmail.com> <20180830145435.Horde.qGpUxaiNIIbQGcCo43g-PRn@webmail.nde.ag> Message-ID: On 08/30/2018 10:54 AM, Eugen Block wrote: > Hi Jay, > >> You need to set your ram_allocation_ratio nova.CONF option to 1.0 if >> you're running into OOM issues. This will prevent overcommit of memory >> on your compute nodes. > > I understand that, the overcommitment works quite well most of the time. > > It just has been an issue twice when I booted an instance that had been > shutdown a while ago. In the meantime there were new instances created > on that hypervisor, and this old instance caused the OOM. > > I would expect that with a ratio of 1.0 I would experience the same > issue, wouldn't I? As far as I understand the scheduler only checks at > instance creation, not when booting existing instances. Is that a > correct assumption? To echo what cfriesen said, if you set your allocation ratio to 1.0, the system will not overcommit memory. Shut down instances consume memory from an inventory management perspective. 
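You can see that tracked consumption directly in placement (this needs the osc-placement CLI plugin; the hostname and UUID below are placeholders):

```
# Find the compute node's resource provider...
openstack resource provider list --name compute01
# ...look at its inventory, which includes the allocation_ratio per resource class...
openstack resource provider inventory list <provider-uuid>
# ...and its current usage; MEMORY_MB here counts stopped instances too.
openstack resource provider usage show <provider-uuid>
```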
If you don't want any danger of an instance causing an OOM, you must set you ram_allocation_ratio to 1.0. The scheduler doesn't really have anything to do with this. Best, -jay From fungi at yuggoth.org Thu Aug 30 21:12:57 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:12:57 +0000 Subject: [Openstack] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> Message-ID: <20180830211257.oa6hxd4pningzqf4@yuggoth.org> On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: [...] > What needs to be done for this is full topic categories support > under `options` page so people get to filter emails properly. [...] Unfortunately, topic filtering is one of the MM2 features the Mailman community decided nobody used (or at least not enough to warrant preserving it in MM3). I do think we need to be consistent about tagging subjects to make client-side filtering more effective for people who want that, but if we _do_ want to be able to upgrade we shouldn't continue to rely on server-side filtering support in Mailman unless we can somehow work with them to help in reimplementing it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Aug 30 21:25:37 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:25:37 +0000 Subject: [Openstack] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <5B883E1B.2070101@windriver.com> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> Message-ID: <20180830212536.yzirmxzxiqhciyby@yuggoth.org> On 2018-08-30 12:57:31 -0600 (-0600), Chris Friesen wrote: [...] > Do we want to merge usage and development onto one list? That > could be a busy list for someone who's just asking a simple usage > question. A counterargument though... projecting the number of unique posts to all four lists combined for this year (both based on trending for the past several years and also simply scaling the count of messages this year so far based on how many days are left) comes out roughly equal to the number of posts which were made to the general openstack mailing list in 2012. > Alternately, if we are going to merge everything then why not just > use the "openstack" mailing list since it already exists and there > are references to it on the web. This was an option we discussed in the "One Community" forum session as well. There seemed to be a slight preference for making a new -disscuss list and retiring the old general one. I see either as an potential solution here. > (Or do you want to force people to move to something new to make them > recognize that something has changed?) That was one of the arguments made. Also I believe we have a *lot* of "black hole" subscribers who aren't actually following that list but whose addresses aren't bouncing new posts we send them for any of a number of possible reasons. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu Aug 30 21:33:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 30 Aug 2018 21:33:41 +0000 Subject: [Openstack] [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> Message-ID: <20180830213341.yuxyen2elx2c3is4@yuggoth.org> On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: [...] > I really don't want this. I'm happy with things being sorted in > multiple lists, even though I'm subscribed to multiples. I understand where you're coming from, and I used to feel similarly. I was accustomed to communities where developers had one mailing list, users had another, and whenever a user asked a question on the developer mailing list they were told to go away and bother the user mailing list instead (not even a good, old-fashioned "RTFM" for their trouble). You're probably intimately familiar with at least one of these communities. ;) As the years went by, it's become apparent to me that this is actually an antisocial behavior pattern, and actively harmful to the user base. I believe OpenStack actually wants users to see the development work which is underway, come to understand it, and become part of that process. Requiring them to have their conversations elsewhere sends the opposite message. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jimmy at openstack.org Thu Aug 30 21:45:17 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 30 Aug 2018 16:45:17 -0500 Subject: [Openstack] [openstack-dev] [Openstack-operators] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830213341.yuxyen2elx2c3is4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: <5B88656D.1020209@openstack.org> Jeremy Stanley wrote: > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: > [...] >> I really don't want this. I'm happy with things being sorted in >> multiple lists, even though I'm subscribed to multiples. IMO this is easily solved by tagging. If emails are properly tagged (which they typically are), most email clients will properly sort on rules and you can just auto-delete if you're 100% not interested in a particular topic. > SNIP > As the years went by, it's become apparent to me that this is > actually an antisocial behavior pattern, and actively harmful to the > user base. I believe OpenStack actually wants users to see the > development work which is underway, come to understand it, and > become part of that process. Requiring them to have their > conversations elsewhere sends the opposite message. I really and truly believe that it has become a blocker for our community. Conversations sent to multiple lists inherently splinter and we end up with different groups coming up with different solutions for a single problem. Literally the opposite desired result of sending things to multiple lists. I believe bringing these groups together, with tags, will solve a lot of immediate problems. 
It will also have an added bonus of allowing people "catching up" on the community to look to a single place for a thread i/o 1-5 separate lists. It's better in both the short and long term. Cheers, Jimmy > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Aug 30 23:08:56 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 30 Aug 2018 18:08:56 -0500 Subject: [Openstack] [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: <5B88656D.1020209@openstack.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> <5B88656D.1020209@openstack.org> Message-ID: I think the more we can reduce the ML sprawl the better. I also recall us discussing having some documentation or way of notifying net new signups of how to interact with the ML successfully. An example was having some general guidelines around tagging. Also as a maintainer for at least one of the mailing lists over the past 6+ months I have to inquire about how that will happen going forward which again could be part of this documentation/initial message. Also there are many times I miss messages that for one reason or another do not hit the proper mailing list. I mean we could dive into the minutia or start up the mountain of why keeping things the way they are is worst than making this change and vice versa but I am willing to bet there are more advantages than disadvantages. On Thu, Aug 30, 2018 at 4:45 PM Jimmy McArthur wrote: > > > Jeremy Stanley wrote: > > On 2018-08-30 22:49:26 +0200 (+0200), Thomas Goirand wrote: > [...] > > I really don't want this. I'm happy with things being sorted in > multiple lists, even though I'm subscribed to multiples. > > IMO this is easily solved by tagging. If emails are properly tagged > (which they typically are), most email clients will properly sort on rules > and you can just auto-delete if you're 100% not interested in a particular > topic. > Yes, there are definitely ways to go about discarding unwanted mail automagically or not seeing it at all. And to be honest I think if we are relying on so many separate MLs to do that for us it is better community wide for the responsibility for that to be on individuals. It becomes very tiring and inefficient time wise to have to go through the various issues of the way things are now; cross-posting is a great example that is steadily getting worse. > SNIP > > As the years went by, it's become apparent to me that this is > actually an antisocial behavior pattern, and actively harmful to the > user base. I believe OpenStack actually wants users to see the > development work which is underway, come to understand it, and > become part of that process. Requiring them to have their > conversations elsewhere sends the opposite message. > > I really and truly believe that it has become a blocker for our > community. Conversations sent to multiple lists inherently splinter and we > end up with different groups coming up with different solutions for a > single problem. 
Literally the opposite desired result of sending things to > multiple lists. I believe bringing these groups together, with tags, will > solve a lot of immediate problems. It will also have an added bonus of > allowing people "catching up" on the community to look to a single place > for a thread i/o 1-5 separate lists. It's better in both the short and > long term. > +1 > > Cheers, > Jimmy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Aug 31 00:03:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 31 Aug 2018 10:03:35 +1000 Subject: [Openstack] [Openstack-sigs] [all] Bringing the community together (combine the lists!) In-Reply-To: <20180830211257.oa6hxd4pningzqf4@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> Message-ID: <20180831000334.GR26778@thor.bakeyournoodle.com> On Thu, Aug 30, 2018 at 09:12:57PM +0000, Jeremy Stanley wrote: > On 2018-08-31 01:13:58 +0800 (+0800), Rico Lin wrote: > [...] > > What needs to be done for this is full topic categories support > > under `options` page so people get to filter emails properly. > [...] > > Unfortunately, topic filtering is one of the MM2 features the > Mailman community decided nobody used (or at least not enough to > warrant preserving it in MM3). I do think we need to be consistent > about tagging subjects to make client-side filtering more effective > for people who want that, but if we _do_ want to be able to upgrade > we shouldn't continue to rely on server-side filtering support in > Mailman unless we can somehow work with them to help in > reimplementing it. The suggestion is to implement it as a 3rd party plugin or work with the mm community to implement: https://wiki.mailman.psf.io/DEV/Dynamic%20Sublists So if we decide we really want that in mm3 we have options. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 31 00:21:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 00:21:22 +0000 Subject: [Openstack] [Openstack-sigs] [Openstack-operators] [openstack-dev] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> <5B88656D.1020209@openstack.org> Message-ID: <20180831002121.ch76mvqeskplqew2@yuggoth.org> On 2018-08-30 18:08:56 -0500 (-0500), Melvin Hillsman wrote: [...] > I also recall us discussing having some documentation or way of > notifying net new signups of how to interact with the ML > successfully. An example was having some general guidelines around > tagging. 
Also as a maintainer for at least one of the mailing > lists over the past 6+ months I have to inquire about how that > will happen going forward, which again could be part of this > documentation/initial message. [...] Mailman supports customizable welcome messages for new subscribers, so the *technical* implementation there is easy. I do think (and failed to highlight it explicitly earlier I'm afraid) that this proposal comes with an expectation that we provide recommended guidelines for mailing list use/etiquette appropriate to our community. It could be contained entirely within the welcome message, or merely linked to a published document (and whether that's best suited for the Infra Manual or New Contributor Guide or somewhere else entirely is certainly up for debate), or even potentially both. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From terje.lundin at evolved-intelligence.com Fri Aug 31 12:27:46 2018 From: terje.lundin at evolved-intelligence.com (Terry Lundin) Date: Fri, 31 Aug 2018 13:27:46 +0100 Subject: [Openstack] installation of Gnocchi on Queens Message-ID: <67746dd8-c0ff-8371-6765-e7ef6995d65a@evolved-intelligence.com> Hello, We are trying to install ceilometer and gnocchi on Openstack Queens (Ubuntu 16.04) following the official instructions from https://docs.openstack.org/ceilometer/queens/install/install-base-ubuntu.html and we run into serious problems. When we issue the gnocchi install via apt-get, e.g. # apt-get install gnocchi-api gnocchi-metricd python-gnocchiclient *it will uninstall the dashboard, keystone and placement api* (it was a nice few hours fixing that). A suggestion to solve this was to use the pre-release archive: sudo add-apt-repository cloud-archive:queens-proposed sudo apt-get update This installs gnocchi without removing keystone, but the gnocchi api won't install as a service anymore. It seems like the gnocchi version is not compatible with Queens. Has anybody else managed to install gnocchi/ceilometer on the Queens/Ubuntu 16.04 combo? Appreciate any help Terje -------------- next part -------------- An HTML attachment was scrubbed... URL: From sileht at sileht.net Fri Aug 31 14:42:38 2018 From: sileht at sileht.net (Mehdi Abaakouk) Date: Fri, 31 Aug 2018 16:42:38 +0200 Subject: [Openstack] installation of Gnocchi on Queens In-Reply-To: <67746dd8-c0ff-8371-6765-e7ef6995d65a@evolved-intelligence.com> References: <67746dd8-c0ff-8371-6765-e7ef6995d65a@evolved-intelligence.com> Message-ID: <20180831144238.qb3zp22yv45dxlih@sileht.net> On Fri, Aug 31, 2018 at 01:27:46PM +0100, Terry Lundin wrote: >Hello, > >We are trying to install ceilometer and gnocchi on Openstack Queens >(Ubuntu 16.04) following the official instructions from https://docs.openstack.org/ceilometer/queens/install/install-base-ubuntu.html >and we run into serious problems. When we issue the gnocchi install via >apt-get, e.g. > ># apt-get install gnocchi-api gnocchi-metricd python-gnocchiclient > >*it will uninstall the dashboard, keystone and placement api* (it was >a nice few hours fixing that). > >A suggestion to solve this was to use the pre-release archive: > >sudo add-apt-repository cloud-archive:queens-proposed >sudo apt-get update > >This installs gnocchi without removing keystone, but the gnocchi api >won't install as a service anymore. It seems like the gnocchi version >is not compatible with Queens.
Gnocchi is Queens compatible, for sure. This is a bug in the Ubuntu Cloud Archive packaging. I think you are hitting this: https://bugs.launchpad.net/ubuntu/+source/gnocchi/+bug/1746992 >Has anybody else managed to install gnocchi/ceilometer on the >Queens/Ubuntu 16.04 combo? -- Mehdi Abaakouk mail: sileht at sileht.net irc: sileht From jaypipes at gmail.com Fri Aug 31 14:48:18 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 31 Aug 2018 10:48:18 -0400 Subject: [Openstack] The problem of how to update resouce allocation ratio dynamically. In-Reply-To: <47ad3358.68de.16569e15778.Coremail.yu_qearl@163.com> References: <47ad3358.68de.16569e15778.Coremail.yu_qearl@163.com> Message-ID: <0943ad7e-e25e-8d54-7525-966f49ac0a1e@gmail.com> On 08/23/2018 11:01 PM, 余婷婷 wrote: > Hi: > Sorry for bothering everyone. Now I have updated my OpenStack to Queens, and > use the nova-placement-api to provide resources. > When I use "/resource_providers/{uuid}/inventories/MEMORY_MB" to > update the memory_mb allocation_ratio, it succeeds. But after a few > minutes, it reverts to the old value automatically. Then I found it reports the > value from compute_node in nova-compute automatically. But the > allocation_ratio of compute_node came from nova.conf. So that > means we can't update the allocation_ratio until we update > nova.conf? But I wish to update the allocation_ratio dynamically rather > than updating nova.conf. I don't know how to update the resource allocation > ratio dynamically. We are attempting to determine what is going on with the allocation ratios being improperly set on the following bug: https://bugs.launchpad.net/nova/+bug/1789654 Please bear with us as we try to fix it. Best, -jay From terje.lundin at evolved-intelligence.com Fri Aug 31 14:54:50 2018 From: terje.lundin at evolved-intelligence.com (Terry Lundin) Date: Fri, 31 Aug 2018 15:54:50 +0100 Subject: [Openstack] installation of Gnocchi on Queens In-Reply-To: <20180831144238.qb3zp22yv45dxlih@sileht.net> References: <67746dd8-c0ff-8371-6765-e7ef6995d65a@evolved-intelligence.com> <20180831144238.qb3zp22yv45dxlih@sileht.net> Message-ID: <596564a3-eefd-48cc-6fc4-6008c9b58e6f@evolved-intelligence.com> On 31/08/18 15:42, Mehdi Abaakouk wrote: > On Fri, Aug 31, 2018 at 01:27:46PM +0100, Terry Lundin wrote: >> Hello, >> >> We are trying to install ceilometer and gnocchi on Openstack Queens >> (Ubuntu 16.04) following the official instructions from >> https://docs.openstack.org/ceilometer/queens/install/install-base-ubuntu.html >> and we run into serious problems. When we issue the gnocchi install via >> apt-get, e.g. >> >> #  apt-get install gnocchi-api gnocchi-metricd python-gnocchiclient >> >> *it will uninstall the dashboard, keystone and placement api* (it was >> a nice few hours fixing that). >> >> A suggestion to solve this was to use the pre-release archive: >> >> sudo add-apt-repository cloud-archive:queens-proposed >> sudo apt-get update >> >> This installs gnocchi without removing keystone, but the gnocchi api >> won't install as a service anymore. It seems like the gnocchi version >> is not compatible with Queens. > > Gnocchi is Queens compatible, for sure. This is a bug in the > Ubuntu Cloud Archive packaging. I think you are hitting this: > > https://bugs.launchpad.net/ubuntu/+source/gnocchi/+bug/1746992 Yes, hitting that one. I understand it's an issue with gnocchi-api requiring python-3 while openstack queens is running on python-2. What is the work-around?
Install gnocchi/gnocchi-api on a separate Apache server outside OpenStack? >> Has anybody else managed to install gnocchi/ceilometer on the >> Queens/Ubuntu 16.04 combo? > From sileht at sileht.net Fri Aug 31 14:57:39 2018 From: sileht at sileht.net (Mehdi Abaakouk) Date: Fri, 31 Aug 2018 16:57:39 +0200 Subject: [Openstack] installation of Gnocchi on Queens In-Reply-To: <596564a3-eefd-48cc-6fc4-6008c9b58e6f@evolved-intelligence.com> References: <67746dd8-c0ff-8371-6765-e7ef6995d65a@evolved-intelligence.com> <20180831144238.qb3zp22yv45dxlih@sileht.net> <596564a3-eefd-48cc-6fc4-6008c9b58e6f@evolved-intelligence.com> Message-ID: <20180831145738.dxfyegat6kdqk73w@sileht.net> On Fri, Aug 31, 2018 at 03:54:50PM +0100, Terry Lundin wrote: > > >On 31/08/18 15:42, Mehdi Abaakouk wrote: >>On Fri, Aug 31, 2018 at 01:27:46PM +0100, Terry Lundin wrote: >>>Hello, >>> >>>We are trying to install ceilometer and gnocchi on Openstack >>>Queens (Ubuntu 16.04) following the official instructions from https://docs.openstack.org/ceilometer/queens/install/install-base-ubuntu.html >>>and we run into serious problems. When we issue the gnocchi install >>>via apt-get, e.g. >>> >>>#  apt-get install gnocchi-api gnocchi-metricd python-gnocchiclient >>> >>>*it will uninstall the dashboard, keystone and placement api* (it >>>was a nice few hours fixing that). >>> >>>A suggestion to solve this was to use the pre-release archive: >>> >>>sudo add-apt-repository cloud-archive:queens-proposed >>>sudo apt-get update >>> >>>This installs gnocchi without removing keystone, but the gnocchi >>>api won't install as a service anymore. It seems like the gnocchi >>>version is not compatible with Queens. >> >>Gnocchi is Queens compatible, for sure. This is a bug in the >>Ubuntu Cloud Archive packaging. I think you are hitting this: >> >>https://bugs.launchpad.net/ubuntu/+source/gnocchi/+bug/1746992 > >Yes, hitting that one. I understand it's an issue with gnocchi-api >requiring python-3 while openstack queens is running on python-2. What >is the work-around? According to the bug tracker, a new package update will come soon in queens-proposed. So just wait for it. > Install gnocchi/gnocchi-api on a separate Apache server outside OpenStack? That could work. -- Mehdi Abaakouk mail: sileht at sileht.net irc: sileht From fungi at yuggoth.org Fri Aug 31 16:17:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 16:17:26 +0000 Subject: [Openstack] Mailman topic filtering (was: Bringing the community together...) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180830211257.oa6hxd4pningzqf4@yuggoth.org> <20180831000334.GR26778@thor.bakeyournoodle.com> Message-ID: <20180831161726.wtjbzr6yvz2wgghv@yuggoth.org> On 2018-08-31 09:35:55 +0100 (+0100), Stephen Finucane wrote: [...] > I've tinkered with mailman 3 before so I could probably take a shot at > this over the next few week(end)s; however, I've no idea how this > feature is supposed to work. Any chance an admin of the current list > could send me a couple of screenshots of the feature in mailman 2 along > with a brief description of the feature? Alternatively, maybe we could > upload them to the wiki page Tony linked above or, better yet, to the > technical details page for same: > > https://wiki.mailman.psf.io/DEV/Brief%20Technical%20Details Looks like this should be https://wiki.list.org/DEV/Brief%20Technical%20Details instead, however reading through it doesn't really sound like the topic filtering feature from MM2.
The List Member Manual has a very brief description of the feature from the subscriber standpoint: http://www.list.org/mailman-member/node29.html The List Administration Manual unfortunately doesn't have any content for the feature, just a stubbed-out section heading: http://www.list.org/mailman-admin/node30.html Sending screenshots to the ML is a bit tough, but luckily MIT's listadmins have posted some so we don't need to: http://web.mit.edu/lists/mailman/topics.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 31 16:45:24 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 31 Aug 2018 16:45:24 +0000 Subject: [Openstack] [all] Bringing the community together (combine the lists!) In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <5B883E1B.2070101@windriver.com> <1122931c-0716-5dee-264f-94f1f4b54d77@debian.org> <20180830213341.yuxyen2elx2c3is4@yuggoth.org> Message-ID: <20180831164524.mlksltzbzey6tdyo@yuggoth.org> On 2018-08-31 14:02:23 +0200 (+0200), Thomas Goirand wrote: [...] > I'm coming from the time when OpenStack had a list on launchpad > where everything was mixed. We did the split because it was really > annoying to have everything mixed. [...] These days (just running stats for this calendar year) we've been averaging 4 messages a day on the general openstack at lists.o.o ML, so if it's volume you're worried about most of it would be the current -operators and -dev ML discussions anyway (many of which are general questions from users already, because as you also pointed out we don't usually tell them to take their questions elsewhere any more). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:
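As a concrete illustration of the client-side filtering by subject tag discussed in the thread above, here is a minimal Python sketch that scans a locally downloaded mbox copy of a list archive and keeps only messages whose bracketed subject tags match a chosen set. The archive filename and the tag set are illustrative assumptions, not part of any Mailman or OpenStack tooling; most mail clients can achieve the same effect with a simple rule on the Subject header.

#!/usr/bin/env python3
"""Minimal sketch: client-side filtering of a list archive by subject tag."""
import mailbox
import re

ARCHIVE = "openstack.mbox"            # hypothetical local copy of the archive
WANTED = {"nova", "neutron", "ops"}   # illustrative set of interesting tags

TAG_RE = re.compile(r"\[([^\]]+)\]")  # matches bracketed [tag] tokens

def subject_tags(subject):
    # Lower-cased set of bracketed tags found in the subject line.
    return {tag.strip().lower() for tag in TAG_RE.findall(subject or "")}

for msg in mailbox.mbox(ARCHIVE):
    subject = msg.get("Subject", "")
    if subject_tags(subject) & WANTED:
        print("{} | {}".format(msg.get("Date", "?"), subject))

The same bracket-matching idea, expressed as a Subject-header rule in a mail client or a Sieve/procmail filter, gives subscribers roughly the per-topic view that MM2's server-side topic filtering used to provide.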