From xyzjerry at gmail.com Mon Jan 1 05:51:12 2018
From: xyzjerry at gmail.com (Jerry Xinyu Zhao)
Date: Sun, 31 Dec 2017 21:51:12 -0800
Subject: [Openstack] Fwd: [Openstack-zh] openstack CN domain and keyword
In-Reply-To: <20171204214645074585@chinaregistry-gl.com>
References: <20171204214645074585@chinaregistry-gl.com>
Message-ID:

Forward to the general list from Chinese zh mailing list. People concerned please look into it. Thanks!

---------- Forwarded message ----------
From: Thomas Liu < customerservice01 at chinaregistry-gl.com>
Date: Mon, Dec 4, 2017 at 5:46 AM
Subject: [Openstack-zh] openstack CN domain and keyword
To: openstack-zh at lists.openstack.org

(It's very urgent, please transfer this email to your CEO. Thanks)

This is a formal email. We are the Domain Registration Service company in China. Here I have something to confirm with you. On Dec 4, 2017, we received an application from Jiahong Ltd requested "openstack" as their internet keyword and China (CN) domain names (openstack.cn, openstack.com.cn, openstack.net.cn, openstack.org.cn). But after checking it, we find this name conflict with your company name or trademark. In order to deal with this matter better, it's necessary to send email to you and confirm whether this company is your business partner in China?

Best Regards,

*Thomas Liu* | Service & Operations Manager
*China Registry (Head Office)* | 6012, Xingdi Building, No. 1698 Yishan Road, Shanghai 201103, China
Tel: +86-02164193517 | Fax: +86-02161918697 | Mob: +86-13816428671
Email: thomas at chinaregistry.org.cn
Web: www.chinaregistry.org.cn

This email contains privileged and confidential information intended for the addressee only. If you are not the intended recipient, please destroy this email and inform the sender immediately. We appreciate you respecting the confidentiality of this information by not disclosing or using the information in this email.

_______________________________________________
Openstack-zh mailing list
Openstack-zh at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-zh
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org Mon Jan 1 22:15:40 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 1 Jan 2018 22:15:40 +0000
Subject: [Openstack] Fwd: [Openstack-zh] openstack CN domain and keyword
In-Reply-To:
References: <20171204214645074585@chinaregistry-gl.com>
Message-ID: <20180101221539.qj3kujkqxzp5ueph@yuggoth.org>

On 2017-12-31 21:51:12 -0800 (-0800), Jerry Xinyu Zhao wrote:
> Forward to the general list from Chinese zh mailing list.
> People concerned please look into it.
[...]

Thanks for the heads up, though this looks like a phishing scam (reputable registrars would get in touch with the domain contact listed in the cn TLD whois DB rather than a random discussion mailing list, and the sender's address seems to be at a throw-away domain which doesn't match the more official-looking address in their signature at all).
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From berndbausch at gmail.com Tue Jan 2 02:40:59 2018
From: berndbausch at gmail.com (Bernd Bausch)
Date: Tue, 2 Jan 2018 11:40:59 +0900
Subject: [Openstack] Fwd: [Openstack-zh] openstack CN domain and keyword
In-Reply-To: <20180101221539.qj3kujkqxzp5ueph@yuggoth.org>
References: <20171204214645074585@chinaregistry-gl.com> <20180101221539.qj3kujkqxzp5ueph@yuggoth.org>
Message-ID: <034001d38373$215e6b30$641b4190$@gmail.com>

When googling for chinaregistry, several pages like this one come up:
https://www.onlinethreatalerts.com/article/2017/10/10/beware-of-www-chinaregistry-com-cn-it-is-a-fake-cn-and-asia-domain-name-registration-website/

-----Original Message-----
From: Jeremy Stanley [mailto:fungi at yuggoth.org]
Sent: Tuesday, January 2, 2018 7:16 AM
To: openstack at lists.openstack.org
Subject: Re: [Openstack] Fwd: [Openstack-zh] openstack CN domain and keyword

On 2017-12-31 21:51:12 -0800 (-0800), Jerry Xinyu Zhao wrote:
> Forward to the general list from Chinese zh mailing list.
> People concerned please look into it.
[...]

Thanks for the heads up, though this looks like a phishing scam (reputable registrars would get in touch with the domain contact listed in the cn TLD whois DB rather than a random discussion mailing list, and the sender's address seems to be at a throw-away domain which doesn't match the more official-looking address in their signature at all).
--
Jeremy Stanley

From guoyongxhzhf at outlook.com Tue Jan 2 11:09:42 2018
From: guoyongxhzhf at outlook.com (Guo James)
Date: Tue, 2 Jan 2018 11:09:42 +0000
Subject: [Openstack] [openstack] [ironic] Does Ironic support that different nova-compute map to different ironic endpoint?
Message-ID:

Hi guys
I know that Ironic has support multi-nova-compute.
But I am not sure whether OpenStack support the situation than every nova-compute has a unshare ironic
And these ironic share a nova and a neutron
Thanks

From vishalr1 at umbc.edu Tue Jan 2 11:30:34 2018
From: vishalr1 at umbc.edu (Vishal Rathod)
Date: Tue, 2 Jan 2018 06:30:34 -0500
Subject: [Openstack] Run and test new changes in Openstack
Message-ID:

Hello All,

I have modified the Nova policy engine code and have to test these changes, so could you please provide the set of commands through which I can run OpenStack and test these changes. Also, do I have to restart any services while doing the same?

Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jaypipes at gmail.com Tue Jan 2 12:59:25 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Tue, 2 Jan 2018 07:59:25 -0500
Subject: [Openstack] [openstack] [ironic] Does Ironic support that different nova-compute map to different ironic endpoint?
In-Reply-To:
References:
Message-ID:

On 01/02/2018 06:09 AM, Guo James wrote:
> Hi guys
> I know that Ironic has support multi-nova-compute.
> But I am not sure whether OpenStack support the situation than every
> nova-compute has a unshare ironic And these ironic share a nova and a
> neutron

I'm not quite following you... what do you mean by "has a unshare ironic"?

Best,
-jay

From guoyongxhzhf at outlook.com Tue Jan 2 14:10:10 2018
From: guoyongxhzhf at outlook.com (Guo James)
Date: Tue, 2 Jan 2018 14:10:10 +0000
Subject: [Openstack] [openstack] [ironic] Does Ironic support that different nova-compute map to different ironic endpoint?
In-Reply-To:
References:
Message-ID:

I mean that there are two nova-computes in one OpenStack environment.
Every nova-compute is configured to map to bare metal.
They communicate with different Ironic endpoints. That means there are two Ironics, but one Nova and one Neutron, in one OpenStack environment.
Does everything go well?

Thanks

> -----Original Message-----
> From: Jay Pipes [mailto:jaypipes at gmail.com]
> Sent: Tuesday, January 02, 2018 8:59 PM
> To: openstack at lists.openstack.org
> Subject: Re: [Openstack] [openstack] [ironic] Does Ironic support that different
> nova-compute map to different ironic endpoint?
>
> On 01/02/2018 06:09 AM, Guo James wrote:
> > Hi guys
> > I know that Ironic has support multi-nova-compute.
> > But I am not sure whether OpenStack support the situation than every
> > nova-compute has a unshare ironic And these ironic share a nova and a
> > neutron
>
> I'm not quite following you... what do you mean by "has a unshare ironic"?
>
> Best,
> -jay
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From jaypipes at gmail.com Tue Jan 2 15:29:40 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Tue, 2 Jan 2018 10:29:40 -0500
Subject: [Openstack] [openstack] [ironic] Does Ironic support that different nova-compute map to different ironic endpoint?
In-Reply-To:
References:
Message-ID:

On 01/02/2018 09:10 AM, Guo James wrote:
> I mean that there are two nova-computes in one OpenStack environment.
> Every nova-compute is configured to map to bare metal.
> They communicate with different Ironic endpoints.

I see. So, two different ironic-api service endpoints.

> That means there are two Ironics, but one Nova and one Neutron, in one OpenStack environment
>
> Does everything go well?

Sure, that should work just fine.

Best,
-jay

> Thanks
>
>> -----Original Message-----
>> From: Jay Pipes [mailto:jaypipes at gmail.com]
>> Sent: Tuesday, January 02, 2018 8:59 PM
>> To: openstack at lists.openstack.org
>> Subject: Re: [Openstack] [openstack] [ironic] Does Ironic support that different
>> nova-compute map to different ironic endpoint?
>>
>> On 01/02/2018 06:09 AM, Guo James wrote:
>>> Hi guys
>>> I know that Ironic has support multi-nova-compute.
>>> But I am not sure whether OpenStack support the situation than every
>>> nova-compute has a unshare ironic And these ironic share a nova and a
>>> neutron
>>
>> I'm not quite following you... what do you mean by "has a unshare ironic"?
>>
>> Best,
>> -jay
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From mrhillsman at gmail.com Wed Jan 3 13:52:43 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Wed, 3 Jan 2018 07:52:43 -0600
Subject: [Openstack] Ohayo! Q1 2018
Message-ID:

https://etherpad.openstack.org/p/TYO-ops-meetup-2018

Hey everyone,

What do you think about the new logo!

Just a friendly reminder that the Ops Meetup for Spring 2018 is approaching, March 7-8, 2018 in Tokyo, and we are looking for additional topics. Spring 2018 will have NFV+General on day one and Enterprise+General on day two. Add additional topics to the etherpad or +/- 1 those already proposed. Additionally, if you are attending and would like to moderate a session, add your name to the moderator list near the bottom of the etherpad.
--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: opsmeetuplogo.png Type: image/png Size: 38873 bytes Desc: not available URL:

From doug at doughellmann.com Thu Jan 4 14:42:08 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 04 Jan 2018 09:42:08 -0500
Subject: [Openstack] Run and test new changes in Openstack
In-Reply-To:
References:
Message-ID: <1515076789-sup-2135@lrrr.local>

Excerpts from Vishal Rathod's message of 2018-01-02 06:30:34 -0500:
> Hello All,
>
> I have modified the Nova policy engine code and have to test these changes,
> so could you please provide the set of commands through which I can run
> OpenStack and test these changes. Also, do I have to restart any services
> while doing the same?
>
> Thanks!

Hi, Vishal,

I assume you have already reviewed the documentation for the policy library (https://docs.openstack.org/oslo.policy/latest/). You are right that we are working on moving the defaults for policy settings into the source code. It is still possible to provide a YAML or JSON file with overrides for those rules, though, just as in the past. The policy_file and policy_dirs configuration options (described in the policy library docs) control where each program looks for the files.

The policy rules control the permissions allowed to individual users in the REST API. So the commands to run will depend on what changes to policy you are making. You can test using the python-openstackclient program (https://pypi.python.org/pypi/python-openstackclient) or a client written in another language if you’re more comfortable working with Java or golang.

I hope that helps,
Doug

From sashang at gmail.com Fri Jan 5 00:55:31 2018
From: sashang at gmail.com (Sashan Govender)
Date: Fri, 05 Jan 2018 00:55:31 +0000
Subject: [Openstack] expected tables in database after sync
Message-ID:

Hi

I'm going through the glance installation guide here:
https://docs.openstack.org/newton/install-guide-rdo/glance-install.html

The last step is:
su -s /bin/sh -c "glance-manage db_sync" glance

What tables should I expect in the glance database? After executing that command above, this is what it currently shows:

MariaDB [keystone]> use glance;
Database changed
MariaDB [glance]> use tables;
ERROR 1049 (42000): Unknown database 'tables'
MariaDB [glance]> show tables;
Empty set (0.00 sec)

MariaDB [glance]>

I recall having the same issue with keystone when setting up its database. For a while it wasn't populated with anything, but now the keystone db has stuff in it after I fiddled around. I can't remember what I did though.

MariaDB [mysql]> use keystone;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone     |
+------------------------+
| access_token           |
| assignment             |
| config_register        |
| consumer               |
| credential             |
| endpoint               |
| endpoint_group         |
| federated_user         |
| federation_protocol    |
| group                  |
| id_mapping             |
| identity_provider      |

I expect a similar set of tables in the glance db.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
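For what it's worth, a minimal way to check whether the sync ran against the database you expect: a sketch, assuming the standard Newton layout where the [database] connection option lives in /etc/glance/glance-api.conf:

# re-run the sync and show its exit status (non-zero means it failed)
su -s /bin/sh -c "glance-manage db_sync" glance; echo $?
# confirm the service is pointed at the glance database
grep ^connection /etc/glance/glance-api.conf
# a populated schema includes tables such as images and image_properties
mysql -u glance -p glance -e "SHOW TABLES;"

If the connection string still has a placeholder password or the wrong database name, db_sync will create the tables somewhere else (or nowhere), which matches the empty "show tables;" output above.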
From sashang at gmail.com Fri Jan 5 02:30:18 2018
From: sashang at gmail.com (Sashan Govender)
Date: Fri, 05 Jan 2018 02:30:18 +0000
Subject: [Openstack] expected tables in database after sync
In-Reply-To:
References:
Message-ID:

Further, if I try the verification steps on the next page, https://docs.openstack.org/newton/install-guide-rdo/glance-verify.html I get the following error:

openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public

500 Internal Server Error
The server has either erred or is incapable of performing the requested operation. (HTTP 500)

Looking in /var/log/glance/api.log:

ERROR glance.common.wsgi ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 'glance.images' doesn't exist")

So it's failing because there are no tables in the glance db. Which brings me back to my original question: what is 'su -s /bin/sh -c "glance-manage db_sync" glance' meant to do?

On Fri, Jan 5, 2018 at 11:55 AM Sashan Govender wrote:
> Hi
>
> I'm going through the glance installation guide here:
> https://docs.openstack.org/newton/install-guide-rdo/glance-install.html
>
> The last step is:
> su -s /bin/sh -c "glance-manage db_sync" glance
>
> What tables should I expect in the glance database?
> After executing that command above, this is what it currently shows:
>
> MariaDB [keystone]> use glance;
> Database changed
> MariaDB [glance]> use tables;
> ERROR 1049 (42000): Unknown database 'tables'
> MariaDB [glance]> show tables;
> Empty set (0.00 sec)
>
> MariaDB [glance]>
>
> I recall having the same issue with keystone when setting up its database.
> For a while it wasn't populated with anything, but now the keystone db has
> stuff in it after I fiddled around. I can't remember what I did though.
>
> MariaDB [mysql]> use keystone;
> Reading table information for completion of table and column names
> You can turn off this feature to get a quicker startup with -A
>
> Database changed
> MariaDB [keystone]> show tables;
> +------------------------+
> | Tables_in_keystone     |
> +------------------------+
> | access_token           |
> | assignment             |
> | config_register        |
> | consumer               |
> | credential             |
> | endpoint               |
> | endpoint_group         |
> | federated_user         |
> | federation_protocol    |
> | group                  |
> | id_mapping             |
> | identity_provider      |
>
> I expect a similar set of tables in the glance db.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ben.thomas at bristol.ac.uk Fri Jan 5 19:28:14 2018
From: ben.thomas at bristol.ac.uk (Ben Thomas)
Date: Fri, 5 Jan 2018 19:28:14 +0000
Subject: [Openstack] Unable to spin up VMs using SR-IOV network interfaces
Message-ID:

Hello,

I am trying to configure OpenStack Ocata with support for SR-IOV on Intel X710 NICs, but am struggling to launch any VMs. I've followed the steps available at: https://docs.openstack.org/ocata/networking-guide/config-sriov.html, but am encountering an issue where my available hosts are filtered.

##### Problem #####

A VM configured with a port that uses "--binding:vnic_type direct" fails to launch.

##### Details #####

Following the guide available at https://docs.openstack.org/ocata/networking-guide/config-sriov.html I have tried to enable the use of Virtual Function (VF) ports using SR-IOV on an Intel X710 NIC. When I come to spin up a VM using the steps documented, e.g.

openstack server create --flavor m1.large --image cirros --nic port-id=$port_id test-sriov

the instantiation fails.
In /var/log/nova-conductor.log, I can see:

2018-01-05 18:49:04.470 36491 WARNING nova.scheduler.utils [req-788817bb-4e78-4a1b-8bf5-240a20472cd8 0218d276a04e40ab806d4075671ef01b dd2323e3ab4f45f29b6e7840fb7c0d2c - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 218, in inner
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 98, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)
  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
    raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
2018-01-05 18:49:04.471 36491 WARNING nova.scheduler.utils [req-788817bb-4e78-4a1b-8bf5-240a20472cd8 0218d276a04e40ab806d4075671ef01b dd2323e3ab4f45f29b6e7840fb7c0d2c - - -] [instance: f18454bb-ff7e-4171-8ebd-982197e353f7] Setting instance to ERROR state.

Note: NoValidHost: No valid host was found. There are not enough hosts available.

In /var/log/nova-scheduler.log I also see the following:

2018-01-05 18:49:04.371 37065 INFO nova.filters [...] Filter PciPassthroughFilter returned 0 hosts
2018-01-05 18:49:04.372 37065 INFO nova.filters [...] Filtering removed all hosts for the request with instance ID 'f18454bb-ff7e-4171-8ebd-982197e353f7'. Filter results: ['PciPassthroughFilter: (start: 2, end: 0)']

Initially, I felt this might be because the VM flavor used didn't have a "pci_passthrough:alias" value set. I've made sure to set my flavor like so:

+----------------------------+--------------------------------------------------------------------+
| Field                      | Value                                                              |
+----------------------------+--------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                              |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                  |
| access_project_ids         | af8fadc445164950a6d7ad4f91e1e06b, dd2323e3ab4f45f29b6e7840fb7c0d2c |
| disk                       | 5                                                                  |
| id                         | fc76fed6-0583-4c6b-908b-3c49d6cd6652                               |
| name                       | m2.mini                                                            |
| os-flavor-access:is_public | False                                                              |
| properties                 | pci_passthrough:alias='a1:1'                                       |
| ram                        | 512                                                                |
| rxtx_factor                | 1.0                                                                |
| swap                       |                                                                    |
| vcpus                      | 1                                                                  |
+----------------------------+--------------------------------------------------------------------+

where "alias" is set in /etc/nova/nova.conf on the controller as:

[pci]
alias = { "vendor_id":"8086", "product_id":"154c", "device_type":"type-VF", "name":"a1" }

I've checked the PCI details on my VFs, and they are as follows:

81:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572] (rev 01)
81:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572] (rev 01)
81:02.0 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
81:02.1 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
81:02.2 Ethernet controller [0200]: Intel Corporation XL710/X710 Virtual Function [8086:154c] (rev 01)
[and so on... 30x VFs in total]
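For reference, one way to see what the compute node actually registered with the PCI resource tracker: a sketch, assuming direct MySQL access to the nova database on the controller (table and column names per the Ocata schema):

mysql -u root -p -e "SELECT address, product_id, dev_type, status FROM nova.pci_devices;"

If the 154c VFs appear there as available but PciPassthroughFilter still returns 0 hosts, the mismatch is usually between the [pci] alias/whitelist entries and what the tracker registered.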
As for the other configuration files, /etc/nova/nova.conf on the compute node is configured as:

[pci]
passthrough_whitelist = { "devname": "enp129s0f0", "physical_network": "physnet2" }

the /etc/neutron/plugins/ml2/sriov_agent.ini on the compute node is configured to map my physical network to the NIC as follows:

[sriov_nic]
physical_device_mappings = physnet2:enp129s0f0

and the firewall is disabled with:

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

/etc/neutron/plugins/ml2/ml2_conf.ini on the controller has mechanism_drivers configured as:

mechanism_drivers = linuxbridge,sriovnicswitch

And lastly, /etc/nova/nova.conf on the controller has the following filters set:

[DEFAULT]
scheduler_default_filters = PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters

I've been investigating this for some time now, and am able to confirm that without pursuing the SR-IOV route I am able to create VLAN provider networks on my NIC interface quite happily, and VMs can connect to them with no issue.

Are there any obvious debugging routes I should investigate to try and resolve this?

Thanks for any replies!

Ben

Ben Thomas
Senior Research Associate
UK 5G Testbeds and Trials - University of Bristol
Level 0 - Merchant Venturers Building
Woodland Road, Clifton, Bristol, BS8 1UB, United Kingdom
Email: ben.thomas at bristol.ac.uk
Smart Internet Lab
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ycc0301 at gmail.com Mon Jan 8 05:50:10 2018
From: ycc0301 at gmail.com (Ying-Chuan Chen)
Date: Mon, 8 Jan 2018 13:50:10 +0800
Subject: [Openstack] How to setup nova's policy.json ensure only owner can list his instance?
Message-ID:

Hi guys,
I want to ensure that only the owner of the instances can list his instances.
I tried to add rules in /etc/openstack-dashboard/nova_policy.json like below:

"owner": "user_id:%(user_id)s",

"compute:get": "rule:owner",

But it doesn't work.
How do I set up the policy to ensure only the owner can list his instances?
Version: Ocata, OS: CentOS 7.3

Thanks a lot!

From markus.hentsch at cloudandheat.com Mon Jan 8 06:23:01 2018
From: markus.hentsch at cloudandheat.com (Markus Hentsch)
Date: Mon, 8 Jan 2018 07:23:01 +0100
Subject: [Openstack] How to setup nova's policy.json ensure only owner can list his instance?
In-Reply-To:
References:
Message-ID: <91689962-40ab-6fad-f1f8-5a177ef20fcb@cloudandheat.com>

Hello,

as far as I am aware, the lowest possible level you can (officially) reach with the policy files is project-level, not user-level. Some APIs still provide user-level checks, but those are a thing from the past and effectively deprecated. Nova API was migrated to Oslo Policies for API 2.1, where the user-level was removed entirely from the policy implementation, if I recall correctly.
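For the project-level check that the policy files do support, a minimal sketch of the corresponding entries; the names below follow the Ocata nova policy defaults, so please verify them against your installed policy file:

"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"os_compute_api:servers:index": "rule:admin_or_owner",

That restricts listing to members of the owning project rather than to a single user, which is the finest granularity the supported policy targets give you here.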
Kind regards,

Markus Hentsch
Cloud&Heat Technologies

On 08.01.2018 at 06:50, Ying-Chuan Chen wrote:
> Hi guys,
> I want to ensure that only the owner of the instances can list his instances.
> I tried to add rules in /etc/openstack-dashboard/nova_policy.json like below:
>
> "owner": "user_id:%(user_id)s",
>
> "compute:get": "rule:owner",
>
> But it doesn't work.
> How do I set up the policy to ensure only the owner can list his instances?
> Version: Ocata, OS: CentOS 7.3
>
> Thanks a lot!
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eblock at nde.ag Mon Jan 8 12:45:19 2018
From: eblock at nde.ag (Eugen Block)
Date: Mon, 08 Jan 2018 12:45:19 +0000
Subject: [Openstack] Upgrade Mitaka to Pike
In-Reply-To:
Message-ID: <20180108124519.Horde._i0opRpMHbzUWUl1BpOd81B@webmail.nde.ag>

Hi,

in case you haven't received any answers yet, I thought it might still help if I share our experiences with the upgrade procedure. It really was tricky in some parts, but we managed to get both Ceph and OpenStack up and running. We migrated from Ceph Jewel to Luminous and OpenStack Mitaka to Ocata, not Pike yet.

Basically, your procedure is correct. This is how we did it:

1. upgraded OS of each ceph server, no issues
2. upgraded ceph packages from jewel to luminous on each ceph server, no issues
3. upgraded ceph packages from jewel to luminous on all cloud nodes, no problems
4. upgraded controller to Newton, no problems, yet clients could still work properly if they were in external networks, because neutron had to be stopped
5. double-upgraded control node OS and Newton to Ocata because ceph-client caused some troubles
6. upgraded OS of compute nodes last, no issues; this was quite easy since we could live-migrate instances to other hosts

The biggest trouble was caused by the database migration. We had to manipulate the DB ourselves based on the error messages in the logs, but eventually it worked. After the most important services were back online I started to update the configs according to the warnings in the logs for cinder, nova, neutron etc.

There were some guides that helped me keep the right order:
[1] https://docs.openstack.org/nova/latest/user/upgrade.html
[2] https://www.rdoproject.org/install/upgrading-rdo/

Unfortunately, I don't have a detailed step-by-step guide about our procedures and the issues we had to resolve. Since it was kind of time critical, I was focused on resolving the problems instead of documenting everything. ;-)

I hope this helps anyway, if you haven't already managed it. We didn't have a real downtime of our VMs, at least not the ones in production use, since they work in external networks and are not depending on the neutron services on the control node. Being able to live-migrate instances was also quite helpful. :-)
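A minimal sketch of the schema-sync pass that each release hop (Mitaka -> Newton -> Ocata) needs, assuming the standard *-manage commands; verify the exact steps against each release's upgrade notes before running them:

# with the APIs stopped, once per hop:
keystone-manage db_sync
glance-manage db_sync
nova-manage api_db sync
nova-manage db sync
nova-manage db online_data_migrations   # must complete before the next hop
cinder-manage db sync
neutron-db-manage upgrade heads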
Regards,
Eugen

Quoting Sam Huracan:

> Hi OpenStackers,
>
> I'm planning upgrading my OpenStack System. Currently version is Mitaka, on
> Ubuntu 14.04.5.
> I want to upgrade to latest version of Pike.
>
> I've read some documents and know that Mitaka does not have Rolling
> upgrade, which means there will have downtime in upgrade process.
>
> Our system has 3 HA Controllers, all VMs and Storage were put in Ceph.
>
> At the moment, I can list some step-by-step to upgrade:
>
> 1. Upgrade OS to Ubuntu 16.04
> 2. Upgrade packages in order: Mitaka -> Newton -> Ocata -> Pike
> 3. Upgrade DB in order: Mitaka -> Newton -> Ocata -> Pike
>
> Do I lack any step? Could you guys share some experiences or a full
> solution, to reduce maximum downtime of system?
>
> Thanks in advance.

--
Eugen Block                             voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail  : eblock at nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983

From remo at italy1.com Mon Jan 8 15:49:39 2018
From: remo at italy1.com (remo at italy1.com)
Date: Mon, 8 Jan 2018 07:49:39 -0800
Subject: [Openstack] How to setup nova's policy.json ensure only owner can list his instance?
In-Reply-To: <91689962-40ab-6fad-f1f8-5a177ef20fcb@cloudandheat.com>
References: <91689962-40ab-6fad-f1f8-5a177ef20fcb@cloudandheat.com>
Message-ID:

I agree with Markus: project level. You need to modify /etc/keystone/policy.json and then copy it over to the horizon folder.

On Jan 7, 2018, at 22:23, Markus Hentsch <markus.hentsch at cloudandheat.com> wrote:

> Hello,
>
> as far as I am aware, the lowest possible level you can (officially) reach
> with the policy files is project-level, not user-level. Some APIs still
> provide user-level checks, but those are a thing from the past and
> effectively deprecated. Nova API was migrated to Oslo Policies for API 2.1,
> where the user-level was removed entirely from the policy implementation,
> if I recall correctly.
>
> Kind regards,
>
> Markus Hentsch
> Cloud&Heat Technologies
>
> On 08.01.2018 at 06:50, Ying-Chuan Chen wrote:
>> Hi guys,
>> I want to ensure that only the owner of the instances can list his instances.
>> I tried to add rules in /etc/openstack-dashboard/nova_policy.json like below:
>>
>> "owner": "user_id:%(user_id)s",
>>
>> "compute:get": "rule:owner",
>>
>> But it doesn't work.
>> How do I set up the policy to ensure only the owner can list his instances?
>> Version: Ocata, OS: CentOS 7.3
>>
>> Thanks a lot!
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From contact at arkade.info Tue Jan 9 09:42:44 2018
From: contact at arkade.info (aRkadeFR)
Date: Tue, 9 Jan 2018 10:42:44 +0100
Subject: [Openstack] [openstack][nova] VRRP packets lost
Message-ID: <20180109094244.nm3rg3djdul6vltj@lenodoc>

Hello,

After a manual failover from the master to the slave keepalived (shutting down the keepalived service on the master, then restarting it again), the VRRP packets the master sends are "lost", or at least not visible from the slave, for a couple of seconds/minutes, leading to the IP being on the two instances during this lapse of time.

We are running OpenStack Juno, with two Debian wheezy VMs as keepalived instances inside the OS, on KVM.

I diagnosed the problem by doing the manual failover and monitoring the IP and network with the ``ip addr show`` and ``tcpdump -n -i any vrrp`` commands inside the VMs.

I suspect the OpenStack network agent is not routing the VRRP packets correctly during this lapse of time.
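One avenue worth ruling out (an assumption on my side, since it only applies if port security is enabled on the tenant network) is Neutron's anti-spoofing rules dropping traffic for the shared VIP rather than the agent losing it. A sketch of the usual whitelisting with the Juno-era neutron client, where <port-uuid>, <vip-address> and <secgroup> are placeholders:

neutron port-update <port-uuid> --allowed-address-pairs type=dict list=true ip_address=<vip-address>
neutron security-group-rule-create --direction ingress --protocol 112 <secgroup>

The port update is needed on both instances' ports; the protocol-112 (VRRP) rule depends on the client/API accepting numeric protocols.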
Do you have any clue on how to fix the problem or diagnose it further?

Thank you,

--
aRkadeFR

From guoyongxhzhf at outlook.com Wed Jan 10 09:10:06 2018
From: guoyongxhzhf at outlook.com (Guo James)
Date: Wed, 10 Jan 2018 09:10:06 +0000
Subject: [Openstack] [ironic] how to prevent an ironic user from controlling IPMI through the OS?
Message-ID:

I notice that after an Ironic user gets a bare-metal node successfully, he can access IPMI through the IPMI device, although he can't access IPMI through the LAN.
How can I prevent this situation? If he modifies the IPMI configuration, that will be a mess.

From xiefp88 at sina.com Thu Jan 11 01:11:45 2018
From: xiefp88 at sina.com (xiefp88 at sina.com)
Date: Thu, 11 Jan 2018 09:11:45 +0800
Subject: [Openstack] Re: [ironic] how to prevent an ironic user from controlling IPMI through the OS?
Message-ID: <20180111011145.504724C0315@webmail.sinamail.sina.com.cn>

If you cannot get the username and password of the OS, you cannot modify the IPMI configuration even though you got the Ironic user info.

----- Original Message -----
From: Guo James
To: "openstack at lists.openstack.org"
Subject: [Openstack] [ironic] how to prevent an ironic user from controlling IPMI through the OS?
Date: 2018-01-10 17:21

I notice that after an Ironic user gets a bare-metal node successfully, he can access IPMI through the IPMI device, although he can't access IPMI through the LAN.
How can I prevent this situation? If he modifies the IPMI configuration, that will be a mess.

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guoyongxhzhf at outlook.com Thu Jan 11 03:16:34 2018
From: guoyongxhzhf at outlook.com (Guo James)
Date: Thu, 11 Jan 2018 03:16:34 +0000
Subject: [Openstack] [ironic] how to prevent an ironic user from controlling IPMI through the OS?
Message-ID:

An Ironic user can change the IPMI address so that OpenStack Ironic loses control of the bare-metal node.
I think that is unacceptable.
It seems that we should build the Ironic image without root privileges.

From: xiefp88 at sina.com [mailto:xiefp88 at sina.com]
Sent: Thursday, January 11, 2018 9:12 AM
To: Guo James; openstack
Subject: Re: [Openstack] [ironic] how to prevent an ironic user from controlling IPMI through the OS?

If you cannot get the username and password of the OS, you cannot modify the IPMI configuration even though you got the Ironic user info.

----- Original Message -----
From: Guo James
To: "openstack at lists.openstack.org"
Subject: [Openstack] [ironic] how to prevent an ironic user from controlling IPMI through the OS?
Date: 2018-01-10 17:21

I notice that after an Ironic user gets a bare-metal node successfully, he can access IPMI through the IPMI device, although he can't access IPMI through the LAN.
How can I prevent this situation? If he modifies the IPMI configuration, that will be a mess.

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rbowen at redhat.com Thu Jan 11 18:26:05 2018
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 11 Jan 2018 13:26:05 -0500
Subject: [Openstack] FOSDEM table volunteers needed
Message-ID:

If you'll be at FOSDEM, please consider spending an hour at the OpenStack table.
https://etherpad.openstack.org/p/fosdem-2018

It's easy work (although it tends to be cold!) with most of the questions at a pretty basic level. And you get to meet your colleagues from other companies.

Thanks!

--Rich

--
Rich Bowen - rbowen at redhat.com
@RDOcommunity // @CentOSProject // @rbowen

From ondrej.vaskoo at gmail.com Fri Jan 12 13:31:35 2018
From: ondrej.vaskoo at gmail.com (Ondrej Vaško)
Date: Fri, 12 Jan 2018 14:31:35 +0100
Subject: [Openstack] [Nova]Update glance image contents
Message-ID:

Hello guys,

I am dealing with one issue, and that is the question: *What is the right approach for updating OpenStack Glance images?*

When a new version of a cloud image comes out, for example the Ubuntu cloud images, I want to update the old Glance image with that new cloud image (as currently urged by the Spectre mitigation updates). I see several possibilities:

1. Create a new image with a different name.
> Downside is that I will have many images of the same distribution and
> release with different updates/kernels, and tenants wouldn't know which to
> use. Also there will be additional disk space used.
2. Create a new image with the same name as the old image and delete the old one (I think a lot of people are doing it like that).
> Downside is that instances which used the old image will have an empty
> image name in Horizon, and rebuild or other operations may not work.
> Basically the `*image_ref*` column in nova.instances is pointing to the
> UUID of the old image, and all operations with this UUID will potentially fail.
3. Create a new image with a different name, but hide the old image by making it private and with no members.
> Hacky way, and the other issues as above.
4. Create a new image, delete the old one, and change the `*image_ref*` column in the nova instances table from the old one to the new one.
> Hacky way, and possible negative impact.

Also note that the API function for Glance upload works only for images with status *queued*, so it cannot be used.

So is there a preferred way to update a Glance image, or is this still an open and unaddressed issue?

I found a similar OpenStack mailing list conversation at this link, but no systematic answer was there, so I am asking again here after 3 years.

Thank you in advance for your shared insights, advice, and experiences.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From moreira.belmiro.email.lists at gmail.com Fri Jan 12 13:56:39 2018
From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira)
Date: Fri, 12 Jan 2018 14:56:39 +0100
Subject: [Openstack] [Nova]Update glance image contents
In-Reply-To:
References:
Message-ID:

Hi Ondrej,

the following spec tries to address the issue that you described:
https://review.openstack.org/#/c/508133/

Let me know if you have comments/suggestions.
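Until something like that lands, a common workaround is to deactivate the old image rather than delete it, so existing instances keep a resolvable image_ref while new boots can't pick it up. A sketch, assuming the image service exposes the v2 deactivation call (available since Kilo); the UUID and name below are placeholders:

# keep the old record but block new boots from it
glance image-deactivate <old-image-uuid>
# upload the refreshed image under the same name
openstack image create "Ubuntu 16.04" --file new-image.img --disk-format qcow2 --container-format bare --public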
cheers,
Belmiro

On Fri, Jan 12, 2018 at 2:31 PM, Ondrej Vaško wrote:

> Hello guys,
>
> I am dealing with one issue, and that is the question: *What is the right
> approach for updating OpenStack Glance images?*
>
> When a new version of a cloud image comes out, for example the Ubuntu cloud
> images, I want to update the old Glance image with that new cloud image (as
> currently urged by the Spectre mitigation updates). I see several possibilities:
>
> 1. Create a new image with a different name.
>> Downside is that I will have many images of the same distribution and
>> release with different updates/kernels, and tenants wouldn't know which to
>> use. Also there will be additional disk space used.
> 2. Create a new image with the same name as the old image and delete the old
> one (I think a lot of people are doing it like that).
>> Downside is that instances which used the old image will have an empty
>> image name in Horizon, and rebuild or other operations may not work.
>> Basically the `*image_ref*` column in nova.instances is pointing to the
>> UUID of the old image, and all operations with this UUID will potentially fail.
> 3. Create a new image with a different name, but hide the old image by making
> it private and with no members.
>> Hacky way, and the other issues as above.
> 4. Create a new image, delete the old one, and change the `*image_ref*` column
> in the nova instances table from the old one to the new one.
>> Hacky way, and possible negative impact.
>
> Also note that the API function for Glance upload works only for images with
> status *queued*, so it cannot be used.
>
> So is there a preferred way to update a Glance image, or is this still an open
> and unaddressed issue?
>
> I found a similar OpenStack mailing list conversation at this link, but no
> systematic answer was there, so I am asking again here after 3 years.
>
> Thank you in advance for your shared insights, advice, and experiences.
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jcolestock at gmail.com Fri Jan 12 14:27:39 2018
From: jcolestock at gmail.com (Jimmy Colestock)
Date: Fri, 12 Jan 2018 09:27:39 -0500
Subject: [Openstack] Cinder iscsi performance
Message-ID: <848D2635-AD3D-4530-9DF0-035E89EF95CA@gmail.com>

Hello All,

Fighting an issue with cinder iscsi performance being much slower than local ephemeral disk. We're running multipath with dual 10Gb NICs; offloading is working and bandwidth across the links is minimal. I've looked at kernel schedulers in the VMs and tried them all.

I still need to dig into the storage side (3Par), but just curious if anyone has any suggestions while I keep troubleshooting.

Thanks for any help.

JC

Jim Colestock
jcolestock at gmail.com
https://www.linkedin.com/in/jcolestock/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From e0ne at e0ne.info Fri Jan 12 16:10:41 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Fri, 12 Jan 2018 18:10:41 +0200
Subject: [Openstack] Cinder iscsi performance
In-Reply-To: <848D2635-AD3D-4530-9DF0-035E89EF95CA@gmail.com>
References: <848D2635-AD3D-4530-9DF0-035E89EF95CA@gmail.com>
Message-ID:

Hi Jim,

What target driver do you use? LIO should be faster than the default tgtd.
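For reference, switching the target is a one-option change in Cinder. A sketch, assuming volumes are exported by the cinder-volume host itself (rather than directly by the 3Par array) and that the LIO userland (targetcli/python-rtslib) is installed:

# /etc/cinder/cinder.conf on the cinder-volume node
[DEFAULT]
iscsi_helper = lioadm
# note: newer releases rename this option to target_helper

then restart cinder-volume. It won't help if the 3Par driver exports volumes straight from the array, but it takes tgtd out of the picture for host-exported volumes.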
Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Fri, Jan 12, 2018 at 4:27 PM, Jimmy Colestock wrote:

> Hello All,
>
> Fighting an issue with cinder iscsi performance being much slower than local
> ephemeral disk. We're running multipath with dual 10Gb NICs; offloading is
> working and bandwidth across the links is minimal. I've looked at kernel
> schedulers in the VMs and tried them all.
>
> I still need to dig into the storage side (3Par), but just curious if
> anyone has any suggestions while I keep troubleshooting.
>
> Thanks for any help.
>
> JC
>
> Jim Colestock
> jcolestock at gmail.com
> https://www.linkedin.com/in/jcolestock/
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pablobrunetti at hotmail.com Fri Jan 12 17:56:45 2018
From: pablobrunetti at hotmail.com (pablo brunetti)
Date: Fri, 12 Jan 2018 17:56:45 +0000
Subject: [Openstack] [Gnocchi][Ceilometer]Alter storage time in gnocchi
Message-ID:

Hello guys,

I installed OpenStack Pike multinode for my project, with 1 controller node and 2 compute nodes. I need to change the storage interval for metrics in the database to one minute or less; by default, Gnocchi stores each metric every 5 minutes. I updated [dispatcher_gnocchi] to the medium or high archive policy, but the storage interval has not changed. The log files are OK.

I found a similar issue on Ask OpenStack: https://ask.openstack.org/en/question/111236/how-to-change-telemetry-measurement-interval-with-gnocchi/ but no systematic answer was there.

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sashang at gmail.com Mon Jan 15 05:19:04 2018
From: sashang at gmail.com (Sashan Govender)
Date: Mon, 15 Jan 2018 05:19:04 +0000
Subject: [Openstack] Could not determine a suitable URL for the plugin
Message-ID:

Hi

I've set up an OpenStack system based on the instructions here:
https://docs.openstack.org/newton/

I'm trying to launch an instance:

$ . demo-openrc
$ openstack server create --flavor m1.nano --image cirros --nic net-id=da77f469-f594-42f6-ab18-8b907b3359e4 --security-group default --key-name mykey provider-instance

but get this error in the nova-conductor log file:

2018-01-15 15:46:48.938 2566 WARNING nova.scheduler.utils [req-5b47171a-f74e-4e8e-8659-89cce144f284 82858c289ca444bf90fcd41123d069ce 61b0b2b23b08419596bd923f2c544956 - - -] [instance: e1cfc9a9-9c21-435f-a9dc-c7c692e06c29] Setting instance to ERROR state.
2018-01-15 16:09:51.026 2567 ERROR nova.scheduler.utils [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce 61b0b2b23b08419596bd923f2c544956 - - -] [instance: 0ba01247-5513-4c58-bf04-18092fff2622] Error from last host: compute (node compute): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1787, in _do_build_and_run_instance\n filter_properties)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1985, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 0ba01247-5513-4c58-bf04-18092fff2622 was re-scheduled: Could not determine a suitable URL for the plugin\n']
2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce 61b0b2b23b08419596bd923f2c544956 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 199, in inner return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations dests = self.driver.select_destinations(ctxt, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations raise exception.NoValidHost(reason=reason) NoValidHost: No valid host was found. There are not enough hosts available. 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce 61b0b2b23b08419596bd923f2c544956 - - -] [instance: 0ba01247-5513-4c58-bf04-18092fff2622] Setting instance to ERROR state. Any tips how to resolve this? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Mon Jan 15 08:56:31 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 15 Jan 2018 17:56:31 +0900 Subject: [Openstack] Could not determine a suitable URL for the plugin In-Reply-To: References: Message-ID: <006501d38dde$bf202f50$3d608df0$@gmail.com> This may be due to misconfiguration in nova.conf on the compute node. You may have provided incorrect information how to contact Keystone. Double-check the [keystone_authtoken] section, in particular the URLs in there. Also check the Nova compute log on the compute host for additional information. From: Sashan Govender [mailto:sashang at gmail.com] Sent: Monday, January 15, 2018 2:19 PM To: openstack at lists.openstack.org Subject: [Openstack] Could not determine a suitable URL for the plugin Hi I've setup an openstack system based on the instructions here: https://docs.openstack.org/newton/ I'm trying to launch an instance: $ . demo-openrc $ openstack server create --flavor m1.nano --image cirros --nic net-id=da77f469-f594-42f6-ab18-8b907b3359e4 --security-group default --key-name mykey provider-instance but get this error in the nova-conductor log file: 2018-01-15 15:46:48.938 2566 WARNING nova.scheduler.utils [req-5b47171a-f74e-4e8e-8659-89cce144f284 82858c289ca444bf90fcd41123d069ce 61b0b2b23b08419596bd923f2c544956 - - -] [instance: e1cfc9a9-9c21-435f-a9dc-c7c692e06c29] Setting instance to ERROR state. 2018-01-15 16:09:51.026 2567 ERROR nova.scheduler.utils [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce 61b0b2b23b08419596bd923f2c544956 - - -] [instance: 0ba01247-5513-4c58-bf04-18092fff2622] Error from last host: compute (node compute): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1787, in _do_build_and_run_instance\n filter_properties)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1985, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 0ba01247-5513-4c58-bf04-18092fff2622 was re-scheduled: Could not determine a suitable URL for the plugin\n'] 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce 61b0b2b23b08419596bd923f2c544956 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available. 
Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 199, in inner return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations dests = self.driver.select_destinations(ctxt, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations raise exception.NoValidHost(reason=reason) NoValidHost: No valid host was found. There are not enough hosts available. 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce 61b0b2b23b08419596bd923f2c544956 - - -] [instance: 0ba01247-5513-4c58-bf04-18092fff2622] Setting instance to ERROR state. Any tips how to resolve this? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Mon Jan 15 08:57:42 2018 From: eblock at nde.ag (Eugen Block) Date: Mon, 15 Jan 2018 08:57:42 +0000 Subject: [Openstack] Could not determine a suitable URL for the plugin In-Reply-To: Message-ID: <20180115085742.Horde.CxsunQkuMGdc7Jlqtu_QnYy@webmail.nde.ag> Hi, you should check your config settings again, especially the "auth_url" settings in the section(s) "[keystone_authtoken]" of all the config files. Are all the services up (nova, cinder and neutron) and running? What is the output of 'nova service-list'? Have you checked other log files for errors? Is there something interesting in nova-compute.log? Regards, Eugen Zitat von Sashan Govender : > Hi > > I've setup an openstack system based on the instructions here: > > https://docs.openstack.org/newton/ > > I'm trying to launch an instance: > $ . demo-openrc > $ openstack server create --flavor m1.nano --image cirros --nic > net-id=da77f469-f594-42f6-ab18-8b907b3359e4 --security-group default > --key-name mykey provider-instance > > but get this error in the nova-conductor log file: > > 2018-01-15 15:46:48.938 2566 WARNING nova.scheduler.utils > [req-5b47171a-f74e-4e8e-8659-89cce144f284 82858c289ca444bf90fcd41123d069ce > 61b0b2b23b08419596bd923f2c544956 - - -] [instance: > e1cfc9a9-9c21-435f-a9dc-c7c692e06c29] Setting instance to ERROR state. > 2018-01-15 16:09:51.026 2567 ERROR nova.scheduler.utils > [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce > 61b0b2b23b08419596bd923f2c544956 - - -] [instance: > 0ba01247-5513-4c58-bf04-18092fff2622] Error from last host: compute (node > compute): [u'Traceback (most recent call last):\n', u' File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1787, in > _do_build_and_run_instance\n filter_properties)\n', u' File > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1985, in > _build_and_run_instance\n instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u'RescheduledException: Build of instance > 0ba01247-5513-4c58-bf04-18092fff2622 was re-scheduled: Could > not determine a suitable URL for the plugin\n'] > 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils > [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce > 61b0b2b23b08419596bd923f2c544956 - - -] Failed to > compute_task_build_instances: No valid host was found. There are not enough > hosts available. 
> Traceback (most recent call last):
>
> File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
> line 199, in inner
> return func(*args, **kwargs)
>
> File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line
> 104, in select_destinations
> dests = self.driver.select_destinations(ctxt, spec_obj)
>
> File
> "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line
> 74, in select_destinations
> raise exception.NoValidHost(reason=reason)
>
> NoValidHost: No valid host was found. There are not enough hosts available.
>
> 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils
> [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a 82858c289ca444bf90fcd41123d069ce
> 61b0b2b23b08419596bd923f2c544956 - - -] [instance:
> 0ba01247-5513-4c58-bf04-18092fff2622] Setting instance to ERROR state.
>
> Any tips how to resolve this?
>
> Thanks

--
Eugen Block voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail : eblock at nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983

From jcolestock at gmail.com Mon Jan 15 15:45:47 2018
From: jcolestock at gmail.com (Jimmy Colestock)
Date: Mon, 15 Jan 2018 10:45:47 -0500
Subject: [Openstack] Cinder iscsi performance
In-Reply-To:
References: <848D2635-AD3D-4530-9DF0-035E89EF95CA@gmail.com>
Message-ID:

Thanks Remo,

Agreed, ephemeral will pretty much always be faster; I'm seeing orders of magnitude difference between the 2, at times. After a bunch of testing, I'm pretty convinced it's not KVM/QEMU or iscsi configuration, it's looking like it's something with my 3par configuration. Initially I thought it may just be a function of growing a sparse volume, but running IO tests multiple times, I still get inconsistent results.

Anyway, thanks for the reply..

JC

Jim Colestock
jcolestock at gmail.com

> On Jan 12, 2018, at 11:18 AM, Remo Mattei wrote:
>
> I think that ephemeral will always be faster than iscsi. We do use Pure on 10Gb; even though it's fast, local is faster.
>
> Remo
>
>> On Jan 12, 2018, at 6:27 AM, Jimmy Colestock > wrote:
>>
>> Hello All,
>>
>> Fighting an issue with cinder iscsi performance being much slower than local ephemeral disk. We're running multi-path with dual 10Gb nics; offloading is working and bandwidth across the links is minimal. I've looked at kernel schedulers in the VMs and tried them all.
>>
>> I still need to dig into the storage side (3Par), but just curious if anyone has any suggestions while I keep troubleshooting.
>>
>> Thanks for any help.
>>
>> JC
>>
>>
>> Jim Colestock
>> jcolestock at gmail.com
>> https://www.linkedin.com/in/jcolestock/
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jcolestock at gmail.com Mon Jan 15 15:48:22 2018
From: jcolestock at gmail.com (Jimmy Colestock)
Date: Mon, 15 Jan 2018 10:48:22 -0500
Subject: [Openstack] Cinder iscsi performance
In-Reply-To:
References: <848D2635-AD3D-4530-9DF0-035E89EF95CA@gmail.com>
Message-ID: <95C21B20-457F-438E-8FDA-229C8978B04A@gmail.com>

Hey Ivan,

Running 3par for my back end..
Just replied back to another user, but at this point I don't think it's anything in my OpenStack or iscsi setup. After doing a bunch of tests, with no consistent results, it's time to dig into my 3par config.

Thanks for the reply..

JC

Jim Colestock
jcolestock at gmail.com

> On Jan 12, 2018, at 11:10 AM, Ivan Kolodyazhny wrote:
>
> Hi Jim,
>
> What target driver do you use? LIO should be faster than the default tgtd.
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
> On Fri, Jan 12, 2018 at 4:27 PM, Jimmy Colestock > wrote:
> Hello All,
>
> Fighting an issue with cinder iscsi performance being much slower than local ephemeral disk. We're running multi-path with dual 10Gb nics; offloading is working and bandwidth across the links is minimal. I've looked at kernel schedulers in the VMs and tried them all.
>
> I still need to dig into the storage side (3Par), but just curious if anyone has any suggestions while I keep troubleshooting.
>
> Thanks for any help.
>
> JC
>
>
> Jim Colestock
> jcolestock at gmail.com
> https://www.linkedin.com/in/jcolestock/
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satish.txt at gmail.com Mon Jan 15 16:29:34 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Mon, 15 Jan 2018 11:29:34 -0500
Subject: [Openstack] Production deployment mirantis vs tripleO
Message-ID:

I'm planning to deploy OpenStack in production; after lots of reading, TripleO and Mirantis look good so far. Which one do people prefer in production?

From chirukamalakannan at gmail.com Mon Jan 15 16:44:24 2018
From: chirukamalakannan at gmail.com (kamalakannan sanjeevan)
Date: Mon, 15 Jan 2018 22:44:24 +0600
Subject: [Openstack] Production deployment mirantis vs tripleO
In-Reply-To:
References:
Message-ID:

Satish,

TripleO has taken the lead, but both are equally good.

Kamal

On 15-Jan-2018 22:11, "Satish Patel" wrote:

> I'm planning to deploy OpenStack in production; after lots of reading,
> TripleO and Mirantis look good so far. Which one do people prefer in
> production?
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satish.txt at gmail.com Mon Jan 15 17:58:21 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Mon, 15 Jan 2018 12:58:21 -0500
Subject: [Openstack] Production deployment mirantis vs tripleO
In-Reply-To:
References:
Message-ID:

But Fuel is an active project, isn't it?

https://docs.openstack.org/fuel-docs/latest/

On Mon, Jan 15, 2018 at 11:56 AM, Remo Mattei wrote:
> OOO; I would be careful with Mirantis Fuel since they stopped their deployments.
>
> Remo
>
> On Jan 15, 2018, at 8:29 AM, Satish Patel wrote:
>
> I'm planning to deploy OpenStack in production; after lots of reading,
> TripleO and Mirantis look good so far. Which one do people prefer in
> production?
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

From jaypipes at gmail.com Mon Jan 15 18:46:02 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Mon, 15 Jan 2018 13:46:02 -0500
Subject: [Openstack] Production deployment mirantis vs tripleO
In-Reply-To:
References:
Message-ID:

On 01/15/2018 12:58 PM, Satish Patel wrote:
> But Fuel is an active project, isn't it?
>
> https://docs.openstack.org/fuel-docs/latest/

No, it is no longer developed or supported.

-jay

From satish.txt at gmail.com Mon Jan 15 18:57:38 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Mon, 15 Jan 2018 13:57:38 -0500
Subject: [Openstack] DVR Public IP consumption
Message-ID:

I am planning to build OpenStack in production, and the big question is the network (legacy vs DVR). With DVR the big concern is the number of public IPs used on compute nodes: I am planning to add at most 100 nodes to the cluster, and in that case it will use 100 public IPs just for the compute nodes. Ouch!

If I use legacy (centralized) routing instead, the network node could be a bottleneck or a point of failure if not in HA.

What do most companies use for the network layer: DVR or legacy?

From mrhillsman at gmail.com Mon Jan 15 19:11:25 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 15 Jan 2018 13:11:25 -0600
Subject: [Openstack] Production deployment mirantis vs tripleO
In-Reply-To:
References:
Message-ID: <4A0251EA-A3A6-4BEF-B301-5BAD534F24DA@gmail.com>

There are also the options of OpenStack-Ansible[0] or Kolla-Ansible[1]

[0] https://docs.openstack.org/project-deploy-guide/openstack-ansible/pike/
[1] https://docs.openstack.org/project-deploy-guide/kolla-ansible/pike/

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: +1 (832) 264-2646
irc: mrhillsman

On 1/15/18, 12:55 PM, "Jay Pipes" wrote:

On 01/15/2018 12:58 PM, Satish Patel wrote:
> But Fuel is an active project, isn't it?
>
> https://docs.openstack.org/fuel-docs/latest/

No, it is no longer developed or supported.
-jay

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From Remo at italy1.com Mon Jan 15 19:36:37 2018
From: Remo at italy1.com (Remo Mattei)
Date: Mon, 15 Jan 2018 11:36:37 -0800
Subject: [Openstack] Production deployment mirantis vs tripleO
In-Reply-To: <4A0251EA-A3A6-4BEF-B301-5BAD534F24DA@gmail.com>
References: <4A0251EA-A3A6-4BEF-B301-5BAD534F24DA@gmail.com>
Message-ID: <278339D1-2ED5-48E4-90EF-B3E031AEA230@italy1.com>

There is also kolla-kube https://github.com/openstack/kolla-kubernetes

> On Jan 15, 2018, at 11:11 AM, Melvin Hillsman wrote:
>
> There are also the options of OpenStack-Ansible[0] or Kolla-Ansible[1]
>
> [0] https://docs.openstack.org/project-deploy-guide/openstack-ansible/pike/
> [1] https://docs.openstack.org/project-deploy-guide/kolla-ansible/pike/
>
> --
> Kind regards,
>
> Melvin Hillsman
> mrhillsman at gmail.com
> mobile: +1 (832) 264-2646
> irc: mrhillsman
>
> On 1/15/18, 12:55 PM, "Jay Pipes" wrote:
>
> On 01/15/2018 12:58 PM, Satish Patel wrote:
>> But Fuel is an active project, isn't it?
>>
>> https://docs.openstack.org/fuel-docs/latest/
>
> No, it is no longer developed or supported.
>
> -jay
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From haleyb.dev at gmail.com Tue Jan 16 02:47:00 2018
From: haleyb.dev at gmail.com (Brian Haley)
Date: Mon, 15 Jan 2018 21:47:00 -0500
Subject: [Openstack] DVR Public IP consumption
In-Reply-To:
References:
Message-ID:

On 01/15/2018 01:57 PM, Satish Patel wrote:
> I am planning to build OpenStack in production, and the big question is
> the network (legacy vs DVR). With DVR the big concern is the number of
> public IPs used on compute nodes: I am planning to add at most 100 nodes
> to the cluster, and in that case it will use 100 public IPs just for the
> compute nodes. Ouch!

You can reduce this public IP consumption by using multiple subnets on the external network, with one just for those DVR interfaces. See
https://docs.openstack.org/ocata/networking-guide/config-service-subnets.html
Example #2 for a possible configuration.

As for what type of L3 configuration to run, it seems like you have a good idea of some of the trade-offs with each.

-Brian

> If I use legacy (centralized) routing instead, the network node could be
> a bottleneck or a point of failure if not in HA.
>
> What do most companies use for the network layer: DVR or legacy?
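To make that suggestion concrete, here is a sketch of Example #2 from the guide Brian links: one external network with a small service-type subnet reserved for the per-compute DVR gateway ports and a second subnet for floating IPs. Network name, subnet names, and CIDRs below are illustrative, not from this thread:

# Small subnet just for the DVR floating-IP agent gateway ports:
openstack subnet create --network provider \
    --subnet-range 198.51.100.0/26 \
    --service-type network:floatingip_agent_gateway dvr-gateways

# Separate subnet for floating IPs and router gateways:
openstack subnet create --network provider \
    --subnet-range 203.0.113.0/24 \
    --service-type network:floatingip \
    --service-type network:router_gateway floating-ips

Service subnets require Newton or later; once the service types are set, Neutron allocates agent-gateway ports only from the first subnet, so they stop eating into the floating IP range.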
> > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From sashang at gmail.com Tue Jan 16 03:57:26 2018 From: sashang at gmail.com (Sashan Govender) Date: Tue, 16 Jan 2018 03:57:26 +0000 Subject: [Openstack] Could not determine a suitable URL for the plugin In-Reply-To: <20180115085742.Horde.CxsunQkuMGdc7Jlqtu_QnYy@webmail.nde.ag> References: <20180115085742.Horde.CxsunQkuMGdc7Jlqtu_QnYy@webmail.nde.ag> Message-ID: Note that I don't have cinder installed. The docs said the block storage service was optional. I was following the newton guide here: https://docs.openstack.org/newton/install-guide-rdo/launch-instance-provider.html so the config files should be the same as the site. Running a controller and compute node using centos 7 in kvm with nested kvm turned on on my host machine. Firewall is disabled on controller and compute node. Contents of nova.conf on the compute node: [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:rootroot at controller auth_strategy = keystone my_ip = 192.168.122.5 use_neutron = True firewall_driver = nova.virt.firewall.NoopFirewallDriver [keystone_authtoken] auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = rootroot [vnc] enabled = True vncserver_listen = 0.0.0.0 vncserver_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html [glance] api_servers = http://controller:9292 [oslo_concurrency] lock_path = /var/lib/nova/tmp Contents from nova.conf on the controller: [DEFAULT] auth_strategy=keystone my_ip=192.168.122.186 use_neutron=true enabled_apis=osapi_compute,metadata firewall_driver=nova.virt.firewall.NoopFirewallDriver debug=false transport_url=rabbit://openstack:rootroot at controller [api_database] connection=mysql+pymysql://nova:rootroot at controller/nova_api [database] connection=mysql+pymysql://nova:rootroot at controller/nova [glance] api_servers=http://controller:9292 [keystone_authtoken] auth_uri = http://controller:5000 auth_url = http://controller:35357 memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = rootroot [neutron] url = http://controller:9696 auth_url = http://controller:35357 auth_type = password project_domain_name = Default user_domain_name = Default region_name = RegionOne project_name = service username = neutron password = rootroot service_metadata_proxy = True metadata_proxy_shared_secret = sharedsecret [vnc] vncserver_listen=$my_ip vncserver_proxyclient_address=$my_ip Output from various commands: [sashan at controller ~]$ openstack flavor list +----+---------+-----+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | +----+---------+-----+------+-----------+-------+-----------+ | 0 | m1.nano | 64 | 1 | 0 | 1 | True | +----+---------+-----+------+-----------+-------+-----------+ [sashan at controller ~]$ [sashan at controller ~]$ openstack service list +----------------------------------+----------+----------+ | ID | Name | Type | +----------------------------------+----------+----------+ | 225d12b19b9d47e890537acbfa25d1ed | nova | 
compute | | 84913aec768d4734a912c62965ba0462 | keystone | identity | | ee5afcaa96c64baba15e7fa9cf02672f | glance | image | | f862d1a46c474cb284ef381525948b8d | neutron | network | +----------------------------------+----------+----------+ [sashan at controller ~]$ openstack server list +--------------------------------------+-------------------+--------+----------+------------+ | ID | Name | Status | Networks | Image Name | +--------------------------------------+-------------------+--------+----------+------------+ | 105b46c4-d30e-4a7c-99fd-ea8cbc8b43c3 | provider-instance | ERROR | | cirros | | 503b7a3a-66c2-4de7-8106-0958e595771f | provider-instance | ERROR | | cirros | | 62e2ec41-8711-4470-881d-d63bd5b348c0 | provider-instance | ERROR | | cirros | | 6c0fe163-d587-41a8-89c7-18085e80bd38 | provider-instance | ERROR | | cirros | | 2849a455-b5a4-4dd5-a12e-6fd2497eed9e | provider-instance | ERROR | | cirros | | 465acda2-0d1a-47e3-9e48-2e63e7cf7a30 | provider-instance | ERROR | | cirros | +--------------------------------------+-------------------+--------+----------+------------+ [sashan at controller ~]$ Content from nova-compute.log on the compute node. I don't think the warning about the placement api is relevant. What about the other one: Unable to refresh my resource provider record? [root at compute ~]# tail /var/log/nova/nova-compute.log 2018-01-16 11:16:32.209 1435 INFO nova.compute.resource_tracker [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Final resource view: name=compute phys_ram=2047MB used_ram=512MB phys_disk=16GB used_disk=0GB total_vcpus=2 used_vcpus=0 pci_stats=[] 2018-01-16 11:16:32.236 1435 WARNING nova.scheduler.client.report [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my resource provider record 2018-01-16 11:16:32.236 1435 INFO nova.compute.resource_tracker [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Compute_service record updated for compute:compute 2018-01-16 11:17:33.076 1435 INFO nova.compute.resource_tracker [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Auditing locally available compute resources for node compute 2018-01-16 11:17:33.129 1435 WARNING nova.scheduler.client.report [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] No authentication information found for placement API. Placement is optional in Newton, but required in Ocata. Please enable the placement service before upgrading. 2018-01-16 11:17:33.130 1435 WARNING nova.scheduler.client.report [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my resource provider record 2018-01-16 11:17:33.168 1435 INFO nova.compute.resource_tracker [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Total usable vcpus: 2, total allocated vcpus: 0 2018-01-16 11:17:33.168 1435 INFO nova.compute.resource_tracker [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Final resource view: name=compute phys_ram=2047MB used_ram=512MB phys_disk=16GB used_disk=0GB total_vcpus=2 used_vcpus=0 pci_stats=[] 2018-01-16 11:17:33.197 1435 WARNING nova.scheduler.client.report [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my resource provider record 2018-01-16 11:17:33.197 1435 INFO nova.compute.resource_tracker [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Compute_service record updated for compute:compute On Mon, Jan 15, 2018 at 8:05 PM Eugen Block wrote: > Hi, > > you should check your config settings again, especially the "auth_url" > settings in the section(s) "[keystone_authtoken]" of all the config > files. 
> Are all the services up (nova, cinder and neutron) and running? What > is the output of 'nova service-list'? > Have you checked other log files for errors? Is there something > interesting in nova-compute.log? > > Regards, > Eugen > > > Zitat von Sashan Govender : > > > Hi > > > > I've setup an openstack system based on the instructions here: > > > > https://docs.openstack.org/newton/ > > > > I'm trying to launch an instance: > > $ . demo-openrc > > $ openstack server create --flavor m1.nano --image cirros --nic > > net-id=da77f469-f594-42f6-ab18-8b907b3359e4 --security-group default > > --key-name mykey provider-instance > > > > but get this error in the nova-conductor log file: > > > > 2018-01-15 15:46:48.938 2566 WARNING nova.scheduler.utils > > [req-5b47171a-f74e-4e8e-8659-89cce144f284 > 82858c289ca444bf90fcd41123d069ce > > 61b0b2b23b08419596bd923f2c544956 - - -] [instance: > > e1cfc9a9-9c21-435f-a9dc-c7c692e06c29] Setting instance to ERROR state. > > 2018-01-15 16:09:51.026 2567 ERROR nova.scheduler.utils > > [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a > 82858c289ca444bf90fcd41123d069ce > > 61b0b2b23b08419596bd923f2c544956 - - -] [instance: > > 0ba01247-5513-4c58-bf04-18092fff2622] Error from last host: compute (node > > compute): [u'Traceback (most recent call last):\n', u' File > > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1787, in > > _do_build_and_run_instance\n filter_properties)\n', u' File > > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1985, in > > _build_and_run_instance\n instance_uuid=instance.uuid, > > reason=six.text_type(e))\n', u'RescheduledException: Build of instance > > 0ba01247-5513-4c58-bf04-18092fff2622 was re-scheduled: Could > > not determine a suitable URL for the plugin\n'] > > 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils > > [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a > 82858c289ca444bf90fcd41123d069ce > > 61b0b2b23b08419596bd923f2c544956 - - -] Failed to > > compute_task_build_instances: No valid host was found. There are not > enough > > hosts available. > > Traceback (most recent call last): > > > > File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > > line 199, in inner > > return func(*args, **kwargs) > > > > File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line > > 104, in select_destinations > > dests = self.driver.select_destinations(ctxt, spec_obj) > > > > File > > "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", > line > > 74, in select_destinations > > raise exception.NoValidHost(reason=reason) > > > > NoValidHost: No valid host was found. There are not enough hosts > available. > > > > 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils > > [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a > 82858c289ca444bf90fcd41123d069ce > > 61b0b2b23b08419596bd923f2c544956 - - -] [instance: > > 0ba01247-5513-4c58-bf04-18092fff2622] Setting instance to ERROR state. > > > > Any tips how to resolve this? > > > > Thanks > > > > -- > Eugen Block voice : +49-40-559 51 75 > <+49%2040%205595175> > NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 > <+49%2040%205595177> > Postfach 61 03 15 > D-22423 Hamburg e-mail : eblock at nde.ag > > Vorsitzende des Aufsichtsrates: Angelika Mozdzen > Sitz und Registergericht: Hamburg, HRB 90934 > Vorstand: Jens-U. Mozdzen > USt-IdNr. 
DE 814 013 983 > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Jan 16 08:48:15 2018 From: eblock at nde.ag (Eugen Block) Date: Tue, 16 Jan 2018 08:48:15 +0000 Subject: [Openstack] Could not determine a suitable URL for the plugin In-Reply-To: References: <20180115085742.Horde.CxsunQkuMGdc7Jlqtu_QnYy@webmail.nde.ag> Message-ID: <20180116084815.Horde.ZHG-P0E__HmSGyxnwdk0s9O@webmail.nde.ag> Could you also paste the output of "openstack compute service list" and "openstack network agent list"? I'd like to see if the nova and neutron services are all up and running. > Note that I don't have cinder installed. The docs said the block storage > service was optional. You're right, I just assumed that it's also installed. > the config files should be the same as the site. I can't see any obvious error there, although it differs from our configs since we run Ocata, of course. > I don't think the > warning about the placement api is relevant. What about the other one: > Unable to refresh my resource provider record? I'm not sure about that, it is just a warning. Can you confirm that glance is working properly and the image is okay? Is the network layout as expected? Any information in other logs like neutron and glance? Eugen Zitat von Sashan Govender : > Note that I don't have cinder installed. The docs said the block storage > service was optional. I was following the newton guide here: > https://docs.openstack.org/newton/install-guide-rdo/launch-instance-provider.html > so > the config files should be the same as the site. Running a controller and > compute node using centos 7 in kvm with nested kvm turned on on my host > machine. Firewall is disabled on controller and compute node. 
> > Contents of nova.conf on the compute node: > > [DEFAULT] > enabled_apis = osapi_compute,metadata > transport_url = rabbit://openstack:rootroot at controller > auth_strategy = keystone > my_ip = 192.168.122.5 > use_neutron = True > firewall_driver = nova.virt.firewall.NoopFirewallDriver > > > [keystone_authtoken] > auth_uri = http://controller:5000 > auth_url = http://controller:35357 > memcached_servers = controller:11211 > auth_type = password > project_domain_name = Default > user_domain_name = Default > project_name = service > username = nova > password = rootroot > > [vnc] > enabled = True > vncserver_listen = 0.0.0.0 > vncserver_proxyclient_address = $my_ip > novncproxy_base_url = http://controller:6080/vnc_auto.html > > [glance] > api_servers = http://controller:9292 > > [oslo_concurrency] > lock_path = /var/lib/nova/tmp > > Contents from nova.conf on the controller: > > [DEFAULT] > auth_strategy=keystone > my_ip=192.168.122.186 > use_neutron=true > enabled_apis=osapi_compute,metadata > firewall_driver=nova.virt.firewall.NoopFirewallDriver > debug=false > transport_url=rabbit://openstack:rootroot at controller > > [api_database] > connection=mysql+pymysql://nova:rootroot at controller/nova_api > > [database] > connection=mysql+pymysql://nova:rootroot at controller/nova > [glance] > api_servers=http://controller:9292 > > [keystone_authtoken] > auth_uri = http://controller:5000 > auth_url = http://controller:35357 > memcached_servers = controller:11211 > auth_type = password > project_domain_name = Default > user_domain_name = Default > project_name = service > username = nova > password = rootroot > > [neutron] > url = http://controller:9696 > auth_url = http://controller:35357 > auth_type = password > project_domain_name = Default > user_domain_name = Default > region_name = RegionOne > project_name = service > username = neutron > password = rootroot > service_metadata_proxy = True > metadata_proxy_shared_secret = sharedsecret > > [vnc] > vncserver_listen=$my_ip > vncserver_proxyclient_address=$my_ip > > Output from various commands: > > [sashan at controller ~]$ openstack flavor list > +----+---------+-----+------+-----------+-------+-----------+ > | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | > +----+---------+-----+------+-----------+-------+-----------+ > | 0 | m1.nano | 64 | 1 | 0 | 1 | True | > +----+---------+-----+------+-----------+-------+-----------+ > [sashan at controller ~]$ > > [sashan at controller ~]$ openstack service list > +----------------------------------+----------+----------+ > | ID | Name | Type | > +----------------------------------+----------+----------+ > | 225d12b19b9d47e890537acbfa25d1ed | nova | compute | > | 84913aec768d4734a912c62965ba0462 | keystone | identity | > | ee5afcaa96c64baba15e7fa9cf02672f | glance | image | > | f862d1a46c474cb284ef381525948b8d | neutron | network | > +----------------------------------+----------+----------+ > [sashan at controller ~]$ > > openstack server list > +--------------------------------------+-------------------+--------+----------+------------+ > | ID | Name | Status | > Networks | Image Name | > +--------------------------------------+-------------------+--------+----------+------------+ > | 105b46c4-d30e-4a7c-99fd-ea8cbc8b43c3 | provider-instance | ERROR | > | cirros | > | 503b7a3a-66c2-4de7-8106-0958e595771f | provider-instance | ERROR | > | cirros | > | 62e2ec41-8711-4470-881d-d63bd5b348c0 | provider-instance | ERROR | > | cirros | > | 6c0fe163-d587-41a8-89c7-18085e80bd38 | 
provider-instance | ERROR | > | cirros | > | 2849a455-b5a4-4dd5-a12e-6fd2497eed9e | provider-instance | ERROR | > | cirros | > | 465acda2-0d1a-47e3-9e48-2e63e7cf7a30 | provider-instance | ERROR | > | cirros | > +--------------------------------------+-------------------+--------+----------+------------+ > [sashan at controller ~]$ > > > Content from nova-compute.log on the compute node. I don't think the > warning about the placement api is relevant. What about the other one: > Unable to refresh my resource provider record? > > [root at compute ~]# tail /var/log/nova/nova-compute.log > 2018-01-16 11:16:32.209 1435 INFO nova.compute.resource_tracker > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Final resource view: > name=compute phys_ram=2047MB used_ram=512MB phys_disk=16GB used_disk=0GB > total_vcpus=2 used_vcpus=0 pci_stats=[] > 2018-01-16 11:16:32.236 1435 WARNING nova.scheduler.client.report > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my > resource provider record > 2018-01-16 11:16:32.236 1435 INFO nova.compute.resource_tracker > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Compute_service record > updated for compute:compute > 2018-01-16 11:17:33.076 1435 INFO nova.compute.resource_tracker > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Auditing locally > available compute resources for node compute > 2018-01-16 11:17:33.129 1435 WARNING nova.scheduler.client.report > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] No authentication > information found for placement API. Placement is optional in Newton, but > required in Ocata. Please enable the placement service before upgrading. > 2018-01-16 11:17:33.130 1435 WARNING nova.scheduler.client.report > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my > resource provider record > 2018-01-16 11:17:33.168 1435 INFO nova.compute.resource_tracker > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Total usable vcpus: 2, > total allocated vcpus: 0 > 2018-01-16 11:17:33.168 1435 INFO nova.compute.resource_tracker > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Final resource view: > name=compute phys_ram=2047MB used_ram=512MB phys_disk=16GB used_disk=0GB > total_vcpus=2 used_vcpus=0 pci_stats=[] > 2018-01-16 11:17:33.197 1435 WARNING nova.scheduler.client.report > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Unable to refresh my > resource provider record > 2018-01-16 11:17:33.197 1435 INFO nova.compute.resource_tracker > [req-08a524f9-4c31-45b9-808a-9cd4050b0e00 - - - - -] Compute_service record > updated for compute:compute > > > On Mon, Jan 15, 2018 at 8:05 PM Eugen Block wrote: > >> Hi, >> >> you should check your config settings again, especially the "auth_url" >> settings in the section(s) "[keystone_authtoken]" of all the config >> files. >> Are all the services up (nova, cinder and neutron) and running? What >> is the output of 'nova service-list'? >> Have you checked other log files for errors? Is there something >> interesting in nova-compute.log? >> >> Regards, >> Eugen >> >> >> Zitat von Sashan Govender : >> >> > Hi >> > >> > I've setup an openstack system based on the instructions here: >> > >> > https://docs.openstack.org/newton/ >> > >> > I'm trying to launch an instance: >> > $ . 
demo-openrc >> > $ openstack server create --flavor m1.nano --image cirros --nic >> > net-id=da77f469-f594-42f6-ab18-8b907b3359e4 --security-group default >> > --key-name mykey provider-instance >> > >> > but get this error in the nova-conductor log file: >> > >> > 2018-01-15 15:46:48.938 2566 WARNING nova.scheduler.utils >> > [req-5b47171a-f74e-4e8e-8659-89cce144f284 >> 82858c289ca444bf90fcd41123d069ce >> > 61b0b2b23b08419596bd923f2c544956 - - -] [instance: >> > e1cfc9a9-9c21-435f-a9dc-c7c692e06c29] Setting instance to ERROR state. >> > 2018-01-15 16:09:51.026 2567 ERROR nova.scheduler.utils >> > [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a >> 82858c289ca444bf90fcd41123d069ce >> > 61b0b2b23b08419596bd923f2c544956 - - -] [instance: >> > 0ba01247-5513-4c58-bf04-18092fff2622] Error from last host: compute (node >> > compute): [u'Traceback (most recent call last):\n', u' File >> > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1787, in >> > _do_build_and_run_instance\n filter_properties)\n', u' File >> > "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1985, in >> > _build_and_run_instance\n instance_uuid=instance.uuid, >> > reason=six.text_type(e))\n', u'RescheduledException: Build of instance >> > 0ba01247-5513-4c58-bf04-18092fff2622 was re-scheduled: Could >> > not determine a suitable URL for the plugin\n'] >> > 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils >> > [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a >> 82858c289ca444bf90fcd41123d069ce >> > 61b0b2b23b08419596bd923f2c544956 - - -] Failed to >> > compute_task_build_instances: No valid host was found. There are not >> enough >> > hosts available. >> > Traceback (most recent call last): >> > >> > File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", >> > line 199, in inner >> > return func(*args, **kwargs) >> > >> > File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line >> > 104, in select_destinations >> > dests = self.driver.select_destinations(ctxt, spec_obj) >> > >> > File >> > "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", >> line >> > 74, in select_destinations >> > raise exception.NoValidHost(reason=reason) >> > >> > NoValidHost: No valid host was found. There are not enough hosts >> available. >> > >> > 2018-01-15 16:09:51.057 2567 WARNING nova.scheduler.utils >> > [req-afff24dc-1ee0-469f-9d99-2abcb4810c7a >> 82858c289ca444bf90fcd41123d069ce >> > 61b0b2b23b08419596bd923f2c544956 - - -] [instance: >> > 0ba01247-5513-4c58-bf04-18092fff2622] Setting instance to ERROR state. >> > >> > Any tips how to resolve this? >> > >> > Thanks >> >> >> >> -- >> Eugen Block voice : +49-40-559 51 75 >> <+49%2040%205595175> >> NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 >> <+49%2040%205595177> >> Postfach 61 03 15 >> D-22423 Hamburg e-mail : eblock at nde.ag >> >> Vorsitzende des Aufsichtsrates: Angelika Mozdzen >> Sitz und Registergericht: Hamburg, HRB 90934 >> Vorstand: Jens-U. Mozdzen >> USt-IdNr. 
DE 814 013 983 >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> -- Eugen Block voice : +49-40-559 51 75 NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 Postfach 61 03 15 D-22423 Hamburg e-mail : eblock at nde.ag Vorsitzende des Aufsichtsrates: Angelika Mozdzen Sitz und Registergericht: Hamburg, HRB 90934 Vorstand: Jens-U. Mozdzen USt-IdNr. DE 814 013 983 From sashang at gmail.com Tue Jan 16 10:54:15 2018 From: sashang at gmail.com (Sashan Govender) Date: Tue, 16 Jan 2018 10:54:15 +0000 Subject: [Openstack] Could not determine a suitable URL for the plugin In-Reply-To: <20180116084815.Horde.ZHG-P0E__HmSGyxnwdk0s9O@webmail.nde.ag> References: <20180115085742.Horde.CxsunQkuMGdc7Jlqtu_QnYy@webmail.nde.ag> <20180116084815.Horde.ZHG-P0E__HmSGyxnwdk0s9O@webmail.nde.ag> Message-ID: On Tue, Jan 16, 2018 at 7:48 PM Eugen Block wrote: Thanks for the help. Could you also paste the output of "openstack compute service list" > and "openstack network agent list"? I'd like to see if the nova and > neutron services are all up and running. > > [sashan at controller ~]$ openstack network agent list +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ | 0d5571c9-b514-4626-8738-1f87f9344978 | Linux bridge agent | compute | None | True | UP | neutron-linuxbridge-agent | | 58b3554f-e0b2-4ce6-941d-ff6ca46247a4 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent | | 5fb85699-20a9-4f8d-9b44-3317ffc1b9fc | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent | | c4512921-73ff-49fa-b70d-13a3518883a0 | Metadata agent | controller | None | True | UP | neutron-metadata-agent | +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ [sashan at controller ~]$ openstack compute service list +----+------------------+------------+----------+---------+-------+----------------------------+ | ID | Binary | Host | Zone | Status | State | Updated At | +----+------------------+------------+----------+---------+-------+----------------------------+ | 1 | nova-consoleauth | controller | internal | enabled | up | 2018-01-16T10:47:22.000000 | | 2 | nova-conductor | controller | internal | enabled | up | 2018-01-16T10:47:22.000000 | | 3 | nova-scheduler | controller | internal | enabled | up | 2018-01-16T10:47:27.000000 | | 6 | nova-compute | compute | nova | enabled | up | 2018-01-16T10:47:24.000000 | +----+------------------+------------+----------+---------+-------+----------------------------+ > > I don't think the > > warning about the placement api is relevant. What about the other one: > > Unable to refresh my resource provider record? > > I'm not sure about that, it is just a warning. > Can you confirm that glance is working properly and the image is okay? > Is the network layout as expected? Any information in other logs like > neutron and glance? 
> I noticed this error in the neutron logs: 2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension vlan-transparent not supported by any of loaded plugins 2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to process extensions (auto-allocated-topology) because the configured plugins do not satisfy their requirements. Some features will not work as expected. 2018-01-16 21:40:12.559 1090 INFO neutron.quota.resource_registry [-] Creating instance of TrackedResource for resource:subnet 2018 glance seems fine i.e. no error messages. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Jan 16 11:10:48 2018 From: eblock at nde.ag (Eugen Block) Date: Tue, 16 Jan 2018 11:10:48 +0000 Subject: [Openstack] Could not determine a suitable URL for the plugin In-Reply-To: References: <20180115085742.Horde.CxsunQkuMGdc7Jlqtu_QnYy@webmail.nde.ag> <20180116084815.Horde.ZHG-P0E__HmSGyxnwdk0s9O@webmail.nde.ag> Message-ID: <20180116111048.Horde.H5YUnsAAbMz-tHW_0ilZ5j9@webmail.nde.ag> > 2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension > vlan-transparent not supported by any of loaded plugins > 2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to > process extensions (auto-allocated-topology) because the configured plugins > do not satisfy their requirements. Some features will not work as expected. This sounds like the right place to dig deeper. I would enable debug logs and see if there are more hints and then try to resolve this. Zitat von Sashan Govender : > On Tue, Jan 16, 2018 at 7:48 PM Eugen Block wrote: > > Thanks for the help. > > Could you also paste the output of "openstack compute service list" >> and "openstack network agent list"? I'd like to see if the nova and >> neutron services are all up and running. 
>> >> > [sashan at controller ~]$ openstack network agent list > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > | ID | Agent Type | Host | > Availability Zone | Alive | State | Binary | > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > | 0d5571c9-b514-4626-8738-1f87f9344978 | Linux bridge agent | compute | > None | True | UP | neutron-linuxbridge-agent | > | 58b3554f-e0b2-4ce6-941d-ff6ca46247a4 | DHCP agent | controller | > nova | True | UP | neutron-dhcp-agent | > | 5fb85699-20a9-4f8d-9b44-3317ffc1b9fc | Linux bridge agent | controller | > None | True | UP | neutron-linuxbridge-agent | > | c4512921-73ff-49fa-b70d-13a3518883a0 | Metadata agent | controller | > None | True | UP | neutron-metadata-agent | > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > [sashan at controller ~]$ openstack compute service list > +----+------------------+------------+----------+---------+-------+----------------------------+ > | ID | Binary | Host | Zone | Status | State | Updated > At | > +----+------------------+------------+----------+---------+-------+----------------------------+ > | 1 | nova-consoleauth | controller | internal | enabled | up | > 2018-01-16T10:47:22.000000 | > | 2 | nova-conductor | controller | internal | enabled | up | > 2018-01-16T10:47:22.000000 | > | 3 | nova-scheduler | controller | internal | enabled | up | > 2018-01-16T10:47:27.000000 | > | 6 | nova-compute | compute | nova | enabled | up | > 2018-01-16T10:47:24.000000 | > +----+------------------+------------+----------+---------+-------+----------------------------+ > > >> > I don't think the >> > warning about the placement api is relevant. What about the other one: >> > Unable to refresh my resource provider record? >> >> I'm not sure about that, it is just a warning. >> Can you confirm that glance is working properly and the image is okay? >> Is the network layout as expected? Any information in other logs like >> neutron and glance? >> > > I noticed this error in the neutron logs: > > 2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension > vlan-transparent not supported by any of loaded plugins > 2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to > process extensions (auto-allocated-topology) because the configured plugins > do not satisfy their requirements. Some features will not work as expected. > 2018-01-16 21:40:12.559 1090 INFO neutron.quota.resource_registry [-] > Creating instance of TrackedResource for resource:subnet > 2018 > > glance seems fine i.e. no error messages. -- Eugen Block voice : +49-40-559 51 75 NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 Postfach 61 03 15 D-22423 Hamburg e-mail : eblock at nde.ag Vorsitzende des Aufsichtsrates: Angelika Mozdzen Sitz und Registergericht: Hamburg, HRB 90934 Vorstand: Jens-U. Mozdzen USt-IdNr. DE 814 013 983 From satish.txt at gmail.com Tue Jan 16 13:37:00 2018 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 16 Jan 2018 08:37:00 -0500 Subject: [Openstack] DVR Public IP consumption In-Reply-To: References: Message-ID: Thanks Brian, I may having difficulty to understand that example, if I have only /24 public subnet for cloud and have 200 compute node then how does it work? 
> On Jan 15, 2018, at 9:47 PM, Brian Haley wrote:
>
>> On 01/15/2018 01:57 PM, Satish Patel wrote:
>> I am planning to build OpenStack in production, and the big question is
>> the network (legacy vs DVR). With DVR the big concern is the number of
>> public IPs used on compute nodes: I am planning to add at most 100 nodes
>> to the cluster, and in that case it will use 100 public IPs just for the
>> compute nodes. Ouch!
>
> You can reduce this public IP consumption by using multiple subnets on the external network, with one just for those DVR interfaces. See
> https://docs.openstack.org/ocata/networking-guide/config-service-subnets.html
> Example #2 for a possible configuration.
>
> As for what type of L3 configuration to run, it seems like you have a good idea of some of the trade-offs with each.
>
> -Brian
>
>> If I use legacy (centralized) routing instead, the network node could be
>> a bottleneck or a point of failure if not in HA.
>> What do most companies use for the network layer: DVR or legacy?
>

From openstack at medberry.net Tue Jan 16 14:24:45 2018
From: openstack at medberry.net (David Medberry)
Date: Tue, 16 Jan 2018 07:24:45 -0700
Subject: [Openstack] Ops Mid Cycle in Tokyo Mar 7-8 2018
Message-ID:

Hi all,

Broad distribution to make sure folks are aware of the upcoming Ops Meetup in Tokyo.

You can help "steer" this meetup by participating in the planning meetings or more practically by editing this page (respectfully):
https://etherpad.openstack.org/p/TYO-ops-meetup-2018

Sign-up for the meetup is here: https://goo.gl/HBJkPy

We'll see you there!

-dave

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rbowen at redhat.com Tue Jan 16 15:58:33 2018
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 16 Jan 2018 10:58:33 -0500
Subject: [Openstack] Interviews at the PTG in Dublin
Message-ID: <6b595e6c-a708-1d1a-57e5-3037ba03ecff@redhat.com>

TL;DR: Sign up for PTG interviews at
https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0

As at previous PTGs, I will be doing interviews in Dublin, which will be posted to http://youtube.com/RDOCommunity - where you can see some past examples.

If you, or your project/team/company/whatever wish to participate in one of these interviews, please sign up at
https://docs.google.com/spreadsheets/d/1MK7rCgYXCQZP1AgQ0RUiuc-cEXIzW5RuRzz5BWhV4nQ/edit#gid=0

That spreadsheet also includes a description of the kinds of things we're looking for, and links to examples of videos from previous PTGs.

I have 56 interview slots, so there should be plenty of room for most projects, as well as various cross-project interviews, so talk with your project team, and claim a spot!

--
Rich Bowen - rbowen at redhat.com
@RDOcommunity // @CentOSProject // @rbowen

From davidgab283 at gmail.com Tue Jan 16 16:28:06 2018
From: davidgab283 at gmail.com (David Gabriel)
Date: Tue, 16 Jan 2018 17:28:06 +0100
Subject: [Openstack] ping between 2 instances using an ovs in the middle
Message-ID:

Dear all,

I am writing to ask for your help with a problem I have been facing for a while, related to creating two Ubuntu instances in OpenStack (Fuel 9.2 for Mitaka) and setting up an OVS bridge in each VM.

Here is the problem description:
I have defined two instances, VM1 and VM2, and an OVS bridge, each deployed in one Virtual Machine (VM) based on this simple topology:
*VM1* ---LAN1----*OVS*---LAN2--- *VM2*

I used the following commands, taken from some tutorial, for OVS:
ovs-vsctl add-br mybridge1
ifconfig mybridge1 up
ovs-vsctl add-port eth1 mybridge1
ifconfig eth1 0
ovs-vsctl add-port eth1 mybridge1
ovs-vsctl set-controller mybridge tcp:AddressOfController:6633

Then I tried to ping between the two VMs, but it fails!
Could you please tell/guide me how to fix this problem.

Thanks in advance.

Best regards.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
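Two things stand out in that command sequence: ovs-vsctl add-port expects the bridge name first and then the interface (the sequence above has them reversed, and repeats the line), and set-controller must name the same bridge that was created (mybridge vs. mybridge1). A corrected sketch, reusing the names from the message above (untested, and assuming eth1 is the interface to enslave):

ovs-vsctl add-br mybridge1
ifconfig mybridge1 up
ovs-vsctl add-port mybridge1 eth1     # bridge first, then the port
ifconfig eth1 0                       # clear the IP from the enslaved NIC
ovs-vsctl set-controller mybridge1 tcp:AddressOfController:6633

Also worth checking when a guest acts as a bridge between two OpenStack networks: Neutron port security drops frames with unknown source MAC/IP addresses, so the VM1/VM2/OVS ports may need allowed-address-pairs configured or port security disabled.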
From sashang at gmail.com Wed Jan 17 03:07:16 2018
From: sashang at gmail.com (Sashan Govender)
Date: Wed, 17 Jan 2018 03:07:16 +0000
Subject: [Openstack] Could not determine a suitable URL for the plugin
In-Reply-To: <20180116111048.Horde.H5YUnsAAbMz-tHW_0ilZ5j9@webmail.nde.ag>
References: <20180115085742.Horde.CxsunQkuMGdc7Jlqtu_QnYy@webmail.nde.ag> <20180116084815.Horde.ZHG-P0E__HmSGyxnwdk0s9O@webmail.nde.ag> <20180116111048.Horde.H5YUnsAAbMz-tHW_0ilZ5j9@webmail.nde.ag>
Message-ID:

Turns out the [neutron] section in /etc/nova/nova.conf on the compute node was missing.

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = rootroot

After adding that I could create an instance.

[sashan at controller ~]$ openstack server list
+--------------------------------------+-------------------+--------+-------------------------+------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-------------------+--------+-------------------------+------------+
| b9342c83-0c10-4f3e-a3b4-41bc601ea0b1 | provider-instance | ACTIVE | provider=192.168.10.107 | cirros |
| d03058f3-0009-47c9-8b34-182034398647 | provider-instance | ERROR | | cirros |
| 42adeacf-3027-45ba-a12d-e284995ce3a7 | provider-instance | ERROR | | cirros |
| cfcbde0b-34f3-4ce8-ba37-735a7fa84417 | provider-instance | ERROR | | cirros |
| 9f1481b9-0554-4cec-8cf5-163fb790f463 | provider-instance | ERROR | | cirros |
+--------------------------------------+-------------------+--------+-------------------------+------------+

On Tue, Jan 16, 2018 at 10:10 PM Eugen Block wrote:

> > 2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension
> > vlan-transparent not supported by any of loaded plugins
> > 2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to
> > process extensions (auto-allocated-topology) because the configured plugins
> > do not satisfy their requirements. Some features will not work as expected.
>
> This sounds like the right place to dig deeper. I would enable debug
> logs and see if there are more hints and then try to resolve this.
>
>
> Zitat von Sashan Govender :
>
> > On Tue, Jan 16, 2018 at 7:48 PM Eugen Block wrote:
> >
> > Thanks for the help.
> >
> > Could you also paste the output of "openstack compute service list"
> >> and "openstack network agent list"? I'd like to see if the nova and
> >> neutron services are all up and running.
> >> > >> > > [sashan at controller ~]$ openstack network agent list > > > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > > | ID | Agent Type | Host > | > > Availability Zone | Alive | State | Binary | > > > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > > | 0d5571c9-b514-4626-8738-1f87f9344978 | Linux bridge agent | compute > | > > None | True | UP | neutron-linuxbridge-agent | > > | 58b3554f-e0b2-4ce6-941d-ff6ca46247a4 | DHCP agent | controller > | > > nova | True | UP | neutron-dhcp-agent | > > | 5fb85699-20a9-4f8d-9b44-3317ffc1b9fc | Linux bridge agent | controller > | > > None | True | UP | neutron-linuxbridge-agent | > > | c4512921-73ff-49fa-b70d-13a3518883a0 | Metadata agent | controller > | > > None | True | UP | neutron-metadata-agent | > > > +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ > > [sashan at controller ~]$ openstack compute service list > > > +----+------------------+------------+----------+---------+-------+----------------------------+ > > | ID | Binary | Host | Zone | Status | State | > Updated > > At | > > > +----+------------------+------------+----------+---------+-------+----------------------------+ > > | 1 | nova-consoleauth | controller | internal | enabled | up | > > 2018-01-16T10:47:22.000000 | > > | 2 | nova-conductor | controller | internal | enabled | up | > > 2018-01-16T10:47:22.000000 | > > | 3 | nova-scheduler | controller | internal | enabled | up | > > 2018-01-16T10:47:27.000000 | > > | 6 | nova-compute | compute | nova | enabled | up | > > 2018-01-16T10:47:24.000000 | > > > +----+------------------+------------+----------+---------+-------+----------------------------+ > > > > > >> > I don't think the > >> > warning about the placement api is relevant. What about the other > one: > >> > Unable to refresh my resource provider record? > >> > >> I'm not sure about that, it is just a warning. > >> Can you confirm that glance is working properly and the image is okay? > >> Is the network layout as expected? Any information in other logs like > >> neutron and glance? > >> > > > > I noticed this error in the neutron logs: > > > > 2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension > > vlan-transparent not supported by any of loaded plugins > > 2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to > > process extensions (auto-allocated-topology) because the configured > plugins > > do not satisfy their requirements. Some features will not work as > expected. > > 2018-01-16 21:40:12.559 1090 INFO neutron.quota.resource_registry [-] > > Creating instance of TrackedResource for resource:subnet > > 2018 > > > > glance seems fine i.e. no error messages. > > > > -- > Eugen Block voice : +49-40-559 51 75 > <+49%2040%205595175> > NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 > <+49%2040%205595177> > Postfach 61 03 15 > D-22423 Hamburg e-mail : eblock at nde.ag > > Vorsitzende des Aufsichtsrates: Angelika Mozdzen > Sitz und Registergericht: Hamburg, HRB 90934 > Vorstand: Jens-U. Mozdzen > USt-IdNr. DE 814 013 983 > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dubaek at gmail.com Wed Jan 17 09:43:43 2018 From: dubaek at gmail.com (doun baek) Date: Wed, 17 Jan 2018 18:43:43 +0900 Subject: [Openstack] Deployment error on installation of redhat openstack with director on KVM Message-ID: Hi all, I encountered an issue during installation. I deployed Red Hat OpenStack using director on KVM; the environment is actually a laptop. The following message was output after the overcloud deploy: 2018-01-16 04:44:10Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.0]: SIGNAL_IN_PROGRESS Signal: deployment 52e0a433-2680-4c7b-9f5c-e82267bbba66 failed (6) 2018-01-16 04:44:11Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 2018-01-16 04:44:11Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 2018-01-16 04:44:11Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_FAILED Error: resources.ControllerDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 2018-01-16 04:44:11Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 2018-01-16 04:44:12Z [overcloud.AllNodesDeploySteps]: CREATE_FAILED Error: resources.AllNodesDeploySteps.resources.ControllerDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 2018-01-16 04:44:12Z [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.AllNodesDeploySteps.resources.ControllerDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 Actually, the status code is -9; I didn't catch the output while I was deploying the overcloud, but it printed the same error message, as you can see at the bottom. I also checked the heat resources, but couldn't find anything related to this problem.
[stack at ospd ~]$ openstack stack resource show cdc61a5d-bf3e-4fee-9167-a9188fdee44c 0 +------------------------+---------------------------------- ------------------------------------------------------------ ---------------------------------------------+ | Field | Value | +------------------------+---------------------------------- ------------------------------------------------------------ ---------------------------------------------+ | attributes | {u'deploy_stdout': None, u'deploy_stderr': None, u'deploy_status_code': None} | | creation_time | 2018-01-17T08:09:55Z | | description | | | links | [{u'href': u'https://192.168.200.12:13004/v1/ 97489d3f21b3441485bf12b0d39998a3/stacks/overcloud-AllNodesDeploySteps- gkxwpp3eyz7f- | | | ControllerDeployment_Step4- e55vpyi264ub/cdc61a5d-bf3e-4fee-9167-a9188fdee44c/resources/0', u'rel': u'self'}, {u'href': | | | u'https://192.168.200.12:13004/v1/ 97489d3f21b3441485bf12b0d39998a3/stacks/overcloud-AllNodesDeploySteps- gkxwpp3eyz7f- | | | ControllerDeployment_Step4- e55vpyi264ub/cdc61a5d-bf3e-4fee-9167-a9188fdee44c', u'rel': u'stack'}] | | logical_resource_id | 0 | | parent_resource | ControllerDeployment_Step4 | | physical_resource_id | ef0ae4c3-d39a-485c-92ce- 9b18ed6b2e57 | | required_by | [] | | resource_name | 0 | | resource_status | CREATE_FAILED | | resource_status_reason | Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: -9 | | resource_type | OS::Heat::StructuredDeployment | | updated_time | 2018-01-17T08:09:55Z | +------------------------+---------------------------------- ------------------------------------------------------------ ---------------------------------------------+ So, i connected ssh via heat-admin account and checked /var/log/message in controller node. I found what i thought was related to the problem. Jan 17 03:16:59 localhost os-collect-config: #033[mNotice: /Stage[main]/Heat::Deps/Anchor[heat::service::begin]: Triggered 'refresh' from 2 events#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[mNotice: /Stage[main]/Heat::Api_cfn/Service[heat-api-cfn]/ensure: ensure changed 'stopped' to 'running'#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[mNotice: /Stage[main]/Heat::Engine/Service[heat-engine]: Triggered 'refresh' from 1 events#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[mNotice: /Stage[main]/Heat::Api/Service[heat-api]/ensure: ensure changed 'stopped' to 'running'#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[mNotice: /Stage[main]/Heat::Api_cloudwatch/Service[heat-api-cloudwatch]/ensure: ensure changed 'stopped' to 'running'#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[mNotice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events#033[0m Jan 17 03:16:59 localhost os-collect-config: [2018-01-17 08:16:58,994] (heat-config) [INFO] exception: connect failed Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Cinder::Api]): keystone_enabled is deprecated, use auth_strategy instead.#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Keystone]): Fernet token is recommended in Mitaka release. 
The default for token_provider will be changed to 'fernet' in O release.#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to glance.store.http.Store#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Swift::Client]): Could not look up qualified variable '::swift::client_package_ensure'; class ::swift has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Heat]): keystone_user_domain_id is deprecated, use the name option instead.#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Heat]): keystone_project_domain_id is deprecated, use the name option instead.#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Neutron::Agents::L3]): parameter external_network_bridge is deprecated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Neutron::Server::Notifications]): nova_url is deprecated and will be removed after Newton cycle.#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::cpu_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::ram_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::disk_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Mongodb::Server]): Replset specified, but no replset_members or replset_config provided.#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::admin_user'; class ::nova::api has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::admin_password'; class ::nova::api has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::admin_tenant_name'; class ::nova::api has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::auth_uri'; class ::nova::api has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::auth_version'; class ::nova::api has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Keystone::Authtoken]): Could not look up qualified variable '::nova::api::identity_uri'; class ::nova::api has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_host'; class ::nova::compute has not been evaluated#033[0m Jan 17 
03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_protocol'; class ::nova::compute has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_port'; class ::nova::compute has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Nova::Vncproxy::Common]): Could not look up qualified variable '::nova::compute::vncproxy_path'; class ::nova::compute has not been evaluated#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Ceilometer]): Both $metering_secret and $telemetry_secret defined, using $telemetry_secret#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 166 in file /etc/puppet/modules/gnocchi/manifests/api.pp#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Gnocchi::Api]): gnocchi:api::keystone_identity_uri is deprecated, use gnocchi::keystone::authtoken::auth_url instead#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Class[Gnocchi::Api]): gnocchi::api::keystone_auth_uri is deprecated, use gnocchi::keystone::authtoken::auth_uri instead#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Not collecting exported resources without storeconfigs#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Not collecting exported resources without storeconfigs#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Not collecting exported resources without storeconfigs#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Not collecting exported resources without storeconfigs#033[0m Jan 17 03:16:59 localhost os-collect-config: #033[1;31mWarning: Not collecting exported resources without storeconfigs#033[0m Jan 17 03:16:59 localhost os-collect-config: [2018-01-17 08:16:58,994] (heat-config) [ERROR] Error running /var/lib/heat-config/heat-config-puppet/d13987d5-3d6e-4c8c-a0d1-dad32a967ae4.pp. [-9] For example, I searched for cpu_allocation_ratio in nova.conf but couldn't find it. I guess the puppet script didn't run properly, so it printed these errors. Below is the deploy command: openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \ -e ~/templates/network-environment.yaml \ -e ~/templates/firstboot-environment.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml \ --control-scale 1 \ --compute-scale 1 --control-flavor control \ --compute-flavor compute --ntp-server 8.8.8.8 \ --neutron-network-type vxlan --neutron-tunnel-types vxlan \ --validation-errors-fatal --validation-warnings-fatal --timeout 90 For reference, I followed the page below; all template files are the same except the IP addresses. - https://keithtenzer.com/2017/04/20/red-hat-openstack-platform-10-newton-installation-and-configuration-guide/ I don't know what more I need to do. Thank you in advance.
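One way to dig the real error out of a failed overcloud stack on director-based installs, as a sketch assuming python-tripleoclient and python-heatclient are present on the undercloud (verify the command names and flags against your release):

source ~/stackrc
# summarize the deployment resources that failed, including their stderr
openstack stack failures list overcloud --long
# list all nested resources currently stuck in a FAILED state
openstack stack resource list --nested-depth 5 --filter status=FAILED overcloud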
-------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Wed Jan 17 09:58:49 2018 From: eblock at nde.ag (Eugen Block) Date: Wed, 17 Jan 2018 09:58:49 +0000 Subject: [Openstack] Could not determine a suitable URL for the plugin In-Reply-To: References: <20180115085742.Horde.CxsunQkuMGdc7Jlqtu_QnYy@webmail.nde.ag> <20180116084815.Horde.ZHG-P0E__HmSGyxnwdk0s9O@webmail.nde.ag> <20180116111048.Horde.H5YUnsAAbMz-tHW_0ilZ5j9@webmail.nde.ag> Message-ID: <20180117095850.Horde.ezYhmH6zhWiHSul_MYvQQF2@webmail.nde.ag> See, I told you to check your configs ;-) I'm glad it works now! Zitat von Sashan Govender : > Turns out the neutron config in /etc/nova/nova.conf on the compute node was > missing. > > [neutron] > url = http://controller:9696 > auth_url = http://controller:35357 > auth_type = password > project_domain_name = Default > user_domain_name = Default > region_name = RegionOne > project_name = service > username = neutron > password = rootroot > > After adding that I could create an instance. > > [sashan at controller ~]$ openstack server list > +--------------------------------------+-------------------+--------+-------------------------+------------+ > | ID | Name | Status | > Networks | Image Name | > +--------------------------------------+-------------------+--------+-------------------------+------------+ > | b9342c83-0c10-4f3e-a3b4-41bc601ea0b1 | provider-instance | ACTIVE | > provider=192.168.10.107 | cirros | > | d03058f3-0009-47c9-8b34-182034398647 | provider-instance | ERROR | > | cirros | > | 42adeacf-3027-45ba-a12d-e284995ce3a7 | provider-instance | ERROR | > | cirros | > | cfcbde0b-34f3-4ce8-ba37-735a7fa84417 | provider-instance | ERROR | > | cirros | > | 9f1481b9-0554-4cec-8cf5-163fb790f463 | provider-instance | ERROR | > | cirros | > +--------------------------------------+-------------------+--------+-------------------------+------------+ > [sashan at controller ~]$ > > > On Tue, Jan 16, 2018 at 10:10 PM Eugen Block wrote: > >> > 2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension >> > vlan-transparent not supported by any of loaded plugins >> > 2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to >> > process extensions (auto-allocated-topology) because the configured >> plugins >> > do not satisfy their requirements. Some features will not work as >> expected. >> >> This sounds like the right place to dig deeper. I would enable debug >> logs and see if there are more hints and then try to resolve this. >> >> >> >> Zitat von Sashan Govender : >> >> > On Tue, Jan 16, 2018 at 7:48 PM Eugen Block wrote: >> > >> > Thanks for the help. >> > >> > Could you also paste the output of "openstack compute service list" >> >> and "openstack network agent list"? I'd like to see if the nova and >> >> neutron services are all up and running. 
>> >> >> >> >> > [sashan at controller ~]$ openstack network agent list >> > >> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >> > | ID | Agent Type | Host >> | >> > Availability Zone | Alive | State | Binary | >> > >> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >> > | 0d5571c9-b514-4626-8738-1f87f9344978 | Linux bridge agent | compute >> | >> > None | True | UP | neutron-linuxbridge-agent | >> > | 58b3554f-e0b2-4ce6-941d-ff6ca46247a4 | DHCP agent | controller >> | >> > nova | True | UP | neutron-dhcp-agent | >> > | 5fb85699-20a9-4f8d-9b44-3317ffc1b9fc | Linux bridge agent | controller >> | >> > None | True | UP | neutron-linuxbridge-agent | >> > | c4512921-73ff-49fa-b70d-13a3518883a0 | Metadata agent | controller >> | >> > None | True | UP | neutron-metadata-agent | >> > >> +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ >> > [sashan at controller ~]$ openstack compute service list >> > >> +----+------------------+------------+----------+---------+-------+----------------------------+ >> > | ID | Binary | Host | Zone | Status | State | >> Updated >> > At | >> > >> +----+------------------+------------+----------+---------+-------+----------------------------+ >> > | 1 | nova-consoleauth | controller | internal | enabled | up | >> > 2018-01-16T10:47:22.000000 | >> > | 2 | nova-conductor | controller | internal | enabled | up | >> > 2018-01-16T10:47:22.000000 | >> > | 3 | nova-scheduler | controller | internal | enabled | up | >> > 2018-01-16T10:47:27.000000 | >> > | 6 | nova-compute | compute | nova | enabled | up | >> > 2018-01-16T10:47:24.000000 | >> > >> +----+------------------+------------+----------+---------+-------+----------------------------+ >> > >> > >> >> > I don't think the >> >> > warning about the placement api is relevant. What about the other >> one: >> >> > Unable to refresh my resource provider record? >> >> >> >> I'm not sure about that, it is just a warning. >> >> Can you confirm that glance is working properly and the image is okay? >> >> Is the network layout as expected? Any information in other logs like >> >> neutron and glance? >> >> >> > >> > I noticed this error in the neutron logs: >> > >> > 2018-01-16 21:40:12.558 1090 WARNING neutron.api.extensions [-] Extension >> > vlan-transparent not supported by any of loaded plugins >> > 2018-01-16 21:40:12.558 1090 ERROR neutron.api.extensions [-] Unable to >> > process extensions (auto-allocated-topology) because the configured >> plugins >> > do not satisfy their requirements. Some features will not work as >> expected. >> > 2018-01-16 21:40:12.559 1090 INFO neutron.quota.resource_registry [-] >> > Creating instance of TrackedResource for resource:subnet >> > 2018 >> > >> > glance seems fine i.e. no error messages. >> >> >> >> -- >> Eugen Block voice : +49-40-559 51 75 >> <+49%2040%205595175> >> NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 >> <+49%2040%205595177> >> Postfach 61 03 15 >> D-22423 Hamburg e-mail : eblock at nde.ag >> >> Vorsitzende des Aufsichtsrates: Angelika Mozdzen >> Sitz und Registergericht: Hamburg, HRB 90934 >> Vorstand: Jens-U. Mozdzen >> USt-IdNr. 
DE 814 013 983 >> >> -- Eugen Block voice : +49-40-559 51 75 NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 Postfach 61 03 15 D-22423 Hamburg e-mail : eblock at nde.ag Vorsitzende des Aufsichtsrates: Angelika Mozdzen Sitz und Registergericht: Hamburg, HRB 90934 Vorstand: Jens-U. Mozdzen USt-IdNr. DE 814 013 983 From martins-lists at hostnet.lv Wed Jan 17 10:31:47 2018 From: martins-lists at hostnet.lv (Mārtiņš Jakubovičs) Date: Wed, 17 Jan 2018 12:31:47 +0200 Subject: [Openstack] DVR Public IP consumption In-Reply-To: References: Message-ID: Hello all, It would be useful for me too if someone could explain this in detail. Additionally, I also have an issue with each virtual router that has an external GW configured, as this also consumes an additional public IP. As I read the documentation, it looks like Service Subnets solve this issue too. On 2018.01.16. 15:37, Satish Patel wrote: > Thanks Brian, > > I may having difficulty to understand that example, if I have only /24 public subnet for cloud and have 200 compute node then how does it work? > > Sent from my iPhone > >> On Jan 15, 2018, at 9:47 PM, Brian Haley wrote: >> >>> On 01/15/2018 01:57 PM, Satish Patel wrote: >>> I am planning to build openstack on production and big question is >>> network (legacy vs DVR) but in DVR big concern is number of Public IP >>> used on every compute node, I am planning to add max 100 node in >>> cluster in that case it will use 100 public IP for compute node, Ouch! >> You can reduce this public IP consumption by using multiple subnets on the external network, with one just for those DVR interfaces. See >> https://docs.openstack.org/ocata/networking-guide/config-service-subnets.html >> Example #2 for a possible configuration. >> >> As for what type of L3 configuration to run, it seems like you have a good idea of some of the trade-offs with each. >> >> -Brian >> >>> If i use legacy compute node then it could be bottleneck or failure >>> node if not in HA. >>> what most of company use for network node? DVR or legacy? >>> _______________________________________________ >>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From lars-erik.helander at proceranetworks.com Wed Jan 17 11:25:57 2018 From: lars-erik.helander at proceranetworks.com (Lars-Erik Helander) Date: Wed, 17 Jan 2018 11:25:57 +0000 Subject: [Openstack] [nova] Compute node in Pike does not register itself Message-ID: <8803361B-D5BC-477F-B309-C9EBCA83C2AC@proceranetworks.com> I cannot get my compute nodes to register themselves when using Pike. It works OK in Ocata. Is there some additional config, service or software package required in Pike? If I monitor the IP traffic on the compute node the following can be seen when nova-compute is started: Ocata: Compute node “registration” message sent from compute node to the controller node, along with calls to the placement api. Pike: No activity at all is seen on the network.
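One thing worth ruling out for this symptom, offered as a sketch under the assumption of a standard cells v2 setup: in Pike, nova-compute will not report in if its [placement] section in nova.conf is missing or wrong, and a freshly added host additionally has to be mapped under cells v2 before it shows up. Service names below are the RDO-style ones and may differ on your distribution.

# on the controller, after nova-compute has been started on the new host
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
openstack compute service list --service nova-compute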
The nova-compute log does not show anything after 2018-01-17 11:10:41.044 DEBUG os_brick.initiator.connector [req-1095033c-df63-4123-9d89-e3a5610a4b61 None None] Factory for VERITAS_HYPERSCALE on x86_64 from (pid=35345) factory /usr/lib/python2.7/dist-packages/os_brick/initiator/connector.py:290 Can anyone provide any hints on this? /Lars -------------- next part -------------- An HTML attachment was scrubbed... URL: From correajl at gmail.com Wed Jan 17 17:46:18 2018 From: correajl at gmail.com (Jorge Luiz Correa) Date: Wed, 17 Jan 2018 15:46:18 -0200 Subject: [Openstack] Meaning of each field of 'hypervisor stats show' command. Message-ID: Hi, I would like some help understanding what each field in the output of the command 'openstack hypervisor stats show' means: $ openstack hypervisor stats show +----------------------+---------+ | Field | Value | +----------------------+---------+ | count | 5 | | current_workload | 0 | | disk_available_least | 1848 | | free_disk_gb | 1705 | | free_ram_mb | 2415293 | | local_gb | 2055 | | local_gb_used | 350 | | memory_mb | 2579645 | | memory_mb_used | 164352 | | running_vms | 13 | | vcpus | 320 | | vcpus_used | 75 | +----------------------+---------+ Could anyone point me to documentation that explains each one? Some of them are clear but others are not. Thanks! - JLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Jan 17 18:50:10 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 17 Jan 2018 13:50:10 -0500 Subject: [Openstack] Meaning of each field of 'hypervisor stats show' command. In-Reply-To: References: Message-ID: <1c3447bc-4b4f-4445-45a2-95c0ffcacd27@gmail.com> On 01/17/2018 12:46 PM, Jorge Luiz Correa wrote: > Hi, I would like some help understanding what each field in the > output of the command 'openstack hypervisor stats show' means: It's an amalgamation of legacy information that IMHO should be deprecated from the Compute API. FWIW, the "implementation" for this API response is basically just a single SQL statement issued against each Nova cell DB: https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L755 > $ openstack hypervisor stats show > +----------------------+---------+ > | Field | Value | > +----------------------+---------+ > | count | 5 | number of hypervisor hosts in the system that are not disabled. > | current_workload | 0 | The SUM of active boot/reboot/migrate/resize operations going on for all the hypervisor hosts. What actions represent "workload"? See here: https://github.com/openstack/nova/blob/master/nova/compute/stats.py#L45 > | disk_available_least | 1848 | who knows? it's dependent on the virt driver and the disk image backing file and about as reliable as a one-armed guitar player. > | free_disk_gb | 1705 | theoretically should be sum(local_gb - local_gb_used) for all hypervisor hosts. > | free_ram_mb | 2415293 | theoretically should be sum(memory_mb - memory_mb_used) for all hypervisor hosts. > | local_gb | 2055 | amount of space, in GB, available for ephemeral disk images on the hypervisor hosts. if shared storage is used, this value is as useful as having two left feet. > | local_gb_used | 350 | the amount of storage used for ephemeral disk images of instances on the hypervisor hosts. if the instances are boot-from-volume, this number is about as valuable as a three-dollar bill. > | memory_mb | 2579645 | the total amount of RAM the hypervisor hosts have.
this does not take into account the amount of reserved memory the host might have configured. > | memory_mb_used | 164352 | the total amount of memory allocated to guest VMs on the hypervisor hosts. > | running_vms | 13 | the total number of VMs on all the hypervisor hosts that are NOT in the DELETED or SHELVED_OFFLOADED states. https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py#L78 > | vcpus | 320 | total amount of physical CPU core-threads across all hypervisor hosts. > | vcpus_used | 75 | > +----------------------+---------+ total number of vCPUs allocated to guests (regardless of VM state) across the hypervisor hosts. Best, -jay > > Anyone could indicate the documentation that explain each one? Some of > them is clear but others are not. > > Thanks! > > - JLC > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From eblock at nde.ag Thu Jan 18 16:11:13 2018 From: eblock at nde.ag (Eugen Block) Date: Thu, 18 Jan 2018 16:11:13 +0000 Subject: [Openstack] Rally - problem with some test In-Reply-To: <44404089.20171206094831@chrustek.net> Message-ID: <20180118161113.Horde.q_veT0HYYEWGKPZGMR-PT2d@webmail.nde.ag> Hi, I can't really help you yet, I just started to deal with rally this week, but I kept your mail in my inbox, just in case ;-) How did you configure your json file? Obviously, it's nova who is complaining about the block devices. How are the instances usually created in your environment? If I launch an instance via horizon, it has preselected "Yes" for "Create new volume", I don't know if this affects rally, too. Regards, Eugen Zitat von Łukasz Chrustek : > Hi, > > I have folowing problem with resize-server.json test in rally: > > # rally task start resize-server.json > > Traceback (most recent call last): > File > "/usr/local/lib/python2.7/dist-packages/rally/task/runner.py", line > 71, in _run_scenario_once > getattr(scenario_inst, method_name)(**scenario_kwargs) > File > "/usr/local/lib/python2.7/dist-packages/rally/plugins/openstack/scenarios/nova/servers.py", line 388, in > run > server = self._boot_server(image, flavor, **kwargs) > File > "/usr/local/lib/python2.7/dist-packages/rally/task/atomic.py", line > 87, in func_atomic_actions > f = func(self, *args, **kwargs) > File > "/usr/local/lib/python2.7/dist-packages/rally/plugins/openstack/scenarios/nova/utils.py", line 80, in > _boot_server > server_name, image, flavor, **kwargs) > File > "/usr/local/lib/python2.7/dist-packages/novaclient/v2/servers.py", > line 1403, in create > **boot_kwargs) > File > "/usr/local/lib/python2.7/dist-packages/novaclient/v2/servers.py", > line 802, in _boot > return_raw=return_raw, **kwargs) > File "/usr/local/lib/python2.7/dist-packages/novaclient/base.py", > line 361, in _create > resp, body = self.api.client.post(url, body=body) > File > "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", > line 310, in post > return self.request(url, 'POST', **kwargs) > File > "/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line > 83, in request > raise exceptions.from_response(resp, body, url, method) > BadRequest: Block Device Mapping is Invalid: You specified more > local devices than the limit allows (HTTP 400) (Request-ID: > req-30fa2508-cc8e-45f4-9f1c-86202de111df) > > > we don't have ephemeral disk allowed. 
What options I need to pass to > rally/json file, to make it work ? > > regards > Luk > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Eugen Block voice : +49-40-559 51 75 NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77 Postfach 61 03 15 D-22423 Hamburg e-mail : eblock at nde.ag Vorsitzende des Aufsichtsrates: Angelika Mozdzen Sitz und Registergericht: Hamburg, HRB 90934 Vorstand: Jens-U. Mozdzen USt-IdNr. DE 814 013 983 From haleyb.dev at gmail.com Thu Jan 18 19:12:14 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Thu, 18 Jan 2018 14:12:14 -0500 Subject: [Openstack] DVR Public IP consumption In-Reply-To: References: Message-ID: <79409279-ad7f-25f5-622e-dfd9e6333ea9@gmail.com> On 01/16/2018 08:37 AM, Satish Patel wrote: > Thanks Brian, > > I may having difficulty to understand that example, if I have only /24 public subnet for cloud and have 200 compute node then how does it work? The intention is to have a second subnet on the external network, but only have it usable within the datacenter. If you create it and set the service-type to only certain types of ports, like DVR, then it won't be used for floating IP as the other one is. -Brian >> On Jan 15, 2018, at 9:47 PM, Brian Haley wrote: >> >>> On 01/15/2018 01:57 PM, Satish Patel wrote: >>> I am planning to build openstack on production and big question is >>> network (legacy vs DVR) but in DVR big concern is number of Public IP >>> used on every compute node, I am planning to add max 100 node in >>> cluster in that case it will use 100 public IP for compute node, Ouch! >> >> You can reduce this public IP consumption by using multiple subnets on the external network, with one just for those DVR interfaces. See >> https://docs.openstack.org/ocata/networking-guide/config-service-subnets.html >> Example #2 for a possible configuration. >> >> As for what type of L3 configuration to run, it seems like you have a good idea of some of the trade-offs with each. >> >> -Brian >> >>> If i use legacy compute node then it could be bottleneck or failure >>> node if not in HA. >>> what most of company use for network node? DVR or legacy? 
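To make that suggestion concrete, here is a sketch of the two-subnet layout from the service-subnets guide Brian links; the network name and CIDRs are made up for illustration, and the --service-type values are the ones that guide documents:

# subnet reserved for DVR agent gateways and router gateway ports
openstack subnet create --network provider --subnet-range 198.51.100.0/24 \
  --service-type network:floatingip_agent_gateway \
  --service-type network:router_gateway provider-dvr-subnet
# subnet that only hands out floating IPs
openstack subnet create --network provider --subnet-range 203.0.113.0/24 \
  --service-type network:floatingip provider-fip-subnet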
>>> _______________________________________________ >>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> From xiefp88 at sina.com Fri Jan 19 16:16:39 2018 From: xiefp88 at sina.com (xiefp88 at sina.com) Date: Sat, 20 Jan 2018 00:16:39 +0800 Subject: [Openstack] devstack ERROR ：Error opening current controlling terminal for the process (`/dev/tty'): No such device or address (polkit-error-quark, 0) Message-ID: <20180119161639.D3DFAAC00BF@webmail.sinamail.sina.com.cn> I got this error when I run stack.sh in devstack: 2018-01-19 16:05:11.744 | + /opt/stack/ironic/devstack/tools/ironic/scripts/setup-network.sh::L24: sudo ip link set dev brbm up 2018-01-19 16:05:11.757 | + /opt/stack/ironic/devstack/tools/ironic/scripts/setup-network.sh::L27: virsh net-list 2018-01-19 16:05:11.757 | + /opt/stack/ironic/devstack/tools/ironic/scripts/setup-network.sh::L27: grep 'brbm ' 2018-01-19 16:05:11.852 | Error creating textual authentication agent: Error opening current controlling terminal for the process (`/dev/tty'): No such device or address (polkit-error-quark, 0) 2018-01-19 16:05:11.866 | error: failed to connect to the hypervisor 2018-01-19 16:05:11.866 | error: authentication unavailable: no polkit agent available to authenticate action 'org.libvirt.unix.manage' 2018-01-19 16:05:11.869 | + /opt/stack/ironic/devstack/tools/ironic/scripts/setup-network.sh::L28: virsh net-list --inactive 2018-01-19 16:05:11.869 | + /opt/stack/ironic/devstack/tools/ironic/scripts/setup-network.sh::L28: grep 'brbm ' 2018-01-19 16:05:11.906 | Error creating textual authentication agent: Error opening current controlling terminal for the process (`/dev/tty'): No such device or address (polkit-error-quark, 0) And here is my local.conf: [stack at localhost devstack]$ cat local.conf [[local|localrc]] PIP_UPGRADE=True FORCE=yes # Credentials ADMIN_PASSWORD=123456 DATABASE_PASSWORD=123456 RABBIT_PASSWORD=123456 SERVICE_PASSWORD=123456 SERVICE_TOKEN=123456 SWIFT_HASH=123456 SWIFT_TEMPURL_KEY=123456 # Enable Ironic plugin enable_plugin ironic git://git.openstack.org/openstack/ironic # Enable Mogan plugin enable_plugin mogan git://git.openstack.org/openstack/mogan ## Install networking-generic-switch Neutron ML2 driver that interacts with OVS enable_plugin networking-generic-switch https://git.openstack.org/openstack/networking-generic-switch ENABLED_SERVICES=g-api,g-reg,q-agt,q-dhcp,q-l3,q-svc,key,mysql,rabbit,ir-api,ir-cond,s-account,s-container,s-object,s-proxy,tempest # Swift temp URL's are required for agent_* drivers. SWIFT_ENABLE_TEMPURLS=True # Add link local info when registering Ironic node IRONIC_USE_LINK_LOCAL=True IRONIC_ENABLED_NETWORK_INTERFACES=flat,neutron IRONIC_NETWORK_INTERFACE=neutron #Networking configuration OVS_PHYSICAL_BRIDGE=brbm PHYSICAL_NETWORK=mynetwork IRONIC_PROVISION_NETWORK_NAME=ironic-provision IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24 IRONIC_PROVISION_SUBNET_GATEWAY=10.0.5.1 Q_PLUGIN=ml2 ENABLE_TENANT_VLANS=True Q_ML2_TENANT_NETWORK_TYPE=vlan TENANT_VLAN_RANGE=100:150 Q_USE_PROVIDERNET_FOR_PUBLIC=False # Set resource_classes for nodes to use placement service IRONIC_USE_RESOURCE_CLASSES=True # Create 3 virtual machines to pose as Ironic's baremetal nodes. IRONIC_VM_COUNT=3 IRONIC_VM_SSH_PORT=22 IRONIC_BAREMETAL_BASIC_OPS=True # Enable Ironic drivers.
IRONIC_ENABLED_DRIVERS=fake,agent_ipmitool,pxe_ipmitool # Change this to alter the default driver for nodes created by devstack. # This driver should be in the enabled list above. IRONIC_DEPLOY_DRIVER=agent_ipmitool # Using Ironic agent deploy driver by default, so don't use whole disk # image in tempest. IRONIC_TEMPEST_WHOLE_DISK_IMAGE=False # The parameters below represent the minimum possible values to create # functional nodes. IRONIC_VM_SPECS_RAM=1024 IRONIC_VM_SPECS_DISK=10 # To build your own IPA ramdisk from source, set this to True IRONIC_BUILD_DEPLOY_RAMDISK=False #RECLONE=True RECLONE=False # Log all output to files LOGFILE=/home/stack/logs/stack.sh.log VERBOSE=True LOG_COLOR=True SCREEN_LOGDIR=/home/stack/logs IRONIC_VM_LOG_DIR=/home/stack/ironic-bm-logs What might be the reason? Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Fri Jan 19 23:58:57 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Sat, 20 Jan 2018 08:58:57 +0900 Subject: [Openstack] devstack ERROR ：Error opening current controlling terminal for the process (`/dev/tty'): No such device or address (polkit-error-quark, 0) In-Reply-To: <20180119161639.D3DFAAC00BF@webmail.sinamail.sina.com.cn> References: <20180119161639.D3DFAAC00BF@webmail.sinamail.sina.com.cn> Message-ID: <5F4F5260-2CE9-49D6-BE43-CA1A69616A6E@gmail.com> This error indicates that something is wrong with the setup of the OS on which you are running DevStack, or with the VM on which you are running the OS, especially if you install DevStack on a VirtualBox VM. Google for “Error creating textual authentication agent: Error opening current controlling terminal for the process” and pick the result that best matches your situation. Bernd > On Jan 20, 2018, at 1:16, wrote: > > Error creating textual authentication agent: Error opening current controlling terminal for the process -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.dulak at gmail.com Sat Jan 20 17:28:20 2018 From: marcin.dulak at gmail.com (Marcin Dulak) Date: Sat, 20 Jan 2018 18:28:20 +0100 Subject: [Openstack] Help with openstack-ansible load balancers settings Message-ID: Hi, I need help with the internal_lb_vip_address/external_lb_vip_addresses. I've found several posts of people asking for clarification of the purpose and settings of those load balancers. One of the discussions pointed to https://github.com/openstack/openstack-ansible/blob/a46a72aa7838a3d500e8e397038c6fbded21745c/etc/openstack_deploy/openstack_user_config.yml.example#L103-L118, but that's unclear to me. Does openstack-ansible (17.0.0.0b2 on CentOS7) take care of those load balancers or do I need to configure them manually (if so, how?)? Taking https://docs.openstack.org/project-deploy-guide/openstack-ansible/pike/app-config-prod.html as an example, I thought that the internal_lb_vip_address of 172.29.236.9 corresponds to the deployment host, but when running setup-infrastructure.yml I see haproxy being configured on the infra nodes (.11, .12, .13) and listening on 172.29.236.9:8181. Since there is no service listening on 8181 on the deployment host 172.29.236.9, I'm getting the behavior described in https://ask.openstack.org/en/question/104307/openstack-ansible-pip-issues-while-installing-the-infrastructure/ Cheers, Marcin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ata.abdollahi68 at gmail.com Sun Jan 21 21:01:11 2018 From: ata.abdollahi68 at gmail.com (Ata Abdollahi) Date: Mon, 22 Jan 2018 00:31:11 +0330 Subject: [Openstack] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver Message-ID: Hello everybody, I'm a beginner in openstack and I have installed openstack ocata successfully. I am using the link below to install lbaas: https://docs.openstack.org/ocata/networking-guide/config-lbaas.html When I try to install lbaasv2 I encounter the error below: openstack at ubuntu:~$ sudo neutron-lbaasv2-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/lbaas_agent.ini Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports. 2018-01-21 22:43:12.408 10772 INFO neutron.common.config [-] Logging enabled! 2018-01-21 22:43:12.409 10772 INFO neutron.common.config [-] /usr/bin/neutron-lbaasv2-agent version 10.0.4 2018-01-21 22:43:12.411 10772 WARNING stevedore.named [req-6ebf45ef-7ff4-43c2-8c9a-d9b1f3acc839 - - - - -] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver ^C2018-01-21 22:43:19.697 10772 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting I entered the commands on the controller node. Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hannes.fuchs at student.htw-berlin.de Mon Jan 22 22:01:22 2018 From: hannes.fuchs at student.htw-berlin.de (Hannes Fuchs) Date: Mon, 22 Jan 2018 23:01:22 +0100 Subject: [Openstack] [openstack] [swift] Erasure Coding - No reconstruction to other nodes/disks on disk failure Message-ID: <190b78fe-268b-b5a9-d96f-27f09b0f9866@student.htw-berlin.de> Hello all, for my master's thesis I'm analyzing different storage policies in openstack swift. I'm mainly interested in the reconstruction speed of the different EC implementations. I've noticed in my tests that there is no reconstruction of fragments/parity to other nodes/disks if a disk fails. My test setup consists of 8 nodes with 4 disks each. OS is Ubuntu 16.04 LTS and the swift version is 2.15.1/pike. Here are my 2 example policies: --- [storage-policy:2] name = liberasurecode-rs-vand-4-2 policy_type = erasure_coding ec_type = liberasurecode_rs_vand ec_num_data_fragments = 4 ec_num_parity_fragments = 2 ec_object_segment_size = 1048576 [storage-policy:3] name = liberasurecode-rs-vand-3-1 policy_type = erasure_coding ec_type = liberasurecode_rs_vand ec_num_data_fragments = 3 ec_num_parity_fragments = 1 ec_object_segment_size = 1048576 --- ATM I've tested only the ec_type liberasurecode_rs_vand. With other implementations the startup of swift fails, but I think this is another topic. To simulate a disk failure I'm using fault injection [1]. Testrun example: 1. fill with objects (32.768 1M Objects, Sum: 32GB) 2. make a disk "fail" 3. disk failure is detected, /but no reconstruction/ 4. replace "failed" disk, mount "new" empty disk 5. missing fragments/parity is reconstructed on new empty disk Expected: 1. fill with objects (32.768 1M Objects, Sum: 32GB) 2. make a disk "fail" 3. disk failure is detected, reconstruction to remaining disks/nodes 4. replace "failed" disk, mount "new" empty disk 5. rearrange data in ring to pre fail state Shouldn't the missing fragments/parity be reconstructed on the remaining disks/nodes? (See point 3 in the testrun example.)
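For what it's worth, a sketch of the manual path that normally makes the rebuild land on the remaining nodes: the reconstructor only repairs to handoff locations once the failed device is unmounted (mount_check) or taken out of the ring. The zone, IP, port and device below are made-up examples; check the search-value syntax against swift-ring-builder's help first.

# on the failed node: unmounting lets the other nodes rebuild to handoffs
umount /srv/node/sdb
# or remove the device from the ring entirely and push the updated ring out
swift-ring-builder object-2.builder remove z1-10.0.0.11:6200/sdb
swift-ring-builder object-2.builder rebalance
# then copy the new object-2.ring.gz to every storage node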
[1] https://www.kernel.org/doc/Documentation/fault-injection/fault-injection.txt Cheers, Hannes Fuchs From doka.ua at gmx.com Tue Jan 23 09:25:34 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Tue, 23 Jan 2018 11:25:34 +0200 Subject: [Openstack] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver In-Reply-To: References: Message-ID: Hi Ata, when you use Octavia, you don't need agents, as specified in the documentation: "Ensure that the LBaaS v1 and v2 service providers are removed from the [service_providers] section. They are not used with Octavia. *Verify that all LBaaS agents are stopped.*" Also, the neutron lbaas CLI is deprecated in favor of the openstack lbaas CLI, which talks to Octavia directly, using the corresponding endpoints. On 1/21/18 11:01 PM, Ata Abdollahi wrote: > Hello everybody, > I'm a beginner in openstack and I have installed openstack ocata > successfully. I am using the link below to install lbaas: > https://docs.openstack.org/ocata/networking-guide/config-lbaas.html > > When I try to install lbaasv2 I encounter the error below: > openstack at ubuntu:~$ sudo neutron-lbaasv2-agent --config-file > /etc/neutron/neutron.conf --config-file /etc/neutron/lbaas_agent.ini > Guru meditation now registers SIGUSR1 and SIGUSR2 by default for > backward compatibility. SIGUSR1 will no longer be registered in a > future release, so please use SIGUSR2 to generate reports. > 2018-01-21 22:43:12.408 10772 INFO neutron.common.config [-] Logging > enabled! > 2018-01-21 22:43:12.409 10772 INFO neutron.common.config [-] > /usr/bin/neutron-lbaasv2-agent version 10.0.4 > 2018-01-21 22:43:12.411 10772 WARNING stevedore.named > [req-6ebf45ef-7ff4-43c2-8c9a-d9b1f3acc839 - - - - -] Could not load > neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver > ^C2018-01-21 22:43:19.697 10772 INFO oslo_service.service [-] Caught > SIGINT signal, instantaneous exiting > > > I entered the commands on the controller node. > > Thanks a lot. > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Fri Jan 26 04:29:52 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Fri, 26 Jan 2018 11:29:52 +0700 Subject: [Openstack] [neutron] Cannot access provider network (Openstack Packstack Opendaylight integration) Message-ID: Hi everyone, I am trying to integrate OpenStack, built with Packstack (CentOS), with OpenDaylight.
This is my topology: Openstack Controller : 10.210.210.10 & 10.211.211.10 - eth1 : 10.211.211.10/24 - eth0 : 10.210.210.10/24 Openstack Compute : 10.210.210.20 & 10.211.211.20 - eth1 : 10.211.211.20/24 - eth0 : 10.210.210.20/24 OpenDayLight : 10.210.210.30 - eth1 : 10.210.210.30/24 Provider Network : 10.211.211.0/24 Tenant Network : 10.210.210.0/24 Openstack Version : Newton OpenDayLight Version : Nitrogen SR1 These are my packstack configuration changes: CONFIG_HEAT_INSTALL=y CONFIG_NEUTRON_FWAAS=y CONFIG_NEUTRON_VPNAAS=y CONFIG_LBAAS_INSTALL=y CONFIG_CINDER_INSTALL=n CONFIG_SWIFT_INSTALL=n CONFIG_CEILOMETER_INSTALL=n CONFIG_AODH_INSTALL=n CONFIG_GNOCCHI_INSTALL=n CONFIG_NAGIOS_INSTALL=n CONFIG_PROVISION_DEMO=n CONFIG_COMPUTE_HOSTS=10.X0.X0.20 CONFIG_USE_EPEL=y CONFIG_KEYSTONE_ADMIN_PW=rahasia CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,gre,vlan,flat,local CONFIG_NEUTRON_ML2_FLAT_NETWORKS=external CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=external:br-ex CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth1 CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ex I tried to follow this tutorial: http://docs.opendaylight.org/en/stable-nitrogen/submodules/netvirt/docs/openstack-guide/openstack-with-netvirt.html The instance gets DHCP on the tenant network and can ping the tenant router gateway IP, but I can't ping anything on the provider network. This is all of my configuration for integrating with OpenDaylight: ## OPENDAYLIGHT ## ** Set ACL mkdir -p etc/opendaylight/datastore/initial/config/ cp system/org/opendaylight/netvirt/aclservice-impl/0.5.1/aclservice-impl-0.5.1-config.xml etc/opendaylight/datastore/initial/config/netvirt-aclservice-config.xml sed -i s/stateful/transparent/ etc/opendaylight/datastore/initial/config/netvirt-aclservice-config.xml export JAVA_HOME=/usr/java/jdk1.8.0_162/jre ./bin/karaf ** Install Feature feature:install odl-dluxapps-nodes odl-dlux-core odl-dluxapps-topology odl-dluxapps-applications odl-netvirt-openstack odl-netvirt-ui odl-mdsal-apidocs odl-l2switch-all ## OPENSTACK CONTROLLER NODE ## systemctl stop neutron-server systemctl stop neutron-openvswitch-agent systemctl disable neutron-openvswitch-agent systemctl stop neutron-l3-agent systemctl disable neutron-l3-agent systemctl stop openvswitch rm -rf /var/log/openvswitch/* rm -rf /etc/openvswitch/conf.db systemctl start openvswitch ovs-vsctl set-manager tcp:10.210.210.30:6640 ovs-vsctl del-port br-int eth1 ovs-vsctl add-br br-ex ovs-vsctl add-port br-ex eth1 ovs-vsctl set-controller br-ex tcp:10.210.210.30:6653 ovs-vsctl set Open_vSwitch . other_config:local_ip=10.210.210.10 ovs-vsctl get Open_vSwitch . other_config yum -y install python-networking-odl crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan cat << EOT >> /etc/neutron/plugins/ml2/ml2_conf.ini [ml2_odl] password = admin username = admin url = http://10.210.210.30:8080/controller/nb/v2/neutron EOT crudini --set /etc/neutron/plugins/neutron.conf DEFAULT service_plugins odl-router crudini --set /etc/neutron/plugins/dhcp_agent.ini OVS ovsdb_interface vsctl mysql -e "DROP DATABASE IF EXISTS neutron;" mysql -e "CREATE DATABASE neutron CHARACTER SET utf8;" neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head systemctl start neutron-server sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings=external:br-ex
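Before moving on to the compute node, a quick sanity check that ODL's neutron northbound actually answers, reusing the credentials and URL from the [ml2_odl] block above (endpoint path as used by networking-odl v1; adjust if your install differs):

curl -s -u admin:admin http://10.210.210.30:8080/controller/nb/v2/neutron/networks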
## OPENSTACK COMPUTE NODE ## systemctl stop neutron-openvswitch-agent systemctl disable neutron-openvswitch-agent systemctl stop neutron-l3-agent systemctl disable neutron-l3-agent systemctl stop openvswitch rm -rf /var/log/openvswitch/* rm -rf /etc/openvswitch/conf.db systemctl start openvswitch ovs-vsctl set-manager tcp:10.210.210.30:6640 ovs-vsctl set-manager tcp:10.210.210.30:6640 ovs-vsctl del-port br-int eth1 ovs-vsctl add-br br-ex ovs-vsctl add-port br-ex eth1 ovs-vsctl set-controller br-ex tcp:10.210.210.30:6653 ovs-vsctl set Open_vSwitch . other_config:local_ip=10.210.210.20 ovs-vsctl get Open_vSwitch . other_config yum -y install python-networking-odl sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings=external:br-ex ## REPORT ## ############ ## OVS-VSCTL SHOW ## ### CONTROLLER ### [root at pod21-controller ~]# ovs-vsctl show 525fbe7c-e60c-4135-b0a5-178d76c04529 Manager "ptcp:6640:127.0.0.1" is_connected: true Bridge br-tun Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port "gre-0ad2d214" Interface "gre-0ad2d214" type: gre options: {df_default="true", in_key=flow, local_ip="10.210.210.10", out_key=flow, remote_ip="10.210.210.20"} Port br-tun Interface br-tun type: internal Port "vxlan-0ad2d214" Interface "vxlan-0ad2d214" type: vxlan options: {df_default="true", in_key=flow, local_ip="10.210.210.10", out_key=flow, remote_ip="10.210.210.20"} Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Bridge br-ex Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eth1" Interface "eth1" Port br-ex Interface br-ex type: internal Bridge br-int Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port br-int Interface br-int type: internal Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} ovs_version: "2.6.1" ### COMPUTE ### [root at pod21-compute ~]# ovs-vsctl show f4466d5a-c1f5-4c5c-91c3-636944cd0f97 Manager "ptcp:6640:127.0.0.1" is_connected: true Bridge br-ex Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal Port "eth1" Interface "eth1" Bridge br-int Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Bridge br-tun Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port "gre-0ad2d20a" Interface "gre-0ad2d20a" type: gre options: {df_default="true", in_key=flow, local_ip="10.210.210.20", out_key=flow, remote_ip="10.210.210.10"} Port br-tun Interface br-tun type: internal Port "vxlan-0ad2d20a" Interface "vxlan-0ad2d20a" type: vxlan options: {df_default="true", in_key=flow, local_ip="10.210.210.20", out_key=flow, remote_ip="10.210.210.10"} ovs_version: "2.6.1" ### OVS-VSCTL AFTER CONFIG ### ### CONTROLLER ### [root at pod21-controller ~]# ovs-vsctl show 71b22ef2-fbea-4cd4-ba6a-883b3df9c5f1 Manager "tcp:10.210.210.30:6640" is_connected: true Bridge br-int Controller "tcp:10.210.210.30:6653" is_connected: true fail_mode: secure Port br-int Interface br-int type: internal
Bridge br-ex Controller "tcp:10.210.210.30:6653" is_connected: true Port br-ex Interface br-ex type: internal Port "eth1" Interface "eth1" ovs_version: "2.6.1" ### COMPUTE ### [root at pod21-compute ~]# ovs-vsctl show 3bede8e2-eb29-4dbb-97f0-4cbadb2c0195 Manager "tcp:10.210.210.30:6640" is_connected: true Bridge br-ex Controller "tcp:10.210.210.30:6653" is_connected: true Port br-ex Interface br-ex type: internal Port "eth1" Interface "eth1" Bridge br-int Controller "tcp:10.210.210.30:6653" is_connected: true fail_mode: secure Port br-int Interface br-int type: internal ovs_version: "2.6.1" ### AFTER ADDING INSTANCE ### ### CONTROLLER ### [root at pod21-controller ~(keystone_admin)]# ovs-vsctl show 71b22ef2-fbea-4cd4-ba6a-883b3df9c5f1 Manager "ptcp:6640:127.0.0.1" is_connected: true Manager "tcp:10.210.210.30:6640" is_connected: true Bridge br-int Controller "tcp:10.210.210.30:6653" is_connected: true fail_mode: secure Port "tapab981c1e-4b" Interface "tapab981c1e-4b" type: internal Port "qr-cba77b1d-73" Interface "qr-cba77b1d-73" type: internal Port br-int Interface br-int type: internal Port "tun7314cbc7b3e" Interface "tun7314cbc7b3e" type: vxlan options: {key=flow, local_ip="10.210.210.10", remote_ip="10.210.210.20"} Bridge br-ex Controller "tcp:10.210.210.30:6653" is_connected: true Port "qg-1ba8c01a-15" Interface "qg-1ba8c01a-15" type: internal Port br-ex Interface br-ex type: internal Port "eth1" Interface "eth1" ovs_version: "2.6.1" ### COMPUTE ### [root at pod21-compute ~]# ovs-vsctl show 3bede8e2-eb29-4dbb-97f0-4cbadb2c0195 Manager "tcp:10.210.210.30:6640" is_connected: true Bridge br-ex Controller "tcp:10.210.210.30:6653" is_connected: true Port br-ex Interface br-ex type: internal Port "eth1" Interface "eth1" Bridge br-int Controller "tcp:10.210.210.30:6653" is_connected: true fail_mode: secure Port "tun51bba5158fe" Interface "tun51bba5158fe" type: vxlan options: {key=flow, local_ip="10.210.210.20", remote_ip="10.210.210.10"} Port "tap1e71587f-32" Interface "tap1e71587f-32" Port "tap5c0a404b-75" Interface "tap5c0a404b-75" Port br-int Interface br-int type: internal ovs_version: "2.6.1" I tried mapping to eth1 or br-ex, but it's the same: I can't ping anything on the provider network (only the gateway 10.211.211.1 from the controller or compute node). Thanks :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fawaz.moh.ibraheem at gmail.com Fri Jan 26 05:58:38 2018 From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed) Date: Fri, 26 Jan 2018 09:58:38 +0400 Subject: [Openstack] [neutron] Cannot access provider network (Openstack Packstack Opendaylight integration) In-Reply-To: References: Message-ID: Hi Zufar, I see no patch peer between br-int and br-ex (int-br-ex <-> phy-br-ex). Try to add it manually (a sketch of the commands appears at the end of this digest), then make the changes in your mapping configuration directives. --- Regards, Fawaz Mohammed On Fri, Jan 26, 2018 at 8:29 AM, Zufar Dhiyaulhaq wrote: > Hi everyone, I try to integerate Openstack that build with packstack > (Centos) with OpenDayLight.
> this is my topology > > Openstack Controller : 10.210.210.10 & 10.211.211.10 > - eth1 : 10.211.211.10/24 > - eth0 : 10.210.210.10/24 > > Openstack Compute : 10.210.210.20 & 10.211.211.20 > - eth1 : 10.211.211.20/24 > - eth0 : 10.210.210.20/24 > > OpenDayLight : 10.210.210.30 > - eth1 : 10.210.210.30/24 > > Provider Network : 10.211.211.0/24 > Tenant Network : 10.210.210.0/24 > > Openstack Version : Newton > OpenDayLight Version : Nitrogen SR1 > > this is my packstack configuration changes > > CONFIG_HEAT_INSTALL=y > CONFIG_NEUTRON_FWAAS=y > CONFIG_NEUTRON_VPNAAS=y > CONFIG_LBAAS_INSTALL=y > > CONFIG_CINDER_INSTALL=n > CONFIG_SWIFT_INSTALL=n > CONFIG_CEILOMETER_INSTALL=n > CONFIG_AODH_INSTALL=n > CONFIG_GNOCCHI_INSTALL=n > CONFIG_NAGIOS_INSTALL=n > CONFIG_PROVISION_DEMO=n > > CONFIG_COMPUTE_HOSTS=10.X0.X0.20 > CONFIG_USE_EPEL=y > CONFIG_KEYSTONE_ADMIN_PW=rahasia > CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,gre,vlan,flat,local > CONFIG_NEUTRON_ML2_FLAT_NETWORKS=external > CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=external:br-ex > CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth1 > CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ex > > I try to follow this tutorial : http://docs.opendaylight.org/e > n/stable-nitrogen/submodules/netvirt/docs/openstack-guide/op > enstack-with-netvirt.html > > the instance is getting dhcp in tenant network and ping the ip tenant > router gateway. but i cant ping all of provider network. > > this is all of my configuration when integrating with opendaylight > > ## OPENDAYLIGHT ## > > ** Set ACL > mkdir -p etc/opendaylight/datastore/initial/config/ > cp system/org/opendaylight/netvirt/aclservice-impl/0.5.1/aclservice-impl-0.5.1-config.xml > etc/opendaylight/datastore/initial/config/netvirt-aclservice-config.xml > sed -i s/stateful/transparent/ etc/opendaylight/datastore/ini > tial/config/netvirt-aclservice-config.xml > > export JAVA_HOME=/usr/java/jdk1.8.0_162/jre > ./bin/karaf > > ** Install Feature > feature:install odl-dluxapps-nodes odl-dlux-core odl-dluxapps-topology > odl-dluxapps-applications odl-netvirt-openstack odl-netvirt-ui > odl-mdsal-apidocs odl-l2switch-all > > ## OPENSTACK CONTROLLER NODE ## > > systemctl stop neutron-server > systemctl stop neutron-openvswitch-agent > systemctl disable neutron-openvswitch-agent > systemctl stop neutron-l3-agent > systemctl disable neutron-l3-agent > > systemctl stop openvswitch > rm -rf /var/log/openvswitch/* > rm -rf /etc/openvswitch/conf.db > systemctl start openvswitch > > ovs-vsctl set-manager tcp:10.210.210.30:6640 > ovs-vsctl del-port br-int eth1 > ovs-vsctl add-br br-ex > ovs-vsctl add-port br-ex eth1 > ovs-vsctl set-controller br-ex tcp:10.210.210.30:6653 > > ovs-vsctl set Open_vSwitch . other_config:local_ip=10.210.210.10 > ovs-vsctl get Open_vSwitch . 
other_config > > yum -y install python-networking-odl > > crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 > mechanism_drivers opendaylight > crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 > tenant_network_types vxlan > > cat <> /etc/neutron/plugins/ml2/ml2_conf.ini > [ml2_odl] > password = admin > username = admin > url = http://10.210.210.30:8080/controller/nb/v2/neutron > EOT > > crudini --set /etc/neutron/plugins/neutron.conf DEFAULT > service_plugins odl-router > crudini --set /etc/neutron/plugins/dhcp_agent.ini OVS ovsdb_interface > vsctl > > mysql -e "DROP DATABASE IF EXISTS neutron;" > mysql -e "CREATE DATABASE neutron CHARACTER SET utf8;" > neutron-db-manage --config-file /etc/neutron/neutron.conf > --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head > > systemctl start neutron-server > sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings > =external:br-ex > > ## OPENSTACK COMPUTE NODE ## > > systemctl stop neutron-openvswitch-agent > systemctl disable neutron-openvswitch-agent > systemctl stop neutron-l3-agent > systemctl disable neutron-l3-agent > > systemctl stop openvswitch > rm -rf /var/log/openvswitch/* > rm -rf /etc/openvswitch/conf.db > > systemctl start openvswitch > > ovs-vsctl set-manager tcp:10.210.210.30:6640 > ovs-vsctl set-manager tcp:10.210.210.30:6640 > ovs-vsctl del-port br-int eth1 > ovs-vsctl add-br br-ex > ovs-vsctl add-port br-ex eth1 > ovs-vsctl set-controller br-ex tcp:10.210.210.30:6653 > > ovs-vsctl set Open_vSwitch . other_config:local_ip=10.210.210.20 > ovs-vsctl get Open_vSwitch . other_config > > yum -y install python-networking-odl > > sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings > =external:br-ex > > ## REPORT ## > ############ > > ## OVS-VSCTL SHOW ## > ### CONTROLLER ### > [root at pod21-controller ~]# ovs-vsctl show > 525fbe7c-e60c-4135-b0a5-178d76c04529 > Manager "ptcp:6640:127.0.0.1" > is_connected: true > Bridge br-tun > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port "gre-0ad2d214" > Interface "gre-0ad2d214" > type: gre > options: {df_default="true", in_key=flow, > local_ip="10.210.210.10", out_key=flow, remote_ip="10.210.210.20"} > Port br-tun > Interface br-tun > type: internal > Port "vxlan-0ad2d214" > Interface "vxlan-0ad2d214" > type: vxlan > options: {df_default="true", in_key=flow, > local_ip="10.210.210.10", out_key=flow, remote_ip="10.210.210.20"} > > > Port patch-int > > Interface patch-int > > type: patch > > options: {peer=patch-tun} > > Bridge br-ex > > Controller "tcp:127.0.0.1:6633" > > is_connected: true > > fail_mode: secure > > Port phy-br-ex > > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port "eth1" > Interface "eth1" > Port br-ex > Interface br-ex > type: internal > Bridge br-int > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port br-int > Interface br-int > type: internal > Port int-br-ex > Interface int-br-ex > type: patch > options: {peer=phy-br-ex} > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > ovs_version: "2.6.1" > > ### COMPUTE ### > [root at pod21-compute ~]# ovs-vsctl show > f4466d5a-c1f5-4c5c-91c3-636944cd0f97 > Manager "ptcp:6640:127.0.0.1" > is_connected: true > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port br-ex > Interface br-ex > type: internal > Port "eth1" > Interface "eth1" > Bridge br-int > 
Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > > Port int-br-ex > > Interface int-br-ex > > type: patch > > options: {peer=phy-br-ex} > > Port br-int > > Interface br-int > > type: internal > > Port patch-tun > > Interface patch-tun > > type: patch > options: {peer=patch-int} > Bridge br-tun > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > Port "gre-0ad2d20a" > Interface "gre-0ad2d20a" > type: gre > options: {df_default="true", in_key=flow, > local_ip="10.210.210.20", out_key=flow, remote_ip="10.210.210.10"} > Port br-tun > Interface br-tun > type: internal > Port "vxlan-0ad2d20a" > Interface "vxlan-0ad2d20a" > type: vxlan > options: {df_default="true", in_key=flow, > local_ip="10.210.210.20", out_key=flow, remote_ip="10.210.210.10"} > ovs_version: "2.6.1" > > ### OVS-VSCTL AFTER CONFIG ### > > ### CONTROLLER ### > [root at pod21-controller ~]# ovs-vsctl show > 71b22ef2-fbea-4cd4-ba6a-883b3df9c5f1 > Manager "tcp:10.210.210.30:6640" > is_connected: true > Bridge br-int > Controller "tcp:10.210.210.30:6653" > is_connected: true > fail_mode: secure > Port br-int > Interface br-int > type: internal > Bridge br-ex > Controller "tcp:10.210.210.30:6653" > is_connected: true > Port br-ex > Interface br-ex > type: internal > Port "eth1" > Interface "eth1" > ovs_version: "2.6.1" > > ### COMPUTE ### > [root at pod21-compute ~]# ovs-vsctl show > 3bede8e2-eb29-4dbb-97f0-4cbadb2c0195 > Manager "tcp:10.210.210.30:6640" > is_connected: true > Bridge br-ex > Controller "tcp:10.210.210.30:6653" > is_connected: true > Port br-ex > Interface br-ex > type: internal > Port "eth1" > Interface "eth1" > Bridge br-int > Controller "tcp:10.210.210.30:6653" > is_connected: true > fail_mode: secure > Port br-int > Interface br-int > type: internal > ovs_version: "2.6.1" > > > ### AFTER ADDING INSTANCE ### > > ### CONTROLLER ### > [root at pod21-controller ~(keystone_admin)]# ovs-vsctl show > 71b22ef2-fbea-4cd4-ba6a-883b3df9c5f1 > Manager "ptcp:6640:127.0.0.1" > is_connected: true > Manager "tcp:10.210.210.30:6640" > is_connected: true > Bridge br-int > Controller "tcp:10.210.210.30:6653" > is_connected: true > fail_mode: secure > Port "tapab981c1e-4b" > Interface "tapab981c1e-4b" > type: internal > Port "qr-cba77b1d-73" > Interface "qr-cba77b1d-73" > type: internal > Port br-int > Interface br-int > type: internal > Port "tun7314cbc7b3e" > Interface "tun7314cbc7b3e" > type: vxlan > options: {key=flow, local_ip="10.210.210.10", > remote_ip="10.210.210.20"} > Bridge br-ex > Controller "tcp:10.210.210.30:6653" > is_connected: true > Port "qg-1ba8c01a-15" > Interface "qg-1ba8c01a-15" > type: internal > Port br-ex > Interface br-ex > type: internal > Port "eth1" > Interface "eth1" > ovs_version: "2.6.1" > > > ### COMPUTE ### > [root at pod21-compute ~]# ovs-vsctl show > 3bede8e2-eb29-4dbb-97f0-4cbadb2c0195 > Manager "tcp:10.210.210.30:6640" > is_connected: true > Bridge br-ex > Controller "tcp:10.210.210.30:6653" > is_connected: true > Port br-ex > Interface br-ex > type: internal > Port "eth1" > Interface "eth1" > Bridge br-int > Controller "tcp:10.210.210.30:6653" > is_connected: true > fail_mode: secure > Port "tun51bba5158fe" > Interface "tun51bba5158fe" > type: vxlan > options: {key=flow, local_ip="10.210.210.20", > remote_ip="10.210.210.10"} > Port "tap1e71587f-32" > Interface "tap1e71587f-32" > Port "tap5c0a404b-75" > Interface "tap5c0a404b-75" > Port br-int > Interface br-int > 
type: internal > ovs_version: "2.6.1"87 > > i try to mapping to eth1 or br-ex but its same. i cant ping all provider > network. (only the gateway 10.211.211.1 from controller or compute node). > thanks :) > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fv at spots.school Fri Jan 26 17:23:05 2018 From: fv at spots.school (fv at spots.school) Date: Fri, 26 Jan 2018 09:23:05 -0800 Subject: [Openstack] [RDO PackStack] Running PackStack multiple times Message-ID: <35c294a8a9475ac12e8471bb1e2505dd@spots.school> Hello! I am trying to deploy an OpenStack cluster using PackStack but I am encountering some errors. I am slowly working my way through them but I have a question: Is it alright to run the packstack script multiple times? And, if not is there a way to undo what packstack has done in order to try again? Thank you! FV From Remo at italy1.com Fri Jan 26 18:06:17 2018 From: Remo at italy1.com (Remo Mattei) Date: Fri, 26 Jan 2018 19:06:17 +0100 Subject: [Openstack] [RDO PackStack] Running PackStack multiple times In-Reply-To: <35c294a8a9475ac12e8471bb1e2505dd@spots.school> References: <35c294a8a9475ac12e8471bb1e2505dd@spots.school> Message-ID: <7125CEFE-36BC-437A-84FC-8A3ABF0066DC@italy1.com> What cluster? As far as I know there is no HA mode with PackStack. Remo > On Jan 26, 2018, at 6:23 PM, fv at spots.school wrote: > > Hello! > > I am trying to deploy an OpenStack cluster using PackStack but I am encountering some errors. I am slowly working my way through them but I have a question: > > Is it alright to run the packstack script multiple times? > And, if not is there a way to undo what packstack has done in order to try again? > > Thank you! > > FV > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From fv at spots.school Fri Jan 26 18:34:00 2018 From: fv at spots.school (fv at spots.school) Date: Fri, 26 Jan 2018 10:34:00 -0800 Subject: [Openstack] [RDO PackStack] Running PackStack multiple times In-Reply-To: <7125CEFE-36BC-437A-84FC-8A3ABF0066DC@italy1.com> References: <35c294a8a9475ac12e8471bb1e2505dd@spots.school> <7125CEFE-36BC-437A-84FC-8A3ABF0066DC@italy1.com> Message-ID: <8ba96b8c910b7bce1c014495117b76ca@spots.school> Yes, you are right! Wrong terminology on my part, sorry. FV On 2018-01-26 10:06, Remo Mattei wrote: > What cluster? As far as I know there is no HA mode with PackStack. > > Remo > >> On Jan 26, 2018, at 6:23 PM, fv at spots.school wrote: >> >> Hello! >> >> I am trying to deploy an OpenStack cluster using PackStack but I am >> encountering some errors. I am slowly working my way through them >> but I have a question: >> >> Is it alright to run the packstack script multiple times? >> And, if not is there a way to undo what packstack has done in order >> to try again? >> >> Thank you! 
>> >> FV >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ openstack From marcin.dulak at gmail.com Sat Jan 27 10:14:08 2018 From: marcin.dulak at gmail.com (Marcin Dulak) Date: Sat, 27 Jan 2018 11:14:08 +0100 Subject: [Openstack] [RDO PackStack] Running PackStack multiple times In-Reply-To: <7125CEFE-36BC-437A-84FC-8A3ABF0066DC@italy1.com> References: <35c294a8a9475ac12e8471bb1e2505dd@spots.school> <7125CEFE-36BC-437A-84FC-8A3ABF0066DC@italy1.com> Message-ID: Yes - you can run packstack repeatedly until the installation is successful; puppet is supposed to take care of the desired state. If you experience the contrary, open a bug at https://bugzilla.redhat.com/enter_bug.cgi?product=RDO For the purpose of learning OpenStack it's better to use a VM for packstack - give https://github.com/locationlabs/vagrant-packstack a try. Marcin On Fri, Jan 26, 2018 at 7:06 PM, Remo Mattei wrote: > What cluster? As far as I know there is no HA mode with PackStack. > > Remo > > > On Jan 26, 2018, at 6:23 PM, fv at spots.school wrote: > > Hello! > > I am trying to deploy an OpenStack cluster using PackStack but I am > encountering some errors. I am slowly working my way through them but I > have a question: > > Is it alright to run the packstack script multiple times? > And, if not is there a way to undo what packstack has done in order to try > again? > > Thank you! > > FV > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ openstack > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathias.strufe at dfki.de Sat Jan 27 19:00:00 2018 From: mathias.strufe at dfki.de (Mathias Strufe (DFKI)) Date: Sat, 27 Jan 2018 20:00:00 +0100 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough Message-ID: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> Dear all, I'm quite new to OpenStack and would like to install Open vSwitch inside one instance of our Mitaka OpenStack lab environment ... But it seems that ARP packets get lost between the network interface of the instance and the OVS bridge ... With tcpdump on the interface I see the ARP packets ... tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on ens6, link-type EN10MB (Ethernet), capture size 262144 bytes 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28 but on the OVS bridge nothing arrives ...
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on OVSbr2, link-type EN10MB (Ethernet), capture size 262144 bytes I disabled port_security and removed the security group, but nothing changed: +-----------------------+---------------------------------------------------------------------------------------+ | Field | Value | +-----------------------+---------------------------------------------------------------------------------------+ | admin_state_up | True | | allowed_address_pairs | | | binding:host_id | node11 | | binding:profile | {} | | binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} | | binding:vif_type | ovs | | binding:vnic_type | normal | | created_at | 2018-01-27T16:45:48Z | | description | | | device_id | 74916967-984c-4617-ae33-b847de73de13 | | device_owner | compute:nova | | extra_dhcp_opts | | | fixed_ips | {"subnet_id": "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": "192.168.120.10"} | | id | 74b754d6-0000-4c2e-bfd1-87f640154ac9 | | mac_address | fa:16:3e:af:90:0c | | name | | | network_id | 917254cb-9721-4207-99c5-8ead9f95d186 | | port_security_enabled | False | | project_id | c48457e73b664147a3d2d36d75dcd155 | | revision_number | 27 | | security_groups | | | status | ACTIVE | | tenant_id | c48457e73b664147a3d2d36d75dcd155 | | updated_at | 2018-01-27T18:54:24Z | +-----------------------+---------------------------------------------------------------------------------------+ Maybe the port_filter still causes the problem? But how can I disable it? Any other ideas? Thanks and BR, Mathias. From fv at spots.school Sat Jan 27 19:19:36 2018 From: fv at spots.school (fv at spots.school) Date: Sat, 27 Jan 2018 11:19:36 -0800 Subject: [Openstack] [RDO PackStack] Running PackStack multiple times In-Reply-To: References: <35c294a8a9475ac12e8471bb1e2505dd@spots.school> <7125CEFE-36BC-437A-84FC-8A3ABF0066DC@italy1.com> Message-ID: <55c1b4f2c1004fc7370d76bf540a7fc8@spots.school> Thank you, that is exactly what I needed! (I know it is a bit naughty, but I am planning to use PackStack for a production deployment. :) FV On 2018-01-27 02:14, Marcin Dulak wrote: > Yes - you can run packstack repeatedly until the installation is > successful; puppet is supposed to take care of the desired state. > > If you experience the contrary, open a bug at > https://bugzilla.redhat.com/enter_bug.cgi?product=RDO > > For the purpose of learning OpenStack it's better to use a VM for > packstack - give https://github.com/locationlabs/vagrant-packstack [2] > a try. > > Marcin > > On Fri, Jan 26, 2018 at 7:06 PM, Remo Mattei wrote: > >> What cluster? As far as I know there is no HA mode with PackStack. >> >> Remo >> >>> On Jan 26, 2018, at 6:23 PM, fv at spots.school wrote: >>> >>> Hello! >>> >>> I am trying to deploy an OpenStack cluster using PackStack but I >>> am encountering some errors. I am slowly working my way through >>> them but I have a question: >>> >>> Is it alright to run the packstack script multiple times? >>> And, if not is there a way to undo what packstack has done in >>> order to try again? >>> >>> Thank you!
>>> >>> FV >>> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >>> Post to : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1] > > > > Links: > ------ > [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > [2] https://github.com/locationlabs/vagrant-packstack From doka.ua at gmx.com Sat Jan 27 21:44:57 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Sat, 27 Jan 2018 23:44:57 +0200 Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough In-Reply-To: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> Message-ID: <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com> Hi Mathias, do you have all the corresponding bridges, and the patches between them, as described in openvswitch_agent.ini by the integration_bridge, tunnel_bridge, int_peer_patch_port, tun_peer_patch_port and bridge_mappings parameters? Also make sure that the "neutron-ovs-cleanup" service is enabled at system boot. You can check these bridges and patches using the "ovs-vsctl show" command. On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote: > Dear all, > > I'm quite new to OpenStack and would like to install Open vSwitch inside one > instance of our Mitaka OpenStack lab environment ... > But it seems that ARP packets get lost between the network interface > of the instance and the OVS bridge ... > > With tcpdump on the interface I see the ARP packets ... > > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode > listening on ens6, link-type EN10MB (Ethernet), capture size 262144 bytes > 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell > 192.168.120.6, length 28 > 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, > length 28 > 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell > 192.168.120.6, length 28 > 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, > length 28 > 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell > 192.168.120.6, length 28 > 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, > length 28 > > > > but on the OVS bridge nothing arrives ...
> > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode > listening on OVSbr2, link-type EN10MB (Ethernet), capture size 262144 > bytes > > > > I disabled port_security and removed the security group but nothing > changed > > +-----------------------+---------------------------------------------------------------------------------------+ > > | Field                 | Value                                        | > +-----------------------+---------------------------------------------------------------------------------------+ > > | admin_state_up        | True                                        | > | allowed_address_pairs |                                        | > | binding:host_id       | node11                                        | > | binding:profile       | {}                                        | > | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": > true}                                        | > | binding:vif_type      | ovs                                        | > | binding:vnic_type     | normal                                        | > | created_at            | 2018-01-27T16:45:48Z >                                        | > | description |                                        | > | device_id             | 74916967-984c-4617-ae33-b847de73de13 >                                        | > | device_owner          | compute:nova >                                        | > | extra_dhcp_opts |                                        | > | fixed_ips             | {"subnet_id": > "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": "192.168.120.10"} | > | id                    | 74b754d6-0000-4c2e-bfd1-87f640154ac9 >                                        | > | mac_address           | fa:16:3e:af:90:0c >                                        | > | name |                                        | > | network_id            | 917254cb-9721-4207-99c5-8ead9f95d186 >                                        | > | port_security_enabled | False                                        | > | project_id            | c48457e73b664147a3d2d36d75dcd155 >                                        | > | revision_number       | 27                                        | > | security_groups |                                        | > | status                | ACTIVE                                        | > | tenant_id             | c48457e73b664147a3d2d36d75dcd155 >                                        | > | updated_at            | 2018-01-27T18:54:24Z >                                        | > +-----------------------+---------------------------------------------------------------------------------------+ > > > > maybe the port_filter causes still the problem? But how to disable it? > > Any other idea? > > Thanks and BR Mathias. > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to     : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.bishop at beyondhosting.net Sun Jan 28 01:00:48 2018 From: tyler.bishop at beyondhosting.net (Tyler Bishop) Date: Sat, 27 Jan 2018 20:00:48 -0500 (EST) Subject: [Openstack] [ironic] how to prevent ironic user to controle ipmi through OS? 
In-Reply-To: References: Message-ID: <1757695191.245913.1517101248184.JavaMail.zimbra@beyondhosting.net> On Dell DRAC you can disable IPMI/RAC control at the device for OS configuration. With Supermicro IPMI you just need to create a random user and random password that is not "admin". _____________________________________________ Tyler Bishop Founder EST 2007 O: 513-299-7108 x10 M: 513-646-5809 [ http://beyondhosting.net/ | http://BeyondHosting.net ] This email is intended only for the recipient(s) above and/or otherwise authorized personnel. The information contained herein and attached is confidential and the property of Beyond Hosting. Any unauthorized copying, forwarding, printing, and/or disclosing any information related to this email is prohibited. If you received this message in error, please contact the sender and destroy all copies of this email and any attachment(s). From: "Guo James" To: xiefp88 at sina.com, "openstack" Sent: Wednesday, January 10, 2018 10:16:34 PM Subject: Re: [Openstack] [ironic] how to prevent ironic user to controle ipmi through OS? An ironic user can change the IPMI address so that OpenStack ironic loses control of the bare metal node. I think that is unacceptable. It seems that we should build the ironic image without root privileges. From: xiefp88 at sina.com [mailto:xiefp88 at sina.com] Sent: Thursday, January 11, 2018 9:12 AM To: Guo James; openstack Subject: Re: [Openstack] [ironic] how to prevent ironic user to controle ipmi through OS? If you cannot get the username and password of the OS, you cannot modify the IPMI configuration even though you got the ironic user info. ----- Original Message ----- From: Guo James < [ mailto:guoyongxhzhf at outlook.com | guoyongxhzhf at outlook.com ] > To: " [ mailto:openstack at lists.openstack.org | openstack at lists.openstack.org ] " < [ mailto:openstack at lists.openstack.org | openstack at lists.openstack.org ] > Subject: [Openstack] [ironic] how to prevent ironic user to controle ipmi through OS? Date: 2018-01-10 17:21 I notice that after an ironic user gets a bare metal node successfully, he can access IPMI through the IPMI device, although he can't access IPMI through the LAN. How can this situation be prevented? If he modifies the IPMI configuration, that will be a mess. _______________________________________________ Mailing list: [ http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack | http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack ] Post to : [ mailto:openstack at lists.openstack.org | openstack at lists.openstack.org ] Unsubscribe : [ http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack | http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack ] _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From riccardo.murri at gmail.com Sun Jan 28 09:53:36 2018 From: riccardo.murri at gmail.com (Riccardo Murri) Date: Sun, 28 Jan 2018 10:53:36 +0100 Subject: [Openstack] Fwd: how to allocate floating IP with Python API? In-Reply-To: <20180128095137.tb3s53iws7bnauor@monia> References: <20180128095137.tb3s53iws7bnauor@monia> Message-ID: Hello, I am trying to figure out how floating IPs should be allocated with the Python APIs.
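[Editor's note: the novaclient floating-IP calls shown below were deprecated in favour of driving the whole workflow through the Neutron client. A minimal sketch of that flow, assuming an authenticated neutronclient.v2_0.client.Client named `neutron` and known `network_id` and `port_id` values (these names are illustrative, not from the original message):

    # allocate a floating IP on the external network
    fip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': network_id}})['floatingip']
    # associate it with the server's port: the first argument must be the
    # floating IP's UUID, not the whole floating-IP dict
    neutron.update_floatingip(fip['id'], {'floatingip': {'port_id': port_id}})

This also appears to be the bug in the failing step 4 of the code below: update_floatingip() is given the floating-IP dict itself where it expects the UUID.]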
Code like the following used to work::

    # `self.nova_client` is a `novaclient.client.Client` instance
    free_ips = [ip for ip in self.nova_client.floating_ips.list()
                if not ip.fixed_ip]
    if not free_ips:
        free_ips.append(self.nova_client.floating_ips.create())
    floating_ip = free_ips.pop()
    # `instance` is a Nova client "server manager" object
    instance.add_floating_ip(floating_ip)

However, Nova client's `add_floating_ip()` method is marked as deprecated but no alternative is given in the documentation. I presumed that I could use the Neutron API client to do the following, but it doesn't work either (the call to update the port with the floating IP raises an error)::

    #
    # 1. get or create a floating IP address
    #
    # `self.neutron_client` is a `neutronclient.v2_0.client.Client` instance
    free_ips = [
        ip for ip in
        # `fixed_ip_address=''` seems to act as wildcard
        self.neutron_client.list_floatingips(fixed_ip_address='').get('floatingips')
        if (ip['fixed_ip_address'] is None
            and ip['floating_network_id'] == network_id
            and ip['port_id'] is None)
    ]
    if not free_ips:
        floating_ip = self.neutron_client.create_floatingip({
            'floatingip': {
                'floating_network_id': network_id,
            }}).get('floatingip')
    floating_ip = free_ips.pop()

    #
    # 2. wait until at least one interface of the server is up
    #
    interfaces = []
    while not interfaces:
        interfaces = instance.interface_list()
        sleep(2)  ## FIXME: hard-coded value

    #
    # 3. get port ID
    #
    port_id = interface.port_id

    #
    # 4. assign floating IP to port
    #
    floating_ip = self.neutron_client.update_floatingip(
        floating_ip, {
            'floatingip_id': floating_ip['id'],
            'floatingip': {
                'port_id': port_id,
            },
        }).get('floatingip')

What's the correct way of allocating floating IPs with the current OpenStack API Python clients? Thanks, Riccardo -- Riccardo Murri / Email: riccardo.murri at gmail.com / Tel.: +41 77 458 98 32

From hannes.fuchs at student.htw-berlin.de Sun Jan 28 11:05:41 2018 From: hannes.fuchs at student.htw-berlin.de (Hannes Fuchs) Date: Sun, 28 Jan 2018 12:05:41 +0100 Subject: [Openstack] [swift] Erasure Coding - "Unknown Error" on other ec_types than liberasurecode_rs_vand Message-ID: <53f59ea5-0363-c166-0c6e-569f439972e0@student.htw-berlin.de> Hello all, currently I am trying out the provided ec_types for EC in swift, but the only one that works is "liberasurecode_rs_vand". With every other one I get the following error on startup (swift/swift-proxy): --- ERROR: Invalid Storage Policy Configuration in /etc/swift/swift.conf (Error creating EC policy (pyeclib_c_init ERROR: Unknown error. Please inspect syslog for liberasurecode error report.), for index 9) --- But the syslog contains exactly the same error message, with no hint of what is going wrong. It seems to be the default error [1] (ECDriverError). Does anyone have an idea what's wrong, or did I miss something? (A quick backend check is sketched in the editor's note below.)
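[Editor's note: "Unknown error" from pyeclib_c_init usually indicates that liberasurecode could not load the backend plugin for the requested ec_type. A quick probe, sketched against the pyeclib 1.3.1 listed below (VALID_EC_TYPES reflects the backends usable on the current host):

    from pyeclib.ec_iface import ECDriver, VALID_EC_TYPES
    print(VALID_EC_TYPES)
    # instantiating a driver directly raises ECDriverError with the same
    # message swift-proxy logs, if the plugin is missing
    ECDriver(k=4, m=2, ec_type='jerasure_rs_vand')

If jerasure_rs_vand is absent from VALID_EC_TYPES even though libjerasure2 is installed, the gap is likely in how liberasurecode was built or packaged rather than in swift.conf.]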
Tried implementations: - liberasurecode_rs_vand - works - jerasure_rs_vand - fails with "Unknown error" - jerasure_rs_cauchy - fails with "Unknown error" - flat_xor_hd_3 - fails with "Unknown error" - flat_xor_hd_4 - fails with "Unknown error" Installed Versions: - python-pyeclib 1.3.1-1ubuntu3~cloud0 - liberasurecode1 1.5.0-1~cloud0 - libjerasure2 2.0.0-2ubuntu1 - swift 2.15.1-0ubuntu3~cloud0 [1] https://github.com/openstack/pyeclib/blob/4e0f35a34d4aa10fd98ae8d3bbc9cecaf43601d4/src/c/pyeclib_c/pyeclib_c.c#L182 Cheers, Hannes From hannes.fuchs at gmx.org Sun Jan 28 11:19:59 2018 From: hannes.fuchs at gmx.org (Hannes Fuchs) Date: Sun, 28 Jan 2018 12:19:59 +0100 Subject: [Openstack] Fwd: Re: [openstack] [swift] Erasure Coding - No reconstruction to other nodes/disks on disk failure In-Reply-To: References: Message-ID: <819d4690-516c-47f3-97ae-439bb7634686@gmx.org> It seems that my university mail server bounces replies from the ML, so I have to change my mail settings. Maybe the information is helpful for anyone who runs into the same question. Cheers, Hannes -------- Forwarded Message -------- Subject: Re: [Openstack] [openstack] [swift] Erasure Coding - No reconstruction to other nodes/disks on disk failure Date: Tue, 23 Jan 2018 10:06:47 +0100 From: Hannes Fuchs To: Clay Gerrard Hello Clay, Thank you for the fast reply and the explanation. This clears things up. The link to the bug is also very helpful. (I did not find a hint in the documentation.) So I'll change my test workflow. Is there public information from RedHat about their discussion of ring automation? Thanks, Hannes On 23.01.2018 00:03, Clay Gerrard wrote: > It's debatable, but currently operating as intended [1]. The fail in place > workflow for EC expects the operator to do a ring change [2]. While > replicated fail in place workflows do allow for the operator to unmount and > postpone a rebalance, it's not a common workflow. In practice the Swift > deployers/operators I've talked to tend to follow the rebalance after disk > failure workflow for both replicated and EC policies. While restoring data > to full durability in a reactive manner to drive failures is important - > there's more than one way to get Swift to do that - and it seems > operators/automation prefers to handle that with an explicit ring change. > That said; it's just a prioritization issue - I wouldn't imagine anyone > would be opposed to rebuilding fragments to handoffs in response to a 507. > But there are some efficiency concerns... reassigning primaries is a lot > simpler in many ways as long as you're able to do that in a reactive > fashion. Redhat was recently discussing interest in doing more opensource > upstream work on ring automation... > > -Clay > > 1. https://bugs.launchpad.net/swift/+bug/1510342 - I don't think anyone is > directly opposed to seeing this change, but as ring automation best > practices have become more sophisticated it's less of a priority > 2. essentially everyone has some sort of alert/trigger/automation around > disk failure (or degraded disk performance) and the operator/system > immediately/automatically fails the device by removing it from the ring and > pushes out the changed partition assignments - allowing the system to rebuild > the partitions to the new primaries instead of a handoff. > > On Mon, Jan 22, 2018 at 2:01 PM, Hannes Fuchs < > hannes.fuchs at student.htw-berlin.de> wrote: > >> Hello all, >> >> for my master's thesis I'm analyzing different storage policies in >> openstack swift.
I'm mainly interested in the reconstruction speed of the >> different EC implementations. >> >> I've noticed in my tests that there is no reconstruction of >> fragments/parity to other nodes/disks if a disk fails. >> >> My test setup consists of 8 nodes with 4 disks each. OS is Ubuntu 16.04 >> LTS and the swift version is 2.15.1/pike and here are my 2 example >> policies: >> >> --- >> [storage-policy:2] >> name = liberasurecode-rs-vand-4-2 >> policy_type = erasure_coding >> ec_type = liberasurecode_rs_vand >> ec_num_data_fragments = 4 >> ec_num_parity_fragments = 2 >> ec_object_segment_size = 1048576 >> >> [storage-policy:3] >> name = liberasurecode-rs-vand-3-1 >> policy_type = erasure_coding >> ec_type = liberasurecode_rs_vand >> ec_num_data_fragments = 3 >> ec_num_parity_fragments = 1 >> ec_object_segment_size = 1048576 >> --- >> >> ATM I've tested only the ec_type liberasurecode_rs_vand. With other >> implementations the startup of swift fails, but I think this is another >> topic. >> >> To simulate a disk failure I'm using fault injection [1]. >> >> Testrun example: >> 1. fill with objects (32,768 1M objects, sum: 32GB) >> 2. make a disk "fail" >> 3. disk failure is detected, /but no reconstruction/ >> 4. replace "failed" disk, mount "new" empty disk >> 5. missing fragments/parity are reconstructed on the new empty disk >> >> Expected: >> 1. fill with objects (32,768 1M objects, sum: 32GB) >> 2. make a disk "fail" >> 3. disk failure is detected, reconstruction to remaining disks/nodes >> 4. replace "failed" disk, mount "new" empty disk >> 5. rearrange data in ring to pre-fail state >> >> >> Shouldn't the missing fragments/parity be reconstructed on the remaining >> disks/nodes? (See point 3 in the testrun example.) >> >> >> [1] >> https://www.kernel.org/doc/Documentation/fault-injection/ >> fault-injection.txt >> >> >> Cheers, >> Hannes Fuchs >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ >> openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ >> openstack >> > From navdeep.uniyal at bristol.ac.uk Sun Jan 28 20:58:22 2018 From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal) Date: Sun, 28 Jan 2018 20:58:22 +0000 Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance Message-ID: Dear All, I am getting an error while creating SRIOV-enabled instances following the OpenStack Pike guide at https://docs.openstack.org/neutron/pike/admin/config-sriov.html I want to start an instance on OpenStack Pike with SRIOV-enabled NICs. However, I am getting a libvirt error regarding the node name. The error looks weird, as the node name does not match the interface name on the host machine or in the configuration files.
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager [req-caa92f1d-5ac1-402d-a8bc-b08ab350a21f - - - - -] Error updating resources for node jupiter.: libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e' 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager Traceback (most recent call last): 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6696, in update_available_resource_for_node 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rt.update_available_resource(context, nodename) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 641, in update_available_resource 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5857, in get_available_resource 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5621, in _get_pci_passthrough_devices 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager for name in dev_names: 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5582, in _get_pcidev_info 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager device['label'] = 'label_%(vendor_id)s_%(product_id)s' % device 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5553, in _get_device_capabilities 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager pcinet_info = self._get_pcinet_info(address) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5496, in _get_pcinet_info 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager virtdev = self._host.device_lookup_by_name(devname) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 845, in device_lookup_by_name 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager return self.get_connection().nodeDeviceLookupByName(name) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager result = proxy_call(self._autowrap, f, *args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rv = execute(f, *args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager six.reraise(c, e, tb) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rv = meth(*args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/libvirt.py", line 4177, in nodeDeviceLookupByName 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager if 
ret is None:raise libvirtError('virNodeDeviceLookupByName() failed', conn=self) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e' 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager The correct interface name is enp129s0f0. However, I am getting the node name 'net_enp129s2_b2_87_6e_13_a1_5e', which I believe is the reason behind the failure of VM creation on OpenStack. Could someone please help me understand how the node name is passed to libvirt from OpenStack, or how I can resolve this issue? Regards, Navdeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Mon Jan 29 06:50:27 2018 From: moshele at mellanox.com (Moshe Levi) Date: Mon, 29 Jan 2018 06:50:27 +0000 Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance In-Reply-To: References: Message-ID: Hi Navdeep, The errors you are pointing out shouldn't cause an error while creating an instance with SR-IOV/PCI passthrough. They are part of the SR-IOV NIC capabilities feature, which is not completed yet. The libvirt "virsh nodedev-list" command will show you the list of NICs, and this method [1] looks the name up in the hypervisor. For some reason you have a mismatch here; you need to compare the two (the editor's note after the quoted message sketches how the name is built). Anyway, if there is a mismatch it just won't report the capabilities of the NIC. As I mentioned above, I think this is not the cause of the error while creating the instance. [1] https://github.com/openstack/nova/blob/91f7a999988b3f857d738d39984117e6c514cbec/nova/pci/utils.py#L196-L217 From: Navdeep Uniyal [mailto:navdeep.uniyal at bristol.ac.uk] Sent: Sunday, January 28, 2018 10:58 PM To: openstack at lists.openstack.org Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance Dear All, I am getting an error while creating SRIOV enabled instances following the guide from Openstack pike https://docs.openstack.org/neutron/pike/admin/config-sriov.html I want to start an instance on openstack pike with SRIOV enabled NICs. However, I am getting a Libvirt error regarding the node name. The error looks weird as the node name is not matching the interface name on the host machine or in the configuration files.
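[Editor's note: the nova method Moshe links as [1] appears to build the libvirt nodedev name from the VF's current netdev name and MAC address, roughly 'net_' + ifname + '_' + mac.replace(':', '_'); the failing lookup 'net_enp129s2_b2_87_6e_13_a1_5e' would therefore encode interface enp129s2 with MAC b2:87:6e:13:a1:5e. VF MAC addresses are commonly re-randomized when the VF driver rebinds, so a name computed from a stale MAC will no longer match any device libvirt knows. Two quick checks, assuming the interface names from this thread:

    virsh nodedev-list --cap net   # the nodedev names libvirt currently has
    ip -br link show               # current netdev names and MACs to compare against

]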
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager [req-caa92f1d-5ac1-402d-a8bc-b08ab350a21f - - - - -] Error updating resources for node jupiter.: libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e' 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager Traceback (most recent call last): 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6696, in update_available_resource_for_node 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rt.update_available_resource(context, nodename) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 641, in update_available_resource 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5857, in get_available_resource 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5621, in _get_pci_passthrough_devices 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager for name in dev_names: 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5582, in _get_pcidev_info 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager device['label'] = 'label_%(vendor_id)s_%(product_id)s' % device 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5553, in _get_device_capabilities 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager pcinet_info = self._get_pcinet_info(address) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5496, in _get_pcinet_info 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager virtdev = self._host.device_lookup_by_name(devname) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 845, in device_lookup_by_name 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager return self.get_connection().nodeDeviceLookupByName(name) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager result = proxy_call(self._autowrap, f, *args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rv = execute(f, *args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager six.reraise(c, e, tb) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rv = meth(*args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/libvirt.py", line 4177, in nodeDeviceLookupByName 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager if 
ret is None:raise libvirtError('virNodeDeviceLookupByName() failed', conn=self) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e' 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager The correct interface name is enp129s0f0. However, I am getting the node name as net_enp129s2_b2_87_6e_13_a1_5e' which i believe is the reason behind the failure of vm creation on openstack. Please if someone could help me understand how the node name is passed on to the Libvirt from openstack or how can I resolve this issue. Regards, Navdeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From navdeep.uniyal at bristol.ac.uk Mon Jan 29 08:32:12 2018 From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal) Date: Mon, 29 Jan 2018 08:32:12 +0000 Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance In-Reply-To: References: Message-ID: Hi Moshe, Thank you very much for your response. I could see the list of nodes using virsh: net_enp129s0f0_3c_fd_fe_a9_37_c0 net_enp129s0f1_3c_fd_fe_a9_37_c1 net_enp129s2_36_a5_18_f7_41_c6 net_enp129s2f1_96_b0_b3_37_cf_c5 net_enp129s2f2_0e_0c_63_76_07_0f net_enp129s2f3_36_ca_41_10_7f_62 net_enp129s2f4_f2_79_ee_75_38_a9 net_enp129s2f5_2a_a1_c6_02_55_11 net_enp129s2f6_16_e3_c4_01_8c_6d net_enp129s2f7_66_56_63_d8_a8_ad This list does not include the node 'net_enp129s2_b2_87_6e_13_a1_5e'. Could you please help me find out what could be the reason behind this mismatch? Kind Regards, Navdeep Uniyal From: Moshe Levi [mailto:moshele at mellanox.com] Sent: 29 January 2018 06:50 To: Navdeep Uniyal ; openstack at lists.openstack.org Subject: RE: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance Hi Navdeep, The errors you are pointing out shouldn't cause error while creating instance with SR-IOV/PCI passthrough. They are part of the SR-IOV nic capabilities feature which is not completed yet. Libvirt "virsh nodedev-list" command will show you the list of nic and this method [1] will look it in the hypervisor. For some reason you have a mismatch here. You need to compare them. Anyway if there is a mismatch it just won't report the capabilities of the nic. As I mentioned above I think this is not the issue for the error while creating instance [1] https://github.com/openstack/nova/blob/91f7a999988b3f857d738d39984117e6c514cbec/nova/pci/utils.py#L196-L217 From: Navdeep Uniyal [mailto:navdeep.uniyal at bristol.ac.uk] Sent: Sunday, January 28, 2018 10:58 PM To: openstack at lists.openstack.org Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance Dear All, I am getting an error while creating SRIOV enabled instances following the guide from Openstack pike https://docs.openstack.org/neutron/pike/admin/config-sriov.html I want to start an instance on openstack pike with SRIOV enabled NICs. However, I am getting a Libvirt error regarding the node name. The error looks weird as the node name is not matching the interface name on the host machine or in the configuration files.
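[Editor's note: the nodedev list Navdeep posts above seems to confirm a stale name rather than a missing device: it contains net_enp129s2_36_a5_18_f7_41_c6, i.e. the same VF netdev enp129s2 that nova queries as net_enp129s2_b2_87_6e_13_a1_5e, only with a different MAC suffix. That is consistent with the VF's MAC having changed after the name was first recorded, and is presumably why the next suggestion in the thread is to restart libvirtd and re-check. A quick comparison, assuming these names:

    ip -br link show enp129s2                              # the VF's current MAC
    virsh nodedev-dumpxml net_enp129s2_36_a5_18_f7_41_c6   # what libvirt has recorded

]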
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager [req-caa92f1d-5ac1-402d-a8bc-b08ab350a21f - - - - -] Error updating resources for node jupiter.: libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e' 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager Traceback (most recent call last): 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6696, in update_available_resource_for_node 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rt.update_available_resource(context, nodename) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 641, in update_available_resource 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5857, in get_available_resource 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5621, in _get_pci_passthrough_devices 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager for name in dev_names: 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5582, in _get_pcidev_info 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager device['label'] = 'label_%(vendor_id)s_%(product_id)s' % device 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5553, in _get_device_capabilities 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager pcinet_info = self._get_pcinet_info(address) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5496, in _get_pcinet_info 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager virtdev = self._host.device_lookup_by_name(devname) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 845, in device_lookup_by_name 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager return self.get_connection().nodeDeviceLookupByName(name) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager result = proxy_call(self._autowrap, f, *args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rv = execute(f, *args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager six.reraise(c, e, tb) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager rv = meth(*args, **kwargs) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/libvirt.py", line 4177, in nodeDeviceLookupByName 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager if 
ret is None:raise libvirtError('virNodeDeviceLookupByName() failed', conn=self) 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e' 2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager The correct interface name is enp129s0f0. However, I am getting the node name as net_enp129s2_b2_87_6e_13_a1_5e' which i believe is the reason behind the failure of vm creation on openstack. Please if someone could help me understand how the node name is passed on to the Libvirt from openstack or how can I resolve this issue. Regards, Navdeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Mon Jan 29 09:29:48 2018 From: moshele at mellanox.com (Moshe Levi) Date: Mon, 29 Jan 2018 09:29:48 +0000 Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance In-Reply-To: References: Message-ID: Can you try to restart libvirtd and query again "virsh nodedev-list" From: Navdeep Uniyal [mailto:navdeep.uniyal at bristol.ac.uk] Sent: Monday, January 29, 2018 10:32 AM To: Moshe Levi ; openstack at lists.openstack.org Subject: RE: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance Hi Moshe, Thank you very much for your response. I could see the list of nodes using virsh: net_enp129s0f0_3c_fd_fe_a9_37_c0 net_enp129s0f1_3c_fd_fe_a9_37_c1 net_enp129s2_36_a5_18_f7_41_c6 net_enp129s2f1_96_b0_b3_37_cf_c5 net_enp129s2f2_0e_0c_63_76_07_0f net_enp129s2f3_36_ca_41_10_7f_62 net_enp129s2f4_f2_79_ee_75_38_a9 net_enp129s2f5_2a_a1_c6_02_55_11 net_enp129s2f6_16_e3_c4_01_8c_6d net_enp129s2f7_66_56_63_d8_a8_ad these does not have the node 'net_enp129s2_b2_87_6e_13_a1_5e' listed. Could you please help me finding out what could be the reason behind this mismatch. Kind Regards, Navdeep Uniyal From: Moshe Levi [mailto:moshele at mellanox.com] Sent: 29 January 2018 06:50 To: Navdeep Uniyal >; openstack at lists.openstack.org Subject: RE: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance Hi Navdeep, The errors you are pointing out shouldn't cause error while creating instance with SR-IOV/PCI passthrough. They are part of the SR-IOV nic capabilities feature which is not completed yet. Libvirt "virsh nodedev-list" command will show you the list of nic and this method [1] will look it in the hypervisor. For some reason you have a mismatch here. You need to compare them. Anyway if there is a mismatch it just won't report the capabilities of the nic. As I mentioned above I think this is not the issue for the error while creating instance [1] https://github.com/openstack/nova/blob/91f7a999988b3f857d738d39984117e6c514cbec/nova/pci/utils.py#L196-L217 From: Navdeep Uniyal [mailto:navdeep.uniyal at bristol.ac.uk] Sent: Sunday, January 28, 2018 10:58 PM To: openstack at lists.openstack.org Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance Dear All, I am getting an error while creating SRIOV enabled instances following the guide from Openstack pike https://docs.openstack.org/neutron/pike/admin/config-sriov.html I want to start an instance on openstack pike with SRIOV enabled NICs. However, I am getting a Libvirt error regarding the node name. The error looks weird as the node name is not matching the interface name on the host machine or in the configuration files. 
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager [req-caa92f1d-5ac1-402d-a8bc-b08ab350a21f - - - - -] Error updating resources for node jupiter.: libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e'
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager Traceback (most recent call last):
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6696, in update_available_resource_for_node
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 641, in update_available_resource
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5857, in get_available_resource
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5621, in _get_pci_passthrough_devices
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     for name in dev_names:
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5582, in _get_pcidev_info
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     device['label'] = 'label_%(vendor_id)s_%(product_id)s' % device
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5553, in _get_device_capabilities
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     pcinet_info = self._get_pcinet_info(address)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5496, in _get_pcinet_info
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     virtdev = self._host.device_lookup_by_name(devname)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/host.py", line 845, in device_lookup_by_name
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     return self.get_connection().nodeDeviceLookupByName(name)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     result = proxy_call(self._autowrap, f, *args, **kwargs)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     rv = execute(f, *args, **kwargs)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     six.reraise(c, e, tb)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     rv = meth(*args, **kwargs)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 4177, in nodeDeviceLookupByName
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager     if ret is None:raise libvirtError('virNodeDeviceLookupByName() failed', conn=self)
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager libvirtError: Node device not found: no node device with matching name 'net_enp129s2_b2_87_6e_13_a1_5e'
2018-01-28 20:40:11.416 2953 ERROR nova.compute.manager

The correct interface name is enp129s0f0. However, I am getting the node name as 'net_enp129s2_b2_87_6e_13_a1_5e', which I believe is the reason behind the failure of VM creation on OpenStack. Could someone please help me understand how the node name is passed on to libvirt from OpenStack, or how I can resolve this issue?

Regards,
Navdeep
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From navdeep.uniyal at bristol.ac.uk  Mon Jan 29 11:57:34 2018
From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal)
Date: Mon, 29 Jan 2018 11:57:34 +0000
Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance
In-Reply-To: 
References: 
Message-ID: 

Hi Moshe,

Thank you for your response. I am creating VFs on an Intel X710 NIC. As per the documentation, I need to disable the 'i40evf' driver on the host machine to get the VMs running with the VFs; however, on doing so I am not getting the VF list when running 'virsh nodedev-list'. Could you please advise whether this driver is required to be running or not.

Best Regards,
Navdeep

From: Moshe Levi [mailto:moshele at mellanox.com]
Sent: 29 January 2018 09:30
To: Navdeep Uniyal; openstack at lists.openstack.org
Subject: RE: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance

Can you try to restart libvirtd and query "virsh nodedev-list" again?

From: Navdeep Uniyal [mailto:navdeep.uniyal at bristol.ac.uk]
Sent: Monday, January 29, 2018 10:32 AM
To: Moshe Levi; openstack at lists.openstack.org
Subject: RE: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance

Hi Moshe,

Thank you very much for your response. I could see the list of nodes using virsh:

net_enp129s0f0_3c_fd_fe_a9_37_c0
net_enp129s0f1_3c_fd_fe_a9_37_c1
net_enp129s2_36_a5_18_f7_41_c6
net_enp129s2f1_96_b0_b3_37_cf_c5
net_enp129s2f2_0e_0c_63_76_07_0f
net_enp129s2f3_36_ca_41_10_7f_62
net_enp129s2f4_f2_79_ee_75_38_a9
net_enp129s2f5_2a_a1_c6_02_55_11
net_enp129s2f6_16_e3_c4_01_8c_6d
net_enp129s2f7_66_56_63_d8_a8_ad

This list does not have the node 'net_enp129s2_b2_87_6e_13_a1_5e'. Could you please help me find out the reason behind this mismatch.

Kind Regards,
Navdeep Uniyal

From: Moshe Levi [mailto:moshele at mellanox.com]
Sent: 29 January 2018 06:50
To: Navdeep Uniyal; openstack at lists.openstack.org
Subject: RE: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance

Hi Navdeep,

The errors you are pointing out shouldn't cause an error while creating an instance with SR-IOV/PCI passthrough. They are part of the SR-IOV NIC capabilities feature, which is not completed yet. The libvirt "virsh nodedev-list" command will show you the list of NICs, and this method [1] will look it up in the hypervisor. For some reason you have a mismatch here; you need to compare the two. Anyway, if there is a mismatch, it just won't report the capabilities of the NIC.
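For example, you can compare the two views like this (a rough sketch only; the PCI address 0000:81:02.0 below is just an assumed address for one of your enp129s2 VFs, so substitute the real one from lspci):

    # network node devices as libvirt sees them
    virsh nodedev-list --cap net

    # the interface name the kernel currently exposes for that VF; nova builds
    # the net_<ifname>_<mac> device name from this name plus the VF MAC address
    ls /sys/bus/pci/devices/0000:81:02.0/net
    ip -br link show

If the VF interface was renamed or re-created after libvirtd started (for example by re-creating the VFs), the two lists can disagree until libvirtd is restarted.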
As I mentioned above, I think this is not the cause of the error while creating the instance.

[1] https://github.com/openstack/nova/blob/91f7a999988b3f857d738d39984117e6c514cbec/nova/pci/utils.py#L196-L217

From: Navdeep Uniyal [mailto:navdeep.uniyal at bristol.ac.uk]
Sent: Sunday, January 28, 2018 10:58 PM
To: openstack at lists.openstack.org
Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance

Dear All,

I am getting an error while creating SRIOV-enabled instances following the guide for OpenStack Pike: https://docs.openstack.org/neutron/pike/admin/config-sriov.html

I want to start an instance on OpenStack Pike with SRIOV-enabled NICs. However, I am getting a libvirt error regarding the node name. The error looks weird, as the node name does not match the interface name on the host machine or in the configuration files.

[...]

The correct interface name is enp129s0f0. However, I am getting the node name as 'net_enp129s2_b2_87_6e_13_a1_5e', which I believe is the reason behind the failure of VM creation on OpenStack. Could someone please help me understand how the node name is passed on to libvirt from OpenStack, or how I can resolve this issue?

Regards,
Navdeep
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vondra at homeatcloud.cz  Mon Jan 29 11:59:14 2018
From: vondra at homeatcloud.cz (Tomáš Vondra)
Date: Mon, 29 Jan 2018 12:59:14 +0100
Subject: [Openstack] [ironic] how to prevent ironic user to control ipmi through OS?
In-Reply-To: <1757695191.245913.1517101248184.JavaMail.zimbra@beyondhosting.net>
References: <1757695191.245913.1517101248184.JavaMail.zimbra@beyondhosting.net>
Message-ID: <009401d398f8$9661cea0$c3256be0$@homeatcloud.cz>

How about HPE iLO, does anyone know a way to disable access from the OS?

From: Tyler Bishop [mailto:tyler.bishop at beyondhosting.net]
Sent: Sunday, January 28, 2018 2:01 AM
To: Guo James
Cc: openstack
Subject: Re: [Openstack] [ironic] how to prevent ironic user to control ipmi through OS?

On Dell DRAC you can disable IPMI/RAC control at the device for OS configuration. With Supermicro IPMI you just need to create a random user and random password that is not "admin".

_____________________________________________
Tyler Bishop
Founder EST 2007
O: 513-299-7108 x10
M: 513-646-5809
http://BeyondHosting.net

This email is intended only for the recipient(s) above and/or otherwise authorized personnel. The information contained herein and attached is confidential and the property of Beyond Hosting. Any unauthorized copying, forwarding, printing, and/or disclosing any information related to this email is prohibited. If you received this message in error, please contact the sender and destroy all copies of this email and any attachment(s).

_____

From: "Guo James"
To: xiefp88 at sina.com, "openstack"
Sent: Wednesday, January 10, 2018 10:16:34 PM
Subject: Re: [Openstack] [ironic] how to prevent ironic user to control ipmi through OS?

An ironic user can change the IPMI address so that OpenStack ironic loses control of the bare metal node. I think that is unacceptable.
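For example (a sketch of the in-band access I am worried about; any root user in the deployed OS can do this with ipmitool, with no LAN access and no BMC credentials needed):

    # load the in-band IPMI drivers and talk to the BMC over /dev/ipmi0
    modprobe ipmi_si
    modprobe ipmi_devintf

    ipmitool lan print 1                     # read the current BMC network settings
    ipmitool lan set 1 ipaddr 192.0.2.99     # move the BMC address away from ironic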
It seems that we should build the ironic image without root privilege.

From: xiefp88 at sina.com [mailto:xiefp88 at sina.com]
Sent: Thursday, January 11, 2018 9:12 AM
To: Guo James; openstack
Subject: Re: [Openstack] [ironic] how to prevent ironic user to control ipmi through OS?

If you cannot get the username and password of the OS, you cannot modify the IPMI configuration even though you have the ironic user info.

----- Original Message -----
From: Guo James
To: "openstack at lists.openstack.org"
Subject: [Openstack] [ironic] how to prevent ironic user to control ipmi through OS?
Date: Jan 10, 2018, 17:21

I notice that after an ironic user gets a bare metal node successfully, he can access IPMI through the IPMI device, although he can't access IPMI through the LAN. How can this situation be prevented? If he modifies the IPMI configuration, that will be a mess.

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From moshele at mellanox.com  Mon Jan 29 11:59:46 2018
From: moshele at mellanox.com (Moshe Levi)
Date: Mon, 29 Jan 2018 11:59:46 +0000
Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance
In-Reply-To: 
References: 
Message-ID: 

Hi Navdeep,

I am not familiar with Intel NICs (I work at Mellanox). Maybe the Intel folks on the mailing list can help you.

[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sgrieco at sg2recruiting.com  Mon Jan 29 12:36:51 2018
From: sgrieco at sg2recruiting.com (Suzie Grieco)
Date: Mon, 29 Jan 2018 07:36:51 -0500
Subject: [Openstack] Regarding Openings on Red Hat's Public Sector Team for OpenStack Engineers
Message-ID: 

Hello,

May I request the following to be forwarded to members of your OpenStack mailing list?

Red Hat's public sector team is looking for OpenStack engineers with experience implementing in a production environment. Having a Top Secret or higher clearance would be a plus! Below is the position description along with a link. If you are interested in learning more, please schedule time to talk to Suzie Grieco at SG2 Recruiting using the scheduler in the signature line.

SG2 Recruiting is a Red Hat Public Sector Team recruiting partner, and we are seeking OpenStack systems engineers and architects to join their growing team creating innovative open source solutions for government clients. Responsible for post-sales project delivery and support, you will apply consulting experience, systems engineering and design, as well as strong knowledge of relevant IT trends, to solve complex problems.
Certifications aren't required -- just the technical curiosity and willingness to learn!

Your key responsibilities will include:
- Serving as the lead consultant on large consulting projects, utilizing several different technologies to create and architect the solution
- Coordinating architecture and dependencies with other teams in a broad technical context, and demonstrating an understanding of the consequences of technological choices
- Applying proven industry standards and practices as they relate to projects and solutions led by the architect
- Remaining current on IT trends pertaining to their area of practice (e.g., Agile, DevOps, containerization, microservices)
- Identifying client opportunities spanning multiple technical disciplines
- Serving as a technical mentor for consultants, providing valuable, timely, and accurate technical leadership by sharing knowledge and best practices associated with the Red Hat product stack, including OSes, virtualization, middleware, storage, messaging, and cloud

Requirements:
- 8+ years of experience as a consultant, system architect, or implementation lead, operating in a senior capacity in the open source community
- BS degree in computer science, MIS, or a related field
- You are able to obtain US Government security clearances
- Strong OpenStack deployment and troubleshooting skills with Director, preferably to a code level using Python
- Hands-on experience with Puppet, Heat templates, and Ansible
- Red Hat Satellite 6
- Basic knowledge of Identity, Policy and Audit (IPA) components
- Ability to support clients throughout the DC Metro Area

Desired Qualifications (one or more of the following):
- You have an active TS/SCI clearance with the ability to obtain and maintain higher-level clearances
- Interest in open source technology and community
- Red Hat Certified Engineer (RHCE) Certification
- Red Hat Certified Architect (RHCA) Certification

Similar Job Titles:
- OpenStack Engineer
- OpenStack Cloud Systems Engineer
- Senior OpenStack Deployment Engineer
- Cloud Infrastructure Engineer

Suzie Grieco | President
SG2 Recruiting
703.675.6286 | sgrieco at sg2recruiting.com
Visit me on LinkedIn. Schedule a meeting with me.

Thank you in advance for sharing with the OpenStack listserv!

Suzie
----------------------------------------------------------
Suzie Grieco | President
SG2 Recruiting
703.675.6286 | sgrieco at sg2recruiting.com
Visit me on LinkedIn. Check out our current openings. Schedule a meeting with me.
-----------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From navdeep.uniyal at bristol.ac.uk  Mon Jan 29 15:12:23 2018
From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal)
Date: Mon, 29 Jan 2018 15:12:23 +0000
Subject: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance
In-Reply-To: 
References: 
Message-ID: 

Hi Moshe, thank you.

Dear All,

Has anyone worked with Intel NICs to enable SR-IOV in OpenStack? If yes, could you please help me configure the NIC so that OpenStack picks it up. Currently, I am facing an issue with libvirt failing to list the node devices: with the i40evf driver running, 'virsh nodedev-list' lists the VF interfaces; however, if I disable the i40evf driver, it does not list them. Could someone please confirm whether this is the expected behaviour?
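For reference, this is roughly how I am creating and checking the VFs (a sketch of the usual sysfs workflow; the PF name enp129s0f0 is from earlier in this thread, and the VF count of 4 is only an example):

    # create 4 VFs on the X710 PF (i40e PF driver)
    echo 4 > /sys/class/net/enp129s0f0/device/sriov_numvfs

    # the VFs are visible on the PCI bus regardless of any VF driver
    lspci -nnk | grep -i -A3 "virtual function"

    # libvirt lists one net_* node device per VF network interface
    virsh nodedev-list --cap net

I wonder if this is the explanation: without the i40evf driver bound, a VF has no network interface, so there is no net_* node device for libvirt to list?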
Kind Regards,
Navdeep

From: Moshe Levi [mailto:moshele at mellanox.com]
Sent: 29 January 2018 12:00
To: Navdeep Uniyal; openstack at lists.openstack.org
Subject: RE: [Openstack] Openstack SRIOV and PCI passthrough error while creating instance

Hi Navdeep,

I am not familiar with Intel NICs (I work at Mellanox). Maybe the Intel folks on the mailing list can help you.

[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Mon Jan 29 15:44:55 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 29 Jan 2018 15:44:55 +0000
Subject: [Openstack] Regarding Openings on Red Hat's Public Sector Team for OpenStack Engineers
In-Reply-To: 
References: 
Message-ID: <20180129154454.7ucpz3xxwezojcsx@yuggoth.org>

On 2018-01-29 07:36:51 -0500 (-0500), Suzie Grieco wrote:
> May I request the following to be forwarded to members of your OpenStack
> mailing list?
[...]

In the future, please post OpenStack-related job openings at https://www.openstack.org/community/jobs/ instead of on our discussion lists. Thanks!
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From edgar.magana at workday.com  Mon Jan 29 17:12:58 2018
From: edgar.magana at workday.com (Edgar Magana)
Date: Mon, 29 Jan 2018 17:12:58 +0000
Subject: [Openstack] Stepping aside announcement
Message-ID: <1BDADB08-9333-4A24-BC5A-1058F14EADD6@workday.com>

Dear Community,

This is an overdue announcement, but I was waiting for the right moment, and today it is here with the opening of the UC election. It has been almost seven years of full commitment to OpenStack and the entire ecosystem around it. During the last couple of years, I had the opportunity to serve as Chair of the User Committee. I have served in this role with nothing but passion and dedication for the users and operators. OpenStack has been very important for me, and it will always be the most enjoyable work I have ever done.

It is time to move on. Our team is extending its focus to other cloud domains, and I will be leading one of those. Therefore, I would like to announce that I am stepping aside from my role as UC Chair. Per our UC election, there will be not just two seats available but three: https://governance.openstack.org/uc/reference/uc-election-feb2018.html

I want to encourage the whole AUC community to participate; being part of the User Committee is a very important and gratifying activity. Please, go for it!

Thank you all,

Edgar Magana
Sr. Principal Architect
Workday, Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From itzshamail at gmail.com  Mon Jan 29 17:50:10 2018
From: itzshamail at gmail.com (Shamail Tahir)
Date: Mon, 29 Jan 2018 12:50:10 -0500
Subject: [Openstack] [User-committee] Stepping aside announcement
In-Reply-To: <1BDADB08-9333-4A24-BC5A-1058F14EADD6@workday.com>
References: <1BDADB08-9333-4A24-BC5A-1058F14EADD6@workday.com>
Message-ID: <4B0866E8-8D5D-4AD9-A3CA-6C2784706EFB@gmail.com>

> On Jan 29, 2018, at 12:12 PM, Edgar Magana wrote:
> [...]

Thank you for everything you've done for the community thus far, Edgar! Your leadership has been instrumental in helping us evolve over the last 2-3 cycles. I hope you are still able to participate in the community even after you leave the User Committee.

> [...]

_______________________________________________
User-committee mailing list
User-committee at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amy at demarco.com  Mon Jan 29 17:59:55 2018
From: amy at demarco.com (Amy Marrich)
Date: Mon, 29 Jan 2018 11:59:55 -0600
Subject: [Openstack] [User-committee] Stepping aside announcement
In-Reply-To: <1BDADB08-9333-4A24-BC5A-1058F14EADD6@workday.com>
References: <1BDADB08-9333-4A24-BC5A-1058F14EADD6@workday.com>
Message-ID: 

Edgar,

Thank you for all your hard work and contributions!

Amy (spotz)

On Mon, Jan 29, 2018 at 11:12 AM, Edgar Magana wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mrhillsman at gmail.com  Mon Jan 29 18:44:58 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 29 Jan 2018 12:44:58 -0600
Subject: [Openstack] [User-committee] Stepping aside announcement
In-Reply-To: 
References: <1BDADB08-9333-4A24-BC5A-1058F14EADD6@workday.com>
Message-ID: 

Thanks for your service to the community, Edgar! Hope to see you at an event soon so we can toast to your departure and continued success!

On Mon, Jan 29, 2018 at 11:59 AM, Amy Marrich wrote:
> [...]

-- 
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yihleong at gmail.com  Mon Jan 29 22:17:36 2018
From: yihleong at gmail.com (Yih Leong, Sun.)
Date: Mon, 29 Jan 2018 14:17:36 -0800
Subject: [Openstack] [User-committee] Stepping aside announcement
In-Reply-To: <1BDADB08-9333-4A24-BC5A-1058F14EADD6@workday.com>
References: <1BDADB08-9333-4A24-BC5A-1058F14EADD6@workday.com>
Message-ID: 

Sad to hear, but thanks for your contributions; I enjoyed the time working with you!

On Monday, January 29, 2018, Edgar Magana wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gurud78 at gmail.com  Tue Jan 30 14:57:34 2018
From: gurud78 at gmail.com (Guru Desai)
Date: Tue, 30 Jan 2018 20:27:34 +0530
Subject: [Openstack] Openstack manual setup
Message-ID: 

Hello,

I plan to set up OpenStack manually on a single server (preferably Pike), but somehow I am not able to find any documentation for that. Could someone help with any pointers? All the documentation I find refers to either Packstack or DevStack, which I don't want to use. I would like to set up all the OpenStack services on a single server, on either Ubuntu or CentOS.

Any pointers would be very helpful.

Thanks,
Guru
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Brandon.Bruce at rackspace.com  Tue Jan 30 15:11:10 2018
From: Brandon.Bruce at rackspace.com (Brandon Bruce)
Date: Tue, 30 Jan 2018 15:11:10 +0000
Subject: [Openstack] Openstack manual setup
In-Reply-To: 
References: 
Message-ID: 

Hey Guru!

I think this might be what you are looking for: https://docs.openstack.org/install-guide/

However, this will be a decent amount of work to do on a single machine. I am assuming you are going to use VirtualBox or VMware to create the multiple nodes. I did this same procedure a few years ago using VBox, and while I learned a lot, I would never recommend it, as PackStack, DevStack, or OpenStack-Ansible (OSA) remove so many of the pitfalls that are going to occur. If you do choose to go this route, especially on a single machine using virtualization, plan on it being an arduous process with hundreds of little snags.

Brandon Bruce

________________________________
From: Guru Desai
Sent: Tuesday, January 30, 2018 8:57:34 AM
To: OpenStack Mailing List
Subject: [Openstack] Openstack manual setup

[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From navdeep.uniyal at bristol.ac.uk  Tue Jan 30 15:21:43 2018
From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal)
Date: Tue, 30 Jan 2018 15:21:43 +0000
Subject: [Openstack] Openstack manual setup
In-Reply-To: 
References: 
Message-ID: 

Hi Guru,

Check this link: https://docs.openstack.org/pike/install/

Regards,
Navdeep

From: Guru Desai [mailto:gurud78 at gmail.com]
Sent: 30 January 2018 14:58
To: OpenStack Mailing List
Subject: [Openstack] Openstack manual setup

[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Remo at italy1.com  Tue Jan 30 15:35:21 2018
From: Remo at italy1.com (Remo Mattei)
Date: Tue, 30 Jan 2018 16:35:21 +0100
Subject: [Openstack] Openstack manual setup
In-Reply-To: 
References: 
Message-ID: <933ADD71-2F19-43E2-8C01-19B1D1717AC9@italy1.com>

I personally think that, aside from the learning value, manual installation is not a feasible solution. Packstack is good for a single node, and RDO TripleO HA is very good (Director is the Red Hat version of this), but you will still have to learn how to use and configure the services. There was Mirantis before, but it is gone now. OpenStack-Kube, OpenStack-Ansible, and Kolla are projects to look at.

Just my 2 cents.

Remo

> On Jan 30, 2018, at 4:21 PM, Navdeep Uniyal wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bdiaz at whitestack.com  Tue Jan 30 15:37:18 2018
From: bdiaz at whitestack.com (Benjamin Diaz)
Date: Tue, 30 Jan 2018 12:37:18 -0300
Subject: [Openstack] Openstack manual setup
In-Reply-To: 
References: 
Message-ID: 

Hi Guru,

I would recommend you check out the kolla-ansible project. It has native support for all-in-one deployment.

https://github.com/openstack/kolla-ansible

Greetings,
Benjamin

On Tue, Jan 30, 2018 at 12:21 PM, Navdeep Uniyal wrote:
> [...]

-- 
Benjamín Díaz
Cloud Computing Engineer
bdiaz at whitestack.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sashang at gmail.com  Tue Jan 30 22:24:21 2018
From: sashang at gmail.com (Sashan Govender)
Date: Tue, 30 Jan 2018 22:24:21 +0000
Subject: [Openstack] Openstack manual setup
In-Reply-To: 
References: 
Message-ID: 

Hi

I set up an OpenStack system on my laptop using KVM and these instructions for CentOS 7:
https://docs.openstack.org/newton/install-guide-rdo/

It's a long process (about a week's worth of work if you're new to it and doing it full-time). Some issues I had were with the database sync commands (e.g. su -s /bin/sh -c "glance-manage db_sync" glance). For some reason my tables never populated until I'd run the command twice or restarted the service.

Remember to open ports in the firewall if using CentOS. It has a firewall enabled by default.

I made an NFS mount on the host machine to store the Glance images. That way I didn't need to allocate tons of space for the VM's hard disk.

Otherwise I think the instructions on the site referenced above are good and straightforward. It's just that the process is long and prone to user error.

On Wed, Jan 31, 2018 at 2:16 AM Guru Desai wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marcin.dulak at gmail.com  Tue Jan 30 23:30:24 2018
From: marcin.dulak at gmail.com (Marcin Dulak)
Date: Wed, 31 Jan 2018 00:30:24 +0100
Subject: [Openstack] Openstack manual setup
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jan 30, 2018 at 11:24 PM, Sashan Govender wrote:
> [...]

I would recommend using nested virtualization under Vagrant if you want to script the whole tutorial, because you will need to start from scratch several times. There are probably several projects like that on GitHub:
https://github.com/marcindulak/install-guide-rdo-with-vagrant

Moreover, I would focus directly on a multi-server deployment. There is no point in doing this process more than once, and people have provided links to various ways of automating the setup.

Marcin

> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Tue Jan 30 23:53:01 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 30 Jan 2018 23:53:01 +0000
Subject: [Openstack] Openstack manual setup
In-Reply-To: 
References: 
Message-ID: <20180130235301.c3xtj7rcuk6krubh@yuggoth.org>

While not a single-server deployment, the reasoning behind Matt's experiment documented at https://blog.kortar.org/?p=380 (deploying from official release tarballs and following the official install guides) seems similar. You might take some clues from the experiences he describes there.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From himgupta1996 at gmail.com  Wed Jan 31 03:51:34 2018
From: himgupta1996 at gmail.com (Himanshu Gupta)
Date: Wed, 31 Jan 2018 09:21:34 +0530
Subject: [Openstack] [Devstack] Installation of Barbican failing on Ubuntu 16.04 LTS
Message-ID: 

Hi Guys,

I was trying to install DevStack (Ocata branch) on Ubuntu 16.04 LTS by performing the prerequisite steps and running stack.sh, but the installation throws an error while installing Barbican:

2018-01-30 18:49:17.297 | ++ /opt/stack/barbican/devstack/lib/barbican:configure_barbican:147   write_uwsgi_config /etc/barbican/barbican-uwsgi.ini /usr/local/bin/barbican-wsgi-api /key-manager
2018-01-30 18:49:17.297 | /opt/stack/barbican/devstack/lib/barbican: line 147: write_uwsgi_config: command not found
2018-01-30 18:49:17.305 | + /opt/stack/barbican/devstack/lib/barbican:configure_barbican:1   exit_trap

Any help in this regard would be highly appreciated.

-- 
Thanks and Regards,
Himanshu Gupta
8960839015
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tony at bakeyournoodle.com  Wed Jan 31 04:10:45 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Wed, 31 Jan 2018 15:10:45 +1100
Subject: [Openstack] [Devstack] Installation of Barbican failing on Ubuntu 16.04 LTS
In-Reply-To: 
References: 
Message-ID: <20180131041044.GB23143@thor.bakeyournoodle.com>

On Wed, Jan 31, 2018 at 09:21:34AM +0530, Himanshu Gupta wrote:
> [...]

The bottom line is that the uwsgi code isn't available in the ocata branch of devstack, so you either need to backport I1d89be1f1b36f26eaf543b99bde6fdc5701474fe to ocata (which probably isn't appropriate at this stage) or have barbican in ocata run standalone.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From satish.txt at gmail.com  Wed Jan 31 04:14:01 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Tue, 30 Jan 2018 23:14:01 -0500
Subject: [Openstack] Openstack manual setup
In-Reply-To: 
References: 
Message-ID: 

Is kolla-ansible ready for production deployment on a 100-node cluster?

On Tue, Jan 30, 2018 at 10:37 AM, Benjamin Diaz wrote:
> [...]
>>
>> I would like to set up all the openstack services on a single server, either Ubuntu or CentOS.
>>
>> Any pointers would be very helpful.
>>
>> Thanks,
>> Guru
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
> -- 
> *Benjamín Díaz*
> Cloud Computing Engineer
>
> bdiaz at whitestack.com
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From satish.txt at gmail.com Wed Jan 31 04:21:29 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Tue, 30 Jan 2018 23:21:29 -0500
Subject: [Openstack] Openstack neutron with ASR1k
Message-ID:

Folks,

We are planning to deploy a production-style private cloud and are gathering information about what we should use and why. I came across a couple of documents about network node criticality and performance issues, and many folks suggest the following:

1. DVR (it seems complicated after reading about it, and it also needs lots of public IPs)
2. Use an ASR1k centralized router for the L3 function (any idea which model would be good, or whether we need any licensing to integrate it with openstack?)

I would like to get some input from folks who are already using openstack in production, and to know what kind of deployment they picked for network/neutron performance.

From satish.txt at gmail.com Wed Jan 31 15:12:56 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Wed, 31 Jan 2018 10:12:56 -0500
Subject: [Openstack] DVR Public IP consumption
In-Reply-To: <79409279-ad7f-25f5-622e-dfd9e6333ea9@gmail.com>
References: <79409279-ad7f-25f5-622e-dfd9e6333ea9@gmail.com>
Message-ID:

Brian,

Do you mean I can use any private subnet for that service type? I am having a hard time understanding that example; could you please give an example or more details? That would be helpful.

On Thu, Jan 18, 2018 at 2:12 PM, Brian Haley wrote:
> On 01/16/2018 08:37 AM, Satish Patel wrote:
>>
>> Thanks Brian,
>>
>> I may be having difficulty understanding that example: if I have only a /24 public subnet for the cloud and have 200 compute nodes, then how does it work?
>
>
> The intention is to have a second subnet on the external network, but only have it usable within the datacenter. If you create it and set the service-type to only certain types of ports, like DVR, then it won't be used for floating IPs as the other one is.
>
> -Brian
>
>
>>> On Jan 15, 2018, at 9:47 PM, Brian Haley wrote:
>>>
>>>> On 01/15/2018 01:57 PM, Satish Patel wrote:
>>>> I am planning to build openstack in production and the big question is the network (legacy vs DVR), but with DVR the big concern is the number of public IPs used on every compute node. I am planning to add at most 100 nodes in the cluster; in that case it will use 100 public IPs just for compute nodes, ouch!
>>>
>>>
>>> You can reduce this public IP consumption by using multiple subnets on the external network, with one just for those DVR interfaces. See
>>>
>>> https://docs.openstack.org/ocata/networking-guide/config-service-subnets.html
>>> Example #2 for a possible configuration.
>>>
>>> As for what type of L3 configuration to run, it seems like you have a good idea of some of the trade-offs with each.
>>>
>>> -Brian
>>>
>>>> If I use a legacy compute node then it could be a bottleneck or a single point of failure if not in HA.
>>>> What do most companies use for the network node? DVR or legacy?
>>>> _______________________________________________
>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>> Post to : openstack at lists.openstack.org
>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>
>
From satish.txt at gmail.com Wed Jan 31 19:29:26 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Wed, 31 Jan 2018 14:29:26 -0500
Subject: [Openstack] Openstack neutron with ASR1k
In-Reply-To:
References:
Message-ID:

So no one is using the ASR 1001 for Openstack?

On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel wrote:
> Folks,
>
> We are planning to deploy a production-style private cloud and are gathering information about what we should use and why. I came across a couple of documents about network node criticality and performance issues, and many folks suggest the following:
>
> 1. DVR (it seems complicated after reading about it, and it also needs lots of public IPs)
> 2. Use an ASR1k centralized router for the L3 function (any idea which model would be good, or whether we need any licensing to integrate it with openstack?)
>
> I would like to get some input from folks who are already using openstack in production, and to know what kind of deployment they picked for network/neutron performance.

From mathias.strufe at dfki.de Wed Jan 31 20:49:15 2018
From: mathias.strufe at dfki.de (Mathias Strufe (DFKI))
Date: Wed, 31 Jan 2018 21:49:15 +0100
Subject: [Openstack] OpenVSwitch inside Instance no ARP passthrough
In-Reply-To: <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com>
References: <19e2c014fb8d332bdb3518befce68a37@projects.dfki.uni-kl.de> <11ea9728-9446-2d8c-db3f-f5712e891af4@gmx.com>
Message-ID: <9e663e326f138cf141d11964764388f1@projects.dfki.uni-kl.de>

Dear Volodymyr, all,

thanks for your fast answer ... but I'm still facing the same problem: I still can't ping the instance with the OVS bridge configured and up ... maybe because I'm quite new to OpenStack and OpenVswitch and don't see the problem ;)

My setup is devstack Mitaka in a single-machine config ... first of all, I couldn't find the openvswitch_agent.ini anymore; I remember that in a previous version it was in the neutron/plugin folder ... Is this config now done in the ml2 config file, in the [OVS] section? I'm really wondering ...

I can ping between the 2 instances without any problem, but as soon as I bring up the OVS bridge inside the VM, the ARP requests are only visible at the ens interface and never reach the OVSbr ...

Please find attached two files which may help with troubleshooting: one contains network information from inside the instance that runs the OVS, and the other the ovs-vsctl info of the OpenStack host.

If you need more info/logs please let me know! Thanks for your help!

BR Mathias.

On 2018-01-27 22:44, Volodymyr Litovka wrote:
> Hi Mathias,
>
> do you have all the corresponding bridges and patches between them, as described in openvswitch_agent.ini by the
>
> integration_bridge
> tunnel_bridge
> int_peer_patch_port
> tun_peer_patch_port
> bridge_mappings
>
> parameters? And make sure that the service "neutron-ovs-cleanup" is in use during system boot. You can check these bridges and patches using the "ovs-vsctl show" command.
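For reference, the [ovs] section I would expect those parameters to live in looks roughly like this; just a sketch with the usual upstream defaults, and the bridge_mappings value is an assumed example that will differ per deployment:

[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
# patch ports peering br-int with br-tun (upstream defaults)
int_peer_patch_port = patch-tun
tun_peer_patch_port = patch-int
# physical network name -> provider bridge (assumed example mapping)
bridge_mappings = public:br-ex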
>
> On 1/27/18 9:00 PM, Mathias Strufe (DFKI) wrote:
>
>> Dear all,
>>
>> I'm quite new to openstack and would like to install OpenVSwitch inside one instance of our Mitaka openstack lab environment ...
>> But it seems that ARP packets get lost between the network interface of the instance and the OVS bridge ...
>>
>> With tcpdump on the interface I see the ARP packets ...
>>
>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>> listening on ens6, link-type EN10MB (Ethernet), capture size 262144 bytes
>> 18:50:58.080478 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28
>> 18:50:58.125009 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28
>> 18:50:59.077315 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28
>> 18:50:59.121369 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28
>> 18:51:00.077327 ARP, Request who-has 192.168.120.10 tell 192.168.120.6, length 28
>> 18:51:00.121343 ARP, Request who-has 192.168.120.1 tell 192.168.120.6, length 28
>>
>> but on the OVS bridge nothing arrives ...
>>
>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>> listening on OVSbr2, link-type EN10MB (Ethernet), capture size 262144 bytes
>>
>> I disabled port_security and removed the security group, but nothing changed
>>
>> +-----------------------+----------------------------------------------------------------------------------------+
>> | Field                 | Value                                                                                  |
>> +-----------------------+----------------------------------------------------------------------------------------+
>> | admin_state_up        | True                                                                                   |
>> | allowed_address_pairs |                                                                                        |
>> | binding:host_id       | node11                                                                                 |
>> | binding:profile       | {}                                                                                     |
>> | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                         |
>> | binding:vif_type      | ovs                                                                                    |
>> | binding:vnic_type     | normal                                                                                 |
>> | created_at            | 2018-01-27T16:45:48Z                                                                   |
>> | description           |                                                                                        |
>> | device_id             | 74916967-984c-4617-ae33-b847de73de13                                                   |
>> | device_owner          | compute:nova                                                                           |
>> | extra_dhcp_opts       |                                                                                        |
>> | fixed_ips             | {"subnet_id": "525db7ff-2bf2-4c64-b41e-1e41570ec358", "ip_address": "192.168.120.10"} |
>> | id                    | 74b754d6-0000-4c2e-bfd1-87f640154ac9                                                   |
>> | mac_address           | fa:16:3e:af:90:0c                                                                      |
>> | name                  |                                                                                        |
>> | network_id            | 917254cb-9721-4207-99c5-8ead9f95d186                                                   |
>> | port_security_enabled | False                                                                                  |
>> | project_id            | c48457e73b664147a3d2d36d75dcd155                                                       |
>> | revision_number       | 27                                                                                     |
>> | security_groups       |                                                                                        |
>> | status                | ACTIVE                                                                                 |
>> | tenant_id             | c48457e73b664147a3d2d36d75dcd155                                                       |
>> | updated_at            | 2018-01-27T18:54:24Z                                                                   |
>> +-----------------------+----------------------------------------------------------------------------------------+
>>
>> maybe the port_filter still causes the problem? But how do I disable it?
>>
>> Any other ideas?
>>
>> Thanks and BR Mathias.
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1]
>> Post to : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [1]
>
> -- 
> Volodymyr Litovka
> "Vision without Execution is Hallucination." -- Thomas Edison
>
> Links:
> ------
> [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-- 
Vielen Dank und Gruß Mathias.
Many Thanks and kind regards, Mathias.

-- 
Dipl.-Ing.
(FH) Mathias Strufe
Wissenschaftlicher Mitarbeiter / Researcher
Intelligente Netze / Intelligent Networks

Phone: +49 (0) 631 205 75 - 1826
Fax: +49 (0) 631 205 75 - 4400

E-Mail: Mathias.Strufe at dfki.de
WWW: http://www.dfki.de/web/forschung/in
WWW: https://selfnet-5g.eu/

--------------------------------------------------------------
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
Trippstadter Strasse 122
D-67663 Kaiserslautern, Germany

Geschaeftsfuehrung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff

Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes

Amtsgericht Kaiserslautern, HRB 2313
VAT-ID: DE 148 646 973
--------------------------------------------------------------
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: ovs-vsctl_from_OpenStack_Host.text
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: NetworkInfoInsideOpenStackInstance.txt
URL:
From fawaz.moh.ibraheem at gmail.com Wed Jan 31 21:02:15 2018
From: fawaz.moh.ibraheem at gmail.com (Fawaz Mohammed)
Date: Thu, 1 Feb 2018 01:02:15 +0400
Subject: [Openstack] Openstack neutron with ASR1k
In-Reply-To:
References:
Message-ID:

Hi Satish,

To my knowledge, Cisco has an ml2 driver for Nexus only. So, if you have requirements for dynamic L3 provisioning / configuration, it's better to go with an SDN solution.

On Jan 31, 2018 11:39 PM, "Satish Patel" wrote:
> So no one is using the ASR 1001 for Openstack?
>
> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel wrote:
> > Folks,
> >
> > We are planning to deploy a production-style private cloud and are gathering information about what we should use and why. I came across a couple of documents about network node criticality and performance issues, and many folks suggest the following:
> >
> > 1. DVR (it seems complicated after reading about it, and it also needs lots of public IPs)
> > 2. Use an ASR1k centralized router for the L3 function (any idea which model would be good, or whether we need any licensing to integrate it with openstack?)
> >
> > I would like to get some input from folks who are already using openstack in production, and to know what kind of deployment they picked for network/neutron performance.
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From haleyb.dev at gmail.com Wed Jan 31 22:14:58 2018
From: haleyb.dev at gmail.com (Brian Haley)
Date: Wed, 31 Jan 2018 17:14:58 -0500
Subject: [Openstack] DVR Public IP consumption
In-Reply-To:
References: <79409279-ad7f-25f5-622e-dfd9e6333ea9@gmail.com>
Message-ID: <8319ccb3-bb75-15d9-1aec-af406801ca09@gmail.com>

On 01/31/2018 10:12 AM, Satish Patel wrote:
> Brian,
>
> Do you mean I can use any private subnet for that service type? I am having a hard time understanding that example; could you please give an example or more details? That would be helpful.

Yes, you should be able to use a private subnet that can be configured onto that network; that way the DVR routers would use IPs from it and still be able to communicate with other devices in the datacenter. I believe the docs site I linked previously had an example of this.
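Something along these lines is what I had in mind; this is only a sketch, and the external network name ("public"), the address range and the subnet name are assumed placeholders:

# second subnet on the external network, restricted to DVR floating-IP
# agent gateway ports so it is never consumed by floating IPs
openstack subnet create --network public \
  --subnet-range 192.168.200.0/24 \
  --service-type network:floatingip_agent_gateway \
  dvr-fip-gateway

With a service type like that on the subnet, the per-compute DVR gateway ports should be allocated from it, leaving the routable subnet free for floating IPs.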
-Brian

> On Thu, Jan 18, 2018 at 2:12 PM, Brian Haley wrote:
>> On 01/16/2018 08:37 AM, Satish Patel wrote:
>>>
>>> Thanks Brian,
>>>
>>> I may be having difficulty understanding that example: if I have only a /24 public subnet for the cloud and have 200 compute nodes, then how does it work?
>>
>>
>> The intention is to have a second subnet on the external network, but only have it usable within the datacenter. If you create it and set the service-type to only certain types of ports, like DVR, then it won't be used for floating IPs as the other one is.
>>
>> -Brian
>>
>>
>>>> On Jan 15, 2018, at 9:47 PM, Brian Haley wrote:
>>>>
>>>>> On 01/15/2018 01:57 PM, Satish Patel wrote:
>>>>> I am planning to build openstack in production and the big question is the network (legacy vs DVR), but with DVR the big concern is the number of public IPs used on every compute node. I am planning to add at most 100 nodes in the cluster; in that case it will use 100 public IPs just for compute nodes, ouch!
>>>>
>>>>
>>>> You can reduce this public IP consumption by using multiple subnets on the external network, with one just for those DVR interfaces. See
>>>>
>>>> https://docs.openstack.org/ocata/networking-guide/config-service-subnets.html
>>>> Example #2 for a possible configuration.
>>>>
>>>> As for what type of L3 configuration to run, it seems like you have a good idea of some of the trade-offs with each.
>>>>
>>>> -Brian
>>>>
>>>>> If I use a legacy compute node then it could be a bottleneck or a single point of failure if not in HA.
>>>>> What do most companies use for the network node? DVR or legacy?
>>>>> _______________________________________________
>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>> Post to : openstack at lists.openstack.org
>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>
>>>>
>>
From satish.txt at gmail.com Wed Jan 31 22:58:20 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Wed, 31 Jan 2018 17:58:20 -0500
Subject: [Openstack] Openstack neutron with ASR1k
In-Reply-To:
References:
Message-ID:

What about this? http://www.jimmdenton.com/networking-cisco-asr-part-two/

ML2 does support the ASR too. Just curious what people mostly use in production: are they using DVR or some kind of hardware for L3?

On Wed, Jan 31, 2018 at 4:02 PM, Fawaz Mohammed wrote:
> Hi Satish,
>
> To my knowledge, Cisco has an ml2 driver for Nexus only.
>
> So, if you have requirements for dynamic L3 provisioning / configuration, it's better to go with an SDN solution.
>
> On Jan 31, 2018 11:39 PM, "Satish Patel" wrote:
>>
>> So no one is using the ASR 1001 for Openstack?
>>
>> On Tue, Jan 30, 2018 at 11:21 PM, Satish Patel wrote:
>> > Folks,
>> >
>> > We are planning to deploy a production-style private cloud and are gathering information about what we should use and why. I came across a couple of documents about network node criticality and performance issues, and many folks suggest the following:
>> >
>> > 1. DVR (it seems complicated after reading about it, and it also needs lots of public IPs)
>> > 2. Use an ASR1k centralized router for the L3 function (any idea which model would be good, or whether we need any licensing to integrate it with openstack?)
>> >
>> > I would like to get some input from folks who are already using openstack in production, and to know what kind of deployment they picked for network/neutron performance.
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From satish.txt at gmail.com Wed Jan 31 23:10:11 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Wed, 31 Jan 2018 18:10:11 -0500
Subject: [Openstack] tripleO Error No valid host was found
Message-ID:

I am playing with tripleO and getting the following error when deploying the overcloud. I'm doing all this on VMware Workstation with the fake_pxe driver; I did enable the driver in ironic too.

What could be wrong here?

(undercloud) [stack at tripleo instance]$ openstack overcloud profiles list
+--------------------------------------+----------------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name            | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+----------------------+-----------------+-----------------+-------------------+
| 1800106d-6576-4d9a-869f-20dd5712398d | overcloud-controller | available       | control         |                   |
| 8da58109-ad60-4463-8cdb-670ddd894ca2 | overcloud-compute1   | available       | compute         |                   |
+--------------------------------------+----------------------+-----------------+-----------------+-------------------+

(undercloud) [stack at tripleo instance]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --control-flavor control --compute-flavor compute

Stack overcloud UPDATE_FAILED

overcloud.Controller.0.Controller:
  resource_type: OS::TripleO::ControllerServer
  physical_resource_id: 7a0b9317-b607-4c89-9d66-5dd105136a73
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"
overcloud.Compute.0.NovaCompute:
  resource_type: OS::TripleO::ComputeServer
  physical_resource_id: 57f4976c-9c57-430b-8c80-4b2676f1b397
  status: CREATE_FAILED
  status_reason: |
    ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"

Heat Stack update failed.
Heat Stack update failed.

From tony at bakeyournoodle.com Wed Jan 31 23:48:05 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 1 Feb 2018 10:48:05 +1100
Subject: [Openstack] tripleO Error No valid host was found
In-Reply-To:
References:
Message-ID: <20180131234803.GD23143@thor.bakeyournoodle.com>

On Wed, Jan 31, 2018 at 06:10:11PM -0500, Satish Patel wrote:
> I am playing with tripleO and getting the following error when deploying the overcloud. I'm doing all this on VMware Workstation with the fake_pxe driver; I did enable the driver in ironic too.
>
> What could be wrong here?

There's lots that could be wrong, sadly. Testing under VMware is minimal to none. There are some good tips at:

https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting.html

Specifically:

https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html#no-valid-host-found-error

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:
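For this class of "No valid host" failure, the checks those pages suggest usually start along these lines; a sketch only, where the flavor names come from the deploy command above and the server name is a typical default, so adjust both to your environment:

# confirm ironic sees the nodes as available, powered off and not in maintenance
openstack baremetal node list

# confirm the flavors actually carry the profile capability the deploy flags expect
openstack flavor show control -c properties
openstack flavor show compute -c properties

# after a failed deploy, ask nova why scheduling failed
openstack server list
openstack server show overcloud-controller-0 -c fault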