From pradhanparas at gmail.com Tue May 1 15:06:37 2018
From: pradhanparas at gmail.com (Paras pradhan)
Date: Tue, 1 May 2018 10:06:37 -0500
Subject: [Openstack] Multiple floating IPs one instance
In-Reply-To: References: Message-ID:

Look for nova add-fixed-ip and nova add-floating-ip:

* add a private fixed IP to the instance: nova add-fixed-ip instance_id neutron_network_id
* on the instance: ip address add private_fixed_ip_address/24 dev eth0
* add the floating IP: nova add-floating-ip --fixed-address private_fixed_ip_address instance_id floating_ip_address

Thanks
Paras.

On Fri, Apr 27, 2018 at 1:47 PM, Torin Woltjer wrote:
> Is it possible to run an instance with more than one floating IP? It is
> not immediately evident how to do this, or whether it is even possible. I
> have an instance that I would like to have addresses on two separate
> networks, and would like to use floating IPs so that I can have addresses
> that are capable of living longer than the instance itself.
>
> *Torin Woltjer*
>
> *Grand Dial Communications - A ZK Tech Inc. Company*
>
> *616.776.1066 ext. 2006*
> * www.granddial.com *
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pfb29 at cam.ac.uk Tue May 1 15:42:51 2018
From: pfb29 at cam.ac.uk (Paul Browne)
Date: Tue, 1 May 2018 16:42:51 +0100
Subject: [Openstack] Multiple floating IPs one instance
In-Reply-To: References: Message-ID:

In our KVM deployment we've found that mapping 2 floating IPs to an instance (pulled from 2 separate external network floating IP pools) like this is perfectly possible. The proviso is that, with port security enabled on the Neutron ports, static routes must be placed on the instance to ensure that traffic uses the correct interface: return traffic that came in on the private interface associated with one floating IP must go out the same interface and not another, or it will likely be dropped at the hypervisor.
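As a rough sketch of the kind of static routing I mean (a second routing table keyed on the source address; all interface names and addresses here are invented, so substitute your own subnets):

# on the instance: route the second subnet and a default via its own table
ip route add 192.168.2.0/24 dev eth1 src 192.168.2.10 table 100
ip route add default via 192.168.2.1 dev eth1 table 100
# traffic sourced from eth1's fixed IP consults that table on the way out
ip rule add from 192.168.2.10/32 table 100

With that in place, replies to connections arriving via the second floating IP leave through eth1 rather than following the default route on eth0.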
Thanks,
Paul

On 1 May 2018 at 16:06, Paras pradhan wrote:
> Look for nova add-fixed-ip and nova add-floating-ip [...]
[earlier messages quoted in full -- snipped]

--
*******************
Paul Browne
Research Computing Platforms
University Information Services
Roger Needham Building
JJ Thompson Avenue
University of Cambridge
Cambridge
United Kingdom
E-Mail: pfb29 at cam.ac.uk
Tel: 0044-1223-746548
*******************

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jim at jokken.com Tue May 1 17:21:17 2018
From: jim at jokken.com (Jim Okken)
Date: Tue, 1 May 2018 13:21:17 -0400
Subject: [Openstack] [Fuel] add custom settings to a fuel deploy
Message-ID:

Hi list,

We've created a pretty large OpenStack Newton HA environment using Fuel. After initial hiccups with deployment (not all of them Fuel troubles) we can now add additional compute nodes to the environment with ease! Thank you to all who've worked on all the projects that make this product.

My question has to do with something I think I should know already: how can we get Fuel to stop overwriting custom settings in our environment? When we deploy new compute nodes, the original OpenStack settings on all nodes are re-deployed/reset.

For example, we have changed settings in these files on the controller nodes:

/etc/nova/nova.conf
/etc/neutron/dhcp_agent.ini
/etc/neutron/plugins/ml2/openvswitch_agent.ini
/etc/openstack-dashboard/local_settings.py
/etc/keystone/keystone.conf
/etc/cinder/cinder.conf
/etc/neutron/neutron.conf

I'm guessing the method to resolve this is not to stop Fuel from overwriting settings, but to add to Fuel some tasks that set these custom settings again near the end of each deploy.

I'm sure this is something I am supposed to know already, but so far in my route thru OpenStack land experience with this has escaped me. Can you send me some advice, pointers, places to start?

Thanks!

--jim

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jitendra.b at pramati.com Tue May 1 19:39:11 2018
From: jitendra.b at pramati.com (Jitendra Kumar Bhaskar)
Date: Tue, 1 May 2018 14:39:11 -0500
Subject: [Openstack] [Fuel] add custom settings to a fuel deploy
In-Reply-To: References: Message-ID:

Hi Jim,

I can help you on that, but before that I wanted to understand how you are deploying the additional computes:
1. If CLI, then share the command that you used to deploy.
2. If UI, then are you deploying only one node after selection?

Regards
Jitendra Bhaskar
+1-469-514-7986

On Tue, May 1, 2018 at 12:21 PM, Jim Okken wrote:
> Hi list,
[original message quoted in full -- snipped]
--
Disclaimer:
The contents of this email and any attachments are confidential. They are intended for the named recipient(s) only. If you have received this email by mistake, please notify the sender immediately and do not disclose the contents to anyone or make copies thereof.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jim at jokken.com Tue May 1 22:42:23 2018
From: jim at jokken.com (Jim Okken)
Date: Tue, 1 May 2018 18:42:23 -0400
Subject: Re: [Openstack] [Fuel] add custom settings to a fuel deploy
In-Reply-To: References: Message-ID:

hi Jitendra,
thanks very much for your reply!

We deploy with the UI, and right now the environment is set and we only add compute nodes. In the past we deployed one compute node at a time, but now that we understand the process we deploy multiple, 3 or 5 compute nodes at a time. Honestly though, going forward it could be 1 or multiple nodes at a time. This is a growing internal-use environment, but we have 23 compute nodes right now, so we are going to be growing it more slowly going forward.

Right now we have some simple shell scripts which we run after a successful deploy; these set the settings in the config files and restart the OpenStack services. But until those scripts are run the environment is missing those needed additions and not really usable. Not a huge problem for an internal-use environment, but we would like to have no downtime. Also it is HA, so we have 3 controllers.

thanks!!

-- Jim

On Tue, May 1, 2018 at 3:39 PM, Jitendra Kumar Bhaskar <jitendra.b at pramati.com> wrote:
> Hi Jim,
[earlier messages quoted in full -- snipped]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From navdeep.uniyal at bristol.ac.uk Wed May 2 16:32:59 2018
From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal)
Date: Wed, 2 May 2018 16:32:59 +0000
Subject: [Openstack] SRIOV Enablement in openstack error
Message-ID:

Dear all,

I am trying to enable the SR-IOV interfaces in my OpenStack Pike installation.
I am following this guide: https://docs.openstack.org/neutron/pike/admin/config-sriov.html

I am getting the following error in neutron while doing so:

2018-05-02 17:08:19.492 75833 ERROR neutron.plugins.ml2.managers [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Failed to bind port ef64559f-c047-47d6-b9eb-09c5a60d7549 on host controller for vnic_type direct using segments [{'network_id': 'ba89924e-3134-4966-af56-b582d70f5f41', 'segmentation_id': 200, 'physical_network': u'sriovprovider', 'id': '19120c0a-0ec7-4b6b-ad07-f68effdb5bf3', 'network_type': u'vlan'}]
2018-05-02 17:08:19.492 75833 INFO neutron.plugins.ml2.plugin [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Attempt 10 to bind port ef64559f-c047-47d6-b9eb-09c5a60d7549
2018-05-02 17:08:19.500 75833 DEBUG neutron.plugins.ml2.managers [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Attempting to bind port ef64559f-c047-47d6-b9eb-09c5a60d7549 on host controller for vnic_type direct with profile {"pci_slot": "0000:82:08.5", "physical_network": "sriovprovider", "pci_vendor_info": "1924:1a03"} bind_port /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py:744
2018-05-02 17:08:19.500 75833 DEBUG neutron.plugins.ml2.managers [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Attempting to bind port ef64559f-c047-47d6-b9eb-09c5a60d7549 on host controller at level 0 using segments [{'network_id': 'ba89924e-3134-4966-af56-b582d70f5f41', 'segmentation_id': 200, 'physical_network': u'sriovprovider', 'id': '19120c0a-0ec7-4b6b-ad07-f68effdb5bf3', 'network_type': u'vlan'}] _bind_port_level /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py:765
2018-05-02 17:08:19.501 75833 DEBUG neutron.plugins.ml2.drivers.mech_agent [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Attempting to bind port ef64559f-c047-47d6-b9eb-09c5a60d7549 on network ba89924e-3134-4966-af56-b582d70f5f41 bind_port /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_agent.py:88
2018-05-02 17:08:19.501 75833 DEBUG neutron.plugins.ml2.drivers.mech_agent [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Refusing to bind due to unsupported vnic_type: direct bind_port /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_agent.py:93
2018-05-02 17:08:19.501 75833 DEBUG neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Attempting to bind port ef64559f-c047-47d6-b9eb-09c5a60d7549 on network ba89924e-3134-4966-af56-b582d70f5f41 bind_port /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:78
2018-05-02 17:08:19.503 75833 DEBUG neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Checking agent: {'binary': u'neutron-sriov-nic-agent', 'description': None, 'admin_state_up': True, 'heartbeat_timestamp': datetime.datetime(2018, 5, 2, 16, 8, 9), 'availability_zone': None, 'alive': True, 'topic': u'N/A', 'host': u'controller', 'agent_type': u'NIC Switch agent', 'resource_versions': {u'Subnet': u'1.0', u'Network': u'1.0', u'SubPort': u'1.0', u'SecurityGroup': u'1.0', u'SecurityGroupRule': u'1.0', u'Trunk': u'1.1', u'QosPolicy': u'1.6', u'Port': u'1.1', u'Log': u'1.0'}, 'created_at': datetime.datetime(2018, 4, 10, 7, 40, 42), 'started_at': datetime.datetime(2018, 4, 10, 8, 36, 4), 'id': u'f2f40c44-cc99-4371-93b9-4c21731122a4', 'configurations': {u'extensions': [], u'devices': 0, u'device_mappings': {u'internetprovider': [u'enp130s0f0']}}} bind_port /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:110
2018-05-02 17:08:19.504 75833 DEBUG neutron.plugins.ml2.drivers.mech_sriov.mech_driver.mech_driver [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Checking segment: {'network_id': 'ba89924e-3134-4966-af56-b582d70f5f41', 'segmentation_id': 200, 'physical_network': u'sriovprovider', 'id': '19120c0a-0ec7-4b6b-ad07-f68effdb5bf3', 'network_type': u'vlan'} for mappings: {u'internetprovider': [u'enp130s0f0']} check_segment_for_agent /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_sriov/mech_driver/mech_driver.py:149
2018-05-02 17:08:19.504 75833 ERROR neutron.plugins.ml2.managers [req-4e2f89f4-4d26-4e78-9b05-e997f398f2a1 2b4070ba7a5044cca6f42f594c37011c 65ac10ca3d2f407eab488c208348ace5 - default default] Failed to bind port ef64559f-c047-47d6-b9eb-09c5a60d7549 on host controller for vnic_type direct using segments [{'network_id': 'ba89924e-3134-4966-af56-b582d70f5f41', 'segmentation_id': 200, 'physical_network': u'sriovprovider', 'id': '19120c0a-0ec7-4b6b-ad07-f68effdb5bf3', 'network_type': u'vlan'}]

Could you please direct me to the correct guide for SR-IOV enablement in OpenStack Pike, or help me in this regard.

Kind Regards,
Navdeep

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From moshele at mellanox.com Wed May 2 17:25:58 2018
From: moshele at mellanox.com (Moshe Levi)
Date: Wed, 2 May 2018 17:25:58 +0000
Subject: [Openstack] SRIOV Enablement in openstack error
In-Reply-To: References: Message-ID:

Hi Navdeep,

The "Refusing to bind" lines are related to the OVS mechanism driver, which indicates that it cannot bind a direct port -- which is expected. (/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_agent.py is part of the OVS mechanism driver.)

The actual problem with the SR-IOV mechanism driver shows up in:

Checking agent: {'binary': u'neutron-sriov-nic-agent', [...] 'configurations': {u'extensions': [], u'devices': 0, u'device_mappings': {u'internetprovider': [u'enp130s0f0']}}}

It seems the network has physical_network = 'sriovprovider', but the SR-IOV agent reports the physnet 'internetprovider'.
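As an illustration, the mapping lives in the agent's config file; a minimal sketch (the file is typically /etc/neutron/plugins/ml2/sriov_agent.ini, and on recent releases the option is named physical_device_mappings -- check where your agent actually reads its config):

[sriov_nic]
physical_device_mappings = sriovprovider:enp130s0f0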
If you change the device mapping in the SR-IOV agent config to sriovprovider:enp130s0f0 (as sketched above) and restart the agent, it should work.

From: Navdeep Uniyal [mailto:navdeep.uniyal at bristol.ac.uk]
Sent: Wednesday, May 2, 2018 7:33 PM
To: OpenStack Mailing List
Subject: [Openstack] SRIOV Enablement in openstack error
[original report quoted in full -- snipped]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From torin.woltjer at granddial.com Wed May 2 18:43:10 2018
From: torin.woltjer at granddial.com (Torin Woltjer)
Date: Wed, 02 May 2018 14:43:10 -0400
Subject: [Openstack] HA Compute & Instance Evacuation
Message-ID: <6566306.P08sTW7ctK@localhost.localdomain>

I am working on setting up OpenStack for HA, and one of the last orders of business is getting HA behavior out of the compute nodes. Is there a project that will automatically evacuate instances from a downed or failed compute host and automatically reboot them on their new host? I'm curious what suggestions people have about this, or whatever advice you might have. Is there a best way of getting this functionality, or anything else I should be aware of?
Thanks,

From jaypipes at gmail.com Wed May 2 19:42:03 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 2 May 2018 15:42:03 -0400
Subject: Re: [Openstack] HA Compute & Instance Evacuation
In-Reply-To: <6566306.P08sTW7ctK@localhost.localdomain> References: <6566306.P08sTW7ctK@localhost.localdomain> Message-ID:

On 05/02/2018 02:43 PM, Torin Woltjer wrote:
> I am working on setting up OpenStack for HA and one of the last orders of
> business is getting HA behavior out of the compute nodes.

There is no HA behaviour for compute nodes.

> Is there a project that will automatically evacuate instances from a
> downed or failed compute host, and automatically reboot them on their
> new host?

Check out Masakari: https://wiki.openstack.org/wiki/Masakari

> I'm curious what suggestions people have about this, or whatever
> advice you might have. Is there a best way of getting this
> functionality, or anything else I should be aware of?

You are referring to HA of workloads running on compute nodes, not HA of compute nodes themselves. My advice would be to install Kubernetes on one or more VMs (with the VMs acting as Kubernetes nodes) and use that project's excellent orchestrator for daemonsets/statefulsets, which is essentially the use case you are describing. The OpenStack Compute API (implemented in Nova) is not an orchestration API. It's a low-level infrastructure API for executing basic actions on compute resources.

Best,
-jay

From jpetrini at coredial.com Wed May 2 20:37:58 2018
From: jpetrini at coredial.com (John Petrini)
Date: Wed, 2 May 2018 16:37:58 -0400
Subject: Re: [Openstack] HA Compute & Instance Evacuation
In-Reply-To: References: <6566306.P08sTW7ctK@localhost.localdomain> Message-ID:

We're using the original Masakari project for this and it works really well. In fact, just last week we lost a compute node and all of its VMs were successfully migrated to a reserve host in under 5 minutes. It's a really nice feeling when your infrastructure heals itself before you even get a chance to start troubleshooting.

It does require a good deal of configuration to get it up and running, especially the clustering with Pacemaker/Corosync, so be prepared to get familiar with those tools and STONITH if you're not already. Worth it if some of your infrastructure doesn't have redundancy built in at a higher level.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From torin.woltjer at granddial.com Wed May 2 20:39:58 2018
From: torin.woltjer at granddial.com (Torin Woltjer)
Date: Wed, 02 May 2018 20:39:58 GMT
Subject: Re: [Openstack] HA Compute & Instance Evacuation
Message-ID:

> There is no HA behaviour for compute nodes.
>
> You are referring to HA of workloads running on compute nodes, not HA of
> compute nodes themselves.

It was a mistake for me to say HA when referring to compute and instances. Really I want to avoid a situation where one of my compute hosts gives up the ghost and all of its instances are offline until someone reboots them on a different host. I would like them to automatically reboot on a healthy compute node.

> Check out Masakari:
>
> https://wiki.openstack.org/wiki/Masakari

This looks like the kind of thing I'm searching for. I'm seeing 3 components here; I'm assuming one goes on the compute hosts and one or both of the others go on the control nodes? Is there any documentation outlining the procedure for deploying this? Will there be any problem running the Masakari API service on 2 machines simultaneously, sitting behind HAProxy?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jaypipes at gmail.com Wed May 2 20:46:54 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 2 May 2018 16:46:54 -0400
Subject: Re: [Openstack] [masakari] HA Compute & Instance Evacuation
In-Reply-To: References: Message-ID: <6ffade55-f2c7-fec7-221a-6ca53ad13d25@gmail.com>

On 05/02/2018 04:39 PM, Torin Woltjer wrote:
> This looks like the kind of thing I'm searching for.
>
> I'm seeing 3 components here; I'm assuming one goes on the compute hosts
> and one or both of the others go on the control nodes?

I don't believe anything goes on the compute nodes, no. I'm pretty sure the Masakari API service and engine workers live on controller nodes.

> Is there any documentation outlining the procedure for deploying
> this? Will there be any problem running the Masakari API service on 2
> machines simultaneously, sitting behind HAProxy?

Not sure. I'll leave it up to the Masakari developers to help out here. I've added the [masakari] topic to the subject line.

Best,
-jay

From torin.woltjer at granddial.com Wed May 2 21:24:55 2018
From: torin.woltjer at granddial.com (Torin Woltjer)
Date: Wed, 02 May 2018 21:24:55 GMT
Subject: Re: [Openstack] HA Compute & Instance Evacuation
Message-ID: <2081774e32e64432961a12a177aa2239@granddial.com>

I'm vaguely familiar with Pacemaker/Corosync, as I'm using it with HAProxy on my controller nodes. I'm assuming in this instance that you use Pacemaker on your compute hosts so Masakari can detect host outages? If possible, could you go into more detail about the configuration? I would like to use Masakari and I'm having trouble finding a step-by-step guide or other documentation to get started with.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jpetrini at coredial.com Thu May 3 00:21:30 2018
From: jpetrini at coredial.com (John Petrini)
Date: Wed, 2 May 2018 20:21:30 -0400
Subject: Re: [Openstack] HA Compute & Instance Evacuation
In-Reply-To: <2081774e32e64432961a12a177aa2239@granddial.com> References: <2081774e32e64432961a12a177aa2239@granddial.com> Message-ID:

Take this with a grain of salt because we're using the original version from before the project moved under the Big Tent, and I'm not sure how much it's evolved since then. I assume the basic functions are the same though.

You're correct; Corosync and Pacemaker are used to determine if a compute node goes down. The masakari-host-monitor process runs on each compute node, checks the cluster status, and sends a notification to masakari-controller when a node goes down. The controller process keeps a list of reserved hosts in its database and calls nova host-evacuate to move the instances to one of the reserved hosts.

In our environment I also configured STONITH and I'd highly recommend it. With STONITH, Pacemaker sends a shutdown command to the out-of-band management card of the unreachable node to make sure that it can't come back and cause a conflict.
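For illustration, the fencing piece in Pacemaker looks roughly like this (a sketch only, assuming IPMI-based out-of-band management; the host name, address, and credentials are placeholders for your own):

# one fencing resource per compute node, driven over IPMI
pcs stonith create fence-compute1 fence_ipmilan pcmk_host_list="compute1" ipaddr="10.0.0.101" login="admin" passwd="secret" op monitor interval=60s

When Pacemaker declares compute1 lost, it power-fences it through the management card before anything is evacuated, which is what prevents the node from coming back mid-evacuation.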
There are two other components, masakari-process-monitor and masakari-instance-monitor. These also run on your compute nodes. The former watches the nova-compute service, and the latter monitors running instances and restarts them if necessary.

Looking here, it seems they've split Masakari into three different repos: https://github.com/openstack?utf8=%E2%9C%93&q=masakari&type=&language=

masakari - the controller service and API
masakari-monitors - the compute node monitoring services
python-masakari-client - the CLI tools

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nspmangalore at gmail.com Thu May 3 07:22:44 2018
From: nspmangalore at gmail.com (Shyam Prasad N)
Date: Thu, 3 May 2018 12:52:44 +0530
Subject: [Openstack] Not able to use openstack swift using the s3 api plugin...
Message-ID:

Hi,

I tried installing the swift3 plugin using the following link:
https://docs.openstack.org/mitaka/config-reference/object-storage/configure-s3.html

However, I'm not able to perform the operations:

# ./s3curl.pl --id=personal -get -- -s -v http://proxy-server:8080
Unknown option: get
* Rebuilt URL to: http://proxy-server:8080/
* Trying 20.20.20.220...
* Connected to proxy-server (20.20.20.220) port 8080 (#0)
> GET / HTTP/1.1
> Host: proxy-server:8080
> User-Agent: curl/7.47.0
> Accept: */*
> Date: Thu, 03 May 2018 06:07:40 +0000
> Authorization: AWS 4579fb60db3a47069f289d8fd7fa3212:R4zTQJDhPB3G8wLCsNBHaWNjaZQ=
>
< HTTP/1.1 500 Internal Server Error
< x-amz-id-2: txbbe4b8dc26904cbf880e3-005aeaa72c
< x-amz-request-id: txbbe4b8dc26904cbf880e3-005aeaa72c
< Content-Type: application/xml
< X-Trans-Id: txbbe4b8dc26904cbf880e3-005aeaa72c
< Date: Thu, 03 May 2018 06:07:41 GMT
< Transfer-Encoding: chunked
<
<Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><RequestId>txbbe4b8dc26904cbf880e3-005aeaa72c</RequestId></Error><?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InvalidURI</Code>
<Message>Could not parse the specified URI</Message>
</Error>
* Connection #0 to host proxy-server left intact

Getting this in the logs...

May 2 23:07:40 localhost proxy-server: STDERR: (20652) accepted ('20.20.20.220', 51030)
May 2 23:07:41 localhost proxy-server: #015#012#015#012 InvalidURI#015#012 Could not parse the specified URI#015#012#015#012: #012Traceback (most recent call last):#012 File "/usr/lib/python2.7/dist-packages/swift3/middleware.py", line 81, in __call__#012 resp = self.handle_request(req)#012 File "/usr/lib/python2.7/dist-packages/swift3/middleware.py", line 104, in handle_request#012 res = getattr(controller, req.method)(req)#012 File "/usr/lib/python2.7/dist-packages/swift3/controllers/service.py", line 33, in GET#012 resp = req.get_response(self.app, query={'format': 'json'})#012 File "/usr/lib/python2.7/dist-packages/swift3/request.py", line 686, in get_response#012 headers, body, query)#012 File "/usr/lib/python2.7/dist-packages/swift3/request.py", line 665, in _get_response#012 raise BadSwiftRequest(err_msg)#012BadSwiftRequest: #015#012#015#012 InvalidURI#015#012 Could not parse the specified URI#015#012#015#012 (txn: txbbe4b8dc26904cbf880e3-005aeaa72c)
May 2 23:07:41 localhost proxy-server: STDERR: 20.20.20.220 - - [03/May/2018 06:07:41] "GET / HTTP/1.1" 500 724 0.058274 (txn: txbbe4b8dc26904cbf880e3-005aeaa72c)

Can someone please tell me what's going on? Please let me know if some additional data is necessary.

--
-Shyam

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From navdeep.uniyal at bristol.ac.uk Thu May 3 09:17:08 2018
From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal)
Date: Thu, 3 May 2018 09:17:08 +0000
Subject: [Openstack] SRIOV Enablement in openstack error
In-Reply-To: References: Message-ID:

Hi Moshe,

Thank you very much for figuring it out. It works now; I just had to restart the sriov-agent.

Kind Regards,
Navdeep

From: Moshe Levi
Sent: 02 May 2018 18:26
To: Navdeep Uniyal ; OpenStack Mailing List
Subject: RE: SRIOV Enablement in openstack error
[earlier reply and original report quoted in full -- snipped]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcioprado at marcioprado.eti.br Thu May 3 18:08:37 2018
From: marcioprado at marcioprado.eti.br (Marcio Prado)
Date: Thu, 03 May 2018 15:08:37 -0300
Subject: [Openstack] Upgrade Pike to Queens on Ubuntu
Message-ID:

Good afternoon everyone.

Has anyone upgraded OpenStack from Pike to Queens on Ubuntu? I have already done the upgrade from Ocata to Pike.

Thanks for listening.

--
Marcio Prado
IT Analyst - Infrastructure and Networks
Phone: (35) 9.9821-3561
www.marcioprado.eti.br

From eli at mirantis.com Thu May 3 20:56:19 2018
From: eli at mirantis.com (Evgeny L)
Date: Thu, 3 May 2018 13:56:19 -0700
Subject: Re: [Openstack] [Fuel] add custom settings to a fuel deploy
In-Reply-To: References: Message-ID:

Hi Jim,

There are a couple of ways to approach your problem:

1. Implement a Fuel plugin, where the plugin automatically re-applies the changes overridden by the Fuel core tasks [1]; you can override or skip any task in the deployment graph.

2. Fix the puppet manifests in place. The manifests used for deployment are located in the "/etc/puppet/" directory on the Fuel master node; during the deployment they are synced to the OpenStack nodes via rsync. You should grep for nova_config, keystone_config, etc., to see where the parameters you are looking for are located, and do not forget to put the directory under git so you can keep track of your changes.
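For example, for option 2 the override looks roughly like this in a manifest (a sketch only -- the exact file under /etc/puppet/ and the option shown are placeholders, pick the ones you actually changed):

# pin a custom nova.conf value so redeployments keep it
nova_config { 'DEFAULT/cpu_allocation_ratio':
  value => '4.0',
}

The same pattern works with keystone_config, neutron_config, cinder_config, and so on.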
Thanks,

[1] https://docs.openstack.org/fuel-docs/latest/plugindocs/fuel-plugin-sdk-guide.html

On Tue, May 1, 2018 at 3:42 PM, Jim Okken wrote:
> hi Jitendra,
> thanks very much for your reply!
[rest of thread quoted in full -- snipped]
Best, Kota (2018/05/03 16:22), Shyam Prasad N wrote: > Hi, > > I tried installing the swift3 plugin using the following link: > https://docs.openstack.org/mitaka/config-reference/object-storage/configure-s3.html > > However, I'm not able to perform the operations: > # ./s3curl.pl --id=personal -get -- -s -v http://proxy-server:8080 > Unknown option: get > * Rebuilt URL to: http://proxy-server:8080/ > * Trying 20.20.20.220... > * Connected to proxy-server (20.20.20.220) port 8080 (#0) >> GET / HTTP/1.1 >> Host: proxy-server:8080 >> User-Agent: curl/7.47.0 >> Accept: */* >> Date: Thu, 03 May 2018 06:07:40 +0000 >> Authorization: AWS > 4579fb60db3a47069f289d8fd7fa3212:R4zTQJDhPB3G8wLCsNBHaWNjaZQ= >> > < HTTP/1.1 500 Internal Server Error > < x-amz-id-2: txbbe4b8dc26904cbf880e3-005aeaa72c > < x-amz-request-id: txbbe4b8dc26904cbf880e3-005aeaa72c > < Content-Type: application/xml > < X-Trans-Id: txbbe4b8dc26904cbf880e3-005aeaa72c > < Date: Thu, 03 May 2018 06:07:41 GMT > < Transfer-Encoding: chunked > < > > InternalErrorWe encountered an internal error. > Please try > again.txbbe4b8dc26904cbf880e3-005aeaa72c<?xml > version="1.0" encoding="UTF-8"?> > <Error> > <Code>InvalidURI</Code> > <Message>Could not parse the specified URI</Message> > </Error> > * Connection #0 to host proxy-server left intact > > Getting this in the logs... > May 2 23:07:40 localhost proxy-server: STDERR: (20652) accepted > ('20.20.20.220', 51030) > May 2 23:07:41 localhost proxy-server: encoding="UTF-8"?>#015#012#015#012 InvalidURI#015#012 > Could not parse the specified > URI#015#012#015#012: #012Traceback (most recent call > last):#012 File "/usr/lib/python2.7/dist-packages/swift3/middleware.py", > line 81, in __call__#012 resp = self.handle_request(req)#012 File > "/usr/lib/python2.7/dist-packages/swift3/middleware.py", line 104, in > handle_request#012 res = getattr(controller, req.method)(req)#012 File > "/usr/lib/python2.7/dist-packages/swift3/controllers/service.py", line 33, > in GET#012 resp = req.get_response(self.app, query={'format': > 'json'})#012 File "/usr/lib/python2.7/dist-packages/swift3/request.py", > line 686, in get_response#012 headers, body, query)#012 File > "/usr/lib/python2.7/dist-packages/swift3/request.py", line 665, in > _get_response#012 raise BadSwiftRequest(err_msg)#012BadSwiftRequest: > #015#012#015#012 > InvalidURI#015#012 Could not parse the specified > URI#015#012#015#012 (txn: > txbbe4b8dc26904cbf880e3-005aeaa72c) > May 2 23:07:41 localhost proxy-server: STDERR: 20.20.20.220 - - > [03/May/2018 06:07:41] "GET / HTTP/1.1" 500 724 0.058274 (txn: > txbbe4b8dc26904cbf880e3-005aeaa72c) > > Can someone please tell me what's going on? > Please let me know if some additional data is necessary. > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From Tushar.Patil at nttdata.com Mon May 7 02:41:48 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Mon, 7 May 2018 02:41:48 +0000 Subject: [Openstack] HA Compute & Instance Evacuation In-Reply-To: <5ea11c5f60a64ecaa5db9d6a916fa0cf@granddial.com> References: <5ea11c5f60a64ecaa5db9d6a916fa0cf@granddial.com> Message-ID: Hi Torin, Masakari supports 4 different types of recovery methods at the time of creation of failover_segment. 1. auto: It will let nova decide on which compute host the instances should be evacuated. 2. 
reserved_host: You will first need to add reserved hosts to the failover segments. Masakari engine will select the first available reserved host from the failover segment, enable compute service in nova and then use that reserved host to evacuate the instances from the failed compute host. 3. auto_priority: it will first try to evacuate instances using 'auto' recovery method, if it's fails then it attempts to evacuate using "reserved_host" recovery method. 4. rh_priority: It's opposite of above "auto_priority" recovery method. it will first try to evacuate instances using 'reserved_host' recovery method, if it's fails then it attempts to evacuate using "auto" recovery method. In your case you will need to use "auto" recovery method. Please refer to the below documentation links for more details. Masakari system architecture: https://docs.openstack.org/masakari/latest/ Masakari api-ref: https://developer.openstack.org/api-ref/instance-ha/ To install masakari-monitors with pacemaker/corosync: https://review.openstack.org/#/c/489095/6/doc/source/install_and_configure_debian.rst Other ways to reach us: Masakari weekly meeting on #openstack-meeting IRC channel on every Tuesday at 0400 UTC or else you can post your queries on #openstack-masakari IRC channel. Regards, Tushar ________________________________________ From: Torin Woltjer Sent: Saturday, May 5, 2018 3:43:05 AM To: jpetrini at coredial.com Cc: openstack at lists.openstack.org Subject: Re: [Openstack] HA Compute & Instance Evacuation Thank you very much for the information. Just for clarification, when you say reserved hosts, do you mean that I must keep unloaded virtualization hosts in reserve? Or can Masakari move instances from a downed host to an already loaded host that has open capacity? Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged,confidential, and proprietary data. If you are not the intended recipient,please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From Pablo.Iranzo at redhat.com Mon May 7 07:06:46 2018 From: Pablo.Iranzo at redhat.com (Pablo Iranzo =?iso-8859-1?Q?G=F3mez?=) Date: Mon, 7 May 2018 09:06:46 +0200 Subject: [Openstack] HA Compute & Instance Evacuation In-Reply-To: References: Message-ID: <20180507070646.GB17404@redhat.com> +++ Torin Woltjer [02/05/18 20:39 +0000]: >> There is no HA behaviour for compute nodes. >> >> You are referring to HA of workloads running on compute nodes, not HA of >> compute nodes themselves. >It was a mistake for me to say HA when referring to compute and instances. Really I want to avoid a situation where one of my compute hosts gives up the ghost, and all of the instances are offline until someone reboots them on a different host. I would like them to automatically reboot on a healthy compute node. > >> Check out Masakari: >> >> https://wiki.openstack.org/wiki/Masakari >This looks like the kind of thing I'm searching for. > >I'm seeing 3 components here, I'm assuming one goes on compute hosts and one or both of the others go on the control nodes? Is there any documentation outlining the procedure for deploying this? Will there be any problem running the Masakari API service on 2 machines simultaneously, sitting behind HAProxy? 
Check for 'Instance HA': https://blueprints.launchpad.net/tripleo/+spec/instance-ha

Which more or less came with:
https://github.com/beekhof/osp-ha-deploy/blob/master/pcmk/compute-managed.scenario
https://github.com/beekhof/osp-ha-deploy/blob/master/pcmk/controller-managed.scenario

Ansible scripts are at git://github.com/redhat-openstack/tripleo-quickstart-utils

And enabled via:

ansible-playbook /home/stack/ansible-instanceha/playbooks/overcloud-instance-ha.yml \
-e release="RELEASE"

This of course requires a valid HA deployment setup on the controllers (usually tripleO or OSP Director).

Regards,
Pablo

>
>_______________________________________________
>Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>Post to : openstack at lists.openstack.org
>Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--
Pablo Iranzo Gómez (Pablo.Iranzo at redhat.com) GnuPG: 0x5BD8E1E4
Principal Software Maintenance Engineer - OpenStack iranzo @ IRC
RHC{A,SS,DS,VA,E,SA,SP,AOSP}, JBCAA #110-215-852 RHCA Level V
Blog: https://iranzo.github.io Citellus: https://citellus.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 228 bytes Desc: not available URL:

From torin.woltjer at granddial.com Tue May 8 18:17:57 2018
From: torin.woltjer at granddial.com (Torin Woltjer)
Date: Tue, 08 May 2018 18:17:57 GMT
Subject: [Openstack] Database (Got timeout reading communication packets)
Message-ID:

Just the other day I noticed a bunch of errors spewing from the mysql service. I've spent quite a bit of time trying to track this down, and I haven't had any luck figuring out why this is happening. The following line is repeatedly spewed in the service's journal.

May 08 11:13:47 UBNTU-DBMQ2 mysqld[20788]: 2018-05-08 11:13:47 140127545740032 [Warning] Aborted connection 211 to db: 'nova_api' user: 'nova' host: '192.168.116.21' (Got timeout reading communication packets)

It isn't always nova_api; it happens with all of the OpenStack projects, and with either of the controller nodes' IP addresses.

The database is a mariadb galera cluster. Removing haproxy has no effect. The output only occurs on the node receiving the connections; with haproxy it is multiple nodes, otherwise it is whatever node I specify as the database in my controllers' hosts files.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcioprado at marcioprado.eti.br Wed May 9 19:17:47 2018
From: marcioprado at marcioprado.eti.br (Marcio Prado)
Date: Wed, 09 May 2018 16:17:47 -0300
Subject: [Openstack] Upgrade Pike to Queens on Ubuntu
In-Reply-To: References: Message-ID:

Hi Marco,

To upgrade from Ocata to Pike, yes: follow the procedures I performed. But I have not yet done the upgrade from Pike to Queens ...
Here are the steps:

UPGRADING OPENSTACK FROM OCATA TO PIKE ON UBUNTU

UPGRADE OF THE CONTROLLER NODE

1) STOP ALL SERVICES
service glance-registry stop
service glance-api stop
service nova-api stop
service nova-conductor stop
service nova-consoleauth stop
service nova-novncproxy stop
service nova-scheduler stop
service neutron-server stop
service neutron-linuxbridge-agent stop
service neutron-dhcp-agent stop
service neutron-metadata-agent stop
service neutron-l3-agent stop
service apache2 stop

2) REMOVE THE OCATA REPOSITORY
add-apt-repository --remove cloud-archive:ocata

3) ADD THE PIKE REPOSITORY
add-apt-repository cloud-archive:pike

4) UPDATE PACKAGES
apt-get update
apt-get upgrade

5) FORCE THE OPENSTACK PACKAGE UPDATE
apt-get install <list of all packages that were not installed>

6) ACCEPT THE INSTALLATION AND REPLACEMENT OF THE CONFIGURATION FILES (A COPY OF THE CURRENT CONFIGURATION FILES IS KEPT)
Choose the option: Y
In my case, the replaced files were:
/etc/nova/nova.conf
/etc/keystone/keystone-paste.ini
/etc/keystone/keystone.conf
/etc/neutron/l3_agent.ini
/etc/neutron/neutron.conf
/etc/neutron/plugins/ml2/linuxbridge_agent.ini
/etc/neutron/metadata_agent.ini
/etc/neutron/dhcp_agent.ini
/etc/neutron/plugins/ml2/ml2_conf.ini
/etc/glance/glance-registry.conf
/etc/glance/glance-api.conf
/etc/openstack-dashboard/local_settings.py
Note: The old configuration files have the extension: .dpkg.conf

7) COMPARE THE REPLACED FILES FROM STEP 6 WITH THE .dpkg.conf FILES, MAKING THE NECESSARY CHANGES

8) UPDATE THE DATABASES
su -s /bin/sh -c "keystone-manage token_flush" keystone
su -s /bin/sh -c "keystone-manage db_sync" keystone
su -s /bin/sh -c "glance-manage db_sync" glance
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "neutron-db-manage upgrade heads" neutron
su -s /bin/sh -c "nova-manage db online_data_migrations" nova

9) REMOVE UNNECESSARY PACKAGES
apt autoremove

10) REBOOT THE SYSTEM
reboot

UPGRADE OF THE COMPUTE NODES

1) STOP ALL SERVICES
/etc/init.d/nova-compute stop
/etc/init.d/neutron-linuxbridge-agent stop
/etc/init.d/neutron-linuxbridge-cleanup stop
/etc/init.d/ceilometer-agent-compute stop

2) REMOVE THE OCATA REPOSITORY
add-apt-repository --remove cloud-archive:ocata

3) ADD THE PIKE REPOSITORY
add-apt-repository cloud-archive:pike

4) UPDATE PACKAGES
apt-get update
apt-get upgrade

5) FORCE THE OPENSTACK PACKAGE UPDATE
apt-get install <list of all packages that were not installed>

6) ACCEPT THE INSTALLATION AND REPLACEMENT OF THE CONFIGURATION FILES (A COPY OF THE CURRENT CONFIGURATION FILES IS KEPT)
Choose the option: Y
In my case, the replaced files were:
/etc/libvirt/libvirtd.conf
/etc/ceilometer/ceilometer.conf
/etc/nova/nova.conf
/etc/neutron/neutron.conf
/etc/neutron/plugins/ml2/linuxbridge_agent.ini

7) COMPARE THE REPLACED FILES FROM STEP 6 WITH THE .dpkg.conf FILES, MAKING THE NECESSARY CHANGES

8) REMOVE UNNECESSARY PACKAGES
apt autoremove

9) REBOOT THE SYSTEM
reboot

On 03-05-2018 15:27, Marco Bravo wrote:
> Hi Marcio, I've made a 3-node configuration in Pike (Packstack) but
> suddenly I lost the Availability Zone.....and I have notes about it.
> Do you have a certified (by you) method to make 3 nodes or more in
> Pike?
> I have 6 all-in-one clouds in Queens for a course I'm teaching, and
> everything works good......'cause it's all-in-one
> Thank you again for your information and time.
>
> Kind regards,
>
> Marco Bravo
>
> 2018-05-03 15:08 GMT-03:00 Marcio Prado:
>
>> Good afternoon everyone.
>> >> Has anyone upgrade the OpenStack Pike for Queens on Ubuntu? >> >> Thanks for listening. >> >> From Ocata to Pike I've already realized. >> >> -- >> Marcio Prado >> Analista de TI - Infraestrutura e Redes >> Fone: (35) 9.9821-3561 >> www.marcioprado.eti.br [1] >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [2] >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack [2] > > > > Links: > ------ > [1] http://www.marcioprado.eti.br > [2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Marcio Prado Analista de TI - Infraestrutura e Redes Fone: (35) 9.9821-3561 www.marcioprado.eti.br From torin.woltjer at granddial.com Thu May 10 14:08:58 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Thu, 10 May 2018 10:08:58 -0400 Subject: [Openstack] HA Compute & Instance Evacuation In-Reply-To: References: <5ea11c5f60a64ecaa5db9d6a916fa0cf@granddial.com> Message-ID: <22135393.phh3QcHAGS@localhost.localdomain> Hi Tushar, I followed the documentation to set up the masakari monitors, after I installed the masakari API. None of the monitor services seem to work. I keep getting an error: "AttributeError: 'module' object has no attribute 'URI'" Here is the full output: http://paste.openstack.org/show/720761/ Are you aware of what causes the issue? Can you provide any example configs for a working masakari setup? On Sunday, May 6, 2018 10:41:48 PM EDT Patil, Tushar wrote: > Hi Torin, > > Masakari supports 4 different types of recovery methods at the time of > creation of failover_segment. > > 1. auto: It will let nova decide on which compute host the instances should > be evacuated. > > 2. reserved_host: You will first need to add reserved hosts to the failover > segments. Masakari engine will select the first available reserved host > from the failover segment, enable compute service in nova and then use that > reserved host to evacuate the instances from the failed compute host. > > 3. auto_priority: it will first try to evacuate instances using 'auto' > recovery method, if it's fails then it attempts to evacuate using > "reserved_host" recovery method. > > 4. rh_priority: It's opposite of above "auto_priority" recovery method. it > will first try to evacuate instances using 'reserved_host' recovery method, > if it's fails then it attempts to evacuate using "auto" recovery method. > > In your case you will need to use "auto" recovery method. > > Please refer to the below documentation links for more details. > > Masakari system architecture: > https://docs.openstack.org/masakari/latest/ > > Masakari api-ref: > https://developer.openstack.org/api-ref/instance-ha/ > > To install masakari-monitors with pacemaker/corosync: > https://review.openstack.org/#/c/489095/6/doc/source/install_and_configure_d > ebian.rst > > Other ways to reach us: Masakari weekly meeting on #openstack-meeting IRC > channel on every Tuesday at 0400 UTC or else you can post your queries on > #openstack-masakari IRC channel. > > Regards, > Tushar From apar.subbu at gmail.com Fri May 11 04:16:28 2018 From: apar.subbu at gmail.com (APARNA SUBBURAM) Date: Fri, 11 May 2018 09:46:28 +0530 Subject: [Openstack] Query regarding Tacker of Openstack Message-ID: Hi Team, May i know how do we update a VNF using user-data using tacker. Regards, Aparna Subburam -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tushar.Patil at nttdata.com Fri May 11 04:40:58 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Fri, 11 May 2018 04:40:58 +0000 Subject: [Openstack] HA Compute & Instance Evacuation In-Reply-To: <22135393.phh3QcHAGS@localhost.localdomain> References: <5ea11c5f60a64ecaa5db9d6a916fa0cf@granddial.com> , <22135393.phh3QcHAGS@localhost.localdomain> Message-ID: Hi Torin, Presently, masakari-monitors is completely broken. Extremely sorry for the inconvenience. I think this is what is needed to make it work. Install openstacksdk version 0.13.0. Apply patch: https://review.openstack.org/#/c/546492/ In this patch ,we need to bump openstacksdk version from 0.11.2 to 0.13.0. We will merge above patch soon. Regards, Tushar Patil ________________________________________ From: Torin Woltjer Sent: Thursday, May 10, 2018 11:08:58 PM To: Patil, Tushar Cc: jpetrini at coredial.com; openstack at lists.openstack.org Subject: Re: [Openstack] HA Compute & Instance Evacuation Hi Tushar, I followed the documentation to set up the masakari monitors, after I installed the masakari API. None of the monitor services seem to work. I keep getting an error: "AttributeError: 'module' object has no attribute 'URI'" Here is the full output: http://paste.openstack.org/show/720761/ Are you aware of what causes the issue? Can you provide any example configs for a working masakari setup? On Sunday, May 6, 2018 10:41:48 PM EDT Patil, Tushar wrote: > Hi Torin, > > Masakari supports 4 different types of recovery methods at the time of > creation of failover_segment. > > 1. auto: It will let nova decide on which compute host the instances should > be evacuated. > > 2. reserved_host: You will first need to add reserved hosts to the failover > segments. Masakari engine will select the first available reserved host > from the failover segment, enable compute service in nova and then use that > reserved host to evacuate the instances from the failed compute host. > > 3. auto_priority: it will first try to evacuate instances using 'auto' > recovery method, if it's fails then it attempts to evacuate using > "reserved_host" recovery method. > > 4. rh_priority: It's opposite of above "auto_priority" recovery method. it > will first try to evacuate instances using 'reserved_host' recovery method, > if it's fails then it attempts to evacuate using "auto" recovery method. > > In your case you will need to use "auto" recovery method. > > Please refer to the below documentation links for more details. > > Masakari system architecture: > https://docs.openstack.org/masakari/latest/ > > Masakari api-ref: > https://developer.openstack.org/api-ref/instance-ha/ > > To install masakari-monitors with pacemaker/corosync: > https://review.openstack.org/#/c/489095/6/doc/source/install_and_configure_d > ebian.rst > > Other ways to reach us: Masakari weekly meeting on #openstack-meeting IRC > channel on every Tuesday at 0400 UTC or else you can post your queries on > #openstack-masakari IRC channel. > > Regards, > Tushar Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged,confidential, and proprietary data. If you are not the intended recipient,please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. 
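(For readers trying to reproduce the fix described above, the steps might translate into something like the following. This is a sketch only; it assumes pip and the git-review tool are available, and that change 546492 was still unmerged at the time:

pip install "openstacksdk==0.13.0"
git clone https://github.com/openstack/masakari-monitors.git
cd masakari-monitors
# download the proposed change from Gerrit onto a local branch
git review -d 546492
sudo python setup.py install

Once the patch is merged, installing masakari-monitors from master should make the git review step unnecessary.)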
From apar.subbu at gmail.com Fri May 11 04:59:22 2018
From: apar.subbu at gmail.com (APARNA SUBBURAM)
Date: Fri, 11 May 2018 10:29:22 +0530
Subject: [Openstack] Query regarding Tacker of Openstack
In-Reply-To: References: Message-ID:

Hi Team,

An update on the query: the VNF has been created using the NOOP driver. We just want to know how the VNF could be updated using user-data with the tacker commands.

Regards,
Aparna Subburam

On Fri, May 11, 2018 at 9:46 AM, APARNA SUBBURAM wrote:
> Hi Team,
>
> May i know how do we update a VNF using user-data using tacker.
>
> Regards,
>
> Aparna Subburam

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From phuoc.hc at dcn.ssu.ac.kr Fri May 11 08:40:43 2018
From: phuoc.hc at dcn.ssu.ac.kr (Cong Phuoc Hoang)
Date: Fri, 11 May 2018 17:40:43 +0900
Subject: [Openstack] Query regarding Tacker of Openstack
In-Reply-To: References: Message-ID:

Hi Aparna,

As far as I know, Tacker only supports updating a VNF when the mgmt driver is openwrt. That means you can update the config file of OpenWRT instances to change the rules for firewall, dns, dhcp, etc. You can look at this link: https://docs.openstack.org/tacker/latest/install/deploy_openwrt.html to see more details.

For now, you cannot update a VNF by changing the VNF's user-data. If you want to update it, I think you can contribute to Tacker for this purpose xD. The main procedure in Tacker is translating the TOSCA VNF template to a HOT template, then deploying it on OpenStack as a stack in Heat. I think we can use "Heat stack update" to do that. When I checked the Heat documentation, there was no example for updating a stack (https://docs.openstack.org/heat/queens/getting_started/create_a_stack.html), but I still hope we can do it this way. The bad way would be to delete the stack and respawn a new one, but I don't like that approach.

You can contact us via the IRC channel: #tacker. If you have any problem, you can contact me. I am happy to help you when I have some time.

Best regards,
Phuoc.

On Fri, May 11, 2018 at 1:59 PM, APARNA SUBBURAM wrote:
> Hi Team,
>
> An update on the query VNF is been created using NOOP driver. We just
> want to know how the VNF could be updated using user-data with the tacker
> commands.
>
> Regards,
> Aparna Subburam
>
> On Fri, May 11, 2018 at 9:46 AM, APARNA SUBBURAM
> wrote:
>
>> Hi Team,
>>
>> May i know how do we update a VNF using user-data using tacker.
>>
>> Regards,
>>
>> Aparna Subburam
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Francois.Palin at windriver.com Fri May 11 14:22:14 2018
From: Francois.Palin at windriver.com (Palin, Francois)
Date: Fri, 11 May 2018 14:22:14 +0000
Subject: [Openstack] [libvirt][nova] What can be used as equivalent to libvirt network hooks file?
Message-ID: <322922BC3A6831409366B0870E771E534C44D78E@ALA-MBC.corp.ad.wrs.com>

Hi all,

libvirt network hooks file ( /etc/libvirt/hooks/network ) does not get called when using neutron. Any suggestion as to what could be used instead?

I need to perform some specific actions on an interface once an instance releases it.

Thanks,

François
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From torin.woltjer at granddial.com Fri May 11 14:46:05 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Fri, 11 May 2018 10:46:05 -0400 Subject: [Openstack] HA Compute & Instance Evacuation In-Reply-To: References: <5ea11c5f60a64ecaa5db9d6a916fa0cf@granddial.com> <22135393.phh3QcHAGS@localhost.localdomain> Message-ID: <1561476.geysvssbCs@localhost.localdomain> On Friday, May 11, 2018 12:40:58 AM EDT Patil, Tushar wrote: > I think this is what is needed to make it work. > Install openstacksdk version 0.13.0. > > Apply patch: https://review.openstack.org/#/c/546492/ > > In this patch ,we need to bump openstacksdk version from 0.11.2 to 0.13.0. > We will merge above patch soon. Do you have a timetable on when the patch will be merged? If it is a relatively small window of time, I would rather wait to use the patched mainline code. Otherwise, I am willing to try to work with the patch. Additionally, patching python is something that I am not familiar with. Is there a good resource on doing this? You have been a great help so far, thanks again. From Remo at italy1.com Fri May 11 16:30:36 2018 From: Remo at italy1.com (Remo Mattei) Date: Fri, 11 May 2018 09:30:36 -0700 Subject: [Openstack] Windows images into OpenStack Message-ID: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> Hello guys, I have a need now to get a Windows VM into the OpenStack deployment. Can anyone suggest the best way to do this. I have done mostly Linux. I could use the ISO and build one within OpenStack not sure I want to go that route. I have some Windows that are coming from VMWare. Thanks, Remo From chris.friesen at windriver.com Fri May 11 16:56:58 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Fri, 11 May 2018 10:56:58 -0600 Subject: [Openstack] Windows images into OpenStack In-Reply-To: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> References: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> Message-ID: <5AF5CB5A.6020004@windriver.com> On 05/11/2018 10:30 AM, Remo Mattei wrote: > Hello guys, I have a need now to get a Windows VM into the OpenStack deployment. Can anyone suggest the best way to do this. I have done mostly Linux. I could use the ISO and build one within OpenStack not sure I want to go that route. I have some Windows that are coming from VMWare. Here are the instructions if you choose to go the ISO route: https://docs.openstack.org/image-guide/windows-image.html From lmihaiescu at gmail.com Fri May 11 17:15:38 2018 From: lmihaiescu at gmail.com (George Mihaiescu) Date: Fri, 11 May 2018 13:15:38 -0400 Subject: [Openstack] Windows images into OpenStack In-Reply-To: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> References: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> Message-ID: Cloudbase provides an evaluation image for Windows: https://cloudbase.it/windows-cloud-images/ On Fri, May 11, 2018 at 12:30 PM, Remo Mattei wrote: > Hello guys, I have a need now to get a Windows VM into the OpenStack > deployment. Can anyone suggest the best way to do this. I have done mostly > Linux. I could use the ISO and build one within OpenStack not sure I want > to go that route. I have some Windows that are coming from VMWare. 
> > Thanks, > Remo > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Fri May 11 17:34:43 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 11 May 2018 17:34:43 +0000 Subject: [Openstack] Windows images into OpenStack In-Reply-To: <5AF5CB5A.6020004@windriver.com> References: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> <5AF5CB5A.6020004@windriver.com> Message-ID: Watch out for the cores/sockets properties too. Desktop Windows can limit the available resources if every core is a different socket. See http://clouddocs.web.cern.ch/clouddocs/details/image_properties.html Tim -----Original Message----- From: Chris Friesen Date: Friday, 11 May 2018 at 19:05 To: "openstack at lists.openstack.org" Subject: Re: [Openstack] Windows images into OpenStack On 05/11/2018 10:30 AM, Remo Mattei wrote: > Hello guys, I have a need now to get a Windows VM into the OpenStack deployment. Can anyone suggest the best way to do this. I have done mostly Linux. I could use the ISO and build one within OpenStack not sure I want to go that route. I have some Windows that are coming from VMWare. Here are the instructions if you choose to go the ISO route: https://docs.openstack.org/image-guide/windows-image.html _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From remo at italy1.com Fri May 11 19:56:48 2018 From: remo at italy1.com (Remo Mattei) Date: Fri, 11 May 2018 12:56:48 -0700 Subject: [Openstack] Windows images into OpenStack In-Reply-To: References: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> <5AF5CB5A.6020004@windriver.com> Message-ID: Thanks Tim. I need windows 2016 and the cloudit has 2012. Thanks I will probably end up doing the iso. > Il giorno 11 mag 2018, alle ore 10:34, Tim Bell ha scritto: > > > > Watch out for the cores/sockets properties too. Desktop Windows can limit the available resources if every core is a different socket. See http://clouddocs.web.cern.ch/clouddocs/details/image_properties.html > > Tim > > -----Original Message----- > From: Chris Friesen > Date: Friday, 11 May 2018 at 19:05 > To: "openstack at lists.openstack.org" > Subject: Re: [Openstack] Windows images into OpenStack > >> On 05/11/2018 10:30 AM, Remo Mattei wrote: >> Hello guys, I have a need now to get a Windows VM into the OpenStack deployment. Can anyone suggest the best way to do this. I have done mostly Linux. I could use the ISO and build one within OpenStack not sure I want to go that route. I have some Windows that are coming from VMWare. 
> > Here are the instructions if you choose to go the ISO route: > > https://docs.openstack.org/image-guide/windows-image.html > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From vahric at doruk.net.tr Fri May 11 20:32:59 2018 From: vahric at doruk.net.tr (Vahric MUHTARYAN) Date: Fri, 11 May 2018 23:32:59 +0300 Subject: [Openstack] Windows images into OpenStack In-Reply-To: References: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> <5AF5CB5A.6020004@windriver.com> Message-ID: <898B2982-A442-40E8-8F45-4F34D93ABAF9@doruk.net.tr> Hello , Sorry documentation is in Turkish but I believe you will understand all steps .. https://www.evernote.com/l/ADD4o5bpbLpE75G7k1imrSWO_eiesD-HzX8 Regards VM On 11.05.2018 22:56, "Remo Mattei" wrote: Thanks Tim. I need windows 2016 and the cloudit has 2012. Thanks I will probably end up doing the iso. > Il giorno 11 mag 2018, alle ore 10:34, Tim Bell ha scritto: > > > > Watch out for the cores/sockets properties too. Desktop Windows can limit the available resources if every core is a different socket. See http://clouddocs.web.cern.ch/clouddocs/details/image_properties.html > > Tim > > -----Original Message----- > From: Chris Friesen > Date: Friday, 11 May 2018 at 19:05 > To: "openstack at lists.openstack.org" > Subject: Re: [Openstack] Windows images into OpenStack > >> On 05/11/2018 10:30 AM, Remo Mattei wrote: >> Hello guys, I have a need now to get a Windows VM into the OpenStack deployment. Can anyone suggest the best way to do this. I have done mostly Linux. I could use the ISO and build one within OpenStack not sure I want to go that route. I have some Windows that are coming from VMWare. 
> > Here are the instructions if you choose to go the ISO route: > > https://docs.openstack.org/image-guide/windows-image.html > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From Remo at italy1.com Fri May 11 23:39:36 2018 From: Remo at italy1.com (Remo Mattei) Date: Fri, 11 May 2018 16:39:36 -0700 Subject: [Openstack] Windows images into OpenStack In-Reply-To: <898B2982-A442-40E8-8F45-4F34D93ABAF9@doruk.net.tr> References: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> <5AF5CB5A.6020004@windriver.com> <898B2982-A442-40E8-8F45-4F34D93ABAF9@doruk.net.tr> Message-ID: <0DC03A42-5A5D-44A6-AF0D-BCC94DF46E78@italy1.com> Thanks > On May 11, 2018, at 1:32 PM, Vahric MUHTARYAN wrote: > > Hello , > Sorry documentation is in Turkish but I believe you will understand all steps .. > https://www.evernote.com/l/ADD4o5bpbLpE75G7k1imrSWO_eiesD-HzX8 > Regards > VM > > On 11.05.2018 22:56, "Remo Mattei" wrote: > > Thanks Tim. > > I need windows 2016 and the cloudit has 2012. > > Thanks I will probably end up doing the iso. > >> Il giorno 11 mag 2018, alle ore 10:34, Tim Bell ha scritto: >> >> >> >> Watch out for the cores/sockets properties too. Desktop Windows can limit the available resources if every core is a different socket. See http://clouddocs.web.cern.ch/clouddocs/details/image_properties.html >> >> Tim >> >> -----Original Message----- >> From: Chris Friesen >> Date: Friday, 11 May 2018 at 19:05 >> To: "openstack at lists.openstack.org" >> Subject: Re: [Openstack] Windows images into OpenStack >> >>> On 05/11/2018 10:30 AM, Remo Mattei wrote: >>> Hello guys, I have a need now to get a Windows VM into the OpenStack deployment. Can anyone suggest the best way to do this. I have done mostly Linux. I could use the ISO and build one within OpenStack not sure I want to go that route. I have some Windows that are coming from VMWare. 
>> >> Here are the instructions if you choose to go the ISO route: >> >> https://docs.openstack.org/image-guide/windows-image.html >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > From dev.faz at gmail.com Sat May 12 04:27:39 2018 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Sat, 12 May 2018 06:27:39 +0200 Subject: [Openstack] [libvirt][nova] What can be used as equivalent to libvirt network hooks file? In-Reply-To: <322922BC3A6831409366B0870E771E534C44D78E@ALA-MBC.corp.ad.wrs.com> References: <322922BC3A6831409366B0870E771E534C44D78E@ALA-MBC.corp.ad.wrs.com> Message-ID: <50325F63-D203-46CD-9B70-587CB78728AC@gmail.com> Hi, did you try hooks/qemu? Fabian Am 11. Mai 2018 16:22:14 MESZ schrieb "Palin, Francois" : >Hi all, > >libvirt network hooks file ( /etc/libvirt/hooks/network ) does not get >called when using neutron. >Any suggestion as to what could be used instead? > >I need to perform some specific actions on an interface once an >instance releases it. > >Thanks, > >François -------------- next part -------------- An HTML attachment was scrubbed... URL: From Francois.Palin at windriver.com Sat May 12 15:26:35 2018 From: Francois.Palin at windriver.com (Palin, Francois) Date: Sat, 12 May 2018 15:26:35 +0000 Subject: [Openstack] [libvirt][nova] What can be used as equivalent to libvirt network hooks file? In-Reply-To: <50325F63-D203-46CD-9B70-587CB78728AC@gmail.com> References: <322922BC3A6831409366B0870E771E534C44D78E@ALA-MBC.corp.ad.wrs.com>, <50325F63-D203-46CD-9B70-587CB78728AC@gmail.com> Message-ID: <322922BC3A6831409366B0870E771E534C44D82C@ALA-MBC.corp.ad.wrs.com> Hi Fabian, Yes, I have looked at hooks/qemu, and the xml data it receives doesn't have the information needed to be able to logically derive the network and interface. Thanks, François ________________________________ From: Fabian Zimmermann [dev.faz at gmail.com] Sent: Saturday, May 12, 2018 12:27 AM To: openstack at lists.openstack.org; Palin, Francois; openstack at lists.openstack.org Subject: Re: [Openstack] [libvirt][nova] What can be used as equivalent to libvirt network hooks file? Hi, did you try hooks/qemu? Fabian Am 11. Mai 2018 16:22:14 MESZ schrieb "Palin, Francois" : Hi all, libvirt network hooks file ( /etc/libvirt/hooks/network ) does not get called when using neutron. Any suggestion as to what could be used instead? I need to perform some specific actions on an interface once an instance releases it. Thanks, François -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev.faz at gmail.com Sat May 12 16:47:25 2018 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Sat, 12 May 2018 18:47:25 +0200 Subject: [Openstack] [libvirt][nova] What can be used as equivalent to libvirt network hooks file? 
In-Reply-To: <322922BC3A6831409366B0870E771E534C44D82C@ALA-MBC.corp.ad.wrs.com> References: <322922BC3A6831409366B0870E771E534C44D78E@ALA-MBC.corp.ad.wrs.com>, <50325F63-D203-46CD-9B70-587CB78728AC@gmail.com> <322922BC3A6831409366B0870E771E534C44D82C@ALA-MBC.corp.ad.wrs.com> Message-ID: <1A0AF172-AA8F-429E-930B-A6EA74A9BA92@gmail.com> Hi, what are you trying to execute? Maybe there is a easier way to reach your goal? f. e. a udev-trigger or something like listening for events in your message-queue. Fabian Am 12. Mai 2018 17:26:35 MESZ schrieb "Palin, Francois" : >Hi Fabian, > >Yes, I have looked at hooks/qemu, and the xml data it receives doesn't >have >the information needed to be able to logically derive the network and >interface. > >Thanks, > >François > > >________________________________ >From: Fabian Zimmermann [dev.faz at gmail.com] >Sent: Saturday, May 12, 2018 12:27 AM >To: openstack at lists.openstack.org; Palin, Francois; >openstack at lists.openstack.org >Subject: Re: [Openstack] [libvirt][nova] What can be used as equivalent >to libvirt network hooks file? > >Hi, > >did you try > >hooks/qemu? > >Fabian > >Am 11. Mai 2018 16:22:14 MESZ schrieb "Palin, Francois" >: >Hi all, > >libvirt network hooks file ( /etc/libvirt/hooks/network ) does not get >called when using neutron. >Any suggestion as to what could be used instead? > >I need to perform some specific actions on an interface once an >instance releases it. > >Thanks, > >François -------------- next part -------------- An HTML attachment was scrubbed... URL: From Francois.Palin at windriver.com Sat May 12 18:03:08 2018 From: Francois.Palin at windriver.com (Palin, Francois) Date: Sat, 12 May 2018 18:03:08 +0000 Subject: [Openstack] [libvirt][nova] What can be used as equivalent to libvirt network hooks file? In-Reply-To: <1A0AF172-AA8F-429E-930B-A6EA74A9BA92@gmail.com> References: <322922BC3A6831409366B0870E771E534C44D78E@ALA-MBC.corp.ad.wrs.com>, <50325F63-D203-46CD-9B70-587CB78728AC@gmail.com> <322922BC3A6831409366B0870E771E534C44D82C@ALA-MBC.corp.ad.wrs.com>, <1A0AF172-AA8F-429E-930B-A6EA74A9BA92@gmail.com> Message-ID: <322922BC3A6831409366B0870E771E534C44D844@ALA-MBC.corp.ad.wrs.com> Hi Fabian, I'm working on a specific issue where an interface is configured up for sriov physical function passthrough. The instance boot command can then specify to use that interface as either pci-sriov or pci-passthrough. Now the following is a known issue: If the PF is used, the VF number stored in the sriov_numvfs file is lost. If the PF is attached again to the operating system, the number of VFs assigned to this interface will be zero. And the recommended fix for this is to create an /sbin/ifup-local file that will set sriov_numvfs back to its original value. The problem I have now is that our setup/framework prevents the /sbin/ifup-local from being called. So I'm trying to figure out a way to execute a script once the PF gets re-attached the operating system, to put sriov_numvfs back to its original value. I will now look into your suggestions below. Thanks again, François ________________________________ From: Fabian Zimmermann [dev.faz at gmail.com] Sent: Saturday, May 12, 2018 12:47 PM To: Palin, Francois; openstack at lists.openstack.org Subject: RE: [Openstack] [libvirt][nova] What can be used as equivalent to libvirt network hooks file? Hi, what are you trying to execute? Maybe there is a easier way to reach your goal? f. e. a udev-trigger or something like listening for events in your message-queue. Fabian Am 12. 
Mai 2018 17:26:35 MESZ schrieb "Palin, Francois" : Hi Fabian, Yes, I have looked at hooks/qemu, and the xml data it receives doesn't have the information needed to be able to logically derive the network and interface. Thanks, François ________________________________ From: Fabian Zimmermann [dev.faz at gmail.com] Sent: Saturday, May 12, 2018 12:27 AM To: openstack at lists.openstack.org; Palin, Francois; openstack at lists.openstack.org Subject: Re: [Openstack] [libvirt][nova] What can be used as equivalent to libvirt network hooks file? Hi, did you try hooks/qemu? Fabian Am 11. Mai 2018 16:22:14 MESZ schrieb "Palin, Francois" : Hi all, libvirt network hooks file ( /etc/libvirt/hooks/network ) does not get called when using neutron. Any suggestion as to what could be used instead? I need to perform some specific actions on an interface once an instance releases it. Thanks, François -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tushar.Patil at nttdata.com Mon May 14 08:07:37 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Mon, 14 May 2018 08:07:37 +0000 Subject: [Openstack] HA Compute & Instance Evacuation In-Reply-To: <1561476.geysvssbCs@localhost.localdomain> References: <5ea11c5f60a64ecaa5db9d6a916fa0cf@granddial.com> <22135393.phh3QcHAGS@localhost.localdomain> , <1561476.geysvssbCs@localhost.localdomain> Message-ID: Hi Torin, >> Do you have a timetable on when the patch will be merged? If it is a relatively small window of time, I would rather wait to use >> the patched mainline code. You should be able to test masakari successfully as below three patches are already merged. 1. https://review.openstack.org/#/c/546492/15 - openstack/masakari-monitors (it doesn't use masakariclient any more) 2. https://review.openstack.org/#/c/567781/ - openstack/requirements (openstacksdk lower constraints updated to 0.13.0) 3. https://review.openstack.org/#/c/536653/ - openstack/masakari (change service-type from "ha" to "instance-ha". If you are planning to install Openstack using latest devstack, then it will install openstacksdk 0.13.0 by default. No need to take any further action by yourself otherwise you need to ensure that you have correct version of openstacksdk (0.13.0) and also add masakari endpoint to use the correct service-type. Recommend to install latest masakari using devstack. 4. https://review.openstack.org/#/c/557634/2 - python-masakariclient (This patch needs to be merged ASAP) If you are planning to use python-masakariclient to create failover segments or add hosts etc, then you will need to wait until this patch is merged. We need to update this patch to add correct version of openstacksdk in requirements.txt. We will merge this particular patch by tomorrow. But if you plan to add failover segment/hosts by calling RestFul API using curl or any other method, then probably you won't face any issues. Regards, Tushar Patil ________________________________________ From: Torin Woltjer Sent: Friday, May 11, 2018 11:46:05 PM To: Patil, Tushar Cc: jpetrini at coredial.com; openstack at lists.openstack.org Subject: Re: [Openstack] HA Compute & Instance Evacuation On Friday, May 11, 2018 12:40:58 AM EDT Patil, Tushar wrote: > I think this is what is needed to make it work. > Install openstacksdk version 0.13.0. > > Apply patch: https://review.openstack.org/#/c/546492/ > > In this patch ,we need to bump openstacksdk version from 0.11.2 to 0.13.0. > We will merge above patch soon. Do you have a timetable on when the patch will be merged? 
If it is a relatively small window of time, I would rather wait to use the patched mainline code. Otherwise, I am willing to try to work with the patch. Additionally, patching python is something that I am not familiar with. Is there a good resource on doing this? You have been a great help so far, thanks again. Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged,confidential, and proprietary data. If you are not the intended recipient,please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From eblock at nde.ag Mon May 14 09:32:01 2018 From: eblock at nde.ag (Eugen Block) Date: Mon, 14 May 2018 09:32:01 +0000 Subject: [Openstack] Database (Got timeout reading communication packets) In-Reply-To: Message-ID: <20180514093201.Horde.-HKqJD0BxhxFK-jlXszzm_t@webmail.nde.ag> Hi, are these interruptions occasionally or do they occur all the time? Is this a new issue or has this happened before? Does the openstack environment work as expected despite these messages or do you experience interruptions in the services? I would check the network setup first (I have read about loose cables in different threads...), maybe run some ping tests between the machines to see if there's anything weird. Since you mention different services reporting these interruptions this seems like a network issue to me. Regards, Eugen Zitat von Torin Woltjer : > Just the other day I noticed a bunch of errors spewing from the > mysql service. I've spent quite a bit of time trying to track this > down, and I haven't had any luck figuring out why this is happening. > The following line is repeatedly spewed in the service's journal. > > May 08 11:13:47 UBNTU-DBMQ2 mysqld[20788]: 2018-05-08 11:13:47 > 140127545740032 [Warning] Aborted connection 211 to db: 'nova_api' > user: 'nova' host: '192.168.116.21' (Got timeout reading > communication packets) > > It isn't always nova_api, it's happening with all of the openstack > projects. And either of the controller node's ip addresses. > > The database is a mariadb galera cluster. Removing haproxy has no > effect. The output only occurs on the node receiving the > connections; with haproxy it is multiple nodes, otherwise it is > whatever node I specify as database in my controllers' host file's. From torin.woltjer at granddial.com Mon May 14 14:02:21 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Mon, 14 May 2018 14:02:21 GMT Subject: [Openstack] Database (Got timeout reading communication packets) Message-ID: >are these interruptions occasionally or do they occur all the time? Is >this a new issue or has this happened before? This is a 3 node Galera cluster on 3 KVM virtual machines. The errors are constantly printing in the logs, and no node is excluded from receiving the errors. I don't know whether they had always been there or not, but I noticed them after an update. >Does the openstack environment work as expected despite these messages >or do you experience interruptions in the services? The openstack services operate normally, the dashboard is fairly slow, but it always has been. >I would check the network setup first (I have read about loose cables >in different threads...), maybe run some ping tests between the >machines to see if there's anything weird. 
Since you mention different >services reporting these interruptions this seems like a network issue >to me. The hosts are all networked with bonded 10G SFP+ cables networked via a switch. Pings between the VMs seem fine. If I were to guess, any networking problem would be between the guest and host due to libvirt. Anything that I should be looking for there? -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Mon May 14 14:41:09 2018 From: eblock at nde.ag (Eugen Block) Date: Mon, 14 May 2018 14:41:09 +0000 Subject: [Openstack] Database (Got timeout reading communication packets) In-Reply-To: Message-ID: <20180514144109.Horde.fXlCpjUCXpicrDEvVwISQEf@webmail.nde.ag> While I was working on something else I remembered the error messages you described, I have them, too. It's a lab environment on hardware nodes with a sufficient network connection, and since we had to debug network issues before, we can rule out network problems in our case. I found a website [1] to track down galera issues, I tried to apply those steps and it seems that the openstack code doesn't close the connections properly, hence the aborted connections. I'm not sure if this is the correct interpretation, but since I didn't face any problems related to the openstack databases I decided to ignore these messages as long as the openstack environment works properly. Regards, Eugen [1] https://www.fromdual.ch/abbrechende-mariadb-mysql-verbindungen Zitat von Torin Woltjer : >> are these interruptions occasionally or do they occur all the time? Is >> this a new issue or has this happened before? > > This is a 3 node Galera cluster on 3 KVM virtual machines. The errors are > constantly printing in the logs, and no node is excluded from receiving the > errors. I don't know whether they had always been there or not, but I > noticed them after an update. > >> Does the openstack environment work as expected despite these messages >> or do you experience interruptions in the services? > > The openstack services operate normally, the dashboard is fairly slow, but it > always has been. > >> I would check the network setup first (I have read about loose cables >> in different threads...), maybe run some ping tests between the >> machines to see if there's anything weird. Since you mention different >> services reporting these interruptions this seems like a network issue >> to me. > > The hosts are all networked with bonded 10G SFP+ cables networked via a > switch. Pings between the VMs seem fine. If I were to guess, any networking > problem would be between the guest and host due to libvirt. Anything that I > should be looking for there? From torin.woltjer at granddial.com Mon May 14 14:51:57 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Mon, 14 May 2018 14:51:57 GMT Subject: [Openstack] Database (Got timeout reading communication packets) Message-ID: <40c060e87df74c8b9ffe30f6013a3152@granddial.com> >While I was working on something else I remembered the error messages >you described, I have them, too. It's a lab environment on hardware >nodes with a sufficient network connection, and since we had to debug >network issues before, we can rule out network problems in our case. >I found a website [1] to track down galera issues, I tried to apply >those steps and it seems that the openstack code doesn't close the >connections properly, hence the aborted connections. 
>I'm not sure if this is the correct interpretation, but since I didn't >face any problems related to the openstack databases I decided to >ignore these messages as long as the openstack environment works >properly. I did think something similar to this initially when I noticed a high number of sleeping connections, but because I was unsure I thought to ask. Because this effects all Openstack services as a whole, what project would I file a bug report on? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alosharih at gmail.com Tue May 15 11:00:29 2018 From: alosharih at gmail.com (hassan aloshari) Date: Tue, 15 May 2018 13:00:29 +0200 Subject: [Openstack] =?utf-8?q?=28no_subject=29?= Message-ID: alosharih at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From torin.woltjer at granddial.com Tue May 15 14:36:11 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Tue, 15 May 2018 14:36:11 GMT Subject: [Openstack] Masakari client error Message-ID: I am using the masakari client version 5.0.0 installed from python pip. I keep getting the following error: ("'Connection' object has no attribute 'ha'", ', mode 'w' at 0x7f6ee88791e0>) when I try to run any commands with it: segment-list host-list etc. It's entirely possible that I'm missing some peice of configuration, or have something improperly configured, but there isn't sufficient documentation for me to figure out if or what. Anybody have a working example that I can see, or know if this an issue? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tushar.Patil at nttdata.com Wed May 16 04:29:01 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Wed, 16 May 2018 04:29:01 +0000 Subject: [Openstack] Masakari client error In-Reply-To: References: Message-ID: Hi Torin, Few days back, this patch [1] got merged in which the service type is changed from "ha" to "instance_ha". We have tried reproducing the issue you are facing but we are not getting the exact same error. With different versions of openstacksdk, we got different errors. Masakariclient/masakari-monitors requires openstacksdk version 0.13.0. Today we have fixed LP bug [2] in patch [3] which should also fix the issue you are facing. We will release another version of python-masakariclient soon. Are you installing masakari using devstack? If yes, please install masakari from scratch. After installing latest masakari, you should be able to run segment-list and host-list using openstack commands. If you want to run same commands using masakariclient, then you will need to wait until new version of masakariclient is released or you can apply patch [3] in your environment. If you need any help in applying patches, please ask for help on #openstack-masakari IRC. Simple way to install latest masakariclient from code: 1. git clone https://github.com/openstack/python-masakariclient.git 2. Go to folder python-masakariclient 3. sudo python setup.py install If you find any issues in Masakari, you can also report bugs in launchpad against below respective projects. http://launchpad.net/python-masakariclient https://launchpad.net/masakari-monitors https://launchpad.net/masakari Hope this helps!!! 
[1] : https://review.openstack.org/#/c/536653/ [2] : https://bugs.launchpad.net/python-masakariclient/+bug/1756047 [3] : https://review.openstack.org/#/c/557634/ Regards, Tushar Patil ________________________________________ From: Torin Woltjer Sent: Tuesday, May 15, 2018 11:36:11 PM To: openstack at lists.openstack.org Subject: [Openstack] Masakari client error I am using the masakari client version 5.0.0 installed from python pip. I keep getting the following error: ("'Connection' object has no attribute 'ha'", ', mode 'w' at 0x7f6ee88791e0>) when I try to run any commands with it: segment-list host-list etc. It's entirely possible that I'm missing some peice of configuration, or have something improperly configured, but there isn't sufficient documentation for me to figure out if or what. Anybody have a working example that I can see, or know if this an issue? Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged,confidential, and proprietary data. If you are not the intended recipient,please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From torin.woltjer at granddial.com Wed May 16 12:30:47 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Wed, 16 May 2018 12:30:47 GMT Subject: [Openstack] Masakari client error Message-ID: Hello again, I am not using the git version of masakari anymore, I am using the version installed from python pip. I am using Pike and not Queens so the openstacksdk version 13 is not available in the repository. Should openstacksdk version 0.13 still work with Pike, and should this version of masakari still work with Pike? Thanks, Torin Woltjer Grand Dial Communications - A ZK Tech Inc. Company 616.776.1066 ext. 2006 www.granddial.com ---------------------------------------- From: "Patil, Tushar" Sent: 5/16/18 12:29 AM To: "openstack at lists.openstack.org" , "torin.woltjer at granddial.com" Subject: Re: [Openstack] Masakari client error Hi Torin, Few days back, this patch [1] got merged in which the service type is changed from "ha" to "instance_ha". We have tried reproducing the issue you are facing but we are not getting the exact same error. With different versions of openstacksdk, we got different errors. Masakariclient/masakari-monitors requires openstacksdk version 0.13.0. Today we have fixed LP bug [2] in patch [3] which should also fix the issue you are facing. We will release another version of python-masakariclient soon. Are you installing masakari using devstack? If yes, please install masakari from scratch. After installing latest masakari, you should be able to run segment-list and host-list using openstack commands. If you want to run same commands using masakariclient, then you will need to wait until new version of masakariclient is released or you can apply patch [3] in your environment. If you need any help in applying patches, please ask for help on #openstack-masakari IRC. Simple way to install latest masakariclient from code: 1. git clone https://github.com/openstack/python-masakariclient.git 2. Go to folder python-masakariclient 3. sudo python setup.py install If you find any issues in Masakari, you can also report bugs in launchpad against below respective projects. http://launchpad.net/python-masakariclient https://launchpad.net/masakari-monitors https://launchpad.net/masakari Hope this helps!!! 
[1] : https://review.openstack.org/#/c/536653/ [2] : https://bugs.launchpad.net/python-masakariclient/+bug/1756047 [3] : https://review.openstack.org/#/c/557634/ Regards, Tushar Patil ________________________________________ From: Torin Woltjer Sent: Tuesday, May 15, 2018 11:36:11 PM To: openstack at lists.openstack.org Subject: [Openstack] Masakari client error I am using the masakari client version 5.0.0 installed from python pip. I keep getting the following error: ("'Connection' object has no attribute 'ha'", ', mode 'w' at 0x7f6ee88791e0>) when I try to run any commands with it: segment-list host-list etc. It's entirely possible that I'm missing some peice of configuration, or have something improperly configured, but there isn't sufficient documentation for me to figure out if or what. Anybody have a working example that I can see, or know if this an issue? Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged,confidential, and proprietary data. If you are not the intended recipient,please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 16 12:46:51 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 May 2018 12:46:51 +0000 Subject: [Openstack] Masakari client error In-Reply-To: References: Message-ID: <20180516124651.szypxevy3mvlrydr@yuggoth.org> On 2018-05-16 12:30:47 +0000 (+0000), Torin Woltjer wrote: [...] > I am using Pike and not Queens so the openstacksdk version 13 is > not available in the repository. Should openstacksdk version 0.13 > still work with Pike [...] OpenStackSDK strives for backwards-compatibility with even fairly ancient OpenStack releases, and is not tied to any particular version of OpenStack services. It should always be safe to run the latest releases of OpenStackSDK no matter the age of the deployment with which you intend to communicate. Note however that the dependencies of OpenStackSDK may conflict with dependencies of some OpenStack service, so you can't necessarily expect to be able to co-install them on the same machine without some means of context separation (virtualenvs, containers, pip install --local, et cetera). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From torin.woltjer at granddial.com Wed May 16 13:32:10 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Wed, 16 May 2018 13:32:10 GMT Subject: [Openstack] Masakari client error Message-ID: It looks like pip install actually upgraded my openstacksdk to 0.13 when I installed masakari from pip. Meanwhile the sdk in the 16.04 repository is 0.9.17. I'm wondering now if this might explain why my block storage is also having problems. What is the process for setting up a local environment for separate versions of the SDK (With different services using each?) Torin Woltjer Grand Dial Communications - A ZK Tech Inc. Company 616.776.1066 ext. 2006 www.granddial.com ---------------------------------------- From: Jeremy Stanley Sent: 5/16/18 8:59 AM To: openstack at lists.openstack.org Subject: Re: [Openstack] Masakari client error On 2018-05-16 12:30:47 +0000 (+0000), Torin Woltjer wrote: [...] 
> I am using Pike and not Queens so the openstacksdk version 13 is > not available in the repository. Should openstacksdk version 0.13 > still work with Pike [...] OpenStackSDK strives for backwards-compatibility with even fairly ancient OpenStack releases, and is not tied to any particular version of OpenStack services. It should always be safe to run the latest releases of OpenStackSDK no matter the age of the deployment with which you intend to communicate. Note however that the dependencies of OpenStackSDK may conflict with dependencies of some OpenStack service, so you can't necessarily expect to be able to co-install them on the same machine without some means of context separation (virtualenvs, containers, pip install --local, et cetera). -- Jeremy Stanley _______________________________________________ Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack Post to : openstack at lists.openstack.org Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tushar.Patil at nttdata.com Thu May 17 01:02:38 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Thu, 17 May 2018 01:02:38 +0000 Subject: [Openstack] Masakari client error In-Reply-To: References: Message-ID: Hi Torin, If you are using stable/pike, then it is recommended to use python-masakariclient version 3.0.1 [1] which requires openstacksdk version 0.9.17. Are you trying to upgrade your stable/pike environment to the latest rocky-milestone1 (all services including Masakari)? [1] : https://github.com/openstack/requirements/blob/stable/pike/upper-constraints.txt Regards, Tushar Patil ________________________________________ From: Torin Woltjer Sent: Wednesday, May 16, 2018 10:32:10 PM To: fungi at yuggoth.org; openstack at lists.openstack.org Subject: Re: [Openstack] Masakari client error It looks like pip install actually upgraded my openstacksdk to 0.13 when I installed masakari from pip. Meanwhile the sdk in the 16.04 repository is 0.9.17. I'm wondering now if this might explain why my block storage is also having problems. What is the process for setting up a local environment for separate versions of the SDK (With different services using each?) Torin Woltjer Grand Dial Communications - A ZK Tech Inc. Company 616.776.1066 ext. 2006 www.granddial.com ________________________________ From: Jeremy Stanley Sent: 5/16/18 8:59 AM To: openstack at lists.openstack.org Subject: Re: [Openstack] Masakari client error On 2018-05-16 12:30:47 +0000 (+0000), Torin Woltjer wrote: [...] > I am using Pike and not Queens so the openstacksdk version 13 is > not available in the repository. Should openstacksdk version 0.13 > still work with Pike [...] OpenStackSDK strives for backwards-compatibility with even fairly ancient OpenStack releases, and is not tied to any particular version of OpenStack services. It should always be safe to run the latest releases of OpenStackSDK no matter the age of the deployment with which you intend to communicate. Note however that the dependencies of OpenStackSDK may conflict with dependencies of some OpenStack service, so you can't necessarily expect to be able to co-install them on the same machine without some means of context separation (virtualenvs, containers, pip install --local, et cetera). 
--
Jeremy Stanley

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

From berendt at betacloud-solutions.de  Thu May 17 12:29:04 2018
From: berendt at betacloud-solutions.de (Christian Berendt)
Date: Thu, 17 May 2018 14:29:04 +0200
Subject: [Openstack] Windows images into OpenStack
In-Reply-To:
References: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> <5AF5CB5A.6020004@windriver.com>
Message-ID: <9875DC40-C958-4416-AE5A-071960469EAE@betacloud-solutions.de>

> On 11. May 2018, at 21:56, Remo Mattei wrote:
>
> I need windows 2016 and the cloudit has 2012.
>
> Thanks I will probably end up doing the iso.

Cloudbase offers the tool to build their images on Github.

https://github.com/cloudbase/windows-openstack-imaging-tools

—snip—
The following versions of Windows images (both x86 / x64, if existent) to be generated are supported:

• Windows Server 2008 / 2008 R2
• Windows Server 2012 / 2012 R2
• Windows Server 2016
• Windows 7 / 8 / 8.1 / 10
—snap—

So you can build an image for Windows 2016.

HTH, Christian.

--
Christian Berendt
Chief Executive Officer (CEO)

Mail: berendt at betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139

From torin.woltjer at granddial.com  Thu May 17 15:06:23 2018
From: torin.woltjer at granddial.com (Torin Woltjer)
Date: Thu, 17 May 2018 15:06:23 GMT
Subject: [Openstack] Masakari client error
Message-ID:

Hi Tushar,
Thanks for linking to that document, I hadn't seen it before and it's very useful. As far as milestones are concerned, I was planning on sticking with Pike. Up until this point I've been using the packages from http://ubuntu-cloud.archive.canonical.com/ubuntu xenial-updates/pike main; when I installed the latest packages from pip, I was ignorant of what I was doing and what would happen. I may switch to queens or rocky, but I would like to upgrade to the latest Ubuntu LTS if I am to do that (bionic only has a repo for rocky but not queens, I believe).

----------------------------------------
From: "Patil, Tushar"
Sent: 5/16/18 9:03 PM
To: "fungi at yuggoth.org", "openstack at lists.openstack.org", "torin.woltjer at granddial.com"
Subject: Re: [Openstack] Masakari client error

Hi Torin,

If you are using stable/pike, then it is recommended to use python-masakariclient version 3.0.1 [1] which requires openstacksdk version 0.9.17.

Are you trying to upgrade your stable/pike environment to the latest rocky-milestone1 (all services including Masakari)?
[1] : https://github.com/openstack/requirements/blob/stable/pike/upper-constraints.txt

Regards,
Tushar Patil

________________________________________
From: Torin Woltjer
Sent: Wednesday, May 16, 2018 10:32:10 PM
To: fungi at yuggoth.org; openstack at lists.openstack.org
Subject: Re: [Openstack] Masakari client error

It looks like pip install actually upgraded my openstacksdk to 0.13 when I installed masakari from pip. Meanwhile the sdk in the 16.04 repository is 0.9.17. I'm wondering now if this might explain why my block storage is also having problems. What is the process for setting up a local environment for separate versions of the SDK (with different services using each)?

Torin Woltjer
Grand Dial Communications - A ZK Tech Inc. Company
616.776.1066 ext. 2006
www.granddial.com

________________________________
From: Jeremy Stanley
Sent: 5/16/18 8:59 AM
To: openstack at lists.openstack.org
Subject: Re: [Openstack] Masakari client error

On 2018-05-16 12:30:47 +0000 (+0000), Torin Woltjer wrote:
[...]
> I am using Pike and not Queens so the openstacksdk version 0.13 is
> not available in the repository. Should openstacksdk version 0.13
> still work with Pike
[...]

OpenStackSDK strives for backwards-compatibility with even fairly ancient OpenStack releases, and is not tied to any particular version of OpenStack services. It should always be safe to run the latest releases of OpenStackSDK no matter the age of the deployment with which you intend to communicate. Note however that the dependencies of OpenStackSDK may conflict with dependencies of some OpenStack service, so you can't necessarily expect to be able to co-install them on the same machine without some means of context separation (virtualenvs, containers, pip install --local, et cetera).
--
Jeremy Stanley

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Remo at italy1.com  Thu May 17 15:43:40 2018
From: Remo at italy1.com (Remo Mattei)
Date: Thu, 17 May 2018 08:43:40 -0700
Subject: [Openstack] Windows images into OpenStack
In-Reply-To: <9875DC40-C958-4416-AE5A-071960469EAE@betacloud-solutions.de>
References: <450812E1-DF89-4ED1-A1A4-97B93EA11DC1@italy1.com> <5AF5CB5A.6020004@windriver.com> <9875DC40-C958-4416-AE5A-071960469EAE@betacloud-solutions.de>
Message-ID: <2B6B3C2E-CA8E-459D-8E0F-DBE9E4F31F8F@italy1.com>

Hi Christian,
I have built it for 2016 and it's working now. I did not use the GitHub below.

Remo

> On May 17, 2018, at 5:29 AM, Christian Berendt wrote:
>
>
>> On 11. May 2018, at 21:56, Remo Mattei wrote:
>>
>> I need windows 2016 and the cloudit has 2012.
>>
>> Thanks I will probably end up doing the iso.
>
> Cloudbase offers the tool to build their images on Github.
>
> https://github.com/cloudbase/windows-openstack-imaging-tools
>
> —snip—
> The following versions of Windows images (both x86 / x64, if existent) to be generated are supported:
>
> • Windows Server 2008 / 2008 R2
> • Windows Server 2012 / 2012 R2
> • Windows Server 2016
> • Windows 7 / 8 / 8.1 / 10
> —snap—
>
> So you can build an image for Windows 2016.
>
> HTH, Christian.
>
> --
> Christian Berendt
> Chief Executive Officer (CEO)
>
> Mail: berendt at betacloud-solutions.de
> Web: https://www.betacloud-solutions.de
>
> Betacloud Solutions GmbH
> Teckstrasse 62 / 70190 Stuttgart / Deutschland
>
> Geschäftsführer: Christian Berendt
> Unternehmenssitz: Stuttgart
> Amtsgericht: Stuttgart, HRB 756139

From sevilla.larry.oss at gmail.com  Fri May 18 06:52:13 2018
From: sevilla.larry.oss at gmail.com (Larry Sevilla)
Date: Fri, 18 May 2018 14:52:13 +0800
Subject: [Openstack] antiX as host
Message-ID:

Has anybody tried to set up an OpenStack environment with antiX as host?

I tried to install with the guide from "https://docs.openstack.org/install-guide/". I used the ubuntu portion since both ubuntu and antiX are derived from debian.

But I stopped at "https://docs.openstack.org/install-guide/environment-etcd-ubuntu.html", at #3 "Create and edit the /lib/systemd/system/etcd.service file:"

Sorry, I'm not familiar with services; is there an equivalent for non-systemd?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From thomas at goirand.fr  Fri May 18 08:54:20 2018
From: thomas at goirand.fr (Thomas Goirand)
Date: Fri, 18 May 2018 10:54:20 +0200
Subject: [Openstack] antiX as host
In-Reply-To:
References:
Message-ID: <1a0baf1c-5a07-ea80-eaf8-16345eab7802@goirand.fr>

On 05/18/2018 08:52 AM, Larry Sevilla wrote:
> Has anybody tried to set up an OpenStack environment with antiX as host?
>
> I tried to install with the guide from
> "https://docs.openstack.org/install-guide/". I used the ubuntu portion
> since both ubuntu and antiX are derived from debian.
>
> But I stopped at
> "https://docs.openstack.org/install-guide/environment-etcd-ubuntu.html",
> at #3 "Create and edit the /lib/systemd/system/etcd.service file:"
>
> Sorry, I'm not familiar with services; is there an equivalent for
> non-systemd?

Since antiX is a Debian derivative, then it should be using all the packages from Debian (which I happen to maintain). In Debian (and Ubuntu, since they use the tooling I wrote) all services come with both systemd units and sysv-rc init scripts.

etcd isn't available in stretch, and I've checked, it's not available in antiX either. So you may want to backport it from Debian Testing (ie: fetch the source package and simply rebuild it for your distro). Otherwise, you could just get the init.d script from the Sid/Buster package.

I hope that helps,

Cheers,
Thomas Goirand (zigo)
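For the record, a sketch of that backport route using standard Debian tooling; the testing suite entry and the etcd package name are taken from Debian at the time, and the mirror URL is an arbitrary example:

  # enable source packages from testing, then rebuild etcd locally
  echo 'deb-src http://deb.debian.org/debian testing main' >> /etc/apt/sources.list
  apt-get update
  apt-get build-dep etcd        # pulls in the build dependencies
  apt-get source etcd           # fetches and unpacks the source package
  cd etcd-*/
  dpkg-buildpackage -us -uc -b  # binary-only build, no signing
  dpkg -i ../etcd*.deb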
From ebiibe82 at gmail.com  Fri May 18 10:46:39 2018
From: ebiibe82 at gmail.com (Amit Kumar)
Date: Fri, 18 May 2018 16:16:39 +0530
Subject: [Openstack] [OpenStack-Operators][OpenStack] Regarding production grade OpenStack deployment
Message-ID:

Hi All,

We want to deploy our private cloud using OpenStack as a highly available (zero downtime (ZDT), both in the normal course of action and during upgrades) production-grade environment. We came across the following tools.

- We thought of using *Kolla-Kubernetes* as the deployment tool, but we got feedback from the Kolla IRC channel that this project is being retired. Moreover, we couldn't find up-to-date documents with multi-node deployment steps, and High Availability support was not mentioned anywhere in the documentation.
- Another option for a Kubernetes-based deployment is to use OpenStack-Helm, but it seems the OSH community has not made OSH 1.0 officially available yet.
- The last option is to use *Kolla-Ansible*; although it is not a Kubernetes deployment, it seems to have good community support around it. Also, its documentation talks a little about production grade deployment, so it is probably being used in production grade environments.

If you folks have used any of these tools for deploying OpenStack to fulfill these requirements, HA and ZDT, then please provide your inputs, specifically about the HA and ZDT support of the deployment tool, based on your experience. And please share any reference links that you have used for achieving HA and ZDT with the respective tools.

Lastly, if you think we have missed other more viable and stable deployment tools which can serve our requirements, HA and ZDT, then please do suggest them.

Regards,
Amit
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ebiibe82 at gmail.com  Fri May 18 18:59:52 2018
From: ebiibe82 at gmail.com (Amit Kumar)
Date: Sat, 19 May 2018 00:29:52 +0530
Subject: [Openstack] [Openstack-operators] [OpenStack-Operators][OpenStack] Regarding production grade OpenStack deployment
In-Reply-To:
References:
Message-ID:

Hi,

Thanks for sharing your experience. I am talking about HA of only the OpenStack services, not the hosted applications or the OpenStack instances they are hosted on. So, for now, that is not the requirement. But from your response, it seems that you have deployed OpenStack with Kolla-Ansible in a multi-node, multi-controller architecture, right? And do you have any experience with Kolla-Ansible from an OpenStack release upgrade perspective? Is ZDT of the OpenStack services feasible while upgrading?

Regards,
Amit

On May 18, 2018 5:48 PM, "Flint WALRUS" wrote:

Hi Amit, I'm using kolla-ansible as a solution on our own infrastructure. However, be aware that because of the nature of OpenStack you won't be able to achieve zero downtime if your hosted applications do not take advantage of the distributed nature of resources or if they're not basically cloud ready.

Cheers.

On Fri, 18 May 2018 at 12:47, Amit Kumar wrote:

> Hi All,
>
> We want to deploy our private cloud using OpenStack as a highly available
> (zero downtime (ZDT), both in the normal course of action and during
> upgrades) production-grade environment. We came across the following tools.
>
> - We thought of using *Kolla-Kubernetes* as the deployment tool, but we
> got feedback from the Kolla IRC channel that this project is being retired.
> Moreover, we couldn't find up-to-date documents with multi-node deployment
> steps, and High Availability support was not mentioned anywhere in the
> documentation.
> - Another option for a Kubernetes-based deployment is to use
> OpenStack-Helm, but it seems the OSH community has not made OSH 1.0
> officially available yet.
> - The last option is to use *Kolla-Ansible*; although it is not a
> Kubernetes deployment, it seems to have good community support around it.
> Also, its documentation talks a little about production grade deployment,
> so it is probably being used in production grade environments.
>
> If you folks have used any of these tools for deploying OpenStack to
> fulfill these requirements, HA and ZDT, then please provide your inputs,
> specifically about the HA and ZDT support of the deployment tool, based on
> your experience. And please share any reference links that you have used
> for achieving HA and ZDT with the respective tools.
>
> Lastly, if you think we have missed other more viable and stable
> deployment tools which can serve our requirements, HA and ZDT, then please
> do suggest them.
>
> Regards,
> Amit
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
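For orientation, the rough shape of a multinode Kolla-Ansible run, sketched from its documented workflow; the ./multinode inventory path is an assumption, and site-specific settings in /etc/kolla/globals.yml are assumed to already be in place:

  kolla-genpwd                                    # populate /etc/kolla/passwords.yml
  kolla-ansible -i ./multinode bootstrap-servers  # prepare the target hosts
  kolla-ansible -i ./multinode prechecks          # sanity-check the environment
  kolla-ansible -i ./multinode deploy             # deploy the containers
  kolla-ansible -i ./multinode post-deploy        # write the admin openrc file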
From 17061756-010 at uog.edu.pk  Sun May 20 09:56:29 2018
From: 17061756-010 at uog.edu.pk (MUHAMMAD SAMIULLAH .)
Date: Sun, 20 May 2018 14:56:29 +0500
Subject: [Openstack] query
Message-ID:

Hello sir,
I have solved the IP address issue but now it shows another error.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Capture2.PNG
Type: image/png
Size: 92154 bytes
Desc: not available
URL:

From satish.txt at gmail.com  Sun May 20 13:15:31 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Sun, 20 May 2018 09:15:31 -0400
Subject: [Openstack] openstack-ansible question
Message-ID:

I am building openstack-ansible and have a question related to the openstack_user_config.yml file:

internal_lb_vip_address: 172.29.236.9

Is the above IP address one we need to set up manually on the haproxy node, or does ansible do that?

From amy at demarco.com  Sun May 20 13:29:12 2018
From: amy at demarco.com (Amy Marrich)
Date: Sun, 20 May 2018 06:29:12 -0700
Subject: [Openstack] openstack-ansible question
In-Reply-To:
References:
Message-ID: <829233C5-6622-4D7A-BE3B-6460DB1F4156@demarco.com>

Satish,

OSA will configure the IP on the haproxy node for you when it configures the haproxy node.

Thanks,

Amy (spotz)

Sent from my iPad

> On May 20, 2018, at 6:15 AM, Satish Patel wrote:
>
> I am building openstack-ansible and have a question related to the
> openstack_user_config.yml file:
>
> internal_lb_vip_address: 172.29.236.9
>
> Is the above IP address one we need to set up manually on the haproxy
> node, or does ansible do that?
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

From marcin.dulak at gmail.com  Sun May 20 14:28:24 2018
From: marcin.dulak at gmail.com (Marcin Dulak)
Date: Sun, 20 May 2018 16:28:24 +0200
Subject: [Openstack] openstack-ansible question
In-Reply-To:
References:
Message-ID:

Hi,

I had a similar question in the past, asked at https://bugs.launchpad.net/openstack-ansible/+bug/1744681, marked as invalid.

Cheers,

Marcin

On Sun, May 20, 2018 at 3:15 PM, Satish Patel wrote:

> I am building openstack-ansible and have a question related to the
> openstack_user_config.yml file:
>
> internal_lb_vip_address: 172.29.236.9
>
> Is the above IP address one we need to set up manually on the haproxy
> node, or does ansible do that?
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pascal at watteel.be  Sun May 20 15:17:24 2018
From: pascal at watteel.be (Pascal Watteel)
Date: Sun, 20 May 2018 15:17:24 +0000
Subject: [Openstack] openstack-ansible question
In-Reply-To:
References:
Message-ID:

This ip will be automatically set.

From: Marcin Dulak
Sent: Sunday, May 20, 2018 18:28
To: Satish Patel
Cc: openstack
Subject: Re: [Openstack] openstack-ansible question

Hi,

I had a similar question in the past, asked at https://bugs.launchpad.net/openstack-ansible/+bug/1744681, marked as invalid.

Cheers,

Marcin
[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satish.txt at gmail.com  Sun May 20 15:21:07 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Sun, 20 May 2018 11:21:07 -0400
Subject: [Openstack] openstack-ansible question
In-Reply-To:
References:
Message-ID: <56085088-798B-4DC4-9C6D-5769FA58168B@gmail.com>

Hey Marcin,

I read your question and I'm in the same boat; the document is not clear enough regarding the haproxy node. I have 3 infra nodes and I don't have a dedicated LB, so I'm planning to use one of the infra nodes as my LB. So the question is: can I make any one of the nodes my external VIP, facing the outside world? Can I use my existing LAN IP, which is accessible from outside, as external_lb_vip, or should I use a fresh IP?

Sent from my iPhone

> On May 20, 2018, at 10:28 AM, Marcin Dulak wrote:
>
> Hi,
>
> I had a similar question in the past, asked at https://bugs.launchpad.net/openstack-ansible/+bug/1744681, marked as invalid.
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From berndbausch at gmail.com  Mon May 21 01:37:03 2018
From: berndbausch at gmail.com (Bernd Bausch)
Date: Mon, 21 May 2018 10:37:03 +0900
Subject: [Openstack] query
In-Reply-To:
References:
Message-ID: <3ec4cfba-3ec2-7514-d01c-bcdf3a280bd6@gmail.com>

Check for other errors or warnings further up in the log file. Check the Cinder API log for clues why Cinder API doesn't start. If your DevStack is a recent version, the command

journalctl -u devstack at c-api

is likely to show you the log. You may also have a c-api log file, usually under /opt/stack/logs.

By the way, if you want my recommendation: DevStack was created for developers and for automatic integration testing. It is not a good tool for a quick start into OpenStack. Packstack (only on RHEL or CentOS, unfortunately) is a bit better for newbies, but you learn infinitely more by first following the installation tutorials and trying to understand each step. This will give you a reasonable foundation on which you can build by, for example, setting up DevStack-based clouds.

On 5/20/2018 6:56 PM, MUHAMMAD SAMIULLAH . wrote:
> Hello sir,
> I have solved the IP address issue but now it shows another error.
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:
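As an illustration of that check: the unit is spelled devstack@c-api (the "at" above is only the archive's address obfuscation), and the flags here are plain journalctl options rather than anything DevStack-specific:

  # last 200 lines of the Cinder API service log, without the pager
  sudo journalctl -u devstack@c-api -n 200 --no-pager
  # or follow it live while reproducing the failure
  sudo journalctl -u devstack@c-api -f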
From nivednk14 at gmail.com  Tue May 22 06:20:31 2018
From: nivednk14 at gmail.com (NivedNK)
Date: Tue, 22 May 2018 11:50:31 +0530
Subject: [Openstack] Openstack Trove Instance Stuck at Build State
Message-ID:

Hi all,

Facing some issues on an OpenStack Trove Pike release multinode setup. The instance is up and running in Nova compute, and the volume is getting created and attached to the instance. Cloud-init finished running successfully. In trove-taskmanager.log I'm getting this error:

http://paste.openstack.org/show/721555/

TIA
Nived NK
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satish.txt at gmail.com  Tue May 22 17:53:52 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Tue, 22 May 2018 13:53:52 -0400
Subject: [Openstack] openstack-ansible: Failed to find required executable rabbitmqctl
Message-ID:

I am having this issue, any idea?

https://bugs.launchpad.net/openstack-ansible/+bug/1772690

From gurvinder at techblue.co.uk  Tue May 22 19:09:19 2018
From: gurvinder at techblue.co.uk (Gurvinder Dadyala)
Date: Wed, 23 May 2018 00:39:19 +0530
Subject: [Openstack] openstack-ansible: Failed to find required executable rabbitmqctl
In-Reply-To:
Message-ID: <34ce980f-bc34-49d3-89c8-5b7dcf988653@email.android.com>

An HTML attachment was scrubbed...
URL:

From satish.txt at gmail.com  Tue May 22 19:53:14 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Tue, 22 May 2018 15:53:14 -0400
Subject: [Openstack] openstack-ansible: Failed to find required executable rabbitmqctl
In-Reply-To: <34ce980f-bc34-49d3-89c8-5b7dcf988653@email.android.com>
References: <34ce980f-bc34-49d3-89c8-5b7dcf988653@email.android.com>
Message-ID:

I found my setup-infrastructure.yml run failed somewhere, so now I am running it with the -vvv option to find out what happened, and then I will post my findings here.

On Tue, May 22, 2018 at 3:09 PM, Gurvinder Dadyala wrote:
> Did you install the rabbitmq packages required for messaging services?
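For what it's worth, one way to narrow such a failure down, as a sketch: rabbitmq_all is a standard openstack-ansible inventory group and /opt/openstack-ansible/playbooks is its usual playbook directory, but the log path here is an arbitrary choice:

  cd /opt/openstack-ansible/playbooks
  # re-run only the rabbitmq hosts, with full verbosity, keeping the output
  openstack-ansible setup-infrastructure.yml --limit rabbitmq_all -vvv 2>&1 | tee /tmp/setup-infra.log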
From satish.txt at gmail.com  Wed May 23 02:11:47 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Tue, 22 May 2018 22:11:47 -0400
Subject: [Openstack] openstack-ansible: Failed to find required executable rabbitmqctl
In-Reply-To:
References: <34ce980f-bc34-49d3-89c8-5b7dcf988653@email.android.com>
Message-ID:

I found the issue was related to my VMware ESX: I am testing this environment on a VMware cluster and found that one of the ESXi vSwitches was not allowing me to have multiple MACs on a single port, and because of that the host and other things intermittently failed.

On Tue, May 22, 2018 at 3:53 PM, Satish Patel wrote:
> I found my setup-infrastructure.yml run failed somewhere, so now I am
> running it with the -vvv option to find out what happened, and then I
> will post my findings here.
>
> On Tue, May 22, 2018 at 3:09 PM, Gurvinder Dadyala wrote:
>> Did you install the rabbitmq packages required for messaging services?

From satish.txt at gmail.com  Wed May 23 02:12:38 2018
From: satish.txt at gmail.com (Satish Patel)
Date: Tue, 22 May 2018 22:12:38 -0400
Subject: [Openstack] openstack-ansible skipping: no hosts matched
Message-ID:

https://bugs.launchpad.net/openstack-ansible/+bug/1772778

Has anyone seen this issue?

From molenkam at uwo.ca  Wed May 23 14:02:55 2018
From: molenkam at uwo.ca (Gary Molenkamp)
Date: Wed, 23 May 2018 10:02:55 -0400
Subject: [Openstack] create floating ip broken under the openstack cli
Message-ID:

I have a provider network that has two subnets (using 1.1.1.0/24 as a publicly routable example):

> # openstack subnet list | grep 67917c09-6cb4-4622-ae1b-9f5aef890b0f
> | 066df21a-d23d-4917-8b28-d097957633dc | provider-campus | 67917c09-6cb4-4622-ae1b-9f5aef890b0f | 172.31.96.0/22 |
> | b955a7bf-0965-4e56-a224-8a93bbcb3e99 | provider-public | 67917c09-6cb4-4622-ae1b-9f5aef890b0f | 1.1.1.0/24     |

Normally I use the neutron cli to create a floating ip address on specific subnets, but I'm trying to migrate to the openstack cli since the neutron cli is marked as deprecated. My understanding is that the following two commands should be equivalent and create a floating ip on the provider-public subnet. However, the first listed subnet (provider-campus) is always used by the openstack cli and is the default if no subnet is specified:

This result is correct:

> # neutron floatingip-create --tenant-id 774810c91edf4f97ae23ad55ebaf2a18 --subnet b955a7bf-0965-4e56-a224-8a93bbcb3e99 provider
> neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
> Created a new floatingip:
> +---------------------+--------------------------------------+
> | Field               | Value                                |
> +---------------------+--------------------------------------+
> | created_at          | 2018-05-23T13:51:51Z                 |
> | description         |                                      |
> | fixed_ip_address    |                                      |
> | floating_ip_address | 1.1.1.39                             |
> | floating_network_id | 67917c09-6cb4-4622-ae1b-9f5aef890b0f |
> | id                  | 3b02eb6a-12b1-46d8-980c-a543c47836c9 |
> | port_id             |                                      |
> | project_id          | 774810c91edf4f97ae23ad55ebaf2a18     |
> | revision_number     | 0                                    |
> | router_id           |                                      |
> | status              | DOWN                                 |
> | tags                |                                      |
> | tenant_id           | 774810c91edf4f97ae23ad55ebaf2a18     |
> | updated_at          | 2018-05-23T13:51:51Z                 |
> +---------------------+--------------------------------------+

This result is incorrect:

> # openstack floating ip create --project 774810c91edf4f97ae23ad55ebaf2a18 --subnet b955a7bf-0965-4e56-a224-8a93bbcb3e99 provider
> +---------------------+--------------------------------------+
> | Field               | Value                                |
> +---------------------+--------------------------------------+
> | created_at          | 2018-05-23T13:53:35Z                 |
> | description         |                                      |
> | fixed_ip_address    | None                                 |
> | floating_ip_address | 172.31.96.61                         |
> | floating_network_id | 67917c09-6cb4-4622-ae1b-9f5aef890b0f |
> | id                  | 37fd261d-ffd3-440b-a19e-6d0fd093d575 |
> | name                | 172.31.96.61                         |
> | port_id             | None                                 |
> | project_id          | 774810c91edf4f97ae23ad55ebaf2a18     |
> | revision_number     | 0                                    |
> | router_id           | None                                 |
> | status              | DOWN                                 |
> | updated_at          | 2018-05-23T13:53:35Z                 |
> +---------------------+--------------------------------------+

Is this broken or am I doing something incorrect here? Any pointers would be appreciated.

Version details:

BaseOS : Centos 7.4.1708
Openstack-release: centos-release-openstack-pike-1-1.el7.x86_64
openstack client: python2-openstackclient-3.12.1-1.el7.noarch
neutron client: python2-neutronclient-6.5.0-1.el7.noarch

Thanks
Gary.
--
Gary Molenkamp            Computer Science/Science Technology Services
Systems Administrator        University of Western Ontario
molenkam at uwo.ca                 http://www.csd.uwo.ca
(519) 661-2111 x86882        (519) 661-3566

From remo at italy1.com  Wed May 23 14:26:08 2018
From: remo at italy1.com (remo at italy1.com)
Date: Wed, 23 May 2018 07:26:08 -0700
Subject: [Openstack] create floating ip broken under the openstack cli
In-Reply-To:
References:
Message-ID:

I will share the steps; I think you are missing some.
At the openstack summit now, so will try as soon as I have a min.

> On 23 May 2018, at 07:02, Gary Molenkamp <molenkam at uwo.ca> wrote:
>
> I have a provider network that has two subnets (using 1.1.1.0/24 as a publicly routable example):
> [...]
> Is this broken or am I doing something incorrect here? Any pointers would be appreciated.
> [...]
From j.harbott at x-ion.de  Wed May 23 15:33:55 2018
From: j.harbott at x-ion.de (Jens Harbott)
Date: Wed, 23 May 2018 15:33:55 +0000
Subject: [Openstack] create floating ip broken under the openstack cli
In-Reply-To:
References:
Message-ID:

2018-05-23 14:02 GMT+00:00 Gary Molenkamp :

> I have a provider network that has two subnets (using 1.1.1.0/24 as a publicly routable example):
>
>> # openstack subnet list | grep 67917c09-6cb4-4622-ae1b-9f5aef890b0f
>> | 066df21a-d23d-4917-8b28-d097957633dc | provider-campus | 67917c09-6cb4-4622-ae1b-9f5aef890b0f | 172.31.96.0/22 |
>> | b955a7bf-0965-4e56-a224-8a93bbcb3e99 | provider-public | 67917c09-6cb4-4622-ae1b-9f5aef890b0f | 1.1.1.0/24     |
>
> Normally I use the neutron cli to create a floating ip address on specific subnets, but I'm trying to migrate to the openstack cli since the neutron cli is marked as deprecated. My understanding is that the following two commands should be equivalent and create a floating ip on the provider-public subnet. However, the first listed subnet (provider-campus) is always used by the openstack cli and is the default if no subnet is specified:
>
> This result is correct:
>
>> # neutron floatingip-create --tenant-id 774810c91edf4f97ae23ad55ebaf2a18 --subnet b955a7bf-0965-4e56-a224-8a93bbcb3e99 provider
>> Created a new floatingip:
>> [...]
>> | floating_ip_address | 1.1.1.39                             |
>> [...]
>
> This result is incorrect:
>
>> # openstack floating ip create --project 774810c91edf4f97ae23ad55ebaf2a18 --subnet b955a7bf-0965-4e56-a224-8a93bbcb3e99 provider
>> [...]
>> | floating_ip_address | 172.31.96.61                         |
>> [...]
>
> Is this broken or am I doing something incorrect here? Any pointers would be appreciated.
>
> Version details:
>
> BaseOS : Centos 7.4.1708
> Openstack-release: centos-release-openstack-pike-1-1.el7.x86_64
> openstack client: python2-openstackclient-3.12.1-1.el7.noarch
> neutron client: python2-neutronclient-6.5.0-1.el7.noarch

There was a bug in openstacksdk that could cause this behaviour, see https://bugs.launchpad.net/python-openstacksdk/+bug/1733258 . You may want to install the latest version of python-openstackclient into a virtualenv and use that as a workaround. Not sure if we can backport the fix, but I'll take a look.

Yours,
Jens
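A sketch of that workaround; the virtualenv path is arbitrary, and --upgrade simply pulls the newest client and SDK that pip can resolve:

  virtualenv ~/venv-osc
  . ~/venv-osc/bin/activate
  pip install --upgrade python-openstackclient
  # retry the failing command from inside the venv
  openstack floating ip create --project 774810c91edf4f97ae23ad55ebaf2a18 --subnet b955a7bf-0965-4e56-a224-8a93bbcb3e99 provider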
From nspmangalore at gmail.com  Thu May 24 11:48:26 2018
From: nspmangalore at gmail.com (Shyam Prasad N)
Date: Thu, 24 May 2018 17:18:26 +0530
Subject: [Openstack] Struggling to get the s3 api interface to work with swift.
Message-ID:

Hi,

I've been trying to get swift3 to work for several days now. But I haven't managed to get it running. Both with tempauth and keystoneauth, I'm getting the same error:

eightkpc at objectstore1:~/s3curl$ ./s3curl.pl --id=testerks -- http://127.0.0.1:8080/

<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><RequestId>txa691e7ca97a44d56bc4c2-005b06a292</RequestId></Error>

May 24 11:31:30 localhost proxy-server: 127.0.0.1 127.0.0.1 24/May/2018/11/31/30 GET / HTTP/1.0 403 - curl/7.58.0 - - 277 - txa691e7ca97a44d56bc4c2-005b06a292 - 0.0200 - - 1527161490.543112040 1527161490.563107014 -
May 24 11:31:30 localhost proxy-server: STDERR: 127.0.0.1 - - [24/May/2018 11:31:30] "GET / HTTP/1.1" 403 621 0.021979 (txn: txa691e7ca97a44d56bc4c2-005b06a292)

eightkpc at objectstore1:~$ cat .s3curl
%awsSecretAccessKeys = (
    tester => {
        id => 'test:tester',
        key => 'testing',
    },
    testerks => {
        id => 'e6289a1b5692461388d0597a4873d054',
        key => '88bb706887094696b082f008ba133ad7',
    },
);

eightkpc at objectstore1:~$ openstack ec2 credentials show e6289a1b5692461388d0597a4873d054
+------------+------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                              |
+------------+------------------------------------------------------------------------------------------------------------------------------------+
| access     | e6289a1b5692461388d0597a4873d054                                                                                                   |
| links      | {u'self': u'http://controller:5000/v3/users/d7df7b56343b4ea988869fc30efeda09/credentials/OS-EC2/e6289a1b5692461388d0597a4873d054'} |
| project_id | dc86f7d8787b46158268bd77098b6578                                                                                                   |
| secret     | 88bb706887094696b082f008ba133ad7                                                                                                   |
| trust_id   | None                                                                                                                               |
| user_id    | d7df7b56343b4ea988869fc30efeda09                                                                                                   |
+------------+------------------------------------------------------------------------------------------------------------------------------------+

Can someone please let me know what is going on?

Regards,
Shyam
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: proxy-server.conf
Type: application/octet-stream
Size: 2240 bytes
Desc: not available
URL:

From torin.woltjer at granddial.com  Thu May 24 12:36:44 2018
From: torin.woltjer at granddial.com (Torin Woltjer)
Date: Thu, 24 May 2018 12:36:44 GMT
Subject: [Openstack] Cinder Queens installdoc wrong
Message-ID: <61623e39b2d540bb87676be6258bb12a@granddial.com>

I've upgraded from pike to queens, and the keystone admin port 35357 has been deprecated in favor of 5000 it seems. However, the documentation for the installation of cinder still uses that port in [keystone_authtoken]. What is the correct entry for this line? auth_url = http://controller:5000 I imagine.

https://docs.openstack.org/cinder/queens/install/cinder-controller-install-ubuntu.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
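Spelling out that guess, a sketch of the [keystone_authtoken] block with both endpoints on port 5000, following the install guide's conventions; controller and CINDER_PASS are the guide's usual placeholders:

  [keystone_authtoken]
  auth_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = cinder
  password = CINDER_PASS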
From amy at demarco.com  Thu May 24 15:55:49 2018
From: amy at demarco.com (Amy)
Date: Thu, 24 May 2018 08:55:49 -0700
Subject: [Openstack] Cinder Queens installdoc wrong
In-Reply-To: <61623e39b2d540bb87676be6258bb12a@granddial.com>
References: <61623e39b2d540bb87676be6258bb12a@granddial.com>
Message-ID: <28A11745-9A1C-43E6-A9A5-9643CEAE9F61@demarco.com>

Hi Torin,

I double checked with Cinder and you are correct, the port should be 5000. I'll get it bugged and patched after Summit.

Thanks,

Amy (spotz)

Sent from my iPhone

> On May 24, 2018, at 5:36 AM, Torin Woltjer wrote:
>
> I've upgraded from pike to queens, and the keystone admin port 35357 has been deprecated in favor of 5000 it seems. However, the documentation for the installation of cinder still uses that port in [keystone_authtoken]. What is the correct entry for this line? auth_url = http://controller:5000 I imagine.
>
> https://docs.openstack.org/cinder/queens/install/cinder-controller-install-ubuntu.html
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tsuyuzaki.kota at lab.ntt.co.jp  Thu May 24 17:16:36 2018
From: tsuyuzaki.kota at lab.ntt.co.jp (Kota TSUYUZAKI)
Date: Fri, 25 May 2018 02:16:36 +0900
Subject: [Openstack] Struggling to get the s3 api interface to work with swift.
In-Reply-To:
References:
Message-ID: <5B06F374.5030105@lab.ntt.co.jp>

Hi, Shyam

> tester => {
>     id => 'test:tester',
>     key => 'testing',
> },

If you are using this id/password to get your token from keystone, you should set them as the access key and secret key for your s3 client. You don't have to set any token information from keystone for your client. i.e. `./s3curl.pl --id=tester -- http://127.0.0.1:8080/` may work. I'm not an expert on the s3curl client, though.

Best,
Kota

(2018/05/24 20:48), Shyam Prasad N wrote:
> Hi,
>
> I've been trying to get swift3 to work for several days now. But I haven't managed to get it running.
> Both with tempauth and keystoneauth, I'm getting the same error:
>
> eightkpc at objectstore1:~/s3curl$ ./s3curl.pl --id=testerks -- http://127.0.0.1:8080/
>
> <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><RequestId>txa691e7ca97a44d56bc4c2-005b06a292</RequestId></Error>
>
> [...]
>
> Can someone please let me know what is going on?
>
> Regards,
> Shyam
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
From nspmangalore at gmail.com  Fri May 25 06:57:46 2018
From: nspmangalore at gmail.com (Shyam Prasad N)
Date: Fri, 25 May 2018 12:27:46 +0530
Subject: [Openstack] Struggling to get the s3 api interface to work with swift.
In-Reply-To: <5B5E845C-A3D1-436E-BDC9-846257587357@ostorage.com.cn>
References: <2925BD08-5166-4FCB-97D0-84FBA872FFC4@ostorage.com.cn> <5B5E845C-A3D1-436E-BDC9-846257587357@ostorage.com.cn>
Message-ID:

Thanks. I'll try this. But what values do I use in place of ak and sk? I want to use some command to get those values, right?

On Fri, May 25, 2018 at 9:52 AM, Yuxin Wang wrote:

> I created ec2 credentials using the command `openstack credential create`.
>
> i.e.
>
> openstack credential create --type ec2 --project proj user '{"access": "ak", "secret": "sk"}'
>
> It seems the two credentials are not the same thing.
>
> Ref:
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.1.1/com.ibm.spectrum.scale.v4r11.adv.doc/bl1adv_ConfigureOpenstackEC2credentials.htm
>
> On 25 May 2018, at 10:32, Shyam Prasad N wrote:
>
> Yes, I did.
> I don't think this is an s3curl related issue, because I tried with the python AWS SDK, and got the same error.
>
> On Fri, May 25, 2018, 07:42 Yuxin Wang wrote:
>
>> Did you add 127.0.0.1 to the endpoint list in s3curl.pl?
>>
>> i.e.
>>
>> my @endpoints = ('127.0.0.1');
>>
>> On 24 May 2018, at 19:48, Shyam Prasad N wrote:
>>
>> Hi,
>> I've been trying to get swift3 to work for several days now. But I haven't managed to get it running.
>> [...]

--
-Shyam

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
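As an illustration of that command's shape, pulling the pieces together; the access and secret values below are arbitrary placeholder strings, and proj/user stand for a real project and user name:

  openstack credential create --type ec2 --project proj user '{"access": "myaccesskey", "secret": "mysecretkey"}'
  # then reference the same two strings in ~/.s3curl:
  #   testerks => { id => 'myaccesskey', key => 'mysecretkey' },
  ./s3curl.pl --id=testerks -- http://127.0.0.1:8080/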
>> Both with tempauth and keystoneauth, I'm getting the same error: >> >> eightkpc at objectstore1:~/s3curl$ ./s3curl.pl >> >> --id=testerks -- http://127.0.0.1:8080/ >> >> >> SignatureDoesNotMatchThe request signature >> we calculated does not match the signature you provided. Check your key and >> signing method.txa691e7ca97a44d56bc4c2- >> 005b06a292 >> >> May 24 11:31:30 localhost proxy-server: 127.0.0.1 127.0.0.1 >> 24/May/2018/11/31/30 GET / HTTP/1.0 403 - curl/7.58.0 - - 277 - >> txa691e7ca97a44d56bc4c2-005b06a292 - 0.0200 - - 1527161490.543112040 >> 1527161490.563107014 - >> May 24 11:31:30 localhost proxy-server: STDERR: 127.0.0.1 - - >> [24/May/2018 11:31:30] "GET / HTTP/1.1" 403 621 0.021979 (txn: >> txa691e7ca97a44d56bc4c2-005b06a292) >> >> eightkpc at objectstore1:~$ cat .s3curl >> %awsSecretAccessKeys = ( >> tester => { >> id => 'test:tester', >> key => 'testing', >> }, >> testerks => { >> id => 'e6289a1b5692461388d0597a4873d054', >> key => '88bb706887094696b082f008ba133ad7', >> }, >> ); >> >> eightkpc at objectstore1:~$ openstack ec2 credentials show >> e6289a1b5692461388d0597a4873d054 >> +------------+---------------------------------------------- >> ------------------------------------------------------------ >> --------------------------+ >> | Field | Value >> >> | >> +------------+---------------------------------------------- >> ------------------------------------------------------------ >> --------------------------+ >> | access | e6289a1b5692461388d0597a4873d0 >> 54 >> | >> | links | {u'self': u'http://controller:5000/v3/users/ >> d7df7b56343b4ea988869fc30efeda09/credentials/OS-EC2/ >> e6289a1b5692461388d0597a4873d054'} | >> | project_id | dc86f7d8787b46158268bd77098b65 >> 78 >> | >> | secret | 88bb706887094696b082f008ba133a >> d7 >> | >> | trust_id | None >> >> | >> | user_id | d7df7b56343b4ea988869fc30efeda >> 09 >> | >> +------------+---------------------------------------------- >> ------------------------------------------------------------ >> --------------------------+ >> >> Can someone please let me know what is going on? >> >> Regards, >> Shyam >> _______________________________________________ >> Mailing list: https://eur03.safelinks.protection.outlook.com/?url= >> http%3A%2F%2Flists.openstack.org%2Fcgi-bin%2Fmailman% >> 2Flistinfo%2Fopenstack&data=02%7C01%7C%7C39742b8c6bf847ee381508d5c16d >> 1b21%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0% >> 7C636627596701206160&sdata=KI%2F2T2FhVQJTeX1KbIObDZVDiUA3SbT >> q6Pplo1bc7ak%3D&reserved=0 >> Post to : openstack at lists.openstack.org >> Unsubscribe : https://eur03.safelinks.protection.outlook.com/?url= >> http%3A%2F%2Flists.openstack.org%2Fcgi-bin%2Fmailman% >> 2Flistinfo%2Fopenstack&data=02%7C01%7C%7C39742b8c6bf847ee381508d5c16d >> 1b21%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0% >> 7C636627596701206160&sdata=KI%2F2T2FhVQJTeX1KbIObDZVDiUA3SbT >> q6Pplo1bc7ak%3D&reserved=0 >> >> >> > -- -Shyam -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.vanommen at gmail.com Fri May 25 08:24:25 2018 From: john.vanommen at gmail.com (John van Ommen) Date: Fri, 25 May 2018 01:24:25 -0700 Subject: [Openstack] Struggling to get the s3 api interface to work with swift. In-Reply-To: References: <2925BD08-5166-4FCB-97D0-84FBA872FFC4@ostorage.com.cn> <5B5E845C-A3D1-436E-BDC9-846257587357@ostorage.com.cn> Message-ID: What release are you using? In 2016 I tried to get this working for a client of mine at HPE, and found that it wouldn't work without a fair bit of hacking. 
Basically the software hadn't been updated in about a year, and the newest
release was incompatible with the version of OpenStack that we were selling.

On Thu, May 24, 2018 at 11:57 PM, Shyam Prasad N <nspmangalore at gmail.com> wrote:

> Thanks. I'll try this.
> But what values do I use in place of ak and sk? Is there a command I should
> use to get those values?
>
> On Fri, May 25, 2018 at 9:52 AM, Yuxin Wang <wang.yuxin at ostorage.com.cn> wrote:
>
>> I created the ec2 credentials using the command `openstack credential create`,
>> i.e.
>>
>> openstack credential create --type ec2 --project proj user '{"access": "ak", "secret": "sk"}'
>>
>> It seems the two credentials are not the same thing.
>>
>> Ref:
>> https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.1.1/com.ibm.spectrum.scale.v4r11.adv.doc/bl1adv_ConfigureOpenstackEC2credentials.htm
>>
>> On 25 May 2018, at 10:32, Shyam Prasad N <nspmangalore at gmail.com> wrote:
>>
>> Yes, I did.
>> I don't think this is an s3curl-related issue, because I tried with the
>> Python AWS SDK and got the same error.
>>
>> On Fri, May 25, 2018, 07:42 Yuxin Wang <wang.yuxin at ostorage.com.cn> wrote:
>>
>>> Did you add 127.0.0.1 to the endpoint list in s3curl.pl?
>>> i.e.
>>>
>>> my @endpoints = ('127.0.0.1');
>>>
>>> On 24 May 2018, at 19:48, Shyam Prasad N <nspmangalore at gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I've been trying to get swift3 to work for several days now. But I
>>> haven't managed to get it running.
>>> Both with tempauth and keystoneauth, I'm getting the same error:
>>>
>>> eightkpc at objectstore1:~/s3curl$ ./s3curl.pl --id=testerks -- http://127.0.0.1:8080/
>>>
>>> SignatureDoesNotMatch: The request signature we calculated does not match
>>> the signature you provided. Check your key and signing method.
>>> (txa691e7ca97a44d56bc4c2-005b06a292)
>>>
>>> Can someone please let me know what is going on?
>>>
>>> Regards,
>>> Shyam

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wang.yuxin at ostorage.com.cn  Fri May 25 09:26:54 2018
From: wang.yuxin at ostorage.com.cn (Yuxin Wang)
Date: Fri, 25 May 2018 17:26:54 +0800
Subject: [Openstack] Struggling to get the s3 api interface to work with swift.
In-Reply-To: 
References: <2925BD08-5166-4FCB-97D0-84FBA872FFC4@ostorage.com.cn>
	<5B5E845C-A3D1-436E-BDC9-846257587357@ostorage.com.cn>
Message-ID: <5F134369-41C0-4AAA-8E49-8D921AF524FA@ostorage.com.cn>

They can be any strings.

Replace them with whatever you want.

- Yuxin

> On 25 May 2018, at 14:57, Shyam Prasad N <nspmangalore at gmail.com> wrote:
>
> Thanks. I'll try this.
> But what values do I use in place of ak and sk? Is there a command I should
> use to get those values?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
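For reference, keystone can also mint the EC2-style pair itself rather than
storing arbitrary strings. A minimal sketch, assuming admin credentials are
already sourced in the environment:

    # have keystone generate an access/secret pair for the current project
    openstack ec2 credentials create

    # list the pairs keystone knows about; these are the values that belong
    # in the .s3curl id/key fields
    openstack ec2 credentials list

Whichever way the pair is created, the .s3curl entry has to match what
keystone has stored exactly, or the proxy will compute a different signature.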
From nspmangalore at gmail.com  Fri May 25 09:34:49 2018
From: nspmangalore at gmail.com (Shyam Prasad N)
Date: Fri, 25 May 2018 15:04:49 +0530
Subject: [Openstack] Struggling to get the s3 api interface to work with swift.
In-Reply-To: 
References: <2925BD08-5166-4FCB-97D0-84FBA872FFC4@ostorage.com.cn>
	<5B5E845C-A3D1-436E-BDC9-846257587357@ostorage.com.cn>
Message-ID: 

I'm using Queens on Ubuntu 18.04.

On Fri, May 25, 2018 at 1:54 PM, John van Ommen <john.vanommen at gmail.com> wrote:

> What release are you using?
>
> In 2016 I tried to get this working for a client of mine at HPE, and found
> that it wouldn't work without a fair bit of hacking. Basically the software
> hadn't been updated in about a year, and the newest release was
> incompatible with the version of OpenStack that we were selling.

-- 
-Shyam
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nspmangalore at gmail.com  Fri May 25 09:35:59 2018
From: nspmangalore at gmail.com (Shyam Prasad N)
Date: Fri, 25 May 2018 15:05:59 +0530
Subject: [Openstack] Struggling to get the s3 api interface to work with swift.
In-Reply-To: <5F134369-41C0-4AAA-8E49-8D921AF524FA@ostorage.com.cn>
References: <2925BD08-5166-4FCB-97D0-84FBA872FFC4@ostorage.com.cn>
	<5B5E845C-A3D1-436E-BDC9-846257587357@ostorage.com.cn>
	<5F134369-41C0-4AAA-8E49-8D921AF524FA@ostorage.com.cn>
Message-ID: 

Tried that. Unfortunately I get the same error.
Is there anything I can do to troubleshoot this?

On Fri, May 25, 2018 at 2:56 PM, Yuxin Wang <wang.yuxin at ostorage.com.cn> wrote:

> They can be any strings.
>
> Replace them with whatever you want.
>
> - Yuxin

-- 
-Shyam
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
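One way to narrow a persistent SignatureDoesNotMatch down is to raise the
proxy log level and watch the rejected request as it happens. A sketch,
assuming an Ubuntu-style install where the proxy logs to syslog:

    # /etc/swift/proxy-server.conf
    [DEFAULT]
    log_level = DEBUG

    systemctl restart swift-proxy
    tail -f /var/log/syslog | grep proxy-server

If the same failure shows up under both tempauth and keystoneauth, that often
points at the client and the middleware disagreeing about the string being
signed (host, port or path), rather than at the credential store itself.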
From nspmangalore at gmail.com  Fri May 25 10:17:53 2018
From: nspmangalore at gmail.com (Shyam Prasad N)
Date: Fri, 25 May 2018 15:47:53 +0530
Subject: [Openstack] Struggling to get the s3 api interface to work with swift.
In-Reply-To: 
References: <2925BD08-5166-4FCB-97D0-84FBA872FFC4@ostorage.com.cn>
	<5B5E845C-A3D1-436E-BDC9-846257587357@ostorage.com.cn>
	<5F134369-41C0-4AAA-8E49-8D921AF524FA@ostorage.com.cn>
Message-ID: 

Hi Yuxin,

If you don't mind, can you share the output of the following commands from
your running swift3 setup?

openstack credential list
openstack ec2 credentials list
cat /etc/swift/proxy-server.conf

Also, what access keys and secret keys do you use? I want to make sure that
I'm not missing anything in the configuration.

Regards,
Shyam

On Fri, May 25, 2018 at 3:05 PM, Shyam Prasad N <nspmangalore at gmail.com> wrote:

> Tried that. Unfortunately I get the same error.
> Is there anything I can do to troubleshoot this?

-- 
-Shyam
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
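For comparison against a known-good layout, here is a minimal sketch of the
swift3 ordering in proxy-server.conf when keystone is the authority; the
keystone URI and port are assumptions for this example. The point is that
swift3 must sit before s3token, which must sit before the keystone auth
filters:

    [pipeline:main]
    pipeline = catch_errors healthcheck proxy-logging cache swift3 s3token authtoken keystoneauth proxy-logging proxy-server

    [filter:swift3]
    use = egg:swift3#swift3

    [filter:s3token]
    use = egg:swift3#s3token
    auth_uri = http://controller:35357/

    [filter:authtoken]
    # the usual keystonemiddleware options for the cloud go here

With s3token missing or misplaced, the EC2-style credentials in a request
never get translated into a keystone token, so every signed request is
rejected regardless of which keys are used.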
>>>> >>>> Regards, >>>> Shyam >>>> _______________________________________________ >>>> Mailing list: https://eur03.safelinks.protec >>>> tion.outlook.com/?url=http%3A%2F%2Flists.openstack.org%2Fcgi >>>> -bin%2Fmailman%2Flistinfo%2Fopenstack&data=02%7C01%7C%7C >>>> 39742b8c6bf847ee381508d5c16d1b21%7C84df9e7fe9f640afb435aaaaa >>>> aaaaaaa%7C1%7C0%7C636627596701206160&sdata=KI%2F2T2FhVQJTeX1 >>>> KbIObDZVDiUA3SbTq6Pplo1bc7ak%3D&reserved=0 >>>> Post to : openstack at lists.openstack.org >>>> Unsubscribe : https://eur03.safelinks.protec >>>> tion.outlook.com/?url=http%3A%2F%2Flists.openstack.org%2Fcgi >>>> -bin%2Fmailman%2Flistinfo%2Fopenstack&data=02%7C01%7C%7C >>>> 39742b8c6bf847ee381508d5c16d1b21%7C84df9e7fe9f640afb435aaaaa >>>> aaaaaaa%7C1%7C0%7C636627596701206160&sdata=KI%2F2T2FhVQJTeX1 >>>> KbIObDZVDiUA3SbTq6Pplo1bc7ak%3D&reserved=0 >>>> >>>> >>>> >>> >> >> >> -- >> -Shyam >> _______________________________________________ >> Mailing list: https://nam05.safelinks.protec >> tion.outlook.com/?url=http%3A%2F%2Flists.openstack.org% >> 2Fcgi-bin%2Fmailman%2Flistinfo%2Fopenstack&data=02%7C01%7C%7 >> Cc6d4af73a0fd4208f9ac08d5c20f0a30%7C84df9e7fe9f640afb435aaaa >> aaaaaaaa%7C1%7C0%7C636628292198347486&sdata=tGhHmhX% >> 2By9RVFjl%2B31%2BVgRiN1mD%2Fc%2B7QLiImlGnCv98%3D&reserved=0 >> Post to : openstack at lists.openstack.org >> Unsubscribe : https://nam05.safelinks.protec >> tion.outlook.com/?url=http%3A%2F%2Flists.openstack.org% >> 2Fcgi-bin%2Fmailman%2Flistinfo%2Fopenstack&data=02%7C01%7C%7 >> Cc6d4af73a0fd4208f9ac08d5c20f0a30%7C84df9e7fe9f640afb435aaaa >> aaaaaaaa%7C1%7C0%7C636628292198347486&sdata=tGhHmhX% >> 2By9RVFjl%2B31%2BVgRiN1mD%2Fc%2B7QLiImlGnCv98%3D&reserved=0 >> >> >> > > > -- > -Shyam > -- -Shyam -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken at jots.org Fri May 25 16:41:56 2018 From: ken at jots.org (Ken D'Ambrosio) Date: Fri, 25 May 2018 12:41:56 -0400 Subject: [Openstack] "Resource doesn't have field name." Message-ID: <892104447038e4885ad6936675a0a2a5@jots.org> Hey, all. I've got a new job, and I tried my first Openstack command on it -- a Juno cloud -- with Openstack CLI 3.14.0, and it failed. Specifically: kdambrosio at mintyfresh:~/oscreds newton(QA/PCI)$ openstack image list Resource doesn't have field name glance image-list does fine. Is this a case of, "Don't do that!"? Or is there something I should be digging into? Thanks! -Ken From auniyal61 at gmail.com Fri May 25 17:08:09 2018 From: auniyal61 at gmail.com (Amit Uniyal) Date: Fri, 25 May 2018 22:38:09 +0530 Subject: [Openstack] "Resource doesn't have field name." In-Reply-To: <892104447038e4885ad6936675a0a2a5@jots.org> References: <892104447038e4885ad6936675a0a2a5@jots.org> Message-ID: You can use -v (for verbose), directly check logs on openstackclient run. On Fri, May 25, 2018 at 10:11 PM, Ken D'Ambrosio wrote: > Hey, all. I've got a new job, and I tried my first Openstack command on > it -- a Juno cloud -- with Openstack CLI 3.14.0, and it failed. > Specifically: > > kdambrosio at mintyfresh:~/oscreds newton(QA/PCI)$ openstack image list > Resource doesn't have field name > > glance image-list does fine. > > Is this a case of, "Don't do that!"? Or is there something I should be > digging into? > > Thanks! 
>
> -Ken
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From remo at italy1.com  Fri May 25 17:52:41 2018
From: remo at italy1.com (remo at italy1.com)
Date: Fri, 25 May 2018 10:52:41 -0700
Subject: [Openstack] "Resource doesn't have field name."
In-Reply-To: 
References: <892104447038e4885ad6936675a0a2a5@jots.org>
Message-ID: <266285A2-4BA0-4D9A-B437-E25DA7BFF747@italy1.com>

Use the --debug option to see what calls are going on and which one fails.

> On 25 May 2018, at 10:08, Amit Uniyal <auniyal61 at gmail.com> wrote:
>
> You can use -v (for verbose) to check the openstackclient logs directly as
> it runs.
>
>> On Fri, May 25, 2018 at 10:11 PM, Ken D'Ambrosio <ken at jots.org> wrote:
>> Hey, all.  I've got a new job, and I tried my first Openstack command on
>> it -- a Juno cloud -- with Openstack CLI 3.14.0, and it failed.
>> Specifically:
>>
>> kdambrosio at mintyfresh:~/oscreds newton(QA/PCI)$ openstack image list
>> Resource doesn't have field name
>>
>> glance image-list does fine.
>>
>> Is this a case of, "Don't do that!"?  Or is there something I should be
>> digging into?
>>
>> Thanks!
>>
>> -Ken
>>
>> _______________________________________________
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to     : openstack at lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
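As a concrete illustration of the two suggestions above (the flags are
standard openstackclient global options, shown here as they would be typed):

    openstack -v image list        # verbose client-side logging
    openstack --debug image list   # full request/response trace

The --debug trace prints every REST call made against the glance endpoint,
which makes it easy to spot the exact image record the client chokes on.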
From manuel.sb at garvan.org.au  Mon May 28 08:15:49 2018
From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros)
Date: Mon, 28 May 2018 08:15:49 +0000
Subject: [Openstack] can't resize server
Message-ID: <9D8A2486E35F0941A60430473E29F15B01739C7B89@MXDB1.ad.garvan.unsw.edu.au>

Dear openstack community,

I have a packstack all-in-one environment and I would like to resize one of
the vms.
It seems like the resize process fails due to an issue with cinder NOTE: the vm boots from volume and not from image This is the vm I am trying to resize [root at openstack ~(keystone_admin)]# openstack server show 7292a929-54d9-4ce6-a595-aaf93a2be320 +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | openstack.localdomain | | OS-EXT-SRV-ATTR:hypervisor_hostname | openstack.localdomain | | OS-EXT-SRV-ATTR:instance_name | instance-0000005f | | OS-EXT-STS:power_state | Shutdown | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | error | | OS-SRV-USG:launched_at | 2018-05-14T07:24:00.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | privatenetwork=192.168.1.106, 129.94.14.238 | | config_drive | | | created | 2018-05-14T07:23:52Z | | fault | {u'message': u'The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-bf6a33bd-affc-48a3-80f3-e6e1be459e7a)', u'code': 500, u'details': u' File | | | "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 204, in decorated_function\n return function(self, context, *args, **kwargs)\n File "/usr/lib/python2.7/site- | | | packages/nova/compute/manager.py", line 3810, in resize_instance\n self._terminate_volume_connections(context, instance, bdms)\n File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3843, | | | in _terminate_volume_connections\n connector)\n File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 188, in wrapper\n res = method(self, ctx, *args, **kwargs)\n File "/usr/lib/python2.7 | | | /site-packages/nova/volume/cinder.py", line 210, in wrapper\n res = method(self, ctx, volume_id, *args, **kwargs)\n File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 416, in | | | terminate_connection\n connector)\n File "/usr/lib/python2.7/site-packages/cinderclient/v3/volumes.py", line 426, in terminate_connection\n {\'connector\': connector})\n File "/usr/lib/python2.7/site- | | | packages/cinderclient/v3/volumes.py", line 346, in _action\n resp, body = self.api.client.post(url, body=body)\n File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 146, in post\n return | | | self._cs_request(url, \'POST\', **kwargs)\n File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 134, in _cs_request\n return self.request(url, method, **kwargs)\n File "/usr/lib/python2.7 | | | /site-packages/cinderclient/client.py", line 123, in request\n raise exceptions.from_response(resp, body)\n', u'created': u'2018-05-28T07:54:40Z'} | | flavor | m1.medium (3) | | hostId | ecef276660cd714fe626073a18c11fe1c00bec91c15516178fb6ac28 | | id | 7292a929-54d9-4ce6-a595-aaf93a2be320 | | image | | | key_name | None | | name | danrod-server | | os-extended-volumes:volumes_attached | [{u'id': u'f1ac2e94-b0ed-4089-898f-5b6467fb51e3'}] | | project_id | d58cf22d960e4de49b71658aee642e94 | | properties | | | security_groups | [{u'name': u'admin'}, 
{u'name': u'R-Studio Server'}] | | status | ERROR | | updated | 2018-05-28T07:54:40Z | | user_id | c412f34c353244eabecd4b6dc4d36392 | +--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Cinder volume logs 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio [req-bf6a33bd-affc-48a3-80f3-e6e1be459e7a c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - default default] Failed to delete initiator iqn iqn.1994-05.com.redhat:401b935e7b19 from target. 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Traceback (most recent call last): 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio File "/usr/lib/python2.7/site-packages/cinder/volume/targets/lio.py", line 197, in terminate_connection 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio run_as_root=True) 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio return f(*args, **kwargs) 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio File "/usr/lib/python2.7/site-packages/cinder/volume/targets/lio.py", line 52, in _execute 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio return utils.execute(*args, **kwargs) 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 123, in execute 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio return processutils.execute(*cmd, **kwargs) 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 389, in execute 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio cmd=sanitized_cmd) 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio ProcessExecutionError: Unexpected error while running command. 
2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete-initiator iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3 iqn.1994-05.com.redhat:401b935e7b19 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Exit code: 1 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Stdout: u'' 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio Stderr: u'Traceback (most recent call last):\n File "/bin/cinder-rtstool", line 10, in \n sys.exit(main())\n File "/usr/lib/python2.7/site-packages/cinder/cmd/rtstool.py", line 313, in main\n delete_initiator(target_iqn, initiator_iqn)\n File "/usr/lib/python2.7/site-packages/cinder/cmd/rtstool.py", line 143, in delete_initiator\n target = _lookup_target(target_iqn, initiator_iqn)\n File "/usr/lib/python2.7/site-packages/cinder/cmd/rtstool.py", line 123, in _lookup_target\n raise RtstoolError(_(\'Could not find target %s\') % target_iqn)\ncinder.cmd.rtstool.RtstoolError: Could not find target iqn.2010-10.org.openstack:volume-f1ac2e94-b0ed-4089-898f-5b6467fb51e3\n' 2018-05-28 17:54:39.809 6804 ERROR cinder.volume.targets.lio 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager [req-bf6a33bd-affc-48a3-80f3-e6e1be459e7a c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - default default] Terminate volume connection failed: Failed to detach iSCSI target for volume f1ac2e94-b0ed-4089-898f-5b6467fb51e3. 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager Traceback (most recent call last): 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1493, in terminate_connection 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager force=force) 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 848, in terminate_connection 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager **kwargs) 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager File "/usr/lib/python2.7/site-packages/cinder/volume/targets/lio.py", line 202, in terminate_connection 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager raise exception.ISCSITargetDetachFailed(volume_id=volume['id']) 2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager ISCSITargetDetachFailed: Failed to detach iSCSI target for volume f1ac2e94-b0ed-4089-898f-5b6467fb51e3. 
2018-05-28 17:54:39.813 6804 ERROR cinder.volume.manager 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server [req-bf6a33bd-affc-48a3-80f3-e6e1be459e7a c412f34c353244eabecd4b6dc4d36392 d58cf22d960e4de49b71658aee642e94 - default default] Exception during message handling 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 133, in _process_incoming 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 150, in dispatch 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 121, in _do_dispatch 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 4404, in terminate_connection 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server force=force) 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1498, in terminate_connection 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server raise exception.VolumeBackendAPIException(data=err_msg) 2018-05-28 17:54:39.814 6804 ERROR oslo_messaging.rpc.server VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Terminate volume connection failed: Failed to detach iSCSI target for volume f1ac2e94-b0ed-4089-898f-5b6467fb51e3. Any thoughts? Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken at jots.org Tue May 29 16:36:15 2018 From: ken at jots.org (Ken D'Ambrosio) Date: Tue, 29 May 2018 12:36:15 -0400 Subject: [Openstack] "Resource doesn't have field name." In-Reply-To: <266285A2-4BA0-4D9A-B437-E25DA7BFF747@italy1.com> References: <892104447038e4885ad6936675a0a2a5@jots.org> <266285A2-4BA0-4D9A-B437-E25DA7BFF747@italy1.com> Message-ID: <15fb9db81112cfb2cd8ca9f26f77e408@jots.org> On 2018-05-25 13:52, remo at italy1.com wrote: > Use the --debug option to see what calls are going on and which one fails. Thanks! That did the trick. Turned out the image that was causing failure was one that's been stuck queueing since July, and has no associated name. 
The lack of a name is causing the "openstack image list" to fail. GET call to None for http://10.20.139.20:9292/v2/images?marker=2fd99d59-01de-4bde-a432-0e5274f45536 used request id req-6c1a9c23-1edd-4e6f-b970-4bd1ea5a7324 Resource doesn't have field name Note that the (incredibly, insanely ancient) 1.0.1 release of the "openstack" CLI command works fine. This is against Juno, so maybe that's just the way it is? Should that be expected behavior, or a bug? -Ken > Il giorno 25 mag 2018, alle ore 10:08, Amit Uniyal ha scritto: > > You can use -v (for verbose), directly check logs on openstackclient run. > > On Fri, May 25, 2018 at 10:11 PM, Ken D'Ambrosio wrote: > Hey, all. I've got a new job, and I tried my first Openstack command on it -- a Juno cloud -- with Openstack CLI 3.14.0, and it failed. Specifically: > > kdambrosio at mintyfresh:~/oscreds newton(QA/PCI)$ openstack image list > Resource doesn't have field name > > glance image-list does fine. > > Is this a case of, "Don't do that!"? Or is there something I should be digging into? > > Thanks! > > -Ken > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From SSearles at zimcom.net Tue May 29 19:03:42 2018 From: SSearles at zimcom.net (Steven D. Searles) Date: Tue, 29 May 2018 19:03:42 +0000 Subject: [Openstack] Cinder volume live migration issue. Pure Storage Cinder-Libvirt Message-ID: Hello everyone, I am seeing a strange issue with cinder block live migration and libvirt and looking for some assistance. Environment: Openstack Pike OS: Ubuntu 16.04LTS Cinder FC Driver: Pure Storage Cinder FC Driver: Dell Compellent libvirtd (libvirt) 3.6.0 Cinder-volume 2:11.1.0-0ubuntu1~cloud0 I am trying to migrate some volumes from our older Dell Compellent SC storage arrays to new Pure Storage arrays. I can create new volumes on both array’s via cinder. I can create a volume and boot it on the Pure AFA’s. I can live migrate volumes from the Pure array to the Compellent array but not from the Compellent to the Pure. The volume is created and data copied to it, it is then deleted by Cinder after the failure is logged. I receive the following error when going from the Compellent to the Pure Array. I can migrate in both directions if the volume is not attached to an instance. Any ideas? 
2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver [req-8ab3bc34-d810-4584-8f31-4dd611ed7b98 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Failure rebasing volume /dev/disk/by-path/pci-0000:03:00.0-fc-0x524a937cddfa5902-lun-2 on vda.: libvirtError: Requested operation is not valid: pivot of disk 'vda' requires an active copy job 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver Traceback (most recent call last): 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1345, in _swap_volume 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver dev.abort_job(pivot=True) 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 751, in abort_job 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver self._guest._domain.blockJobAbort(self._disk, flags=flags) 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver result = proxy_call(self._autowrap, f, *args, **kwargs) 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver rv = execute(f, *args, **kwargs) 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver six.reraise(c, e, tb) 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver rv = meth(*args, **kwargs) 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/libvirt.py", line 766, in blockJobAbort 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver if ret == -1: raise libvirtError ('virDomainBlockJobAbort() failed', dom=self) 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver libvirtError: Requested operation is not valid: pivot of disk 'vda' requires an active copy job 2018-05-29 13:08:27.618 3198 ERROR nova.virt.libvirt.driver 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [req-8ab3bc34-d810-4584-8f31-4dd611ed7b98 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] Failed to swap volume bb59023e-d463-44e9-8b1a-a9af495d3d4f for ecc059d0-79f9-402e-aea8-8c99004f221d: VolumeRebaseFailed: Volume rebase failed: Requested operation is not valid: pivot of disk 'vda' requires an active copy job 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] Traceback (most recent call last): 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5114, in _swap_volume 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] mountpoint, resize_to) 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1408, in swap_volume 2018-05-29 
13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] instance) 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] self.force_reraise() 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] six.reraise(self.type_, self.value, self.tb) 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1404, in swap_volume 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] self._swap_volume(guest, disk_dev, conf, resize_to) 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1351, in _swap_volume 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] raise exception.VolumeRebaseFailed(reason=six.text_type(exc)) 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] VolumeRebaseFailed: Volume rebase failed: Requested operation is not valid: pivot of disk 'vda' requires an active copy job 2018-05-29 13:08:28.719 3198 ERROR nova.compute.manager [instance: f834fc03-7e2d-41f4-9307-a1bded3abb29] 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server [req-8ab3bc34-d810-4584-8f31-4dd611ed7b98 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Exception during message handling: VolumeRebaseFailed: Volume rebase failed: Requested operation is not valid: pivot of disk 'vda' requires an active copy job 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 76, in wrapped 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server function_name, call_dict, binary) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server self.force_reraise() 2018-05-29 13:08:30.417 
3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 67, in wrapped 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 218, in decorated_function 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info()) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server self.force_reraise() 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 206, in decorated_function 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5255, in swap_volume 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server is_cinder_migration) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5154, in _swap_volume 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server context, new_attachment_id) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server self.force_reraise() 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5114, in _swap_volume 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server mountpoint, resize_to) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1408, in swap_volume 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server instance) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server self.force_reraise() 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1404, in 
swap_volume 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server self._swap_volume(guest, disk_dev, conf, resize_to) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1351, in _swap_volume 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server raise exception.VolumeRebaseFailed(reason=six.text_type(exc)) 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server VolumeRebaseFailed: Volume rebase failed: Requested operation is not valid: pivot of disk 'vda' requires an active copy job 2018-05-29 13:08:30.417 3198 ERROR oslo_messaging.rpc.server

Any help would be greatly appreciated.

Thanks!

— Steve

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.friesen at windriver.com Tue May 29 19:07:48 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 29 May 2018 13:07:48 -0600
Subject: [Openstack] "Resource doesn't have field name."
In-Reply-To: <15fb9db81112cfb2cd8ca9f26f77e408@jots.org>
References: <892104447038e4885ad6936675a0a2a5@jots.org> <266285A2-4BA0-4D9A-B437-E25DA7BFF747@italy1.com> <15fb9db81112cfb2cd8ca9f26f77e408@jots.org>
Message-ID: <5B0DA504.4090603@windriver.com>

I think it'd be worth filing a bug against the "openstack" client... most of the clients try to be compatible with any server version. Probably best to include the details from the run with the --debug option for both the new and old versions of the client.

Chris

On 05/29/2018 10:36 AM, Ken D'Ambrosio wrote:
> On 2018-05-25 13:52, remo at italy1.com wrote:
>> Use the --debug option to see what calls are going on and which one fails.
> Thanks! That did the trick. Turned out the image that was causing the failure was one that's been stuck queueing since July, and has no associated name.
> [...]
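A possible cleanup for the underlying data problem, sketched here and untested against Juno: since the glance client copes with the nameless image, it can also be used to give that image a name, after which the newer openstack client should be able to render the list again. The UUID below is the marker from the failing GET above, which may or may not be the nameless image itself, and "recovered-image" is just a placeholder name; confirm the right UUID with glance image-list first.

    # find the image whose name is null (the glance client handles it fine)
    glance image-list
    # give it a name; after this, "openstack image list" should work again
    glance image-update --name recovered-image 2fd99d59-01de-4bde-a432-0e5274f45536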
From Remo at italy1.com Wed May 30 01:55:16 2018
From: Remo at italy1.com (Remo Mattei)
Date: Tue, 29 May 2018 18:55:16 -0700
Subject: [Openstack] Cinder volume live migration issue. Pure Storage Cinder-Libvirt
In-Reply-To: 
References: 
Message-ID: <0AACD4DD-E019-4FDF-AB15-29F7A300BFCC@italy1.com>

What's the config on PURE?

> On May 29, 2018, at 12:03 PM, Steven D. Searles wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From SSearles at zimcom.net Wed May 30 02:05:23 2018
From: SSearles at zimcom.net (Steven D. Searles)
Date: Wed, 30 May 2018 02:05:23 +0000
Subject: [Openstack] Cinder volume live migration issue. Pure Storage Cinder-Libvirt
In-Reply-To: <0AACD4DD-E019-4FDF-AB15-29F7A300BFCC@italy1.com>
References: <0AACD4DD-E019-4FDF-AB15-29F7A300BFCC@italy1.com>
Message-ID: <63260344d5d6452c94143598c95f78b5@zimcom.net>

Nothing special, Purity 4.1.5. These are not brand new arrays; we have had them for a while (FAS450). 8Gb FC switches (Force 10). All hosts are zoned, and whether the initiators are manually created on the Pure or configured by Cinder does not seem to matter. The weird thing is that everything seems to be working fine except the migration: provisioning a new instance on the Pure, no problem; moving a volume when not attached and booting on the Pure, no problem; migrating from the Pure to the Compellent, no problem. If you need more details on the Pure config let me know.

-Steve

From: Remo Mattei [mailto:Remo at italy1.com]
Sent: Tuesday, May 29, 2018 9:55 PM
To: Steven D. Searles
Cc: openstack at lists.openstack.org
Subject: Re: [Openstack] Cinder volume live migration issue. Pure Storage Cinder-Libvirt

What's the config on PURE?

On May 29, 2018, at 12:03 PM, Steven D. Searles wrote:
[...]
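One way to see which side gives up first, sketched here (instance-00000042 is a placeholder for the libvirt domain name, which virsh list shows on the compute node): while nova is copying the data during the swap, the block copy job should stay visible right up until the pivot. If the job disappears before nova asks for the pivot, libvirt is aborting the mirror underneath nova, which tends to point at the FC/multipath path to the new Pure LUN rather than at Cinder itself.

    # on the compute node hosting the instance, while the migration is running
    virsh domblklist instance-00000042          # confirm vda is the disk being swapped
    watch -n1 'virsh blockjob instance-00000042 vda --info'
    # expect output like "Block Copy: [ 42 %]" until nova pivots;
    # if the job vanishes before it completes, the copy was aborted early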
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Remo at italy1.com Wed May 30 02:52:00 2018
From: Remo at italy1.com (Remo Mattei)
Date: Tue, 29 May 2018 19:52:00 -0700
Subject: Re: [Openstack] Cinder volume live migration issue. Pure Storage Cinder-Libvirt
In-Reply-To: <63260344d5d6452c94143598c95f78b5@zimcom.net>
References: <0AACD4DD-E019-4FDF-AB15-29F7A300BFCC@italy1.com> <63260344d5d6452c94143598c95f78b5@zimcom.net>
Message-ID: 

We have implemented Pure in the OOO (TripleO) deployment I have, and I am going to push it to production next week or so with dual Pure arrays. Their docs had some configuration that was not correct, and I am not sure if you followed that or not. I have in fact created a Pure Ansible playbook to deploy the Pure FlashArray. I can share the points I have set up, and you can compare them and see if any of them match.

Remo

> On May 29, 2018, at 7:05 PM, Steven D. Searles wrote:
> [...]
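Since the detached direction works both ways, a cold move is a possible fallback while the live path is being debugged (a sketch; the UUIDs and the host@backend#pool string are placeholders for this environment, and the instance has to tolerate the volume going away briefly):

    # detach, migrate cold, reattach
    openstack server remove volume <instance-uuid> <volume-uuid>
    cinder migrate <volume-uuid> <cinder-host>@<pure-backend>#<pool>
    # wait for the migration to finish, then:
    openstack server add volume <instance-uuid> <volume-uuid>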
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ashikameher at gmail.com Wed May 30 10:01:09 2018
From: ashikameher at gmail.com (ashika majety)
Date: Wed, 30 May 2018 15:31:09 +0530
Subject: [Openstack] Solution regarding Bug 1769089
Message-ID: 

Hi,

We have raised a bug in Launchpad; the link is as follows: https://bugs.launchpad.net/heat-dashboard/+bug/1769089. Could we please have a solution or a reply for this bug? It has been more than 20 days since we raised it.

Thanks and Regards,
Ashika Meher

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From wang.yuxin at ostorage.com.cn Wed May 30 14:02:06 2018
From: wang.yuxin at ostorage.com.cn (Yuxin Wang)
Date: Wed, 30 May 2018 22:02:06 +0800
Subject: Re: [Openstack] Struggling to get the s3 api interface to work with swift.
In-Reply-To: 
References: <2925BD08-5166-4FCB-97D0-84FBA872FFC4@ostorage.com.cn> <5B5E845C-A3D1-436E-BDC9-846257587357@ostorage.com.cn> <5F134369-41C0-4AAA-8E49-8D921AF524FA@ostorage.com.cn>
Message-ID: 

Hi Shyam,

No problem. The output of the commands is attached. And my test cluster is on Swift v2.15.1 with Swift3 v1.12.

Also, here is the common process when I'm creating an S3 credential and using it in s3curl. Hope it helps.

1. Create a user and a project, and assign a proper role.

openstack project create testproject
openstack user create testuser --password 123
openstack role add --project testproject --user testuser _member_

2. Check accessibility to Swift.

Create a test-openrc file with the above info, then:

source test-openrc
swift list

3. Create a credential.

openstack credential create --type ec2 --project testproject testuser '{"access": "testaccess", "secret": "testsecret"}'

4. Use it in s3curl.

Add the endpoint URL to `my @endpoints` in s3curl.pl, add the credential to the .s3curl config file, then:

s3curl.pl -i cred_name --debug -- http://endpoint -X GET

> On 25 May 2018, at 18:17, Shyam Prasad N wrote:
>
> Hi Yuxin,
>
> If you don't mind, can you share the output of the following commands in your running swift3 setup?
>
> openstack credential list
> openstack ec2 credentials list
> cat /etc/swift/proxy-server.conf
>
> Also, what are the access keys and secret keys that you use?
> I want to make sure that I'm not missing anything in configuration.
>
> Regards,
> Shyam
>
> On Fri, May 25, 2018 at 3:05 PM, Shyam Prasad N wrote:
>> Tried that. Unfortunately same error.
>> Is there anything I can do to troubleshoot this?
>>
>> On Fri, May 25, 2018 at 2:56 PM, Yuxin Wang wrote:
>>> They can be any strings.
>>>
>>> Replace them with whatever you want.
>>>
>>> - Yuxin
>>>
>>> On 25 May 2018, at 14:57, Shyam Prasad N wrote:
>>>
>>> Thanks. I'll try this.
>>> But what values do I use in place of ak and sk? I want to use some command to get those values, right?
>>>
>>> On Fri, May 25, 2018 at 9:52 AM, Yuxin Wang wrote:
>>>> I created ec2 credentials using the command `openstack credential create`.
>>>>
>>>> i.e.
>>>>
>>>> openstack credential create --type ec2 --project proj user '{"access": "ak", "secret": "sk"}'
>>>>
>>>> It seems the two credentials are not the same thing.
>>>>
>>>> Ref:
>>>>
>>>> https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.1.1/com.ibm.spectrum.scale.v4r11.adv.doc/bl1adv_ConfigureOpenstackEC2credentials.htm
>>>>
>>>> On 25 May 2018, at 10:32, Shyam Prasad N wrote:
>>>>
>>>> Yes, I did.
>>>> I don't think this is an s3curl-related issue, because I tried with the Python AWS SDK and got the same error.
>>>>
>>>> On Fri, May 25, 2018, 07:42 Yuxin Wang wrote:
>>>>> Did you add 127.0.0.1 to the endpoint list in s3curl.pl?
>>>>>
>>>>> i.e.
>>>>>
>>>>> my @endpoints = ('127.0.0.1');
>>>>>
>>>>> On 24 May 2018, at 19:48, Shyam Prasad N wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I've been trying to get swift3 to work for several days now. But I haven't managed to get it running.
>>>>> Both with tempauth and keystoneauth, I'm getting the same error:
>>>>>
>>>>> eightkpc at objectstore1:~/s3curl$ ./s3curl.pl --id=testerks -- http://127.0.0.1:8080/
>>>>>
>>>>> <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><RequestId>txa691e7ca97a44d56bc4c2-005b06a292</RequestId></Error>
>>>>>
>>>>> May 24 11:31:30 localhost proxy-server: 127.0.0.1 127.0.0.1 24/May/2018/11/31/30 GET / HTTP/1.0 403 - curl/7.58.0 - - 277 - txa691e7ca97a44d56bc4c2-005b06a292 - 0.0200 - - 1527161490.543112040 1527161490.563107014 -
>>>>> May 24 11:31:30 localhost proxy-server: STDERR: 127.0.0.1 - - [24/May/2018 11:31:30] "GET / HTTP/1.1" 403 621 0.021979 (txn: txa691e7ca97a44d56bc4c2-005b06a292)
>>>>>
>>>>> eightkpc at objectstore1:~$ cat .s3curl
>>>>> %awsSecretAccessKeys = (
>>>>> tester => {
>>>>> id => 'test:tester',
>>>>> key => 'testing',
>>>>> },
>>>>> testerks => {
>>>>> id => 'e6289a1b5692461388d0597a4873d054',
>>>>> key => '88bb706887094696b082f008ba133ad7',
>>>>> },
>>>>> );
>>>>>
>>>>> eightkpc at objectstore1:~$ openstack ec2 credentials show e6289a1b5692461388d0597a4873d054
>>>>> +------------+------------------------------------------------------------------------------------------------------------------------------------+
>>>>> | Field | Value |
>>>>> +------------+------------------------------------------------------------------------------------------------------------------------------------+
>>>>> | access | e6289a1b5692461388d0597a4873d054 |
>>>>> | links | {u'self': u'http://controller:5000/v3/users/d7df7b56343b4ea988869fc30efeda09/credentials/OS-EC2/e6289a1b5692461388d0597a4873d054'} |
>>>>> | project_id | dc86f7d8787b46158268bd77098b6578 |
>>>>> | secret | 88bb706887094696b082f008ba133ad7 |
>>>>> | trust_id | None |
>>>>> | user_id | d7df7b56343b4ea988869fc30efeda09 |
>>>>> +------------+------------------------------------------------------------------------------------------------------------------------------------+
>>>>>
>>>>> Can someone please let me know what is going on?
>>>>>
>>>>> Regards,
>>>>> Shyam
>>>>>
>>>>> _______________________________________________
>>>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>>> Post to : openstack at lists.openstack.org
>>>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>> --
>> -Shyam
>
> --
> -Shyam
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: test_cluster_outputs.txt
URL: 
From nspmangalore at gmail.com  Thu May 31 12:33:14 2018
From: nspmangalore at gmail.com (Shyam Prasad N)
Date: Thu, 31 May 2018 18:03:14 +0530
Subject: [Openstack] Struggling to get the s3 api interface to work with swift.
In-Reply-To: 
References: 
Message-ID: 

Hi Yuxin,

Thank you for sharing your configs.
So I've managed to get past the signature-mismatch error. Now the error is different: InvalidBucketName.

eightkpc at objectstore1:~/s3curl$ ./s3curl.pl --debug --id=testerks --createBucket -- http://20.20.20.229:8080/v1/AUTH_dc86f7d8787b46158268bd77098b6578/testBucket
s3curl: Found the url: host=20.20.20.229; port=8080; uri=/v1/AUTH_dc86f7d8787b46158268bd77098b6578/testBucket; query=;
s3curl: cname endpoint signing case
s3curl: StringToSign='PUT\n\n\nThu, 31 May 2018 12:02:57 +0000\n/20.20.20.229/v1/AUTH_dc86f7d8787b46158268bd77098b6578/testBucket'
s3curl: exec curl -v -H 'Date: Thu, 31 May 2018 12:02:57 +0000' -H 'Authorization: AWS 76498e1413284b9d961d452db608dff4:jj/kaAEuX/vK+WUTvZyDQUUEGV0=' -L -H 'content-type: ' --data-binary -X PUT http://20.20.20.229:8080/v1/AUTH_dc86f7d8787b46158268bd77098b6578/testBucket
*   Trying 20.20.20.229...
* TCP_NODELAY set
* Connected to 20.20.20.229 (20.20.20.229) port 8080 (#0)
> PUT /v1/AUTH_dc86f7d8787b46158268bd77098b6578/testBucket HTTP/1.1
> Host: 20.20.20.229:8080
> User-Agent: curl/7.58.0
> Accept: */*
> Date: Thu, 31 May 2018 12:02:57 +0000
> Authorization: AWS 76498e1413284b9d961d452db608dff4:jj/kaAEuX/vK+WUTvZyDQUUEGV0=
> Content-Length: 0
> 
< HTTP/1.1 400 Bad Request
< x-amz-id-2: tx18266052d5044eb2a3bc7-005b0fe471
< x-amz-request-id: tx18266052d5044eb2a3bc7-005b0fe471
< Content-Type: application/xml
< X-Trans-Id: tx18266052d5044eb2a3bc7-005b0fe471
< X-Openstack-Request-Id: tx18266052d5044eb2a3bc7-005b0fe471
< Date: Thu, 31 May 2018 12:02:57 GMT
< Transfer-Encoding: chunked
* HTTP error before end of send, stop sending
< 
* Closing connection 0
<Error><Code>InvalidBucketName</Code><Message>The specified bucket is not valid.</Message><RequestId>tx18266052d5044eb2a3bc7-005b0fe471</RequestId><BucketName>v1</BucketName></Error>
eightkpc at objectstore1:~/s3curl$

My specified endpoint is
http://20.20.20.229:8080/v1/AUTH_dc86f7d8787b46158268bd77098b6578

What am I doing wrong?

Regards,
Shyam
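Two details stand out in the 400 above. The error body reports the bucket name as "v1", which suggests the first segment of the /v1/AUTH_.../testBucket path is being parsed as the bucket; swift3 normally serves the S3 API at the proxy root, without the Swift /v1/AUTH_... prefix. And S3 bucket names must be lowercase (3-63 characters), so "testBucket" would likely be rejected even at the right endpoint. A boto3 sketch of the same create-bucket call under those two assumptions — the secret key is a placeholder (it isn't shown in this message), and signature_version='s3' selects legacy v2 signing to match s3curl, which may not be available in every botocore release:

    import boto3
    from botocore.client import Config

    s3 = boto3.client(
        's3',
        endpoint_url='http://20.20.20.229:8080',  # proxy root, no /v1/AUTH_... path
        aws_access_key_id='76498e1413284b9d961d452db608dff4',
        aws_secret_access_key='<ec2-credential-secret>',  # placeholder
        config=Config(signature_version='s3',
                      s3={'addressing_style': 'path'}),
    )

    # S3 bucket names must be 3-63 characters and lowercase.
    s3.create_bucket(Bucket='testbucket')
    print(s3.list_buckets()['Buckets'])

Path-style addressing is used because virtual-host style would need bucket subdomains to resolve, which they won't for a bare IP endpoint.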
On Wed, May 30, 2018 at 7:32 PM, Yuxin Wang wrote:
> Hi Shyam,
> 
> No problem. The output of the commands is attached.
> 
> And my test cluster is on Swift v2.15.1 with Swift3 v1.12.
> 
> Also, here is the common process I follow when creating an S3 credential
> and using it in s3curl. Hope it helps.
> 
> 1. Create a user and a project, and assign a proper role.
> 
> openstack project create testproject
> openstack user create testuser --password 123
> openstack role add --project testproject --user testuser _member_
> 
> 2. Check accessibility to swift.
> 
> Create a test-openrc file with the above info, then:
> 
> source test-openrc
> swift list
> 
> 3. Create a credential.
> 
> openstack credential create --type ec2 --project testproject testuser '{"access": "testaccess", "secret": "testsecret"}'
> 
> 4. Use it in s3curl.
> 
> Add the endpoint URL to `my @endpoints` in s3curl.pl, and add the
> credential to the .s3curl config file. Then:
> 
> do `s3curl.pl -i cred_name --debug -- http://endpoint -X GET`
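The same check as step 4 can also be done through the Python AWS SDK mentioned earlier in the thread. A sketch with the step-3 credential; the endpoint is a placeholder for the swift proxy root that step 4's s3curl call targets, and signature_version='s3' assumes a botocore that still ships the legacy v2 signer:

    import boto3
    from botocore.client import Config

    s3 = boto3.client(
        's3',
        endpoint_url='http://<swift-proxy>:8080',  # placeholder
        aws_access_key_id='testaccess',            # from step 3
        aws_secret_access_key='testsecret',
        config=Config(signature_version='s3',
                      s3={'addressing_style': 'path'}),
    )

    # Equivalent of `s3curl.pl -i cred_name -- http://endpoint -X GET`:
    # GET / lists the buckets visible to this credential.
    print(s3.list_buckets()['Buckets'])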
> 
> On 25 May 2018, at 18:17, Shyam Prasad N wrote:
> 
>> Hi Yuxin,
>> 
>> If you don't mind, can you share the output of the following commands
>> in your running swift3 setup?
>> 
>> openstack credential list
>> openstack ec2 credentials list
>> cat /etc/swift/proxy-server.conf
>> 
>> Also, what are the access keys and secret keys that you use?
>> I want to make sure that I'm not missing anything in configuration.
>> 
>> Regards,
>> Shyam

-- 
-Shyam