From mrhillsman at gmail.com Thu Nov 1 11:25:31 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Thu, 1 Nov 2018 06:25:31 -0500
Subject: [Openstack] [openlab] October Report
Message-ID: 

Hi everyone,

Here are some highlights from OpenLab for the month of October:

CI additions
- cluster-api-provider-openstack
- AdoptOpenJDK
  - very important open source project - many Java developers
  - strategic for open source ecosystem

Website redesign completed
- fielding resource and support requests via GitHub
- ML sign up via website
- Community page
- CI Infrastructure and High level request pipeline still manual but driven by Google Sheets
  - closer to being fully automated; easier to manage via spreadsheet instead of website backend

Promotion
- OSN Day Dallas, November 6th, 2018
  https://events.linuxfoundation.org/events/osn_days_2018/north-america/dallas/
- Twitter account is live - @askopenlab

Mailing List - https://lists.openlabtesting.org
- running latest Mailman - Postorius frontend
- net new members - 7

OpenLab Tests

(October)
- total number of tests run - 3504
  - SUCCESS - 2421
  - FAILURE - 871
  - POST_FAILURE - 72
  - RETRY_LIMIT - 131
  - TIMED_OUT - 9
  - NODE_FAILURE - 0
  - SKIPPED - 0
- 69.0925% : 30.9075% (success to fail/other job ratio)

(September)
- total number of tests run - 4350
  - SUCCESS - 2611
  - FAILURE - 1326
  - POST_FAILURE - 336
  - RETRY_LIMIT - 66
  - TIMED_OUT - 11
  - NODE_FAILURE - 0
  - SKIPPED - 0
- 60.0230% : 39.9770% (success to fail/other job ratio)

Delta
- 9.0695% increase in success to fail/other job ratio - testament to great support by Chenrui and Liusheng "keeping the lights on".

Additional Infrastructure
- Packet - 80 vCPUs, 80G RAM, 1000G Disk
- ARM - ARM-based OpenStack Cloud
  - Managed by codethink.co.uk
  - single compute node - 96 vCPUs, 128G RAM, 800G Disk
- Massachusetts Open Cloud - in progress
  - small project for now - academia partner

Build Status Legend:
SUCCESS      job executed correctly and exited without failure
FAILURE      job executed correctly, but exited with a failure
RETRY_LIMIT  pre-build tasks/plays failed more than the maximum number of retry attempts
POST_FAILURE post-build tasks/plays failed
SKIPPED      one of the build dependencies failed and this job was not executed
NODE_FAILURE no device available to run the build
TIMED_OUT    build got stuck at some point and hit the timeout limit

Thank you to everyone who has read through this month’s update. If you have any questions/concerns please feel free to start a thread on the mailing list, or if it is something not to be shared publicly right now you can email info at openlabtesting.org

Kind regards,
OpenLab Governance Team

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From manuel.sb at garvan.org.au Thu Nov 1 11:43:33 2018
From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros)
Date: Thu, 1 Nov 2018 11:43:33 +0000
Subject: [Openstack] [NOVA][KOLLA] not able to configure pci pass-through on gpu devices
Message-ID: <9D8A2486E35F0941A60430473E29F15B017BAD9F6D at MXDB2.ad.garvan.unsw.edu.au>

Dear OpenStack community,

I am not able to configure PCI pass-through with GPUs successfully and was wondering if someone could give advice.
physical machine: Dell C4140 DT-V and SR-IOV are enabled on BIOS # hosts names/role: openstack-deployment --> kolla deployment and openstack client controller node --> TEST-openstack-controller proto-gpu --> compute node with 4 GPUs installed latest nvidia cude software has been installed on the node iommu kernel module has been enabled on GRUB but I am not sure whether it is working # Check Grub config [root at proto-gpu ~]# vi /etc/default/grub GRUB_TIMEOUT=5 GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" GRUB_DEFAULT=saved GRUB_DISABLE_SUBMENU=true GRUB_TERMINAL_OUTPUT="console" GRUB_CMDLINE_LINUX="nouveau.modeset=0 rd.driver.blacklist=nouveau crashkernel=auto rhgb quiet intel_iommu=on" GRUB_DISABLE_RECOVERY="true" ### Check dmesg after system reboot [root at proto-gpu ~]# dmesg | grep -e DMAR -e IOMMU [ 0.000000] ACPI: DMAR 000000006f6c3000 001D0 (v01 DELL PE_SC3 00000001 DELL 00000001) [ 0.000000] DMAR: IOMMU enabled [ 0.232060] DMAR: Host address width 46 [ 0.232062] DMAR: DRHD base: 0x000000d37fc000 flags: 0x0 [ 0.232068] DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020df [ 0.232069] DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0 [ 0.232074] DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df [ 0.232075] DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0 [ 0.232080] DMAR: dmar2: reg_base_addr ee7fc000 ver 1:0 cap 8d2078c106f0466 ecap f020df [ 0.232081] DMAR: DRHD base: 0x000000fbffc000 flags: 0x0 [ 0.232085] DMAR: dmar3: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df [ 0.232086] DMAR: DRHD base: 0x000000aaffc000 flags: 0x0 [ 0.232090] DMAR: dmar4: reg_base_addr aaffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df [ 0.232091] DMAR: DRHD base: 0x000000b87fc000 flags: 0x0 [ 0.232096] DMAR: dmar5: reg_base_addr b87fc000 ver 1:0 cap 8d2078c106f0466 ecap f020df [ 0.232097] DMAR: DRHD base: 0x000000c5ffc000 flags: 0x0 [ 0.232101] DMAR: dmar6: reg_base_addr c5ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df [ 0.232102] DMAR: DRHD base: 0x0000009d7fc000 flags: 0x1 [ 0.232106] DMAR: dmar7: reg_base_addr 9d7fc000 ver 1:0 cap 8d2078c106f0466 ecap f020df [ 0.232107] DMAR: RMRR base: 0x0000006e2cd000 end: 0x0000006e7ccfff [ 0.232108] DMAR: RMRR base: 0x0000006f362000 end: 0x0000006f364fff [ 0.232109] DMAR: ATSR flags: 0x0 [ 0.232110] DMAR: ATSR flags: 0x0 [ 0.232112] DMAR-IR: IOAPIC id 12 under DRHD base 0xc5ffc000 IOMMU 6 [ 0.232113] DMAR-IR: IOAPIC id 11 under DRHD base 0xb87fc000 IOMMU 5 [ 0.232114] DMAR-IR: IOAPIC id 10 under DRHD base 0xaaffc000 IOMMU 4 [ 0.232115] DMAR-IR: IOAPIC id 18 under DRHD base 0xfbffc000 IOMMU 3 [ 0.232116] DMAR-IR: IOAPIC id 17 under DRHD base 0xee7fc000 IOMMU 2 [ 0.232117] DMAR-IR: IOAPIC id 16 under DRHD base 0xe0ffc000 IOMMU 1 [ 0.232118] DMAR-IR: IOAPIC id 15 under DRHD base 0xd37fc000 IOMMU 0 [ 0.232120] DMAR-IR: IOAPIC id 8 under DRHD base 0x9d7fc000 IOMMU 7 [ 0.232121] DMAR-IR: IOAPIC id 9 under DRHD base 0x9d7fc000 IOMMU 7 [ 0.232122] DMAR-IR: HPET id 0 under DRHD base 0x9d7fc000 [ 0.232123] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit. [ 0.232124] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting. 
[ 0.234147] DMAR-IR: Enabled IRQ remapping in xapic mode
[ 2.094974] DMAR: dmar6: Using Queued invalidation
[ 2.094983] DMAR: dmar4: Using Queued invalidation
[ 2.094990] DMAR: dmar2: Using Queued invalidation
[ 2.094995] DMAR: dmar1: Using Queued invalidation
[ 2.095000] DMAR: dmar7: Using Queued invalidation
[ 2.095043] DMAR: Setting RMRR:
[ 2.098832] DMAR: Setting identity map for device 0000:00:14.0 [0x6f362000 - 0x6f364fff]
[ 2.098838] DMAR: Setting identity map for device 0000:00:14.0 [0x6e2cd000 - 0x6e7ccfff]
[ 2.098846] DMAR: Prepare 0-16MiB unity mapping for LPC
[ 2.102818] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[ 2.102831] DMAR: Intel(R) Virtualization Technology for Directed I/O

### List GPUs
[root at proto-gpu ~]# lspci -nn | egrep -i nvidia
1a:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 SXM2] [10de:1db1] (rev a1)
1c:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 SXM2] [10de:1db1] (rev a1)
1d:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 SXM2] [10de:1db1] (rev a1)
1e:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 SXM2] [10de:1db1] (rev a1)

### Check pci configuration on nova-compute
[root at proto-gpu ~]# docker exec -it nova_compute cat /etc/nova/nova.conf
...
[pci]
passthrough_whitelist = { "vendor_id": "10de", "product_id": "1db1" }
alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PF", "name":"nv_v100" }

### Check pci configuration on nova-scheduler and nova-api
[root at TEST-openstack-controller ~]# docker exec -it nova_scheduler cat /etc/nova/nova.conf
...
[filter_scheduler]
enabled_filters = RetryFilter, AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
available_filters = nova.scheduler.filters.all_filters

[root at TEST-openstack-controller ~]# docker exec -it nova_api cat /etc/nova/nova.conf
...
[pci]
alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PF", "name":"nv_v100" }

### Create a flavor
[root at openstack-deployment ~]# openstack flavor create gpu.medium --ram 4096 --disk 40 --vcpus 2 --property "pci_passthrough:alias"="nv_v100:2"

### Nova scheduler logs
2018-11-01 22:07:20.832 32 INFO nova.filters [req-7c6bc2df-028a-4d5f-a025-1e9a677883a4 02132c31dafa4d1d939bd52e0420b975 abc29399b91d423088549d7446766573 - default default] Filtering removed all hosts for the request with instance ID '6fd33275-7e1f-4673-a905-1213d4eaa1b3'. Filter results: ['RetryFilter: (start: 2, end: 2)', 'AvailabilityZoneFilter: (start: 2, end: 2)', 'ComputeFilter: (start: 2, end: 2)', 'ComputeCapabilitiesFilter: (start: 2, end: 2)', 'ImagePropertiesFilter: (start: 2, end: 2)', 'ServerGroupAntiAffinityFilter: (start: 2, end: 2)', 'ServerGroupAffinityFilter: (start: 2, end: 2)', 'PciPassthroughFilter: (start: 2, end: 0)']

PciPassthroughFilter is not returning any host and I am not sure why. What am I doing wrong? Please feel free to ask for more details in case I have missed something important.

Thank you very much

Manuel

NOTICE
Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed.
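A hedged side note on the configuration quoted above: the compute node's whitelist declares the V100s without a device_type, while both alias entries ask for "device_type":"type-PF". Nova only tracks a passthrough device as type-PF when it is an SR-IOV physical function; a GPU that does not expose SR-IOV is tracked as type-PCI, and an alias requesting type-PF will then never match, so the PciPassthroughFilter removes every host exactly as the scheduler log shows. The snippet below is a minimal sketch of a change worth trying, assuming these cards are not SR-IOV physical functions; it would go into the [pci] section used by nova_compute and nova_api (as above), followed by a restart of those containers:

[pci]
passthrough_whitelist = { "vendor_id": "10de", "product_id": "1db1" }
# drop device_type entirely, or set it to "type-PCI", so the alias matches a plain PCI device
alias = { "vendor_id":"10de", "product_id":"1db1", "device_type":"type-PCI", "name":"nv_v100" }

If the whitelist is being applied, the pci_devices table in the Nova database (or the nova-compute log) should list the four GPUs as available after restarting nova_compute; if they are missing there, the problem is on the compute side rather than in the alias.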
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marcioprado at marcioprado.eti.br Thu Nov 1 13:49:48 2018
From: marcioprado at marcioprado.eti.br (Marcio Prado)
Date: Thu, 01 Nov 2018 10:49:48 -0300
Subject: [Openstack] DHCP not accessible on new compute node.
In-Reply-To: 
References: 
Message-ID: <187dfec53f92fe6657bb393dd6670a20@marcioprado.eti.br>

I believe you have not forgotten anything. This is probably a bug ...

As my cloud is not production but rather my master's research, I migrate the VM live to a node that is working, restart it, and after that I migrate it back to the original node that was not working, and it keeps running ...

On 30-10-2018 17:50, Torin Woltjer wrote:
> Interestingly, I created a brand new selfservice network and DHCP
> doesn't work on that either. I've followed the instructions in the
> minimal setup (excluding the controllers as they're already set up)
> but the new node has no access to the DHCP agent in neutron it seems.
> Is there a likely component that I've overlooked?
>
> _TORIN WOLTJER_
>
> GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY
>
> 616.776.1066 EXT. 2006
> _WWW.GRANDDIAL.COM [1]_
>
> -------------------------
>
> FROM: "Torin Woltjer"
> SENT: 10/30/18 10:48 AM
> TO: , "openstack at lists.openstack.org"
>
> SUBJECT: Re: [Openstack] DHCP not accessible on new compute node.
>
> I deleted both DHCP ports and they recreated as you said. However,
> instances are still unable to get network addresses automatically.
>
> _TORIN WOLTJER_
>
> GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY
>
> 616.776.1066 EXT. 2006
> _ [1] [1]WWW.GRANDDIAL.COM [1]_
>
> -------------------------
>
> FROM: Marcio Prado
> SENT: 10/29/18 6:23 PM
> TO: torin.woltjer at granddial.com
> SUBJECT: Re: [Openstack] DHCP not accessible on new compute node.
> The port is recreated automatically. The problem, like I said, is not in
> DHCP, but for some reason, erasing and waiting for OpenStack to
> re-create the port often solves the problem.
>
> Please, if you do find out the actual problem, let me know. I'm very
> interested to know.
>
> You can delete the port without fear. OpenStack will recreate it in a
> short time.
>
> Links:
> ------
> [1] http://www.granddial.com

--
Marcio Prado
IT Analyst - Infrastructure and Networks
Phone: (35) 9.9821-3561
www.marcioprado.eti.br

From nickeysgo at gmail.com Thu Nov 1 14:01:13 2018
From: nickeysgo at gmail.com (Minjun Hong)
Date: Thu, 1 Nov 2018 23:01:13 +0900
Subject: [Openstack] Troubles on creating an Xen instance via libvirt
Message-ID: 

Hi everyone.
I'm really new to OpenStack.
After I installed the essential components of OpenStack, such as Nova, Keystone, etc., I attempted to create an instance through the OpenStack command in the terminal. But a problem occurred.
It was logged in "nova-conductor.log": 2018-11-01 22:18:59.831 2570 ERROR nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 > 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node2 (node > node2): [u'Traceback (most recent call last):\n', u' File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in > _do_build_and_run_instance\n filter_properties, request_spec)\n', u' > File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, > in _build_and_run_instance\n instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u"RescheduledException: Build of instance > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: Invalid input for > field 'identity/password/user/password': None is not of type > 'string'\n\nFailed validating 'type' in > schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']:\n > {'type': 'string'}\n\nOn > instance['identity']['password']['user']['password']:\n None (HTTP 400) > (Request-ID: req-350beee4-2fed-4645-9e21-79806d7ebfe7)\n"] > 2018-11-01 22:18:59.833 2570 WARNING oslo_config.cfg > [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 > 965ff1c2002d4c278b5f838dbdbbb780 - default default] Option "os_region_name" > from group "placement" is deprecated. Use option "region-name" from group > "placement". > 2018-11-01 22:19:10.936 2571 ERROR nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 > 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node4 (node > node4): [u'Traceback (most recent call last):\n', u' File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in > _do_build_and_run_instance\n filter_properties, request_spec)\n', u' > File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, > in _build_and_run_instance\n instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u"RescheduledException: Build of instance > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: internal error: > libxenlight failed to create new domain 'instance-00000005'\n"] > 2018-11-01 22:19:21.783 2567 ERROR nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 > 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node5 (node > node5): [u'Traceback (most recent call last):\n', u' File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in > _do_build_and_run_instance\n filter_properties, request_spec)\n', u' > File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, > in _build_and_run_instance\n instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u"RescheduledException: Build of instance > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: internal error: > libxenlight failed to create new domain 'instance-00000005'\n"] > 2018-11-01 22:19:21.783 2567 WARNING nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 > 965ff1c2002d4c278b5f838dbdbbb780 - default default] Failed to > compute_task_build_instances: Exceeded maximum number of retries. Exceeded > max scheduling attempts 3 for instance > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. 
Last exception: internal error: > libxenlight failed to create new domain 'instance-00000005': > MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max > scheduling attempts 3 for instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. > Last exception: internal error: libxenlight failed to create new domain > 'instance-00000005' > 2018-11-01 22:19:21.784 2567 WARNING nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 > 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Setting instance to ERROR state.: > MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max > scheduling attempts 3 for instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. > Last exception: internal error: libxenlight failed to create new domain > 'instance-00000005' I'm not sure which component is involved in this trouble. And libvirt and Xen have been successfully installed on all compute node I have without any problem. nickeys at node2:~$ virsh create ./test.xml > Domain guest1 created from ./test.xml > nickeys at node2:~$ virsh list > Id Name State > -------------------------- > 0 Domain-0 running > 1 guest1 running > nickeys at node2:~$ What should I check first for that issue ? Your hint would be big help for me. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Fri Nov 2 00:05:18 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Fri, 2 Nov 2018 09:05:18 +0900 Subject: [Openstack] Troubles on creating an Xen instance via libvirt In-Reply-To: References: Message-ID: <357884aa-d222-46e6-d84a-52f6fa58f427@gmail.com> Welcome to OpenStack! I think the joy of troubleshooting it is one of its main selling points. The conductor log says "/libxenlight failed to create new domain/" on various nodes. You should check the nova-compute logs on those nodes for relevant messages. I suspect an ill-configured interface between Nova Compute and libvirt/Xen. In other words, double-check the libvirt and xen sections of nova.conf on those nodes. Then there is this message "/Invalid //input for field 'identity/password/user/password': None is not of type 'string' ..../". This is often caused by incompatible software. I.e. you installed Openstack components that don't fit together. Might also be caused by a configuration error. Or it's an ordinary bug, but it seems to occur on node 2 only. On 11/1/2018 11:01 PM, Minjun Hong wrote: > Hi everyone. > I'm really new to OpenStack. > After I install essential components of OpenStack, such as Nova, > Keystone, etc, I attempted creating an instance through Openstack > command in the terminal. > But a trouble has occurred. 
It was logged in "nova-conductor.log": > > 2018-11-01 22:18:59.831 2570 ERROR nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a > cc22ec575cb44e53aced9ddf58d9e8d7 965ff1c2002d4c278b5f838dbdbbb780 > - default default] [instance: > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node2 > (node node2): [u'Traceback (most recent call last):\n', u'  File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line > 1840, in _do_build_and_run_instance\n    filter_properties, > request_spec)\n', u'  File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line > 2120, in _build_and_run_instance\n    instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u"RescheduledException: Build of > instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: > Invalid input for field 'identity/password/user/password': None is > not of type 'string'\n\nFailed validating 'type' in > schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']:\n  >   {'type': 'string'}\n\nOn > instance['identity']['password']['user']['password']:\n    None > (HTTP 400) (Request-ID: req-350beee4-2fed-4645-9e21-79806d7ebfe7)\n"] > 2018-11-01 22:18:59.833 2570 WARNING oslo_config.cfg > [req-12aae2ff-4186-4ab0-964c-35b335c3188a > cc22ec575cb44e53aced9ddf58d9e8d7 965ff1c2002d4c278b5f838dbdbbb780 > - default default] Option "os_region_name" from group "placement" > is deprecated. Use option "region-name" from group "placement". > 2018-11-01 22:19:10.936 2571 ERROR nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a > cc22ec575cb44e53aced9ddf58d9e8d7 965ff1c2002d4c278b5f838dbdbbb780 > - default default] [instance: > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node4 > (node node4): [u'Traceback (most recent call last):\n', u'  File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line > 1840, in _do_build_and_run_instance\n    filter_properties, > request_spec)\n', u'  File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line > 2120, in _build_and_run_instance\n    instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u"RescheduledException: Build of > instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: > internal error: libxenlight failed to create new domain > 'instance-00000005'\n"] > 2018-11-01 22:19:21.783 2567 ERROR nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a > cc22ec575cb44e53aced9ddf58d9e8d7 965ff1c2002d4c278b5f838dbdbbb780 > - default default] [instance: > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node5 > (node node5): [u'Traceback (most recent call last):\n', u'  File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line > 1840, in _do_build_and_run_instance\n    filter_properties, > request_spec)\n', u'  File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line > 2120, in _build_and_run_instance\n    instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u"RescheduledException: Build of > instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: > internal error: libxenlight failed to create new domain > 'instance-00000005'\n"] > 2018-11-01 22:19:21.783 2567 WARNING nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a > cc22ec575cb44e53aced9ddf58d9e8d7 965ff1c2002d4c278b5f838dbdbbb780 > - default default] Failed to compute_task_build_instances: > Exceeded maximum number of retries. Exceeded max scheduling > attempts 3 for instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. 
Last > exception: internal error: libxenlight failed to create new domain > 'instance-00000005': MaxRetriesExceeded: Exceeded maximum number > of retries. Exceeded max scheduling attempts 3 for instance > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. Last exception: internal > error: libxenlight failed to create new domain 'instance-00000005' > 2018-11-01 22:19:21.784 2567 WARNING nova.scheduler.utils > [req-12aae2ff-4186-4ab0-964c-35b335c3188a > cc22ec575cb44e53aced9ddf58d9e8d7 965ff1c2002d4c278b5f838dbdbbb780 > - default default] [instance: > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Setting instance to ERROR > state.: MaxRetriesExceeded: Exceeded maximum number of retries. > Exceeded max scheduling attempts 3 for instance > 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. Last exception: internal > error: libxenlight failed to create new domain 'instance-00000005' > > >  I'm not sure which component is involved in this trouble. > And libvirt and Xen have been successfully installed on all compute > node I have without any problem. > > nickeys at node2:~$ virsh create ./test.xml > Domain guest1 created from ./test.xml > nickeys at node2:~$ virsh list >  Id   Name       State > -------------------------- >  0    Domain-0   running >  1    guest1     running > nickeys at node2:~$ > > > What should I check first for that issue ?  > Your hint would be big help for me. > > Thanks! > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From nickeysgo at gmail.com Fri Nov 2 09:30:01 2018 From: nickeysgo at gmail.com (Minjun Hong) Date: Fri, 2 Nov 2018 18:30:01 +0900 Subject: [Openstack] Troubles on creating an Xen instance via libvirt In-Reply-To: <357884aa-d222-46e6-d84a-52f6fa58f427@gmail.com> References: <357884aa-d222-46e6-d84a-52f6fa58f427@gmail.com> Message-ID: On Fri, Nov 2, 2018 at 9:05 AM Bernd Bausch wrote: > Welcome to OpenStack! I think the joy of troubleshooting it is one of its > main selling points. > > The conductor log says "*libxenlight failed to create new domain*" on > various nodes. You should check the nova-compute logs on those nodes for > relevant messages. I suspect an ill-configured interface between Nova > Compute and libvirt/Xen. In other words, double-check the libvirt and xen > sections of nova.conf on those nodes. > > Then there is this message "*Invalid **input for field > 'identity/password/user/password': None is not of type 'string' ....*". > This is often caused by incompatible software. I.e. you installed Openstack > components that don't fit together. Might also be caused by a configuration > error. > Or it's an ordinary bug, but it seems to occur on node 2 only. > On 11/1/2018 11:01 PM, Minjun Hong wrote: > > Hi everyone. > I'm really new to OpenStack. > After I install essential components of OpenStack, such as Nova, Keystone, > etc, I attempted creating an instance through Openstack command in the > terminal. > But a trouble has occurred. 
It was logged in "nova-conductor.log": > > 2018-11-01 22:18:59.831 2570 ERROR nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node2 (node >> node2): [u'Traceback (most recent call last):\n', u' File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in >> _do_build_and_run_instance\n filter_properties, request_spec)\n', u' >> File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, >> in _build_and_run_instance\n instance_uuid=instance.uuid, >> reason=six.text_type(e))\n', u"RescheduledException: Build of instance >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: Invalid input for >> field 'identity/password/user/password': None is not of type >> 'string'\n\nFailed validating 'type' in >> schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']:\n >> {'type': 'string'}\n\nOn >> instance['identity']['password']['user']['password']:\n None (HTTP 400) >> (Request-ID: req-350beee4-2fed-4645-9e21-79806d7ebfe7)\n"] >> 2018-11-01 22:18:59.833 2570 WARNING oslo_config.cfg >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] Option "os_region_name" >> from group "placement" is deprecated. Use option "region-name" from group >> "placement". >> 2018-11-01 22:19:10.936 2571 ERROR nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node4 (node >> node4): [u'Traceback (most recent call last):\n', u' File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in >> _do_build_and_run_instance\n filter_properties, request_spec)\n', u' >> File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, >> in _build_and_run_instance\n instance_uuid=instance.uuid, >> reason=six.text_type(e))\n', u"RescheduledException: Build of instance >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: internal error: >> libxenlight failed to create new domain 'instance-00000005'\n"] >> 2018-11-01 22:19:21.783 2567 ERROR nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node5 (node >> node5): [u'Traceback (most recent call last):\n', u' File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in >> _do_build_and_run_instance\n filter_properties, request_spec)\n', u' >> File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, >> in _build_and_run_instance\n instance_uuid=instance.uuid, >> reason=six.text_type(e))\n', u"RescheduledException: Build of instance >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: internal error: >> libxenlight failed to create new domain 'instance-00000005'\n"] >> 2018-11-01 22:19:21.783 2567 WARNING nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] Failed to >> compute_task_build_instances: Exceeded maximum number of retries. 
Exceeded >> max scheduling attempts 3 for instance >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. Last exception: internal error: >> libxenlight failed to create new domain 'instance-00000005': >> MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max >> scheduling attempts 3 for instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. >> Last exception: internal error: libxenlight failed to create new domain >> 'instance-00000005' >> 2018-11-01 22:19:21.784 2567 WARNING nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Setting instance to ERROR state.: >> MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max >> scheduling attempts 3 for instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. >> Last exception: internal error: libxenlight failed to create new domain >> 'instance-00000005' > > > I'm not sure which component is involved in this trouble. > And libvirt and Xen have been successfully installed on all compute node I > have without any problem. > > nickeys at node2:~$ virsh create ./test.xml >> Domain guest1 created from ./test.xml >> nickeys at node2:~$ virsh list >> Id Name State >> -------------------------- >> 0 Domain-0 running >> 1 guest1 running >> nickeys at node2:~$ > > > What should I check first for that issue ? > Your hint would be big help for me. > > Thanks! > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > Thanks for your kind answer, Bernd. I checked the configure files which you mentioned like it can cause the trouble. following files are my nova.conf and nova-compute.conf of the compute node ('node2'). 
# /etc/nova/nova.conf

[DEFAULT]
> dhcpbridge_flagfile=/etc/nova/nova.conf
> dhcpbridge=/usr/bin/nova-dhcpbridge
> state_path=/var/lib/nova
> lock_path=/var/lock/nova
> force_dhcp_release=True
> libvirt_use_virtio_for_bridges=True
> verbose=True
> ec2_private_dns_show_ip=True
> api_paste_config=/etc/nova/api-paste.ini
> enabled_apis=osapi_compute,metadata
> transport_url = rabbit://admin:1234 at 10.150.3.88
> my_ip = 10.150.21.182
> use_neutron = True
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
> [api]
> auth_strategy = keystone
> [vnc]
> enabled = True
> server_listen = 0.0.0.0
> vncserver_proxyclient_address = 10.150.21.182
> novncproxy_base_url = http://10.150.3.88:6080/vnc_auth.html
> [glance]
> api_servers = http://10.150.3.88:9292
> [oslo_concurrency]
> lock_path = /var/lib/nova/tmp
> [keystone_authtoken]
> auth_url = http://10.150.3.88:5000/v3
> memcached_servers = 10.150.3.88:11211
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> project_name = service
> username = nova
> password = 
> [placement]
> os_region_name = RegionOne
> project_domain_name = Default
> project_name = service
> auth_type = password
> user_domain_name = Default
> auth_url = http://10.150.3.88:5000/v3
> username = placement
> password = 
> [database]
> connection=sqlite:////var/lib/nova/nova.sqlite
> [api_database]
> connection=sqlite:////var/lib/nova/nova.sqlite
> [neutron]
> url = http://10.150.3.88:9696
> auth_url = http://10.150.3.88:5000
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> region_name = RegionOne
> project_name = service
> username = neutron
> passsword = 

# /etc/nova/nova-compute.conf
> [DEFAULT]
> compute_driver=libvirt.LibvirtDriver
> [libvirt]
> virt_type=xen

And I set up all the compute nodes I'm using with the same configuration files above.
Also, I installed the OpenStack components from the Queens release, following the official installation guide (https://docs.openstack.org/install-guide/).

By the way, I found something weird in '/var/log/nova/nova-compute.log' on 'node2'.
After I manually created a VM with the 'virsh' command (e.g. $ virsh create test.xml), nova-compute on 'node2' started to complain.
On another node where I have not created a virtual machine with the 'virsh' command, the error below does not appear.
I guess that the nova-compute component was configured to use qemu as the virtualization type, but I have no idea where that setting comes from.

2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager
> [req-ec324126-6dfc-4011-9f45-41d014e6f900 - - - - -] Error updating
> resources for node node2.: InvalidDiskInfo: Disk info file is invalid:
> qemu-img failed to execute on
> /home/caslab/xenguest1/domains/guest1/disk.img : Unexpected error while
> running command.
> Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 > --cpu=30 -- env LC_ALL=C LANG=C qemu-img info > /home/caslab/xenguest1/domains/guest1/disk.img > Exit code: 1 > Stdout: u'' > Stderr: u"qemu-img: Could not open > '/home/caslab/xenguest1/domains/guest1/disk.img': Could not open > '/home/caslab/xenguest1/domains/guest1/disk.img': Permission denied\n" > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Traceback (most > recent call last): > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 7284, in > update_available_resource_for_node > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager > rt.update_available_resource(context, nodename) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File > "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line > 664, in update_available_resource > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager resources = > self.driver.get_available_resource(nodename) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6440, > in get_available_resource > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager > disk_over_committed = self._get_disk_over_committed_size_total() > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 8019, > in _get_disk_over_committed_size_total > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager config, > block_device_info) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 7918, > in _get_instance_disk_info_from_config > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager dk_size = > disk_api.get_allocated_disk_size(path) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File > "/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 147, in > get_allocated_disk_size > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager return > images.qemu_img_info(path).disk_size > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File > "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 87, in > qemu_img_info > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager raise > exception.InvalidDiskInfo(reason=msg) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager InvalidDiskInfo: > Disk info file is invalid: qemu-img failed to execute on > /home/caslab/xenguest1/domains/guest1/disk.img : Unexpected error while > running command. > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Command: > /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- > env LC_ALL=C LANG=C qemu-img info > /home/caslab/xenguest1/domains/guest1/disk.img > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Exit code: 1 > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Stdout: u'' > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Stderr: > u"qemu-img: Could not open > '/home/caslab/xenguest1/domains/guest1/disk.img': Could not open > '/home/caslab/xenguest1/domains/guest1/disk.img': Permission denied\n" > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
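A quick, hedged check on the "Permission denied" above (a guess, not something established in the thread): nova-compute runs qemu-img info against the disks of the libvirt domains it finds on the host, including the hand-made guest1, so if the service user cannot read that image the resource audit will keep logging this error. The user name 'nova' and the chmod fix below are assumptions; the path is taken from the log above.

# on node2, as root
sudo -u nova env LC_ALL=C LANG=C qemu-img info /home/caslab/xenguest1/domains/guest1/disk.img
# if this also fails with "Permission denied", the nova user cannot traverse the
# directories or read the image; either relax the permissions, e.g.
chmod o+x /home/caslab /home/caslab/xenguest1 /home/caslab/xenguest1/domains /home/caslab/xenguest1/domains/guest1
chmod o+r /home/caslab/xenguest1/domains/guest1/disk.img
# or shut down the manually created guest1 domain (it was started with 'virsh create',
# so it is transient) so nova-compute no longer tries to inspect a disk it does not manage

Whether this audit error is related to the libxenlight failure when booting Nova instances is a separate question, as the reply below also notes.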
URL: From berndbausch at gmail.com Fri Nov 2 12:11:46 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Fri, 2 Nov 2018 21:11:46 +0900 Subject: [Openstack] Troubles on creating an Xen instance via libvirt In-Reply-To: References: <357884aa-d222-46e6-d84a-52f6fa58f427@gmail.com> Message-ID: <2cee2e7e-37ef-0b32-eea8-438213561fc8@gmail.com> This is the part that determines the hypervisor: # /etc/nova/nova-compute.conf [DEFAULT] compute_driver=libvirt.LibvirtDriver [libvirt] virt_type=xen Nova uses libvirt to manage VMs, and libvirt uses Xen. According to the Nova/Xen configuration guide , it is enough to set these variables. However, if you have not read that config page, do it now. Are you using correct image formats? Do you find anything in the other logfiles mentioned on that page? The references to /qemu-img /in the nova-compute log don't mean that qemu is used. Nova uses /qemu-img/ as a general tool for converting VM images, no matter which hypervisor is configured. However, something prevents Nova to access one of the disk images, namely /home/caslab/xenguest1/domains/guest1/disk.img. Is /guest1/ the VM you started manually? I see: Unexpected error while running command. Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C *qemu-img info /home/caslab/xenguest1/domains/guest1/disk.img* and Could not open '/home/caslab/xenguest1/domains/guest1/disk.img': Permission denied I don't know, though, whether these errors are related to your problem. Why is Libvirt or Xen unable to start your instances? I suggest you look for other errors in the nova-compute.log files. Bernd On 11/2/2018 6:30 PM, Minjun Hong wrote: > On Fri, Nov 2, 2018 at 9:05 AM Bernd Bausch > wrote: > > Welcome to OpenStack! I think the joy of troubleshooting it is one > of its main selling points. > > The conductor log says "/libxenlight failed to create new domain/" > on various nodes. You should check the nova-compute logs on those > nodes for relevant messages. I suspect an ill-configured interface > between Nova Compute and libvirt/Xen. In other words, double-check > the libvirt and xen sections of nova.conf on those nodes. > > Then there is this message "/Invalid //input for field > 'identity/password/user/password': None is not of type 'string' > ..../". This is often caused by incompatible software. I.e. you > installed Openstack components that don't fit together. Might also > be caused by a configuration error. > Or it's an ordinary bug, but it seems to occur on node 2 only. > > On 11/1/2018 11:01 PM, Minjun Hong wrote: >> Hi everyone. >> I'm really new to OpenStack. >> After I install essential components of OpenStack, such as Nova, >> Keystone, etc, I attempted creating an instance through Openstack >> command in the terminal. >> But a trouble has occurred. 
It was logged in "nova-conductor.log": >> >> 2018-11-01 22:18:59.831 2570 ERROR nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a >> cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] >> [instance: 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from >> last host: node2 (node node2): [u'Traceback (most recent call >> last):\n', u'  File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", >> line 1840, in _do_build_and_run_instance\n    >> filter_properties, request_spec)\n', u'  File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", >> line 2120, in _build_and_run_instance\n    >> instance_uuid=instance.uuid, reason=six.text_type(e))\n', >> u"RescheduledException: Build of instance >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: >> Invalid input for field 'identity/password/user/password': >> None is not of type 'string'\n\nFailed validating 'type' in >> schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']:\n  >>   {'type': 'string'}\n\nOn >> instance['identity']['password']['user']['password']:\n    >> None (HTTP 400) (Request-ID: >> req-350beee4-2fed-4645-9e21-79806d7ebfe7)\n"] >> 2018-11-01 22:18:59.833 2570 WARNING oslo_config.cfg >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a >> cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] Option >> "os_region_name" from group "placement" is deprecated. Use >> option "region-name" from group "placement". >> 2018-11-01 22:19:10.936 2571 ERROR nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a >> cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] >> [instance: 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from >> last host: node4 (node node4): [u'Traceback (most recent call >> last):\n', u'  File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", >> line 1840, in _do_build_and_run_instance\n    >> filter_properties, request_spec)\n', u'  File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", >> line 2120, in _build_and_run_instance\n    >> instance_uuid=instance.uuid, reason=six.text_type(e))\n', >> u"RescheduledException: Build of instance >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: >> internal error: libxenlight failed to create new domain >> 'instance-00000005'\n"] >> 2018-11-01 22:19:21.783 2567 ERROR nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a >> cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] >> [instance: 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from >> last host: node5 (node node5): [u'Traceback (most recent call >> last):\n', u'  File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", >> line 1840, in _do_build_and_run_instance\n    >> filter_properties, request_spec)\n', u'  File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", >> line 2120, in _build_and_run_instance\n    >> instance_uuid=instance.uuid, reason=six.text_type(e))\n', >> u"RescheduledException: Build of instance >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: >> internal error: libxenlight failed to create new domain >> 'instance-00000005'\n"] >> 2018-11-01 22:19:21.783 2567 WARNING nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a >> cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] Failed to >> compute_task_build_instances: Exceeded maximum number of >> retries. 
Exceeded max scheduling attempts 3 for instance >> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. Last exception: >> internal error: libxenlight failed to create new domain >> 'instance-00000005': MaxRetriesExceeded: Exceeded maximum >> number of retries. Exceeded max scheduling attempts 3 for >> instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. Last >> exception: internal error: libxenlight failed to create new >> domain 'instance-00000005' >> 2018-11-01 22:19:21.784 2567 WARNING nova.scheduler.utils >> [req-12aae2ff-4186-4ab0-964c-35b335c3188a >> cc22ec575cb44e53aced9ddf58d9e8d7 >> 965ff1c2002d4c278b5f838dbdbbb780 - default default] >> [instance: 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Setting >> instance to ERROR state.: MaxRetriesExceeded: Exceeded >> maximum number of retries. Exceeded max scheduling attempts 3 >> for instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. Last >> exception: internal error: libxenlight failed to create new >> domain 'instance-00000005' >> >> >>  I'm not sure which component is involved in this trouble. >> And libvirt and Xen have been successfully installed on all >> compute node I have without any problem. >> >> nickeys at node2:~$ virsh create ./test.xml >> Domain guest1 created from ./test.xml >> nickeys at node2:~$ virsh list >>  Id   Name       State >> -------------------------- >>  0    Domain-0   running >>  1    guest1     running >> nickeys at node2:~$ >> >> >> What should I check first for that issue ?  >> Your hint would be big help for me. >> >> Thanks! >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > Thanks for your kind answer, Bernd. > I checked the configure files which you mentioned like it can cause > the trouble. following files are my nova.conf and nova-compute.conf of > the compute node ('node2'). 
> > # /etc/nova/nova.conf  > > [DEFAULT] > dhcpbridge_flagfile=/etc/nova/nova.conf > dhcpbridge=/usr/bin/nova-dhcpbridge > state_path=/var/lib/nova > lock_path=/var/lock/nova > force_dhcp_release=True > libvirt_use_virtio_for_bridges=True > verbose=True > ec2_private_dns_show_ip=True > api_paste_config=/etc/nova/api-paste.ini > enabled_apis=osapi_compute,metadata > transport_url = rabbit://admin:1234 at 10.150.3.88 > > my_ip = 10.150.21.182 > use_neutron = True > firewall_driver = nova.virt.firewall.NoopFirewallDriver > [api] > auth_strategy = keystone > [vnc] > enabled = True > server_listen = 0.0.0.0 > vncserver_proxyclient_address = 10.150.21.182 > novncproxy_base_url = http://10.150.3.88:6080/vnc_auth.html > [glance] > api_servers = http://10.150.3.88:9292 > [oslo_concurrency] > lock_path = /var/lib/nova/tmp > [keystone_authtoken] > auth_url = http://10.150.3.88:5000/v3 > memcached_servers = 10.150.3.88:11211 > auth_type = password > project_domain_name = default > user_domain_name = default > project_name = service > username = nova > password = > [placement] > os_region_name = RegionOne > project_domain_name = Default > project_name = service > auth_type = password > user_domain_name = Default > auth_url = http://10.150.3.88:5000/v3 > username = placement > password = > [database] > connection=sqlite:////var/lib/nova/nova.sqlite > [api_database] > connection=sqlite:////var/lib/nova/nova.sqlite > [neutron] > url = http://10.150.3.88:9696 > auth_url = http://10.150.3.88:5000 > auth_type = password > project_domain_name = default > user_domain_name = default > region_name = RegionOne > project_name = service > username = neutron > passsword = > > > # /etc/nova/nova-compute.conf > [DEFAULT] > compute_driver=libvirt.LibvirtDriver > [libvirt] > virt_type=xen > > > And I set all the compute nodes I'm using with the same configure file > above. > Also, I installed the OpenStack components, which belong to Queens > version, following the official installation guide > (https://docs.openstack.org/install-guide/). > > By the way, I found something weird in > '/var/log/nova/nova-compute.log' of the 'node2'. > After I manually created a VM by 'virsh' command (i.g. $ virsh create > test.xml), nova-compute of 'node2' started to complain. > And on an other node which have not created a virtual machine by the > 'virsh' command, there is not below error. > I guess that the nova-compute component was configured to qemu as > virtualization type, but I have no idea where the setting value is. > > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager > [req-ec324126-6dfc-4011-9f45-41d014e6f900 - - - - -] Error > updating resources for node node2.: InvalidDiskInfo: Disk info > file is invalid: qemu-img failed to execute on > /home/caslab/xenguest1/domains/guest1/disk.img : Unexpected error > while running command. 
> Command: /usr/bin/python2 -m oslo_concurrency.prlimit > --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info > /home/caslab/xenguest1/domains/guest1/disk.img > Exit code: 1 > Stdout: u'' > Stderr: u"qemu-img: Could not open > '/home/caslab/xenguest1/domains/guest1/disk.img': Could not open > '/home/caslab/xenguest1/domains/guest1/disk.img': Permission denied\n" > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Traceback > (most recent call last): > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager   File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line > 7284, in update_available_resource_for_node > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager    >  rt.update_available_resource(context, nodename) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager   File > "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", > line 664, in update_available_resource > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager    >  resources = self.driver.get_available_resource(nodename) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager   File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", > line 6440, in get_available_resource > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager    >  disk_over_committed = self._get_disk_over_committed_size_total() > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager   File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", > line 8019, in _get_disk_over_committed_size_total > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager    >  config, block_device_info) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager   File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", > line 7918, in _get_instance_disk_info_from_config > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager    >  dk_size = disk_api.get_allocated_disk_size(path) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager   File > "/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line > 147, in get_allocated_disk_size > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager    >  return images.qemu_img_info(path).disk_size > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager   File > "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 87, > in qemu_img_info > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager     raise > exception.InvalidDiskInfo(reason=msg) > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager > InvalidDiskInfo: Disk info file is invalid: qemu-img failed to > execute on /home/caslab/xenguest1/domains/guest1/disk.img : > Unexpected error while running command. > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Command: > /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 > --cpu=30 -- env LC_ALL=C LANG=C qemu-img info > /home/caslab/xenguest1/domains/guest1/disk.img > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Exit code: 1 > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Stdout: u'' > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Stderr: > u"qemu-img: Could not open > '/home/caslab/xenguest1/domains/guest1/disk.img': Could not open > '/home/caslab/xenguest1/domains/guest1/disk.img': Permission denied\n" > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager > > > Thanks! > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From amy at demarco.com Fri Nov 2 15:14:24 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 2 Nov 2018 10:14:24 -0500 Subject: [Openstack] OpenStack Diversity and Inclusion Survey Message-ID: The Diversity and Inclusion WG is still looking for your assistance in reaching and including data from as many members of our community as possible. We revised the Diversity Survey that was originally distributed to the Community in the Fall of 2015 and reached out in August with our new survey. We are looking to update our view of the OpenStack community and it's diversity. We are pleased to be working with members of the CHAOSS project who have signed confidentiality agreements in order to assist us in the following ways: 1) Assistance in analyzing the results 2) And feeding the results into the CHAOSS software and metrics development work so that we can help other Open Source projects Please take the time to fill out the survey and share it with others in the community. The survey can be found at: https://www.surveymonkey.com/r/OpenStackDiversity Thank you for assisting us in this important task! Please feel free to reach out to me via email, in Berlin, or to myself or any WG member in #openstack-diversity! Amy Marrich (spotz) Diversity and Inclusion Working Group Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From torin.woltjer at granddial.com Fri Nov 2 19:27:30 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Fri, 02 Nov 2018 19:27:30 GMT Subject: [Openstack] DHCP not accessible on new compute node. Message-ID: I've completely wiped the node and reinstalled it, and the problem still persists. I can't ping instances on other compute nodes, or ping the DHCP ports. Instances don't get addresses or metadata when started on this node. From: Marcio Prado Sent: 11/1/18 9:51 AM To: torin.woltjer at granddial.com Cc: openstack at lists.openstack.org Subject: Re: [Openstack] DHCP not accessible on new compute node. I believe you have not forgotten anything. This should probably be bug ... As my cloud is not production, but rather masters research. I migrate the VM live to a node that is working, restart it, after that I migrate back to the original node that was not working and it keeps running ... Em 30-10-2018 17:50, Torin Woltjer escreveu: > Interestingly, I created a brand new selfservice network and DHCP > doesn't work on that either. I've followed the instructions in the > minimal setup (excluding the controllers as they're already set up) > but the new node has no access to the DHCP agent in neutron it seems. > Is there a likely component that I've overlooked? > > _TORIN WOLTJER_ > > GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY > > 616.776.1066 EXT. 2006 > _WWW.GRANDDIAL.COM [1]_ > > ------------------------- > > FROM: "Torin Woltjer" > SENT: 10/30/18 10:48 AM > TO: , "openstack at lists.openstack.org" > > SUBJECT: Re: [Openstack] DHCP not accessible on new compute node. > > I deleted both DHCP ports and they recreated as you said. However, > instances are still unable to get network addresses automatically. > > _TORIN WOLTJER_ > > GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY > > 616.776.1066 EXT. 2006 > _ [1] [1]WWW.GRANDDIAL.COM [1]_ > > ------------------------- > > FROM: Marcio Prado > SENT: 10/29/18 6:23 PM > TO: torin.woltjer at granddial.com > SUBJECT: Re: [Openstack] DHCP not accessible on new compute node. 
> The door is recreated automatically. The problem like I said is not in > DHCP, but for some reason, erasing and waiting for OpenStack to > re-create the port often solves the problem. > > Please, if you can find out the problem in fact, let me know. I'm very > interested to know. > > You can delete the door without fear. OpenStack will recreate in a > short > time. > > Links: > ------ > [1] http://www.granddial.com -- Marcio Prado Analista de TI - Infraestrutura e Redes Fone: (35) 9.9821-3561 www.marcioprado.eti.br -------------- next part -------------- An HTML attachment was scrubbed... URL: From nickeysgo at gmail.com Sat Nov 3 17:08:24 2018 From: nickeysgo at gmail.com (Minjun Hong) Date: Sun, 4 Nov 2018 02:08:24 +0900 Subject: [Openstack] Troubles on creating an Xen instance via libvirt In-Reply-To: <2cee2e7e-37ef-0b32-eea8-438213561fc8@gmail.com> References: <357884aa-d222-46e6-d84a-52f6fa58f427@gmail.com> <2cee2e7e-37ef-0b32-eea8-438213561fc8@gmail.com> Message-ID: On Fri, Nov 2, 2018 at 9:12 PM Bernd Bausch wrote: > This is the part that determines the hypervisor: > >> # /etc/nova/nova-compute.conf >> [DEFAULT] >> compute_driver=libvirt.LibvirtDriver >> [libvirt] >> virt_type=xen >> > > Nova uses libvirt to manage VMs, and libvirt uses Xen. According to the Nova/Xen > configuration guide > , > it is enough to set these variables. However, if you have not read that > config page, do it now. Are you using correct image formats? Do you find > anything in the other logfiles mentioned on that page? > > The references to *qemu-img *in the nova-compute log don't mean that qemu > is used. Nova uses *qemu-img* as a general tool for converting VM images, > no matter which hypervisor is configured. > > However, something prevents Nova to access one of the disk images, namely > /home/caslab/xenguest1/domains/guest1/disk.img. Is *guest1* the VM you > started manually? > > I see: > > Unexpected error while running command. > Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 > --cpu=30 -- env LC_ALL=C LANG=C * qemu-img info > /home/caslab/xenguest1/domains/guest1/disk.img* > > and > > Could not open '/home/caslab/xenguest1/domains/guest1/disk.img': > Permission denied > > I don't know, though, whether these errors are related to your problem. > Why is Libvirt or Xen unable to start your instances? I suggest you look > for other errors in the nova-compute.log files. > > Bernd > > On 11/2/2018 6:30 PM, Minjun Hong wrote: > > On Fri, Nov 2, 2018 at 9:05 AM Bernd Bausch wrote: > >> Welcome to OpenStack! I think the joy of troubleshooting it is one of its >> main selling points. >> >> The conductor log says "*libxenlight failed to create new domain*" on >> various nodes. You should check the nova-compute logs on those nodes for >> relevant messages. I suspect an ill-configured interface between Nova >> Compute and libvirt/Xen. In other words, double-check the libvirt and xen >> sections of nova.conf on those nodes. >> >> Then there is this message "*Invalid **input for field >> 'identity/password/user/password': None is not of type 'string' ....*". >> This is often caused by incompatible software. I.e. you installed Openstack >> components that don't fit together. Might also be caused by a configuration >> error. >> Or it's an ordinary bug, but it seems to occur on node 2 only. >> On 11/1/2018 11:01 PM, Minjun Hong wrote: >> >> Hi everyone. >> I'm really new to OpenStack. 
>> After I install essential components of OpenStack, such as Nova, >> Keystone, etc, I attempted creating an instance through Openstack command >> in the terminal. >> But a trouble has occurred. It was logged in "nova-conductor.log": >> >> 2018-11-01 22:18:59.831 2570 ERROR nova.scheduler.utils >>> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >>> 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: >>> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node2 (node >>> node2): [u'Traceback (most recent call last):\n', u' File >>> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in >>> _do_build_and_run_instance\n filter_properties, request_spec)\n', u' >>> File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, >>> in _build_and_run_instance\n instance_uuid=instance.uuid, >>> reason=six.text_type(e))\n', u"RescheduledException: Build of instance >>> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: Invalid input for >>> field 'identity/password/user/password': None is not of type >>> 'string'\n\nFailed validating 'type' in >>> schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']:\n >>> {'type': 'string'}\n\nOn >>> instance['identity']['password']['user']['password']:\n None (HTTP 400) >>> (Request-ID: req-350beee4-2fed-4645-9e21-79806d7ebfe7)\n"] >>> 2018-11-01 22:18:59.833 2570 WARNING oslo_config.cfg >>> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >>> 965ff1c2002d4c278b5f838dbdbbb780 - default default] Option "os_region_name" >>> from group "placement" is deprecated. Use option "region-name" from group >>> "placement". >>> 2018-11-01 22:19:10.936 2571 ERROR nova.scheduler.utils >>> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >>> 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: >>> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node4 (node >>> node4): [u'Traceback (most recent call last):\n', u' File >>> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in >>> _do_build_and_run_instance\n filter_properties, request_spec)\n', u' >>> File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, >>> in _build_and_run_instance\n instance_uuid=instance.uuid, >>> reason=six.text_type(e))\n', u"RescheduledException: Build of instance >>> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: internal error: >>> libxenlight failed to create new domain 'instance-00000005'\n"] >>> 2018-11-01 22:19:21.783 2567 ERROR nova.scheduler.utils >>> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >>> 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: >>> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Error from last host: node5 (node >>> node5): [u'Traceback (most recent call last):\n', u' File >>> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in >>> _do_build_and_run_instance\n filter_properties, request_spec)\n', u' >>> File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, >>> in _build_and_run_instance\n instance_uuid=instance.uuid, >>> reason=six.text_type(e))\n', u"RescheduledException: Build of instance >>> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee was re-scheduled: internal error: >>> libxenlight failed to create new domain 'instance-00000005'\n"] >>> 2018-11-01 22:19:21.783 2567 WARNING nova.scheduler.utils >>> [req-12aae2ff-4186-4ab0-964c-35b335c3188a 
cc22ec575cb44e53aced9ddf58d9e8d7 >>> 965ff1c2002d4c278b5f838dbdbbb780 - default default] Failed to >>> compute_task_build_instances: Exceeded maximum number of retries. Exceeded >>> max scheduling attempts 3 for instance >>> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. Last exception: internal error: >>> libxenlight failed to create new domain 'instance-00000005': >>> MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max >>> scheduling attempts 3 for instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. >>> Last exception: internal error: libxenlight failed to create new domain >>> 'instance-00000005' >>> 2018-11-01 22:19:21.784 2567 WARNING nova.scheduler.utils >>> [req-12aae2ff-4186-4ab0-964c-35b335c3188a cc22ec575cb44e53aced9ddf58d9e8d7 >>> 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: >>> 684b0a7d-22b9-4c87-88f8-b1474d3f9cee] Setting instance to ERROR state.: >>> MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max >>> scheduling attempts 3 for instance 684b0a7d-22b9-4c87-88f8-b1474d3f9cee. >>> Last exception: internal error: libxenlight failed to create new domain >>> 'instance-00000005' >> >> >> I'm not sure which component is involved in this trouble. >> And libvirt and Xen have been successfully installed on all compute node >> I have without any problem. >> >> nickeys at node2:~$ virsh create ./test.xml >>> Domain guest1 created from ./test.xml >>> nickeys at node2:~$ virsh list >>> Id Name State >>> -------------------------- >>> 0 Domain-0 running >>> 1 guest1 running >>> nickeys at node2:~$ >> >> >> What should I check first for that issue ? >> Your hint would be big help for me. >> >> Thanks! >> >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> >> > Thanks for your kind answer, Bernd. > I checked the configure files which you mentioned like it can cause the > trouble. following files are my nova.conf and nova-compute.conf of the > compute node ('node2'). 
> > # /etc/nova/nova.conf > > [DEFAULT] >> dhcpbridge_flagfile=/etc/nova/nova.conf >> dhcpbridge=/usr/bin/nova-dhcpbridge >> state_path=/var/lib/nova >> lock_path=/var/lock/nova >> force_dhcp_release=True >> libvirt_use_virtio_for_bridges=True >> verbose=True >> ec2_private_dns_show_ip=True >> api_paste_config=/etc/nova/api-paste.ini >> enabled_apis=osapi_compute,metadata >> transport_url = rabbit://admin:1234 at 10.150.3.88 >> my_ip = 10.150.21.182 >> use_neutron = True >> firewall_driver = nova.virt.firewall.NoopFirewallDriver >> [api] >> auth_strategy = keystone >> [vnc] >> enabled = True >> server_listen = 0.0.0.0 >> vncserver_proxyclient_address = 10.150.21.182 >> novncproxy_base_url = http://10.150.3.88:6080/vnc_auth.html >> [glance] >> api_servers = http://10.150.3.88:9292 >> [oslo_concurrency] >> lock_path = /var/lib/nova/tmp >> [keystone_authtoken] >> auth_url = http://10.150.3.88:5000/v3 >> memcached_servers = 10.150.3.88:11211 >> auth_type = password >> project_domain_name = default >> user_domain_name = default >> project_name = service >> username = nova >> password = >> [placement] >> os_region_name = RegionOne >> project_domain_name = Default >> project_name = service >> auth_type = password >> user_domain_name = Default >> auth_url = http://10.150.3.88:5000/v3 >> username = placement >> password = >> [database] >> connection=sqlite:////var/lib/nova/nova.sqlite >> [api_database] >> connection=sqlite:////var/lib/nova/nova.sqlite >> [neutron] >> url = http://10.150.3.88:9696 >> auth_url = http://10.150.3.88:5000 >> auth_type = password >> project_domain_name = default >> user_domain_name = default >> region_name = RegionOne >> project_name = service >> username = neutron >> passsword = > > > # /etc/nova/nova-compute.conf >> [DEFAULT] >> compute_driver=libvirt.LibvirtDriver >> [libvirt] >> virt_type=xen > > > And I set all the compute nodes I'm using with the same configure file > above. > Also, I installed the OpenStack components, which belong to Queens > version, following the official installation guide ( > https://docs.openstack.org/install-guide/). > > By the way, I found something weird in '/var/log/nova/nova-compute.log' of > the 'node2'. > After I manually created a VM by 'virsh' command (i.g. $ virsh create > test.xml), nova-compute of 'node2' started to complain. > And on an other node which have not created a virtual machine by the > 'virsh' command, there is not below error. > I guess that the nova-compute component was configured to qemu as > virtualization type, but I have no idea where the setting value is. > > 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager >> [req-ec324126-6dfc-4011-9f45-41d014e6f900 - - - - -] Error updating >> resources for node node2.: InvalidDiskInfo: Disk info file is invalid: >> qemu-img failed to execute on >> /home/caslab/xenguest1/domains/guest1/disk.img : Unexpected error while >> running command. 
>> Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 >> --cpu=30 -- env LC_ALL=C LANG=C qemu-img info >> /home/caslab/xenguest1/domains/guest1/disk.img >> Exit code: 1 >> Stdout: u'' >> Stderr: u"qemu-img: Could not open >> '/home/caslab/xenguest1/domains/guest1/disk.img': Could not open >> '/home/caslab/xenguest1/domains/guest1/disk.img': Permission denied\n" >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Traceback (most >> recent call last): >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File >> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 7284, in >> update_available_resource_for_node >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager >> rt.update_available_resource(context, nodename) >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File >> "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line >> 664, in update_available_resource >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager resources = >> self.driver.get_available_resource(nodename) >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File >> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6440, >> in get_available_resource >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager >> disk_over_committed = self._get_disk_over_committed_size_total() >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File >> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 8019, >> in _get_disk_over_committed_size_total >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager config, >> block_device_info) >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File >> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 7918, >> in _get_instance_disk_info_from_config >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager dk_size = >> disk_api.get_allocated_disk_size(path) >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File >> "/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 147, in >> get_allocated_disk_size >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager return >> images.qemu_img_info(path).disk_size >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager File >> "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 87, in >> qemu_img_info >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager raise >> exception.InvalidDiskInfo(reason=msg) >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager InvalidDiskInfo: >> Disk info file is invalid: qemu-img failed to execute on >> /home/caslab/xenguest1/domains/guest1/disk.img : Unexpected error while >> running command. >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Command: >> /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- >> env LC_ALL=C LANG=C qemu-img info >> /home/caslab/xenguest1/domains/guest1/disk.img >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Exit code: 1 >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Stdout: u'' >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager Stderr: >> u"qemu-img: Could not open >> '/home/caslab/xenguest1/domains/guest1/disk.img': Could not open >> '/home/caslab/xenguest1/domains/guest1/disk.img': Permission denied\n" >> 2018-11-02 18:19:13.048 16305 ERROR nova.compute.manager > > > Thanks! > > > I found a crucial clue. When I tried to create an instance, every time, I got the error messages repeated continuously. 
It is from the log file of node2, /var/log/nova/nova-compute.log: 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager [req-f93b17df-6718-457a-9ce0-916b4e051619 cc22ec575cb44e53aced9ddf58d9e8d7 965ff1c2002d4c278b5f838dbdbbb780 - default default] Instance failed network setup after 1 attempt(s): BadRequest: Invalid input for field 'identity/password/user/password': None is not of type 'string' Failed validating 'type' in schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']: {'type': 'string'} On instance['identity']['password']['user']['password']: None (HTTP 400) (Request-ID: req-c9fac551-ebb4-4b2c-af67-5bad51d403fc) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager Traceback (most recent call last): 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1399, in _allocate_network_async 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager bind_host_id=bind_host_id) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 954, in allocate_for_instance 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager bind_host_id, available_macs, requested_ports_dict) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1087, in _update_ports_for_instance 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager vif.destroy() 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager self.force_reraise() 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager six.reraise(self.type_, self.value, self.tb) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1057, in _update_ports_for_instance 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager port_client, instance, port_id, port_req_body) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 468, in _update_port 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager port_response = port_client.update_port(port_id, port_req_body) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager ret = obj(*args, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 799, in update_port 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager revision_number=revision_number) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager ret = obj(*args, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 2375, in _update_resource 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager return self.put(path, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR 
nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager ret = obj(*args, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 363, in put 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager headers=headers, params=params) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager ret = obj(*args, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 331, in retry_request 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager headers=headers, params=params) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager ret = obj(*args, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 282, in do_request 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager headers=headers) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 343, in do_request 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager return self.request(url, method, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 331, in request 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager resp = super(SessionClient, self).request(*args, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 189, in request 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager return self.session.request(url, method, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 573, in request 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager auth_headers = self.get_auth_headers(auth) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 900, in get_auth_headers 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager return auth.get_headers(self, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/plugin.py", line 95, in get_headers 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager token = self.get_token(session) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 88, in get_token 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager return self.get_access(session).auth_token 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 134, in get_access 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager self.auth_ref = self.get_auth_ref(session) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 201, in get_auth_ref 2018-11-03 21:44:24.231 16305 ERROR 
nova.compute.manager return self._plugin.get_auth_ref(session, **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/v3/base.py", line 177, in get_auth_ref 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager authenticated=False, log=False, **rkwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager return self.request(url, 'POST', **kwargs) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 737, in request 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager raise exceptions.from_response(resp, method, url) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager BadRequest: Invalid input for field 'identity/password/user/password': None is not of type 'string' 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager Failed validating 'type' in schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']: 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager {'type': 'string'} 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager On instance['identity']['password']['user']['password']: 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager None (HTTP 400) (Request-ID: req-c9fac551-ebb4-4b2c-af67-5bad51d403fc) 2018-11-03 21:44:24.231 16305 ERROR nova.compute.manager 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [req-f93b17df-6718-457a-9ce0-916b4e051619 cc22ec575cb44e53aced9ddf58d9e8d7 965ff1c2002d4c278b5f838dbdbbb780 - default default] [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] Instance failed to spawn: BadRequest: Invalid input for field 'identity/password/user/password': None is not of type 'string' Failed validating 'type' in schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']: {'type': 'string'} On instance['identity']['password']['user']['password']: None (HTTP 400) (Request-ID: req-c9fac551-ebb4-4b2c-af67-5bad51d403fc) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] Traceback (most recent call last): 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2251, in _build_resources 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] yield resources 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2031, in _build_and_run_instance 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] block_device_info=block_device_info) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3084, in spawn 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] mdevs=mdevs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5368, in _get_guest_xml 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] network_info_str = str(network_info) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 568, in __str__ 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return self._sync_wrapper(fn, *args, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 551, in _sync_wrapper 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] self.wait() 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 583, in wait 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] self[:] = self._gt.wait() 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return self._exit_event.wait() 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 125, in wait 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] current.throw(*self._exc) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] result = function(*args, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 906, in context_wrapper 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return func(*args, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1416, in _allocate_network_async 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] six.reraise(*exc_info) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1399, in _allocate_network_async 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] bind_host_id=bind_host_id) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 954, in allocate_for_instance 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] bind_host_id, available_macs, 
requested_ports_dict) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1087, in _update_ports_for_instance 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] vif.destroy() 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] self.force_reraise() 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] six.reraise(self.type_, self.value, self.tb) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1057, in _update_ports_for_instance 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] port_client, instance, port_id, port_req_body) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 468, in _update_port 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] port_response = port_client.update_port(port_id, port_req_body) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] ret = obj(*args, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 799, in update_port 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] revision_number=revision_number) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] ret = obj(*args, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 2375, in _update_resource 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return self.put(path, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] ret = obj(*args, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 363, in put 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] 
headers=headers, params=params) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 114, in wrapper 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] ret = obj(*args, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 282, in do_request 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] headers=headers) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 343, in do_request 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return self.request(url, method, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 331, in request 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] resp = super(SessionClient, self).request(*args, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 189, in request 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return self.session.request(url, method, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 573, in request 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] auth_headers = self.get_auth_headers(auth) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 900, in get_auth_headers 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return auth.get_headers(self, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/plugin.py", line 95, in get_headers 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] token = self.get_token(session) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 88, in get_token 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return self.get_access(session).auth_token 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/base.py", line 134, in get_access 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] self.auth_ref = self.get_auth_ref(session) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 
2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/generic/base.py", line 201, in get_auth_ref 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return self._plugin.get_auth_ref(session, **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/identity/v3/base.py", line 177, in get_auth_ref 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] authenticated=False, log=False, **rkwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 848, in post 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] return self.request(url, 'POST', **kwargs) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 737, in request 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] raise exceptions.from_response(resp, method, url) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] BadRequest: Invalid input for field 'identity/password/user/password': None is not of type 'string' 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] Failed validating 'type' in schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']: 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] {'type': 'string'} 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] On instance['identity']['password']['user']['password']: 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] None (HTTP 400) (Request-ID: req-c9fac551-ebb4-4b2c-af67-5bad51d403fc) 2018-11-03 21:44:24.983 16305 ERROR nova.compute.manager [instance: 2ef5e55c-4868-4ff1-bbc9-3d82c4f99762] Above logs all occurred with only one instance creation. Although I already searched the internet, I found nothing meaningful. Just, I'm guessing that when I requested an instance creation on the controller node, it delivered the request with an empty parameter, which was deemed to None type. I mean, the empty one is password. However, I'm confused because I have set all the config files which the official documentation guided to add. As I think, the main issue is the reason the instance creation request was sent to 'node2' without password parameter. Have you ever seen the error message I had? -------------- next part -------------- An HTML attachment was scrubbed... 
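One detail worth ruling out, given this trace: in the nova.conf posted earlier for node2, the [neutron] section spells the option "passsword", so from Nova's point of view the neutron password is simply unset. Nova then hands None to Keystone when it authenticates on Neutron's behalf, which matches the "None is not of type 'string'" rejection shown here. A minimal check would be to fix the spelling and confirm that the neutron service user can still obtain a token (the password below is a placeholder, and the service name may differ by distribution):

  # /etc/nova/nova.conf on node2
  [neutron]
  ...
  username = neutron
  password = NEUTRON_PASS          # the key was spelled "passsword"

  # verify the service credentials against Keystone:
  openstack --os-auth-url http://10.150.3.88:5000/v3 \
            --os-project-domain-name default --os-user-domain-name default \
            --os-project-name service --os-username neutron \
            --os-password NEUTRON_PASS token issue

  # restart the compute agent so it re-reads the file:
  systemctl restart nova-compute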
URL:

From soheil.ir08 at gmail.com Sun Nov 4 06:22:42 2018
From: soheil.ir08 at gmail.com (Soheil Pourbafrani)
Date: Sun, 4 Nov 2018 09:52:42 +0330
Subject: [Openstack] [PackStack][Neutron] How to configure for external network
Message-ID:

Hi,

I installed PackStack on two nodes, one as Controller and Network, the other
as the Compute Node. After installation, I created a new br-ex interface and
did the OVS settings. I also defined two networks, internal and external
(flat), plus a router and interfaces.
The internal network's IP range does not need to be a valid public range; for
the test I set it to 10.10.0.0/24. I can launch instances on the internal
network and the VMs can reach each other over it.
The external IP range is the same as our provider's range (the range our
devices get when connecting to the internet). But when I launch instances on
the external network, they can't connect to the internet or ping other
devices on that network (proper security groups are set). My question is: is
there any tutorial for configuring the network on two nodes so VMs can
connect to the internet? What kind of network does it need: flat, vlan or
vxlan?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skaplons at redhat.com Sun Nov 4 09:22:07 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Sun, 4 Nov 2018 10:22:07 +0100
Subject: [Openstack] [PackStack][Neutron] How to configure for external network
In-Reply-To:
References:
Message-ID: <872FE884-E36B-4952-839D-121AF3729D9B@redhat.com>

Hi,

For the external network you should use the flat or vlan type; which one
depends on your provider's configuration. If your provider expects a VLAN,
you need a vlan network.
Vxlan networks can only be used for tenant networks.

> Wiadomość napisana przez Soheil Pourbafrani w dniu 04.11.2018, o godz. 07:22:
>
> Hi,
>
> I installed PackStack on two nodes, one as Controller and Network, the other as the Compute Node. After installation, I created a new br-ex interface and did the OVS settings. I also defined two networks, internal and external (flat), plus a router and interfaces.
> The internal network's IP range does not need to be a valid public range; for the test I set it to 10.10.0.0/24. I can launch instances on the internal network and the VMs can reach each other over it.
> The external IP range is the same as our provider's range (the range our devices get when connecting to the internet). But when I launch instances on the external network, they can't connect to the internet or ping other devices on that network (proper security groups are set). My question is: is there any tutorial for configuring the network on two nodes so VMs can connect to the internet? What kind of network does it need: flat, vlan or vxlan?
>
> Thanks
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

—
Slawek Kaplonski
Senior software engineer
Red Hat

From skaplons at redhat.com Sun Nov 4 10:29:25 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Sun, 4 Nov 2018 11:29:25 +0100
Subject: [Openstack] [PackStack][Neutron] How to configure for external network
In-Reply-To:
References: <872FE884-E36B-4952-839D-121AF3729D9B@redhat.com>
Message-ID: <6A087450-B344-449A-81FF-028E8AB28BF3@redhat.com>

Hi,

> Wiadomość napisana przez Soheil Pourbafrani w dniu 04.11.2018, o godz.
11:10: > > Thanks, > > Actually, I created a vxlan network and define the internal network and external network in range of my network provider, 192.168.0.1. I create a router and interface and lunch an instance with an internal network that has floating IP, too. As I said before, Your external network has to be vlan or flat network. Vxlan network can be used only as tenant network. Packets from such network can’t go „outside” to Your provider’s network and it’s not matter of IP addresses used in network. > > From the controller node I can ping the instance using floating IP: "ip netns exec qdhcp-9691cdb8-56f3-44b6-b501-4678eceb3866 ping 192.168.0.4", but the instance is not accessible in the external network and the command ping 192.168.0.4 failed! > > What is the problem? > > > On Sun, Nov 4, 2018 at 12:52 PM Slawomir Kaplonski wrote: > Hi, > > For external network You should use flat or vlan type - which one of them, depends on Your provider’s configuration - if You should use some VLAN, You need vlan network. > Vxlan networks can be only used for tenant networks. > > > Wiadomość napisana przez Soheil Pourbafrani w dniu 04.11.2018, o godz. 07:22: > > > > Hi, > > > > I installed PackStack on two nodes, one as Controller and Network, the other as the Compute Node. After installation, I created a new br-ex interface and I did OVS settings. I also define two internal and external network (flat) and router and interfaces. > > The internal network range IP is not a valid range and for the test, I set it in 10.10.0.0/24. I can lunch instances using internal network and VMs can connect to each other using that. > > The external range IP is the same as our Provider range IP (the range that our devices get when connecting to the internet). But lunching instances using the external network it can't connect to the internet or ping other devices in the network (proper security groups are set). IMy question was, is there any tutorial for configuring the network in two nodes so VMs can connect to the internet? What kind of Network does it need? flat, vlan or vxlan? > > > > Thanks > > _______________________________________________ > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From soheil.ir08 at gmail.com Sun Nov 4 11:19:42 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Sun, 4 Nov 2018 14:49:42 +0330 Subject: [Openstack] [PackStack][Neutron] How to configure for external network In-Reply-To: <6A087450-B344-449A-81FF-028E8AB28BF3@redhat.com> References: <872FE884-E36B-4952-839D-121AF3729D9B@redhat.com> <6A087450-B344-449A-81FF-028E8AB28BF3@redhat.com> Message-ID: Thanks a lot. On Sun, Nov 4, 2018 at 1:59 PM Slawomir Kaplonski wrote: > Hi, > > > Wiadomość napisana przez Soheil Pourbafrani w > dniu 04.11.2018, o godz. 11:10: > > > > Thanks, > > > > Actually, I created a vxlan network and define the internal network and > external network in range of my network provider, 192.168.0.1. I create a > router and interface and lunch an instance with an internal network that > has floating IP, too. > > As I said before, Your external network has to be vlan or flat network. > Vxlan network can be used only as tenant network. 
Packets from such network > can’t go „outside” to Your provider’s network and it’s not matter of IP > addresses used in network. > > > > > From the controller node I can ping the instance using floating IP: "ip > netns exec qdhcp-9691cdb8-56f3-44b6-b501-4678eceb3866 ping 192.168.0.4", > but the instance is not accessible in the external network and the command > ping 192.168.0.4 failed! > > > > What is the problem? > > > > > > On Sun, Nov 4, 2018 at 12:52 PM Slawomir Kaplonski > wrote: > > Hi, > > > > For external network You should use flat or vlan type - which one of > them, depends on Your provider’s configuration - if You should use some > VLAN, You need vlan network. > > Vxlan networks can be only used for tenant networks. > > > > > Wiadomość napisana przez Soheil Pourbafrani w > dniu 04.11.2018, o godz. 07:22: > > > > > > Hi, > > > > > > I installed PackStack on two nodes, one as Controller and Network, the > other as the Compute Node. After installation, I created a new br-ex > interface and I did OVS settings. I also define two internal and external > network (flat) and router and interfaces. > > > The internal network range IP is not a valid range and for the test, I > set it in 10.10.0.0/24. I can lunch instances using internal network and > VMs can connect to each other using that. > > > The external range IP is the same as our Provider range IP (the range > that our devices get when connecting to the internet). But lunching > instances using the external network it can't connect to the internet or > ping other devices in the network (proper security groups are set). IMy > question was, is there any tutorial for configuring the network in two > nodes so VMs can connect to the internet? What kind of Network does it > need? flat, vlan or vxlan? > > > > > > Thanks > > > _______________________________________________ > > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > Post to : openstack at lists.openstack.org > > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Sun Nov 4 11:25:57 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Sun, 4 Nov 2018 14:55:57 +0330 Subject: [Openstack] [PackStack][Neutron] openvswitch error - Port not present in bridge br-int Message-ID: Hi, I have 2 node packsack. One as controller and network, the other as compute. I created a flat external network to connect external provider network. 
When I lunch instance using that I got the error: openvswitch error - Port not present in bridge br-int The output of the ovs-vsctl show in compute node is: 00a36389-cb9e-45ce-a5be-d2da04c295c9 Manager "ptcp:6640:127.0.0.1" is_connected: true Bridge br-ex Port "enp2s0" Interface "enp2s0" Port br-ex Interface br-ex type: internal Bridge br-int Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port br-int Interface br-int type: internal Bridge br-tun Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port br-tun Interface br-tun type: internal ovs_version: "2.9.0" and the output in the controller node is: 36420024-d9b9-4fd9-a64a-5152f8012af5 Manager "ptcp:6640:127.0.0.1" is_connected: true Bridge br-tun Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Bridge br-ex Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port "ens33" Interface "ens33" Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal Bridge br-int Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port "qg-0e5c98e8-02" tag: 4095 Interface "qg-0e5c98e8-02" type: internal Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port "qr-71e0313c-18" tag: 5 Interface "qr-71e0313c-18" type: internal ovs_version: "2.9.0" Which part is faulty and how can I fix that? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.sb at garvan.org.au Sun Nov 4 23:10:35 2018 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Sun, 4 Nov 2018 23:10:35 +0000 Subject: [Openstack] pci alias device_type and numa_policy and device_type meanings Message-ID: <9D8A2486E35F0941A60430473E29F15B017BADC930@MXDB2.ad.garvan.unsw.edu.au> Dear Openstack community. I am setting up pci passthrough for GPUs using aliases. I was wondering the meaning of the fields device_type and numa_policy and how should I use them as I could not find much details in the official documentation. https://docs.openstack.org/nova/rocky/admin/pci-passthrough.html#configure-nova-api-controller https://docs.openstack.org/nova/rocky/configuration/config.html#pci thank you very much Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... 
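For what it's worth, the two fields can be read roughly as follows: device_type selects which kind of PCI device the alias matches (type-PCI for an ordinary passthrough device such as a whole GPU, type-PF for an SR-IOV physical function, type-VF for a virtual function), and numa_policy controls how strictly the device must sit on the same NUMA node as the instance (required, legacy or preferred). A sketch of a GPU alias is below; the vendor/product IDs are examples only and must be replaced with what lspci -nn reports for the Tesla cards, and the alias has to be present on both the controller (nova-api) and the compute node, with the whitelist on the compute node:

  [pci]
  alias = { "vendor_id": "10de", "product_id": "1db4", "device_type": "type-PCI", "name": "gpu-v100", "numa_policy": "preferred" }
  passthrough_whitelist = { "vendor_id": "10de", "product_id": "1db4" }

A flavor then requests the device by alias name and count:

  openstack flavor set gpu.large --property "pci_passthrough:alias"="gpu-v100:1"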
URL: From berndbausch at gmail.com Mon Nov 5 04:50:59 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 5 Nov 2018 13:50:59 +0900 Subject: [Openstack] [tripleo] can't deploy Newton undercloud Message-ID: I am trying to set up Tripleo for Newton (why Newton you ask? The RHCSA/RHOSP exam is based on Newton). Unexpectedly, I run into a brick wall when deploying the undercloud. *Any suggestions how to troubleshoot this?* The last messages are: puppet apply exited with exit code 6 + '[' 6 '!=' 2 -a 6 '!=' 0 ']' + exit 6 [2018-11-05 00:47:13,779] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] [2018-11-05 00:47:13,779] (os-refresh-config) [ERROR] Aborting... Traceback (most recent call last):   File "", line 1, in   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1258, in install     _run_orc(instack_env)   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1022, in _run_orc     _run_live_command(args, instack_env, 'os-refresh-config')   File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 522, in _run_live_command     raise RuntimeError('%s failed. See log for details.' % name) RuntimeError: os-refresh-config failed. See log for details. Neither /var/log nor /opt have any trace of os-refresh-config. *Where should I look for log? *And***where do I find more information about the **/puppet apply /**error? /var/log/puppet is empty.* ~/.instack/install-undercloud.log contains one other message labeled with "error". Not sure if it's relevant: Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack token issue --format value' returned 1: Certificate did not match expected hostname: 192.168.126.2. Certificate: {'subjectAltName': (('DNS', '192.168.126.2'),), 'notBefore': u'Nov  4 15:04:13 2018 GMT', 'serialNumber': u'03553ED8BBF040FEAF44FE8C95A45CB2', 'notAfter': 'Nov  4 14:58:20 2019 GMT', 'version': 3L, 'subject': ((('commonName', u'192.168.126.2'),),), 'issuer': ((('commonName', u'Local Signing Authority'),), (('commonName', u'03553ed8-bbf040fe-af44fe8c-95a45cb1'),))} SSL exception connecting to https://192.168.126.2:13000/v3/auth/tokens: *hostname '192.168.126.2' doesn't match '192.168.126.2'* (tried 21, for a total of 170 seconds)^[[0m So it seems that hostname 192.168.126.2 doesn't match 192.168.126.2. *How can I convince SSL that they do actually match?* To get to this point, I followed the steps in the Tripleo install guide . The undercloud.conf is almost verbatim from Keith Tenzer's blog : [DEFAULT] # local interface is connected to the provisioning network # which is used to PXE-install the overcloud nodes local_interface = eth0 local_ip = 192.168.126.1/24 undercloud_public_vip = 192.168.126.2 undercloud_admin_vip = 192.168.126.3 masquerade_network = 192.168.126.0/24 dhcp_start = 192.168.126.100 dhcp_end = 192.168.126.150 network_cidr = 192.168.126.0/24 network_gateway = 192.168.126.1 inspection_iprange = 192.168.126.30,192.168.126.99 generate_service_certificate = true certificate_generation_ca = local -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
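A side note on the "hostname '192.168.126.2' doesn't match '192.168.126.2'" message, which looks nonsensical at first: the generated certificate (see the log excerpt above) carries the IP address as a DNS-type subjectAltName, and stricter TLS verification only matches an IP endpoint against an iPAddress-type SAN, so the check can fail even though the strings are identical. This is only a suspicion, but it is easy to see what the undercloud certificate actually presents with plain openssl:

  echo | openssl s_client -connect 192.168.126.2:13000 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'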
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From berndbausch at gmail.com Mon Nov 5 10:11:31 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 5 Nov 2018 19:11:31 +0900 Subject: [Openstack] [tripleo] can't deploy Newton undercloud In-Reply-To: <48C50B40-696C-4DA8-8D16-F89862FF775A@italy1.com> References: <48C50B40-696C-4DA8-8D16-F89862FF775A@italy1.com> Message-ID: Thanks for the hint, Remo. Unfortunately the reboot didn't help. Do you know how to switch off SSL? I only see this in undercloud.conf: # Certificate file to use for OpenStack service SSL connections. # Setting this enables SSL for the OpenStack API endpoints, leaving it # unset disables SSL. (string value) #undercloud_service_certificate = The problem: It is already unset. Is there a way to unset it even more? (kidding) The Tripleo web site has a section how to enable SSL, but AFAIK nothing now to not enable it. I added the openstack list back to the CC. Bernd On 11/5/2018 4:50 PM, Remo Mattei wrote: > Try to turn ssl off there is a bug that may be the issue here but if > you reboot the under cloud and retry to re-run the install it May > finishes. So before turning off ssl try reboot rerun install and see > what happen.  -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jaosorior at gmail.com Mon Nov 5 10:49:19 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Mon, 5 Nov 2018 12:49:19 +0200 Subject: [Openstack] [tripleo] can't deploy Newton undercloud In-Reply-To: References: <48C50B40-696C-4DA8-8D16-F89862FF775A@italy1.com> Message-ID: What about the option "generate_service_certificate", that is 'true' by default, and you would need to set that as false. On Mon, Nov 5, 2018 at 12:16 PM Bernd Bausch wrote: > Thanks for the hint, Remo. > > Unfortunately the reboot didn't help. Do you know how to switch off SSL? I > only see this in undercloud.conf: > > # Certificate file to use for OpenStack service SSL connections. > # Setting this enables SSL for the OpenStack API endpoints, leaving it > # unset disables SSL. (string value) > #undercloud_service_certificate = > > The problem: It is already unset. Is there a way to unset it even more? > (kidding) > > The Tripleo web site has a section how to enable SSL, but AFAIK nothing > now to not enable it. > I added the openstack list back to the CC. > > Bernd > On 11/5/2018 4:50 PM, Remo Mattei wrote: > > Try to turn ssl off there is a bug that may be the issue here but if you > reboot the under cloud and retry to re-run the install it May finishes. So > before turning off ssl try reboot rerun install and see what happen. > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -- Juan Antonio Osorio R. e-mail: jaosorior at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
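Concretely, that would mean retrying with certificate generation switched off in the undercloud.conf quoted earlier (only the relevant lines shown) and re-running the installer:

  [DEFAULT]
  generate_service_certificate = false
  #certificate_generation_ca = local
  #undercloud_service_certificate =

  openstack undercloud install

With both options disabled the undercloud endpoints stay on plain HTTP, which sidesteps the hostname-verification problem entirely.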
URL: From berndbausch at gmail.com Mon Nov 5 12:03:39 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 5 Nov 2018 21:03:39 +0900 Subject: [Openstack] [tripleo] can't deploy Newton undercloud In-Reply-To: References: <48C50B40-696C-4DA8-8D16-F89862FF775A@italy1.com> Message-ID: <70f31437-121e-aeda-014c-e47ada70aa45@gmail.com> Yep, after commenting anything that seems to be remotely linked with "certificate" or "SSL", undercloud installation comletes. Many thanks, gentlemen! If anybody has an idea how to successfully deploy a Newton undercloud /with /SSL, it would be great to hear. Bernd. On 11/5/2018 7:49 PM, Juan Antonio Osorio wrote: > What about the option "generate_service_certificate", that is 'true' > by default, and you would need to set that as false. > > > On Mon, Nov 5, 2018 at 12:16 PM Bernd Bausch > wrote: > > Thanks for the hint, Remo. > > Unfortunately the reboot didn't help. Do you know how to switch > off SSL? I only see this in undercloud.conf: > > # Certificate file to use for OpenStack service SSL connections. > # Setting this enables SSL for the OpenStack API endpoints, > leaving it > # unset disables SSL. (string value) > #undercloud_service_certificate = > > The problem: It is already unset. Is there a way to unset it even > more? (kidding) > > The Tripleo web site has a section how to enable SSL, but AFAIK > nothing now to not enable it. > > I added the openstack list back to the CC. > > Bernd > > On 11/5/2018 4:50 PM, Remo Mattei wrote: >> Try to turn ssl off there is a bug that may be the issue here but >> if you reboot the under cloud and retry to re-run the install it >> May finishes. So before turning off ssl try reboot rerun install >> and see what happen.  > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to     : openstack at lists.openstack.org > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > -- > Juan Antonio Osorio R. > e-mail: jaosorior at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From ttml at fastmail.com Mon Nov 5 12:15:38 2018 From: ttml at fastmail.com (Tushar Tyagi) Date: Mon, 05 Nov 2018 17:45:38 +0530 Subject: [Openstack] [Cinder] Debugging Cinder Volume Code Message-ID: <1541420138.972538.1565962000.444C3ACB@webmail.messagingengine.com> Hello, I have some requirement wherein I need to change the way Cinder creates volumes. Along with code changes I want to debug and understand the code flow. So far I've been able to debug the `cinderclient` package using the Python debugger `pdb` by following these steps: - Add pdb.set_trace() statements in the python files - Run the "cinder create" CLI command - Step through the code. My understanding is that the requests end up going to the Cinder api exposed the "$HOST/volumes" http endpoint. For my debugging purposes, this is not deep enough (I want to understand the interaction between `cinder` and some of the NVMe volumes that I have). For debugging the core `cinder` package, I've tried using the same `pdb` approach but I've not been able to make it work. Mostly because I've not been able to find the entry point where I can add the trace statements and start the service. 
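One option at that level (sketched for a distro-packaged install, so unit names and paths may differ) is to stop the packaged service, put a breakpoint into the volume manager (volume creation goes through cinder.volume.manager.VolumeManager.create_volume), and run the binary in the foreground so pdb gets a usable terminal:

  systemctl stop openstack-cinder-volume     # or cinder-volume, depending on distro

  # add "import pdb; pdb.set_trace()" inside create_volume() in
  # .../site-packages/cinder/volume/manager.py, then run the service interactively:
  sudo -u cinder /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf

  # from a second shell, trigger the code path:
  cinder create --name pdb-test 1

Because the service is monkey-patched by eventlet, the interactive prompt can still misbehave; if it does, a remote-pdb style debugger listening on a TCP port is the usual fallback.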
Another approach that I've tried is to attach debugger on the running python process and add a file+function breakpoint. Again, I've not been able to make this work. I believe gdb is mostly looking for C files and even if I give the complete path of the file and function, it doesn't break. Moreover, there are 3 instances of `cinder-volume` running on my machine and I've tried attaching gdb to all three of them but nothing happens. So my question is how do I debug the core Cinder process? Is there any documentation I can refer to? Or some other practice that is involved in doing so? Thanks -- Tushar Tyagi ttml at fastmail.com From Remo at Italy1.com Mon Nov 5 12:54:10 2018 From: Remo at Italy1.com (Remo Mattei) Date: Mon, 5 Nov 2018 13:54:10 +0100 Subject: [Openstack] [tripleo] can't deploy Newton undercloud In-Reply-To: References: <48C50B40-696C-4DA8-8D16-F89862FF775A@italy1.com> Message-ID: Yes as soon as I get to my computer I will share the info. Inviato da iPhone > Il giorno 5 nov 2018, alle ore 11:11, Bernd Bausch ha scritto: > > Thanks for the hint, Remo. > > Unfortunately the reboot didn't help. Do you know how to switch off SSL? I only see this in undercloud.conf: > > # Certificate file to use for OpenStack service SSL connections. > # Setting this enables SSL for the OpenStack API endpoints, leaving it > # unset disables SSL. (string value) > #undercloud_service_certificate = > > The problem: It is already unset. Is there a way to unset it even more? (kidding) > > The Tripleo web site has a section how to enable SSL, but AFAIK nothing now to not enable it. > > I added the openstack list back to the CC. > Bernd > >> On 11/5/2018 4:50 PM, Remo Mattei wrote: >> Try to turn ssl off there is a bug that may be the issue here but if you reboot the under cloud and retry to re-run the install it May finishes. So before turning off ssl try reboot rerun install and see what happen. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Remo at Italy1.com Mon Nov 5 13:52:29 2018 From: Remo at Italy1.com (Remo Mattei) Date: Mon, 5 Nov 2018 14:52:29 +0100 Subject: [Openstack] [tripleo] can't deploy Newton undercloud In-Reply-To: References: <48C50B40-696C-4DA8-8D16-F89862FF775A@italy1.com> Message-ID: <8CA42566-91C5-4B89-A336-B6737F125C6E@Italy1.com> I saw Juan response that should do the steps to bypass the ssl cert. Remo Inviato da iPhone > Il giorno 5 nov 2018, alle ore 13:54, Remo Mattei ha scritto: > > Yes as soon as I get to my computer I will share the info. > > Inviato da iPhone > >> Il giorno 5 nov 2018, alle ore 11:11, Bernd Bausch ha scritto: >> >> Thanks for the hint, Remo. >> >> Unfortunately the reboot didn't help. Do you know how to switch off SSL? I only see this in undercloud.conf: >> >> # Certificate file to use for OpenStack service SSL connections. >> # Setting this enables SSL for the OpenStack API endpoints, leaving it >> # unset disables SSL. (string value) >> #undercloud_service_certificate = >> >> The problem: It is already unset. Is there a way to unset it even more? (kidding) >> >> The Tripleo web site has a section how to enable SSL, but AFAIK nothing now to not enable it. >> >> I added the openstack list back to the CC. >> Bernd >> >>> On 11/5/2018 4:50 PM, Remo Mattei wrote: >>> Try to turn ssl off there is a bug that may be the issue here but if you reboot the under cloud and retry to re-run the install it May finishes. So before turning off ssl try reboot rerun install and see what happen. 
>> > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From doka.ua at gmx.com Mon Nov 5 16:12:15 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Mon, 5 Nov 2018 18:12:15 +0200 Subject: [Openstack] specify endpoint in API calls Message-ID: Dear colleagues, I have the following configuration of endpoints: $ openstack endpoint list --service identity +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+ | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+ | 68a4eabc27474beeb6f08d986cca3263 | RegionOne | keystone     | identity     | True    | public    | http://controller-ext:5000/v3/ | | 6fab7abe61e84463a05b4e58d8f7bb60 | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/     | | eb378df5949046a49661dad3c887677f | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3/     | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+ and want to explicitly use public endpoint (calling controller-ext, NOT controller) when doing API calls. Example of code: from keystoneauth1.identity import v3 from keystoneauth1 import session as authsession from keystoneclient.v3 import client as identity os_domain = 'default' auth_url = 'http://controller-ext:5000/v3' os_username = 'admin' os_password = 'adminpass' project_name = 'admin' password = v3.Password(auth_url=auth_url, username=os_username, password=os_password, user_domain_name=os_domain,project_name=project_name,project_domain_name=os_domain)auth = authsession.Session(auth=password) ks = identity.Client(session = auth) for ep in ks.endpoints.list(): pass returns an error since it tries to call 'controller' (which is internal address and isn't resolvable): keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://controller:5000/v3/endpoints?endpoint_filter=service_type&endpoint_filter=interface: HTTPConnectionPool(host='controller', port=5000): Max retries exceeded with url: /v3/endpoints?endpoint_filter=service_type&endpoint_filter=interface (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',)) The question is: are there ways to implicitly point to 'public' (and whatever else) endpoint when working with identity service? Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doka.ua at gmx.com Mon Nov 5 16:27:39 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Mon, 5 Nov 2018 18:27:39 +0200 Subject: [Openstack] specify endpoint in API calls In-Reply-To: References: Message-ID: Sorry, found the way immediately: ks = identity.Client(session = auth,interface='public') On 11/5/18 6:12 PM, Volodymyr Litovka wrote: > Dear colleagues, > > I have the following configuration of endpoints: > > $ openstack endpoint list --service identity > +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+ > | ID                               | Region    | Service Name | > Service Type | Enabled | Interface | URL                            | > +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+ > | 68a4eabc27474beeb6f08d986cca3263 | RegionOne | keystone     | > identity     | True    | public    | http://controller-ext:5000/v3/ | > | 6fab7abe61e84463a05b4e58d8f7bb60 | RegionOne | keystone     | > identity     | True    | internal  | http://controller:5000/v3/ | > | eb378df5949046a49661dad3c887677f | RegionOne | keystone     | > identity     | True    | admin     | http://controller:5000/v3/ | > +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+ > > and want to explicitly use public endpoint (calling controller-ext, > NOT controller) when doing API calls. Example of code: > > from keystoneauth1.identity import v3 > from keystoneauth1 import session as authsession > from keystoneclient.v3 import client as identity > > os_domain = 'default' > auth_url = 'http://controller-ext:5000/v3' > os_username = 'admin' > os_password = 'adminpass' > project_name = 'admin' > > password = v3.Password(auth_url=auth_url, > username=os_username, > password=os_password, > user_domain_name=os_domain,project_name=project_name,project_domain_name=os_domain)auth = authsession.Session(auth=password) > ks = identity.Client(session = auth) > > for ep in ks.endpoints.list(): > pass > > returns an error since it tries to call 'controller' (which is > internal address and isn't resolvable): > > keystoneauth1.exceptions.connection.ConnectFailure: Unable to > establish connection to > http://controller:5000/v3/endpoints?endpoint_filter=service_type&endpoint_filter=interface: > HTTPConnectionPool(host='controller', port=5000): Max retries exceeded > with url: > /v3/endpoints?endpoint_filter=service_type&endpoint_filter=interface > (Caused by NewConnectionError(' object at 0x1063f1940>: Failed to establish a new connection: [Errno > 8] nodename nor servname provided, or not known',)) > > The question is: are there ways to implicitly point to 'public' (and > whatever else) endpoint when working with identity service? > > Thank you. > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... 
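For the archive, a condensed sketch of the working pattern from this thread -- the interface selection happens when constructing the client, not the session (credentials below are the placeholder values from the original message):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session as authsession
    from keystoneclient.v3 import client as identity

    auth = v3.Password(auth_url='http://controller-ext:5000/v3',
                       username='admin', password='adminpass',
                       user_domain_name='default',
                       project_name='admin', project_domain_name='default')
    sess = authsession.Session(auth=auth)

    # 'public' tells the client to resolve its endpoint from the public
    # entry in the service catalog instead of whatever it defaults to.
    ks = identity.Client(session=sess, interface='public')

    for ep in ks.endpoints.list():
        print(ep.interface, ep.url)

Several other keystoneauth-based clients accept a similar argument (sometimes named endpoint_type), so the pattern is not unique to the identity client.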
URL: From jd8lester at gmail.com Mon Nov 5 20:21:50 2018 From: jd8lester at gmail.com (John Lester) Date: Mon, 5 Nov 2018 15:21:50 -0500 Subject: [Openstack] [kolla-ansible] kvm permission denied for nova user for all-in-one deployment: cannot launch instance as consequence Message-ID: Hello, I'm having some trouble launching an instance after deploying using kolla-ansible all-in-one deployment on Ubuntu and Centos using the most up to date release, Rocky. Here is the log. (nova-compute)[nova at openstack /]$ cat /var/log/kolla/nova/nova-compute.log | grep error : libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied 2018-11-05 15:10:36.720 6 ERROR nova.virt.libvirt.driver [req-2bf6b5d8-cfba-449c-967b-2e3dae7a3a56 507ab1a9af604db19446ea9d97ac2503 634aee2a6cd8437c93395be07af6fe33 - default default] [instance: 8d323b09-27c3-4632-9c03-48d27bbc5840] Failed to start libvirt guest: libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied 2018-11-05 15:10:43.502 6 ERROR nova.compute.manager [req-2bf6b5d8-cfba-449c-967b-2e3dae7a3a56 507ab1a9af604db19446ea9d97ac2503 634aee2a6cd8437c93395be07af6fe33 - default default] [instance: 8d323b09-27c3-4632-9c03-48d27bbc5840] Instance failed to spawn: libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied 2018-11-05 15:10:43.502 6 ERROR nova.compute.manager [instance: 8d323b09-27c3-4632-9c03-48d27bbc5840] self._encoded_xml, errors='ignore') 2018-11-05 15:10:43.502 6 ERROR nova.compute.manager [instance: 8d323b09-27c3-4632-9c03-48d27bbc5840] libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied : libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied It should work based on below? (nova-compute)[nova at openstack /]$ cat /dev/kvm cat: /dev/kvm: Permission denied (nova-compute)[nova at openstack /]$ ls -al /dev/kvm crw-rw----+ 1 root qemu 10, 232 Nov 5 15:10 /dev/kvm (nova-compute)[nova at openstack /]$ whoami nova (nova-compute)[nova at openstack /]$ groups nova nova : nova kolla qemu Thanks, JD -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Nov 5 21:08:19 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 5 Nov 2018 16:08:19 -0500 Subject: [Openstack] Seeking user/operator feedback on the vision for OpenStack clouds Message-ID: <93badd06-a23b-d8bb-4a7b-7b091e9a42b0@redhat.com> The TC is leading an initiative to write down a vision for what OpenStack clouds could eventually look like - essentially interpreting the OpenStack Mission Statement at a level of detail that's sufficient to inform technical and governance decisions. You can read the current draft here: http://logs.openstack.org/05/592205/5/check/openstack-tox-docs/24a6785/html/reference/technical-vision.html Feedback from the technical community suggests to me that it's getting pretty close to accurately capturing what we thought we should have been building. However, there's obviously no point in developers doing this in isolation if the thing we describe isn't what actual or potential operators and users of OpenStack want. Thus, we're seeking feedback from the wider OpenStack community on the vision. 
There are a number of ways to get involved: * You can comment directly on the review https://review.openstack.org/592205 * Replies to this thread are also welcome. * There will be a brief presentation on this topic at the joint leadership meeting with the Board, UC, and TC in Berlin: https://wiki.openstack.org/wiki/Governance/Foundation/12Nov2018BoardMeeting * Attend this Forum session at the Summit in Berlin where we'll be listening to feedback and discussing the next steps: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22818/vision-for-openstack-clouds-discussion I look forward to hearing from y'all :) thanks, Zane. From berndbausch at gmail.com Tue Nov 6 01:58:04 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 6 Nov 2018 10:58:04 +0900 Subject: [Openstack] [openstack client] command completion Message-ID: <35be60e3-669d-477f-2e7e-67f0eca41f2a@gmail.com> Rocky Devstack sets up bash command completion for the openstack client, e.g. /openstack net[TAB]/ expands to /network/. Sadly, there is no command completion when using the client interactively: /$ openstack// //(openstack) net[TAB][TAB][TAB][TAB][TAB]//[key breaks]/       # nothing happens But I faintly remember that it worked in earlier releases. Can this be configured, and how? Is this a bug? By the way, there used to be a /python-openstackclient /section in Launchpad. Doesn't exist anymore. Where are bugs tracked these days? Thanks, Bernd -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From tony at bakeyournoodle.com Tue Nov 6 02:19:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 6 Nov 2018 13:19:20 +1100 Subject: [Openstack] [openstack-dev] [all]Naming the T release of OpenStack -- Poll open In-Reply-To: <20181030054024.GC2343@thor.bakeyournoodle.com> References: <20181030054024.GC2343@thor.bakeyournoodle.com> Message-ID: <20181106021919.GB20576@thor.bakeyournoodle.com> Hi all, Time is running out for you to have your say in the T release name poll. We have just under 3 days left. If you haven't voted please do! On Tue, Oct 30, 2018 at 04:40:25PM +1100, Tony Breeds wrote: > Hi folks, > > It is time again to cast your vote for the naming of the T Release. > As with last time we'll use a public polling option over per user private URLs > for voting. This means, everybody should proceed to use the following URL to > cast their vote: > > https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_aac97f1cbb6c61df&akey=b9e448b340787f0e > > We've selected a public poll to ensure that the whole community, not just gerrit > change owners get a vote. Also the size of our community has grown such that we > can overwhelm CIVS if using private urls. A public can mean that users > behind NAT, proxy servers or firewalls may receive an message saying > that your vote has already been lodged, if this happens please try > another IP. > > Because this is a public poll, results will currently be only viewable by myself > until the poll closes. Once closed, I'll post the URL making the results > viewable to everybody. This was done to avoid everybody seeing the results while > the public poll is running. > > The poll will officially end on 2018-11-08 00:00:00+00:00[1], and results will be > posted shortly after. 
> > [1] https://governance.openstack.org/tc/reference/release-naming.html > --- > > According to the Release Naming Process, this poll is to determine the > community preferences for the name of the T release of OpenStack. It is > possible that the top choice is not viable for legal reasons, so the second or > later community preference could wind up being the name. > > Release Name Criteria > --------------------- > > Each release name must start with the letter of the ISO basic Latin alphabet > following the initial letter of the previous release, starting with the > initial release of "Austin". After "Z", the next name should start with > "A" again. > > The name must be composed only of the 26 characters of the ISO basic Latin > alphabet. Names which can be transliterated into this character set are also > acceptable. > > The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack design summit for the > corresponding release. The exact boundaries of the geographic region under > consideration must be declared before the opening of nominations, as part of > the initiation of the selection process. > > The name must be a single word with a maximum of 10 characters. Words that > describe the feature should not be included, so "Foo City" or "Foo Peak" > would both be eligible as "Foo". > > Names which do not meet these criteria but otherwise sound really cool > should be added to a separate section of the wiki page and the TC may make > an exception for one or more of them to be considered in the Condorcet poll. > The naming official is responsible for presenting the list of exceptional > names for consideration to the TC before the poll opens. > > Exact Geographic Region > ----------------------- > > The Geographic Region from where names for the S release will come is Colorado > > Proposed Names > -------------- > > * Tarryall > * Teakettle > * Teller > * Telluride > * Thomas : the Tank Engine > * Thornton > * Tiger > * Tincup > * Timnath > * Timber > * Tiny Town > * Torreys > * Trail > * Trinidad > * Treasure > * Troublesome > * Trussville > * Turret > * Tyrone > > Proposed Names that do not meet the criteria (accepted by the TC) > ----------------------------------------------------------------- > > * Train🚂 : Many Attendees of the first Denver PTG have a story to tell about the trains near the PTG hotel. We could celebrate those stories with this name > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From soheil.ir08 at gmail.com Tue Nov 6 12:23:05 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Tue, 6 Nov 2018 15:53:05 +0330 Subject: [Openstack] [PackStack][Neutron] erro port no present in bridge br-int Message-ID: Hi, I initilize an instance using a defined flat network and I got the error: port no present in bridge br-int I have a 2 node deployment (controller + network, compute). 
The output of the command ovs-vsctl show is *On the network node* d3a06f16-d727-4333-9de6-cf4ce3b0ce36 Manager "ptcp:6640:127.0.0.1" is_connected: true Bridge br-ex Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port br-ex Interface br-ex type: internal Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "ens33" Interface "ens33" Bridge br-int Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port br-int Interface br-int type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port "tapefb98047-57" tag: 1 Interface "tapefb98047-57" type: internal Port "qr-d62d0c14-51" tag: 1 Interface "qr-d62d0c14-51" type: internal Port "qg-5468707b-6d" tag: 2 Interface "qg-5468707b-6d" type: internal Bridge br-tun Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port br-tun Interface br-tun type: internal Port "vxlan-c0a8003d" Interface "vxlan-c0a8003d" type: vxlan options: {df_default="true", in_key=flow, local_ip="192.168.0.62", out_key=flow, remote_ip="192.168.0.61"} ovs_version: "2.9.0" *On the Compute node* *55e62867-9c88-4925-b49c-55fb74d174bd* * Manager "ptcp:6640:127.0.0.1"* * is_connected: true* * Bridge br-ex* * Controller "tcp:127.0.0.1:6633 "* * is_connected: true* * fail_mode: secure* * Port phy-br-ex* * Interface phy-br-ex* * type: patch* * options: {peer=int-br-ex}* * Port "enp2s0"* * Interface "enp2s0"* * Port br-ex* * Interface br-ex* * type: internal* * Bridge br-tun* * Controller "tcp:127.0.0.1:6633 "* * is_connected: true* * fail_mode: secure* * Port br-tun* * Interface br-tun* * type: internal* * Port "vxlan-c0a8003e"* * Interface "vxlan-c0a8003e"* * type: vxlan* * options: {df_default="true", in_key=flow, local_ip="192.168.0.61", out_key=flow, remote_ip="192.168.0.62"}* * Port patch-int* * Interface patch-int* * type: patch* * options: {peer=patch-tun}* * Bridge br-int* * Controller "tcp:127.0.0.1:6633 "* * is_connected: true* * fail_mode: secure* * Port int-br-ex* * Interface int-br-ex* * type: patch* * options: {peer=phy-br-ex}* * Port br-int* * Interface br-int* * type: internal* * Port patch-tun* * Interface patch-tun* * type: patch* * options: {peer=patch-int}* * ovs_version: "2.9.0"* How can I solve the problem? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Nov 6 13:07:00 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Nov 2018 13:07:00 +0000 Subject: [Openstack] [openstack client] command completion In-Reply-To: <35be60e3-669d-477f-2e7e-67f0eca41f2a@gmail.com> References: <35be60e3-669d-477f-2e7e-67f0eca41f2a@gmail.com> Message-ID: <20181106130700.3flvylokjzeqc7zh@yuggoth.org> On 2018-11-06 10:58:04 +0900 (+0900), Bernd Bausch wrote: [...] > By the way, there used to be a /python-openstackclient /section in > Launchpad. Doesn't exist anymore. Where are bugs tracked these days? At the top of https://launchpad.net/python-openstackclient it says, "Note that all Launchpad activity (just bugs & blueprints really) has been migrated to OpenStack's Storyboard: https://storyboard.openstack.org/#!/project_group/80" I suppose now that project group name URL support is in for SB, they could update that to the more memorable https://storyboard.openstack.org/#!/project_group/openstackclient instead. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Tue Nov 6 13:09:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 06 Nov 2018 08:09:40 -0500 Subject: [Openstack] [openstack client] command completion In-Reply-To: <35be60e3-669d-477f-2e7e-67f0eca41f2a@gmail.com> References: <35be60e3-669d-477f-2e7e-67f0eca41f2a@gmail.com> Message-ID: Bernd Bausch writes: > Rocky Devstack sets up bash command completion for the openstack client, > e.g. /openstack net[TAB]/ expands to /network/. Sadly, there is no > command completion when using the client interactively: > > /$ openstack// > //(openstack) net[TAB][TAB][TAB][TAB][TAB]//[key breaks]/       # > nothing happens > > But I faintly remember that it worked in earlier releases. Can this be > configured, and how? Is this a bug? It seems like one. > > By the way, there used to be a /python-openstackclient /section in > Launchpad. Doesn't exist anymore. Where are bugs tracked these days? According to [1] the bug tracker has moved to storyboard. Doug [1] http://git.openstack.org/cgit/openstack/python-openstackclient/tree/README.rst#n41 From terje.lundin at evolved-intelligence.com Tue Nov 6 13:21:31 2018 From: terje.lundin at evolved-intelligence.com (Terry Lundin) Date: Tue, 6 Nov 2018 13:21:31 +0000 Subject: [Openstack] VMs cannot fetch metadata Message-ID: Hi all, I've been struggling with instances suddenly not being able to fetch metadata from Openstack Queens (this has worked fine earlier). Newly created VMs fail to connect to the magic ip, eg. http://169.254.169.254/, and won't initialize properly. Subsequently ssh login will fail since no key is uploaded. The symptom is failed requests in the log *Cirros:* Starting network... udhcpc (v1.20.1) started Sending discover... Sending select for 10.0.0.18... Lease of 10.0.0.18 obtained, lease time 86400 route: SIOCADDRT: File exists WARN: failed: route add -net "0.0.0.0/0" gw "10.0.0.1" cirros-ds 'net' up at 0.94 checking http://169.254.169.254/2009-04-04/instance-id failed 1/20: up 0.94. request failed failed 2/20: up 3.01. request failed failed 3/20: up 5.03. request failed failed 4/20: up 7.04. request failed *..and on Centos6:* ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | ci-info: +-------+-----------------+----------+-----------------+-----------+-------+ ci-info: | 0 | 169.254.169.254 | 10.0.0.1 | 255.255.255.255 | eth0 | UGH | ci-info: | 1 | 10.0.0.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U | ci-info: | 2 | 0.0.0.0 | 10.0.0.1 | 0.0.0.0 | eth0 | UG | ci-info: +-------+-----------------+----------+-----------------+-----------+-------+ 2018-11-06 08:10:07,892 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: unexpected error ['NoneType' object has no attribute 'status_code'] 2018-11-06 08:10:08,906 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: unexpected error ['NoneType' object has no attribute 'status_code'] 2018-11-06 08:10:09,925 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: unexpected error ['NoneType' object has no attribute ... Using Curl manually, eg. 
'/curl http://169.254.169.254/openstack/' one gets: /curl: (52) Empty reply from server/ *At the same time this error is showing up in the syslog on the controller:* Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 460, in fire_timers Nov  6 12:51:01 controller neutron-metadata-agent[3094]: timer() Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 59, in __call__ Nov  6 12:51:01 controller neutron-metadata-agent[3094]: cb(*args, **kw) Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 219, in main Nov  6 12:51:01 controller neutron-metadata-agent[3094]: result = function(*args, **kwargs) Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 793, in process_request Nov  6 12:51:01 controller neutron-metadata-agent[3094]: proto.__init__(conn_state, self) Nov  6 12:51:01 controller neutron-metadata-agent[3094]: TypeError: __init__() takes exactly 4 arguments (3 given) *Neither rebooting the controller, reinstalling neutron, or restarting the services will do anything top fix this.* Has anyone else seen this? We are using Queens with a single controller. Kind Regards Terje Lundin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Nov 6 14:25:39 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 6 Nov 2018 09:25:39 -0500 Subject: [Openstack] VMs cannot fetch metadata In-Reply-To: References: Message-ID: <6d8b807d-67c3-5c9f-803d-9a8cd6a93feb@gmail.com> https://bugs.launchpad.net/neutron/+bug/1777640 Best, -jay On 11/06/2018 08:21 AM, Terry Lundin wrote: > Hi all, > > I've been struggling with instances suddenly not being able to fetch > metadata from Openstack Queens (this has worked fine earlier). > > Newly created VMs fail to connect to the magic ip, eg. > http://169.254.169.254/, and won't initialize properly. Subsequently ssh > login will fail since no key is uploaded. > > The symptom is failed requests in the log > > *Cirros:* > Starting network... > udhcpc (v1.20.1) started > Sending discover... > Sending select for 10.0.0.18... > Lease of 10.0.0.18 obtained, lease time 86400 > route: SIOCADDRT: File exists > WARN: failed: route add -net "0.0.0.0/0" gw "10.0.0.1" > cirros-ds 'net' up at 0.94 > checkinghttp://169.254.169.254/2009-04-04/instance-id > failed 1/20: up 0.94. request failed > failed 2/20: up 3.01. request failed > failed 3/20: up 5.03. request failed > failed 4/20: up 7.04. 
request failed > > *..and on Centos6:* > ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | > ci-info: +-------+-----------------+----------+-----------------+-----------+-------+ > ci-info: | 0 | 169.254.169.254 | 10.0.0.1 | 255.255.255.255 | eth0 | UGH | > ci-info: | 1 | 10.0.0.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U | > ci-info: | 2 | 0.0.0.0 | 10.0.0.1 | 0.0.0.0 | eth0 | UG | > ci-info: +-------+-----------------+----------+-----------------+-----------+-------+ > 2018-11-06 08:10:07,892 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: unexpected error ['NoneType' object has no attribute 'status_code'] > 2018-11-06 08:10:08,906 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: unexpected error ['NoneType' object has no attribute 'status_code'] > 2018-11-06 08:10:09,925 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: unexpected error ['NoneType' object has no attribute > ... > > Using Curl manually, eg. '/curl http://169.254.169.254/openstack/' one > gets: > > /curl: (52) Empty reply from server/ > > *At the same time this error is showing up in the syslog on the controller:* > > Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File > "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 460, > in fire_timers > Nov  6 12:51:01 controller neutron-metadata-agent[3094]: timer() > Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File > "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line > 59, in __call__ > Nov  6 12:51:01 controller neutron-metadata-agent[3094]: cb(*args, **kw) > Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File > "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line > 219, in main > Nov  6 12:51:01 controller neutron-metadata-agent[3094]: result = > function(*args, **kwargs) > Nov  6 12:51:01 controller neutron-metadata-agent[3094]:   File > "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 793, in > process_request > Nov  6 12:51:01 controller neutron-metadata-agent[3094]: > proto.__init__(conn_state, self) > Nov  6 12:51:01 controller neutron-metadata-agent[3094]: TypeError: > __init__() takes exactly 4 arguments (3 given) > > *Neither rebooting the controller, reinstalling neutron, or restarting > the services will do anything top fix this.* > > Has anyone else seen this? We are using Queens with a single controller. > > Kind Regards > > Terje Lundin > > > > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From amotoki at gmail.com Tue Nov 6 16:04:20 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 7 Nov 2018 01:04:20 +0900 Subject: [Openstack] [PackStack][Neutron] erro port no present in bridge br-int In-Reply-To: References: Message-ID: How is your [ovs] bridge_mapping in your configuration? Flat network requires a corresponding bridge_mapping entry and you also need to create a corresponding bridge in advance. 2018年11月6日(火) 21:31 Soheil Pourbafrani : > Hi, I initilize an instance using a defined flat network and I got the > error: > port no present in bridge br-int > > I have a 2 node deployment (controller + network, compute). 
> > The output of the command ovs-vsctl show is > > *On the network node* > d3a06f16-d727-4333-9de6-cf4ce3b0ce36 > Manager "ptcp:6640:127.0.0.1" > is_connected: true > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port br-ex > Interface br-ex > type: internal > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port "ens33" > Interface "ens33" > Bridge br-int > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port br-int > Interface br-int > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Port int-br-ex > Interface int-br-ex > type: patch > options: {peer=phy-br-ex} > Port "tapefb98047-57" > tag: 1 > Interface "tapefb98047-57" > type: internal > Port "qr-d62d0c14-51" > tag: 1 > Interface "qr-d62d0c14-51" > type: internal > Port "qg-5468707b-6d" > tag: 2 > Interface "qg-5468707b-6d" > type: internal > Bridge br-tun > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > Port br-tun > Interface br-tun > type: internal > Port "vxlan-c0a8003d" > Interface "vxlan-c0a8003d" > type: vxlan > options: {df_default="true", in_key=flow, > local_ip="192.168.0.62", out_key=flow, remote_ip="192.168.0.61"} > ovs_version: "2.9.0" > > *On the Compute node* > > *55e62867-9c88-4925-b49c-55fb74d174bd* > * Manager "ptcp:6640:127.0.0.1"* > * is_connected: true* > * Bridge br-ex* > * Controller "tcp:127.0.0.1:6633 "* > * is_connected: true* > * fail_mode: secure* > * Port phy-br-ex* > * Interface phy-br-ex* > * type: patch* > * options: {peer=int-br-ex}* > * Port "enp2s0"* > * Interface "enp2s0"* > * Port br-ex* > * Interface br-ex* > * type: internal* > * Bridge br-tun* > * Controller "tcp:127.0.0.1:6633 "* > * is_connected: true* > * fail_mode: secure* > * Port br-tun* > * Interface br-tun* > * type: internal* > * Port "vxlan-c0a8003e"* > * Interface "vxlan-c0a8003e"* > * type: vxlan* > * options: {df_default="true", in_key=flow, > local_ip="192.168.0.61", out_key=flow, remote_ip="192.168.0.62"}* > * Port patch-int* > * Interface patch-int* > * type: patch* > * options: {peer=patch-tun}* > * Bridge br-int* > * Controller "tcp:127.0.0.1:6633 "* > * is_connected: true* > * fail_mode: secure* > * Port int-br-ex* > * Interface int-br-ex* > * type: patch* > * options: {peer=phy-br-ex}* > * Port br-int* > * Interface br-int* > * type: internal* > * Port patch-tun* > * Interface patch-tun* > * type: patch* > * options: {peer=patch-int}* > * ovs_version: "2.9.0"* > > How can I solve the problem? > > Thanks > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From torin.woltjer at granddial.com Tue Nov 6 22:31:04 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Tue, 06 Nov 2018 22:31:04 GMT Subject: [Openstack] DHCP not accessible on new compute node. Message-ID: <4538df3910794a1a8ce7cef99cd99a52@granddial.com> So I did further ping tests and explored differences between my working compute nodes and my non-working compute node. Firstly, it seems that the VXLAN is working between the nonworking compute node and controller nodes. 
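(In case it is useful to anyone comparing nodes like this: when two hosts differ only in NIC tunnel-offload capabilities -- see the adapter feature diff towards the end of this message -- one quick experiment is to compare offload settings on the VXLAN uplink with ethtool and temporarily switch off the generic segmentation/checksum offloads on the problem node. A sketch, assuming a hypothetical interface name eth0; feature availability varies by driver:

    # compare the offload features on a working node and the non-working node
    ethtool -k eth0 | egrep 'udp_tnl|segmentation|checksum'

    # experiment: disable the offloads that most often mangle encapsulated
    # traffic on older adapters; revert if it makes no difference
    ethtool -K eth0 gro off gso off tso off tx off rx off

If traffic starts flowing afterwards, make the change persistent through the distro's network scripts or look for newer firmware/drivers for the adapter.)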
After manually setting IP addresses, I can ping from an instance on the non working node to 172.16.1.1 (neutron gateway); when running tcpdump I can see icmp on: -compute's bridge interface -compute's vxlan interface -controller's vxlan interface -controller's bridge interface -controller's qrouter namespace This behavior is expected and is the same for instances on the working compute nodes. However if I try to ping 172.16.1.2 (neutron dhcp) from an instance on the nonworking compute node, pings do not flow. If I use tcpdump to listen for pings I cannot hear any, even listening on the compute node itself; this includes listening on the vxlan, bridge, and the tap device directly. Once I try to ping in reverse, from the dhcp netns on the controller to the instance on the non-working compute node, pings begin to flow. The same is true for pings between the instance on the nonworking compute and an instance on the working compute. Pings do not flow, until the working instance pings. Once pings are flowing between the nonworking instance and neutron DHCP; I run dhclient on the instance and start listening for DHCP requests with tcpdump, and I hear them on: -compute's bridge interface -compute's vxlan interface They don't make it to the controller node. I've re-enabled l2-population on the controller's and rebooted them just in case, but the problem persists. A diff of /etc/ on all compute nodes shows that all openstack and networking related configuration is effectively identical. The last difference between the non-working compute node and the working compute nodes as far as I can tell, is that the new node has a different network card. The working nodes use "Broadcom Limited NetXtreme II BCM57712 10 Gigabit Ethernet" and the nonworking node uses a "NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter". Are there any known issues with neutron and this brand of network adapter? I looked at the capabilities on both adapters and here are the differences: Broadcom NetXen tx-tcp-ecn-segmentation: on tx-tcp-ecn-segmentation: off [fixed] rx-vlan-offload: on [fixed] rx-vlan-offload: off [fixed] receive-hashing: on receive-hashing: off [fixed] rx-vlan-filter: on rx-vlan-filter: off [fixed] tx-gre-segmentation: on tx-gre-segmentation: off [fixed] tx-gre-csum-segmentation: on tx-gre-csum-segmentation: off [fixed] tx-ipxip4-segmentation: on tx-ipxip4-segmentation: off [fixed] tx-udp_tnl-segmentation: on tx-udp_tnl-segmentation: off [fixed] tx-udp_tnl-csum-segmentation: on tx-udp_tnl-csum-segmentation: off [fixed] tx-gso-partial: on tx-gso-partial: off [fixed] loopback: off loopback: off [fixed] rx-udp_tunnel-port-offload: on rx-udp_tunnel-port-offload: off [fixed] -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Wed Nov 7 00:49:12 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Wed, 7 Nov 2018 09:49:12 +0900 Subject: [Openstack] [openstack client] command completion In-Reply-To: References: <35be60e3-669d-477f-2e7e-67f0eca41f2a@gmail.com> Message-ID: <1d6f44ff-6eb2-0ba4-0682-040f37000546@gmail.com> Thanks for educating me, Doug and Jeremy. Bug submitted. Just by the way: I checked [1] and [2] to find out where bugs might be tracked. The former doesn't mention bugs, the latter is outdated. Finding the way through the maze of OpenStack information is not always easy. 
[1] https://governance.openstack.org/tc/reference/projects/openstackclient.html [2] https://wiki.openstack.org/wiki/OpenStackClient On 11/6/2018 10:09 PM, Doug Hellmann wrote: > Bernd Bausch writes: > >> Rocky Devstack sets up bash command completion for the openstack client, >> e.g. /openstack net[TAB]/ expands to /network/. Sadly, there is no >> command completion when using the client interactively: >> >> /$ openstack// >> //(openstack) net[TAB][TAB][TAB][TAB][TAB]//[key breaks]/       # >> nothing happens >> >> But I faintly remember that it worked in earlier releases. Can this be >> configured, and how? Is this a bug? > It seems like one. > >> By the way, there used to be a /python-openstackclient /section in >> Launchpad. Doesn't exist anymore. Where are bugs tracked these days? > According to [1] the bug tracker has moved to storyboard. > > Doug > > [1] http://git.openstack.org/cgit/openstack/python-openstackclient/tree/README.rst#n41 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From laszlo.budai at gmail.com Wed Nov 7 06:51:13 2018 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Wed, 7 Nov 2018 08:51:13 +0200 Subject: [Openstack] [PackStack][Neutron] erro port no present in bridge br-int In-Reply-To: References: Message-ID: Hi we had a similar situation when the ``host`` entry in the neutron.conf was different than the host entry in the nova.conf on the compute nodes. So if you're setting a ``host`` entry in one of these files, then make sure the other file contains the same ``host`` setting. see https://docs.openstack.org/neutron/rocky/configuration/neutron.html#DEFAULT.host and https://docs.openstack.org/nova/rocky/configuration/config.html#DEFAULT.host Kind regards, Laszlo On 11/6/18 6:04 PM, Akihiro Motoki wrote: > How is your [ovs] bridge_mapping in your configuration? > Flat network requires a corresponding bridge_mapping entry and you also need to create a corresponding bridge in advance. > > > 2018年11月6日(火) 21:31 Soheil Pourbafrani >: > > Hi, I initilize an instance using a defined flat network and I got the error: > port no present in bridge br-int > > I have a 2 node deployment (controller + network, compute). 
> > The output of the command ovs-vsctl show is > * > * > *On the network node* > d3a06f16-d727-4333-9de6-cf4ce3b0ce36 >     Manager "ptcp:6640:127.0.0.1" >         is_connected: true >     Bridge br-ex >         Controller "tcp:127.0.0.1:6633 " >             is_connected: true >         fail_mode: secure >         Port br-ex >             Interface br-ex >                 type: internal >         Port phy-br-ex >             Interface phy-br-ex >                 type: patch >                 options: {peer=int-br-ex} >         Port "ens33" >             Interface "ens33" >     Bridge br-int >         Controller "tcp:127.0.0.1:6633 " >             is_connected: true >         fail_mode: secure >         Port br-int >             Interface br-int >                 type: internal >         Port patch-tun >             Interface patch-tun >                 type: patch >                 options: {peer=patch-int} >         Port int-br-ex >             Interface int-br-ex >                 type: patch >                 options: {peer=phy-br-ex} >         Port "tapefb98047-57" >             tag: 1 >             Interface "tapefb98047-57" >                 type: internal >         Port "qr-d62d0c14-51" >             tag: 1 >             Interface "qr-d62d0c14-51" >                 type: internal >         Port "qg-5468707b-6d" >             tag: 2 >             Interface "qg-5468707b-6d" >                 type: internal >     Bridge br-tun >         Controller "tcp:127.0.0.1:6633 " >             is_connected: true >         fail_mode: secure >         Port patch-int >             Interface patch-int >                 type: patch >                 options: {peer=patch-tun} >         Port br-tun >             Interface br-tun >                 type: internal >         Port "vxlan-c0a8003d" >             Interface "vxlan-c0a8003d" >                 type: vxlan >                 options: {df_default="true", in_key=flow, local_ip="192.168.0.62", out_key=flow, remote_ip="192.168.0.61"} >     ovs_version: "2.9.0" > > *On the Compute node* > * > * > *55e62867-9c88-4925-b49c-55fb74d174bd* > *    Manager "ptcp:6640:127.0.0.1"* > *        is_connected: true* > *    Bridge br-ex* > *        Controller "tcp:127.0.0.1:6633 "* > *            is_connected: true* > *        fail_mode: secure* > *        Port phy-br-ex* > *            Interface phy-br-ex* > *                type: patch* > *                options: {peer=int-br-ex}* > *        Port "enp2s0"* > *            Interface "enp2s0"* > *        Port br-ex* > *            Interface br-ex* > *                type: internal* > *    Bridge br-tun* > *        Controller "tcp:127.0.0.1:6633 "* > *            is_connected: true* > *        fail_mode: secure* > *        Port br-tun* > *            Interface br-tun* > *                type: internal* > *        Port "vxlan-c0a8003e"* > *            Interface "vxlan-c0a8003e"* > *                type: vxlan* > *                options: {df_default="true", in_key=flow, local_ip="192.168.0.61", out_key=flow, remote_ip="192.168.0.62"}* > *        Port patch-int* > *            Interface patch-int* > *                type: patch* > *                options: {peer=patch-tun}* > *    Bridge br-int* > *        Controller "tcp:127.0.0.1:6633 "* > *            is_connected: true* > *        fail_mode: secure* > *        Port int-br-ex* > *            Interface int-br-ex* > *                type: patch* > *                options: {peer=phy-br-ex}* > *        Port br-int* > *            Interface br-int* > *                
type: internal* > *        Port patch-tun* > *            Interface patch-tun* > *                type: patch* > *                options: {peer=patch-int}* > *    ovs_version: "2.9.0"* > > How can I solve the problem? > > Thanks > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to     : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From doug at doughellmann.com Wed Nov 7 12:51:54 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 07 Nov 2018 07:51:54 -0500 Subject: [Openstack] [openstack client] command completion In-Reply-To: <1d6f44ff-6eb2-0ba4-0682-040f37000546@gmail.com> References: <35be60e3-669d-477f-2e7e-67f0eca41f2a@gmail.com> <1d6f44ff-6eb2-0ba4-0682-040f37000546@gmail.com> Message-ID: Bernd Bausch writes: > Thanks for educating me, Doug and Jeremy. Bug submitted. > > Just by the way: I checked [1] and [2] to find out where bugs might be > tracked. The former doesn't mention bugs, the latter is outdated. > Finding the way through the maze of OpenStack information is not always > easy. > > [1] > https://governance.openstack.org/tc/reference/projects/openstackclient.html Interesting; I don't think we really expected anyone to use the governance docs to explore the projects in terms of code. I wonder how common that is? Maybe we should add some links to docs.openstack.org. > > [2] https://wiki.openstack.org/wiki/OpenStackClient We've marked that as deprecated but left it around for future digital archaeologists. Maybe we should delete more of the content. We should definitely update the home page in the governance repo so it doesn't link to a wiki page we aren't maintaining, so I will do that. -- Doug From soheil.ir08 at gmail.com Wed Nov 7 12:52:54 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Wed, 7 Nov 2018 16:22:54 +0330 Subject: [Openstack] [PackStack][Neutron] erro port no present in bridge br-int In-Reply-To: References: Message-ID: Thanks. In one node installation no error happened but in two node installation, I got the error. So I guess the answer of Laszlo is reasonable. On Wed, Nov 7, 2018 at 10:23 AM Budai Laszlo wrote: > Hi > > we had a similar situation when the ``host`` entry in the neutron.conf was > different than the host entry in the nova.conf on the compute nodes. > So if you're setting a ``host`` entry in one of these files, then make > sure the other file contains the same ``host`` setting. > > see > https://docs.openstack.org/neutron/rocky/configuration/neutron.html#DEFAULT.host > and > https://docs.openstack.org/nova/rocky/configuration/config.html#DEFAULT.host > > Kind regards, > Laszlo > > On 11/6/18 6:04 PM, Akihiro Motoki wrote: > > How is your [ovs] bridge_mapping in your configuration? > > Flat network requires a corresponding bridge_mapping entry and you also > need to create a corresponding bridge in advance. > > > > > > 2018年11月6日(火) 21:31 Soheil Pourbafrani soheil.ir08 at gmail.com>>: > > > > Hi, I initilize an instance using a defined flat network and I got > the error: > > port no present in bridge br-int > > > > I have a 2 node deployment (controller + network, compute). 
> > > > The output of the command ovs-vsctl show is > > * > > * > > *On the network node* > > d3a06f16-d727-4333-9de6-cf4ce3b0ce36 > > Manager "ptcp:6640:127.0.0.1" > > is_connected: true > > Bridge br-ex > > Controller "tcp:127.0.0.1:6633 " > > is_connected: true > > fail_mode: secure > > Port br-ex > > Interface br-ex > > type: internal > > Port phy-br-ex > > Interface phy-br-ex > > type: patch > > options: {peer=int-br-ex} > > Port "ens33" > > Interface "ens33" > > Bridge br-int > > Controller "tcp:127.0.0.1:6633 " > > is_connected: true > > fail_mode: secure > > Port br-int > > Interface br-int > > type: internal > > Port patch-tun > > Interface patch-tun > > type: patch > > options: {peer=patch-int} > > Port int-br-ex > > Interface int-br-ex > > type: patch > > options: {peer=phy-br-ex} > > Port "tapefb98047-57" > > tag: 1 > > Interface "tapefb98047-57" > > type: internal > > Port "qr-d62d0c14-51" > > tag: 1 > > Interface "qr-d62d0c14-51" > > type: internal > > Port "qg-5468707b-6d" > > tag: 2 > > Interface "qg-5468707b-6d" > > type: internal > > Bridge br-tun > > Controller "tcp:127.0.0.1:6633 " > > is_connected: true > > fail_mode: secure > > Port patch-int > > Interface patch-int > > type: patch > > options: {peer=patch-tun} > > Port br-tun > > Interface br-tun > > type: internal > > Port "vxlan-c0a8003d" > > Interface "vxlan-c0a8003d" > > type: vxlan > > options: {df_default="true", in_key=flow, > local_ip="192.168.0.62", out_key=flow, remote_ip="192.168.0.61"} > > ovs_version: "2.9.0" > > > > *On the Compute node* > > * > > * > > *55e62867-9c88-4925-b49c-55fb74d174bd* > > * Manager "ptcp:6640:127.0.0.1"* > > * is_connected: true* > > * Bridge br-ex* > > * Controller "tcp:127.0.0.1:6633 "* > > * is_connected: true* > > * fail_mode: secure* > > * Port phy-br-ex* > > * Interface phy-br-ex* > > * type: patch* > > * options: {peer=int-br-ex}* > > * Port "enp2s0"* > > * Interface "enp2s0"* > > * Port br-ex* > > * Interface br-ex* > > * type: internal* > > * Bridge br-tun* > > * Controller "tcp:127.0.0.1:6633 "* > > * is_connected: true* > > * fail_mode: secure* > > * Port br-tun* > > * Interface br-tun* > > * type: internal* > > * Port "vxlan-c0a8003e"* > > * Interface "vxlan-c0a8003e"* > > * type: vxlan* > > * options: {df_default="true", in_key=flow, > local_ip="192.168.0.61", out_key=flow, remote_ip="192.168.0.62"}* > > * Port patch-int* > > * Interface patch-int* > > * type: patch* > > * options: {peer=patch-tun}* > > * Bridge br-int* > > * Controller "tcp:127.0.0.1:6633 "* > > * is_connected: true* > > * fail_mode: secure* > > * Port int-br-ex* > > * Interface int-br-ex* > > * type: patch* > > * options: {peer=phy-br-ex}* > > * Port br-int* > > * Interface br-int* > > * type: internal* > > * Port patch-tun* > > * Interface patch-tun* > > * type: patch* > > * options: {peer=patch-int}* > > * ovs_version: "2.9.0"* > > > > How can I solve the problem? 
> > > > Thanks > > _______________________________________________ > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org openstack at lists.openstack.org> > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > _______________________________________________ > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Wed Nov 7 13:46:50 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Wed, 7 Nov 2018 22:46:50 +0900 Subject: [Openstack] [glance] Rocky image import stuck Message-ID: <34751b29-b07e-5bb0-d174-ddfb3fe8549a@gmail.com> Does the new image import process work? Am I missing something? Uploaded images stay in an intermediate state, either /uploading/ or /importing/, and never become /active.///Where should I look?/ / On a stable/Rocky Devstack, I do: openstack image create --disk-format qcow2 myimg Image status is /queueing/, as expected. glance image-stage --file devstack/files/Fedora...qcow2 --progress IMAGE_ID Image status is /uploading/. A copy of the image is on /tmp/staging/IMAGE_ID. glance image-import --import-method glance-direct IMAGE_ID Sometimes, the status remains /uploading, /sometimes it turns /importing, /never /active./ glance-api log grep'd for the image ID: Nov 07 18:51:36 rocky devstack at g-api.service[1033]: INFO glance.common.scripts.image_import.main [None req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Task ec4b36fd-dece-4f41-aa8d-337d01c239f1: Got image data uri file:///tmp/staging/72a6d7d0-a538-4922-95f2-1649e9702eb2 to be imported Nov 07 18:51:37 rocky devstack at g-api.service[1033]: DEBUG glance_store._drivers.swift.store [None req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Adding image object '72a6d7d0-a538-4922-95f2-1649e9702eb2' to Swift {{(pid=2250) add /usr/local/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py:941}} Nov 07 18:51:45 rocky devstack at g-api.service[1033]: DEBUG swiftclient [None req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] REQ: curl -i http://192.168.1.201:8080/v1/AUTH_9495609cff044252965f8c3e5e86f8e0/glance/72a6d7d0-a538-4922-95f2-1649e9702eb2-00001 -X PUT -H "X-Auth-Token: gAAAAABb4rWowjLQ..." {{(pid=2250) http_log /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:167}} Nov 07 18:51:45 rocky devstack at g-api.service[1033]: DEBUG glance_store._drivers.swift.store [None req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Wrote chunk 72a6d7d0-a538-4922-95f2-1649e9702eb2-00001 (1/?) of length 204800000 to Swift returning MD5 of content: 5139500edbb5814a1351100d162db333 {{(pid=2250) add /usr/local/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py:1024}} And then nothing. So it does send a 200MB chunk to Swift. I can see it on Swift, too. But it stops after the first chunk and forgets to send the rest. After I tried that a few times, now it doesn't even upload the first chunk. Nothing in Swift at all. 
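A couple of glance-api.conf settings worth double-checking when the interoperable import flow stalls like this -- shown with their stock values as a sketch, not as a known fix; the staging path below matches the one visible in the log above:

    [DEFAULT]
    # import methods offered by the v2 import API; glance-direct is the one
    # used by image-stage / image-import
    enabled_import_methods = glance-direct,web-download

    # where staged data lives until the import task copies it into the store;
    # every glance-api worker must be able to read this path
    node_staging_uri = file:///tmp/staging/

The copy into the backend store is done by a background task, so the v2 tasks API (glance task-list / glance task-show <id>, if your client exposes them) sometimes records a failure message that never makes it onto the image itself.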
No error in the Glance API log either. Same problem with the /image-upload-via-import /command. I also tried the /web-download /import method; same result. In all these cases, the image remains in an non-active state forever, i.e. an hour or so, when I lose patience and delete it. "Classic" upload works (/openstack image create --file ..../). The log file then shows the expected chunk uploads to Swift. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From laszlo.budai at gmail.com Wed Nov 7 14:09:11 2018 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Wed, 7 Nov 2018 16:09:11 +0200 Subject: [Openstack] [PackStack][Neutron] erro port no present in bridge br-int In-Reply-To: References: Message-ID: Hi, please don't misunderstand me. If you have multiple nodes, then each of them will have its own ``host`` setting (like, host = node1 for node1 and host = node2 for node2). The important thing is that both the neutron.conf and nova.conf to have the same ``host`` setting on one node. So if you have used the ``host = node1`` in neutron.conf of node1, then do the same in the nova.conf on the same host. And if you have ``host = node2`` in neutron.conf on node2, then you should have the same in the nova.conf. Kind regards, Laszlo On 11/7/18 2:52 PM, Soheil Pourbafrani wrote: > Thanks. In one node installation no error happened but in two node installation, I got the error. So I guess the answer of Laszlo is reasonable. > > On Wed, Nov 7, 2018 at 10:23 AM Budai Laszlo > wrote: > > Hi > > we had a similar situation when the ``host`` entry in the neutron.conf was different than the host entry in the nova.conf on the compute nodes. > So if you're setting a ``host`` entry in one of these files, then make sure the other file contains the same ``host`` setting. > > see https://docs.openstack.org/neutron/rocky/configuration/neutron.html#DEFAULT.host and https://docs.openstack.org/nova/rocky/configuration/config.html#DEFAULT.host > > Kind regards, > Laszlo > > On 11/6/18 6:04 PM, Akihiro Motoki wrote: > > How is your [ovs] bridge_mapping in your configuration? > > Flat network requires a corresponding bridge_mapping entry and you also need to create a corresponding bridge in advance. > > > > > > 2018年11月6日(火) 21:31 Soheil Pourbafrani >>: > > > >     Hi, I initilize an instance using a defined flat network and I got the error: > >     port no present in bridge br-int > > > >     I have a 2 node deployment (controller + network, compute). 
> > > >     The output of the command ovs-vsctl show is > >     * > >     * > >     *On the network node* > >     d3a06f16-d727-4333-9de6-cf4ce3b0ce36 > >          Manager "ptcp:6640:127.0.0.1" > >              is_connected: true > >          Bridge br-ex > >              Controller "tcp:127.0.0.1:6633 " > >                  is_connected: true > >              fail_mode: secure > >              Port br-ex > >                  Interface br-ex > >                      type: internal > >              Port phy-br-ex > >                  Interface phy-br-ex > >                      type: patch > >                      options: {peer=int-br-ex} > >              Port "ens33" > >                  Interface "ens33" > >          Bridge br-int > >              Controller "tcp:127.0.0.1:6633 " > >                  is_connected: true > >              fail_mode: secure > >              Port br-int > >                  Interface br-int > >                      type: internal > >              Port patch-tun > >                  Interface patch-tun > >                      type: patch > >                      options: {peer=patch-int} > >              Port int-br-ex > >                  Interface int-br-ex > >                      type: patch > >                      options: {peer=phy-br-ex} > >              Port "tapefb98047-57" > >                  tag: 1 > >                  Interface "tapefb98047-57" > >                      type: internal > >              Port "qr-d62d0c14-51" > >                  tag: 1 > >                  Interface "qr-d62d0c14-51" > >                      type: internal > >              Port "qg-5468707b-6d" > >                  tag: 2 > >                  Interface "qg-5468707b-6d" > >                      type: internal > >          Bridge br-tun > >              Controller "tcp:127.0.0.1:6633 " > >                  is_connected: true > >              fail_mode: secure > >              Port patch-int > >                  Interface patch-int > >                      type: patch > >                      options: {peer=patch-tun} > >              Port br-tun > >                  Interface br-tun > >                      type: internal > >              Port "vxlan-c0a8003d" > >                  Interface "vxlan-c0a8003d" > >                      type: vxlan > >                      options: {df_default="true", in_key=flow, local_ip="192.168.0.62", out_key=flow, remote_ip="192.168.0.61"} > >          ovs_version: "2.9.0" > > > >     *On the Compute node* > >     * > >     * > >     *55e62867-9c88-4925-b49c-55fb74d174bd* > >     *    Manager "ptcp:6640:127.0.0.1"* > >     *        is_connected: true* > >     *    Bridge br-ex* > >     *        Controller "tcp:127.0.0.1:6633 "* > >     *            is_connected: true* > >     *        fail_mode: secure* > >     *        Port phy-br-ex* > >     *            Interface phy-br-ex* > >     *                type: patch* > >     *                options: {peer=int-br-ex}* > >     *        Port "enp2s0"* > >     *            Interface "enp2s0"* > >     *        Port br-ex* > >     *            Interface br-ex* > >     *                type: internal* > >     *    Bridge br-tun* > >     *        Controller "tcp:127.0.0.1:6633 "* > >     *            is_connected: true* > >     *        fail_mode: secure* > >     *        Port br-tun* > >     *            Interface br-tun* > >     *                type: internal* > >     *        Port "vxlan-c0a8003e"* > >     *            Interface "vxlan-c0a8003e"* > >     *                type: vxlan* > >    
 *                options: {df_default="true", in_key=flow, local_ip="192.168.0.61", out_key=flow, remote_ip="192.168.0.62"}* > >     *        Port patch-int* > >     *            Interface patch-int* > >     *                type: patch* > >     *                options: {peer=patch-tun}* > >     *    Bridge br-int* > >     *        Controller "tcp:127.0.0.1:6633 "* > >     *            is_connected: true* > >     *        fail_mode: secure* > >     *        Port int-br-ex* > >     *            Interface int-br-ex* > >     *                type: patch* > >     *                options: {peer=phy-br-ex}* > >     *        Port br-int* > >     *            Interface br-int* > >     *                type: internal* > >     *        Port patch-tun* > >     *            Interface patch-tun* > >     *                type: patch* > >     *                options: {peer=patch-int}* > >     *    ovs_version: "2.9.0"* > > > >     How can I solve the problem? > > > >     Thanks > >     _______________________________________________ > >     Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > >     Post to     : openstack at lists.openstack.org > > >     Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > _______________________________________________ > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to     : openstack at lists.openstack.org > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to     : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From soheil.ir08 at gmail.com Wed Nov 7 16:01:40 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Wed, 7 Nov 2018 19:31:40 +0330 Subject: [Openstack] [PackStack][Neutron] erro port no present in bridge br-int In-Reply-To: References: Message-ID: Thank you for the clarification, Laszlo. On Wed, Nov 7, 2018 at 5:39 PM Budai Laszlo wrote: > Hi, > > please don't misunderstand me. If you have multiple nodes, then each of > them will have its own ``host`` setting (like, host = node1 for node1 and > host = node2 for node2). The important thing is that both the neutron.conf > and nova.conf to have the same ``host`` setting on one node. So if you have > used the ``host = node1`` in neutron.conf of node1, then do the same in the > nova.conf on the same host. And if you have ``host = node2`` in > neutron.conf on node2, then you should have the same in the nova.conf. > > Kind regards, > Laszlo > > > On 11/7/18 2:52 PM, Soheil Pourbafrani wrote: > > Thanks. In one node installation no error happened but in two node > installation, I got the error. So I guess the answer of Laszlo is > reasonable. > > > > On Wed, Nov 7, 2018 at 10:23 AM Budai Laszlo > wrote: > > > > Hi > > > > we had a similar situation when the ``host`` entry in the > neutron.conf was different than the host entry in the nova.conf on the > compute nodes. > > So if you're setting a ``host`` entry in one of these files, then > make sure the other file contains the same ``host`` setting. 
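For illustration, the matching entries might look like this (a rough sketch only; "node1" is a placeholder hostname and the file paths assume a standard RDO/PackStack layout):

    # /etc/neutron/neutron.conf on node1
    [DEFAULT]
    host = node1

    # /etc/nova/nova.conf on the same node1 -- must carry the identical value
    [DEFAULT]
    host = node1
    # restart the Neutron agents and nova-compute on that node after changing these

If the two values differ, Neutron and Nova identify the same node under different names, which would be consistent with ports failing to show up in br-int as described above.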
> > > > see > https://docs.openstack.org/neutron/rocky/configuration/neutron.html#DEFAULT.host > and > https://docs.openstack.org/nova/rocky/configuration/config.html#DEFAULT.host > > > > Kind regards, > > Laszlo > > > > On 11/6/18 6:04 PM, Akihiro Motoki wrote: > > > How is your [ovs] bridge_mapping in your configuration? > > > Flat network requires a corresponding bridge_mapping entry and > you also need to create a corresponding bridge in advance. > > > > > > > > > 2018年11月6日(火) 21:31 Soheil Pourbafrani soheil.ir08 at gmail.com>>>: > > > > > > Hi, I initilize an instance using a defined flat network and > I got the error: > > > port no present in bridge br-int > > > > > > I have a 2 node deployment (controller + network, compute). > > > > > > The output of the command ovs-vsctl show is > > > * > > > * > > > *On the network node* > > > d3a06f16-d727-4333-9de6-cf4ce3b0ce36 > > > Manager "ptcp:6640:127.0.0.1" > > > is_connected: true > > > Bridge br-ex > > > Controller "tcp:127.0.0.1:6633 < > http://127.0.0.1:6633> " > > > is_connected: true > > > fail_mode: secure > > > Port br-ex > > > Interface br-ex > > > type: internal > > > Port phy-br-ex > > > Interface phy-br-ex > > > type: patch > > > options: {peer=int-br-ex} > > > Port "ens33" > > > Interface "ens33" > > > Bridge br-int > > > Controller "tcp:127.0.0.1:6633 < > http://127.0.0.1:6633> " > > > is_connected: true > > > fail_mode: secure > > > Port br-int > > > Interface br-int > > > type: internal > > > Port patch-tun > > > Interface patch-tun > > > type: patch > > > options: {peer=patch-int} > > > Port int-br-ex > > > Interface int-br-ex > > > type: patch > > > options: {peer=phy-br-ex} > > > Port "tapefb98047-57" > > > tag: 1 > > > Interface "tapefb98047-57" > > > type: internal > > > Port "qr-d62d0c14-51" > > > tag: 1 > > > Interface "qr-d62d0c14-51" > > > type: internal > > > Port "qg-5468707b-6d" > > > tag: 2 > > > Interface "qg-5468707b-6d" > > > type: internal > > > Bridge br-tun > > > Controller "tcp:127.0.0.1:6633 < > http://127.0.0.1:6633> " > > > is_connected: true > > > fail_mode: secure > > > Port patch-int > > > Interface patch-int > > > type: patch > > > options: {peer=patch-tun} > > > Port br-tun > > > Interface br-tun > > > type: internal > > > Port "vxlan-c0a8003d" > > > Interface "vxlan-c0a8003d" > > > type: vxlan > > > options: {df_default="true", in_key=flow, > local_ip="192.168.0.62", out_key=flow, remote_ip="192.168.0.61"} > > > ovs_version: "2.9.0" > > > > > > *On the Compute node* > > > * > > > * > > > *55e62867-9c88-4925-b49c-55fb74d174bd* > > > * Manager "ptcp:6640:127.0.0.1"* > > > * is_connected: true* > > > * Bridge br-ex* > > > * Controller "tcp:127.0.0.1:6633 < > http://127.0.0.1:6633> "* > > > * is_connected: true* > > > * fail_mode: secure* > > > * Port phy-br-ex* > > > * Interface phy-br-ex* > > > * type: patch* > > > * options: {peer=int-br-ex}* > > > * Port "enp2s0"* > > > * Interface "enp2s0"* > > > * Port br-ex* > > > * Interface br-ex* > > > * type: internal* > > > * Bridge br-tun* > > > * Controller "tcp:127.0.0.1:6633 < > http://127.0.0.1:6633> "* > > > * is_connected: true* > > > * fail_mode: secure* > > > * Port br-tun* > > > * Interface br-tun* > > > * type: internal* > > > * Port "vxlan-c0a8003e"* > > > * Interface "vxlan-c0a8003e"* > > > * type: vxlan* > > > * options: {df_default="true", in_key=flow, > local_ip="192.168.0.61", out_key=flow, remote_ip="192.168.0.62"}* > > > * Port patch-int* > > > * Interface patch-int* > > > * type: patch* > > > * options: {peer=patch-tun}* > > > * 
Bridge br-int* > > > * Controller "tcp:127.0.0.1:6633 < > http://127.0.0.1:6633> "* > > > * is_connected: true* > > > * fail_mode: secure* > > > * Port int-br-ex* > > > * Interface int-br-ex* > > > * type: patch* > > > * options: {peer=phy-br-ex}* > > > * Port br-int* > > > * Interface br-int* > > > * type: internal* > > > * Port patch-tun* > > > * Interface patch-tun* > > > * type: patch* > > > * options: {peer=patch-int}* > > > * ovs_version: "2.9.0"* > > > > > > How can I solve the problem? > > > > > > Thanks > > > _______________________________________________ > > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > Post to : openstack at lists.openstack.org openstack at lists.openstack.org> > > > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > > > > _______________________________________________ > > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > Post to : openstack at lists.openstack.org openstack at lists.openstack.org> > > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > > > > _______________________________________________ > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org openstack at lists.openstack.org> > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Nov 8 00:48:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Nov 2018 11:48:20 +1100 Subject: [Openstack] [all] Results of the T release naming poll. open Message-ID: <20181108004819.GF20576@thor.bakeyournoodle.com> Hello all! The results of the naming poll are in! **PLEASE REMEMBER** that these now have to go through legal vetting. So it is too soon to say 'OpenStack Train' is our next release, given that previous polls have had some issues with the top choice. In any case, the names will be sent off to legal for vetting. As soon as we have a final winner, I'll let you all know. https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_aac97f1cbb6c61df&rkey=7c8b5588574494c1 Result 1. Train (Condorcet winner: wins contests with all other choices) 2. Tiger loses to Train by 142–70 3. Timber loses to Train by 142–72, loses to Tiger by 100–76 4. Trail loses to Train by 150–55, loses to Timber by 93–62 5. Telluride loses to Train by 155–56, loses to Trail by 81–69 6. Teller loses to Train by 158–46, loses to Telluride by 70–67 7. Treasure loses to Train by 151–52, loses to Teller by 68–67 8. Teakettle loses to Train by 158–49, loses to Treasure by 75–67 9. Tincup loses to Train by 157–47, loses to Teakettle by 67–60 10. Turret loses to Train by 158–48, loses to Tincup by 75–56 11. Thomas loses to Train by 159–42, loses to Turret by 66–63 12. Trinidad loses to Train by 153–44, loses to Thomas by 70–56 13. Troublesome loses to Train by 165–41, loses to Trinidad by 69–62 14. Thornton loses to Train by 163–35, loses to Troublesome by 62–59 15. Tyrone loses to Train by 163–35, loses to Thornton by 58–38 16. Tarryall loses to Train by 170–31, loses to Tyrone by 54–50 17. Timnath loses to Train by 170–23, loses to Tarryall by 60–32 18. Tiny Town loses to Train by 168–29, loses to Timnath by 45–43 19. Torreys loses to Train by 167–29, loses to Tiny Town by 48–40 20. 
Trussville loses to Train by 169–25, loses to Torreys by 43–34 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From amy at demarco.com Thu Nov 8 01:05:01 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 7 Nov 2018 19:05:01 -0600 Subject: [Openstack] Diversity and Inclusion at OpenStack Summit Message-ID: I just wanted to pass on a few things we have going on during Summit that might be of interest! *Diversity and Inclusion GroupMe* - Is it your first summit and you don't know anyone else? Maybe you just don't want to travel to and from the venue alone? In the tradition of WoO, I have created a GroupMe so people can communicate with each other. If you would like to be added to the group please let me know and I'll get you added! *Night Watch Tour *- On Wednesday night at 10PM, members of the community will be meeting up to go on a private Night Watch Tour[0]! This is a non-alcoholic activity for those wanting to get with other Stackers, but don't want to partake in the Pub Crawl! We've been doing these since Boston and they're a lot of fun. The cost is 15 euros cash and I do need you to RSVP to me as we will need to get a second guide if we grow too large! Summit sessions you may wish to attend: Tuesday - *Speed Mentoring Lunch* [1] 12:30 -1:40 - We are still looking for both Mentors and Mentees for the session so please RSVP! This is another great way to meet people in the community, learn more and give back!!! *Cohort Mentoring BoF* [2] 4:20 - 5:00 - Come talk to the people in charge of the Cohort Mentoring program and see how you can get involved as a Mentor or Mentee! *D&I WG Update* [3] 5:10- 5:50 - Learn what we've been up to, how you can get involved, and what's next. Wednesday - *Git and Gerrit Hands-On Workshop* [4] 3:20 - 4:20 - So you've seen some exciting stuff this week but don't know how to get setup to start contributing? This session is for you in that we'll walk you through getting your logins, your system configured and if time allows even how to submit a bug and patch! Thursday - *Mentoring Program Reboot* [5] 3:20 - 4:00 - Learn about the importance of mentoring, the changes in the OPenStack mentoring programs and how you can get involved. Hope to see everyone in Berlin next week! Please feel free to contact me or grab me in the hall next week with any questions or to join in the fun! Amy Marrich (spotz) Diversity and Inclusion WG Chair [0] - http://baerentouren.de/nachtwache_en.html [1] - https://www.openstack.org/summit/berlin-2018/summit- schedule/events/22873/speed-mentoring-lunch [2] - https://www.openstack.org/summit/berlin-2018/summit- schedule/events/22892/long-term-mentoring-keeping-the-party-going [3] - https://www.openstack.org/summit/berlin-2018/summit- schedule/events/22893/diversity-and-inclusion-wg-update [4] - https://www.openstack.org/summit/berlin-2018/summit- schedule/events/21943/git-and-gerrit-hands-on-workshop [5] - https://www.openstack.org/summit/berlin-2018/summit- schedule/events/22443/mentoring-program-reboot -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Thu Nov 8 07:45:06 2018 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 8 Nov 2018 15:45:06 +0800 Subject: [Openstack] [Fuel] How to set bonding included admin network by using network template? 
Message-ID: Hi everyone, I'm using Fuel community 11 to install Openstack, and I'm using network template to set the network for my own reason. I tried to set bonding with 2 NICs, and bonding mode is 6 (blanace-alb). Then planned all networks going to do tagged VLAN except PXE network, and all networks are going to use bonding interface as traffic. network_scheme: bond_setup: transformations: - action: add-bond bond_properties: mode: balance-alb type__: linux xmit_hash_policy: layer2 lacp_rate: fast interface_properties: {} interfaces: - <% if1 %> - <% if2 %> name: bond0 endpoints: - bond0 roles: fake/use: br-fake fuel_fw_admin: transformations: - action: add-br name: br-fw-admin - action: add-port bridge: br-fw-admin name: bond0 endpoints: - br-fw-admin roles: admin/pxe: br-fw-admin fw-admin: br-fw-admin Now the problem is, I tried to upload this template to my environment, but it says : Networks cannot be assigned as interface with named bond0 does not exist for node. Already searched the internet but no answer. Can someone help me? -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Thu Nov 8 15:06:19 2018 From: amy at demarco.com (Amy Marrich) Date: Thu, 8 Nov 2018 09:06:19 -0600 Subject: [Openstack] Diversity and Inclusion at OpenStack Summit In-Reply-To: References: Message-ID: Forgot one important one on Wednesday, 12:30 - 1:40!!! We are very pleased to have the very first *Diversity Networking Lunch* which os being sponsored by Intel[0]. In the past there was feedback that allies and others didn't wish to intrude on the WoO lunch and Intel was all for this change to a more open Diversity lunch! So please come and join us on Wednesday for lunch!! See you all soon, Amy Marrich (spotz) Diversity and Inclusion WG Chair [0] - https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22850/diversity-networking-lunch-sponsored-by-intel On Wed, Nov 7, 2018 at 9:18 AM, Amy Marrich wrote: > I just wanted to pass on a few things we have going on during Summit that > might be of interest! > > *Diversity and Inclusion GroupMe* - Is it your first summit and you don't > know anyone else? Maybe you just don't want to travel to and from the venue > alone? In the tradition of WoO, I have created a GroupMe so people can > communicate with each other. If you would like to be added to the group > please let me know and I'll get you added! > > *Night Watch Tour *- On Wednesday night at 10PM, members of the community > will be meeting up to go on a private Night Watch Tour[0]! This is a > non-alcoholic activity for those wanting to get with other Stackers, but > don't want to partake in the Pub Crawl! We've been doing these since Boston > and they're a lot of fun. The cost is 15 euros cash and I do need you to > RSVP to me as we will need to get a second guide if we grow too large! > > Summit sessions you may wish to attend: > Tuesday - > *Speed Mentoring Lunch* [1] 12:30 -1:40 - We are still looking for both > Mentors and Mentees for the session so please RSVP! This is another great > way to meet people in the community, learn more and give back!!! > *Cohort Mentoring BoF* [2] 4:20 - 5:00 - Come talk to the people in > charge of the Cohort Mentoring program and see how you can get involved as > a Mentor or Mentee! > *D&I WG Update* [3] 5:10- 5:50 - Learn what we've been up to, how you can > get involved, and what's next. 
> > Wednesday - > *Git and Gerrit Hands-On Workshop* [4] 3:20 - 4:20 - So you've seen some > exciting stuff this week but don't know how to get setup to start > contributing? This session is for you in that we'll walk you through > getting your logins, your system configured and if time allows even how to > submit a bug and patch! > > Thursday - > *Mentoring Program Reboot* [5] 3:20 - 4:00 - Learn about the importance > of mentoring, the changes in the OPenStack mentoring programs and how you > can get involved. > > Hope to see everyone in Berlin next week! Please feel free to contact me > or grab me in the hall next week with any questions or to join in the fun! > > Amy Marrich (spotz) > Diversity and Inclusion WG Chair > > [0] - http://baerentouren.de/nachtwache_en.html > [1] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/22873/speed-mentoring-lunch > [2] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/22892/long-term-mentoring-keeping-the-party-going > [3] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/22893/diversity-and-inclusion-wg-update > [4] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/21943/git-and-gerrit-hands-on-workshop > [5] - https://www.openstack.org/summit/berlin-2018/summit- > schedule/events/22443/mentoring-program-reboot > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shubjero at gmail.com Thu Nov 8 19:57:21 2018 From: shubjero at gmail.com (shubjero) Date: Thu, 8 Nov 2018 14:57:21 -0500 Subject: [Openstack] Trove configuration reference missing on docs.openstack.org Message-ID: Does anyone know why the Trove configuration reference seems to be missing for seemlingly all OpenStack releases since Liberty? https://docs.openstack.org/rocky/configuration/ https://docs.openstack.org/queens/configuration/ https://docs.openstack.org/pike/configuration/ Would like to see this as I am trying to roll out Trove in our lab to eventually bring to production. Cheers, Jared -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Nov 9 18:14:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Nov 2018 18:14:47 +0000 Subject: [Openstack] [all] We're combining the lists! In-Reply-To: <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> Message-ID: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this is being sent) will be replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list is open for subscriptions[0] now, but is not yet accepting posts until Monday November 19 and it's strongly recommended to subscribe before that date so as not to miss any messages posted there. The old lists will be configured to no longer accept posts starting on Monday December 3, but in the interim posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them any time after the 19th and not miss any messages. See my previous notice[1] for details. For those wondering, we have 207 subscribers so far on openstack-discuss with a little over a week to go before it will be put into use (and less than a month before the old lists are closed down for good). 
[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Sat Nov 10 10:02:15 2018 From: thierry at openstack.org (Thierry Carrez) Date: Sat, 10 Nov 2018 11:02:15 +0100 Subject: [Openstack] [openstack-dev] [all] We're combining the lists! In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: Robert Collins wrote: > There don't seem to be any topics defined for -discuss yet, I hope > there will be, as I'm certainly not in a position of enough bandwidth > to handle everything *stack related. > > I'd suggest one for each previously list, at minimum. As we are ultimately planning to move lists to mailman3 (which decided to drop the "topics" concept altogether), I don't think we planned to add serverside mailman topics to the new list. We'll still have standardized subject line topics. The current list lives at: https://etherpad.openstack.org/p/common-openstack-ml-topics -- Thierry Carrez (ttx) From fungi at yuggoth.org Sat Nov 10 15:53:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 10 Nov 2018 15:53:22 +0000 Subject: [Openstack] [openstack-dev] [all] We're combining the lists! In-Reply-To: References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: <20181110155322.nedu4jiammpi537x@yuggoth.org> On 2018-11-10 11:02:15 +0100 (+0100), Thierry Carrez wrote: [...] > As we are ultimately planning to move lists to mailman3 (which decided > to drop the "topics" concept altogether), I don't think we planned to > add serverside mailman topics to the new list. Correct, that was covered in more detail in the longer original announcement linked from my past couple of reminders: http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html In short, we're recommending client-side filtering because server-side topic selection/management was not retained in Mailman 3 as Thierry indicates and we hope we might move our lists to an MM3 instance sometime in the not-too-distant future. > We'll still have standardized subject line topics. The current list > lives at: > > https://etherpad.openstack.org/p/common-openstack-ml-topics Which is its initial location for crowd-sourcing/brainstorming, but will get published to a more durable location like on lists.openstack.org itself or perhaps the Project-Team Guide once the list is in use. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mj at mode.net Mon Nov 12 02:25:10 2018 From: mj at mode.net (Mike Joseph) Date: Sun, 11 Nov 2018 18:25:10 -0800 Subject: [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set Message-ID: Hi folks, It appears that the numa_policy attribute of a PCI alias is ignored for flavors referencing that alias if the flavor also has hw:cpu_policy=dedicated set. 
The alias config is: alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", "product_id": "1004", "numa_policy": "preferred" } And the flavor config is: { "OS-FLV-DISABLED:disabled": false, "OS-FLV-EXT-DATA:ephemeral": 0, "access_project_ids": null, "disk": 10, "id": "221e1bcd-2dde-48e6-bd09-820012198908", "name": "vm-2", "os-flavor-access:is_public": true, "properties": "hw:cpu_policy='dedicated', pci_passthrough:alias='mlx:1'", "ram": 8192, "rxtx_factor": 1.0, "swap": "", "vcpus": 2 } In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 16 VFs configured. We wish to expose these VFs to VMs that schedule on the host. However, the NIC is in NUMA region 0 which means that only half of the compute node's CPU cores would be usable if we required VM affinity to the NIC's NUMA region. But we don't need that, since we are okay with cross-region access to the PCI device. However, we do need CPU pinning to work, in order to have efficient cache hits on our VM processes. Therefore, we still want to pin our vCPUs to pCPUs, even if the pins end up on on a NUMA region opposite of the NIC. The spec for numa_policy seem to indicate that this is exactly the intent of the option: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html But, with the above config, we still get PCI affinity scheduling errors: 'Insufficient compute resources: Requested instance NUMA topology together with requested PCI devices cannot fit the given host NUMA topology.' This strikes me as a bug, but perhaps I am missing something here? Thanks, MJ -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Mon Nov 12 10:07:38 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 12 Nov 2018 19:07:38 +0900 Subject: [Openstack] [glance] task in pending state, image in uploading state Message-ID: <151eb186-6481-4b43-b179-b19f431ad384@gmail.com> Trying Glance's new import process, my images are all stuck in status uploading (both methods glance-direct and web-download). I can see that there are tasks for those images; they are pending. The Glance API log doesn't contain anything that clues me in (debug logging is enabled). The source code is too involved for my feeble Python and OpenStack Internals skills. *How can I find out what blocks the tasks? * This is a stable Rocky Devstack without any customization of the Glance config. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3609 bytes Desc: S/MIME Cryptographic Signature URL: From soheil.ir08 at gmail.com Mon Nov 12 12:34:45 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Mon, 12 Nov 2018 16:04:45 +0330 Subject: [Openstack] [PackStack][Cinder] On which node OpenStack store data of each instance Message-ID: Hi, My question is does OpenStack store volumes somewhere other than the compute node? For example in PackStack on two nodes, one for controller and network and the other for compute node, the instance's volumes will be stored on the controller or on compute? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kaiokmo at lsd.ufcg.edu.br Mon Nov 12 13:14:22 2018 From: kaiokmo at lsd.ufcg.edu.br (Kaio Oliveira) Date: Mon, 12 Nov 2018 10:14:22 -0300 (BRT) Subject: [Openstack] Retire of openstack-ansible-os_monasca-ui Message-ID: <217994954.378236.1542028462038.JavaMail.zimbra@lsd.ufcg.edu.br> Hi everyone, As part of the process of retiring the os_monasca-ui role from the openstack-ansible project, I'm announcing here on the ML that this role will be retired, because there's no reason to maintain it anymore. This has been discussed with the previous and the current OpenStack-Ansible PTL. The monasca-ui will be dealt within os_horizon role on openstack-ansible. Best regards, Kaio -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Nov 12 13:25:50 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 12 Nov 2018 14:25:50 +0100 Subject: [Openstack] Retire of openstack-ansible-os_monasca-ui In-Reply-To: <217994954.378236.1542028462038.JavaMail.zimbra@lsd.ufcg.edu.br> References: <217994954.378236.1542028462038.JavaMail.zimbra@lsd.ufcg.edu.br> Message-ID: +1 On Mon, Nov 12, 2018 at 2:14 PM Kaio Oliveira wrote: > Hi everyone, > > As part of the process of retiring the os_monasca-ui role from the > openstack-ansible project, I'm announcing here on the ML that this role > will be retired, because there's no reason to maintain it anymore. > This has been discussed with the previous and the current > OpenStack-Ansible PTL. > > The monasca-ui will be dealt within os_horizon role on openstack-ansible. > > Best regards, > Kaio > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Mon Nov 12 13:27:52 2018 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 12 Nov 2018 08:27:52 -0500 Subject: [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set In-Reply-To: References: Message-ID: <2F2B5387-A725-44DC-A649-E7F6050164BB@gmail.com> Mike, I had same issue month ago when I roll out sriov in my cloud and this is what I did to solve this issue. Set following in flavor hw:numa_nodes=2 It will spread out instance vcpu across numa, yes there will be little penalty but if you tune your application according they you are good Yes this is bug I have already open ticket and I believe folks are working on it but its not simple fix. They may release new feature in coming oprnstack release. Sent from my iPhone > On Nov 11, 2018, at 9:25 PM, Mike Joseph wrote: > > Hi folks, > > It appears that the numa_policy attribute of a PCI alias is ignored for flavors referencing that alias if the flavor also has hw:cpu_policy=dedicated set. The alias config is: > > alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", "product_id": "1004", "numa_policy": "preferred" } > > And the flavor config is: > > { > "OS-FLV-DISABLED:disabled": false, > "OS-FLV-EXT-DATA:ephemeral": 0, > "access_project_ids": null, > "disk": 10, > "id": "221e1bcd-2dde-48e6-bd09-820012198908", > "name": "vm-2", > "os-flavor-access:is_public": true, > "properties": "hw:cpu_policy='dedicated', pci_passthrough:alias='mlx:1'", > "ram": 8192, > "rxtx_factor": 1.0, > "swap": "", > "vcpus": 2 > } > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 16 VFs configured. 
We wish to expose these VFs to VMs that schedule on the host. However, the NIC is in NUMA region 0 which means that only half of the compute node's CPU cores would be usable if we required VM affinity to the NIC's NUMA region. But we don't need that, since we are okay with cross-region access to the PCI device. > > However, we do need CPU pinning to work, in order to have efficient cache hits on our VM processes. Therefore, we still want to pin our vCPUs to pCPUs, even if the pins end up on on a NUMA region opposite of the NIC. The spec for numa_policy seem to indicate that this is exactly the intent of the option: > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html > > But, with the above config, we still get PCI affinity scheduling errors: > > 'Insufficient compute resources: Requested instance NUMA topology together with requested PCI devices cannot fit the given host NUMA topology.' > > This strikes me as a bug, but perhaps I am missing something here? > > Thanks, > MJ > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Nov 12 14:27:13 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Nov 2018 09:27:13 -0500 Subject: [Openstack] [glance] task in pending state, image in uploading state In-Reply-To: <151eb186-6481-4b43-b179-b19f431ad384@gmail.com> References: <151eb186-6481-4b43-b179-b19f431ad384@gmail.com> Message-ID: <79510205-58ff-c490-7047-038d94773b7d@gmail.com> On 11/12/18 5:07 AM, Bernd Bausch wrote: > Trying Glance's new import process, my images are all stuck in status > uploading (both methods glance-direct and web-download). > > I can see that there are tasks for those images; they are pending. The > Glance API log doesn't contain anything that clues me in (debug logging > is enabled). > > The source code is too involved for my feeble Python and OpenStack > Internals skills. > > *How can I find out what blocks the tasks? * > > This is a stable Rocky Devstack without any customization of the Glance > config. > The tasks engine Glance uses to facilitate the "new" (experimental in Pike, current in Queens) image import process does not work when Glance is deployed as a WSGI application using uWSGI [0]; as you observed, the tasks remain stuck in 'pending'. You can apply this patch [1] to your devstack Glance and restart devstack at g-api and image import should work without additional glance api-changes (the patch applied cleanly last time I checked, which was a Stein-1 milestone devstack; it should apply cleanly to your stable Rocky devstack). You may also want to take a look at the Glance admin guide [2] to see what configuration options are available. 
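For anyone who wants to try this on their own stable/rocky devstack, a rough sequence might look like the following (a sketch, not an official procedure; it assumes git-review is installed and that the devstack checkout lives in ~/devstack):

    # fetch and cherry-pick the devstack change from [1] onto the local branch
    cd ~/devstack
    git review -x 545483
    # restart the Glance API service as deployed by devstack
    sudo systemctl restart devstack@g-api
    # depending on what the change touches, re-running ./stack.sh may be
    # needed before the new deployment settings take effect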
[0] https://docs.openstack.org/releasenotes/glance/queens.html#relnotes-16-0-0-stable-queens-known-issues [1] https://review.openstack.org/#/c/545483/ [2] https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From rosmaita.fossdev at gmail.com Mon Nov 12 14:33:06 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 12 Nov 2018 09:33:06 -0500 Subject: [Openstack] [glance] Rocky image import stuck In-Reply-To: <34751b29-b07e-5bb0-d174-ddfb3fe8549a@gmail.com> References: <34751b29-b07e-5bb0-d174-ddfb3fe8549a@gmail.com> Message-ID: On 11/7/18 8:46 AM, Bernd Bausch wrote: > Does the new image import process work? Am I missing something? Uploaded > images stay in an intermediate state, either /uploading/ or /importing/, > and never become /active.///Where should I look? Apologies for the late reply to your question. For anyone else with a similar question, I replied to your thread on the general list: http://lists.openstack.org/pipermail/openstack/2018-November/047186.html > > On a stable/Rocky Devstack, I do: > > openstack image create --disk-format qcow2 myimg > > Image status is /queueing/, as expected. > > glance image-stage --file devstack/files/Fedora...qcow2 --progress > IMAGE_ID > > Image status is /uploading/. A copy of the image is on > /tmp/staging/IMAGE_ID. > > glance image-import --import-method glance-direct IMAGE_ID > > Sometimes, the status remains /uploading, /sometimes it turns > /importing, /never /active./ > > glance-api log grep'd for the image ID: > > Nov 07 18:51:36 rocky devstack at g-api.service[1033]: INFO > glance.common.scripts.image_import.main [None > req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Task > ec4b36fd-dece-4f41-aa8d-337d01c239f1: Got image data uri > file:///tmp/staging/72a6d7d0-a538-4922-95f2-1649e9702eb2 to be imported > Nov 07 18:51:37 rocky devstack at g-api.service[1033]: DEBUG > glance_store._drivers.swift.store [None > req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Adding image > object '72a6d7d0-a538-4922-95f2-1649e9702eb2' to Swift {{(pid=2250) add > /usr/local/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py:941}} > Nov 07 18:51:45 rocky devstack at g-api.service[1033]: DEBUG swiftclient > [None req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] REQ: curl -i > http://192.168.1.201:8080/v1/AUTH_9495609cff044252965f8c3e5e86f8e0/glance/72a6d7d0-a538-4922-95f2-1649e9702eb2-00001 > -X PUT -H "X-Auth-Token: gAAAAABb4rWowjLQ..." {{(pid=2250) http_log > /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:167}} > Nov 07 18:51:45 rocky devstack at g-api.service[1033]: DEBUG > glance_store._drivers.swift.store [None > req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Wrote chunk > 72a6d7d0-a538-4922-95f2-1649e9702eb2-00001 (1/?) of length 204800000 to > Swift returning MD5 of content: 5139500edbb5814a1351100d162db333 > {{(pid=2250) add > /usr/local/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py:1024}} > > And then nothing. So it does send a 200MB chunk to Swift. I can see it > on Swift, too. But it stops after the first chunk and forgets to send > the rest. > > After I tried that a few times, now it doesn't even upload the first > chunk. Nothing in Swift at all. No error in the Glance API log either. 
> > Same problem with the /image-upload-via-import /command. I also tried > the /web-download /import method; same result. > > In all these cases, the image remains in an non-active state forever, > i.e. an hour or so, when I lose patience and delete it. > > "Classic" upload works (/openstack image create --file ..../). The log > file then shows the expected chunk uploads to Swift. > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From berndbausch at gmail.com Mon Nov 12 15:01:47 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 13 Nov 2018 00:01:47 +0900 Subject: [Openstack] [PackStack][Cinder] On which node OpenStack store data of each instance In-Reply-To: References: Message-ID: <5097f5e8-84c8-c0ed-0ec5-23e5d2f5f414@gmail.com> OpenStack stores volumes wherever you configure it to store them. On a disk array, an NFS server, a Ceph cluster, a dedicated storage node, a controller or even a compute node. And more. My guess: Volumes on controllers or compute nodes are not a good solution for production systems. By default, Packstack implements Cinder volumes as LVM volumes on the controller. It's probably possible to put the LVM volumes on other nodes, and it is definitely possible to configure a different backend than LVM, for example Netapp, in which case the volumes would be on a Netapp appliance. On 11/12/2018 9:34 PM, Soheil Pourbafrani wrote: > My question is does OpenStack store volumes somewhere other than > the compute node? > For example in PackStack on two nodes, one for controller and network > and the other for compute node, the instance's volumes will be stored > on the controller or on compute? -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3609 bytes Desc: S/MIME Cryptographic Signature URL: From berndbausch at gmail.com Tue Nov 13 00:29:47 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 13 Nov 2018 09:29:47 +0900 Subject: [Openstack] [glance] task in pending state, image in uploading state In-Reply-To: <79510205-58ff-c490-7047-038d94773b7d@gmail.com> References: <151eb186-6481-4b43-b179-b19f431ad384@gmail.com> <79510205-58ff-c490-7047-038d94773b7d@gmail.com> Message-ID: <625a30c7-2de0-3113-2239-882ad07fe7cd@gmail.com> Thanks Brian. It's great to get an email from Mr. Glance. I managed to patch Devstack, and a first test was successful. Perfect! A bit late, I then found numerous warnings in release notes and other documents that UWSGI should not be used when deploying Glance. My earlier web searches flew by these documents without noticing them. Bernd On 11/12/2018 11:27 PM, Brian Rosmaita wrote: > On 11/12/18 5:07 AM, Bernd Bausch wrote: >> Trying Glance's new import process, my images are all stuck in status >> uploading (both methods glance-direct and web-download). >> >> I can see that there are tasks for those images; they are pending. The >> Glance API log doesn't contain anything that clues me in (debug logging >> is enabled). >> >> The source code is too involved for my feeble Python and OpenStack >> Internals skills. >> >> *How can I find out what blocks the tasks? * >> >> This is a stable Rocky Devstack without any customization of the Glance >> config. 
>> > The tasks engine Glance uses to facilitate the "new" (experimental in > Pike, current in Queens) image import process does not work when Glance > is deployed as a WSGI application using uWSGI [0]; as you observed, the > tasks remain stuck in 'pending'. You can apply this patch [1] to your > devstack Glance and restart devstack at g-api and image import should work > without additional glance api-changes (the patch applied cleanly last > time I checked, which was a Stein-1 milestone devstack; it should apply > cleanly to your stable Rocky devstack). You may also want to take a > look at the Glance admin guide [2] to see what configuration options are > available. > > [0] > https://docs.openstack.org/releasenotes/glance/queens.html#relnotes-16-0-0-stable-queens-known-issues > [1] https://review.openstack.org/#/c/545483/ > [2] > https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html > >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3609 bytes Desc: S/MIME Cryptographic Signature URL: From Cheung at ezfly.com Tue Nov 13 06:00:47 2018 From: Cheung at ezfly.com (=?utf-8?B?Q2hldW5nIOaliuemrumKkw==?=) Date: Tue, 13 Nov 2018 06:00:47 +0000 Subject: [Openstack] can't input pipeline symbol in linux instance on web vnc console Message-ID: <1542088846.32593.3.camel@ezfly.com> Host OS: Ubuntu 16.4 openstack version: queens When I type the "|" symbol in linux instance on web vnc console, "|" becomes ">". But I connet to linux instance with putty, I can type "|" without any problem. I do not know why this happend. This issue only occurs on Linux instances. -- 本電子郵件及其所有附件所含之資訊均屬機密,僅供指定之收件人使用,未經寄件人同意不得揭露、複製或散布本電子郵件。若您並非指定之收件人,請勿使用、保存或揭露本電子郵件之任何部分,並請立即通知寄件人並完全刪除本電子郵件。網路通訊可能含有病毒,收件人應自行確認本郵件是否安全,若因此造成損害,寄件人恕不負責。 The information contained in this communication and attachment is confidential and is intended only for the use of the recipient to which this communication is addressed. Any disclosure, copying or distribution of this communication without the sender's consents is strictly prohibited. If you are not the intended recipient, please notify the sender and delete this communication entirely without using, retaining, or disclosing any of its contents. Internet communications cannot be guaranteed to be virus-free. The recipient is responsible for ensuring that this communication is virus free and the sender accepts no liability for any damages caused by virus transmitted by this communication. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doka.ua at gmx.com Tue Nov 13 10:05:44 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Tue, 13 Nov 2018 12:05:44 +0200 Subject: [Openstack] Openstack maintenance updates - how to? Message-ID: <51257f93-1dbe-f23b-7f49-d9908a19cfa4@gmx.com> Hi colleagues, we're using Openstack from Ubuntu repositories. Everything is perfect except cases when I manually apply patches before supplier (e.g. 
Canonical) will issue updated versions. The problem is that it happens not immediately and not with the next update, thus all patches I applied manually, will be overwrited back during next update. How you, firends, deal with this? Is "manual" (as described above) way is the most safe and reliable? Or may be there is "stable" branch of Openstack components which can be used as maintenance? Or whether "master" branch is good and safe source for updating Openstack components is such way? Any thoughts on this? Thanks! -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison From berndbausch at gmail.com Tue Nov 13 10:22:06 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 13 Nov 2018 19:22:06 +0900 Subject: [Openstack] [PackStack][Cinder] On which node OpenStack store data of each instance In-Reply-To: References: <5097f5e8-84c8-c0ed-0ec5-23e5d2f5f414@gmail.com> Message-ID: <80fd55d4-e59d-7721-c050-35c820abacf7@gmail.com> Soheil, I took the liberty to add the openstack distribution list back in. Your description is a bit vague. Do you have dedicated nodes for storage, or do you run instances on the same nodes where storage is configured? Do you want run use volumes for instance storage, or ephemeral disks? Volumes are normally located on remote servers or disk arrays, so that the answer is yes in this case. You can even pool storage of several nodes together using DRBD or (up to Newton) GlusterFS, but I have no experience in this area and can't tell you what would work and what would not. To configure volume backends, see https://docs.openstack.org/cinder/rocky/configuration/block-storage/volume-drivers.html. Ephemeral storage is normally local storage on the compute node where the instance runs. You can also use NFS-mounted remote filesystem for ephemeral storage. Bernd. On 11/13/2018 5:37 PM, Soheil Pourbafrani wrote: > Thanks all, > > Suppose we use HDD disks of local machines and there are no shared > storages like SAN storage. So in such an environment is it possible to > use remote disks on other machines for compute nodes? (I think it's > impossible with HDD local disks and for such a scenario we should have > SAN storage). > > So the question is is it possible to have volumes in local disk of > compute nodes? or we should let OpenStack go! > > On Mon, Nov 12, 2018 at 6:31 PM Bernd Bausch > wrote: > > OpenStack stores volumes wherever you configure it to store them. > On a > disk array, an NFS server, a Ceph cluster, a dedicated storage > node, a > controller or even a compute node. And more. > > My guess: Volumes on controllers or compute nodes are not a good > solution for production systems. > > By default, Packstack implements Cinder volumes as LVM volumes on the > controller. It's probably possible to put the LVM volumes on other > nodes, and it is definitely possible to configure a different backend > than LVM, for example Netapp, in which case the volumes would be on a > Netapp appliance. > > On 11/12/2018 9:34 PM, Soheil Pourbafrani wrote: > > My question is does OpenStack store volumes somewhere other than > > the compute node? > > For example in PackStack on two nodes, one for controller and > network > > and the other for compute node, the instance's volumes will be > stored > > on the controller or on compute? > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3609 bytes Desc: S/MIME Cryptographic Signature URL: From satish.txt at gmail.com Tue Nov 13 12:52:40 2018 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 13 Nov 2018 07:52:40 -0500 Subject: [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set In-Reply-To: <2F2B5387-A725-44DC-A649-E7F6050164BB@gmail.com> References: <2F2B5387-A725-44DC-A649-E7F6050164BB@gmail.com> Message-ID: <8BB43193-8F89-46E6-B075-5E7432D4F61C@gmail.com> Mike, Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920 Cc'ing: Sean Sent from my iPhone > On Nov 12, 2018, at 8:27 AM, Satish Patel wrote: > > Mike, > > I had same issue month ago when I roll out sriov in my cloud and this is what I did to solve this issue. Set following in flavor > > hw:numa_nodes=2 > > It will spread out instance vcpu across numa, yes there will be little penalty but if you tune your application according they you are good > > Yes this is bug I have already open ticket and I believe folks are working on it but its not simple fix. They may release new feature in coming oprnstack release. > > Sent from my iPhone > >> On Nov 11, 2018, at 9:25 PM, Mike Joseph wrote: >> >> Hi folks, >> >> It appears that the numa_policy attribute of a PCI alias is ignored for flavors referencing that alias if the flavor also has hw:cpu_policy=dedicated set. The alias config is: >> >> alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", "product_id": "1004", "numa_policy": "preferred" } >> >> And the flavor config is: >> >> { >> "OS-FLV-DISABLED:disabled": false, >> "OS-FLV-EXT-DATA:ephemeral": 0, >> "access_project_ids": null, >> "disk": 10, >> "id": "221e1bcd-2dde-48e6-bd09-820012198908", >> "name": "vm-2", >> "os-flavor-access:is_public": true, >> "properties": "hw:cpu_policy='dedicated', pci_passthrough:alias='mlx:1'", >> "ram": 8192, >> "rxtx_factor": 1.0, >> "swap": "", >> "vcpus": 2 >> } >> >> In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 16 VFs configured. We wish to expose these VFs to VMs that schedule on the host. However, the NIC is in NUMA region 0 which means that only half of the compute node's CPU cores would be usable if we required VM affinity to the NIC's NUMA region. But we don't need that, since we are okay with cross-region access to the PCI device. >> >> However, we do need CPU pinning to work, in order to have efficient cache hits on our VM processes. Therefore, we still want to pin our vCPUs to pCPUs, even if the pins end up on on a NUMA region opposite of the NIC. The spec for numa_policy seem to indicate that this is exactly the intent of the option: >> >> https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html >> >> But, with the above config, we still get PCI affinity scheduling errors: >> >> 'Insufficient compute resources: Requested instance NUMA topology together with requested PCI devices cannot fit the given host NUMA topology.' >> >> This strikes me as a bug, but perhaps I am missing something here? >> >> Thanks, >> MJ >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From soheil.ir08 at gmail.com Tue Nov 13 13:20:40 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Tue, 13 Nov 2018 16:50:40 +0330 Subject: [Openstack] [PackStack][cinder, nova] How to create ephemeral instance from volume Message-ID: Hi, I have some volumes with a snapshot of each. I was wondering is it possible to create a new instance with the only ephemeral disk (not root disk) from them. Actually, I didn't want to create volume for new instances. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue Nov 13 13:30:26 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 13 Nov 2018 08:30:26 -0500 Subject: [Openstack] [glance] task in pending state, image in uploading state In-Reply-To: <625a30c7-2de0-3113-2239-882ad07fe7cd@gmail.com> References: <151eb186-6481-4b43-b179-b19f431ad384@gmail.com> <79510205-58ff-c490-7047-038d94773b7d@gmail.com> <625a30c7-2de0-3113-2239-882ad07fe7cd@gmail.com> Message-ID: On 11/12/18 7:29 PM, Bernd Bausch wrote: > Thanks Brian. It's great to get an email from Mr. Glance. > > I managed to patch Devstack, and a first test was successful. Perfect! Glad it worked! > A bit late, I then found numerous warnings in release notes and other > documents that UWSGI should not be used when deploying Glance. My > earlier web searches flew by these documents without noticing them. We haven't made it easy for you in devstack, though. As you can see from the patch, it requires coordination across a few different teams to make the appropriate changes to get all tests passing and the patch merged. When everyone's back from the summit, I'll see if I can get a coordinated push across teams to get this done for Stein milestone 2. This won't solve the larger problem of Glance not running in uWSGI, though. I'd refer people interested in having that happen to my statement about this issue in the Queens release notes [0]; the situation described there still stands. [0] https://docs.openstack.org/releasenotes/glance/queens.html#relnotes-16-0-0-stable-queens-known-issues > Bernd > > On 11/12/2018 11:27 PM, Brian Rosmaita wrote: >> On 11/12/18 5:07 AM, Bernd Bausch wrote: >>> Trying Glance's new import process, my images are all stuck in status >>> uploading (both methods glance-direct and web-download). >>> >>> I can see that there are tasks for those images; they are pending. The >>> Glance API log doesn't contain anything that clues me in (debug logging >>> is enabled). >>> >>> The source code is too involved for my feeble Python and OpenStack >>> Internals skills. >>> >>> *How can I find out what blocks the tasks? * >>> >>> This is a stable Rocky Devstack without any customization of the Glance >>> config. >>> >> The tasks engine Glance uses to facilitate the "new" (experimental in >> Pike, current in Queens) image import process does not work when Glance >> is deployed as a WSGI application using uWSGI [0]; as you observed, the >> tasks remain stuck in 'pending'.  You can apply this patch [1] to your >> devstack Glance and restart devstack at g-api and image import should work >> without additional glance api-changes (the patch applied cleanly last >> time I checked, which was a Stein-1 milestone devstack; it should apply >> cleanly to your stable Rocky devstack).  You may also want to take a >> look at the Glance admin guide [2] to see what configuration options are >> available. 
>> >> [0] >> https://docs.openstack.org/releasenotes/glance/queens.html#relnotes-16-0-0-stable-queens-known-issues >> >> [1] https://review.openstack.org/#/c/545483/ >> [2] >> https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html >> >> >>> _______________________________________________ >>> Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> Post to     : openstack at lists.openstack.org >>> Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to     : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > From soheil.ir08 at gmail.com Tue Nov 13 13:48:36 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Tue, 13 Nov 2018 17:18:36 +0330 Subject: [Openstack] How to detach Image from instance Message-ID: Hi, I lunch an instance using CentOS ISO image and install it on ephemeral disk (no volume created). So after finishing the installation, I rebooted the instance, *but it boot the CentOS image, again. *While on the path /var/lib/nova on the compute node I can see a disk for the instance is created. So I guess removing the image from instance will force it to boot the installed OS. Is there any way to do that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Nov 13 16:46:20 2018 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 13 Nov 2018 11:46:20 -0500 Subject: [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set In-Reply-To: <9b221d32e322a7fe4e97b29d1ef3369240d51a67.camel@redhat.com> References: <2F2B5387-A725-44DC-A649-E7F6050164BB@gmail.com> <8BB43193-8F89-46E6-B075-5E7432D4F61C@gmail.com> <9b221d32e322a7fe4e97b29d1ef3369240d51a67.camel@redhat.com> Message-ID: Sean, Thank you for the detailed explanation, i really hope if we can backport to queens, it would be harder for me to upgrade cluster..! On Tue, Nov 13, 2018 at 8:42 AM Sean Mooney wrote: > > On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote: > > Mike, > > > > Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920 > actully this is a releated but different bug based in the description below. > thanks for highlighting this to me. > > > > Cc'ing: Sean > > > > Sent from my iPhone > > > > On Nov 12, 2018, at 8:27 AM, Satish Patel wrote: > > > > > Mike, > > > > > > I had same issue month ago when I roll out sriov in my cloud and this is what I did to solve this issue. Set > > > following in flavor > > > > > > hw:numa_nodes=2 > > > > > > It will spread out instance vcpu across numa, yes there will be little penalty but if you tune your application > > > according they you are good > > > > > > Yes this is bug I have already open ticket and I believe folks are working on it but its not simple fix. They may > > > release new feature in coming oprnstack release. 
> > > > > > Sent from my iPhone > > > > > > On Nov 11, 2018, at 9:25 PM, Mike Joseph wrote: > > > > > > > Hi folks, > > > > > > > > It appears that the numa_policy attribute of a PCI alias is ignored for flavors referencing that alias if the > > > > flavor also has hw:cpu_policy=dedicated set. The alias config is: > > > > > > > > alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", "product_id": "1004", "numa_policy": > > > > "preferred" } > > > > > > > > And the flavor config is: > > > > > > > > { > > > > "OS-FLV-DISABLED:disabled": false, > > > > "OS-FLV-EXT-DATA:ephemeral": 0, > > > > "access_project_ids": null, > > > > "disk": 10, > > > > "id": "221e1bcd-2dde-48e6-bd09-820012198908", > > > > "name": "vm-2", > > > > "os-flavor-access:is_public": true, > > > > "properties": "hw:cpu_policy='dedicated', pci_passthrough:alias='mlx:1'", > > > > "ram": 8192, > > > > "rxtx_factor": 1.0, > > > > "swap": "", > > > > "vcpus": 2 > > > > } > Satish in your case you were trying to use neutrons sriov vnic types such that the VF would be connected to a neutron > network. In this case the mellanox connectx 3 virtual funcitons are being passed to the guest using the pci alias via > the flavor which means they cannot be used to connect to neutron networks but they should be able to use affinity > poileices. > > > > > > > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 16 VFs configured. We wish to expose > > > > these VFs to VMs that schedule on the host. However, the NIC is in NUMA region 0 which means that only half of > > > > the compute node's CPU cores would be usable if we required VM affinity to the NIC's NUMA region. But we don't > > > > need that, since we are okay with cross-region access to the PCI device. > > > > > > > > However, we do need CPU pinning to work, in order to have efficient cache hits on our VM processes. Therefore, we > > > > still want to pin our vCPUs to pCPUs, even if the pins end up on on a NUMA region opposite of the NIC. The spec > > > > for numa_policy seem to indicate that this is exactly the intent of the option: > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html > > > > > > > > But, with the above config, we still get PCI affinity scheduling errors: > > > > > > > > 'Insufficient compute resources: Requested instance NUMA topology together with requested PCI devices cannot fit > > > > the given host NUMA topology.' > > > > > > > > This strikes me as a bug, but perhaps I am missing something here? > yes this does infact seam like a new bug. > can you add myself and stephen to the bug once you file it. > in the bug please include the version of opentack you were deploying. > > in the interim setting hw:numa_nodes=2 will allow you to pin the guest without the error > however the flavor and alias you have provided should have been enough. 
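For reference, a minimal sketch of the interim workaround Sean and Satish describe above, applied to the flavor quoted earlier in the thread (the flavor name vm-2 and its existing properties are taken from Mike's message; treat the exact values as assumptions and adapt them to your own flavor before running anything):

  # keep the existing pinning and passthrough properties, add the NUMA spread
  openstack flavor set vm-2 \
    --property hw:cpu_policy=dedicated \
    --property pci_passthrough:alias=mlx:1 \
    --property hw:numa_nodes=2

Instances booted from the flavor afterwards should be able to keep CPU pinning while the VF is allowed to live on the other NUMA node, at the cost of the guest topology being split across two nodes.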
> > I'm hoping that we can fix both the alias and neutron based cases this cycle but to do so we > will need to re-propose the original Queens spec for Stein and discuss if we can backport any of the > fixes or if this would only be completed in Stein+. I would hope we could backport fixes for the flavor > based use case but the neutron based use case would likely be Stein+. > > regards > sean > > > > > > > > Thanks, > > > > MJ > > > > _______________________________________________ > > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > Post to : openstack at lists.openstack.org > > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From tony at bakeyournoodle.com Wed Nov 14 04:42:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 14 Nov 2018 05:42:35 +0100 Subject: [Openstack] [all] All Hail our Newest Release Name - OpenStack Train Message-ID: <20181114044233.GA10706@thor.bakeyournoodle.com> Hi everybody! As the subject reads, the "T" release of OpenStack is officially "Train". Unlike recent choices, Train was the popular choice, so congrats! Thanks to everybody who participated and helped with the naming process. Let's make OpenStack Train the release so awesome that people can't help but choo-choo-choose to run it[1]! Yours Tony. [1] Too soon? Too much? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From ttml at fastmail.com Wed Nov 14 07:11:29 2018 From: ttml at fastmail.com (Tushar Tyagi) Date: Wed, 14 Nov 2018 12:41:29 +0530 Subject: [Openstack] [openstack][nova|cinder] niscsiadm: Could not log into all portals || VolumeNotFound exception Message-ID: <1542179489.3301968.1576321912.148D2848@webmail.messagingengine.com> Hi, I have created a development setup with Devstack on 2 machines, where one is a controller + compute node (IP: 172.23.29.96) and the other is a storage node (IP: 172.23.29.118). I am running all the services on the controller, except the c-vol service, which runs on the storage node. Whenever I try to create a new instance, the volume gets created but the instance is stuck in "Spawning" status and after some time errors out. This volume is then not attached to the VM. These volumes are LVM-based, iSCSI-backed volumes. During this time, I can see the following 2 errors of interest in the stack traces/logs. It would be really helpful if anyone here can take a look and point me in the right direction. I've also attached the stack traces in case the formatting gets messed up. ====== START OF STACK TRACE 1 ========= DEBUG oslo.privsep.daemon [-] privsep: Exception during request[140390878014992]: Unexpected error while running command. 
Command: iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e -p 172.23.29.118:3260 --login Exit code: 8 Stdout: u'Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e, portal: 172.23.29.118,3260] (multiple)\n' Stderr: u'iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e, portal: 172.23.29.118,3260].\nis\ csiadm: initiator reported error (8 - connection timed out)\niscsiadm: Could not log into all portals\n' {{(pid=14097) loop /usr/lib/python2.7/site-packages/oslo_privs\ ep/daemon.py:449}} ERROR oslo.privsep.daemon Traceback (most recent call last): ERROR oslo.privsep.daemon File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 445, in loop ERROR oslo.privsep.daemon reply = self._process_cmd(*msg) ERROR oslo.privsep.daemon File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 428, in _process_cmd ERROR oslo.privsep.daemon ret = func(*f_args, **f_kwargs) ERROR oslo.privsep.daemon File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 209, in _wrap ERROR oslo.privsep.daemon return func(*args, **kwargs) ERROR oslo.privsep.daemon File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 194, in execute_root ERROR oslo.privsep.daemon return custom_execute(*cmd, shell=False, run_as_root=False, **kwargs) ERROR oslo.privsep.daemon File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 143, in custom_execute ERROR oslo.privsep.daemon on_completion=on_completion, *cmd, **kwargs) ERROR oslo.privsep.daemon File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 424, in execute ERROR oslo.privsep.daemon cmd=sanitized_cmd) ERROR oslo.privsep.daemon ProcessExecutionError: Unexpected error while running command. ERROR oslo.privsep.daemon Command: iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e -p 172.23.29.118:3260 --login ERROR oslo.privsep.daemon Exit code: 8 ERROR oslo.privsep.daemon Stdout: u'Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e, portal: 172.23.29.118\ ,3260] (multiple)\n' ERROR oslo.privsep.daemon Stderr: u'iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e, portal\ : 172.23.29.118,3260].\niscsiadm: initiator reported error (8 - connection timed out)\niscsiadm: Could not log into all portals\n' ERROR oslo.privsep.daemon ====== END OF STACK TRACE 1 =========== ====== START OF STACK TRACE 2 =========== [instance: d2a44cc2-c367-4d6e-b572-0d174e44d817] Instance failed to spawn: VolumeDeviceNotFound: Volume device not found at . 
Error: Traceback (most recent call last): Error: File "/opt/stack/nova/nova/compute/manager.py", line 2357, in _build_resources Error: yield resources Error: File "/opt/stack/nova/nova/compute/manager.py", line 2121, in _build_and_run_instance Error: block_device_info=block_device_info) Error: File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3075, in spawn Error: mdevs=mdevs) Error: File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5430, in _get_guest_xml Error: context, mdevs) Error: File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5216, in _get_guest_config Error: flavor, guest.os_type) Error: File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4030, in _get_guest_storage_config Error: self._connect_volume(context, connection_info, instance) Error: File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1222, in _connect_volume Error: vol_driver.connect_volume(connection_info, instance) Error: File "/opt/stack/nova/nova/virt/libvirt/volume/iscsi.py", line 64, in connect_volume Error: device_info = self.connector.connect_volume(connection_info['data']) Error: File "/usr/lib/python2.7/site-packages/os_brick/utils.py", line 150, in trace_logging_wrapper Error: result = f(*args, **kwargs) Error: File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner Error: return f(*args, **kwargs) Error: File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 516, in connect_volume Error: self._cleanup_connection(connection_properties, force=True) Error: File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ Error: self.force_reraise() Error: File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise Error: six.reraise(self.type_, self.value, self.tb) Error: File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 510, in connect_volume Error: return self._connect_single_volume(connection_properties) Error: File "/usr/lib/python2.7/site-packages/os_brick/utils.py", line 61, in _wrapper Error: return r.call(f, *args, **kwargs) Error: File "/usr/lib/python2.7/site-packages/retrying.py", line 212, in call Error: raise attempt.get() Error: File "/usr/lib/python2.7/site-packages/retrying.py", line 247, in get Error: six.reraise(self.value[0], self.value[1], self.value[2]) Error: File "/usr/lib/python2.7/site-packages/retrying.py", line 200, in call Error: attempt = Attempt(fn(*args, **kwargs), attempt_number, False) Error: File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 585, in _connect_single_volume Error: raise exception.VolumeDeviceNotFound(device='') Error: VolumeDeviceNotFound: Volume device not found at . ====== END OF STACK TRACE 2 =========== -- Tushar Tyagi ttml at fastmail.com -------------- next part -------------- A non-text attachment was scrubbed... Name: stack_trace_1.log Type: application/octet-stream Size: 2890 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: stack_trace_2.log Type: application/octet-stream Size: 3168 bytes Desc: not available URL: From skaplons at redhat.com Wed Nov 14 07:37:09 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 14 Nov 2018 08:37:09 +0100 Subject: [Openstack] [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train In-Reply-To: References: <20181114044233.GA10706@thor.bakeyournoodle.com> Message-ID: <7064638E-4B0A-4572-A194-4FF6327DC6B1@redhat.com> Hi, I think it was published, see http://lists.openstack.org/pipermail/openstack/2018-November/047172.html > Wiadomość napisana przez Jeremy Freudberg w dniu 14.11.2018, o godz. 06:12: > > Hey Tony, > > What's the reason for the results of the poll not being public? > > Thanks, > Jeremy > On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds wrote: >> >> >> Hi everybody! >> >> As the subject reads, the "T" release of OpenStack is officially >> "Train". Unlike recent choices Train was the popular choice so >> congrats! >> >> Thanks to everybody who participated and help with the naming process. >> >> Lets make OpenStack Train the release so awesome that people can't help >> but choo-choo-choose to run it[1]! >> >> >> Yours Tony. >> [1] Too soon? Too much? >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From berndbausch at gmail.com Wed Nov 14 07:43:14 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Wed, 14 Nov 2018 16:43:14 +0900 Subject: [Openstack] [openstack][nova|cinder] niscsiadm: Could not log into all portals || VolumeNotFound exception In-Reply-To: <1542179489.3301968.1576321912.148D2848@webmail.messagingengine.com> References: <1542179489.3301968.1576321912.148D2848@webmail.messagingengine.com> Message-ID: <12cc3c46-a830-41eb-a0a4-cd42cfd634b6@gmail.com> You launch a volume-backed instance. The volume can't be attached, so the instance can't be launched. The volume can't be attached because iSCSI authentication fails. Either it's not set up correctly in cinder.conf on the controller, or you hit a bug. When you google for /iscsi authentication /and /Cinder /or /Nova/, you get plenty of hits, such as https://ask.openstack.org/en/question/92482/unable-to-attach-cinder-volume-iscsi-require-authentication/. This is the command that fails on the controller. The IP address or target might be incorrect, or the credentials, which are set by earlier iscsiadm commands. > iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e -p 172.23.29.118:3260 --login Bernd -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3609 bytes Desc: S/MIME Cryptographic Signature URL: From manuel.sb at garvan.org.au Thu Nov 15 07:52:45 2018 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Thu, 15 Nov 2018 07:52:45 +0000 Subject: [Openstack] neutron python client stopped working Message-ID: <9D8A2486E35F0941A60430473E29F15B017BAE648F@MXDB2.ad.garvan.unsw.edu.au> Hi, This looks like a stupid thing but I can't make it to work. For some reason I can't run any openstack cli command related to neutron, however using neutron cli seems to work so I am guessing this is just affecting the the python client. I tried pip uninstall and install it again with no success. I am using Rocky deployed through kolla-ansible [root at openstack-deployment ~]# pip show python-neutronclient Name: python-neutronclient Version: 6.10.0 Summary: CLI and Client Library for OpenStack Networking Home-page: https://docs.openstack.org/python-neutronclient/latest/ Author: OpenStack Networking Project Author-email: openstack-dev at lists.openstack.org License: UNKNOWN Location: /usr/lib/python2.7/site-packages Requires: osc-lib, Babel, oslo.i18n, oslo.utils, oslo.serialization, requests, netaddr, iso8601, python-keystoneclient, cliff, keystoneauth1, oslo.log, os-client-config, six, pbr, debtcollector, simplejson Required-by: [root at openstack-deployment ~]# openstack server list +--------------------------------------+----------------+--------+---------------------------------------+-------------+-----------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+----------------+--------+---------------------------------------+-------------+-----------+ | 6e079312-b067-4cf0-870a-c6e452b57d5c | test_gpu_small | ACTIVE | private=192.168.1.108, 129.94.120.182 | ubuntu18.04 | gpu.small | | bd977544-a088-4365-8bed-128daca1e820 | test_gpu_large | ACTIVE | private=192.168.1.117, 129.94.120.186 | ubuntu18.04 | gpu.large | | db439705-5b57-4a05-9bc3-74ff0f88bf87 | test | ACTIVE | private=192.168.1.120, 129.94.120.181 | cirros | m1.tiny | +--------------------------------------+----------------+--------+---------------------------------------+-------------+-----------+ [root at openstack-deployment ~]# openstack network list __init__() got an unexpected keyword argument 'retriable_status_codes' [root at openstack-deployment ~]# openstack subnet list __init__() got an unexpected keyword argument 'retriable_status_codes' [root at openstack-deployment ~]# openstack port list __init__() got an unexpected keyword argument 'retriable_status_codes' [root at openstack-deployment ~]# neutron port-list neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. 
+--------------------------------------+------+----------------------------------+-------------------+---------------------------------------------------------------------------------------+ | id | name | tenant_id | mac_address | fixed_ips | +--------------------------------------+------+----------------------------------+-------------------+---------------------------------------------------------------------------------------+ | 305c8e83-08ca-4df0-b6e4-984cc31f8f0b | | | fa:16:3e:c8:7f:34 | {"subnet_id": "7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5", "ip_address": "129.94.120.186"} | | 569ec2d3-c1f7-41c8-a7cb-a20d4d721634 | | abc29399b91d423088549d7446766573 | fa:16:3e:e2:9a:98 | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.108"} | | 70b16d5b-648b-40fc-9b44-4cb1d13e80e4 | | | fa:16:3e:15:8c:70 | {"subnet_id": "7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5", "ip_address": "129.94.120.182"} | | 7cf3f895-b6f8-4b08-9c92-1909b956425a | | abc29399b91d423088549d7446766573 | fa:16:3e:75:29:a4 | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.117"} | | b4f86f58-66f9-4961-851b-3627cccfbfbe | | | fa:16:3e:a7:bd:6c | {"subnet_id": "7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5", "ip_address": "129.94.120.181"} | | d5d0e2d0-42c3-4438-862a-d5908932daba | | abc29399b91d423088549d7446766573 | fa:16:3e:55:ef:95 | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.100"} | | dc22e1ba-1776-467c-a511-8c68ee16520f | | | fa:16:3e:76:13:ee | {"subnet_id": "7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5", "ip_address": "129.94.120.184"} | | e18bcf54-302c-4424-a0a1-2c558c7ef5eb | | abc29399b91d423088549d7446766573 | fa:16:3e:82:e8:ac | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.120"} | | f225fa78-9683-4d89-bbc8-67ff80513a56 | | abc29399b91d423088549d7446766573 | fa:16:3e:dd:52:d8 | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.1"} | +--------------------------------------+------+----------------------------------+-------------------+---------------------------------------------------------------------------------------+ [root at openstack-deployment ~]# neutron subnet-list neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. +--------------------------------------+---------------+----------------------------------+-------------------+------------------------------------------------------+ | id | name | tenant_id | cidr | allocation_pools | +--------------------------------------+---------------+----------------------------------+-------------------+------------------------------------------------------+ | 13fd3ed8-988e-407b-955b-e6f749b8f8a3 | privatesubnet | abc29399b91d423088549d7446766573 | 192.168.1.0/24 | {"start": "192.168.1.100", "end": "192.168.1.199"} | | 7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5 | publicsubnet | abc29399b91d423088549d7446766573 | 129.94.120.128/26 | {"start": "129.94.120.181", "end": "129.94.120.190"} | +--------------------------------------+---------------+----------------------------------+-------------------+------------------------------------------------------+ How can I make openstack client to work again with neutron commands? 
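A quick way to gather the version information that turns out to matter in the replies further down this thread (the openstack network/subnet/port commands go through openstacksdk rather than python-neutronclient, so a mismatched SDK can break them while the neutron CLI keeps working); this is only a sketch and assumes the same pip-based install shown above:

  pip show python-openstackclient openstacksdk python-neutronclient | egrep '^(Name|Version):'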
Thank you very much Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Nov 15 08:17:03 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 15 Nov 2018 09:17:03 +0100 Subject: [Openstack] neutron python client stopped working In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017BAE648F@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017BAE648F@MXDB2.ad.garvan.unsw.edu.au> Message-ID: Hi, Please try newest version from master branch. All works fine for me with it: [09:15:53] vagrant at devstack-ubuntu-ovs ~ $ neutron net-list neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. +--------------------------------------+---------+----------------------------------+----------------------------------------------------------+ | id | name | tenant_id | subnets | +--------------------------------------+---------+----------------------------------+----------------------------------------------------------+ | 78e44156-4a0d-47d1-902f-91a5bab0a2f7 | private | e9ded0c4ea2a48ed98d77a4048176b92 | b470007a-00c9-4953-bbf2-6ef8337bbaba fddb:30c8:695d::/64 | | | | | b536f540-6e40-4e37-adf3-deb46733550f 10.0.0.0/26 | | a95c7928-003b-4be5-b623-e8c608646a93 | public | 0b9809c9cadf44779b99680baabdf981 | e9fecfad-97b4-4998-b3f5-9f0c1f570236 10.10.0.128/25 | | | | | be9ba7f0-3931-41e9-8492-b207a10db930 2001:db8::/64 | +--------------------------------------+---------+----------------------------------+----------------------------------------------------------+ [09:15:56] vagrant at devstack-ubuntu-ovs ~ $ openstack network list +--------------------------------------+---------+----------------------------------------------------------------------------+ | ID | Name | Subnets | +--------------------------------------+---------+----------------------------------------------------------------------------+ | 78e44156-4a0d-47d1-902f-91a5bab0a2f7 | private | b470007a-00c9-4953-bbf2-6ef8337bbaba, b536f540-6e40-4e37-adf3-deb46733550f | | a95c7928-003b-4be5-b623-e8c608646a93 | public | be9ba7f0-3931-41e9-8492-b207a10db930, e9fecfad-97b4-4998-b3f5-9f0c1f570236 | +--------------------------------------+---------+----------------------------------------------------------------------------+ > Wiadomość napisana przez Manuel Sopena Ballesteros w dniu 15.11.2018, o godz. 08:52: > > Hi, > > This looks like a stupid thing but I can’t make it to work. For some reason I can’t run any openstack cli command related to neutron, however using neutron cli seems to work so I am guessing this is just affecting the the python client. I tried pip uninstall and install it again with no success. 
I am using Rocky deployed through kolla-ansible > > [root at openstack-deployment ~]# pip show python-neutronclient > Name: python-neutronclient > Version: 6.10.0 > Summary: CLI and Client Library for OpenStack Networking > Home-page: https://docs.openstack.org/python-neutronclient/latest/ > Author: OpenStack Networking Project > Author-email: openstack-dev at lists.openstack.org > License: UNKNOWN > Location: /usr/lib/python2.7/site-packages > Requires: osc-lib, Babel, oslo.i18n, oslo.utils, oslo.serialization, requests, netaddr, iso8601, python-keystoneclient, cliff, keystoneauth1, oslo.log, os-client-config, six, pbr, debtcollector, simplejson > Required-by: > [root at openstack-deployment ~]# openstack server list > +--------------------------------------+----------------+--------+---------------------------------------+-------------+-----------+ > | ID | Name | Status | Networks | Image | Flavor | > +--------------------------------------+----------------+--------+---------------------------------------+-------------+-----------+ > | 6e079312-b067-4cf0-870a-c6e452b57d5c | test_gpu_small | ACTIVE | private=192.168.1.108, 129.94.120.182 | ubuntu18.04 | gpu.small | > | bd977544-a088-4365-8bed-128daca1e820 | test_gpu_large | ACTIVE | private=192.168.1.117, 129.94.120.186 | ubuntu18.04 | gpu.large | > | db439705-5b57-4a05-9bc3-74ff0f88bf87 | test | ACTIVE | private=192.168.1.120, 129.94.120.181 | cirros | m1.tiny | > +--------------------------------------+----------------+--------+---------------------------------------+-------------+-----------+ > [root at openstack-deployment ~]# openstack network list > __init__() got an unexpected keyword argument 'retriable_status_codes' > [root at openstack-deployment ~]# openstack subnet list > __init__() got an unexpected keyword argument 'retriable_status_codes' > [root at openstack-deployment ~]# openstack port list > __init__() got an unexpected keyword argument 'retriable_status_codes' > [root at openstack-deployment ~]# neutron port-list > neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. 
> +--------------------------------------+------+----------------------------------+-------------------+---------------------------------------------------------------------------------------+ > | id | name | tenant_id | mac_address | fixed_ips | > +--------------------------------------+------+----------------------------------+-------------------+---------------------------------------------------------------------------------------+ > | 305c8e83-08ca-4df0-b6e4-984cc31f8f0b | | | fa:16:3e:c8:7f:34 | {"subnet_id": "7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5", "ip_address": "129.94.120.186"} | > | 569ec2d3-c1f7-41c8-a7cb-a20d4d721634 | | abc29399b91d423088549d7446766573 | fa:16:3e:e2:9a:98 | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.108"} | > | 70b16d5b-648b-40fc-9b44-4cb1d13e80e4 | | | fa:16:3e:15:8c:70 | {"subnet_id": "7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5", "ip_address": "129.94.120.182"} | > | 7cf3f895-b6f8-4b08-9c92-1909b956425a | | abc29399b91d423088549d7446766573 | fa:16:3e:75:29:a4 | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.117"} | > | b4f86f58-66f9-4961-851b-3627cccfbfbe | | | fa:16:3e:a7:bd:6c | {"subnet_id": "7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5", "ip_address": "129.94.120.181"} | > | d5d0e2d0-42c3-4438-862a-d5908932daba | | abc29399b91d423088549d7446766573 | fa:16:3e:55:ef:95 | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.100"} | > | dc22e1ba-1776-467c-a511-8c68ee16520f | | | fa:16:3e:76:13:ee | {"subnet_id": "7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5", "ip_address": "129.94.120.184"} | > | e18bcf54-302c-4424-a0a1-2c558c7ef5eb | | abc29399b91d423088549d7446766573 | fa:16:3e:82:e8:ac | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.120"} | > | f225fa78-9683-4d89-bbc8-67ff80513a56 | | abc29399b91d423088549d7446766573 | fa:16:3e:dd:52:d8 | {"subnet_id": "13fd3ed8-988e-407b-955b-e6f749b8f8a3", "ip_address": "192.168.1.1"} | > +--------------------------------------+------+----------------------------------+-------------------+---------------------------------------------------------------------------------------+ > [root at openstack-deployment ~]# neutron subnet-list > neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. > +--------------------------------------+---------------+----------------------------------+-------------------+------------------------------------------------------+ > | id | name | tenant_id | cidr | allocation_pools | > +--------------------------------------+---------------+----------------------------------+-------------------+------------------------------------------------------+ > | 13fd3ed8-988e-407b-955b-e6f749b8f8a3 | privatesubnet | abc29399b91d423088549d7446766573 | 192.168.1.0/24 | {"start": "192.168.1.100", "end": "192.168.1.199"} | > | 7d5f5405-8b3b-4ebc-8bf3-9f9be196e1a5 | publicsubnet | abc29399b91d423088549d7446766573 | 129.94.120.128/26 | {"start": "129.94.120.181", "end": "129.94.120.190"} | > +--------------------------------------+---------------+----------------------------------+-------------------+------------------------------------------------------+ > > How can I make openstack client to work again with neutron commands? 
> > Thank you very much > > Manuel Sopena Ballesteros | Big data Engineer > Garvan Institute of Medical Research > The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 > T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au > > NOTICE > Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack — Slawek Kaplonski Senior software engineer Red Hat From dtroyer at gmail.com Thu Nov 15 09:15:52 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 15 Nov 2018 03:15:52 -0600 Subject: [Openstack] neutron python client stopped working In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017BAE648F@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017BAE648F@MXDB2.ad.garvan.unsw.edu.au> Message-ID: On Thu, Nov 15, 2018 at 2:10 AM Manuel Sopena Ballesteros wrote: > This looks like a stupid thing but I can’t make it to work. For some reason I can’t run any openstack cli command related to neutron, however using neutron cli seems to work so I am guessing this is just affecting the the python client. I tried pip uninstall and install it again with no success. I am using Rocky deployed through kolla-ansible The thing that sticks out to me here is that the network commands in OSC use the SDK rather than neutronclient and that seems to be what is broken, so verify you have a good install of openstacksdk that matches your python-openstackclient. I have the following versions installed and working: openstacksdk==0.19.0 python-openstackclient==3.17.0 dt -- Dean Troyer dtroyer at gmail.com From torin.woltjer at granddial.com Thu Nov 15 15:33:33 2018 From: torin.woltjer at granddial.com (Torin Woltjer) Date: Thu, 15 Nov 2018 15:33:33 GMT Subject: [Openstack] DHCP not accessible on new compute node. Message-ID: <5867c87ce53a4cc4a4d0456435792d9b@granddial.com> I've just done this and the problem is still there. Torin Woltjer Grand Dial Communications - A ZK Tech Inc. Company 616.776.1066 ext. 2006 www.granddial.com ---------------------------------------- From: Marcio Prado Sent: 11/2/18 5:08 PM To: torin.woltjer at granddial.com Subject: Re: [Openstack] DHCP not accessible on new compute node. Clone the hd of a server and restore to what is not working. then only change the required settings ... ip, hostname, etc. Marcio Prado Analista de TI - Infraestrutura e Redes Fone: (35) 9.9821-3561 www.marcioprado.eti.br Em 02/11/2018 16:27, Torin Woltjer escreveu:I've completely wiped the node and reinstalled it, and the problem still persists. I can't ping instances on other compute nodes, or ping the DHCP ports. Instances don't get addresses or metadata when started on this node. 
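As a generic first check for the symptom described above (no DHCP and no metadata only for instances landing on one particular compute node), it is usually worth confirming that the L2 agent on that node is alive and that DHCP traffic actually leaves the node. A rough sketch only, with the host and interface names as placeholders:

  # on the controller: the linuxbridge/OVS agent for the new node should show as alive
  openstack network agent list | grep <new-compute-hostname>

  # on the new compute node: watch whether the instance's DHCP requests go out on the wire
  tcpdump -ni <provider-or-tunnel-interface> udp port 67 or udp port 68

If the requests never appear, the problem is local to the node's bridge/agent configuration; if they appear but no reply comes back, the path to the node hosting the DHCP namespace is the more likely suspect.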
From: Marcio Prado Sent: 11/1/18 9:51 AM To: torin.woltjer at granddial.com Cc: openstack at lists.openstack.org Subject: Re: [Openstack] DHCP not accessible on new compute node. I believe you have not forgotten anything. This should probably be bug ... As my cloud is not production, but rather masters research. I migrate the VM live to a node that is working, restart it, after that I migrate back to the original node that was not working and it keeps running ... Em 30-10-2018 17:50, Torin Woltjer escreveu: > Interestingly, I created a brand new selfservice network and DHCP > doesn't work on that either. I've followed the instructions in the > minimal setup (excluding the controllers as they're already set up) > but the new node has no access to the DHCP agent in neutron it seems. > Is there a likely component that I've overlooked? > > _TORIN WOLTJER_ > > GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY > > 616.776.1066 EXT. 2006 > _WWW.GRANDDIAL.COM [1]_ > > ------------------------- > > FROM: "Torin Woltjer" > SENT: 10/30/18 10:48 AM > TO: , "openstack at lists.openstack.org" > > SUBJECT: Re: [Openstack] DHCP not accessible on new compute node. > > I deleted both DHCP ports and they recreated as you said. However, > instances are still unable to get network addresses automatically. > > _TORIN WOLTJER_ > > GRAND DIAL COMMUNICATIONS - A ZK TECH INC. COMPANY > > 616.776.1066 EXT. 2006 > _ [1] [1]WWW.GRANDDIAL.COM [1]_ > > ------------------------- > > FROM: Marcio Prado > SENT: 10/29/18 6:23 PM > TO: torin.woltjer at granddial.com > SUBJECT: Re: [Openstack] DHCP not accessible on new compute node. > The door is recreated automatically. The problem like I said is not in > DHCP, but for some reason, erasing and waiting for OpenStack to > re-create the port often solves the problem. > > Please, if you can find out the problem in fact, let me know. I'm very > interested to know. > > You can delete the door without fear. OpenStack will recreate in a > short > time. > > Links: > ------ > [1] http://www.granddial.com -- Marcio Prado Analista de TI - Infraestrutura e Redes Fone: (35) 9.9821-3561 www.marcioprado.eti.br -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcioprado at marcioprado.eti.br Thu Nov 15 17:53:33 2018 From: marcioprado at marcioprado.eti.br (Marcio Prado) Date: Thu, 15 Nov 2018 15:53:33 -0200 Subject: [Openstack] DHCP not accessible on new compute node. In-Reply-To: <5867c87ce53a4cc4a4d0456435792d9b@granddial.com> Message-ID: An HTML attachment was scrubbed... URL: From ken at jots.org Thu Nov 15 18:36:29 2018 From: ken at jots.org (Ken D'Ambrosio) Date: Thu, 15 Nov 2018 13:36:29 -0500 Subject: [Openstack] (Juno - *sigh*) Want non-admin access to hypervisor:VM association. Message-ID: <2bf0febb434f77cf5e717565117722b7@jots.org> Hey, all. We've got a Juno cloud, and we'd like various end-users to be able to see which hypervisors their VMs spring up on. /etc/nova/policy.json seems to have some relevant info, but it's hard to tell what does what. "compute_extension:hypervisors" looks like a possible candidate, but that's so vague that there's no telling what, exactly, is meant by "hypervisors". So: * Given that I just want the hypervisor:VM association, any suggestions as to which rule(s) to modify? * Failing that, wondering if there's any for-real documentation on what the various options in policy.json *do*. I've found many, many lists of what's in a generic policy.json, but nothing that went into detail about what does what. Thanks! 
-Ken From korpusik at mit.edu Fri Nov 16 16:07:54 2018 From: korpusik at mit.edu (Mandy Korpusik) Date: Fri, 16 Nov 2018 11:07:54 -0500 Subject: [Openstack] afs permissions Message-ID: Hi all, I'm setting up an OpenStack instance, and I'm running into permission errors when reading from afs. I tried typing "renew" and got the following error: kinit: Client 'ubuntu at CSAIL.MIT.EDU' not found in Kerberos database while getting initial credentials Does anyone have experience setting up afs certificates on OpenStack who could give me some advice? Thanks, Mandy From nickeysgo at gmail.com Fri Nov 16 17:18:47 2018 From: nickeysgo at gmail.com (Minjun Hong) Date: Sat, 17 Nov 2018 02:18:47 +0900 Subject: [Openstack] Network setup problem Message-ID: Hi, I'm installing Openstack manually and that means I don't use automation tools such as Devstack or Packstack. During the setup process, because of the problem, I have not been able to progress for three days at this stage! I guess that is caused by network settings. First of all, after I describe the network environment I'm in (here is a school), you'll understand what my problem is. Here, for Internet connection, every node (PC or server) needs manual IP like below figure. https://imgur.com/4ZkdxqO I made up the figure, guessing how the network structure is organized. So, it could be wrong. What is remarkable is that every time I reboot, a bridge of which name starts with 'brqf' is created, and suddenly the Internet does not work on the controller node. The local network is normally connected and DNS_PROBE_FINISHED_BAD_CONFIG error occurs in the internet browser. If I remove the bridge, Internet becomes available again. The weird bridge also emerges on the controller node, but Internet is working on the node well. In addition, I heard that compute node needs more than 2 NIC but, I'm curious about how to configure that. In fact, I was using this link ( https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-queens) to proceed with the installation. And I choose 'Provider network' from the Openstack networking options (Provider networks and Self-service networks). After I finished minimal installation for Queens, I attempted to create an instance. However, the error occurs and it is following: 2018-11-16 15:12:05.491+0000: libxl: > libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge > online [7254] exited with error status 1 > 2018-11-16 15:12:05.491+0000: libxl: > libxl_device.c:1286:device_hotplug_child_death_cb: script: ip link set > vif1.0 name tap4dcb475d-8f failed > 2018-11-16 15:12:05.491+0000: libxl: > libxl_create.c:1522:domcreate_attach_devices: Domain 1:unable to add vif > devices The message is from libxl (libvirt) log (/etc/libvirt/libxl/libxl-driver.log). I suppose that the error is caused due to wrong network setting in the host or wrong network configuration of Openstack. But, I was not able to start solving this problem because I do not know what I should do first. I have no idea about that problem at all ! Please help me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mranga at gmail.com Fri Nov 16 23:40:53 2018 From: mranga at gmail.com (M. Ranganathan) Date: Fri, 16 Nov 2018 18:40:53 -0500 Subject: [Openstack] Octavia - Exception when creating loadbalancer pool Message-ID: Hello, I am trying to set up a load balancer using openstack queens and octavia 2.0.2. I ran into the following error when I tried to create a pool. 
Any help would be appreciated. Thank you Atom 'octavia.controller.worker.tasks.lifecycle_tasks.ListenersToErrorOnRevertTask' {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'listeners': [], 'loadbalancer': }, 'provides': None} |__Flow 'octavia-create-listener_flow': TemplateSyntaxError: expected token 'end of statement block', got '.' 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker Traceback (most recent call last): 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker result = task.execute(**arguments) 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker File "/usr/local/lib/python2.7/dist-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 76, in execute 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker self.amphora_driver.update(listener, loadbalancer.vip) 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker File "/usr/local/lib/python2.7/dist-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 115, in update 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker user_group=CONF.haproxy_amphora.user_group) 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker File "/usr/local/lib/python2.7/dist-packages/octavia/common/jinja/haproxy/jinja_cfg.py", line 101, in build_config 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker socket_path=socket_path) 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker File "/usr/local/lib/python2.7/dist-packages/octavia/common/jinja/haproxy/jinja_cfg.py", line 146, in render_loadbalancer_obj 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker constants=constants) 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 989, in render 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker return self.environment.handle_exception(exc_info, True) 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker File "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 754, in handle_exception 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker reraise(exc_type, exc_value, tb) 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker File "/usr/local/lib/python2.7/dist-packages/octavia/common/jinja/haproxy/templates/base.j2", line 32, in template 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker {% set found_ns.found = true %} 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker TemplateSyntaxError: expected token 'end of statement block', got '.' 2018-11-16 16:29:20.102 20324 ERROR octavia.controller.worker.controller_worker -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Sat Nov 17 07:38:52 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Sat, 17 Nov 2018 11:08:52 +0330 Subject: [Openstack] [PackStack][Cinder]Save Volumes in The Compute nodes Message-ID: Hi, I have 8 servers with just local HDD disk. 
I want to use one server as the controller and network node and the other (7 servers) as the compute node. I was wondering if it's possible to install PackStack that every compute node to store its volumes in it's HDD local disk? (I guess the Cinder should be installed on every Compute node alongside other settings) Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Sat Nov 17 09:43:42 2018 From: berndbausch at gmail.com (Bernd Bausch) Date: Sat, 17 Nov 2018 10:43:42 +0100 Subject: [Openstack] [PackStack][Cinder]Save Volumes in The Compute nodes In-Reply-To: References: Message-ID: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> Yes. You need to enable Cinder’s InstanceLocalityFilter, see https://docs.openstack.org/cinder/latest/configuration/block-storage/scheduler-filters.html. Here a tip: https://ask.openstack.org/en/question/92001/cinder-lvm-volume-local-to-instance/ Bernd > On Nov 17, 2018, at 8:38, Soheil Pourbafrani wrote: > > Hi, > I have 8 servers with just local HDD disk. I want to use one server as the controller and network node and the other (7 servers) as the compute node. > > I was wondering if it's possible to install PackStack that every compute node to store its volumes in it's HDD local disk? (I guess the Cinder should be installed on every Compute node alongside other settings) > > Thanks > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From mranga at gmail.com Sat Nov 17 16:48:00 2018 From: mranga at gmail.com (M. Ranganathan) Date: Sat, 17 Nov 2018 11:48:00 -0500 Subject: [Openstack] Octavia - Exception when creating loadbalancer pool In-Reply-To: References: Message-ID: This seems to be a dependency bug that was addressed here: https://bugzilla.redhat.com/show_bug.cgi?id=1551821 Updating to jinja 2.10 as recommended fixed the issue. Ranga On Fri, Nov 16, 2018 at 6:40 PM M. Ranganathan wrote: > Hello, > > I am trying to set up a load balancer using openstack queens and octavia > 2.0.2. I ran into the following error when I tried to create a pool. Any > help would be appreciated. > > Thank you > > > Atom > 'octavia.controller.worker.tasks.lifecycle_tasks.ListenersToErrorOnRevertTask' > {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'listeners': > [], > 'loadbalancer': 0x7f14947904d0>}, 'provides': None} > |__Flow 'octavia-create-listener_flow': TemplateSyntaxError: expected > token 'end of statement block', got '.' 
> 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker Traceback (most recent call > last): > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", > line 53, in _execute_task > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker result = > task.execute(**arguments) > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker File > "/usr/local/lib/python2.7/dist-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py", > line 76, in execute > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker > self.amphora_driver.update(listener, loadbalancer.vip) > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker File > "/usr/local/lib/python2.7/dist-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py", > line 115, in update > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker > user_group=CONF.haproxy_amphora.user_group) > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker File > "/usr/local/lib/python2.7/dist-packages/octavia/common/jinja/haproxy/jinja_cfg.py", > line 101, in build_config > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker socket_path=socket_path) > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker File > "/usr/local/lib/python2.7/dist-packages/octavia/common/jinja/haproxy/jinja_cfg.py", > line 146, in render_loadbalancer_obj > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker constants=constants) > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 989, in > render > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker return > self.environment.handle_exception(exc_info, True) > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker File > "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 754, in > handle_exception > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker reraise(exc_type, > exc_value, tb) > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker File > "/usr/local/lib/python2.7/dist-packages/octavia/common/jinja/haproxy/templates/base.j2", > line 32, in template > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker {% set found_ns.found = > true %} > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker TemplateSyntaxError: expected > token 'end of statement block', got '.' > 2018-11-16 16:29:20.102 20324 ERROR > octavia.controller.worker.controller_worker > > > -- > M. Ranganathan > > -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Sun Nov 18 14:11:23 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 18 Nov 2018 09:11:23 -0500 Subject: [Openstack] (Juno - *sigh*) Want non-admin access to hypervisor:VM association. In-Reply-To: <2bf0febb434f77cf5e717565117722b7@jots.org> References: <2bf0febb434f77cf5e717565117722b7@jots.org> Message-ID: https://blueprints.launchpad.net/nova/+spec/openstack-api-hostid This should take care of it, don't know if it exists in Juno though. 
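To illustrate how the hostId mentioned above can be used when the hypervisor hostnames are already known (Ken describes doing exactly this later in the thread): the value is, in effect, a SHA224 hash of the project ID concatenated with the hypervisor hostname, so a tenant who knows the candidate hostnames can rebuild the mapping themselves. A rough sketch; the project ID and hostnames below are placeholders:

  PROJECT_ID=<your-project-id>
  for h in hv01 hv02 hv03; do
    # print "hash  hostname" for each candidate hypervisor
    echo "$(echo -n "${PROJECT_ID}${h}" | sha224sum | cut -d' ' -f1)  ${h}"
  done
  # compare against the hostId reported for a given VM
  nova show <vm-name> | grep hostId

The mapping only works per project, since the hash changes with the project ID, and it obviously depends on knowing the hostname scheme in use.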
On Thu, Nov 15, 2018 at 1:49 PM Ken D'Ambrosio wrote: > Hey, all. We've got a Juno cloud, and we'd like various end-users to be > able to see which hypervisors their VMs spring up on. > /etc/nova/policy.json seems to have some relevant info, but it's hard to > tell what does what. "compute_extension:hypervisors" looks like a > possible candidate, but that's so vague that there's no telling what, > exactly, is meant by "hypervisors". So: > > * Given that I just want the hypervisor:VM association, any suggestions > as to which rule(s) to modify? > * Failing that, wondering if there's any for-real documentation on what > the various options in policy.json *do*. I've found many, many lists of > what's in a generic policy.json, but nothing that went into detail about > what does what. > > Thanks! > > -Ken > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Nov 19 00:03:43 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Nov 2018 00:03:43 +0000 Subject: [Openstack] IMPORTANT: We're combining the lists! In-Reply-To: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> Message-ID: <20181119000342.46kpr5wcunjq2bfn@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this was sent) are being replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list[0] is open for posts from subscribers starting now, and the old lists will be configured to no longer accept posts starting on Monday December 3. In the interim, posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them now and not miss any messages. See my previous notice[1] for details. As of the time of this announcement, we have 280 subscribers on openstack-discuss with three weeks to go before the old lists are closed down for good). At the recommendation of David Medberry at the OpenStack Summit last week, this reminder is being sent individually to each of the old lists (not as a cross-post), and without any topic tag in case either might be resulting in subscribers missing it. [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ken at jots.org Mon Nov 19 01:12:46 2018 From: ken at jots.org (Ken D'Ambrosio) Date: Sun, 18 Nov 2018 20:12:46 -0500 Subject: [Openstack] (Juno - *sigh*) Want non-admin access to hypervisor:VM association. 
In-Reply-To: References: <2bf0febb434f77cf5e717565117722b7@jots.org> Message-ID: <2cec061784872132eb7fdb514da64db1@jots.org> On 2018-11-18 09:11, Mohammed Naser wrote: > https://blueprints.launchpad.net/nova/+spec/openstack-api-hostid > > This should take care of it, don't know if it exists in Juno though. It *does* exist in Juno, it *can't*, however, do what I want -- at least, generically. The hostId that gets returned is a value that's (apparently) used to let you know, semi-anonymously, how affinity is (or isn't) working for your VMs. So you get a unique identifier for each hypervisor -- but it has no *obvious* bearing on the hostname. It's an id that's formed from the SHA224 hash of the ID of your tenant and the hostname of the hypervisor -- a bit of a catch-22, that, and prevents you from being able to make use of the hash if you don't know your back-end. But... I do. So I created an associative array with all the current (and, God willing, future) hypervisor hostnames in my company, with the key being the hostId/hash, and the value being the hypervisor name. Then I queried my VMs, got all the hostIds, used that as the index to query my associative array, and bingo! My hypervisor's name. Kinda fugly, but when you have a standardized hypervisor hostname nomenclature, it's sufficient, without having to go mucking about with changing poorly-documented Nova policy.json stuff in a four-year-old release of OpenStack. I'll take it. Thanks! -Ken > On Thu, Nov 15, 2018 at 1:49 PM Ken D'Ambrosio wrote: > >> Hey, all. We've got a Juno cloud, and we'd like various end-users to be >> able to see which hypervisors their VMs spring up on. >> /etc/nova/policy.json seems to have some relevant info, but it's hard to >> tell what does what. "compute_extension:hypervisors" looks like a >> possible candidate, but that's so vague that there's no telling what, >> exactly, is meant by "hypervisors". So: >> >> * Given that I just want the hypervisor:VM association, any suggestions >> as to which rule(s) to modify? >> * Failing that, wondering if there's any for-real documentation on what >> the various options in policy.json *do*. I've found many, many lists of >> what's in a generic policy.json, but nothing that went into detail about >> what does what. >> >> Thanks! >> >> -Ken >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -- > Mohammed Naser -- vexxhost > > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com [1] Links: ------ [1] http://vexxhost.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yedhusastri at gmail.com Mon Nov 19 16:25:13 2018 From: yedhusastri at gmail.com (Yedhu Sastri) Date: Mon, 19 Nov 2018 17:25:13 +0100 Subject: [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? Message-ID: Hello All, I have some use-cases which I want to test in PowerPC architecture(ppc64). As I dont have any Power machines I would like to try it with ppc64 VM's. Is it possible to run these kind of VM's on my OpenStack cluster(Queens) which runs on X86_64 architecture nodes(OS RHEL 7)?? I set the image property architecture=ppc64 to the ppc64 image I uploaded to glance but no success in launching VM with those images. 
I am using KVM as hypervisor(qemu 2.10.0) in my compute nodes and I think it is not built to support power architecture. For testing without OpenStack I manually built qemu on a x86_64 host with ppc64 support(qemu-ppc64) and then I am able to host the ppc64 VM. But I dont know how to do this on my OpenStack cluster. Whether I need to manually build qemu on compute nodes with ppc64 support or I need to add some lines in my nova.conf to do this?? Any help to solve this issue would be much appreciated. -- Thank you for your time and have a nice day, With kind regards, Yedhu Sastri -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken at jots.org Mon Nov 19 16:57:15 2018 From: ken at jots.org (Ken D'Ambrosio) Date: Mon, 19 Nov 2018 11:57:15 -0500 Subject: [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? In-Reply-To: References: Message-ID: <9f1bdf3c463f3d788adb3401fc9777ee@jots.org> On 2018-11-19 11:25, Yedhu Sastri wrote: > Hello All, > > I have some use-cases which I want to test in PowerPC architecture(ppc64). As I dont have any Power machines I would like to try it with ppc64 VM's. Is it possible to run these kind of VM's on my OpenStack cluster(Queens) which runs on X86_64 architecture nodes(OS RHEL 7)?? I'm not 100% sure, but I'm 95% sure that the answer to your question is "No." While there's much emulation that occurs, the CPU isn't so much emulated, but more abstracted. Constructing and running a modern CPU in software would be non-trivial. -Ken > I set the image property architecture=ppc64 to the ppc64 image I uploaded to glance but no success in launching VM with those images. I am using KVM as hypervisor(qemu 2.10.0) in my compute nodes and I think it is not built to support power architecture. For testing without OpenStack I manually built qemu on a x86_64 host with ppc64 support(qemu-ppc64) and then I am able to host the ppc64 VM. But I dont know how to do this on my OpenStack cluster. Whether I need to manually build qemu on compute nodes with ppc64 support or I need to add some lines in my nova.conf to do this?? Any help to solve this issue would be much appreciated. > > -- > > Thank you for your time and have a nice day, > > With kind regards, Yedhu Sastri > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From rfolco at redhat.com Mon Nov 19 19:25:43 2018 From: rfolco at redhat.com (Rafael Folco) Date: Mon, 19 Nov 2018 17:25:43 -0200 Subject: [Openstack] [openstack-dev] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? In-Reply-To: <7d06072569866475fd4f0efff25c7fea4df399f1.camel@redhat.com> References: <9f1bdf3c463f3d788adb3401fc9777ee@jots.org> <36812fc83e71a08fbeb809fbfa26722a743aab78.camel@redhat.com> <7d06072569866475fd4f0efff25c7fea4df399f1.camel@redhat.com> Message-ID: (not sure if my answer has been sent, sorry if duplicate) I don't touch ppc for a while but AFAIK this should work as long as you run full emulation (qemu, not kvm) as libvirt_type in nova.conf and get the qemu-system-ppc64le installed in the compute node. Assume also you get the ppc64le image to launch your instance. Expect poor performance though. On Mon, Nov 19, 2018 at 4:12 PM Sean Mooney wrote: > adding openstack dev. 
> On Mon, 2018-11-19 at 18:08 +0000, Sean Mooney wrote: > > On Mon, 2018-11-19 at 11:57 -0500, Ken D'Ambrosio wrote: > > > On 2018-11-19 11:25, Yedhu Sastri wrote: > > > > Hello All, > > > > > > > > I have some use-cases which I want to test in PowerPC > architecture(ppc64). As I dont have any Power machines I > > > > would > > > > like to try it with ppc64 VM's. Is it possible to run these kind of > VM's on my OpenStack cluster(Queens) which > > > > runs > > > > on X86_64 architecture nodes(OS RHEL 7)?? > > > > > > > > > > I'm not 100% sure, but I'm 95% sure that the answer to your question > is "No." While there's much emulation that > > > occurs, the CPU isn't so much emulated, but more abstracted. > Constructing and running a modern CPU in software > > > would > > > be non-trivial. > > > > you can do this but not with kvm. > > you need to revert the virt dirver in the nova.conf back to qemu to > disable kvm to allow emulation of other > > plathforms. > > that said i have only emulated power on x86_64 using libvirt or qemu > idrectly never with openstack but i belive we > > used > > to support this i just hav enot done it personally. > > > > > > -Ken > > > > > > > I set the image property architecture=ppc64 to the ppc64 image I > uploaded to glance but no success in launching VM > > > > with those images. I am using KVM as hypervisor(qemu 2.10.0) in my > compute nodes and I think it is not built to > > > > support power architecture. For testing without OpenStack I manually > built qemu on a x86_64 host with ppc64 > > > > support(qemu-ppc64) and then I am able to host the ppc64 VM. But I > dont know how to do this on my OpenStack > > > > cluster. > > > > Whether I need to manually build qemu on compute nodes with ppc64 > support or I need to add some lines in my > > > > nova.conf to do this?? Any help to solve this issue would be much > appreciated. > > > > > > > > > > > > -- > > > > Thank you for your time and have a nice day, > > > > > > > > With kind regards, > > > > Yedhu Sastri > > > > > > > > _______________________________________________ > > > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > Post to : openstack at lists.openstack.org > > > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > _______________________________________________ > > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > Post to : openstack at lists.openstack.org > > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Rafael Folco Senior Software Engineer -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Nov 19 19:56:08 2018 From: smooney at redhat.com (Sean Mooney) Date: Mon, 19 Nov 2018 19:56:08 +0000 Subject: [Openstack] [openstack-dev] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? 
In-Reply-To: References: <9f1bdf3c463f3d788adb3401fc9777ee@jots.org> <36812fc83e71a08fbeb809fbfa26722a743aab78.camel@redhat.com> <7d06072569866475fd4f0efff25c7fea4df399f1.camel@redhat.com> Message-ID: <6e4346f08e047edf5d0d1a5722943b9fb5d0f477.camel@redhat.com> On Mon, 2018-11-19 at 17:25 -0200, Rafael Folco wrote: > (not sure if my answer has been sent, sorry if duplicate) > I don't touch ppc for a while but AFAIK this should work as long as you run full emulation (qemu, not kvm) as > libvirt_type in nova.conf and get the qemu-system-ppc64le installed in the compute node. Assume also you get the > ppc64le image to launch your instance. Expect poor performance though. that is my understanding also. the perfromace is accepable but not great if you can enabled the multithread tcg backend instead of the default singel tread tcg backend that is used when in qemu only mode but it still wont match kvm accleration or power on power performance. i dont think we have a way to enabel the multi thread accleration for qemu only backend via nova unfortunetly it would be triavial to add but no one has asked as far as i am aware. > > On Mon, Nov 19, 2018 at 4:12 PM Sean Mooney wrote: > > adding openstack dev. > > On Mon, 2018-11-19 at 18:08 +0000, Sean Mooney wrote: > > > On Mon, 2018-11-19 at 11:57 -0500, Ken D'Ambrosio wrote: > > > > On 2018-11-19 11:25, Yedhu Sastri wrote: > > > > > Hello All, > > > > > > > > > > I have some use-cases which I want to test in PowerPC architecture(ppc64). As I dont have any Power machines I > > > > > would > > > > > like to try it with ppc64 VM's. Is it possible to run these kind of VM's on my OpenStack cluster(Queens) which > > > > > runs > > > > > on X86_64 architecture nodes(OS RHEL 7)?? > > > > > > > > > > > > > I'm not 100% sure, but I'm 95% sure that the answer to your question is "No." While there's much emulation that > > > > occurs, the CPU isn't so much emulated, but more abstracted. Constructing and running a modern CPU in software > > > > would > > > > be non-trivial. > > > > > > you can do this but not with kvm. > > > you need to revert the virt dirver in the nova.conf back to qemu to disable kvm to allow emulation of other > > > plathforms. > > > that said i have only emulated power on x86_64 using libvirt or qemu idrectly never with openstack but i belive we > > > used > > > to support this i just hav enot done it personally. > > > > > > > > -Ken > > > > > > > > > I set the image property architecture=ppc64 to the ppc64 image I uploaded to glance but no success in > > launching VM > > > > > with those images. I am using KVM as hypervisor(qemu 2.10.0) in my compute nodes and I think it is not built > > to > > > > > support power architecture. For testing without OpenStack I manually built qemu on a x86_64 host with ppc64 > > > > > support(qemu-ppc64) and then I am able to host the ppc64 VM. But I dont know how to do this on my OpenStack > > > > > cluster. > > > > > Whether I need to manually build qemu on compute nodes with ppc64 support or I need to add some lines in my > > > > > nova.conf to do this?? Any help to solve this issue would be much appreciated. 
> > > > > > > > > > > > > > > -- > > > > > Thank you for your time and have a nice day, > > > > > > > > > > With kind regards, > > > > > Yedhu Sastri > > > > > > > > > > _______________________________________________ > > > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > Post to : openstack at lists.openstack.org > > > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > > > _______________________________________________ > > > > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > Post to : openstack at lists.openstack.org > > > > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From chris.friesen at windriver.com Mon Nov 19 20:48:45 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 19 Nov 2018 14:48:45 -0600 Subject: [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? In-Reply-To: References: Message-ID: <031e51d9-6f0f-40de-3fff-6a0690ade675@windriver.com> On 11/19/2018 10:25 AM, Yedhu Sastri wrote: > Hello All, > > I have some use-cases which I want to test in PowerPC > architecture(ppc64). As I dont have any Power machines I would like to > try it with ppc64 VM's. Is it possible to run these kind of VM's on my > OpenStack cluster(Queens) which runs on X86_64 architecture nodes(OS > RHEL 7)?? > > I set the image property architecture=ppc64 to the ppc64 image I > uploaded to glance but no success in launching VM with those images. I > am using KVM as hypervisor(qemu 2.10.0) in my compute nodes and I think > it is not built to support power architecture. For testing without > OpenStack I manually built qemu on a x86_64 host with ppc64 > support(qemu-ppc64) and then I am able to host the ppc64 VM. But I dont > know how to do this on my OpenStack cluster. Whether I need to manually > build qemu on compute nodes with ppc64 support or I need to add some > lines in my nova.conf to do this?? Any help to solve this issue would be > much appreciated. I think that within an OpenStack cluster you'd have to dedicate a whole compute node to running ppc64 and have it advertise the architecture as ppc64. Then when you ask for "architecture=ppc64" it should land on that node. If this is for "development, testing or migration of applications to Power" have you checked out these people? They provide free Power VMs. http://openpower.ic.unicamp.br/minicloud/ Chris From smooney at redhat.com Mon Nov 19 21:39:19 2018 From: smooney at redhat.com (Sean Mooney) Date: Mon, 19 Nov 2018 21:39:19 +0000 Subject: [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? 
In-Reply-To: <031e51d9-6f0f-40de-3fff-6a0690ade675@windriver.com> References: <031e51d9-6f0f-40de-3fff-6a0690ade675@windriver.com> Message-ID: <5b16085e300954633f0d724d3db643b8fa05c914.camel@redhat.com> On Mon, 2018-11-19 at 14:48 -0600, Chris Friesen wrote: > On 11/19/2018 10:25 AM, Yedhu Sastri wrote: > > Hello All, > > > > I have some use-cases which I want to test in PowerPC > > architecture(ppc64). As I dont have any Power machines I would like to > > try it with ppc64 VM's. Is it possible to run these kind of VM's on my > > OpenStack cluster(Queens) which runs on X86_64 architecture nodes(OS > > RHEL 7)?? > > > > I set the image property architecture=ppc64 to the ppc64 image I > > uploaded to glance but no success in launching VM with those images. I > > am using KVM as hypervisor(qemu 2.10.0) in my compute nodes and I think > > it is not built to support power architecture. For testing without > > OpenStack I manually built qemu on a x86_64 host with ppc64 > > support(qemu-ppc64) and then I am able to host the ppc64 VM. But I dont > > know how to do this on my OpenStack cluster. Whether I need to manually > > build qemu on compute nodes with ppc64 support or I need to add some > > lines in my nova.conf to do this?? Any help to solve this issue would be > > much appreciated. > > I think that within an OpenStack cluster you'd have to dedicate a whole > compute node to running ppc64 and have it advertise the architecture as > ppc64. Then when you ask for "architecture=ppc64" it should land on > that node. I know the docs mention it for ARM, but you might also be able to do this by setting hw_machine_type: https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L34 I know you can set hw_machine_type in nova.conf to set the default for all instances on a host (https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.hw_machine_type), but I can't remember whether the image property is always used or whether it is only honoured for ARM, as the docs suggest. The hypervisor_type property in the image should be set to qemu to make sure you avoid the KVM hosts, as they will not work for CPU emulation. > > If this is for "development, testing or migration of applications to > Power" have you checked out these people? They provide free Power VMs. > > http://openpower.ic.unicamp.br/minicloud/ > > Chris > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From manuel.sb at garvan.org.au Tue Nov 20 05:49:55 2018 From: manuel.sb at garvan.org.au (Manuel Sopena Ballesteros) Date: Tue, 20 Nov 2018 05:49:55 +0000 Subject: [Openstack] how to sync quota usage our of sync in Rocky release Message-ID: <9D8A2486E35F0941A60430473E29F15B017BAEB6FB@MXDB2.ad.garvan.unsw.edu.au> Dear Openstack community, I often have nova complaining that I have exceeded my quota; this normally happens when a node deployment fails and the allocated resources haven't been released. Before Rocky I used to run the command below to resync, however it looks like it has been removed in Rocky.
[root at TEST-openstack-controller ~]# docker exec -it -u root nova_scheduler nova-manage project quota_usage_refresh --project abc29399b91d423088549d7446766573 --user 02132c31dafa4d1d939bd52e0420b975 usage: nova-manage [-h] [--config-dir DIR] [--config-file PATH] [--debug] [--log-config-append PATH] [--log-date-format DATE_FORMAT] [--log-dir LOG_DIR] [--log-file PATH] [--nodebug] [--nopost-mortem] [--nouse-journal] [--nouse-json] [--nouse-syslog] [--nowatch-log-file] [--post-mortem] [--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-journal] [--use-json] [--use-syslog] [--version] [--watch-log-file] [--remote_debug-host REMOTE_DEBUG_HOST] [--remote_debug-port REMOTE_DEBUG_PORT] {version,bash-completion,placement,network,cell_v2,db,cell,floating,api_db} ... nova-manage: error: argument category: invalid choice: 'project' (choose from 'version', 'bash-completion', 'placement', 'network', 'cell_v2', 'db', 'cell', 'floating', 'api_db') Any idea how can I refresh the quota usage in Rocky? Thank you very much Manuel Sopena Ballesteros | Big data Engineer Garvan Institute of Medical Research The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au NOTICE Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yedhusastri at gmail.com Tue Nov 20 10:39:37 2018 From: yedhusastri at gmail.com (Yedhu Sastri) Date: Tue, 20 Nov 2018 11:39:37 +0100 Subject: [Openstack] Create VMs with Power architecture(ppc64) on OpenStack running on x86_64 nodes?? In-Reply-To: <5b16085e300954633f0d724d3db643b8fa05c914.camel@redhat.com> References: <031e51d9-6f0f-40de-3fff-6a0690ade675@windriver.com> <5b16085e300954633f0d724d3db643b8fa05c914.camel@redhat.com> Message-ID: Thank you all for the valuable informations. So I will try with one compute node which has qemu as libvirt_type in nova.conf and then I hope I can host ppc64 VM's on that node. On Mon, Nov 19, 2018 at 10:47 PM Sean Mooney wrote: > On Mon, 2018-11-19 at 14:48 -0600, Chris Friesen wrote: > > On 11/19/2018 10:25 AM, Yedhu Sastri wrote: > > > Hello All, > > > > > > I have some use-cases which I want to test in PowerPC > > > architecture(ppc64). As I dont have any Power machines I would like to > > > try it with ppc64 VM's. Is it possible to run these kind of VM's on my > > > OpenStack cluster(Queens) which runs on X86_64 architecture nodes(OS > > > RHEL 7)?? > > > > > > I set the image property architecture=ppc64 to the ppc64 image I > > > uploaded to glance but no success in launching VM with those images. I > > > am using KVM as hypervisor(qemu 2.10.0) in my compute nodes and I > think > > > it is not built to support power architecture. For testing without > > > OpenStack I manually built qemu on a x86_64 host with ppc64 > > > support(qemu-ppc64) and then I am able to host the ppc64 VM. But I > dont > > > know how to do this on my OpenStack cluster. 
Whether I need to > manually > > > build qemu on compute nodes with ppc64 support or I need to add some > > > lines in my nova.conf to do this?? Any help to solve this issue would > be > > > much appreciated. > > > > I think that within an OpenStack cluster you'd have to dedicate a whole > > compute node to running ppc64 and have it advertise the architecture as > > ppc64. Then when you ask for "architecture=ppc64" it should land on > > that node. > you know it says it for arm but you might be able to do this by setting > hw_machine_type > https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L34 > i know you can set the hw_machine_type to set teh default for all instace > on a host > > https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.hw_machine_type > but i can rememebr it image property is alys used or if its just used for > arm as the docs suggest. > > the hypervisor_type in the image should be set to qemu to make sure you > avoid the kvm hosts as they will not work > for cpu emulation. > > > > > > If this is for "development, testing or migration of applications to > > Power" have you checked out these people? They provide free Power VMs. > > > > http://openpower.ic.unicamp.br/minicloud/ > > > > Chris > > > > _______________________________________________ > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -- Thank you for your time and have a nice day, With kind regards, Yedhu Sastri -------------- next part -------------- An HTML attachment was scrubbed... URL: From sorrison at gmail.com Tue Nov 20 22:52:02 2018 From: sorrison at gmail.com (Sam Morrison) Date: Wed, 21 Nov 2018 09:52:02 +1100 Subject: [Openstack] how to sync quota usage our of sync in Rocky release In-Reply-To: <9D8A2486E35F0941A60430473E29F15B017BAEB6FB@MXDB2.ad.garvan.unsw.edu.au> References: <9D8A2486E35F0941A60430473E29F15B017BAEB6FB@MXDB2.ad.garvan.unsw.edu.au> Message-ID: <1964C018-A866-4E05-BC35-4792B4F7B7E1@gmail.com> Starting I think in Pike or Queens nova no longer caches quota usages and calculates it on the fly so there is now no need to the command as quotas can’t get out of sync anymore. Cheers, Sam > On 20 Nov 2018, at 4:49 pm, Manuel Sopena Ballesteros wrote: > > Dear Openstack community, > > I often have nova complaining that I have exceeded my quota, these normally happens when I node deployment fails and the resources allocated hasn’t been released. Before Rocky I used to run the command below to resync, however it looks like it has been removed from Rocky. 
> > [root at TEST-openstack-controller ~]# docker exec -it -u root nova_scheduler nova-manage project quota_usage_refresh --project abc29399b91d423088549d7446766573 --user 02132c31dafa4d1d939bd52e0420b975 > usage: nova-manage [-h] [--config-dir DIR] [--config-file PATH] [--debug] > [--log-config-append PATH] [--log-date-format DATE_FORMAT] > [--log-dir LOG_DIR] [--log-file PATH] [--nodebug] > [--nopost-mortem] [--nouse-journal] [--nouse-json] > [--nouse-syslog] [--nowatch-log-file] [--post-mortem] > [--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-journal] > [--use-json] [--use-syslog] [--version] [--watch-log-file] > [--remote_debug-host REMOTE_DEBUG_HOST] > [--remote_debug-port REMOTE_DEBUG_PORT] > > {version,bash-completion,placement,network,cell_v2,db,cell,floating,api_db} > ... > nova-manage: error: argument category: invalid choice: 'project' (choose from 'version', 'bash-completion', 'placement', 'network', 'cell_v2', 'db', 'cell', 'floating', 'api_db') > > Any idea how can I refresh the quota usage in Rocky? > > Thank you very much > > Manuel Sopena Ballesteros | Big data Engineer > Garvan Institute of Medical Research > The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010 > T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: manuel.sb at garvan.org.au > > NOTICE > Please consider the environment before printing this email. This message and any attachments are intended for the addressee named and may contain legally privileged/confidential/copyright information. If you are not the intended recipient, you should not read, use, disclose, copy or distribute this communication. If you have received this message in error please notify us at once by return email and then delete both messages. We accept no liability for the distribution of viruses or similar in electronic communications. This notice should not be removed. > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Wed Nov 21 11:08:04 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Wed, 21 Nov 2018 14:38:04 +0330 Subject: [Openstack] [OpenStack][Compute-Neutron] Is it possible to create flat network from different NICs Message-ID: Hi, I installed OpenStack on One node (for test) and creating a flat external network I could connect VMs to the provider network (internet). In the production environment, we have 4 NIC on each server and each server (compute node) will run 4 instances. My question is, is it possible to create a separate external network based on each NIC (so 4 external networks) and run each instance using one of them? -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Nov 21 11:21:35 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 21 Nov 2018 12:21:35 +0100 Subject: [Openstack] [OpenStack][Compute-Neutron] Is it possible to create flat network from different NICs In-Reply-To: References: Message-ID: <9E359876-421D-466C-A5B6-9F9B94331E74@redhat.com> Hi, Yes, that should be possible. You can create 4 bridges on compute node and add all of them to bridge_mappings config option. 
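For example, a minimal sketch assuming the Open vSwitch agent (bridge and physnet names are placeholders; each bridge would have one of the four NICs attached):

  [ovs]
  bridge_mappings = physnet1:br-provider1,physnet2:br-provider2,physnet3:br-provider3,physnet4:br-provider4

and the flat type driver has to allow those physical networks, e.g. in ml2_conf.ini on the controller:

  [ml2_type_flat]
  flat_networks = physnet1,physnet2,physnet3,physnet4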
Then You can create 4 networks with different physical_network for each of them and agent will know which bridge should be used for ports from each network. It is described e.g. in [1]. [1] https://ask.openstack.org/en/question/94206/how-to-understand-the-bridge_mappings/?answer=94230#post-id-94230 > Wiadomość napisana przez Soheil Pourbafrani w dniu 21.11.2018, o godz. 12:08: > > Hi, > > I installed OpenStack on One node (for test) and creating a flat external network I could connect VMs to the provider network (internet). > > In the production environment, we have 4 NIC on each server and each server (compute node) will run 4 instances. My question is, is it possible to create a separate external network based on each NIC (so 4 external networks) and run each instance using one of them? > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack — Slawek Kaplonski Senior software engineer Red Hat From soheil.ir08 at gmail.com Wed Nov 21 11:38:01 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Wed, 21 Nov 2018 15:08:01 +0330 Subject: [Openstack] [OpenStack][Compute-Neutron] Is it possible to create flat network from different NICs In-Reply-To: <9E359876-421D-466C-A5B6-9F9B94331E74@redhat.com> References: <9E359876-421D-466C-A5B6-9F9B94331E74@redhat.com> Message-ID: Thanks a lot. On Wed, Nov 21, 2018 at 2:51 PM Slawomir Kaplonski wrote: > Hi, > > Yes, that should be possible. You can create 4 bridges on compute node and > add all of them to bridge_mappings config option. > Then You can create 4 networks with different physical_network for each of > them and agent will know which bridge should be used for ports from each > network. > It is described e.g. in [1]. > > [1] > https://ask.openstack.org/en/question/94206/how-to-understand-the-bridge_mappings/?answer=94230#post-id-94230 > > > Wiadomość napisana przez Soheil Pourbafrani w > dniu 21.11.2018, o godz. 12:08: > > > > Hi, > > > > I installed OpenStack on One node (for test) and creating a flat > external network I could connect VMs to the provider network (internet). > > > > In the production environment, we have 4 NIC on each server and each > server (compute node) will run 4 instances. My question is, is it possible > to create a separate external network based on each NIC (so 4 external > networks) and run each instance using one of them? > > _______________________________________________ > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Wed Nov 21 17:07:40 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Wed, 21 Nov 2018 18:07:40 +0100 Subject: [Openstack] [kolla] Berlin summit resume Message-ID: Hi kollagues, During the Berlin Summit kolla team had a few talks and forum discussions, as well as other cross-project related topics [0] First session was ``Kolla project onboarding``, the room was full of people interested in contribute to kolla, many of them already using kolla in production environments whiling to make upstream some work they've done downstream. 
I can say this talk was a total success and we hope to see many new faces during this release putting features and bug fixes into kolla. Slides of the session at [1] Second session was ``Kolla project update``, was a brief resume of what work has been done during rocky release and some items will be implemented in the future. Number of attendees to this session was massive, no more people could enter the room. Slides at [2] Then forum sessions.. First one was ``Kolla user feedback``, many users came over the room. We've notice a big increase in production deployments and some PoC migrating to production soon, many of those environments are huge. Overall the impressions was that kolla is great and don't have any big issue or requirement, ``it works great`` became a common phrase to listen. Here's a resume of the user feedback needs [3] - Improve operational usage for add, remove, change and stop/start nodes and services. - Database backup and recovery - Lack of documentation is the bigger request, users need to read the code to know how to configure other than core/default services - Multi cells_v2 - New services request, cyborg, masakari and tricircle were the most requested - SElinux enabled - More SDN services such as Contrail and calico - Possibility to include user's ansible tasks during deploy as well as support custom config.json - HTTPS for internal networks Second one was about ``kolla for the edge``, we've meet with Edge computing group and others interested in edge deployments to identify what's missing in kolla and where we can help. Things we've identified are: - Kolla seems good at how the service split can be done, tweaking inventory file and config values can deploy independent environments easily. - Missing keystone federation - Glance cache support is not a hard requirement but improves efficiency (already merged) - Multi cells v2 - Multi storage per edge/far-edge - A documentation or architecture reference would be nice to have. Last one was ``kolla for NFV``, few people came over to discuss about NUMA, GPU, SRIOV. Nothing noticiable from this session, mainly was support DPDK for CentOS/RHEL,OracleLinux and few service addition covered by previous discussions. [0] https://etherpad.openstack.org/p/kolla-stein-summit [1] https://es.slideshare.net/EduardoGonzalezGutie/kolla-project-onboarding-openstack-summit-berlin-2018 [2] https://es.slideshare.net/EduardoGonzalezGutie/openstack-kolla-project-update-rocky-release [3] https://etherpad.openstack.org/p/berlin-2018-kolla-user-feedback [4] https://etherpad.openstack.org/p/berlin-2018-kolla-edge [5] https://etherpad.openstack.org/p/berlin-2018-kolla-nfv -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Nov 21 18:44:56 2018 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 21 Nov 2018 18:44:56 +0000 Subject: [Openstack] [openstack-dev] [kolla] Berlin summit resume In-Reply-To: References: Message-ID: Thanks for the write up Eduardo. I thought you and Surya did a good job of presenting and moderating those sessions. Mark On Wed, 21 Nov 2018 at 17:08, Eduardo Gonzalez wrote: > Hi kollagues, > > During the Berlin Summit kolla team had a few talks and forum discussions, > as well as other cross-project related topics [0] > > First session was ``Kolla project onboarding``, the room was full of > people interested in contribute to kolla, many of them already using kolla > in production environments whiling to make upstream some work they've done > downstream. 
I can say this talk was a total success and we hope to see many > new faces during this release putting features and bug fixes into kolla. > Slides of the session at [1] > > Second session was ``Kolla project update``, was a brief resume of what > work has been done during rocky release and some items will be implemented > in the future. Number of attendees to this session was massive, no more > people could enter the room. Slides at [2] > > > Then forum sessions.. > > First one was ``Kolla user feedback``, many users came over the room. > We've notice a big increase in production deployments and some PoC > migrating to production soon, many of those environments are huge. > Overall the impressions was that kolla is great and don't have any big > issue or requirement, ``it works great`` became a common phrase to listen. > Here's a resume of the user feedback needs [3] > > - Improve operational usage for add, remove, change and stop/start nodes > and services. > - Database backup and recovery > - Lack of documentation is the bigger request, users need to read the code > to know how to configure other than core/default services > - Multi cells_v2 > - New services request, cyborg, masakari and tricircle were the most > requested > - SElinux enabled > - More SDN services such as Contrail and calico > - Possibility to include user's ansible tasks during deploy as well as > support custom config.json > - HTTPS for internal networks > > Second one was about ``kolla for the edge``, we've meet with Edge > computing group and others interested in edge deployments to identify > what's missing in kolla and where we can help. > Things we've identified are: > > - Kolla seems good at how the service split can be done, tweaking > inventory file and config values can deploy independent environments easily. > - Missing keystone federation > - Glance cache support is not a hard requirement but improves efficiency > (already merged) > - Multi cells v2 > - Multi storage per edge/far-edge > - A documentation or architecture reference would be nice to have. > > Last one was ``kolla for NFV``, few people came over to discuss about > NUMA, GPU, SRIOV. > Nothing noticiable from this session, mainly was support DPDK for > CentOS/RHEL,OracleLinux and few service addition covered by previous > discussions. > > [0] https://etherpad.openstack.org/p/kolla-stein-summit > [1] > https://es.slideshare.net/EduardoGonzalezGutie/kolla-project-onboarding-openstack-summit-berlin-2018 > [2] > https://es.slideshare.net/EduardoGonzalezGutie/openstack-kolla-project-update-rocky-release > [3] https://etherpad.openstack.org/p/berlin-2018-kolla-user-feedback > [4] https://etherpad.openstack.org/p/berlin-2018-kolla-edge > [5] https://etherpad.openstack.org/p/berlin-2018-kolla-nfv > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkliczew at redhat.com Thu Nov 22 15:23:31 2018 From: pkliczew at redhat.com (Piotr Kliczewski) Date: Thu, 22 Nov 2018 16:23:31 +0100 Subject: [Openstack] =?utf-8?q?Fwd=3A__FOSDEM=E2=80=9819_Virtualization_?= =?utf-8?q?=26_IaaS_Devroom_CfP?= In-Reply-To: References: Message-ID: A friendly reminder that Cfp is due by 1st of December. 
Please submit your proposal using: https://penta.fosdem.org/submission/FOSDEM19 ---------- Forwarded message --------- From: Piotr Kliczewski Date: Wed, Oct 17, 2018 at 9:41 AM Subject: [Openstack] FOSDEM‘19 Virtualization & IaaS Devroom CfP To: We are excited to announce that the call for proposals is now open for the Virtualization & IaaS devroom at the upcoming FOSDEM 2019, to be hosted on February 2nd 2019. This year will mark FOSDEM’s 19th anniversary as one of the longest-running free and open source software developer events, attracting thousands of developers and users from all over the world. FOSDEM will be held once again in Brussels, Belgium, on February 2nd & 3rd, 2019. This devroom is a collaborative effort, and is organized by dedicated folks from projects such as OpenStack, Xen Project, oVirt, QEMU, KVM, and Foreman. We would like to invite all those who are involved in these fields to submit your proposals by December 1st, 2018. About the Devroom The Virtualization & IaaS devroom will feature session topics such as open source hypervisors and virtual machine managers such as Xen Project, KVM, bhyve, and VirtualBox, and Infrastructure-as-a-Service projects such as KubeVirt, Apache CloudStack, OpenStack, oVirt, QEMU and OpenNebula. This devroom will host presentations that focus on topics of shared interest, such as KVM; libvirt; shared storage; virtualized networking; cloud security; clustering and high availability; interfacing with multiple hypervisors; hyperconverged deployments; and scaling across hundreds or thousands of servers. Presentations in this devroom will be aimed at developers working on these platforms who are looking to collaborate and improve shared infrastructure or solve common problems. We seek topics that encourage dialog between projects and continued work post-FOSDEM. Important Dates Submission deadline: 1 December 2019 Acceptance notifications: 14 December 2019 Final schedule announcement: 21 December 2019 Devroom: 2nd February 2019 Submit Your Proposal All submissions must be made via the Pentabarf event planning site[1]. If you have not used Pentabarf before, you will need to create an account. If you submitted proposals for FOSDEM in previous years, you can use your existing account. After creating the account, select Create Event to start the submission process. Make sure to select Virtualization and IaaS devroom from the Track list. Please fill out all the required fields, and provide a meaningful abstract and description of your proposed session. Submission Guidelines We expect more proposals than we can possibly accept, so it is vitally important that you submit your proposal on or before the deadline. Late submissions are unlikely to be considered. All presentation slots are 30 minutes, with 20 minutes planned for presentations, and 10 minutes for Q&A. All presentations will be recorded and made available under Creative Commons licenses. In the Submission notes field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded. For example: "If my presentation is accepted for FOSDEM, I hereby agree to license all recordings, slides, and other associated materials under the Creative Commons Attribution Share-Alike 4.0 International License. Sincerely, ." In the Submission notes field, please also confirm that if your talk is accepted, you will be able to attend FOSDEM and deliver your presentation. 
We will not consider proposals from prospective speakers who are unsure whether they will be able to secure funds for travel and lodging to attend FOSDEM. (Sadly, we are not able to offer travel funding for prospective speakers.) Speaker Mentoring Program As a part of the rising efforts to grow our communities and encourage a diverse and inclusive conference ecosystem, we're happy to announce that we'll be offering mentoring for new speakers. Our mentors can help you with tasks such as reviewing your abstract, reviewing your presentation outline or slides, or practicing your talk with you. You may apply to the mentoring program as a newcomer speaker if you: Never presented before or Presented only lightning talks or Presented full-length talks at small meetups (<50 ppl) Submission Guidelines Mentored presentations will have 25-minute slots, where 20 minutes will include the presentation and 5 minutes will be reserved for questions. The number of newcomer session slots is limited, so we will probably not be able to accept all applications. You must submit your talk and abstract to apply for the mentoring program, our mentors are volunteering their time and will happily provide feedback but won't write your presentation for you! If you are experiencing problems with Pentabarf, the proposal submission interface, or have other questions, you can email our devroom mailing list[2] and we will try to help you. How to Apply In addition to agreeing to video recording and confirming that you can attend FOSDEM in case your session is accepted, please write "speaker mentoring program application" in the "Submission notes" field, and list any prior speaking experience or other relevant information for your application. Call for Mentors Interested in mentoring newcomer speakers? We'd love to have your help! Please email iaas-virt-devroom at lists.fosdem.org with a short speaker biography and any specific fields of expertise (for example, KVM, OpenStack, storage, etc.) so that we can match you with a newcomer speaker from a similar field. Estimated time investment can be as low as a 5-10 hours in total, usually distributed weekly or bi-weekly. Never mentored a newcomer speaker but interested to try? As the mentoring program coordinator, email Brian Proffitt[3] and he will be happy to answer your questions! Code of Conduct Following the release of the updated code of conduct for FOSDEM, we'd like to remind all speakers and attendees that all of the presentations and discussions in our devroom are held under the guidelines set in the CoC and we expect attendees, speakers, and volunteers to follow the CoC at all times. If you submit a proposal and it is accepted, you will be required to confirm that you accept the FOSDEM CoC. If you have any questions about the CoC or wish to have one of the devroom organizers review your presentation slides or any other content for CoC compliance, please email us and we will do our best to assist you. Call for Volunteers We are also looking for volunteers to help run the devroom. We need assistance watching time for the speakers, and helping with video for the devroom. Please contact Brian Proffitt, for more information. Questions? If you have any questions about this devroom, please send your questions to our devroom mailing list. You can also subscribe to the list to receive updates about important dates, session announcements, and to connect with other attendees. See you all at FOSDEM! 
[1] https://penta.fosdem.org/submission/FOSDEM19 [2] iaas-virt-devroom at lists.fosdem.org [3] bkp at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Thu Nov 22 19:38:47 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Thu, 22 Nov 2018 23:08:47 +0330 Subject: [Openstack] [PackStack][Cinder]Save Volumes in The Compute nodes In-Reply-To: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> References: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> Message-ID: Thanks! and Does it need to cinder module to be installed on compute nodes? On Sat, Nov 17, 2018 at 1:13 PM Bernd Bausch wrote: > Yes. You need to enable Cinder’s InstanceLocalityFilter, see > > https://docs.openstack.org/cinder/latest/configuration/block-storage/scheduler-filters.html > . > > Here a tip: > https://ask.openstack.org/en/question/92001/cinder-lvm-volume-local-to-instance/ > > Bernd > > On Nov 17, 2018, at 8:38, Soheil Pourbafrani > wrote: > > Hi, > I have 8 servers with just local HDD disk. I want to use one server as the > controller and network node and the other (7 servers) as the compute node. > > I was wondering if it's possible to install PackStack that every compute > node to store its volumes in it's HDD local disk? (I guess the Cinder > should be installed on every Compute node alongside other settings) > > Thanks > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Fri Nov 23 22:41:36 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Sat, 24 Nov 2018 02:11:36 +0330 Subject: [Openstack] [OpenStack][Glance]Getting instance snapshot result in size 0 byte Message-ID: Hi, I have many instances running on OpenStack and I wanted to export them. So I create a snapshot of an instance and it results in a new record in images in the format of Qcow2 and size of 0 bytes! It just created a volume snapshot of the instance, too. I tried with both command line and horizon but the same results! How can I export instances in the correct way? -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Sat Nov 24 15:04:07 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Sat, 24 Nov 2018 18:34:07 +0330 Subject: [Openstack] [OpenStack][Keystone] the controller is node listening on port 5000 Message-ID: Hi, I installed keystone step by step according to the OpenStack Doc but the http server is not listening on the port 5000! The only difference is my hostname (address) is "controller.a.b" instead of "controller". In verifying part of the document I could do all (like creating a user, project, ....) but after installing glance, I found out the Glance API is not running properly and it's because of Keystone is not listening on port 5000! 
Here are AUTH variables: export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=admin export OS_USERNAME=admin export OS_PASSWORD=XXX export OS_AUTH_URL=http://controller.a.b:5000/v3 export OS_IDENTITY_API_VERSION=3 export OS_IMAGE_API_VERSION=2 When I try "glance image-list", I got: "http://controller.a.b:9292/v2/images?limit=20&sort_key=name&sort_dir=asc: Unable to establish connection" Here is TCP listening ports: tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1614/master tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 961/python2 tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 957/beam.smp tcp 0 0 192.168.0.31:3306 0.0.0.0:* LISTEN 1622/mysqld tcp 0 0 192.168.0.31:2379 0.0.0.0:* LISTEN 963/etcd tcp 0 0 192.168.0.31:11211 0.0.0.0:* LISTEN 952/memcached tcp 0 0 127.0.0.1:11211 0.0.0.0:* LISTEN 952/memcached tcp 0 0 192.168.0.31:2380 0.0.0.0:* LISTEN 963/etcd tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN 1/systemd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 953/sshd tcp6 0 0 ::1:25 :::* LISTEN 1614/master tcp6 0 0 :::5000 :::* LISTEN 960/httpd tcp6 0 0 :::5672 :::* LISTEN 957/beam.smp tcp6 0 0 ::1:11211 :::* LISTEN 952/memcached tcp6 0 0 :::80 :::* LISTEN 960/httpd tcp6 0 0 :::22 :::* LISTEN 953/sshd As you can see httpd is listening on port 5000 IPV6, but it's not active for IPV4! Here is keystone.log: 2018-11-24 15:55:59.685 17325 INFO keystone.common.wsgi [req-9fedd505-5d98-4c12-a6ad-967f28c766e7 - - - - -] POST http://controller.a.b:5000/v3/auth/tokens 2018-11-24 15:56:00.275 17327 INFO keystone.common.wsgi [req-ca93da87-1d9f-4611-835c-d6972054b8e1 - - - - -] POST http://controller.a.b:5000/v3/auth/tokens 2018-11-24 15:56:00.929 17328 INFO keystone.common.wsgi [req-cf8c6af8-342d-4b14-9826-ba27e5f2897a 1b96f6b67f08495092e98a9b60476152 b33579097090499499625154e92724ee - default default] GET http://controller.a.b:5000/v3/domains/default 2018-11-24 15:56:14.753 17325 INFO keystone.common.wsgi [req-cca0e4a5-0ef9-40d8-a2f4-56c1f0acafcc 1b96f6b67f08495092e98a9b60476152 b33579097090499499625154e92724ee - default default] POST http://controller.a.b:5000/v3/users 2018-11-24 15:56:14.773 17325 WARNING py.warnings [req-cca0e4a5-0ef9-40d8-a2f4-56c1f0acafcc 1b96f6b67f08495092e98a9b60476152 b33579097090499499625154e92724ee - default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:create_user failed scope check. The token used to make the request was project scoped but the policy requires ['system'] scope. This behavior may change in the future where using the intended scope is required warnings.warn(msg) Another difference between my system and document is that I use static IP and here are interface settings: TYPE="Ethernet" PROXY_METHOD="none" BROWSER_ONLY="no" BOOTPROTO="none" DEFROUTE="yes" IPV4_FAILURE_FATAL="no" IPV6INIT="yes" IPV6_AUTOCONF="yes" IPV6_DEFROUTE="yes" IPV6_FAILURE_FATAL="no" IPV6_ADDR_GEN_MODE="stable-privacy" NAME="enp2s0" UUID="62aa5ce8-64da-41b4-abd3-293fd43584fa" DEVICE="enp2s0" ONBOOT="yes" IPADDR="192.168.0.X" PREFIX="24" GATEWAY="192.168.0.1" DNS1="192.168.0.1" DNS2="8.8.8.8" IPV6_PRIVACY="no" What can be the cause? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Sat Nov 24 15:05:16 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 24 Nov 2018 10:05:16 -0500 Subject: [Openstack] [OpenStack][Glance]Getting instance snapshot result in size 0 byte In-Reply-To: References: Message-ID: Are they boot from volume instances? 
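For context on why that question matters: when an instance boots from a volume, a Nova snapshot produces a zero-byte placeholder image whose block device mapping points at newly created Cinder volume snapshots, so a 0-byte Qcow2 record plus a volume snapshot is the expected result rather than a failure. A rough way to check, and to export the root disk instead (server, volume and image names are placeholders):

  openstack server show my-server                             # an empty "image" field usually means boot-from-volume
  openstack volume list                                       # identify the instance's root volume
  cinder upload-to-image <root-volume-id> my-exported-image   # copy the volume contents into a Glance image

The volume generally has to be detached, or cloned first, before uploading it to Glance.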
On Fri, Nov 23, 2018 at 5:57 PM Soheil Pourbafrani wrote: > Hi, > > I have many instances running on OpenStack and I wanted to export them. So > I create a snapshot of an instance and it results in a new record in images > in the format of Qcow2 and size of 0 bytes! It just created a volume > snapshot of the instance, too. > > I tried with both command line and horizon but the same results! > > How can I export instances in the correct way? > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From qishiyexu2 at 126.com Sun Nov 25 07:00:31 2018 From: qishiyexu2 at 126.com (=?GBK?B?s8Ke3Q==?=) Date: Sun, 25 Nov 2018 15:00:31 +0800 (CST) Subject: [Openstack] [openstack][neutron]Iptables snat not work when securitygroup is on Message-ID: <2c4de9d2.18c0.16749ab70c2.Coremail.qishiyexu2@126.com> Hi, I am building an OpenStack all-in-one environment on a CentOS 7.4 machine. For some reason I have only one network interface (eth0) and one IP address, so I created a Linux bridge (br0) and forwarded traffic to eth0 using an iptables command: iptables -t nat -A POSTROUTING -s {bridge virtual ip} -j SNAT --to {eth0 ip} But it does not seem to work. When I ping 8.8.8.8 from br0 and run tcpdump, I can see that packets are forwarded to eth0 and sent to 8.8.8.8, but when the replies come back to eth0 they are not forwarded to br0. IP forwarding, net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables are set to 1. If I disable the security group by setting securitygroup = false, this rule works fine, but if I use iptables -F instead, the rule does not work. Does the security group do some magic that traps iptables? BR Don -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Mon Nov 26 05:42:20 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Mon, 26 Nov 2018 09:12:20 +0330 Subject: [Openstack] [OpenStack][Cinder] Config Volumes to Store on Compute nodes Message-ID: Hi, I want to configure Cinder to store volumes on the compute nodes. I know I should use the InstanceLocalityFilter. Suppose we have one controller and two compute nodes; should I install Cinder on each node (controller and compute)? I would appreciate any instructions on this. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Mon Nov 26 06:32:08 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Mon, 26 Nov 2018 10:02:08 +0330 Subject: [Openstack] [OpenStack][Cinder]How config cinder.conf [nova] part for InstanceLocalityFilter Message-ID: Hi, I want to enable the InstanceLocalityFilter for Cinder. I set scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocality But the documentation says we should specify an account with privileged rights for Nova in the Cinder configuration (configure a keystone authentication plugin in the [nova] section), but I can't find an example of such a configuration. Can anyone give an example?
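A minimal sketch of such a section, assuming a Queens/Rocky-era Cinder where the [nova] group accepts the standard keystoneauth options (the auth URL and password are placeholders and must match the nova service user of your deployment; note that the full filter class name is InstanceLocalityFilter):

  [nova]
  region_name = RegionOne
  auth_type = password
  auth_url = http://controller:5000/v3
  username = nova
  password = NOVA_SERVICE_PASSWORD
  project_name = service
  project_domain_name = Default
  user_domain_name = Default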
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jpena at redhat.com Mon Nov 26 09:36:09 2018 From: jpena at redhat.com (Javier Pena) Date: Mon, 26 Nov 2018 04:36:09 -0500 (EST) Subject: [Openstack] [PackStack][Cinder]Save Volumes in The Compute nodes In-Reply-To: References: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> Message-ID: <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> ----- Original Message ----- > Thanks! and Does it need to cinder module to be installed on compute nodes? You will need to run the cinder-volume service in the compute nodes. You will have to set it up manually, though, because Packstack only installs it on the controller. Regards, Javier > On Sat, Nov 17, 2018 at 1:13 PM Bernd Bausch < berndbausch at gmail.com > wrote: > > Yes. You need to enable Cinder’s InstanceLocalityFilter, see > > > https://docs.openstack.org/cinder/latest/configuration/block-storage/scheduler-filters.html > > . > > > Here a tip: > > https://ask.openstack.org/en/question/92001/cinder-lvm-volume-local-to-instance/ > > > Bernd > > > On Nov 17, 2018, at 8:38, Soheil Pourbafrani < soheil.ir08 at gmail.com > > > wrote: > > > > Hi, > > > > > > I have 8 servers with just local HDD disk. I want to use one server as > > > the > > > controller and network node and the other (7 servers) as the compute > > > node. > > > > > > I was wondering if it's possible to install PackStack that every compute > > > node > > > to store its volumes in it's HDD local disk? (I guess the Cinder should > > > be > > > installed on every Compute node alongside other settings) > > > > > > Thanks > > > > > > _______________________________________________ > > > > > > Mailing list: > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > > > Post to : openstack at lists.openstack.org > > > > > > Unsubscribe : > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Mon Nov 26 10:07:31 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Mon, 26 Nov 2018 13:37:31 +0330 Subject: [Openstack] [PackStack][Cinder]Save Volumes in The Compute nodes In-Reply-To: <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> References: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> Message-ID: Thanks a lot! On Mon, Nov 26, 2018 at 1:06 PM Javier Pena wrote: > > > ------------------------------ > > Thanks! and Does it need to cinder module to be installed on compute nodes? > > > You will need to run the cinder-volume service in the compute nodes. You > will have to set it up manually, though, because Packstack only installs it > on the controller. > > Regards, > Javier > > On Sat, Nov 17, 2018 at 1:13 PM Bernd Bausch > wrote: > >> Yes. You need to enable Cinder’s InstanceLocalityFilter, see >> >> https://docs.openstack.org/cinder/latest/configuration/block-storage/scheduler-filters.html >> . >> >> Here a tip: >> https://ask.openstack.org/en/question/92001/cinder-lvm-volume-local-to-instance/ >> >> Bernd >> >> On Nov 17, 2018, at 8:38, Soheil Pourbafrani >> wrote: >> >> Hi, >> I have 8 servers with just local HDD disk. 
I want to use one server as >> the controller and network node and the other (7 servers) as the compute >> node. >> >> I was wondering if it's possible to install PackStack that every compute >> node to store its volumes in it's HDD local disk? (I guess the Cinder >> should be installed on every Compute node alongside other settings) >> >> Thanks >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> >> > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Mon Nov 26 10:45:33 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Mon, 26 Nov 2018 17:45:33 +0700 Subject: [Openstack] unexpected distribution of compute instances in queens Message-ID: Hi, I am deploying OpenStack with 3 compute node, but I am seeing an abnormal distribution of instance, the instance is only deployed in a specific compute node, and not distribute among other compute node. this is my nova.conf from the compute node. (template jinja2 based) [DEFAULT] osapi_compute_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1'][ 'ipv4']['address'] }} metadata_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][ 'address'] }} enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672 my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} use_neutron = True firewall_driver = nova.virt.firewall.NoopFirewallDriver [api] auth_strategy = keystone [api_database] connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api [barbican] [cache] backend=oslo_cache.memcache_pool enabled=true memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 [cells] [cinder] os_region_name = RegionOne [compute] [conductor] [console] [consoleauth] [cors] [crypto] [database] connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova [devices] [ephemeral_storage_encryption] [filter_scheduler] [glance] api_servers = http://{{ vip }}:9292 [guestfs] [healthcheck] [hyperv] [ironic] [key_manager] [keystone] [keystone_authtoken] auth_url = http://{{ vip }}:5000/v3 memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = nova password = {{ nova_pw }} [libvirt] [matchmaker_redis] [metrics] [mks] [neutron] url = http://{{ vip }}:9696 auth_url = http://{{ vip }}:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = {{ neutron_pw }} service_metadata_proxy = true metadata_proxy_shared_secret = {{ metadata_secret }} [notifications] [osapi_v21] [oslo_concurrency] lock_path = /var/lib/nova/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] 
[oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [pci] [placement] os_region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://{{ vip }}:5000/v3 username = placement password = {{ placement_pw }} [quota] [rdp] [remote_debug] [scheduler] discover_hosts_in_cells_interval = 300 [serial_console] [service_user] [spice] [upgrade_levels] [vault] [vendordata_dynamic_auth] [vmware] [vnc] enabled = true keymap=en-us novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html novncproxy_host = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][ 'address'] }} [workarounds] [wsgi] [xenserver] [xvp] [placement_database] connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement what is the problem? I have lookup the openstack-nova-scheduler in the controller node but it's running well with only warning nova-scheduler[19255]: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported the result I want is the instance is distributed in all compute node. Thank you. -- *Regards,* *Zufar Dhiyaulhaq* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken at jots.org Mon Nov 26 14:32:51 2018 From: ken at jots.org (Ken D'Ambrosio) Date: Mon, 26 Nov 2018 09:32:51 -0500 Subject: [Openstack] Way to see VMs under all tenants by non-admin? Message-ID: Hey, all. I've had a request for a non-admin user to see all the VMs currently running, irrespective of project. I've gone through the policy.json file (this is Juno) and enabled everything I could think of that seemed appropriate, to no avail. Is there any way to do this without granting him flat-out admin? Thanks! -Ken From mnaser at vexxhost.com Mon Nov 26 15:30:47 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 26 Nov 2018 10:30:47 -0500 Subject: [Openstack] Way to see VMs under all tenants by non-admin? In-Reply-To: References: Message-ID: Hi Ken: https://github.com/openstack/nova/blob/juno-eol/nova/api/openstack/compute/servers.py#L588-L590 Good luck (with your upgrades ;)) Mohammed On Mon, Nov 26, 2018 at 9:39 AM Ken D'Ambrosio wrote: > Hey, all. I've had a request for a non-admin user to see all the VMs > currently running, irrespective of project. I've gone through the > policy.json file (this is Juno) and enabled everything I could think of > that seemed appropriate, to no avail. Is there any way to do this > without granting him flat-out admin? > > Thanks! > > -Ken > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken at jots.org Mon Nov 26 16:07:46 2018 From: ken at jots.org (Ken D'Ambrosio) Date: Mon, 26 Nov 2018 11:07:46 -0500 Subject: [Openstack] Way to see VMs under all tenants by non-admin? 
In-Reply-To: References: Message-ID: <70a159872f99aa58e3c73c9b70c922ca@jots.org> On 2018-11-26 10:30, Mohammed Naser wrote: > Hi Ken: > > https://github.com/openstack/nova/blob/juno-eol/nova/api/openstack/compute/servers.py#L588-L590 OK, I feel kinda dumb, but I never realized I could go and search for policy.json policy in the pertinent Python files. That's awesome! Doesn't exactly help me now, but will certainly come in handy in the future. Thanks, -Ken > Good luck (with your upgrades ;)) > > Mohammed > > On Mon, Nov 26, 2018 at 9:39 AM Ken D'Ambrosio wrote: > >> Hey, all. I've had a request for a non-admin user to see all the VMs >> currently running, irrespective of project. I've gone through the >> policy.json file (this is Juno) and enabled everything I could think of >> that seemed appropriate, to no avail. Is there any way to do this >> without granting him flat-out admin? >> >> Thanks! >> >> -Ken >> >> _______________________________________________ >> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -- > Mohammed Naser -- vexxhost > > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com [1] Links: ------ [1] http://vexxhost.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Nov 26 16:13:33 2018 From: smooney at redhat.com (Sean Mooney) Date: Mon, 26 Nov 2018 16:13:33 +0000 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: Message-ID: On Mon, 2018-11-26 at 17:45 +0700, Zufar Dhiyaulhaq wrote: > Hi, > > I am deploying OpenStack with 3 compute node, but I am seeing an abnormal distribution of instance, the instance is > only deployed in a specific compute node, and not distribute among other compute node. > > this is my nova.conf from the compute node. (template jinja2 based) Hi, the default behavior of Nova used to be spread, not pack, and I believe it still is. The default behavior with placement, however, is closer to packing, because allocation candidates are returned in an undefined but deterministic order. On a busy cloud this does not strictly pack instances, but on a quiet cloud it effectively does. You can try enabling randomisation of the allocation candidates by setting this config option to true in the nova.conf of the scheduler: https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates On that note, can you provide the nova.conf used by the scheduler instead of the compute node nova.conf? If you have not overridden any of the Nova defaults, the RAM and CPU weighers should spread instances within the allocation candidates returned by placement.
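For example, a minimal scheduler-side snippet (just a sketch -- assuming it goes into the nova.conf read by nova-scheduler on your controller nodes) would look something like:

[placement]
randomize_allocation_candidates = true

with the nova-scheduler service restarted afterwards so the new value takes effect.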
> > [DEFAULT] > osapi_compute_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > metadata_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > enabled_apis = osapi_compute,metadata > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ > controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672 > my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > use_neutron = True > firewall_driver = nova.virt.firewall.NoopFirewallDriver > [api] > auth_strategy = keystone > [api_database] > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api > [barbican] > [cache] > backend=oslo_cache.memcache_pool > enabled=true > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 > [cells] > [cinder] > os_region_name = RegionOne > [compute] > [conductor] > [console] > [consoleauth] > [cors] > [crypto] > [database] > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova > [devices] > [ephemeral_storage_encryption] > [filter_scheduler] > [glance] > api_servers = http://{{ vip }}:9292 > [guestfs] > [healthcheck] > [hyperv] > [ironic] > [key_manager] > [keystone] > [keystone_authtoken] > auth_url = http://{{ vip }}:5000/v3 > memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 > auth_type = password > project_domain_name = default > user_domain_name = default > project_name = service > username = nova > password = {{ nova_pw }} > [libvirt] > [matchmaker_redis] > [metrics] > [mks] > [neutron] > url = http://{{ vip }}:9696 > auth_url = http://{{ vip }}:35357 > auth_type = password > project_domain_name = default > user_domain_name = default > region_name = RegionOne > project_name = service > username = neutron > password = {{ neutron_pw }} > service_metadata_proxy = true > metadata_proxy_shared_secret = {{ metadata_secret }} > [notifications] > [osapi_v21] > [oslo_concurrency] > lock_path = /var/lib/nova/tmp > [oslo_messaging_amqp] > [oslo_messaging_kafka] > [oslo_messaging_notifications] > [oslo_messaging_rabbit] > [oslo_messaging_zmq] > [oslo_middleware] > [oslo_policy] > [pci] > [placement] > os_region_name = RegionOne > project_domain_name = Default > project_name = service > auth_type = password > user_domain_name = Default > auth_url = http://{{ vip }}:5000/v3 > username = placement > password = {{ placement_pw }} > [quota] > [rdp] > [remote_debug] > [scheduler] > discover_hosts_in_cells_interval = 300 > [serial_console] > [service_user] > [spice] > [upgrade_levels] > [vault] > [vendordata_dynamic_auth] > [vmware] > [vnc] > enabled = true > keymap=en-us > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html > novncproxy_host = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > [workarounds] > [wsgi] > [xenserver] > [xvp] > [placement_database] > connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement > > what is the problem? I have lookup the openstack-nova-scheduler in the controller node but it's running well with only > warning > > nova-scheduler[19255]: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: > Configuration option(s) ['use_tpool'] not supported > > the result I want is the instance is distributed in all compute node. > Thank you. 
> > _______________________________________________ > Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack From jimmy at openstack.org Mon Nov 26 18:01:25 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 26 Nov 2018 12:01:25 -0600 Subject: [Openstack] my ask.openstack.org question has waited for moderator approval for 17 days In-Reply-To: References: Message-ID: <5BFC34F5.2080108@openstack.org> Hi all - Just a quick follow up on this... I just discovered a quirky UX bug that might explain some frustration people have been having with the moderation. There were messages dating back to March 2018. Basically, when you click to approve/disapprove/etc... it appears that the queue is cleared. But if you go to the menu and go back to the moderation page (or just reload), you get a whole new batch. My guess is moderators have been thinking they're clearing things out, but not. I know that's what happened to me multiple times over a period of many months. I've since really and truly cleared out the queue. I found about 25 users that were spammers and blocked all of them. I also auto-approved anyone that was a valid poster so they can post without moderation moving forward. Ask.openstack.org is still a valued tool for our community and another way for people to engage outside of the mls. It's full of not just valid questions, but a lot of valid answers. I highly encourage those that are curious about becoming a moderator to check it out and let me know. I'm happy to elevate your user to a moderator if you want to contribute. Cheers, Jimmy > Bernd Bausch > October 29, 2018 at 6:16 PM > If there is a shortage of moderators, I volunteer. > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Jacob Burckhardt > October 29, 2018 at 6:03 PM > My question on ask.openstack.org says "This post is awaiting moderation". > > It has been like that for 17 days. If you are a moderator, I'd > appreciate if you'd decide whether to publicly post my question: > > https://ask.openstack.org/en/question/116591/is-there-example-of-using-python-zunclient-api/ > > Thanks. > > -Jacob Burckhardt > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Mon Nov 26 18:06:24 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 26 Nov 2018 12:06:24 -0600 Subject: [Openstack] my ask.openstack.org question has waited for moderator approval for 17 days In-Reply-To: <5BFC34F5.2080108@openstack.org> References: <5BFC34F5.2080108@openstack.org> Message-ID: Thanks Jimmy On Mon, Nov 26, 2018 at 12:02 PM Jimmy McArthur wrote: > Hi all - > > Just a quick follow up on this... I just discovered a quirky UX bug that > might explain some frustration people have been having with the > moderation. There were messages dating back to March 2018. Basically, > when you click to approve/disapprove/etc... it appears that the queue is > cleared. 
But if you go to the menu and go back to the moderation page (or > just reload), you get a whole new batch. My guess is moderators have been > thinking they're clearing things out, but not. I know that's what happened > to me multiple times over a period of many months. > > I've since really and truly cleared out the queue. I found about 25 users > that were spammers and blocked all of them. I also auto-approved anyone > that was a valid poster so they can post without moderation moving forward. > > Ask.openstack.org is still a valued tool for our community and another > way for people to engage outside of the mls. It's full of not just valid > questions, but a lot of valid answers. I highly encourage those that are > curious about becoming a moderator to check it out and let me know. I'm > happy to elevate your user to a moderator if you want to contribute. > > Cheers, > Jimmy > > > > Bernd Bausch > October 29, 2018 at 6:16 PM > If there is a shortage of moderators, I volunteer. > > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Jacob Burckhardt > October 29, 2018 at 6:03 PM > My question on ask.openstack.org says "This post is awaiting moderation". > > It has been like that for 17 days. If you are a moderator, I'd appreciate > if you'd decide whether to publicly post my question: > > > https://ask.openstack.org/en/question/116591/is-there-example-of-using-python-zunclient-api/ > > Thanks. > > -Jacob Burckhardt > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Tue Nov 27 09:55:08 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Tue, 27 Nov 2018 16:55:08 +0700 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: Message-ID: Hi Smooney, thank you for your help. I am trying to enable randomization but not working. The instance I have created is still in the same node. Below is my nova configuration (added randomization from your suggestion) from the master node (Template jinja2 based). 
[DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672 my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} use_neutron = True firewall_driver = nova.virt.firewall.NoopFirewallDriver [api] auth_strategy = keystone [api_database] [barbican] [cache] backend=oslo_cache.memcache_pool enabled=true memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 [cells] [cinder] [compute] [conductor] [console] [consoleauth] [cors] [crypto] [database] [devices] [ephemeral_storage_encryption] [filter_scheduler] [glance] api_servers = http://{{ vip }}:9292 [guestfs] [healthcheck] [hyperv] [ironic] [key_manager] [keystone] [keystone_authtoken] auth_url = http://{{ vip }}:5000/v3 memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = nova password = {{ nova_pw }} [libvirt] virt_type = kvm [matchmaker_redis] [metrics] [mks] [neutron] url = http://{{ vip }}:9696 auth_url = http://{{ vip }}:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = {{ neutron_pw }} [notifications] [osapi_v21] [oslo_concurrency] lock_path = /var/lib/nova/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [pci] [placement] os_region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://{{ vip }}:5000/v3 username = placement password = {{ placement_pw }} [quota] [rdp] [remote_debug] [scheduler] [serial_console] [service_user] [spice] [upgrade_levels] [vault] [vendordata_dynamic_auth] [vmware] [vnc] enabled = True keymap=en-us server_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][ 'address'] }} server_proxyclient_address = {{ hostvars[inventory_hostname][ 'ansible_ens3f1']['ipv4']['address'] }} novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html [workarounds] [wsgi] [xenserver] [xvp] Thank you, Best Regards, Zufar Dhiyaulhaq On Mon, Nov 26, 2018 at 11:13 PM Sean Mooney wrote: > On Mon, 2018-11-26 at 17:45 +0700, Zufar Dhiyaulhaq wrote: > > Hi, > > > > I am deploying OpenStack with 3 compute node, but I am seeing an > abnormal distribution of instance, the instance is > > only deployed in a specific compute node, and not distribute among other > compute node. > > > > this is my nova.conf from the compute node. (template jinja2 based) > > hi, the default behavior of nova used to be spread not pack and i belive > it still is. > the default behavior with placement however is closer to a packing > behavior as > allcoation candiates are retrunidn in an undefined but deterministic order. > > on a busy cloud this does not strictly pack instaces but on a quite cloud > it effectivly does > > you can try and enable randomisation of the allocation candiates by > setting this config option in > the nova.conf of the shcduler to true. 
> > https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates > > on that note can you provide the nova.conf for the schduelr is used > instead of the compute node nova.conf. > if you have not overriden any of the nova defaults the ram and cpu weigher > should spread instances withing > the allocation candiates returned by placement. > > > > > [DEFAULT] > > osapi_compute_listen = {{ > hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > > metadata_listen = {{ > hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > > enabled_apis = osapi_compute,metadata > > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ > controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ > > controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ > controller3_ip_man }}:5672 > > my_ip = {{ > hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > > use_neutron = True > > firewall_driver = nova.virt.firewall.NoopFirewallDriver > > [api] > > auth_strategy = keystone > > [api_database] > > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api > > [barbican] > > [cache] > > backend=oslo_cache.memcache_pool > > enabled=true > > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man > }}:11211,{{ controller3_ip_man }}:11211 > > [cells] > > [cinder] > > os_region_name = RegionOne > > [compute] > > [conductor] > > [console] > > [consoleauth] > > [cors] > > [crypto] > > [database] > > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova > > [devices] > > [ephemeral_storage_encryption] > > [filter_scheduler] > > [glance] > > api_servers = http://{{ vip }}:9292 > > [guestfs] > > [healthcheck] > > [hyperv] > > [ironic] > > [key_manager] > > [keystone] > > [keystone_authtoken] > > auth_url = http://{{ vip }}:5000/v3 > > memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man > }}:11211,{{ controller3_ip_man }}:11211 > > auth_type = password > > project_domain_name = default > > user_domain_name = default > > project_name = service > > username = nova > > password = {{ nova_pw }} > > [libvirt] > > [matchmaker_redis] > > [metrics] > > [mks] > > [neutron] > > url = http://{{ vip }}:9696 > > auth_url = http://{{ vip }}:35357 > > auth_type = password > > project_domain_name = default > > user_domain_name = default > > region_name = RegionOne > > project_name = service > > username = neutron > > password = {{ neutron_pw }} > > service_metadata_proxy = true > > metadata_proxy_shared_secret = {{ metadata_secret }} > > [notifications] > > [osapi_v21] > > [oslo_concurrency] > > lock_path = /var/lib/nova/tmp > > [oslo_messaging_amqp] > > [oslo_messaging_kafka] > > [oslo_messaging_notifications] > > [oslo_messaging_rabbit] > > [oslo_messaging_zmq] > > [oslo_middleware] > > [oslo_policy] > > [pci] > > [placement] > > os_region_name = RegionOne > > project_domain_name = Default > > project_name = service > > auth_type = password > > user_domain_name = Default > > auth_url = http://{{ vip }}:5000/v3 > > username = placement > > password = {{ placement_pw }} > > [quota] > > [rdp] > > [remote_debug] > > [scheduler] > > discover_hosts_in_cells_interval = 300 > > [serial_console] > > [service_user] > > [spice] > > [upgrade_levels] > > [vault] > > [vendordata_dynamic_auth] > > [vmware] > > [vnc] > > enabled = true > > keymap=en-us > > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html > > novncproxy_host = {{ > 
hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > > [workarounds] > > [wsgi] > > [xenserver] > > [xvp] > > [placement_database] > > connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement > > > > what is the problem? I have lookup the openstack-nova-scheduler in the > controller node but it's running well with only > > warning > > > > nova-scheduler[19255]: > /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: > NotSupportedWarning: > > Configuration option(s) ['use_tpool'] not supported > > > > the result I want is the instance is distributed in all compute node. > > Thank you. > > > > _______________________________________________ > > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > Post to : openstack at lists.openstack.org > > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Tue Nov 27 10:01:19 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Tue, 27 Nov 2018 17:01:19 +0700 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: Message-ID: Hi Smooney, sorry for the last reply. I am attaching wrong configuration files. This is my nova configuration (added randomization from your suggestion) from the master node (Template jinja2 based). [DEFAULT] osapi_compute_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} metadata_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672 my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} use_neutron = True firewall_driver = nova.virt.firewall.NoopFirewallDriver [api] auth_strategy = keystone [api_database] connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api [barbican] [cache] backend=oslo_cache.memcache_pool enabled=true memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 [cells] [cinder] os_region_name = RegionOne [compute] [conductor] [console] [consoleauth] [cors] [crypto] [database] connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova [devices] [ephemeral_storage_encryption] [filter_scheduler] [glance] api_servers = http://{{ vip }}:9292 [guestfs] [healthcheck] [hyperv] [ironic] [key_manager] [keystone] [keystone_authtoken] auth_url = http://{{ vip }}:5000/v3 memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = nova password = {{ nova_pw }} [libvirt] [matchmaker_redis] [metrics] [mks] [neutron] url = http://{{ vip }}:9696 auth_url = http://{{ vip }}:35357 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = {{ neutron_pw }} service_metadata_proxy = true metadata_proxy_shared_secret = {{ metadata_secret }} [notifications] [osapi_v21] [oslo_concurrency] lock_path = /var/lib/nova/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] 
[oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [pci] [placement] os_region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://{{ vip }}:5000/v3 username = placement password = {{ placement_pw }} randomize_allocation_candidates = true [quota] [rdp] [remote_debug] [scheduler] discover_hosts_in_cells_interval = 300 [serial_console] [service_user] [spice] [upgrade_levels] [vault] [vendordata_dynamic_auth] [vmware] [vnc] enabled = true keymap=en-us novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html novncproxy_host = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} [workarounds] [wsgi] [xenserver] [xvp] [placement_database] connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement Thank you Best Regards, Zufar Dhiyaulhaq On Tue, Nov 27, 2018 at 4:55 PM Zufar Dhiyaulhaq wrote: > Hi Smooney, > > thank you for your help. I am trying to enable randomization but not > working. The instance I have created is still in the same node. Below is my > nova configuration (added randomization from your suggestion) from the > master node (Template jinja2 based). > > [DEFAULT] > enabled_apis = osapi_compute,metadata > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man > }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man > }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672 > my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][ > 'address'] }} > use_neutron = True > firewall_driver = nova.virt.firewall.NoopFirewallDriver > [api] > auth_strategy = keystone > [api_database] > [barbican] > [cache] > backend=oslo_cache.memcache_pool > enabled=true > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man > }}:11211,{{ controller3_ip_man }}:11211 > [cells] > [cinder] > [compute] > [conductor] > [console] > [consoleauth] > [cors] > [crypto] > [database] > [devices] > [ephemeral_storage_encryption] > [filter_scheduler] > [glance] > api_servers = http://{{ vip }}:9292 > [guestfs] > [healthcheck] > [hyperv] > [ironic] > [key_manager] > [keystone] > [keystone_authtoken] > auth_url = http://{{ vip }}:5000/v3 > memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man > }}:11211,{{ controller3_ip_man }}:11211 > auth_type = password > project_domain_name = default > user_domain_name = default > project_name = service > username = nova > password = {{ nova_pw }} > [libvirt] > virt_type = kvm > [matchmaker_redis] > [metrics] > [mks] > [neutron] > url = http://{{ vip }}:9696 > auth_url = http://{{ vip }}:35357 > auth_type = password > project_domain_name = default > user_domain_name = default > region_name = RegionOne > project_name = service > username = neutron > password = {{ neutron_pw }} > [notifications] > [osapi_v21] > [oslo_concurrency] > lock_path = /var/lib/nova/tmp > [oslo_messaging_amqp] > [oslo_messaging_kafka] > [oslo_messaging_notifications] > [oslo_messaging_rabbit] > [oslo_messaging_zmq] > [oslo_middleware] > [oslo_policy] > [pci] > [placement] > os_region_name = RegionOne > project_domain_name = Default > project_name = service > auth_type = password > user_domain_name = Default > auth_url = http://{{ vip }}:5000/v3 > username = placement > password = {{ placement_pw }} > [quota] > [rdp] > [remote_debug] > [scheduler] > [serial_console] > [service_user] > [spice] > [upgrade_levels] > [vault] > [vendordata_dynamic_auth] > [vmware] > [vnc] > enabled = True > keymap=en-us > server_listen = {{ 
hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][ > 'address'] }} > server_proxyclient_address = {{ hostvars[inventory_hostname][ > 'ansible_ens3f1']['ipv4']['address'] }} > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html > [workarounds] > [wsgi] > [xenserver] > [xvp] > > Thank you, > > Best Regards, > Zufar Dhiyaulhaq > > On Mon, Nov 26, 2018 at 11:13 PM Sean Mooney wrote: > >> On Mon, 2018-11-26 at 17:45 +0700, Zufar Dhiyaulhaq wrote: >> > Hi, >> > >> > I am deploying OpenStack with 3 compute node, but I am seeing an >> abnormal distribution of instance, the instance is >> > only deployed in a specific compute node, and not distribute among >> other compute node. >> > >> > this is my nova.conf from the compute node. (template jinja2 based) >> >> hi, the default behavior of nova used to be spread not pack and i belive >> it still is. >> the default behavior with placement however is closer to a packing >> behavior as >> allcoation candiates are retrunidn in an undefined but deterministic >> order. >> >> on a busy cloud this does not strictly pack instaces but on a quite cloud >> it effectivly does >> >> you can try and enable randomisation of the allocation candiates by >> setting this config option in >> the nova.conf of the shcduler to true. >> >> https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates >> >> on that note can you provide the nova.conf for the schduelr is used >> instead of the compute node nova.conf. >> if you have not overriden any of the nova defaults the ram and cpu >> weigher should spread instances withing >> the allocation candiates returned by placement. >> >> > >> > [DEFAULT] >> > osapi_compute_listen = {{ >> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} >> > metadata_listen = {{ >> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} >> > enabled_apis = osapi_compute,metadata >> > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ >> controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ >> > controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ >> controller3_ip_man }}:5672 >> > my_ip = {{ >> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} >> > use_neutron = True >> > firewall_driver = nova.virt.firewall.NoopFirewallDriver >> > [api] >> > auth_strategy = keystone >> > [api_database] >> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api >> > [barbican] >> > [cache] >> > backend=oslo_cache.memcache_pool >> > enabled=true >> > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man >> }}:11211,{{ controller3_ip_man }}:11211 >> > [cells] >> > [cinder] >> > os_region_name = RegionOne >> > [compute] >> > [conductor] >> > [console] >> > [consoleauth] >> > [cors] >> > [crypto] >> > [database] >> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova >> > [devices] >> > [ephemeral_storage_encryption] >> > [filter_scheduler] >> > [glance] >> > api_servers = http://{{ vip }}:9292 >> > [guestfs] >> > [healthcheck] >> > [hyperv] >> > [ironic] >> > [key_manager] >> > [keystone] >> > [keystone_authtoken] >> > auth_url = http://{{ vip }}:5000/v3 >> > memcached_servers = {{ controller1_ip_man }}:11211,{{ >> controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 >> > auth_type = password >> > project_domain_name = default >> > user_domain_name = default >> > project_name = service >> > username = nova >> > password = {{ nova_pw }} >> > [libvirt] >> > [matchmaker_redis] >> > [metrics] >> > 
[mks] >> > [neutron] >> > url = http://{{ vip }}:9696 >> > auth_url = http://{{ vip }}:35357 >> > auth_type = password >> > project_domain_name = default >> > user_domain_name = default >> > region_name = RegionOne >> > project_name = service >> > username = neutron >> > password = {{ neutron_pw }} >> > service_metadata_proxy = true >> > metadata_proxy_shared_secret = {{ metadata_secret }} >> > [notifications] >> > [osapi_v21] >> > [oslo_concurrency] >> > lock_path = /var/lib/nova/tmp >> > [oslo_messaging_amqp] >> > [oslo_messaging_kafka] >> > [oslo_messaging_notifications] >> > [oslo_messaging_rabbit] >> > [oslo_messaging_zmq] >> > [oslo_middleware] >> > [oslo_policy] >> > [pci] >> > [placement] >> > os_region_name = RegionOne >> > project_domain_name = Default >> > project_name = service >> > auth_type = password >> > user_domain_name = Default >> > auth_url = http://{{ vip }}:5000/v3 >> > username = placement >> > password = {{ placement_pw }} >> > [quota] >> > [rdp] >> > [remote_debug] >> > [scheduler] >> > discover_hosts_in_cells_interval = 300 >> > [serial_console] >> > [service_user] >> > [spice] >> > [upgrade_levels] >> > [vault] >> > [vendordata_dynamic_auth] >> > [vmware] >> > [vnc] >> > enabled = true >> > keymap=en-us >> > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html >> > novncproxy_host = {{ >> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} >> > [workarounds] >> > [wsgi] >> > [xenserver] >> > [xvp] >> > [placement_database] >> > connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement >> > >> > what is the problem? I have lookup the openstack-nova-scheduler in the >> controller node but it's running well with only >> > warning >> > >> > nova-scheduler[19255]: >> /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: >> NotSupportedWarning: >> > Configuration option(s) ['use_tpool'] not supported >> > >> > the result I want is the instance is distributed in all compute node. >> > Thank you. >> > >> > _______________________________________________ >> > Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > Post to : openstack at lists.openstack.org >> > Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chhagarw at in.ibm.com Tue Nov 27 12:52:10 2018 From: chhagarw at in.ibm.com (Chhavi Agarwal) Date: Tue, 27 Nov 2018 18:22:10 +0530 Subject: [Openstack] [Cinder] Cinder New Tag Release In-Reply-To: <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> References: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> Message-ID: Hi All, With the current tagged release of Openstack Cinder ( 13.0.1 ) https://github.com/openstack/cinder/releases/tag/13.0.1 we are hitting the below issue https://bugs.launchpad.net/cinder/+bug/1796759 This got fixed with change set https://review.openstack.org/#/c/608768/ which is not a part of the tagged release. Want to know if we can have a new Cinder tag release to incorporate the new fixes. Thanks & Regards, Chhavi Agarwal -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Nov 27 18:22:30 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Nov 2018 18:22:30 +0000 Subject: [Openstack] IMPORTANT: We're combining the lists! 
In-Reply-To: <20181119000342.46kpr5wcunjq2bfn@yuggoth.org> References: <20180830170350.wrz4wlanb276kncb@yuggoth.org> <20180920163248.oia5t7zjqcfwluwz@yuggoth.org> <20181029165346.vm6ptoqq5wkqban6@yuggoth.org> <20181109181447.qhutsauxl4fuinnh@yuggoth.org> <20181119000342.46kpr5wcunjq2bfn@yuggoth.org> Message-ID: <20181127182230.2u3wffobjmxggkrz@yuggoth.org> REMINDER: The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists (to which this was sent) are being replaced by a new openstack-discuss at lists.openstack.org mailing list. The new list[0] has been open for posts from subscribers since Monday November 19, and the old lists will be configured to no longer accept posts starting on Monday December 3. In the interim, posts to the old lists will also get copied to the new list so it's safe to unsubscribe from them now and not miss any messages. See my previous notice[1] for details. As of the time of this announcement, we have 403 subscribers on openstack-discuss with six days to go before the old lists are closed down for good). I have updated the old list descriptions to indicate the openstack-discuss list is preferred, and added a custom "welcome message" with the same for anyone who subscribes to them over the next week. [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From iain.macdonnell at oracle.com Tue Nov 27 18:52:48 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Tue, 27 Nov 2018 10:52:48 -0800 Subject: [Openstack] [Cinder] Cinder New Tag Release In-Reply-To: References: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> Message-ID: <0b7b6cf0-e772-e6e3-8f0f-87ea2161b8c1@oracle.com> On 11/27/18 4:52 AM, Chhavi Agarwal wrote: > With the current tagged release of Openstack Cinder ( 13.0.1 ) > https://github.com/openstack/cinder/releases/tag/13.0.1 > we are hitting the below issue > > https://bugs.launchpad.net/cinder/+bug/1796759 > > This got fixed with change set https://review.openstack.org/#/c/608768/ > which is not a part of the tagged release. > > Want to know if we can have a new Cinder tag release to incorporate the > new fixes. [attempting to cross-post to openstack-discuss] Cinder 13.x releases are OpenStack Rocky, and the upper-constraints for Rocky [1] says oslo.messaging===8.1.2, so there should be no need to backport this fix. Are you trying to run the unit tests when you see this? When I run tox on stable/rocky, it installs 8.1.2 as one of the dependencies, although, to be honest, I'm really not sure how tox knows that that's the right version. Or are you trying to run Cinder from Rocky with newer oslo.messaging, and getting the same symptom as the unit test failures in that bug, when running the cinder services? If so, a) it's an unsupported combination (I believe) and b) you'd (at least) need to update your configuration to remove use of that deprecated rpc_backend option. 
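For illustration only (the exact values depend on your deployment), that change amounts to dropping something like

rpc_backend = rabbit

from the [DEFAULT] section of cinder.conf and using a transport URL instead, e.g.

transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

where "openstack", "RABBIT_PASS" and "controller" are placeholders for your own RabbitMQ credentials and host.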
~iain [1] https://github.com/openstack/requirements/blob/stable/rocky/upper-constraints.txt From iain.macdonnell at oracle.com Tue Nov 27 19:05:00 2018 From: iain.macdonnell at oracle.com (iain MacDonnell) Date: Tue, 27 Nov 2018 11:05:00 -0800 Subject: [Openstack] [Cinder] Cinder New Tag Release In-Reply-To: <0b7b6cf0-e772-e6e3-8f0f-87ea2161b8c1@oracle.com> References: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> <0b7b6cf0-e772-e6e3-8f0f-87ea2161b8c1@oracle.com> Message-ID: <2cd493e9-e4a3-38e2-5295-9d7c1cd7dd29@oracle.com> On 11/27/18 10:52 AM, iain MacDonnell wrote: > > On 11/27/18 4:52 AM, Chhavi Agarwal wrote: >> With the current tagged release of Openstack Cinder ( 13.0.1 ) >> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openstack_cinder_releases_tag_13.0.1&d=DwICaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=_-mGtryix-GE-7_xbOpH7F0ZS4jQxE3RJZ72LghieKQ&s=WMS_BqRFKcnhRVhLF7Etzpoinel262YhUoKvNL19508&e= >> we are hitting the below issue >> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__bugs.launchpad.net_cinder_-2Bbug_1796759&d=DwICaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=_-mGtryix-GE-7_xbOpH7F0ZS4jQxE3RJZ72LghieKQ&s=N4sIIITOxdF357oNgPI0vmwp7UyMtpyfwoe48FNGVec&e= >> >> This got fixed with change set >> https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_608768_&d=DwICaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=_-mGtryix-GE-7_xbOpH7F0ZS4jQxE3RJZ72LghieKQ&s=16ZhumVOFprhADv1v553c2wwyMfG84cP0a9Z3VHaVuM&e= >> which is not a part of the tagged release. >> >> Want to know if we can have a new Cinder tag release to incorporate >> the new fixes. > > [attempting to cross-post to openstack-discuss] > > Cinder 13.x releases are OpenStack Rocky, and the upper-constraints for > Rocky [1] says oslo.messaging===8.1.2, so there should be no need to > backport this fix. > > Are you trying to run the unit tests when you see this? When I run tox > on stable/rocky, it installs 8.1.2 as one of the dependencies, although, > to be honest, I'm really not sure how tox knows that that's the right > version. Ahh, here's how it knows :- $ grep install_command tox.ini install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky} {opts} {packages} $ ~iain > Or are you trying to run Cinder from Rocky with newer oslo.messaging, > and getting the same symptom as the unit test failures in that bug, when > running the cinder services? If so, a) it's an unsupported combination > (I believe) and b) you'd (at least) need to update your configuration to > remove use of that deprecated rpc_backend option. 
> >     ~iain > > > [1] > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openstack_requirements_blob_stable_rocky_upper-2Dconstraints.txt&d=DwICaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=RxYkIjeLZPK2frXV_wEUCq8d3wvUIvDPimUcunMwbMs&m=_-mGtryix-GE-7_xbOpH7F0ZS4jQxE3RJZ72LghieKQ&s=RyhzI0PUIxfPaQuJLH1iAf8Z4TjI1LWsI1DoUhBeYnc&e= > > From sean.mcginnis at gmail.com Tue Nov 27 19:08:58 2018 From: sean.mcginnis at gmail.com (Sean McGinnis) Date: Tue, 27 Nov 2018 13:08:58 -0600 Subject: [Openstack] [Cinder] Cinder New Tag Release In-Reply-To: <2cd493e9-e4a3-38e2-5295-9d7c1cd7dd29@oracle.com> References: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> <0b7b6cf0-e772-e6e3-8f0f-87ea2161b8c1@oracle.com> <2cd493e9-e4a3-38e2-5295-9d7c1cd7dd29@oracle.com> Message-ID: On Tue, Nov 27, 2018 at 1:05 PM iain MacDonnell wrote: > > >> Want to know if we can have a new Cinder tag release to incorporate > >> the new fixes. > > > > [attempting to cross-post to openstack-discuss] > > > > Cinder 13.x releases are OpenStack Rocky, and the upper-constraints for > > Rocky [1] says oslo.messaging===8.1.2, so there should be no need to > > backport this fix. > > > > Are you trying to run the unit tests when you see this? When I run tox > > on stable/rocky, it installs 8.1.2 as one of the dependencies, although, > > to be honest, I'm really not sure how tox knows that that's the right > > version. > > Ahh, here's how it knows :- > > $ grep install_command tox.ini > install_command = pip install > -c{env:UPPER_CONSTRAINTS_FILE: > https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky} > > {opts} {packages} > $ > > ~iain > Yeah, we shouldn't need to backport something like this. We have upper constraints specifically to avoid needing to handle cases like this. Sean -------------- next part -------------- An HTML attachment was scrubbed... URL: From chhagarw at in.ibm.com Wed Nov 28 04:42:33 2018 From: chhagarw at in.ibm.com (Chhavi Agarwal) Date: Wed, 28 Nov 2018 10:12:33 +0530 Subject: [Openstack] [Cinder] Cinder New Tag Release In-Reply-To: References: <90950AF3-ED04-4015-BE09-E4235F0DFAC9@gmail.com> <1926809770.53738230.1543224969471.JavaMail.zimbra@redhat.com> <0b7b6cf0-e772-e6e3-8f0f-87ea2161b8c1@oracle.com> <2cd493e9-e4a3-38e2-5295-9d7c1cd7dd29@oracle.com> Message-ID: I could see that the Unit tests are run against the latest oslo.messaging from the master, and source tree is old. http://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt oslo.messaging===9.2.1 [root at ip9-114-192-185 cinder-es]# grep install_command tox.ini install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE: https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt } {opts} {packages} [root at ip9-114-192-185 cinder-es]# Will get the configuration fixed. Thanks for the clarification. Thanks & Regards, Chhavi Agarwal Cloud System Software Group. From: Sean McGinnis To: iain.macdonnell at oracle.com Cc: chhagarw at in.ibm.com, openstack at lists.openstack.org, openstack-discuss at lists.openstack.org, jungleboyj at electronicjungle.net, John Griffith Date: 11/28/2018 12:39 AM Subject: Re: [Openstack] [Cinder] Cinder New Tag Release On Tue, Nov 27, 2018 at 1:05 PM iain MacDonnell wrote: >> Want to know if we can have a new Cinder tag release to incorporate >> the new fixes. 
> > [attempting to cross-post to openstack-discuss] > > Cinder 13.x releases are OpenStack Rocky, and the upper-constraints for > Rocky [1] says oslo.messaging===8.1.2, so there should be no need to > backport this fix. > > Are you trying to run the unit tests when you see this? When I run tox > on stable/rocky, it installs 8.1.2 as one of the dependencies, although, > to be honest, I'm really not sure how tox knows that that's the right > version. Ahh, here's how it knows :- $ grep install_command tox.ini install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE: https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky } {opts} {packages} $      ~iain Yeah, we shouldn't need to backport something like this. We have upper constraints specifically to avoid needing to handle cases like this. Sean -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From zufardhiyaulhaq at gmail.com Wed Nov 28 07:50:32 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Wed, 28 Nov 2018 14:50:32 +0700 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: Message-ID: Hi, Thank you. I am able to fix this issue by adding this configuration into nova configuration file in controller node. driver=filter_scheduler Best Regards Zufar Dhiyaulhaq On Tue, Nov 27, 2018 at 5:01 PM Zufar Dhiyaulhaq wrote: > Hi Smooney, > sorry for the last reply. I am attaching wrong configuration files. This > is my nova configuration (added randomization from your suggestion) from > the master node (Template jinja2 based). 
> > [DEFAULT] > osapi_compute_listen = {{ > hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > metadata_listen = {{ > hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > enabled_apis = osapi_compute,metadata > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man > }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man > }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672 > my_ip = {{ > hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > use_neutron = True > firewall_driver = nova.virt.firewall.NoopFirewallDriver > [api] > auth_strategy = keystone > [api_database] > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api > [barbican] > [cache] > backend=oslo_cache.memcache_pool > enabled=true > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man > }}:11211,{{ controller3_ip_man }}:11211 > [cells] > [cinder] > os_region_name = RegionOne > [compute] > [conductor] > [console] > [consoleauth] > [cors] > [crypto] > [database] > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova > [devices] > [ephemeral_storage_encryption] > [filter_scheduler] > [glance] > api_servers = http://{{ vip }}:9292 > [guestfs] > [healthcheck] > [hyperv] > [ironic] > [key_manager] > [keystone] > [keystone_authtoken] > auth_url = http://{{ vip }}:5000/v3 > memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man > }}:11211,{{ controller3_ip_man }}:11211 > auth_type = password > project_domain_name = default > user_domain_name = default > project_name = service > username = nova > password = {{ nova_pw }} > [libvirt] > [matchmaker_redis] > [metrics] > [mks] > [neutron] > url = http://{{ vip }}:9696 > auth_url = http://{{ vip }}:35357 > auth_type = password > project_domain_name = default > user_domain_name = default > region_name = RegionOne > project_name = service > username = neutron > password = {{ neutron_pw }} > service_metadata_proxy = true > metadata_proxy_shared_secret = {{ metadata_secret }} > [notifications] > [osapi_v21] > [oslo_concurrency] > lock_path = /var/lib/nova/tmp > [oslo_messaging_amqp] > [oslo_messaging_kafka] > [oslo_messaging_notifications] > [oslo_messaging_rabbit] > [oslo_messaging_zmq] > [oslo_middleware] > [oslo_policy] > [pci] > [placement] > os_region_name = RegionOne > project_domain_name = Default > project_name = service > auth_type = password > user_domain_name = Default > auth_url = http://{{ vip }}:5000/v3 > username = placement > password = {{ placement_pw }} > randomize_allocation_candidates = true > [quota] > [rdp] > [remote_debug] > [scheduler] > discover_hosts_in_cells_interval = 300 > [serial_console] > [service_user] > [spice] > [upgrade_levels] > [vault] > [vendordata_dynamic_auth] > [vmware] > [vnc] > enabled = true > keymap=en-us > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html > novncproxy_host = {{ > hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} > [workarounds] > [wsgi] > [xenserver] > [xvp] > [placement_database] > connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement > > Thank you > > Best Regards, > Zufar Dhiyaulhaq > > > On Tue, Nov 27, 2018 at 4:55 PM Zufar Dhiyaulhaq < > zufardhiyaulhaq at gmail.com> wrote: > >> Hi Smooney, >> >> thank you for your help. I am trying to enable randomization but not >> working. The instance I have created is still in the same node. 
Below is my >> nova configuration (added randomization from your suggestion) from the >> master node (Template jinja2 based). >> >> [DEFAULT] >> enabled_apis = osapi_compute,metadata >> transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ >> controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ >> controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ >> controller3_ip_man }}:5672 >> my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][ >> 'address'] }} >> use_neutron = True >> firewall_driver = nova.virt.firewall.NoopFirewallDriver >> [api] >> auth_strategy = keystone >> [api_database] >> [barbican] >> [cache] >> backend=oslo_cache.memcache_pool >> enabled=true >> memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man >> }}:11211,{{ controller3_ip_man }}:11211 >> [cells] >> [cinder] >> [compute] >> [conductor] >> [console] >> [consoleauth] >> [cors] >> [crypto] >> [database] >> [devices] >> [ephemeral_storage_encryption] >> [filter_scheduler] >> [glance] >> api_servers = http://{{ vip }}:9292 >> [guestfs] >> [healthcheck] >> [hyperv] >> [ironic] >> [key_manager] >> [keystone] >> [keystone_authtoken] >> auth_url = http://{{ vip }}:5000/v3 >> memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man >> }}:11211,{{ controller3_ip_man }}:11211 >> auth_type = password >> project_domain_name = default >> user_domain_name = default >> project_name = service >> username = nova >> password = {{ nova_pw }} >> [libvirt] >> virt_type = kvm >> [matchmaker_redis] >> [metrics] >> [mks] >> [neutron] >> url = http://{{ vip }}:9696 >> auth_url = http://{{ vip }}:35357 >> auth_type = password >> project_domain_name = default >> user_domain_name = default >> region_name = RegionOne >> project_name = service >> username = neutron >> password = {{ neutron_pw }} >> [notifications] >> [osapi_v21] >> [oslo_concurrency] >> lock_path = /var/lib/nova/tmp >> [oslo_messaging_amqp] >> [oslo_messaging_kafka] >> [oslo_messaging_notifications] >> [oslo_messaging_rabbit] >> [oslo_messaging_zmq] >> [oslo_middleware] >> [oslo_policy] >> [pci] >> [placement] >> os_region_name = RegionOne >> project_domain_name = Default >> project_name = service >> auth_type = password >> user_domain_name = Default >> auth_url = http://{{ vip }}:5000/v3 >> username = placement >> password = {{ placement_pw }} >> [quota] >> [rdp] >> [remote_debug] >> [scheduler] >> [serial_console] >> [service_user] >> [spice] >> [upgrade_levels] >> [vault] >> [vendordata_dynamic_auth] >> [vmware] >> [vnc] >> enabled = True >> keymap=en-us >> server_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4' >> ]['address'] }} >> server_proxyclient_address = {{ hostvars[inventory_hostname][ >> 'ansible_ens3f1']['ipv4']['address'] }} >> novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html >> [workarounds] >> [wsgi] >> [xenserver] >> [xvp] >> >> Thank you, >> >> Best Regards, >> Zufar Dhiyaulhaq >> >> On Mon, Nov 26, 2018 at 11:13 PM Sean Mooney wrote: >> >>> On Mon, 2018-11-26 at 17:45 +0700, Zufar Dhiyaulhaq wrote: >>> > Hi, >>> > >>> > I am deploying OpenStack with 3 compute node, but I am seeing an >>> abnormal distribution of instance, the instance is >>> > only deployed in a specific compute node, and not distribute among >>> other compute node. >>> > >>> > this is my nova.conf from the compute node. (template jinja2 based) >>> >>> hi, the default behavior of nova used to be spread not pack and i belive >>> it still is. 
>>> the default behavior with placement however is closer to a packing >>> behavior as >>> allocation candidates are returned in an undefined but deterministic >>> order. >>> >>> on a busy cloud this does not strictly pack instances but on a quiet >>> cloud it effectively does >>> >>> you can try and enable randomisation of the allocation candidates by >>> setting this config option in >>> the nova.conf of the scheduler to true. >>> >>> https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates >>> >>> on that note can you provide the nova.conf that is used for the scheduler >>> instead of the compute node nova.conf. >>> if you have not overridden any of the nova defaults the ram and cpu >>> weighers should spread instances within >>> the allocation candidates returned by placement. >>> >>> > >>> > [DEFAULT] >>> > osapi_compute_listen = {{ >>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} >>> > metadata_listen = {{ >>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} >>> > enabled_apis = osapi_compute,metadata >>> > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ >>> controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ >>> > controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ >>> controller3_ip_man }}:5672 >>> > my_ip = {{ >>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} >>> > use_neutron = True >>> > firewall_driver = nova.virt.firewall.NoopFirewallDriver >>> > [api] >>> > auth_strategy = keystone >>> > [api_database] >>> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api >>> > [barbican] >>> > [cache] >>> > backend=oslo_cache.memcache_pool >>> > enabled=true >>> > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man >>> }}:11211,{{ controller3_ip_man }}:11211 >>> > [cells] >>> > [cinder] >>> > os_region_name = RegionOne >>> > [compute] >>> > [conductor] >>> > [console] >>> > [consoleauth] >>> > [cors] >>> > [crypto] >>> > [database] >>> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova >>> > [devices] >>> > [ephemeral_storage_encryption] >>> > [filter_scheduler] >>> > [glance] >>> > api_servers = http://{{ vip }}:9292 >>> > [guestfs] >>> > [healthcheck] >>> > [hyperv] >>> > [ironic] >>> > [key_manager] >>> > [keystone] >>> > [keystone_authtoken] >>> > auth_url = http://{{ vip }}:5000/v3 >>> > memcached_servers = {{ controller1_ip_man }}:11211,{{ >>> controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211 >>> > auth_type = password >>> > project_domain_name = default >>> > user_domain_name = default >>> > project_name = service >>> > username = nova >>> > password = {{ nova_pw }} >>> > [libvirt] >>> > [matchmaker_redis] >>> > [metrics] >>> > [mks] >>> > [neutron] >>> > url = http://{{ vip }}:9696 >>> > auth_url = http://{{ vip }}:35357 >>> > auth_type = password >>> > project_domain_name = default >>> > user_domain_name = default >>> > region_name = RegionOne >>> > project_name = service >>> > username = neutron >>> > password = {{ neutron_pw }} >>> > service_metadata_proxy = true >>> > metadata_proxy_shared_secret = {{ metadata_secret }} >>> > [notifications] >>> > [osapi_v21] >>> > [oslo_concurrency] >>> > lock_path = /var/lib/nova/tmp >>> > [oslo_messaging_amqp] >>> > [oslo_messaging_kafka] >>> > [oslo_messaging_notifications] >>> > [oslo_messaging_rabbit] >>> > [oslo_messaging_zmq] >>> > [oslo_middleware] >>> > [oslo_policy] >>> > [pci] >>> > [placement] >>> > os_region_name = RegionOne >>> >
project_domain_name = Default >>> > project_name = service >>> > auth_type = password >>> > user_domain_name = Default >>> > auth_url = http://{{ vip }}:5000/v3 >>> > username = placement >>> > password = {{ placement_pw }} >>> > [quota] >>> > [rdp] >>> > [remote_debug] >>> > [scheduler] >>> > discover_hosts_in_cells_interval = 300 >>> > [serial_console] >>> > [service_user] >>> > [spice] >>> > [upgrade_levels] >>> > [vault] >>> > [vendordata_dynamic_auth] >>> > [vmware] >>> > [vnc] >>> > enabled = true >>> > keymap=en-us >>> > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html >>> > novncproxy_host = {{ >>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }} >>> > [workarounds] >>> > [wsgi] >>> > [xenserver] >>> > [xvp] >>> > [placement_database] >>> > connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip >>> }}/nova_placement >>> > >>> > what is the problem? I have lookup the openstack-nova-scheduler in the >>> controller node but it's running well with only >>> > warning >>> > >>> > nova-scheduler[19255]: >>> /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: >>> NotSupportedWarning: >>> > Configuration option(s) ['use_tpool'] not supported >>> > >>> > the result I want is the instance is distributed in all compute node. >>> > Thank you. >>> > >>> > _______________________________________________ >>> > Mailing list: >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> > Post to : openstack at lists.openstack.org >>> > Unsubscribe : >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >>> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Wed Nov 28 12:26:21 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Wed, 28 Nov 2018 15:56:21 +0330 Subject: [Openstack] [OpenStack][cinder]Does cinder allocate disk to the volumes dynamically? Message-ID: Hi, I was wondering if the Cinder allocates disk to volumes statically or dynamically? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nickeysgo at gmail.com Wed Nov 28 12:37:02 2018 From: nickeysgo at gmail.com (Minjun Hong) Date: Wed, 28 Nov 2018 21:37:02 +0900 Subject: [Openstack] [Nova] Instance creation problem Message-ID: Hi. I'm setting up a Openstack system on the servers of my laboratory. While I try to create an instance, a problem has occurred! Instance creation was failed and it seems that libvirt failed to attaching the vif to the instance. When I create a virtual machine by using virsh tool (libvirt) manually, there was no problem. I add the logs as follows: 1. 
controller node > "/var/log/nova/nova-conductor.log" > 2018-11-28 21:18:13.033 2657 ERROR nova.scheduler.utils > [req-291fdb2d-fa94-461c-9f5f-68d340791c77 3367829d9c004653bdc9102443bd4736 > 47270e4fb58045dc88b6f0f736286ffc - default default] [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] Error from last host: node1 (node > node1): [u'Traceback (most recent call last):\n', u' File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in > _do_build_and_run_instance\n filter_properties, request_spec)\n', u' > File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2120, > in _build_and_run_instance\n instance_uuid=instance.uuid, > reason=six.text_type(e))\n', u"RescheduledException: Build of instance > 9c2d08f3-0680-4709-a64d-ae1729a11304 was re-scheduled: internal error: > libxenlight failed to create new domain 'instance-00000008'\n"] > 2018-11-28 21:18:13.033 2657 WARNING nova.scheduler.utils > [req-291fdb2d-fa94-461c-9f5f-68d340791c77 3367829d9c004653bdc9102443bd4736 > 47270e4fb58045dc88b6f0f736286ffc - default default] Failed to > compute_task_build_instances: Exceeded maximum number of retries. Exceeded > max scheduling attempts 3 for instance > 9c2d08f3-0680-4709-a64d-ae1729a11304. Last exception: internal error: > libxenlight failed to create new domain 'instance-00000008': > MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max > scheduling attempts 3 for instance 9c2d08f3-0680-4709-a64d-ae1729a11304. > Last exception: internal error: libxenlight failed to create new domain > 'instance-00000008' > 2018-11-28 21:18:13.034 2657 WARNING nova.scheduler.utils > [req-291fdb2d-fa94-461c-9f5f-68d340791c77 3367829d9c004653bdc9102443bd4736 > 47270e4fb58045dc88b6f0f736286ffc - default default] [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] Setting instance to ERROR state.: > MaxRetriesExceeded: Exceeded maximum number of retries. Exceeded max > scheduling attempts 3 for instance 9c2d08f3-0680-4709-a64d-ae1729a11304. > Last exception: internal error: libxenlight failed to create new domain > 'instance-00000008' > 2018-11-28 21:18:13.067 2657 WARNING oslo_config.cfg > [req-291fdb2d-fa94-461c-9f5f-68d340791c77 3367829d9c004653bdc9102443bd4736 > 47270e4fb58045dc88b6f0f736286ffc - default default] Option "url" from group > "neutron" is deprecated for removal (Endpoint lookup uses the service > catalog via common keystoneauth1 Adapter configuration options. In the > current release, "url" will override this behavior, but will be ignored > and/or removed in a future release. To achieve the same result, use the > endpoint_override option instead.). Its value may be silently ignored in > the future. > "/var/log/neutron/neutron-linuxbridge-agent.log" > 2018-11-28 17:41:45.593 2476 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] > Interface mappings: {'provider': 'enp1s0f1'} > 2018-11-28 17:41:45.593 2476 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] > Bridge mappings: {} > 2018-11-28 17:41:45.624 2476 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] > Agent initialized successfully, now running... 
> 2018-11-28 17:41:45.901 2476 INFO > neutron.plugins.ml2.drivers.agent._common_agent > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] RPC agent_id: > lba0369fa2714a > 2018-11-28 17:41:45.907 2476 INFO neutron.agent.agent_extensions_manager > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Loaded agent > extensions: [] > 2018-11-28 17:41:46.121 2476 INFO > neutron.plugins.ml2.drivers.agent._common_agent > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Linux bridge agent > Agent RPC Daemon Started! > 2018-11-28 17:41:46.122 2476 INFO > neutron.plugins.ml2.drivers.agent._common_agent > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Linux bridge agent > Agent out of sync with plugin! > 2018-11-28 17:41:46.512 2476 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Clearing orphaned ARP > spoofing entries for devices [] > 2018-11-28 17:41:47.020 2476 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Clearing orphaned ARP > spoofing entries for devices [] > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent [-] Failed reporting > state!: MessagingTimeout: Timed out waiting for a reply to message ID > 20ef587240864120b878559ab821adbf > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call > last): > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent File > "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/agent/_common_agent.py", > line 128, in _report_state > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent True) > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent File > "/usr/lib/python2.7/dist-packages/neutron/agent/rpc.py", line 93, in > report_state > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent return method(context, > 'report_state', **kwargs) > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent File > "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 174, > in call > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent retry=self.retry) > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent File > "/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 131, > in _send > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent timeout=timeout, > retry=retry) > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent File > "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", > line 559, in send > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent retry=retry) > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent File > "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", > line 548, in _send > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent result = > self._waiter.wait(msg_id, timeout) > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent File > "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", > line 440, in wait > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent 
message = > self.waiters.get(msg_id, timeout=timeout) > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent File > "/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", > line 328, in get > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent 'to message ID %s' % > msg_id) > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent MessagingTimeout: Timed out > waiting for a reply to message ID 20ef587240864120b878559ab821adbf > 2018-11-28 17:42:45.981 2476 ERROR > neutron.plugins.ml2.drivers.agent._common_agent > 2018-11-28 17:42:45.986 2476 WARNING oslo.service.loopingcall [-] Function > 'neutron.plugins.ml2.drivers.agent._common_agent.CommonAgentLoop._report_state' > run outlasted interval by 30.07 sec > 2018-11-28 17:42:46.055 2476 INFO > neutron.plugins.ml2.drivers.agent._common_agent [-] Linux bridge agent > Agent has just been revived. Doing a full sync. > 2018-11-28 17:42:46.156 2476 INFO > neutron.plugins.ml2.drivers.agent._common_agent > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Linux bridge agent > Agent out of sync with plugin! > 2018-11-28 17:43:40.189 2476 INFO neutron.agent.securitygroups_rpc > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Preparing filters for > devices set(['tap4a09374a-7f']) > 2018-11-28 17:43:40.935 2476 INFO > neutron.plugins.ml2.drivers.agent._common_agent > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Port tap4a09374a-7f > updated. Details: {u'profile': {}, u'network_qos_policy_id': None, > u'qos_policy_id': None, u'allowed_address_pairs': [], u'admin_state_up': > True, u'network_id': u'87fa0d9c-5ed3-4332-8782-0d4139eed7f3', > u'segmentation_id': None, u'mtu': 1500, u'device_owner': u'network:dhcp', > u'physical_network': u'provider', u'mac_address': u'fa:16:3e:ab:0e:84', > u'device': u'tap4a09374a-7f', u'port_security_enabled': False, u'port_id': > u'4a09374a-7fa5-42c2-9430-67a0cd65336c', u'fixed_ips': [{u'subnet_id': > u'e95946a8-070c-42c4-877e-279e6e7acc7e', u'ip_address': u'192.0.10.4'}], > u'network_type': u'flat'} > 2018-11-28 17:43:41.124 2476 INFO > neutron.plugins.ml2.drivers.linuxbridge.agent.arp_protect > [req-c447894c-9013-4bec-82e4-8b501414184d - - - - -] Skipping ARP spoofing > rules for port 'tap4a09374a-7f' because it has port security disabled "/var/log/neutron/neutron-server.log" > 2018-11-28 17:30:02.130 15995 INFO neutron.pecan_wsgi.hooks.translation > [req-e7554d70-c84d-46d5-8ff6-a536fb35664c 29a3a16fd2484ee9bed834a3835545af > 5ebe3484974848b182a381127cb35a22 - default default] GET failed (client > error): The resource could not be found. 
> 2018-11-28 17:30:02.130 15995 INFO neutron.wsgi > [req-e7554d70-c84d-46d5-8ff6-a536fb35664c 29a3a16fd2484ee9bed834a3835545af > 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET > /v2.0/floatingips?fixed_ip_address=192.0.10.10&port_id=8ab0d544-ec5b-4e69-95f4-1f06f7b53bb4 > HTTP/1.1" status: 404 len: 309 time: 0.0072770 > 2018-11-28 17:30:02.167 15995 INFO neutron.wsgi > [req-a2b5f53b-8992-4178-b0b6-55be9d1a0f32 29a3a16fd2484ee9bed834a3835545af > 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET > /v2.0/subnets?id=e95946a8-070c-42c4-877e-279e6e7acc7e HTTP/1.1" status: > 200 len: 822 time: 0.0341990 > 2018-11-28 17:30:02.199 15995 INFO neutron.wsgi > [req-9e1a51e9-9ccc-4226-a78f-0420ff95c147 29a3a16fd2484ee9bed834a3835545af > 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET > /v2.0/ports?network_id=87fa0d9c-5ed3-4332-8782-0d4139eed7f3&device_owner=network%3Adhcp > HTTP/1.1" status: 200 len: 1080 time: 0.0300300 > 2018-11-28 17:30:02.584 15995 INFO neutron.notifiers.nova [-] Nova event > response: {u'status': u'completed', u'tag': > u'8ab0d544-ec5b-4e69-95f4-1f06f7b53bb4', u'name': u'network-changed', > u'server_uuid': u'a9afc2d4-f4c9-429b-9773-4de8a3eaefa5', u'code': 200} > 2018-11-28 17:30:02.628 15995 INFO neutron.wsgi > [req-73265cf5-5f0d-4217-b716-caa2fb906abf 29a3a16fd2484ee9bed834a3835545af > 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET > /v2.0/ports?tenant_id=47270e4fb58045dc88b6f0f736286ffc&device_id=a9afc2d4-f4c9-429b-9773-4de8a3eaefa5 > HTTP/1.1" status: 200 len: 1062 time: 0.0316660 > 2018-11-28 17:30:02.696 15995 INFO neutron.wsgi > [req-ed53b92c-3033-4b4a-ade4-fdc5a3463e8c 29a3a16fd2484ee9bed834a3835545af > 5ebe3484974848b182a381127cb35a22 - default default] 10.150.21.183 "GET > /v2.0/networks?id=87fa0d9c-5ed3-4332-8782-0d4139eed7f3 HTTP/1.1" status: > 200 len: 872 time: 0.0655539 > 2018-11-28 17:30:02.702 15995 WARNING neutron.pecan_wsgi.controllers.root > [req-ccd8c9b8-d2cf-40f7-b53b-5936bd0c9a6d 29a3a16fd2484ee9bed834a3835545af > 5ebe3484974848b182a381127cb35a22 - default default] No controller found > for: floatingips - returning response code 404: PecanNotFound 2. 
compute node > "/var/log/libvirt/libxl/libxl-driver.log" > 2018-11-28 08:40:31.920+0000: libxl: > libxl_event.c:681:libxl__ev_xswatch_deregister: remove watch for path > @releaseDomain: Bad file descriptor > 2018-11-28 09:57:01.707+0000: libxl: > libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge > online [2536] exited with error status 1 > 2018-11-28 09:57:01.708+0000: libxl: > libxl_device.c:1286:device_hotplug_child_death_cb: script: ip link set > vif1.0 name tape5a239a8-6e failed > 2018-11-28 09:57:01.708+0000: libxl: > libxl_create.c:1522:domcreate_attach_devices: Domain 1:unable to add vif > devices "/var/log/xen/xen-hotplug.log" > RTNETLINK answers: Device or resource busy "/var/log/nova/nova-compute.log" > : libvirtError: internal error: libxenlight failed to create new domain > 'instance-00000008' > 2018-11-28 21:18:11.350 2384 ERROR nova.virt.libvirt.driver > [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d > 5ebe3484974848b182a381127cb35a22 - default default] [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] Failed to start libvirt guest: > libvirtError: internal error: libxenlight failed to create new domain > 'instance-00000008' > 2018-11-28 21:18:11.352 2384 INFO os_vif > [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d > 5ebe3484974848b182a381127cb35a22 - default default] Successfully unplugged > vif > VIFBridge(active=False,address=fa:16:3e:6b:e4:b7,bridge_name='brq87fa0d9c-5e',has_traffic_filtering=True,id=484807ca-8c7c-4509-a5f5-ed7e5fd2078f,network=Network(87fa0d9c-5ed3-4332-8782-0d4139eed7f3),plugin='linux_bridge',port_profile=,preserve_on_delete=False,vif_name='tap484807ca-8c') > 2018-11-28 21:18:11.554 2384 INFO nova.virt.libvirt.driver > [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d > 5ebe3484974848b182a381127cb35a22 - default default] [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] Deleting instance files > /var/lib/nova/instances/9c2d08f3-0680-4709-a64d-ae1729a11304_del > 2018-11-28 21:18:11.556 2384 INFO nova.virt.libvirt.driver > [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d > 5ebe3484974848b182a381127cb35a22 - default default] [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] Deletion of > /var/lib/nova/instances/9c2d08f3-0680-4709-a64d-ae1729a11304_del complete > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager > [req-b3e761b7-00fa-4930-bd9e-4330f8440c03 48086750fa13420888601964bb6a9d0d > 5ebe3484974848b182a381127cb35a22 - default default] [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] Instance failed to spawn: > libvirtError: internal error: libxenlight failed to create new domain > 'instance-00000008' > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] Traceback (most recent call last): > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2251, in > _build_resources > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] yield resources > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2031, in > _build_and_run_instance > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] > 
block_device_info=block_device_info) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3089, > in spawn > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] destroy_disks_on_failure=True) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5614, > in _create_domain_and_network > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] destroy_disks_on_failure) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] self.force_reraise() > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] six.reraise(self.type_, > self.value, self.tb) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5583, > in _create_domain_and_network > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] > post_xml_callback=post_xml_callback) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5502, > in _create_domain > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] guest.launch(pause=pause) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 144, in > launch > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] self._encoded_xml, > errors='ignore') > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] self.force_reraise() > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] six.reraise(self.type_, > self.value, self.tb) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 139, in > launch > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] return > self._domain.createWithFlags(flags) > 2018-11-28 
21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] result = > proxy_call(self._autowrap, f, *args, **kwargs) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in > proxy_call > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] rv = execute(f, *args, **kwargs) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] six.reraise(c, e, tb) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] rv = meth(*args, **kwargs) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] File > "/usr/lib/python2.7/dist-packages/libvirt.py", line 1092, in createWithFlags > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] if ret == -1: raise libvirtError > ('virDomainCreateWithFlags() failed', dom=self) > 2018-11-28 21:18:11.614 2384 ERROR nova.compute.manager [instance: > 9c2d08f3-0680-4709-a64d-ae1729a11304] libvirtError: internal error: > libxenlight failed to create new domain 'instance-00000008' Anyone help me please. -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Wed Nov 28 12:40:21 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Wed, 28 Nov 2018 16:10:21 +0330 Subject: [Openstack] [OpenStack][cinder]Does cinder allocate disk to the volumes dynamically? In-Reply-To: References: Message-ID: When I installed OpenStack using PackStack, the size of LVM group was 20G but I could create 18 volume each of which 20G size (so in the PackStack the Cinder allocate volumes dynamically), but installing OpenStack from repository, the Cinder allocate disk to volume statically because I have volume group in size 140G but I only can create 7 volume of size 20G. So How Can I configure Cinder to allocate disk dynamically? On Wed, Nov 28, 2018 at 3:56 PM Soheil Pourbafrani wrote: > Hi, > > I was wondering if the Cinder allocates disk to volumes statically or > dynamically? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Nov 28 14:56:26 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 28 Nov 2018 09:56:26 -0500 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: Message-ID: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> On 11/28/2018 02:50 AM, Zufar Dhiyaulhaq wrote: > Hi, > > Thank you. I am able to fix this issue by adding this configuration into > nova configuration file in controller node. 
> > driver=filter_scheduler That's the default: https://docs.openstack.org/ocata/config-reference/compute/config-options.html So that was definitely not the solution to your problem. My guess is that Sean's suggestion to randomize the allocation candidates fixed your issue. Best, -jay From sean.mcginnis at gmx.com Wed Nov 28 16:03:38 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 28 Nov 2018 10:03:38 -0600 Subject: [Openstack] [OpenStack][cinder]Does cinder allocate disk to the volumes dynamically? In-Reply-To: References: Message-ID: <20181128160337.GA5164@sm-workstation> On Wed, Nov 28, 2018 at 04:10:21PM +0330, Soheil Pourbafrani wrote: > When I installed OpenStack using PackStack, the size of LVM group was 20G > but I could create 18 volume each of which 20G size (so in the PackStack > the Cinder allocate volumes dynamically), but installing OpenStack from > repository, the Cinder allocate disk to volume statically because I have > volume group in size 140G but I only can create 7 volume of size 20G. So > How Can I configure Cinder to allocate disk dynamically? > Not sure why you would be seeing a difference, but the default for Cinder is to thin provision volumes. This is controller through the following option in cinder.conf: # Use thin provisioning for SAN volumes? (boolean value) #san_thin_provision = true Specific to LVM there is also the type option: # Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to # thin if thin is supported. (string value) # Possible values: # default - Thick-provisioned LVM. # thin - Thin-provisioned LVM. # auto - Defaults to thin when supported. #lvm_type = auto With the "auto" setting, that should thin provision the volumes unless you had already configured the volume group ahead of time to not support it. That's probably the most likely thing I can think of. Sean From rafaelweingartner at gmail.com Thu Nov 29 01:23:51 2018 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Wed, 28 Nov 2018 23:23:51 -0200 Subject: [Openstack] OpenStack federation and WAYF process with multiple IdPs Message-ID: Hello Openstackers, I am testing the integration of OpenStack (acting as a service provider) using Keycloak (as an identity provider) with OpenId Connect protocol. So far everything is working, but when I enable more than one IdP, I get an odd behavior. The “where are you from (WAYF)” process is happening twice, one in Horizon (where the user selects the authentication provider A.K.A IdP), and another one in Keystone via the Apache HTTPD OIDC module. I assume this is happening because the actual application being authenticated via OIDC is Keystone, and just afterwards, the other systems will authenticate themselves via Keystone. Has anybody else experienced/”dealt with” this situation? Is this by design? Am I missing a parameter/configuration or something else? The version of OpenStack that I am using is Rocky. -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From mranga at gmail.com Thu Nov 29 01:40:52 2018 From: mranga at gmail.com (M. Ranganathan) Date: Wed, 28 Nov 2018 20:40:52 -0500 Subject: [Openstack] TAPaaS on Queens? Message-ID: Hello all, I want to experiment with SNORT as a service on OpenStack. I looked at the TAPaaS project. However, I am not sure it runs on OpenStack Queens. I don't see it in the queens release https://releases.openstack.org/queens/index.html so I am wondering if this project is still alive (?) Thanks, Ranga -- M. 
Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mranga at gmail.com Thu Nov 29 01:53:20 2018 From: mranga at gmail.com (M. Ranganathan) Date: Wed, 28 Nov 2018 20:53:20 -0500 Subject: [Openstack] How to create a tap port using openstack API? Message-ID: Hello, I want to write my own VNF. For this I need to : 1. Create a network namespace. 2. Create a ovs internal port on a bridge with an appropriate tag. 3. Send the port to the network namespace. 4. Run a service in the network namespace that could (for example) read packets. Are there "openstack networking" commands to do steps 1-3 above or should this be done manually? ( What I want to do, essentially is to create a tap port and read packets from this tap port. Incidentally, the TAPaaS project was doing something like this but it seems to be unsupported.) Thanks, Ranga -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From reedip14 at gmail.com Thu Nov 29 05:48:46 2018 From: reedip14 at gmail.com (reedip banerjee) Date: Thu, 29 Nov 2018 11:18:46 +0530 Subject: [Openstack] TAPaaS on Queens? In-Reply-To: References: Message-ID: Hi M.Ranganathan, Tap-as-a-Service has a release for Stable/Queens and Stable/Rocky. However, its not yet an official project in Openstack, so it might not be listed there. On Thu, Nov 29, 2018 at 7:21 AM M. Ranganathan wrote: > Hello all, > > I want to experiment with SNORT as a service on OpenStack. I looked at the > TAPaaS project. However, I am not sure it runs on OpenStack Queens. I don't > see it in the queens release > https://releases.openstack.org/queens/index.html so I am wondering if > this project is still alive (?) > > > Thanks, > > Ranga > > -- > M. Ranganathan > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -- Thanks and Regards, Reedip Banerjee IRC: reedip -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.carden at gmail.com Fri Nov 30 07:53:12 2018 From: mike.carden at gmail.com (Mike Carden) Date: Fri, 30 Nov 2018 18:53:12 +1100 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: I'm seeing a similar issue in Queens deployed via tripleo. Two x86 compute nodes and one ppc64le node and host aggregates for virtual instances and baremetal (x86) instances. Baremetal on x86 is working fine. All VMs get deployed to compute-0. I can live migrate VMs to compute-1 and all is well, but I tire of being the 'meatspace scheduler'. I've looked at the nova.conf in the various nova-xxx containers on the controllers, but I have failed to discern the root of this issue. Anyone have a suggestion? -- MC > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Fri Nov 30 13:57:32 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 30 Nov 2018 08:57:32 -0500 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: On 11/30/2018 02:53 AM, Mike Carden wrote: > I'm seeing a similar issue in Queens deployed via tripleo. 
> > Two x86 compute nodes and one ppc64le node and host aggregates for > virtual instances and baremetal (x86) instances. Baremetal on x86 is > working fine. > > All VMs get deployed to compute-0. I can live migrate VMs to compute-1 > and all is well, but I tire of being the 'meatspace scheduler'. LOL, I love that term and will have to remember to use it in the future. > I've looked at the nova.conf in the various nova-xxx containers on the > controllers, but I have failed to discern the root of this issue. Have you set the placement_randomize_allocation_candidates CONF option and are still seeing the packing behaviour? Best, -jay From alex at privacysystems.eu Fri Nov 30 14:24:18 2018 From: alex at privacysystems.eu (Alexandru Sorodoc) Date: Fri, 30 Nov 2018 16:24:18 +0200 Subject: [Openstack] [Pike][Neutron] L3 metering with DVR doesn't work In-Reply-To: References: <9ea77aca-0878-20b2-da0c-537b47e8b3f0@privacysystems.eu> Message-ID: Hello Brian, Thanks for the info. I looked into the code for the metering agent and discovered the following: 1. The metering agent on compute nodes doesn't get notified about DVR routers running on the node. 2. For DVR routers it tries to meter the rfp- interface(for floating ips) on the snat- namespace. This is wrong. The rfp- interface lies on the qrouter- namespace on compute nodes. Also, there is a qg- interface on the snat- namespace on network nodes (for performing NAT) which should be metered too. 3. There is a race condition whereby the metering agent is notified about a router before its namespaces are created. The agent ends up not adding the metering rules for those namespaces and this leads to unrecorded traffic. I addressed those issues in a change: https://review.openstack.org/#/c/621165/. Any feedback is appreciated. Best regards, Alex On 26/10/2018 21:49, Brian Haley wrote: > On 10/25/2018 08:06 AM, Alexandru Sorodoc wrote: >> Hello, >> >> I'm trying to set up metering for neutron in Pike. I tested it with a >> centralized router and it works, but when I try with a distributed >> router it >> doesn't record any usage samples. I have one compute node and one >> network node >> and I've created an instance with a floating ip. > > The metering agent isn't very well maintained, and I don't see any > open bugs similar to this issue.  The only thing I can remember is > this abandoned change regarding traffic counters for DVR routers - > https://review.openstack.org/#/c/486493/ but there was no follow-on > from the author. > > The best thing to do would be to try and reproduce it on the master > branch (or Rocky) and file a bug. > > > I think this is because the router is running on network1. Why is it > > running on > > network1 and why does it seem that the l3 agent on compute1 does the > actual > > routing? > > The compute node will do all the routing when a floating IP is > associated, the router on network1 is for default snat traffic when > there is no floating IP and the instance tries to communicate out the > external network. 
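A quick way to see this split on the nodes themselves is to compare the router's namespaces on each host. A rough sketch, using the router UUID from this thread (the qrouter-/snat- names follow Neutron's DVR convention):

# on the compute node: the qrouter- namespace carries the qr- and rfp- interfaces
ip netns list | grep 37c1794b-58d1-4d0d-b34b-944ca411b86b
ip netns exec qrouter-37c1794b-58d1-4d0d-b34b-944ca411b86b ip -o link show

# on the network node: the snat- namespace carries the sg- and qg- interfaces used for default SNAT
ip netns exec snat-37c1794b-58d1-4d0d-b34b-944ca411b86b ip -o link show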
> > -Brian > >> >> openstack router show public-router2 >> +-------------------------+----------------------------------------------------+ >> >> | Field                   | >> Value                                              | >> +-------------------------+----------------------------------------------------+ >> >> | admin_state_up          | >> UP                                                 | >> | availability_zone_hints >> |                                                    | >> | availability_zones      | >> nova                                               | >> | created_at              | >> 2018-10-05T12:07:32Z                               | >> | description |                                                    | >> | distributed             | >> True                                               | >> | external_gateway_info   | {"network_id": >> "b96473ce-                          | >> |                         | 94f6-464f-a703-5285fb8ff3d3", >> "enable_snat": true, | >> |                         | "external_fixed_ips": >> [{"subnet_id":               | >> |                         | >> "6c08c3d9-7df1-4bec-b847-19f80b9d1764",            | >> |                         | "ip_address": >> "192.168.252.102"}]}                 | >> | flavor_id               | >> None                                               | >> | ha                      | >> False                                              | >> | id                      | >> 37c1794b-58d1-4d0d-b34b-944ca411b86b               | >> | name                    | >> public-router2                                     | >> | project_id              | >> fe203109e67f4e39b066c9529f9fc35d                   | >> | revision_number         | >> 5                                                  | >> | routes |                                                    | >> | status                  | >> ACTIVE                                             | >> | tags |                                                    | >> | updated_at              | >> 2018-10-05T12:09:36Z                               | >> +-------------------------+----------------------------------------------------+ >> >> >> openstack network agent list >> +-----------+------------+-----------+-------------------+-------+-------+--------------+ >> >> | ID        | Agent Type | Host      | Availability Zone | Alive | >> State | Binary       | >> +-----------+------------+-----------+-------------------+-------+-------+--------------+ >> >> | 14b9ea75- | L3 agent   | compute1. | nova | :-)   | UP    | >> neutron-l3-a | >> | 1dc1-4e37 |            | localdoma | |       |       | gent         | >> | -a2b0-508 |            | in        | |       | |              | >> | 3d336916d |            |           | |       | |              | >> | 26139ec1- | Metering   | compute1. | None | :-)   | UP    | >> neutron-     | >> | f4f9-4bb3 | agent      | localdoma | |       |       | metering-    | >> | -aebb-c35 |            | in        | |       |       | agent        | >> | 3a36ed79c |            |           | |       | |              | >> | 2a54971f- | DHCP agent | network1. | nova | :-)   | UP    | >> neutron-     | >> | 9849-4ed2 |            | localdoma | |       |       | dhcp-agent   | >> | -b009-00e |            | in        | |       | |              | >> | 45eb4d255 |            |           | |       | |              | >> | 443c0b49- | Open       | compute1. 
| None | :-)   | UP    | >> neutron-     | >> | 4484-44d2 | vSwitch    | localdoma | |       |       | openvswitch- | >> | -a704-32a | agent      | in        | |       |       | agent        | >> | 92ffe6982 |            |           | |       | |              | >> | 5d00a219  | L3 agent   | network1. | nova | :-)   | UP    | >> neutron-vpn- | >> | -abce-    |            | localdoma | |       |       | agent        | >> | 48ca-     |            | in        | |       | |              | >> | ba1e-d962 |            |           | |       | |              | >> | 01bd7de3  |            |           | |       | |              | >> | bc3458b4  | Open       | network1. | None | :-)   | UP    | >> neutron-     | >> | -250e-    | vSwitch    | localdoma | |       |       | openvswitch- | >> | 4adf-90e0 | agent      | in        | |       |       | agent        | >> | -110a1a7f |            |           | |       | |              | >> | 6ccb      |            |           | |       | |              | >> | c29f9da8- | Metering   | network1. | None | :-)   | UP    | >> neutron-     | >> | ca58-4a11 | agent      | localdoma | |       |       | metering-    | >> | -b500-a25 |            | in        | |       |       | agent        | >> | 3f820808e |            |           | |       | |              | >> | cdce667d- | Metadata   | network1. | None | :-)   | UP    | >> neutron-     | >> | faa4      | agent      | localdoma | |       |       | metadata-    | >> | -49ed-    |            | in        | |       |       | agent        | >> | 83ee-e0e5 |            |           | |       | |              | >> | a352d482  |            |           | |       | |              | >> | cf5ae104- | Metadata   | compute1. | None | :-)   | UP    | >> neutron-     | >> | 49d7-4c85 | agent      | localdoma | |       |       | metadata-    | >> | -a252-cc5 |            | in        | |       |       | agent        | >> | 9a9a12789 |            |           | |       | |              | >> +-----------+------------+-----------+-------------------+-------+-------+--------------+ >> >> >> If I check the node on which my distributed router is running it >> tells me that >> it's running on the network node: >> >> neutron l3-agent-list-hosting-router >> 37c1794b-58d1-4d0d-b34b-944ca411b86b >> +--------------------------------------+----------------------+----------------+-------+----------+ >> >> | id                                   | host                 | >> admin_state_up | alive | ha_state | >> +--------------------------------------+----------------------+----------------+-------+----------+ >> >> | 5d00a219-abce-48ca-ba1e-d96201bd7de3 | network1.localdomain | >> True           | :-)   |          | >> +--------------------------------------+----------------------+----------------+-------+----------+ >> >> >> If I check the iptable rules for the router on the compute and >> network nodes by running: >> >> ip netns exec qrouter-37c1794b-58d1-4d0d-b34b-944ca411b86b iptables >> -nv -L >> >> I see that compute1 records the traffic while network1 doesn't. Also, >> I did some >> debugging and found out that the metering agent on compute1 receives >> an empty >> list of routers when querying the routers that it should monitor. >> >> Source: >> >> https://github.com/openstack/neutron/blob/stable/pike/neutron/services/metering/agents/metering_agent.py#L177-L189 >> >> >> https://github.com/openstack/neutron/blob/stable/pike/neutron/db/metering/metering_rpc.py#L33-L57 >> >> >> I think this is because the router is running on network1. 
Why is it >> running on >> network1 and why does it seem that the l3 agent on compute1 does the >> actual >> routing? >> >> Thanks, >> Alex >> >> >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to     : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.carden at gmail.com Fri Nov 30 22:52:13 2018 From: mike.carden at gmail.com (Mike Carden) Date: Sat, 1 Dec 2018 09:52:13 +1100 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: > > > Have you set the placement_randomize_allocation_candidates CONF option > and are still seeing the packing behaviour? > > No I haven't. Where would be the place to do that? In a nova.conf somewhere that the nova-scheduler containers on the controller hosts could pick it up? Just about to deploy for realz with about forty x86 compute nodes, so it would be really nice to sort this first. :) -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL:
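For reference, the option Jay mentions is randomize_allocation_candidates in the [placement] group of nova.conf. In Queens it is read by the placement API service rather than by nova-scheduler itself, so it needs to land in the nova.conf used by the placement service; in a TripleO deployment that presumably means the nova_placement container on the controllers (the container name here is an assumption, check your own layout). A minimal sketch of the setting:

[placement]
# return allocation candidates in a random order instead of a fixed one,
# so a quiet cloud does not always pack onto the first compute node
randomize_allocation_candidates = true

Whichever service reads that file needs a restart afterwards for the change to take effect.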