From anlin.kong at gmail.com Thu Aug 1 00:01:30 2019
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Thu, 1 Aug 2019 12:01:30 +1200
Subject: [telemetry][ceilometer][gnocchi] How to configure aggregate for
 cpu_util or calculate from metrics
In-Reply-To:
References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com>
 <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch>
 <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl>
Message-ID:

Hi Bernd,

A lot of people have asked the same question before; unfortunately, I
don't know the answer either (we are still using an old version of
Ceilometer). The original cpu_util support has been removed from
Ceilometer in favor of Gnocchi, but AFAIK there is no Gnocchi doc that
explains how to achieve the same thing, and no clear answer from the
Gnocchi maintainers.

It'd be much appreciated if you could share the answer once you find it,
or perhaps someone who has already solved the issue will reply.

Best regards,
Lingxian Kong
Catalyst Cloud

On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch wrote:

> The message at the end of this email is some three months old. I have the
> same problem. The question is: *How to use the new rate metrics in
> Gnocchi.* I am using a Stein Devstack for my tests.
>
> For example, I need the CPU rate, formerly named *cpu_util*.
> I created a new archive policy that uses *rate:mean* aggregation and has
> a 1 minute granularity:
>
> $ gnocchi archive-policy show ceilometer-medium-rate
> +---------------------+------------------------------------------------------------------+
> | Field               | Value                                                            |
> +---------------------+------------------------------------------------------------------+
> | aggregation_methods | rate:mean, mean                                                  |
> | back_window         | 0                                                                |
> | definition          | - points: 10080, granularity: 0:01:00, timespan: 7 days, 0:00:00 |
> | name                | ceilometer-medium-rate                                           |
> +---------------------+------------------------------------------------------------------+
>
> I added the new policy to the publishers in *pipeline.yaml*:
>
> $ tail -n5 /etc/ceilometer/pipeline.yaml
> sinks:
>     - name: meter_sink
>       publishers:
>           - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift
>           - gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift
>
> After restarting all of Ceilometer, my hope was that the CPU rate would
> magically appear in the metric list. But no: All metrics are linked to
> archive policy *medium*, and looking at the details of an instance, I
> don't detect anything rate-related:
>
> $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11
> +-----------------------+---------------------------------------------------------------------+
> | Field                 | Value                                                               |
> +-----------------------+---------------------------------------------------------------------+
> ...
> | metrics               | compute.instance.booting.time: 76fac1f5-962e-4ff2-8790-1f497c99c17d |
> |                       | cpu: af930d9a-a218-4230-b729-fee7e3796944                            |
> |                       | disk.ephemeral.size: 0e838da3-f78f-46bf-aefb-aeddf5ff3a80            |
> |                       | disk.root.size: 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                 |
> |                       | memory.resident: 09efd98d-c848-4379-ad89-f46ec526c183                |
> |                       | memory.swap.in: 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                 |
> |                       | memory.swap.out: 4d012697-1d89-4794-af29-61c01c925bb4                |
> |                       | memory.usage: 93eab625-0def-4780-9310-eceff46aab7b                   |
> |                       | memory: ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1                         |
> |                       | vcpus: e1c5acaf-1b10-4d34-98b5-3ad16de57a98                          |
> | original_resource_id  | ae3659d6-8998-44ae-a494-5248adbebe11                                 |
> ...
> | type                  | instance                                                             |
> | user_id               | a9c935f52e5540fc9befae7f91b4b3ae                                     |
> +-----------------------+---------------------------------------------------------------------+
>
> Obviously, I am missing something. Where is the missing link? What do I
> have to do to get CPU usage rates? Do I have to create metrics? Do I have
> to ask Ceilometer to create metrics? How?
>
> Right now, no instructions seem to exist at all. If that is correct, I
> would be happy to write documentation once I understand how it works.
>
> Thanks a lot.
>
> Bernd
>
> On 5/10/2019 3:49 PM, info at dantalion.nl wrote:
> > Hello,
> >
> > I am working on Watcher and we are currently changing how metrics are
> > retrieved from different datasources such as Monasca or Gnocchi. Because
> > of this major overhaul I would like to validate that everything is
> > working correctly.
> >
> > Almost all of the optimization strategies in Watcher require the cpu
> > utilization of an instance as metric but with newer versions of
> > Ceilometer this has become unavailable.
> >
> > On IRC I received the information that Gnocchi could be used to
> > configure an aggregate and this aggregate would then report cpu
> > utilization, however, I have been unable to find documentation on how to
> > achieve this.
> > I was also notified that cpu_util is something that could be computed
> > from other metrics. When reading
> > https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute
> > the documentation seems to agree on this, as it states that cpu_util is
> > measured by using a 'rate of change' transformer. But I have not been
> > able to find how this can be computed.
> >
> > I was hoping someone could spare the time to provide documentation or
> > information on how this currently is best achieved.
> >
> > Kind Regards,
> > Corne Lukken (Dantali0n)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From berndbausch at gmail.com Thu Aug 1 01:16:25 2019
From: berndbausch at gmail.com (Bernd Bausch)
Date: Thu, 1 Aug 2019 10:16:25 +0900
Subject: [telemetry][ceilometer][gnocchi] How to configure aggregate for
 cpu_util or calculate from metrics
In-Reply-To:
References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com>
 <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch>
 <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl>
Message-ID:

Lingxian,

Thanks for "bumping" my request and keeping it alive. The reason I need
an answer: I am updating courseware to Stein that includes autoscaling
based on CPU and disk I/O rates. Looks like I am "cutting edge" :)

I don't think the problem is in the Gnocchi camp, but rather Ceilometer.
To store rates of measures in Gnocchi, the following is needed:

* A /metric/. Raw measures are sent to the metric.
* An /archive policy/. The metric has an archive policy.
* The archive policy includes one or more /rate aggregates/.

My cloud has archive policies with rate aggregates, but the question is
about the first bullet: *How can I configure Ceilometer so that it
creates the corresponding metrics and sends measures to them?* In other
words, how is Ceilometer's output connected to my archive policy? From my
experience, just adding the archive policy to Ceilometer's publishers is
not sufficient.
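For what it's worth, the arithmetic behind the removed cpu_util meter is
simple to state: the cpu metric is cumulative CPU time in nanoseconds, so
the difference between two samples, divided by the elapsed wall-clock
time and the number of vCPUs, gives the utilization. A minimal sketch of
that calculation; the sample values, period, and vCPU count below are
invented for illustration:

```shell
# Two cumulative "cpu" samples in nanoseconds, taken 60 seconds apart
# on a 2-vCPU instance (all values invented for this example).
t0=120000000000
t1=132000000000
period=60
vcpus=2

# cpu_util (%) = delta CPU time / (period in ns * vCPUs) * 100
cpu_util=$(awk -v a="$t0" -v b="$t1" -v p="$period" -v v="$vcpus" \
    'BEGIN { printf "%.1f", (b - a) / (p * 1e9 * v) * 100 }')
echo "$cpu_util"    # 10.0
```

As far as I understand, a rate:* aggregate stores only the per-period
delta (the numerator above); the division by period and vCPU count still
has to happen wherever the value is consumed.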
Ceilometer's source code includes /.../publisher/data/gnocchi_resources.yaml/, which might well be the place where this can be configured. I am not sure how to do it though, and this file is not documented. I can read the source, but my developer skills are insufficient for understanding how everything fits together. Bernd On 8/1/2019 9:01 AM, Lingxian Kong wrote: > Hi Bernd, > > There were a lot of people asked the same question before, > unfortunately, I don't know the answer either(we are still using an > old version of Ceilometer). The original cpu_util support has been > removed from Ceilometer in favor of Gnocchi, but AFAIK, there is no > doc in Gnocchi mentioned how to achieve the same thing and no clear > answer from the Gnocchi maintainers. > > It'd be much appreciated if you could find the answer in the end, or > there will be someone who has the already solved the issue. > > Best regards, > Lingxian Kong > Catalyst Cloud > > > On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch > wrote: > > The message at the end of this email is some three months old. I > have the same problem. The question is: *How to use the new rate > metrics in Gnocchi. *I am using a Stein Devstack for my tests.* > * > > For example, I need the CPU rate, formerly named /cpu_util/. 
I > created a new archive policy that uses /rate:mean/ aggregation and > has a 1 minute granularity: > > $ gnocchi archive-policy show ceilometer-medium-rate > +---------------------+------------------------------------------------------------------+ > | Field               | Value | > +---------------------+------------------------------------------------------------------+ > | aggregation_methods | rate:mean, > mean                                                  | > | back_window         | 0 | > | definition          | - points: 10080, granularity: 0:01:00, > timespan: 7 days, 0:00:00 | > | name                | ceilometer-medium-rate | > +---------------------+------------------------------------------------------------------+ > > I added the new policy to the publishers in /pipeline.yaml/: > > $ tail -n5 /etc/ceilometer/pipeline.yaml > sinks: >     - name: meter_sink >       publishers: >           - > gnocchi://?archive_policy=medium&filter_project=gnocchi_swift > *- > gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift* > > After restarting all of Ceilometer, my hope was that the CPU rate > would magically appear in the metric list. But no: All metrics are > linked to archive policy /medium/, and looking at the details of > an instance, I don't detect anything rate-related: > > $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 > +-----------------------+---------------------------------------------------------------------+ > | Field                 | Value | > +-----------------------+---------------------------------------------------------------------+ > ... 
> | metrics               | compute.instance.booting.time: > 76fac1f5-962e-4ff2-8790-1f497c99c17d | > |                       | cpu: af930d9a-a218-4230-b729-fee7e3796944 | > |                       | disk.ephemeral.size: > 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           | > |                       | disk.root.size: > 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                | > |                       | memory.resident: > 09efd98d-c848-4379-ad89-f46ec526c183               | > |                       | memory.swap.in : > 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                | > |                       | memory.swap.out: > 4d012697-1d89-4794-af29-61c01c925bb4               | > |                       | memory.usage: > 93eab625-0def-4780-9310-eceff46aab7b                  | > |                       | memory: > ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1 | > |                       | vcpus: > e1c5acaf-1b10-4d34-98b5-3ad16de57a98 | > | original_resource_id  | ae3659d6-8998-44ae-a494-5248adbebe11 | > ... > > | type                  | instance | > | user_id               | a9c935f52e5540fc9befae7f91b4b3ae | > +-----------------------+---------------------------------------------------------------------+ > > Obviously, I am missing something. Where is the missing link? What > do I have to do to get CPU usage rates? Do I have to create > metrics? Do//I have to ask Ceilometer to create metrics? How? > > Right now, no instructions seem to exist at all. If that is > correct, I would be happy to write documentation once I understand > how it works. > > Thanks a lot. > > Bernd > > On 5/10/2019 3:49 PM, info at dantalion.nl > wrote: >> Hello, >> >> I am working on Watcher and we are currently changing how metrics are >> retrieved from different datasources such as Monasca or Gnocchi. Because >> of this major overhaul I would like to validate that everything is >> working correctly. 
>> >> Almost all of the optimization strategies in Watcher require the cpu >> utilization of an instance as metric but with newer versions of >> Ceilometer this has become unavailable. >> >> On IRC I received the information that Gnocchi could be used to >> configure an aggregate and this aggregate would then report cpu >> utilization, however, I have been unable to find documentation on how to >> achieve this. >> >> I was also notified that cpu_util is something that could be computed >> from other metrics. When reading >> https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute >> the documentation seems to agree on this as it states that cpu_util is >> measured by using a 'rate of change' transformer. But I have not been >> able to find how this can be computed. >> >> I was hoping someone could spare the time to provide documentation or >> information on how this currently is best achieved. >> >> Kind Regards, >> Corne Lukken (Dantali0n) >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Thu Aug 1 01:23:37 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 1 Aug 2019 10:23:37 +0900 Subject: [rdo] [packstack] failing to setup Stein with Openvswitch/VXLAN networking In-Reply-To: References: Message-ID: <173d57de-7282-4600-d230-191b9b17ac31@gmail.com> Thanks Alfredo. Yes, a single eth0: $ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 ... 
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000     link/ether 52:54:00:dd:2c:f3 brd ff:ff:ff:ff:ff:ff     inet 192.168.1.202/24 brd 192.168.1.255 scope global eth0        valid_lft forever preferred_lft forever     inet6 240d:1a:3c5:3d00:5054:ff:fedd:2cf3/64 scope global mngtmpaddr dynamic        valid_lft 10790sec preferred_lft 10790sec     inet6 fe80::5054:ff:fedd:2cf3/64 scope link        valid_lft forever preferred_lft forever and ifcfg-eth0 is unchanged since installation (perhaps I should clean it up a little; could IPv6 be causing problems?): $ cat /etc/sysconfig/network-scripts/ifcfg-eth0 TYPE=Ethernet PROXY_METHOD=none BROWSER_ONLY=no BOOTPROTO=none DEFROUTE=yes IPV4_FAILURE_FATAL=no IPV6INIT=yes IPV6_AUTOCONF=yes IPV6_DEFROUTE=yes IPV6_FAILURE_FATAL=no IPV6_ADDR_GEN_MODE=stable-privacy NAME=eth0 UUID=75c26b23-f3fb-4d3f-a473-5354075d5a25 DEVICE=eth0 ONBOOT=yes IPADDR=192.168.1.202 PREFIX=24 GATEWAY=192.168.1.1 DNS1=192.168.1.16 DNS2=1.1.1.1 DOMAIN=home IPV6_PRIVACY=no More info: This is a VM running /CentOS 7.6.1810 /on a Fedora KVM host. eth0 is bridged. I did disable and stop network manager and firewalld on the guest. Bernd. On 7/31/2019 10:13 PM, Alfredo Moralejo Alonso wrote: > Hi, > > So, IIUC, your server has a single NIC, eth0, right? > > Could you provide the configuration file for eth0 before running > packstack?, i guess you are using ifcfg files? > > Best regards, > > Alfredo > > > On Wed, Jul 31, 2019 at 2:20 AM Bernd Bausch > wrote: > > Trying to set up a Stein cloud with Packstack. I want the > Openvswitch mech driver and VXLAN type driver. 
> A few weeks ago, the following invocation was successful:
>
> sudo packstack --debug --allinone --default-password pw \
> --os-neutron-ovs-bridge-interfaces=br-ex:eth0 \
> --os-neutron-ml2-tenant-network-types=vxlan \
> --os-neutron-ml2-mechanism-drivers=openvswitch \
> --os-neutron-ml2-type-drivers=vxlan,flat \
> --os-neutron-l2-agent=openvswitch \
> --provision-demo-floatrange=10.1.1.0/24 \
> --provision-demo-allocation-pools '["start=10.1.1.10,end=10.1.1.50"]' \
> --os-heat-install=y --os-heat-cfn-install=y
>
> Now, it fails during network setup. My network connection to the
> Packstack server is severed, and it turns out that its only network
> interface /eth0/ has no IP address and is down. No bridge exists.
>
> In the /network.pp.finished/ file, I find various /ovs-vsctl/ commands
> including /add-br/, and a command /ifdown eth0/ which fails with exit
> code 1 (no error message from the /ifdown/ command is logged).
>
> *Can somebody recommend the options required to successfully deploy a
> Stein cloud based on the Openvswitch and VXLAN drivers?*
>
> Thanks much,
>
> Bernd

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From berndbausch at gmail.com Thu Aug 1 02:39:05 2019
From: berndbausch at gmail.com (Bernd Bausch)
Date: Thu, 1 Aug 2019 11:39:05 +0900
Subject: [rdo] [packstack] failing to setup Stein with Openvswitch/VXLAN
 networking
In-Reply-To: <173d57de-7282-4600-d230-191b9b17ac31@gmail.com>
References: <173d57de-7282-4600-d230-191b9b17ac31@gmail.com>
Message-ID: <4ab53229-9c7a-70e4-7c95-8e466d6463ca@gmail.com>

Hmmmmm. I just succeeded in setting up my Packstack after removing IPv6
from the kernel and cleaning out ifcfg-eth0. Since I don't need IPv6, I
am good.
My config: $ *cat /etc/sysconfig/network-scripts/ifcfg-eth0* TYPE=Ethernet BOOTPROTO=none DEFROUTE=yes NAME=eth0 DEVICE=eth0 ONBOOT=yes IPADDR=192.168.1.202 PREFIX=24 GATEWAY=192.168.1.1 DNS1=192.168.1.16 DNS2=1.1.1.1 DOMAIN=home IPV6INIT=no $ *cat /etc/sysctl.conf* net.ipv6.conf.eth0.disable_ipv6=1 Thanks again, Alfredo. On 8/1/2019 10:23 AM, Bernd Bausch wrote: > > Thanks Alfredo. Yes, a single eth0: > > $ ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > group default qlen 1000 > ... > 2: eth0: mtu 1500 qdisc pfifo_fast > state UP group default qlen 1000 >     link/ether 52:54:00:dd:2c:f3 brd ff:ff:ff:ff:ff:ff >     inet 192.168.1.202/24 brd 192.168.1.255 scope global eth0 >        valid_lft forever preferred_lft forever >     inet6 240d:1a:3c5:3d00:5054:ff:fedd:2cf3/64 scope global > mngtmpaddr dynamic >        valid_lft 10790sec preferred_lft 10790sec >     inet6 fe80::5054:ff:fedd:2cf3/64 scope link >        valid_lft forever preferred_lft forever > > and ifcfg-eth0 is unchanged since installation (perhaps I should clean > it up a little; could IPv6 be causing problems?): > > $ cat /etc/sysconfig/network-scripts/ifcfg-eth0 > TYPE=Ethernet > PROXY_METHOD=none > BROWSER_ONLY=no > BOOTPROTO=none > DEFROUTE=yes > IPV4_FAILURE_FATAL=no > IPV6INIT=yes > IPV6_AUTOCONF=yes > IPV6_DEFROUTE=yes > IPV6_FAILURE_FATAL=no > IPV6_ADDR_GEN_MODE=stable-privacy > NAME=eth0 > UUID=75c26b23-f3fb-4d3f-a473-5354075d5a25 > DEVICE=eth0 > ONBOOT=yes > IPADDR=192.168.1.202 > PREFIX=24 > GATEWAY=192.168.1.1 > DNS1=192.168.1.16 > DNS2=1.1.1.1 > DOMAIN=home > IPV6_PRIVACY=no > > More info: This is a VM running /CentOS 7.6.1810 /on a Fedora KVM > host. eth0 is bridged. I did disable and stop network manager and > firewalld on the guest. > > Bernd. > > On 7/31/2019 10:13 PM, Alfredo Moralejo Alonso wrote: >> Hi, >> >> So, IIUC, your server has a single NIC, eth0, right? 
>> >> Could you provide the configuration file for eth0 before running >> packstack?, i guess you are using ifcfg files? >> >> Best regards, >> >> Alfredo >> >> >> On Wed, Jul 31, 2019 at 2:20 AM Bernd Bausch > > wrote: >> >> Trying to set up a Stein cloud with Packstack. I want the >> Openvswitch mech driver and VXLAN type driver. A few weeks ago, >> the following invocation was successful: >> >> |sudo packstack --debug --allinone --default-password pw \ >> --os-neutron-ovs-bridge-interfaces=br-ex:eth0 \ >> --os-neutron-ml2-tenant-network-types=vxlan \ >> --os-neutron-ml2-mechanism-drivers=openvswitch \ >> --os-neutron-ml2-type-drivers=vxlan,flat \ >> --os-neutron-l2-agent=openvswitch \ >> --provision-demo-floatrange=10.1.1.0/24\ >> --provision-demo-allocation-pools >> '["start=10.1.1.10,end=10.1.1.50"]'\ --os-heat-install=y >> --os-heat-cfn-install=y||| >> >> Now, it fails during network setup. My network connection to the >> Packstack server is severed, and it turns out that its only >> network interface /eth0 /has no IP address and is down. No bridge >> exists. >> >> In the /network.pp.finished /file, I find various /ovs-vsctl >> /commands including /add-br/, and a command /ifdown eth0 /which >> fails with exit code 1 (no error message from the /ifdown >> /command is logged). >> >> *Can somebody recommend the options required to successfully >> deploy a Stein cloud based on the Openvswitch and VXLAN drivers?* >> >> Thanks much, >> >> Bernd >> >> >> >> || >> >> || >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gregory.orange at pawsey.org.au Thu Aug 1 03:12:19 2019 From: gregory.orange at pawsey.org.au (Gregory Orange) Date: Thu, 1 Aug 2019 11:12:19 +0800 Subject: creating instances, haproxy eats CPU, glance eats RAM Message-ID: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> Hi everyone, We have a Queens/Rocky environment with haproxy in front of most services. 
Recently we've found a problem when creating multiple instances (2 VCPUs,
6GB RAM) from large images. The behaviour is the same whether we use
Horizon or Terraform, so I've continued on with Terraform since it's
easier to repeat attempts.

WORKS: 340MB image, create 80x instances, 40 at a time
FAILS: 20GB image, create 40x instances, 20 at a time

... Changed haproxy config, added "balance roundrobin" to glance, cinder,
nova, neutron stanzas (there was no 'balance' config before, not sure
what it would have been doing) ...

WORKS sometimes[1]: 20GB image, create 40x instances, 20 at a time
FAILS: 20GB image, create 80x instances, 40 at a time

The failure condition:
* The active haproxy server has a single core go to 100% usage (the rest
  are idle)
* One glance server's RAM usage grows rapidly and continuously
* Some instances that are building complete
* Creating new instances fails (BUILD state forever)
* Horizon becomes unresponsive
* Ceph and Cinder don't appear to be overloaded (ceph -s, logs, system
  state)
* These states do not recover until we take the following actions...

To recover:
* Kill any remaining (Terraform) attempts to launch instances
* Stop haproxy on the active server
* Wait a few seconds
* Start haproxy again

[1] When we create enough to not quite overload it, haproxy server goes
to 100% on one core but recovers once the instances are (slooowly)
created.

The cause of the problem is not clear (e.g. from haproxy and glance
logs, system state), and I'm looking for pointers on where to look or
what to try next. Can you help?

Thank you,
Greg.
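For anyone wanting to replicate the change described above, "balance
roundrobin" goes into each backend stanza. The fragment below is a
hypothetical sketch (backend name, server names, and addresses are
invented); note that haproxy already defaults to roundrobin when no
balance directive is given, so adding it mainly makes the intent
explicit:

```
# Hypothetical glance backend stanza; names and addresses are invented.
# With no "balance" directive, haproxy defaults to roundrobin anyway.
backend glance_api
    balance roundrobin
    server glance1 192.0.2.11:9292 check
    server glance2 192.0.2.12:9292 check
```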
From gregory.orange at pawsey.org.au Thu Aug 1 07:19:46 2019 From: gregory.orange at pawsey.org.au (Gregory Orange) Date: Thu, 1 Aug 2019 15:19:46 +0800 Subject: creating instances, haproxy eats CPU, glance eats RAM In-Reply-To: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> References: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> Message-ID: <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au> Hi again everyone, On 1/8/19 11:12 am, Gregory Orange wrote: > We have a Queens/Rocky environment with haproxy in front of most services. Recently we've found a problem when creating multiple instances (2 VCPUs, 6GB RAM) from large images. The behaviour is the same whether we use Horizon or Terraform, so I've continued on with Terraform since it's easier to repeat attempts. As a followup, I found a neutron server stuck with one of its cores consumed to 100%, and RAM and swap exhausted. After rebooting that server, everything worked fine. Over the next hour, RAM and swap was exhausted again by lots of spawning processes (a few hundred neutron-rootwrap-daemon), and oom-killer cleaned it up, resulting in a loop where it fills and empties RAM every 20-60 minutes. We have some other neutron changes planned, so for now we have left that one turned off, and the other two (which have less RAM) are working fine without these symptoms. Strange, but I'm glad to have found something, and that it's working for now. Regards, Greg. 
From ruslanas at lpic.lt Thu Aug 1 07:57:39 2019
From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=)
Date: Thu, 1 Aug 2019 09:57:39 +0200
Subject: creating instances, haproxy eats CPU, glance eats RAM
In-Reply-To: <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au>
References: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au>
 <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au>
Message-ID:

When role separation was introduced in the Newton release, we divided
memory-hungry processes into 4 different VMs on 3 physical boxes:
1) Networker: all Neutron agent processes (network throughput)
2) Systemd: all services started by systemd (Neutron)
3) pcs: all services controlled by pcs (Galera + RabbitMQ)
4) Horizon

Not sure how I would do it now; I think I would go for VMs again, and
those VMs would include containers. It is easier to recover and rebuild
the whole OpenStack.

Gregory,

Do you have local storage as the swift and cinder backend?
if local; then do you use RAID? if yes; then which RAID?; fi
Do you use SSDs? Do you use Ceph as the backend for cinder and swift? fi

Also double-check where the _base image is located: is it in
/var/lib/nova/instances/_base/* ? And are the flavor disks stored in
/var/lib/nova/instances ? (You can check on a compute node with:
virsh domblklist instance-00000## )

On Thu, 1 Aug 2019 at 09:25, Gregory Orange wrote:

> Hi again everyone,
>
> On 1/8/19 11:12 am, Gregory Orange wrote:
> > We have a Queens/Rocky environment with haproxy in front of most
> > services. Recently we've found a problem when creating multiple
> > instances (2 VCPUs, 6GB RAM) from large images. The behaviour is the
> > same whether we use Horizon or Terraform, so I've continued on with
> > Terraform since it's easier to repeat attempts.
>
> As a followup, I found a neutron server stuck with one of its cores
> consumed to 100%, and RAM and swap exhausted. After rebooting that
> server, everything worked fine.
Over the next hour, RAM and swap was exhausted > again by lots of spawning processes (a few hundred > neutron-rootwrap-daemon), and oom-killer cleaned it up, resulting in a loop > where it fills and empties RAM every 20-60 minutes. We have some other > neutron changes planned, so for now we have left that one turned off, and > the other two (which have less RAM) are working fine without these symptoms. > > Strange, but I'm glad to have found something, and that it's working for > now. > > Regards, > Greg. > > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Thu Aug 1 08:58:18 2019 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 1 Aug 2019 18:58:18 +1000 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> Message-ID: <20190801085818.GD2077@fedora19.localdomain> On Fri, Jul 26, 2019 at 04:53:28PM -0700, Clark Boylan wrote: > Given my change shows this can be so much quicker is there any > interest in modifying devstack to be faster here? And if so what do > we think an appropriate approach would be? My first concern was if anyone considered openstack-client setting these things up as actually part of the testing. I'd say not, comments in [1] suggest similar views. My second concern is that we do keep sufficient track of complexity v speed; obviously doing things in a sequential manner via a script is pretty simple to follow and as we start putting things into scripts we make it harder to debug when a monoscript dies and you have to start pulling apart where it was. With just a little json fiddling we can currently pull good stats from logstash ([2]) so I think as we go it would be good to make sure we account for the time using appropriate wrappers, etc. 
Then the third concern is not to break anything for plugins -- devstack
has a very very loose API which basically relies on plugin authors using
a combination of good taste and copying other code to decide what's
internal or not.

Which made me start thinking: I wonder if, when we look at this closely,
even without replacing things we might make inroads?

For example [3]; it seems like SERVICE_DOMAIN_NAME is never not default,
so the get_or_create_domain call is always just overhead (the result is
never used).

Then it seems that in the gate, basically all of the "get_or_create"
calls will really just be "create" calls? Because we're always starting
fresh. So we could cut out about half of the calls there by pre-checking
if we know we're under zuul (proof-of-concept [4]).

Then we have blocks like:

 get_or_add_user_project_role $member_role $demo_user $demo_project
 get_or_add_user_project_role $admin_role $admin_user $demo_project
 get_or_add_user_project_role $another_role $demo_user $demo_project
 get_or_add_user_project_role $member_role $demo_user $invis_project

If we wrapped that in something like

 start_osc_session
 ...
 end_osc_session

which sets a variable that means instead of calling directly, those
functions write their arguments to a tmp file. Then at the end call,
end_osc_session does

 $ osc "$(< tmpfile)"

and uses the inbuilt batching? If that had half the calls by skipping
the "get_or" bit, and used common authentication from batching, would
that help?

And then I don't know if all the projects and groups are required for
every devstack run? Maybe someone skilled in the art could do a bit of
an audit and we could cut more of that out too?

So I guess my point is that maybe we could tweak what we have a bit to
make some immediate wins, before anyone has to rewrite too much?
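A rough sketch of what that batching wrapper could look like. The names
start_osc_session, end_osc_session, and osc come from the proposal
above; everything else is invented, and the sketch assumes the client
accepts one subcommand per line on stdin (as openstackclient's
interactive mode does):

```shell
# Sketch of the proposed batching wrapper; error handling omitted.
OSC_BATCH=""

start_osc_session() {
    OSC_BATCH=$(mktemp)
}

# While a session is open, queue the subcommand instead of running it.
osc() {
    if [ -n "$OSC_BATCH" ]; then
        echo "$*" >> "$OSC_BATCH"
    else
        openstack "$@"
    fi
}

# Flush the queue through a single client process, paying interpreter
# startup and keystone authentication once instead of once per call.
end_osc_session() {
    openstack < "$OSC_BATCH"
    rm -f "$OSC_BATCH"
    OSC_BATCH=""
}
```

Call sites would then use osc between start_osc_session and
end_osc_session instead of invoking openstack directly.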
-i

[1] https://review.opendev.org/673018
[2] https://ethercalc.openstack.org/rzuhevxz7793
[3] https://review.opendev.org/673941
[4] https://review.opendev.org/673936

From ralf.teckelmann at bertelsmann.de Thu Aug 1 09:11:55 2019
From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP)
Date: Thu, 1 Aug 2019 09:11:55 +0000
Subject: AW: [telemetry][ceilometer][gnocchi] How to configure aggregate
 for cpu_util or calculate from metrics
In-Reply-To:
References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com>
 <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch>
 <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl>
Message-ID:

Hello Bernd, hello Lingxian,

+1

You are not alone in your fruitless endeavor. Sadly, I cannot come up
with a solution either; we are stuck at the same point. Maybe some day a
dedicated member of the OpenStack community will give the ceilometer
guys a push to explain their service. For us, also using Stein, it is in
the state of "not production ready".

Cheers,
Ralf T.

________________________________
From: Bernd Bausch
Sent: Thursday, 1 August 2019 03:16:25
To: Lingxian Kong
Cc: openstack-discuss
Subject: Re: [telemetry][ceilometer][gnocchi] How to configure aggregate
 for cpu_util or calculate from metrics

Lingxian,

Thanks for "bumping" my request and keeping it alive. The reason I need
an answer: I am updating courseware to Stein that includes autoscaling
based on CPU and disk I/O rates. Looks like I am "cutting edge" :)

I don't think the problem is in the Gnocchi camp, but rather Ceilometer.
To store rates of measures in Gnocchi, the following is needed:

* A metric. Raw measures are sent to the metric.
* An archive policy. The metric has an archive policy.
* The archive policy includes one or more rate aggregates.

My cloud has archive policies with rate aggregates, but the question is
about the first bullet: How can I configure Ceilometer so that it
creates the corresponding metrics and sends measures to them?
In other words, how is Ceilometer's output connected to my archive policy. From my experience, just adding the archive policy to Ceilometer's publishers is not sufficient. Ceilometer's source code includes .../publisher/data/gnocchi_resources.yaml, which might well be the place where this can be configured. I am not sure how to do it though, and this file is not documented. I can read the source, but my developer skills are insufficient for understanding how everything fits together. Bernd On 8/1/2019 9:01 AM, Lingxian Kong wrote: Hi Bernd, There were a lot of people asked the same question before, unfortunately, I don't know the answer either(we are still using an old version of Ceilometer). The original cpu_util support has been removed from Ceilometer in favor of Gnocchi, but AFAIK, there is no doc in Gnocchi mentioned how to achieve the same thing and no clear answer from the Gnocchi maintainers. It'd be much appreciated if you could find the answer in the end, or there will be someone who has the already solved the issue. Best regards, Lingxian Kong Catalyst Cloud On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch > wrote: The message at the end of this email is some three months old. I have the same problem. The question is: How to use the new rate metrics in Gnocchi. I am using a Stein Devstack for my tests. For example, I need the CPU rate, formerly named cpu_util. 
I created a new archive policy that uses rate:mean aggregation and has a 1-minute granularity:

$ gnocchi archive-policy show ceilometer-medium-rate
+---------------------+------------------------------------------------------------------+
| Field               | Value                                                            |
+---------------------+------------------------------------------------------------------+
| aggregation_methods | rate:mean, mean                                                  |
| back_window         | 0                                                                |
| definition          | - points: 10080, granularity: 0:01:00, timespan: 7 days, 0:00:00 |
| name                | ceilometer-medium-rate                                           |
+---------------------+------------------------------------------------------------------+

I added the new policy to the publishers in pipeline.yaml:

$ tail -n5 /etc/ceilometer/pipeline.yaml
sinks:
    - name: meter_sink
      publishers:
          - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift
          - gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift

After restarting all of Ceilometer, my hope was that the CPU rate would magically appear in the metric list. But no: All metrics are linked to archive policy medium, and looking at the details of an instance, I don't detect anything rate-related:

$ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11
+-----------------------+---------------------------------------------------------------------+
| Field                 | Value                                                               |
+-----------------------+---------------------------------------------------------------------+
...
| metrics               | compute.instance.booting.time: 76fac1f5-962e-4ff2-8790-1f497c99c17d |
|                       | cpu: af930d9a-a218-4230-b729-fee7e3796944                           |
|                       | disk.ephemeral.size: 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           |
|                       | disk.root.size: 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                |
|                       | memory.resident: 09efd98d-c848-4379-ad89-f46ec526c183               |
|                       | memory.swap.in: 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                |
|                       | memory.swap.out: 4d012697-1d89-4794-af29-61c01c925bb4               |
|                       | memory.usage: 93eab625-0def-4780-9310-eceff46aab7b                  |
|                       | memory: ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1                        |
|                       | vcpus: e1c5acaf-1b10-4d34-98b5-3ad16de57a98                         |
| original_resource_id  | ae3659d6-8998-44ae-a494-5248adbebe11                                |
...
| type                  | instance                                                            |
| user_id               | a9c935f52e5540fc9befae7f91b4b3ae                                    |
+-----------------------+---------------------------------------------------------------------+

Obviously, I am missing something. Where is the missing link? What do I have to do to get CPU usage rates? Do I have to create metrics? Do I have to ask Ceilometer to create metrics? How?

Right now, no instructions seem to exist at all. If that is correct, I would be happy to write documentation once I understand how it works.

Thanks a lot.

Bernd

On 5/10/2019 3:49 PM, info at dantalion.nl wrote:

Hello,

I am working on Watcher and we are currently changing how metrics are retrieved from different datasources such as Monasca or Gnocchi. Because of this major overhaul I would like to validate that everything is working correctly.

Almost all of the optimization strategies in Watcher require the cpu utilization of an instance as a metric, but with newer versions of Ceilometer this has become unavailable.

On IRC I received the information that Gnocchi could be used to configure an aggregate and this aggregate would then report cpu utilization; however, I have been unable to find documentation on how to achieve this.

I was also notified that cpu_util is something that could be computed from other metrics.
When reading https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute the documentation seems to agree on this, as it states that cpu_util is measured by using a 'rate of change' transformer. But I have not been able to find how this can be computed.

I was hoping someone could spare the time to provide documentation or information on how this is currently best achieved.

Kind Regards,
Corne Lukken (Dantali0n)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From juliaashleykreger at gmail.com  Thu Aug  1 11:26:19 2019
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 1 Aug 2019 07:26:19 -0400
Subject: [ironic] Shanghai Planning
Message-ID: 

Greetings everyone,

I just wanted to remind my fellow ironic contributors that the planning and coordination etherpad for Shanghai[0] is available. Please add any topics and indicate your attendance as soon as possible.

Thanks,
-Julia

[0]: https://etherpad.openstack.org/p/PVG-Ironic-Planning

From dtantsur at redhat.com  Thu Aug  1 11:27:48 2019
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Thu, 1 Aug 2019 13:27:48 +0200
Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2
In-Reply-To: 
References: 
Message-ID: 

On 7/31/19 8:21 PM, Kendall Nelson wrote:
> Hello Everyone!
>
> Development Focus
> -----------------
> We are now past the Train-2 milestone, and entering the last development phase
> of the cycle. Teams should be focused on implementing planned work for the
> cycle. Now is a good time to review those plans and reprioritize anything if
> needed based on what progress has been made and what looks realistic to
> complete in the next few weeks.
>
> General Information
> -------------------
> The following cycle-with-intermediary deliverables have not done any
> intermediary release yet during this cycle.
> The cycle-with-rc release model is
> more suited for deliverables that plan to be released only once per cycle.

I respectfully disagree. I will reserve my opinion on whether cycle-with-rc suits *anyone*, but in our case I'd prefer to have the option of releasing something in the middle of a cycle, even if we don't exercise that option very often.

I'm not an ironic PTL, but anyway please note that I'm -1 on the change for any of our projects.

Dmitry

> As a
> result, we will be proposing to change the release model for the following
> deliverables:
>
> blazar-dashboard
>
> cloudkitty-dashboard
> ec2-api
> freezer-web-ui
> freezer
> heat-agents
> heat-dashboard
> ironic-ui
> karbor-dashboard
> karbor
> kuryr-kubernetes
> magnum-ui
> manila-ui
> masakari-dashboard
> monasca-agent
> monasca-api
> monasca-ceilometer
> monasca-events-api
> monasca-kibana-plugin
> monasca-log-api
> monasca-notification
> monasca-persister
> monasca-thresh
> monasca-transform
> monasca-ui
> murano-agent
> networking-baremetal
> networking-generic-switch
> networking-hyperv
> neutron-fwaas-dashboard
> neutron-vpnaas-dashboard
> requirements
> sahara-extra
> senlin-dashboard
> solum-dashboard
> tacker-horizon
> tricircle
> vitrage-dashboard
> vitrage
> watcher-dashboard
>
>
> PTLs and release liaisons for each of those deliverables can either +1 the
> release model change when we get them pushed, or propose an intermediary release
> for that deliverable. In absence of answer by the start of R-10 week we'll
> consider that the switch to cycle-with-rc is preferable.
> > Upcoming Deadlines & Dates
> > --------------------------
> > Non-client library freeze: September 05 (R-6 week)
> > Client library freeze: September 12 (R-5 week)
> > Train-3 milestone: September 12 (R-5 week)
> >
> > -Kendall (diablo_rojo) + the Release Management Team
> >

From alex.kavanagh at canonical.com  Thu Aug  1 11:49:30 2019
From: alex.kavanagh at canonical.com (Alex Kavanagh)
Date: Thu, 1 Aug 2019 12:49:30 +0100
Subject: [horizon] Stein, multi-domain, admin, can't list users, projects (maybe networks) (bug#1830782)
Message-ID: 

Hi

I'm trying to resolve the issue that is described in bug 1830782 [1], and I'm looking for help in how it might be resolved. To recap the bug quickly:

1. horizon, multi-domain enabled.
2. 'admin' user is in 'admin_domain' and 'admin' project.
3. Log in as that 'admin' user in 'admin_domain'.
4. Create a 'test' domain.
5. Set the domain context to the 'test' domain.
6. Create a user in the 'test' domain.
7. Can't see that user in the user list.
8. Do the same for a project; can't see the project.

In the bug comments at [2] (comment 38) I've recorded the results after adding some debug code to keystone and horizon, and came to the following tentative conclusions:

1. Horizon uses a domain-scoped token for listing users when the domain context is set. In this case that token is domain-scoped to 'admin_domain'.
2. Keystone at the Stein release, due to a change introduced in [3] for users (detail in [4]), filters out users that are not in the domain of the domain-scoped token.
3. Thus, the users in the 'test' domain are filtered out and are not seen in the horizon dashboard.
4. I believe this is the same for projects.

In order to solve this, I suspect one or more of the following would need to be done. However, I'm not familiar enough with the horizon codebase to know where to start.

1. In horizon, if the user is an admin user, then don't use a domain-scoped token for listing users, projects, or anything else.
2.
Alternatively, obtain a domain-scoped token for the domain context that is set. (I'm not familiar enough with keystone to know whether it's possible for the admin user to get 'any' domain-scoped token for any domain?)

Incidentally, the openstack CLI doesn't use domain-scoped tokens for listing users in a domain; I don't know whether this is an appropriate approach to take in horizon.

Thanks very much in advance. Happy to chat on IRC if that's useful (I'm UTC TZ).

Best regards
Alex.

[1] https://bugs.launchpad.net/openstack-bundles/+bug/1830782
[2] https://bugs.launchpad.net/openstack-bundles/+bug/1830782/comments/38
[3] https://review.opendev.org/#/c/647587/
[4] https://review.opendev.org/#/c/647587/3/keystone/api/users.py

--
Alex Kavanagh - Software Engineer
OpenStack Engineering - Data Centre Development - Canonical Ltd
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From berndbausch at gmail.com  Thu Aug  1 12:20:49 2019
From: berndbausch at gmail.com (Bernd Bausch)
Date: Thu, 1 Aug 2019 21:20:49 +0900
Subject: AW: [telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics
In-Reply-To: 
References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com> <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch> <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl>
Message-ID: <5af391d8-e907-0622-5275-d40297d12818@gmail.com>

I have a solution. At least it works for me. Be aware that this is Devstack, but I think nothing I did to solve my problem is Devstack-specific. Also, I don't know whether there are more efficient or canonical ways to reconfigure Ceilometer. But it's good enough for me.

These are my steps - you may not need all of them.

* in *pipeline.yaml*, set the publisher to gnocchi://

* in *the resource definition file*, define my new archive policy.
By default, this file resides in the Ceilometer source tree .../ceilometer/publisher/data/gnocchi_resources.yaml, but you can use the config parameter resources_definition_file to change the default (I didn't try). Example:

        - name: ceilometer-medium-rate
          aggregation_methods:
          - mean
          - rate:mean
          back_window: 0
          definition:
            - granularity: 1 minute
              timespan: 7 days
            - granularity: 1 hour
              timespan: 365 days

* in the same resource definition file, *adjust the archive policy* of rate metrics. Example:

        - resource_type: instance
          metrics:
          ...
            cpu:
              archive_policy_name: ceilometer-medium-rate

* *delete all existing metrics and resources* from Gnocchi.
  Probably only necessary when Ceilometer is running, and not needed if you reconfigure it before its first start. This is a drastic measure, but if you do it at the beginning of a deployment, it won't cause loss of much data. Why is this required? A metric contains an archive policy that can't be changed. Thus existing metrics need to be recreated. Why remove resources? Because they reference the metrics that I removed.

* *restart all Ceilometer services*
  This is required for re-reading the pipeline and the resource definition files. Ceilometer will create resources and metrics as needed when it sends its samples to Gnocchi.
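[Editor's note: two bits of arithmetic are implicit in this setup and can be sketched in shell. This is an illustration only, not part of the original mail; the helper function name and the single-vCPU assumption are mine.]

```shell
# An archive policy's "points" value is just timespan divided by
# granularity: 7 days at 1-minute granularity gives 7*24*60 points,
# matching the points value in the archive policy shown earlier.
echo $(( 7 * 24 * 60 ))   # prints 10080

# A rate:mean aggregate of the cpu metric is CPU nanoseconds consumed
# per granularity interval. The old cpu_util was a percentage, so Aodh
# alarms and Heat scaling policies need the equivalent conversion:
#   percent = delta_ns / (interval_s * 1e9 * vcpus) * 100
cpu_util_percent() {
    local delta_ns=$1 interval_s=$2 vcpus=$3
    awk -v d="$delta_ns" -v i="$interval_s" -v v="$vcpus" \
        'BEGIN { printf "%.1f\n", d / (i * 1e9 * v) * 100 }'
}

# 39940000000 ns of CPU time in a 60-second interval on 1 vCPU:
cpu_util_percent 39940000000 60 1   # prints 66.6
```

For a multi-vCPU instance the vcpus metric of the same Gnocchi resource would supply the divisor; treating it as 1 here is purely illustrative.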
I tested this by running a CPU hogging instance and listing its measures after a few minutes:

    gnocchi measures show --resource f28f6b78-9dd5-49cc-a6ac-28cb14477bf0 \
                          --aggregation rate:mean cpu
    +---------------------------+-------------+---------------+
    | timestamp                 | granularity |         value |
    +---------------------------+-------------+---------------+
    | 2019-08-01T20:23:00+09:00 |        60.0 |  1810000000.0 |
    | 2019-08-01T20:24:00+09:00 |        60.0 | 39940000000.0 |
    | 2019-08-01T20:25:00+09:00 |        60.0 | 40110000000.0 |

This means that the instance accumulated 39940000000 nanoseconds of CPU time in the 60 seconds at 20:24:00.

Note that the old /cpu_util/ was expressed in percent, so that Aodh alarms and Heat autoscaling definitions must be adapted.

Good luck. Hire me as a Ceilometer consultant if you get stuck :)

Bernd

On 8/1/2019 6:11 PM, Teckelmann, Ralf, NMU-OIP wrote:
>
> Hello Bernd, Hello Lingxian,
>
> +1
>
> You are not alone in your fruitless endeavor. Sadly, I can not come up
> with a solution.
>
> We are stuck at the same point.
>
> Maybe some day a dedicated member of the OpenStack community give the
> ceilometer guys a push to explain their service.
> For us, also using Stein, it is in the state of "not production ready".
>
> Cheers,
>
> Ralf T.
> ------------------------------------------------------------------------
> *From:* Bernd Bausch
> *Sent:* Thursday, August 1, 2019 03:16:25
> *To:* Lingxian Kong
> *Cc:* openstack-discuss
> *Subject:* Re: [telemetry][ceilometer][gnocchi] How to configure
> aggregate for cpu_util or calculate from metrics
>
> Lingxian,
>
> Thanks for "bumping" my request and keeping it alive. The reason I
> need an answer: I am updating courseware to Stein that includes
> autoscaling based on CPU and disk I/O rates. Looks like I am "cutting
> edge" :)
>
> I don't think the problem is in the Gnocchi camp, but rather
> Ceilometer.
To store rates of measures in z, the following is needed: > > * A /metric/. Raw measures are sent to the metric. > * An /archive policy/. The metric has an archive policy. > * The archive policy includes one or more /rate aggregates/ > > My cloud has archive policies with rate aggregates, but the question > is about the first bullet: *How can I configure Ceilometer so that it > creates the corresponding metrics and sends measures to them. *In > other words, how is Ceilometer's output connected to my archive > policy. From my experience, just adding the archive policy to > Ceilometer's publishers is not sufficient. > > Ceilometer's source code includes > /.../publisher/data/gnocchi_resources.yaml/, which might well be the > place where this can be configured. I am not sure how to do it though, > and this file is not documented. I can read the source, but my > developer skills are insufficient for understanding how everything > fits together. > > Bernd > > On 8/1/2019 9:01 AM, Lingxian Kong wrote: >> Hi Bernd, >> >> There were a lot of people asked the same question before, >> unfortunately, I don't know the answer either(we are still using an >> old version of Ceilometer). The original cpu_util support has been >> removed from Ceilometer in favor of Gnocchi, but AFAIK, there is no >> doc in Gnocchi mentioned how to achieve the same thing and no clear >> answer from the Gnocchi maintainers. >> >> It'd be much appreciated if you could find the answer in the end, or >> there will be someone who has the already solved the issue. >> >> Best regards, >> Lingxian Kong >> Catalyst Cloud >> >> >> On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch > > wrote: >> >> The message at the end of this email is some three months old. I >> have the same problem. The question is: *How to use the new rate >> metrics in Gnocchi. *I am using a Stein Devstack for my tests.* >> * >> >> For example, I need the CPU rate, formerly named /cpu_util/. 
I >> created a new archive policy that uses /rate:mean/ aggregation >> and has a 1 minute granularity: >> >> $ gnocchi archive-policy show ceilometer-medium-rate >> +---------------------+------------------------------------------------------------------+ >> | Field               | Value | >> +---------------------+------------------------------------------------------------------+ >> | aggregation_methods | rate:mean, mean | >> | back_window         | 0 | >> | definition          | - points: 10080, granularity: 0:01:00, >> timespan: 7 days, 0:00:00 | >> | name                | ceilometer-medium-rate | >> +---------------------+------------------------------------------------------------------+ >> >> I added the new policy to the publishers in /pipeline.yaml/: >> >> $ tail -n5 /etc/ceilometer/pipeline.yaml >> sinks: >>     - name: meter_sink >>       publishers: >>           - >> gnocchi://?archive_policy=medium&filter_project=gnocchi_swift >> *- >> gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift* >> >> After restarting all of Ceilometer, my hope was that the CPU rate >> would magically appear in the metric list. But no: All metrics >> are linked to archive policy /medium/, and looking at the details >> of an instance, I don't detect anything rate-related: >> >> $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 >> +-----------------------+---------------------------------------------------------------------+ >> | Field                 | Value | >> +-----------------------+---------------------------------------------------------------------+ >> ... 
>> | metrics               | compute.instance.booting.time: >> 76fac1f5-962e-4ff2-8790-1f497c99c17d | >> |                       | cpu: af930d9a-a218-4230-b729-fee7e3796944 | >> |                       | disk.ephemeral.size: >> 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           | >> |                       | disk.root.size: >> 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e | >> |                       | memory.resident: >> 09efd98d-c848-4379-ad89-f46ec526c183               | >> |                       | memory.swap.in >> : >> 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb | >> |                       | memory.swap.out: >> 4d012697-1d89-4794-af29-61c01c925bb4               | >> |                       | memory.usage: >> 93eab625-0def-4780-9310-eceff46aab7b | >> |                       | memory: >> ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1 | >> |                       | vcpus: >> e1c5acaf-1b10-4d34-98b5-3ad16de57a98 | >> | original_resource_id  | ae3659d6-8998-44ae-a494-5248adbebe11 | >> ... >> >> | type                  | instance | >> | user_id               | a9c935f52e5540fc9befae7f91b4b3ae | >> +-----------------------+---------------------------------------------------------------------+ >> >> Obviously, I am missing something. Where is the missing link? >> What do I have to do to get CPU usage rates? Do I have to create >> metrics? Do//I have to ask Ceilometer to create metrics? How? >> >> Right now, no instructions seem to exist at all. If that is >> correct, I would be happy to write documentation once I >> understand how it works. >> >> Thanks a lot. >> >> Bernd >> >> On 5/10/2019 3:49 PM, info at dantalion.nl >> wrote: >>> Hello, >>> >>> I am working on Watcher and we are currently changing how metrics are >>> retrieved from different datasources such as Monasca or Gnocchi. Because >>> of this major overhaul I would like to validate that everything is >>> working correctly. 
>>> >>> Almost all of the optimization strategies in Watcher require the cpu >>> utilization of an instance as metric but with newer versions of >>> Ceilometer this has become unavailable. >>> >>> On IRC I received the information that Gnocchi could be used to >>> configure an aggregate and this aggregate would then report cpu >>> utilization, however, I have been unable to find documentation on how to >>> achieve this. >>> >>> I was also notified that cpu_util is something that could be computed >>> from other metrics. When reading >>> https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute >>> the documentation seems to agree on this as it states that cpu_util is >>> measured by using a 'rate of change' transformer. But I have not been >>> able to find how this can be computed. >>> >>> I was hoping someone could spare the time to provide documentation or >>> information on how this currently is best achieved. >>> >>> Kind Regards, >>> Corne Lukken (Dantali0n) >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Thu Aug 1 14:33:10 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 1 Aug 2019 10:33:10 -0400 Subject: =?UTF-8?Q?Re=3A_=E2=80=8B=5Brelease=5D_=28a_bit_belated=29_Release_countdown_f?= =?UTF-8?Q?or_week_R=2D11=2C_July_29_=2D_August_2?= In-Reply-To: References: Message-ID: On Thu, Aug 1, 2019 at 7:28 AM Dmitry Tantsur wrote: > On 7/31/19 8:21 PM, Kendall Nelson wrote: > > Hello Everyone! > > > > Development Focus > > ----------------- > > We are now past the Train-2 milestone, and entering the last development > phase > > of the cycle. Teams should be focused on implementing planned work for > the > > cycle.Now is a good time to review those plans and reprioritize > anything if > > needed based on the what progress has been made and what looks realistic > to > > complete in the next few weeks. 
> > > > General Information > > ------------------- > > The following cycle-with-intermediary deliverables have not done any > > intermediary release yet during this cycle. The cycle-with-rc release > model is > > more suited for deliverables that plan to be released only once per > cycle. > > I respectfully disagree. I will reserve my opinion on whether > cycle-with-rc > suits *anyone*, but in our case I'd prefer to have an option of releasing > something in the middle of a cycle even if we don't exercise this option > way too > often. > +1. Kendall's key phrase is "plan to be released once per cycle". I agree these teams should ask themselves if they plan to only release at the end of each cycle. However, it isn't unreasonable to not plan cycle-based releases, but release once per cycle by chance. For example, I think we would just release ironic-ui whenever there was something substantial. There just (sadly) hasn't been anything substantial yet this cycle. Let's not force anyone to change the model or release a project, please. // jim > I'm not an ironic PTL, bit anyway please note that I'm -1 on the change > for any > of our projects. 
> > Dmitry > > > As a > > result, we will be proposing to change the release model for the > following > > deliverables: > > > > blazar-dashboard > > > > cloudkitty-dashboard > > ec2-api > > freezer-web-ui > > freezer > > heat-agents > > heat-dashboard > > ironic-ui > > karbor-dashboard > > karbor > > kuryr-kubernetes > > magnum-ui > > manila-ui > > masakari-dashboard > > monasca-agent > > monasca-api > > monasca-ceilometer > > monasca-events-api > > monasca-kibana-plugin > > monasca-log-api > > monasca-notification > > monasca-persister > > monasca-thresh > > monasca-transform > > monasca-ui > > murano-agent > > networking-baremetal > > networking-generic-switch > > networking-hyperv > > neutron-fwaas-dashboard > > neutron-vpnaas-dashboard > > requirements > > sahara-extra > > senlin-dashboard > > solum-dashboard > > tacker-horizon > > tricircle > > vitrage-dashboard > > vitrage > > watcher-dashboard > > > > > > PTLs and release liaisons for each of those deliverables can either +1 > the > > release model changewhen we get them pushed, or propose an intermediary > release > > for that deliverable. In absence of answer by the start of R-10 week > we'll > > consider that the switch to cycle-with-rc is preferable. > > > > Upcoming Deadlines & Dates > > -------------------------- > > Non-client library freeze: September 05 (R-6 week) > > Client library freeze: September 12 (R-5 week) > > Train-3 milestone: September 12 (R-5 week) > > > > -Kendall (diablo_rojo) + the Release Management Team > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donny at fortnebula.com Thu Aug 1 15:00:10 2019 From: donny at fortnebula.com (Donny Davis) Date: Thu, 1 Aug 2019 11:00:10 -0400 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190801085818.GD2077@fedora19.localdomain> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <20190801085818.GD2077@fedora19.localdomain> Message-ID: These jobs seem to timeout from every provider on the regular[1], but the issue is surely more apparent with tempest on FN. The result is quite a bit of lost time. 361 jobs that run for several hours results in a little over a 1000 hours of lost cycles. [1] http://logstash.openstack.org/#/dashboard/file/logstash.json?query=filename:%5C%22job-output.txt%5C%22%20AND%20message:%5C%22RUN%20END%20RESULT_TIMED_OUT%5C%22&from=7d On Thu, Aug 1, 2019 at 5:01 AM Ian Wienand wrote: > On Fri, Jul 26, 2019 at 04:53:28PM -0700, Clark Boylan wrote: > > Given my change shows this can be so much quicker is there any > > interest in modifying devstack to be faster here? And if so what do > > we think an appropriate approach would be? > > My first concern was if anyone considered openstack-client setting > these things up as actually part of the testing. I'd say not, > comments in [1] suggest similar views. > > My second concern is that we do keep sufficient track of complexity v > speed; obviously doing things in a sequential manner via a script is > pretty simple to follow and as we start putting things into scripts we > make it harder to debug when a monoscript dies and you have to start > pulling apart where it was. With just a little json fiddling we can > currently pull good stats from logstash ([2]) so I think as we go it > would be good to make sure we account for the time using appropriate > wrappers, etc. 
>
> Then the third concern is not to break anything for plugins --
> devstack has a very very loose API which basically relies on plugin
> authors using a combination of good taste and copying other code to
> decide what's internal or not.
>
> Which made me start thinking I wonder if we look at this closely, even
> without replacing things we might make inroads?
>
> For example [3]; it seems like SERVICE_DOMAIN_NAME is never not
> default, so the get_or_create_domain call is always just overhead (the
> result is never used).
>
> Then it seems that in the gate, basically all of the "get_or_create"
> calls will really just be "create" calls? Because we're always
> starting fresh. So we could cut out about half of the calls there
> pre-checking if we know we're under zuul (proof-of-concept [4]).
>
> Then we have blocks like:
>
> get_or_add_user_project_role $member_role $demo_user $demo_project
> get_or_add_user_project_role $admin_role $admin_user $demo_project
> get_or_add_user_project_role $another_role $demo_user $demo_project
> get_or_add_user_project_role $member_role $demo_user $invis_project
>
> If we wrapped that in something like
>
> start_osc_session
> ...
> end_osc_session
>
> which sets a variable that means instead of calling directly, those
> functions write their arguments to a tmp file. Then at the end call,
> end_osc_session does
>
> $ osc "$(< tmpfile)"
>
> and uses the inbuilt batching? If that had half the calls by skipping
> the "get_or" bit, and used common authentication from batching, would
> that help?
>
> And then I don't know if all the projects and groups are required for
> every devstack run? Maybe someone skilled in the art could do a bit
> of an audit and we could cut more of that out too?
>
> So I guess my point is that maybe we could tweak what we have a bit to
> make some immediate wins, before anyone has to rewrite too much?
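[Editor's sketch of the start_osc_session / end_osc_session idea quoted above. These helper names do not exist in devstack, and whether the client's stdin batching handles errors the same way as separate invocations would need verification.]

```shell
# Proposed batching wrappers, sketched. Subcommands are queued to a file
# during the session; one "openstack" process then executes them all,
# paying interpreter startup and authentication only once.
OSC_BATCH_FILE=""

start_osc_session() {
    OSC_BATCH_FILE=$(mktemp)
}

# Queue one subcommand line instead of invoking the client directly.
osc_queue() {
    echo "$*" >> "$OSC_BATCH_FILE"
}

end_osc_session() {
    # The client reads one subcommand per line from stdin and reuses a
    # single authenticated session for the whole batch.
    openstack < "$OSC_BATCH_FILE"
    rm -f "$OSC_BATCH_FILE"
}
```

The get_or_add_user_project_role calls above would then append lines like "role add --user $demo_user --project $demo_project $member_role" instead of spawning a client process each time.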
> > -i > > [1] https://review.opendev.org/673018 > [2] https://ethercalc.openstack.org/rzuhevxz7793 > [3] https://review.opendev.org/673941 > [4] https://review.opendev.org/673936 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Thu Aug 1 16:08:10 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 1 Aug 2019 12:08:10 -0400 Subject: [tc][forum] documenting "forum 101" Message-ID: Hi everyone, We've discussed the idea of building a document which provides information on how to host a good forum session, tips and tricks, how to make the best out of them. It seems there's content all over the place but not aggregated in a single good spot that we can point attendees/hosts to. context: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-01.log.html#t2019-08-01T15:40:09 Do we have any volunteers from the TC (or even community members) that are interested in this effort? Thanks, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From amotoki at gmail.com Thu Aug 1 16:52:13 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 2 Aug 2019 01:52:13 +0900 Subject: =?UTF-8?Q?Re=3A_=E2=80=8B=5Brelease=5D_=28a_bit_belated=29_Release_countdown_f?= =?UTF-8?Q?or_week_R=2D11=2C_July_29_=2D_August_2?= In-Reply-To: References: Message-ID: On Thu, Aug 1, 2019 at 8:29 PM Dmitry Tantsur wrote: > > On 7/31/19 8:21 PM, Kendall Nelson wrote: > > Hello Everyone! > > > > Development Focus > > ----------------- > > We are now past the Train-2 milestone, and entering the last development phase > > of the cycle. 
> > Teams should be focused on implementing planned work for the
> > cycle. Now is a good time to review those plans and reprioritize anything if
> > needed based on what progress has been made and what looks realistic to
> > complete in the next few weeks.
> >
> > General Information
> > -------------------
> > The following cycle-with-intermediary deliverables have not done any
> > intermediary release yet during this cycle. The cycle-with-rc release model is
> > more suited for deliverables that plan to be released only once per cycle.
>
> I respectfully disagree. I will reserve my opinion on whether cycle-with-rc
> suits *anyone*, but in our case I'd prefer to have an option of releasing
> something in the middle of a cycle even if we don't exercise this option way too
> often.
>
> I'm not an ironic PTL, but anyway please note that I'm -1 on the change for any
> of our projects.

I agree with Dmitry. The cycle-with-intermediary model allows project teams to release something at any time during a cycle when they want. On the other hand, cycle-with-intermediary requires at least one release per release cycle, while "cycle-with-rc" means such a deliverable can have only *one* release per cycle. "cycle-with-rc" might be a good option for some projects, but I think it should not be forced.

If some deliverable tends to have few changes and it is not worth cutting a release, another option might be "independent". My understanding is that the "independent" release model does not allow us to have stable branches, so that should be considered carefully before we switch a deliverable to "independent".

Talking about horizon plugins, as a neutron release liaison, neutron-fwaas/vpnaas-dashboard hit a similar situation to ironic-ui: we don't have any substantial changes so far in this cycle. I guess this situation may continue in future releases for most horizon plugins. I am not sure which release model is appropriate.
horizon adopts the cycle-with-rc model now, and horizon plugins are usually assumed to work with a specific release of horizon, so "independent" might not fit. cycle-with-intermediary or cycle-with-rc may fit, but there are cases where they have only infra-related changes in a cycle.

Thanks,
Akihiro Motoki

> > Dmitry
> >
> > > As a
> > result, we will be proposing to change the release model for the following
> > deliverables:
> >
> > blazar-dashboard
> >
> > cloudkitty-dashboard
> > ec2-api
> > freezer-web-ui
> > freezer
> > heat-agents
> > heat-dashboard
> > ironic-ui
> > karbor-dashboard
> > karbor
> > kuryr-kubernetes
> > magnum-ui
> > manila-ui
> > masakari-dashboard
> > monasca-agent
> > monasca-api
> > monasca-ceilometer
> > monasca-events-api
> > monasca-kibana-plugin
> > monasca-log-api
> > monasca-notification
> > monasca-persister
> > monasca-thresh
> > monasca-transform
> > monasca-ui
> > murano-agent
> > networking-baremetal
> > networking-generic-switch
> > networking-hyperv
> > neutron-fwaas-dashboard
> > neutron-vpnaas-dashboard
> > requirements
> > sahara-extra
> > senlin-dashboard
> > solum-dashboard
> > tacker-horizon
> > tricircle
> > vitrage-dashboard
> > vitrage
> > watcher-dashboard
> >
> >
> > PTLs and release liaisons for each of those deliverables can either +1 the
> > release model change when we get them pushed, or propose an intermediary release
> > for that deliverable. In absence of answer by the start of R-10 week we'll
> > consider that the switch to cycle-with-rc is preferable.
> > > > Upcoming Deadlines & Dates > > -------------------------- > > Non-client library freeze: September 05 (R-6 week) > > Client library freeze: September 12 (R-5 week) > > Train-3 milestone: September 12 (R-5 week) > > > > -Kendall (diablo_rojo) + the Release Management Team > > > > From mnaser at vexxhost.com Thu Aug 1 17:28:15 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 1 Aug 2019 13:28:15 -0400 Subject: [tc] monthly meeting Message-ID: Hi everyone, I’ve prepared the planning for the next TC meeting which will happen on Thursday, August 8th at 1400 UTC. If you’d like to add a topic of discussion, please feel free to do so. We will be cutting off the agenda on Wednesday the 7th so any subject added afterwards won’t appear in the plan. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From doug at doughellmann.com Thu Aug 1 19:02:32 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 1 Aug 2019 15:02:32 -0400 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: References: Message-ID: > On Aug 1, 2019, at 12:52 PM, Akihiro Motoki wrote: > > On Thu, Aug 1, 2019 at 8:29 PM Dmitry Tantsur > wrote: >> >> On 7/31/19 8:21 PM, Kendall Nelson wrote: >>> Hello Everyone! >>> >>> Development Focus >>> ----------------- >>> We are now past the Train-2 milestone, and entering the last development phase >>> of the cycle. Teams should be focused on implementing planned work for the >>> cycle.Now is a good time to review those plans and reprioritize anything if >>> needed based on the what progress has been made and what looks realistic to >>> complete in the next few weeks.
>>> >>> General Information >>> ------------------- >>> The following cycle-with-intermediary deliverables have not done any >>> intermediary release yet during this cycle. The cycle-with-rc release model is >>> more suited for deliverables that plan to be released only once per cycle. >> >> I respectfully disagree. I will reserve my opinion on whether cycle-with-rc >> suits *anyone*, but in our case I'd prefer to have an option of releasing >> something in the middle of a cycle even if we don't exercise this option way too >> often. >> >> I'm not an ironic PTL, bit anyway please note that I'm -1 on the change for any >> of our projects. > > I agree with Dmitry. cycle-with-intermediary model allows project > teams to release > somethings at any time during a release when they want. On the other hand, > cycle-with-intermediary means at least one release along with a release cycle. > "cycle-with-rc" means such deliverable can only *one* release per cycle. > "cycle-with-rc" might be a good option for some projects but I think > it is not forced. > > If some deliverable tends to have less changes and it is not worth > cutting a release, > another option might be "independent". My understanding is that > "independent" release > model does not allow us to have stable branches, so it might be a > thing considered carefully > when we switch some deliverable to "independent”. That’s not quite right. Independent deliverables can have stable branches, but they are not considered part of the OpenStack release because they are not managed by the release team. > > Talking about horizon plugins, as a neutron release liaison, > neutron-fwaas/vpnaas-dashboard > hit similar situation to ironic-ui. we don't have any substantial > changes till now in this cycle. > I guess this situation may continues in further releases in most > horizon plugins. > I am not sure which release model is appropriate. 
> horizon adopts release-with-rc model now and horizon plugins > are usually assumed to work with a specific release of horizon, so > "independent" might not fit. > release-with-intermediary or release-with-rc may fit, but there are > cases where they have > only infra related changes in a cycle. There are far far too many deliverables for our small release team to keep up with everyone following different procedures for branching, and branching incorrectly has too many bad ramifications to leave it to chance. We have therefore tried to describe several release models to meet teams’ needs, and to allow the release team to automate managing the deliverables in groups that all follow the same procedures so we end up with consistent results. The fact that most of the rest of the community have not needed to pay much attention to issues around branch management says to me that this approach has been working. As Thierry pointed out on IRC, there are reasons to require a release beyond the software having significant features or bug fixes. The reason we need a release for cycle-with-intermediary projects before the end of the cycle is that when we reach the final release deadline we need something to use as a place to create the stable branch (we only branch from tagged releases). In the past, we used the last release from the previous cycle as a fallback when teams missed other cycle deadlines. That resulted in creating a new stable branch that had none of the bug fixes or CI changes that had been on master, and which was therefore broken and required extra effort to fix. So, we now ask for an early release to give us a relatively recent one from the current cycle, rather than using the final release from the previous cycle. The alternative, using the cycle-with-rc release model, means that the release team will automatically generate release candidates and a final release for the team. 
In cases where the team does not intend to release more than one version in a cycle, this is easier for the project team and not much more work for the release team since the deliverable is handled as part of the batch of all similar deliverables. Updating the release model is the default when there are no releases because it reflects what is actually happening with the deliverable and the release team can manage the change on its own, and Kendall’s email is the notification which is supposed to trigger the conversation for each deliverable so that project teams can decide how to proceed down one of the two paths proposed. Doing nothing isn’t really an option, though. So, if you have a cycle-with-intermediary deliverable with changes that you haven’t considered “substantial” enough to trigger a release previously, and you do not want to change the release model, this is the point at which you should do a release anyway to avoid issues at the end of the cycle. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Thu Aug 1 20:10:16 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 1 Aug 2019 16:10:16 -0400 Subject: [openstack-ansible] Shanghai Summit Planning Message-ID: Hey everyone! Here's the link to the Etherpad for this year's Shanghai summit initial planning. You can put your name if you're attending and also write down your topic of discussion ideas. Looking forward to seeing you there! https://etherpad.openstack.org/p/PVG-OSA-PTG Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Thu Aug 1 20:11:11 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 1 Aug 2019 16:11:11 -0400 Subject: [tc] Shanghai Summit Planning Message-ID: Hey everyone! 
Here's the link to the Etherpad for this year's Shanghai summit initial planning. You can put your name if you're attending and also write down your topic of discussion ideas. Looking forward to seeing you there! https://etherpad.openstack.org/p/PVG-TC-PTG Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From kennelson11 at gmail.com Fri Aug 2 00:49:49 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 1 Aug 2019 17:49:49 -0700 Subject: [tc][forum] documenting "forum 101" In-Reply-To: References: Message-ID: I am happy to help review/comment on it (assuming it's going to live in a repo somewhere) and could probably help brainstorm/draft something, but I'd rather not be the first one to raise a hand... Also, once it's done, we should probably point to it from here: https://wiki.openstack.org/wiki/Forum#Forum_tips -Kendall (diablo_rojo) On Thu, Aug 1, 2019 at 9:12 AM Mohammed Naser wrote: > Hi everyone, > > We've discussed the idea of building a document which provides > information on how to host a good forum session, tips and tricks, how > to make the best out of them. It seems there's content all over the > place but not aggregated in a single good spot that we can point > attendees/hosts to. > > context: > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-01.log.html#t2019-08-01T15:40:09 > > Do we have any volunteers from the TC (or even community members) that > are interested in this effort? > > Thanks, > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gagehugo at gmail.com Fri Aug 2 01:36:48 2019 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 1 Aug 2019 20:36:48 -0500 Subject: [Security SIG] Weekly Newsletter - Aug 01st 2019 Message-ID: Apologies for the lack of a newsletter last week, back to regularly scheduled programming #Week of: 01 Aug 2019 - Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting - Weekly on Thursday at 1500 UTC in #openstack-meeting - Agenda: https://etherpad.openstack.org/p/security-agenda - https://security.openstack.org/ - https://wiki.openstack.org/wiki/Security-SIG #Meeting Notes - Summary: http://eavesdrop.openstack.org/meetings/security/2019/security.2019-08-01-15.00.html - Security Guide Docs Update - The first set of changes has been made - https://review.opendev.org/#/q/project:openstack/security-doc - Decided to link directly to the keystone federation setup page instead of maintaining an out-of-date copy - Currently looking to update the various checklists for each service - Nova/Cinder Policy - Currently checking out inconsistencies between the documentation of generating policy files for both nova and cinder - https://review.opendev.org/#/c/673349/ # VMT Reports - A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/ - IFLA_BR_AGEING_TIME of 0 causes flooding across bridges: https://bugs.launchpad.net/os-vif/+bug/1837252 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rony.khan at brilliant.com.bd Fri Aug 2 05:01:19 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Fri, 2 Aug 2019 11:01:19 +0600 Subject: Openstack IPv6 neutron configuration Message-ID: <027e01d548ef$543f5d00$fcbe1700$@brilliant.com.bd> Hi, We already have IPv4 VXLAN in our OpenStack. Now we want to add IPv6 for tenant networks. We are looking for help on how to configure an IPv6 tenant network in OpenStack neutron, and would like to understand how IPv6 packets flow.
Please give me some documents with a network diagram. Thanks & B'Rgds, Rony -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Aug 2 07:16:20 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 2 Aug 2019 09:16:20 +0200 Subject: Openstack IPv6 neutron configuration In-Reply-To: <027e01d548ef$543f5d00$fcbe1700$@brilliant.com.bd> References: <027e01d548ef$543f5d00$fcbe1700$@brilliant.com.bd> Message-ID: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> Hi, In tenant networks, IPv6 packets go the same way as IPv4 packets. There are no differences between IPv4 and IPv6, AFAIK.
> In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You can find some deployment examples and explanation when ovs mechanism driver is used and in https://docs.openstack.org/neutron/latest/admin/deploy-lb.html there is similar doc for linuxbridge driver. For private networking this is true; if you want public connectivity with IPv6, you need to be aware that there is no SNAT and there are no floating IPs with IPv6. Instead, you need to assign globally routable IPv6 addresses directly to your tenant subnets and use address scopes plus neutron-dynamic-routing in order to make sure that these addresses do indeed get routed to the internet. I have written a small guide on how to do this [1]; feedback is welcome. [1] https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/11/neutron-pike-ipv6.html > There are differences with e.g. how DHCP is handled for IPv6. Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. Note also that the good reference article at the end of this doc has sadly disappeared, though you can still find it via the web archives. See also https://review.opendev.org/674018 From dtantsur at redhat.com Fri Aug 2 08:10:38 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 2 Aug 2019 10:10:38 +0200 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: References: Message-ID: Top-posting because I'm not replying to anything specific. Have you considered allowing intermediary releases with cycle-with-rc? Essentially combining the two models into one? On 8/1/19 9:02 PM, Doug Hellmann wrote: > > >> On Aug 1, 2019, at 12:52 PM, Akihiro Motoki > > wrote: >> >> On Thu, Aug 1, 2019 at 8:29 PM Dmitry Tantsur > > wrote: >>> >>> On 7/31/19 8:21 PM, Kendall Nelson wrote: >>>> Hello Everyone!
>>>> >>>> Development Focus >>>> ----------------- >>>> We are now past the Train-2 milestone, and entering the last development phase >>>> of the cycle. Teams should be focused on implementing planned work for the >>>> cycle.Now is  a good time to review those plans and reprioritize anything if >>>> needed based on the what progress has been made and what looks realistic to >>>> complete in the next few weeks. >>>> >>>> General Information >>>> ------------------- >>>> The following cycle-with-intermediary deliverables have not done any >>>> intermediary release yet during this cycle. The cycle-with-rc release model is >>>> more suited for deliverables that plan to be released only once per cycle. >>> >>> I respectfully disagree. I will reserve my opinion on whether cycle-with-rc >>> suits *anyone*, but in our case I'd prefer to have an option of releasing >>> something in the middle of a cycle even if we don't exercise this option way too >>> often. >>> >>> I'm not an ironic PTL, bit anyway please note that I'm -1 on the change for any >>> of our projects. >> >> I agree with Dmitry. cycle-with-intermediary model allows project >> teams to release >> somethings at any time during a release when they want. On the other hand, >> cycle-with-intermediary means at least one release along with a release cycle. >> "cycle-with-rc" means such deliverable can only *one* release per cycle. >> "cycle-with-rc" might be a good option for some projects but I think >> it is not forced. >> >> If some deliverable tends to have less changes and it is not worth >> cutting a release, >> another option might be "independent". My understanding is that >> "independent" release >> model does not allow us to have stable branches, so it might be a >> thing considered carefully >> when we switch some deliverable to "independent”. > > That’s not quite right. 
Independent deliverables can have stable branches, but > they are not considered part of the OpenStack release because they are not > managed by the release team. > >> >> Talking about horizon plugins, as a neutron release liaison, >> neutron-fwaas/vpnaas-dashboard >> hit similar situation  to ironic-ui. we don't have any substantial >> changes till now in this cycle. >> I guess this situation may continues in further releases in most >> horizon plugins. >> I am not sure which release model is appropriate. >> horizon adopts release-with-rc model now and horizon plugins >> are usually assumed to work with a specific release of horizon, so >> "independent" might not fit. >> release-with-intermediary or release-with-rc may fit, but there are >> cases where they have >> only infra related changes in a cycle. > > There are far far too many deliverables for our small release team to keep up > with everyone following different procedures for branching, and branching > incorrectly has too many bad ramifications to leave it to chance. We have > therefore tried to describe several release models to meet teams’ needs, and to > allow the release team to automate managing the deliverables in groups that all > follow the same procedures so we end up with consistent results. The fact that > most of the rest of the community have not needed to pay much attention to > issues around branch management says to me that this approach has been working. > > As Thierry pointed out on IRC, there are reasons to require a release beyond the > software having significant features or bug fixes. The reason we need a release > for cycle-with-intermediary projects before the end of the cycle is that when we > reach the final release deadline we need something to use as a place to create > the stable branch (we only branch from tagged releases). In the past, we used > the last release from the previous cycle as a fallback when teams missed other > cycle deadlines. 
That resulted in creating a new stable branch that had none of > the bug fixes or CI changes that had been on master, and which was therefore > broken and required extra effort to fix. So, we now ask for an early release to > give us a relatively recent one from the current cycle, rather than using the > final release from the previous cycle. > > The alternative, using the cycle-with-rc release model, means that the release > team will automatically generate release candidates and a final release for the > team. In cases where the team does not intend to release more than one version > in a cycle, this is easier for the project team and not much more work for the > release team since the deliverable is handled as part of the batch of all > similar deliverables. Updating the release model is the default when there are > no releases because it reflects what is actually happening with the deliverable > and the release team can manage the change on its own, and Kendall’s email is > the notification which is supposed to trigger the conversation for each > deliverable so that project teams can decide how to proceed down one of the two > paths proposed. Doing nothing isn’t really an option, though. > > So, if you have a cycle-with-intermediary deliverable with changes that you > haven’t considered “substantial” enough to trigger a release previously, and you > do not want to change the release model, this is the point at which you should > do a release anyway to avoid issues at the end of the cycle. > > Doug > From laszlo.budai at gmail.com Fri Aug 2 08:53:14 2019 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Fri, 2 Aug 2019 11:53:14 +0300 Subject: [nova] local ssd disk performance Message-ID: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Hello all, we have a problem with the performance of the disk IO in a KVM instance. We are trying to provision VMs with high performance SSDs. we have investigated different possibilities with different results ... 1. 
configure Nova to use local LVM storage (images_type = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). 2. use cinder with the LVM backend and instance locality - we could migrate the instances, but the performance is less than half of the previous case. 3. mount the SSD on /var/lib/nova/instances and use images_type = raw in nova - we could migrate, but the write performance dropped to ~20% of the images_type = lvm performance and read performance is ~65% of the LVM case. Do you have any ideas for improving the performance of case 2 or 3, which allow migration? Kind regards, Laszlo From thierry at openstack.org Fri Aug 2 09:33:30 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 11:33:30 +0200 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: References: Message-ID: <9397f0b9-df46-b2ef-906b-7eba7660784d@openstack.org> The release management team discussed this topic at the meeting yesterday. The current process works well in the case where you *know* you will only do one release (release-once), or you *know* you will do more than one release (release-often). We agree that it does not handle well the case where you actually have no idea how many releases you will do (release-if-needed). We need to add a bit of flexibility there, but: - the release team still needs to use a very limited number of standard release models, and know as much as possible in advance. We handle hundreds of OpenStack deliverables, we can't have everyone use their own release variant. - we don't want to disconnect project teams from their releases (we still want teams to trigger release points and feel responsible for the resulting artifact). Here is the proposal we came up with: - The general idea is, by milestone-2, you should have picked your release model.
If you plan to release-once, you should use the cycle-with-rcs model. If you plan to release-often, you should be cycle-with-intermediary. In the hopefully rare case where you have no idea and would like to release-if-needed, continue to read. - Between milestone-2 and milestone-3, we look up cycle-with-intermediary things that have not done a release yet. For those, we propose a switch to cycle-with-rcs, and use that to start a discussion. At that point four things can happen: (1) you realize you could do an intermediary release, and do one now. Patch to change release model is abandoned. (2) you realize you only want to do one release this cycle, and +1 the patch. (3) you still have no idea where you're going for this deliverable this cycle and would like to release as-needed: you -1 the patch. You obviously commit to producing a release before RC1 freeze. If by RC1 freeze we still have no release, we'll force one. (4) you realize that the deliverable should be abandoned, or should be disconnected from the "OpenStack" release and be made independent, or some other solution. You -1 the patch and propose an alternative. In all cases that initial patch is the occasion to raise the discussion and cover that blind spot, well in advance of the final weeks of the release where we don't have time to handle differently each of our hundreds of deliverables. Dmitry Tantsur wrote: > Have you considered allowing intermediary releases with cycle-with-rc? > Essentially combining the two models into one? You really only have two different scenarios. A- the final release is more usable and important than the others B- the final release is just another release, just happens to have a stable branch cut from it In scenario (A), you use RCs to apply more care and make sure the one and only release works well. You can totally do other "releases" during the cycle, but since those are not using RCs and are not as carefully vetted, they use "beta" numbering. 
In scenario (B), all releases are equally important and usable. There is no reason to use RCs for one and not the others. -- Thierry Carrez (ttx) From thierry at openstack.org Fri Aug 2 09:59:34 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 11:59:34 +0200 Subject: [tc][forum] Shanghai Forum selection committee Message-ID: <5c6896d1-6e86-ad66-5351-35fefdac08f5@openstack.org> Hi, TC members! We need two TC members to serve on the Shanghai forum selection committee, and help select, refine and potentially merge the forum session proposals from the wider community. Beyond encouraging people to submit proposals, the bulk of the selection committee work happens between the submission deadline (planned for Sept 16th) and the final selection of the Forum program (planned for Oct 7th). Since we'll have TC renewal elections in progress, it's simpler to pick from among the members who are not standing for reelection in September. That would be: asettle, mugsie, jroll, mnaser, ricolin, ttx and zaneb. Anyone interested? -- Thierry Carrez (ttx) From thierry at openstack.org Fri Aug 2 12:17:42 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 14:17:42 +0200 Subject: [neutron][release] neutron-interconnection release in Train Message-ID: <2acc2939-77f4-b5cb-bbb8-b87b991ad38d@openstack.org> Hi Neutron folks, We are now past the Train membership freeze[1] and neutron-interconnection is not listed as a train deliverable yet. Unless you act very quickly (and add a deliverables/train/neutron-interconnection.yaml file to the openstack/releases repository), we will not include a release of neutron-interconnection in OpenStack Train. This may just be fine, for example if neutron-interconnection needs more time before a release, or if it is released independently from OpenStack releases.
[1] https://releases.openstack.org/train/schedule.html#t-mf -- Thierry Carrez (ttx) From thierry at openstack.org Fri Aug 2 12:17:44 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 14:17:44 +0200 Subject: [winstackers][release] compute-hyperv release in Train Message-ID: <3b557e42-25ea-4c74-d119-3815cdb95a0b@openstack.org> Hi winstackers, We are now past the Train membership freeze[1] and compute-hyperv is not listed as a train deliverable yet. Unless you act very quickly (and add a deliverables/train/compute-hyperv.yaml file to the openstack/releases repository), we will not include a release of compute-hyperv in OpenStack Train. This may just be fine, for example if compute-hyperv needs more time before a release, or if it is released independently from OpenStack releases. [1] https://releases.openstack.org/train/schedule.html#t-mf -- Thierry Carrez (ttx) From amotoki at gmail.com Fri Aug 2 12:29:17 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 2 Aug 2019 21:29:17 +0900 Subject: Openstack IPv6 neutron configuration In-Reply-To: <62-5d43f000-5-50968180@101299267> References: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> <62-5d43f000-5-50968180@101299267> Message-ID: From a quick look through Jens's guide, it looks like a really nice tutorial to follow. External connectivity is really an important point in IPv6 networking with neutron. I think my presentation at the Boston summit two years ago is still relevant [1]. It is based on my experience from being involved in an IPv6 PoC with our customers (around autumn 2016). While Jens's guide covers the detailed commands for everything, mine helps with understanding some of the background on neutron with IPv6.
[1] https://www.slideshare.net/ritchey98/openstack-neutron-ipv6-lessons Thanks, Akihiro Motoki (irc: amotoki) On Fri, Aug 2, 2019 at 5:14 PM Jens Harbott wrote: > > On Friday, August 02, 2019 09:16 CEST, Slawek Kaplonski wrote: > > > Hi, > > > > In tenant networks IPv6 packets are going same way as IPv4 packets. There is no differences between IPv4 and IPv6 AFAIK. > > In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You can find some deployment examples and explanation when ovs mechanism driver is used and in https://docs.openstack.org/neutron/latest/admin/deploy-lb.html there is similar doc for linuxbridge driver. > > For private networking this is true, if you want public connectivity with IPv6, you need to be aware that there is no SNAT and no floating IPs with IPv6. Instead you need to assign globally routable IPv6 addresses directly to your tenant subnets and use address-scopes plus neutron-dynamic-routing in order to make sure that these addresses get indeed routed to the internet. I have written a small guide how to do this[1], feedback is welcome. > > [1] https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/11/neutron-pike-ipv6.html > > > There are differences with e.g. how DHCP is handled for IPv6. Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. > > Also noting that the good reference article at the end of this doc sadly has disappeared, though you can still find it via the web archives. See also https://review.opendev.org/674018 > > From cdent+os at anticdent.org Fri Aug 2 12:37:41 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 2 Aug 2019 13:37:41 +0100 (BST) Subject: [placement] update 19-30 Message-ID: HTML: https://anticdent.org/placement-update-19-30.html Pupdate 19-30 is brought to you by the letter P for Performance. 
# Most Important

The main things on the Placement radar are implementing Consumer Types and cleanups, performance analysis, and documentation related to nested resource providers.

# What's Changed

* [os-traits 0.16.0](https://review.opendev.org/673294) was released, with a corresponding [canary update](https://review.opendev.org/673964).
* The [ProviderIds namedtuple was removed](https://review.opendev.org/673788). This is the first in a series of performance optimizations discovered as part of the analysis described below in the Cleanup section.

# Stories/Bugs

(Numbers in () are the change since the last pupdate.)

There are 23 (2) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 3 (1) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 5 (0) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 11 (1) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 4 (0) are [docs](https://storyboard.openstack.org/#!/worklist/637). If you're interested in helping out with placement, those stories are good places to look.

* Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 17 (0).
* Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 4 (0).

# osc-placement

osc-placement is currently behind by 12 microversions.

* Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results.
* Adds a new '--amend' option which can update resource provider inventory without requiring the user to pass a full replacement for inventory.

# Main Themes

## Consumer Types

Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting.

* A WIP, as microversion 1.37, has started.
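To make the quota-accounting motivation concrete, here is a toy grouping sketch in Python. The data shape and the INSTANCE/MIGRATION type names are invented for illustration; this is not the placement API:

```python
from collections import defaultdict

# Invented sample data: allocations keyed by consumer, each tagged with a
# consumer type, roughly the kind of grouping a consumer type would enable.
allocations = [
    {"consumer": "c1", "type": "INSTANCE", "resources": {"VCPU": 4}},
    {"consumer": "c2", "type": "MIGRATION", "resources": {"VCPU": 4}},
    {"consumer": "c3", "type": "INSTANCE", "resources": {"VCPU": 2}},
]

# Quota accounting can then count only INSTANCE consumers, ignoring the
# transient MIGRATION ones that would otherwise double-count usage.
usage_by_type = defaultdict(int)
for alloc in allocations:
    usage_by_type[alloc["type"]] += alloc["resources"]["VCPU"]

print(dict(usage_by_type))  # → {'INSTANCE': 6, 'MIGRATION': 4}
```

With a type attached to each consumer, the quota layer can answer "how many VCPUs are used by real instances?" without special-casing migrations.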
## Cleanup Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things. I started some performance analysis this week. Initially I started working with [placement master in a container](https://anticdent.org/profiling-placement-in-docker.html) but as I started making changes I moved back to [container-less](https://anticdent.org/profiling-wsgi-apps.html). What I discovered was that there is quite a bit of redundancy in the code in the `objects` package that I was able to remove. For example, we were creating at least twice as many ProviderSummary objects as required in a situation with multiple request groups. It's likely there would have been more duplicates with more request groups. That's improved in [this change](https://review.opendev.org/674254), which is at the end of a stack of several other like-minded improvements. The improvements in that stack will not be obvious until the [more complex nested topology](https://review.opendev.org/#/c/673513/) is generally available. My analysis was based on that topology. Not to put too fine a point on it, but this kind of incremental analysis and improvement is something I think we (the we that is the community of OpenStack) should be doing far more often. It is _incredibly_ revealing about how the system works and about the opportunities for making the code both work better and be easier to maintain. One outcome of this work will be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is use more web servers and more database servers, but that's often very wasteful. 
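The ProviderSummary duplication described above can be modeled in miniature. This is a toy sketch, not the actual `objects` package code — the class and function names are invented — but it shows the shape of the fix: build one summary per provider and share it across request groups, instead of rebuilding it for every group that references the same provider:

```python
# Toy model of the redundancy fix: construct each provider's summary once
# and reuse it across request groups. Names are illustrative only.
class ProviderSummary:
    constructed = 0  # count constructions to make the saving visible

    def __init__(self, provider_id):
        ProviderSummary.constructed += 1
        self.provider_id = provider_id

def summaries_for_groups(request_groups):
    """Map each request group to its summaries, caching by provider id."""
    cache = {}
    result = []
    for group in request_groups:
        row = []
        for pid in group:
            if pid not in cache:  # build each summary at most once
                cache[pid] = ProviderSummary(pid)
            row.append(cache[pid])
        result.append(row)
    return result

# Two request groups sharing provider "rp1": three objects are built,
# not four, and the shared provider's summary is literally the same object.
groups = [["rp1", "rp2"], ["rp1", "rp3"]]
summaries = summaries_for_groups(groups)
```

With more request groups overlapping the same providers, the saving grows, which matches the observation that more duplicates would have appeared with more groups.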
# Other Placement Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open). There is one [os-traits change](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And two [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open). # Other Service Users New discoveries are added to the end. Merged stuff is removed. Anything that has had no activity in 4 weeks has been removed. * Nova: nova-manage: heal port allocations * Cyborg: Placement report * helm: add placement chart * libvirt: report pmem namespaces resources by provider tree * Nova: Remove PlacementAPIConnectFailure handling from AggregateAPI * Nova: WIP: Add a placement audit command * zun: [WIP] Use placement for unified resource management * Kayobe: Build placement images by default * blazar: Fix placement operations in multi-region deployments * Nova: libvirt: Start reporting PCPU inventory to placement * Nova: support move ops with qos ports * Nova: get_ksa_adapter: nix by-service-type confgrp hack * Blazar: Create placement client for each request * nova: Support filtering of hosts by forbidden aggregates * blazar: Send global_request_id for tracing calls * Nova: Update HostState.\*\_allocation_ratio earlier * tempest: Add placement API methods for testing routed provider nets * openstack-helm: Build placement in OSH-images * Correct global_request_id sent to Placement # End I started working with approximately 20,000 providers this week. Only 980,000 to go. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From massimo.sgaravatto at gmail.com Fri Aug 2 13:20:59 2019 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 2 Aug 2019 15:20:59 +0200 Subject: [ops] [nova] Problems migrating an instance with libvirt if the image got deleted ? 
Message-ID: Hi I remember I had problems in the past trying to resize or migrate instances launched using an image that was then deleted (I use kvm as hypervisor). This is indeed confirmed e.g. here: https://storyboard.openstack.org/#!/story/2004892 https://docs.openstack.org/operations-guide/ops-user-facing-operations.html#deleting-images Now I am no longer able to reproduce the problem, i.e.: 1- I create an image 2- I launch an instance using this image 3- I delete the image 4- I migrate the instance (nova migrate From jim at jimrollenhagen.com Fri Aug 2 13:57:18 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 2 Aug 2019 09:57:18 -0400 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: <9397f0b9-df46-b2ef-906b-7eba7660784d@openstack.org> References: <9397f0b9-df46-b2ef-906b-7eba7660784d@openstack.org> Message-ID: On Fri, Aug 2, 2019 at 5:39 AM Thierry Carrez wrote: > The release management team discussed this topic at the meeting yesterday. > > The current process works well in the case where you *know* you will > only do one release (release-once), or you *know* you will do more than > one release (release-often). We agree that it does not handle well the > case where you actually have no idea how many releases you will do > (release-if-needed). > > We need to add a bit of flexibility there, but: > > - the release team still needs to use a very limited number of standard > release models, and know as much as possible in advance. We handle > hundreds of OpenStack deliverables, we can't have everyone use their own > release variant. > > - we don't want to disconnect project teams from their releases (we > still want teams to trigger release points and feel responsible for the > resulting artifact). > > Here is the proposal we came up with: > > - The general idea is, by milestone-2, you should have picked your > release model. 
If you plan to release-once, you should use the > cycle-with-rcs model. If you plan to release-often, you should be > cycle-with-intermediary. In the hopefully rare case where you have no > idea and would like to release-if-needed, continue to read. > > - Between milestone-2 and milestone-3, we look up > cycle-with-intermediary things that have not done a release yet. For > those, we propose a switch to cycle-with-rcs, and use that to start a > discussion. > > At that point four things can happen: > > (1) you realize you could do an intermediary release, and do one now. > Patch to change release model is abandoned. > > (2) you realize you only want to do one release this cycle, and +1 the > patch. > > (3) you still have no idea where you're going for this deliverable this > cycle and would like to release as-needed: you -1 the patch. You > obviously commit to producing a release before RC1 freeze. If by RC1 > freeze we still have no release, we'll force one. > > (4) you realize that the deliverable should be abandoned, or should be > disconnected from the "OpenStack" release and be made independent, or > some other solution. You -1 the patch and propose an alternative. > > In all cases that initial patch is the occasion to raise the discussion > and cover that blind spot, well in advance of the final weeks of the > release where we don't have time to handle differently each of our > hundreds of deliverables. > This process seems reasonable, thanks Thierry! :) // jim > > Dmitry Tantsur wrote: > > Have you considered allowing intermediary releases with cycle-with-rc? > > Essentially combining the two models into one? > > You really only have two different scenarios. > > A- the final release is more usable and important than the others > B- the final release is just another release, just happens to have a > stable branch cut from it > > In scenario (A), you use RCs to apply more care and make sure the one > and only release works well. 
You can totally do other "releases" during > the cycle, but since those are not using RCs and are not as carefully > vetted, they use "beta" numbering. > > In scenario (B), all releases are equally important and usable. There is > no reason to use RCs for one and not the others. > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Aug 2 13:58:42 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 2 Aug 2019 15:58:42 +0200 Subject: [neutron][release] neutron-interconnection release in Train In-Reply-To: <2acc2939-77f4-b5cb-bbb8-b87b991ad38d@openstack.org> References: <2acc2939-77f4-b5cb-bbb8-b87b991ad38d@openstack.org> Message-ID: <18632AA7-E5D1-4191-ACB5-077D09090503@redhat.com> Hi, I don’t think that there is anything to release in this project currently. It has only some basic stuff and it’s not functional yet. Please check commits history for the project: https://opendev.org/openstack/neutron-interconnection/commits/branch/master > On 2 Aug 2019, at 14:17, Thierry Carrez wrote: > > Hi Neutron folks, > > We are now past the Train membership freeze[1] and neutron-interconnection is not listed as a train deliverable yet. Unless you act very quickly (and add a deliverables/train/neutron-interconnection.yaml file to the openstack/releases repository), we will not include a release of neutron-interconnection in OpenStack Train. > > This may just be fine, for example if neutron-interconnection needs more time before a release, or if it is released independently from OpenStack releases. 
> > [1] https://releases.openstack.org/train/schedule.html#t-mf > > -- > Thierry Carrez (ttx) > — Slawek Kaplonski Senior software engineer Red Hat From jim at jimrollenhagen.com Fri Aug 2 13:59:00 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 2 Aug 2019 09:59:00 -0400 Subject: [tc][forum] Shanghai Forum selection committee In-Reply-To: <5c6896d1-6e86-ad66-5351-35fefdac08f5@openstack.org> References: <5c6896d1-6e86-ad66-5351-35fefdac08f5@openstack.org> Message-ID: On Fri, Aug 2, 2019 at 6:07 AM Thierry Carrez wrote: > Hi, TC members! > > We need two TC members to serve on the Shanghai forum selection > committee, and help select, refine and potentially merge the forum > session proposals from the wider community. > > Beyond encouraging people to submit proposals, the bulk of the selection > committee work happens after the submission deadline (planned for > Sept > 16th) and the Forum program final selection (planned for Oct 7th). > > Since we'll have TC renewal elections in progress, it's simpler to pick > between members that are not standing for reelection in September. That > would be: asettle mugsie jroll mnaser ricolin, ttx and zaneb. > > Anyone interested? > I'm happy to help, but won't be attending the forum. Is that okay? // jim > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Aug 2 14:44:48 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 16:44:48 +0200 Subject: [tc][forum] Shanghai Forum selection committee In-Reply-To: References: <5c6896d1-6e86-ad66-5351-35fefdac08f5@openstack.org> Message-ID: <3c79e99b-ed7b-980b-aadd-d222890fefb0@openstack.org> Jim Rollenhagen wrote: > I'm happy to help, but won't be attending the forum. Is that okay? 
Sure, that makes you a disinterested party :) -- Thierry Carrez (ttx) From mriedemos at gmail.com Fri Aug 2 15:21:02 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 2 Aug 2019 10:21:02 -0500 Subject: [ops] [nova] Problems migrating an instance with libvirt if the image got deleted ? In-Reply-To: References: Message-ID: On 8/2/2019 8:20 AM, Massimo Sgaravatto wrote: > Now I am not able to reproduce anymore the problem, i.e.: I think the bug was just fixed [1]. https://review.opendev.org/#/q/Id0f05bb1275cc816d98b662820e02eae25dc57a3 The commit message on that change says live migration but the code that was changed is used by cold migration flows as well so likely benefited from the fix. -- Thanks, Matt From mriedemos at gmail.com Fri Aug 2 15:27:47 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 2 Aug 2019 10:27:47 -0500 Subject: [nova] local ssd disk performance In-Reply-To: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: <21341465-4761-62c3-5bf8-57cdc9efc7f9@gmail.com> On 8/2/2019 3:53 AM, Budai Laszlo wrote: > 1. configure Nova to use local LVM storage (images_types = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). Yes it's a known bug: https://bugs.launchpad.net/nova/+bug/1831657 As noted within that bug report, WindRiver had a patch at one point to make that work but it's long out of date, someone would have to polish it off and get it working again. The good news is we have a nova-lvm CI job which is currently skipping resize tests but in the patch that implements migrate for lvm we could unskip those tests and make sure everything is working in that nova-lvm CI job. We just need contributors that care about it to do the work (there seem to be several people that want this, but a dearth of people actually making it happen). > 2. 
use cinder with lvm backend and instance locality, we could migrate the instances, but the performance is less than half of the previous case I could dredge up the ML thread on this but while this is an option, (or even using the now-either-deprecated-or-deleted cinder local block volume type driver), it could quickly become a management nightmare since enforcing compute/volume locality with availability zones becomes a mess at scale. If you only have a half dozen computes or something then maybe that's not a problem in a private cloud shop, but it's definitely a problem at larger scale, and also complicated if you set the [cinder]cross_az_attach=False value in nova.conf because of known bugs [1] with that. [1] https://bugs.launchpad.net/nova/+bug/1694844 - yes I'm a bad person for not having cleaned up that patch yet but I haven't felt much urgency either. -- Thanks, Matt From daniel at speichert.pl Fri Aug 2 15:50:03 2019 From: daniel at speichert.pl (Daniel Speichert) Date: Fri, 2 Aug 2019 17:50:03 +0200 Subject: [nova] local ssd disk performance In-Reply-To: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: For the case of simply using local disk mounted for /var/lib/nova and raw disk image type, you could try adding to nova.conf: preallocate_images = space This implicitly changes the I/O method in libvirt from "threads" to "native", which in my case improved performance a lot (10 times) and generally is the best performance I could get. Best Regards Daniel On 8/2/2019 10:53, Budai Laszlo wrote: > Hello all, > > we have a problem with the performance of the disk IO in a KVM instance. > We are trying to provision VMs with high performance SSDs. we have investigated different possibilities with different results ... > > 1. configure Nova to use local LVM storage (images_types = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). > 2. 
use cinder with lvm backend and instance locality, we could migrate the instances, but the performance is less than half of the previous case > 3. mount the ssd on /var/lib/nova/instances and use the images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_types = lvm performance and read performance is ~65% of the lvm case. > > do you have any idea to improve the performance for any of the cases 2 or 3 which allows migration. > > Kind regards, > Laszlo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.budai at gmail.com Fri Aug 2 16:22:08 2019 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Fri, 2 Aug 2019 19:22:08 +0300 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: Thank you Daniel, My colleague found the same solution in the meantime. And that helped us as well. Kind regards, Laszlo On 8/2/19 6:50 PM, Daniel Speichert wrote: > For the case of simply using local disk mounted for /var/lib/nova and raw disk image type, you could try adding to nova.conf: > > preallocate_images = space > > This implicitly changes the I/O method in libvirt from "threads" to "native", which in my case improved performance a lot (10 times) and generally is the best performance I could get. > > Best Regards > Daniel > > On 8/2/2019 10:53, Budai Laszlo wrote: >> Hello all, >> >> we have a problem with the performance of the disk IO in a KVM instance. >> We are trying to provision VMs with high performance SSDs. we have investigated different possibilities with different results ... >> >> 1. configure Nova to use local LVM storage (images_types = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). >> 2. use cinder with lvm backend and instance locality, we could migrate the instances, but the performance is less than half of the previous case >> 3. 
mount the ssd on /var/lib/nova/instances and use the images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_types = lvm performance and read performance is ~65% of the lvm case. >> >> do you have any idea to improve the performance for any of the cases 2 or 3 which allows migration. >> >> Kind regards, >> Laszlo >> From kennelson11 at gmail.com Fri Aug 2 18:54:50 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 2 Aug 2019 11:54:50 -0700 Subject: [winstackers][release] compute-hyperv release in Train In-Reply-To: <3b557e42-25ea-4c74-d119-3815cdb95a0b@openstack.org> References: <3b557e42-25ea-4c74-d119-3815cdb95a0b@openstack.org> Message-ID: I have a patch out to add the empty deliverable file for compute-hyperv[1] after chatting to Mark and Cao Yuan. -Kendall (diablo_rojo) [1] https://review.opendev.org/#/c/674405/ On Fri, Aug 2, 2019 at 5:19 AM Thierry Carrez wrote: > Hi winstackers, > > We are now past the Train membership freeze[1] and compute-hyperv is not > listed as a train deliverable yet. Unless you act very quickly (and add > a deliverables/train/compute-hyperv.yaml file to the openstack/releases > repository), we will not include a release of compute-hyperv in > OpenStack Train. > > This may just be fine, for example if compute-hyperv needs more time > before a release, or if it is released independently from OpenStack > releases. > > [1] https://releases.openstack.org/train/schedule.html#t-mf > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Fri Aug 2 20:15:20 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 2 Aug 2019 16:15:20 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-6) Message-ID: This is the goal-6 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. There are 6 weeks remaining for completion of Train community goals [2]. 
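For project teams wondering what adopting the goal looks like concretely, here is a minimal sketch of a project's Zuul configuration using the template named in the goal description. The exact file layout and any extra templates vary per repository, and some projects use a variant such as the neutron-specific template instead:

```yaml
# Sketch of a repository's .zuul.yaml adopting the Train python3 goal.
# Real projects typically carry additional job templates alongside this.
- project:
    templates:
      - openstack-python3-train-jobs
```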
== How can you help? == If your project has failing tests please take a look and help fix. Python 3.7 unit tests will be self-testing in Zuul. Failing patches: https://review.openstack.org/#/q/topic:python3-train+status:open+(+label:Verified-1+OR+label:Verified-2+) == Ongoing Work == All patches have been submitted for all applicable projects. Note: I need to resubmit most of the OpenStack Charms but the project is currently in release freeze so I'm holding off on consuming 3rd party gate resources. Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-train+is:open Patch automation scripts needing review: https://review.opendev.org/#/c/666934 == Completed Work == Merged patches: https://review.openstack.org/#/q/topic:python3-train+is:merged == What's the Goal? == To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1]. == Reference Material == [1] Goal description: https://governance.openstack.org/tc/goals/train/python3-updates.html [2] Train release schedule: https://releases.openstack.org/train/schedule.html (see R-5 for "Train Community Goals Completed") Storyboard: https://storyboard.openstack.org/#!/story/2005924 Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7 Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/train.rst Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Fri Aug 2 20:19:23 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 2 Aug 2019 15:19:23 -0500 Subject: [nova][ops] Documenting nova tunables at scale Message-ID: <75119870-05f6-04c7-8610-ca6c1feabb10@gmail.com> I wanted to send this to get other people's feedback if they have particular nova configurations once they hit a certain scale (hundreds or thousands of nodes). Every once in awhile in IRC I'll be chatting with someone about configuration changes they've made running at large scale to avoid, for example, hammering the control plane. I don't know how many times I've thought, "it would be nice if we had a doc highlighting some of these things so a new operator could come along and see, oh I've never tried changing that value before". I haven't started that doc, but I've started a bug report for people to dump some of their settings. The most common ones could go into a simple admin doc to start. I know there is more I've thought about in the past that I don't have in here but this is just a starting point so I don't make the mistake of not taking action on this again. https://bugs.launchpad.net/nova/+bug/1838819 -- Thanks, Matt From tony at bakeyournoodle.com Sat Aug 3 00:10:18 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 3 Aug 2019 10:10:18 +1000 Subject: stestr Python 2 Support In-Reply-To: <20190724131136.GA11582@sinanju.localdomain> References: <20190724131136.GA11582@sinanju.localdomain> Message-ID: <20190803001018.GA2352@thor.bakeyournoodle.com> On Wed, Jul 24, 2019 at 09:11:36AM -0400, Matthew Treinish wrote: > Hi Everyone, > > I just wanted to send a quick update about the state of python 2 support in > stestr, since OpenStack is the largest user of the project. With the recent > release of stestr 2.4.0 we've officially deprecated the python 2.7 support > in stestr. It will emit a DeprecationWarning whenever it's run from the CLI > with python 2.7 now. 
The plan (which is not set in stone) is that we will be > pushing a 3.0.0 release that removes the python 2 support and compat code from > stestr sometime in early 2020 (definitely after the Python 2.7 EoL on Jan. > 1st). I don't believe this conflicts with the current Python version support > plans in OpenStack [1] but just wanted to make sure people were aware so that > there are no surprises when stestr stops working with Python 2 in 3.0.0. Thanks Matt. I know it's a little meta but if something really strange were to happen would you be open to doing 2.X.Y releases while we still have maintained branches that use it? Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From colleen at gazlene.net Sat Aug 3 01:03:57 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 02 Aug 2019 18:03:57 -0700 Subject: [keystone] Keystone Team Update - Week of 29 July 2019 Message-ID: <1eb8bd54-2fb5-4665-aef0-a3259f131ba7@www.fastmail.com> # Keystone Team Update - Week of 29 July 2019 ## News ### CI instability The volume of policy deprecation warnings we generate in our unit tests has gotten to such a critical level that it appears to be causing serious instability in our unit test CI, possibly even affecting the CI infrastructure itself[1]. It's been suggested that we use the warnings module's filtering capabilities to suppress these warnings in the unit test output, but it seems that the sheer number of warnings that need to be suppressed makes the filtering so inefficient that the tests are even more likely to time out. We could do what the warnings actually suggest and override the deprecated policies in the tests, but it seems most of our unit tests aren't even ready to handle the new policies. Investigation is ongoing[2]. 
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2019-08-01.log.html#t2019-08-01T15:05:40 [2] https://review.opendev.org/673933 ### External auth In this week's meeting we discussed[3] how best to document external auth and agreed it's probably best to deprecate it entirely. We're seeking input from operators on how this may affect them[4]. [3] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-07-30-16.00.log.html#l-38 [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-July/008127.html ## Action Items None outstanding ## Office Hours When there are topics to cover, the keystone team holds office hours on Tuesdays at 17:00 UTC. We will skip next week's office hours since we don't have a topic planned. Add topics you would like to see covered during office hours to the etherpad: https://etherpad.openstack.org/p/keystone-office-hours-topics ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 7 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 37 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. 
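Returning to the CI-instability note earlier in this newsletter: the warnings-module filtering that was suggested looks roughly like the sketch below. The message pattern is illustrative, not keystone's actual policy warning text. Note that each registered filter is a regex checked against every emitted warning, which is part of why a very large volume of warnings can still be slow even when suppressed:

```python
# Sketch of suppressing a flood of policy deprecation warnings with the
# stdlib warnings module. The message pattern here is made up.
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # filterwarnings() inserts this rule ahead of "always", so matching
    # warnings are dropped before they are recorded.
    warnings.filterwarnings(
        "ignore",
        message=r"Policy .* was deprecated",
        category=DeprecationWarning,
    )
    warnings.warn("Policy identity:get_user was deprecated", DeprecationWarning)
    warnings.warn("some unrelated warning", DeprecationWarning)

# Only the non-matching warning is recorded in `caught`.
```

The alternative the warnings actually suggest — overriding the deprecated policies in the tests themselves — avoids the per-warning regex cost entirely, which is why it remains attractive despite the test churn it implies.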
### Priority Reviews * Train Roadmap Stories System scope/default roles (https://trello.com/c/ERo50T7r , https://trello.com/c/RlYyb4DU) - https://review.opendev.org/#/q/status:open+topic:implement-default-roles+label:verified%253D%252B1 Application credential access rules (https://trello.com/c/XyBGhKrE) - https://review.opendev.org/#/q/status:open+topic:bp/whitelist-extension-for-app-creds+NOT+label:workflow%253D-1 Caching Guide (https://trello.com/c/UCFt3mfF) - https://review.opendev.org/672120 (Update the caching guide) Predictable IDs (https://trello.com/c/MVuu6DbU) - https://review.opendev.org/651655 (Predictable IDs for Roles) Oslo.limit (https://trello.com/c/KGGkNijR) - https://review.opendev.org/667242 (Add usage example) - https://review.opendev.org/666444 (Flush out basic enforcer and model relationship) - https://review.opendev.org/666085 (Add ksa connection logic) YAML Catalog (https://trello.com/c/Qv14G0xp) - https://review.opendev.org/483514 (Add yaml-loaded filesystem catalog backend) * Needs Discussion - https://review.opendev.org/669959 (discourage using X.509 with external auth) - https://review.opendev.org/655166 (Allows to use application credentials through group membership) * Oldest - https://review.opendev.org/448755 (Add federated support for creating a user) * Closes bugs - https://review.opendev.org/674122 (Fix websso auth loop) - https://review.opendev.org/672350 (Fixing dn_to_id function for cases were id is not in the DN) - https://review.opendev.org/674139 (Cleanup session on delete) ## Bugs This week we opened 6 new bugs and closed 6. 
Bugs opened (6) Bug #1838592 (keystone:High) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1838592 Bug #1838554 (keystone:Low) opened by Mihail Milev https://bugs.launchpad.net/keystone/+bug/1838554 Bug #1836618 (keystone:Undecided) opened by Ghanshyam Mann https://bugs.launchpad.net/keystone/+bug/1836618 Bug #1838231 (keystone:Undecided) opened by Raviteja Polina https://bugs.launchpad.net/keystone/+bug/1838231 Bug #1838704 (keystoneauth:Undecided) opened by Alex Schultz https://bugs.launchpad.net/keystoneauth/+bug/1838704 Bug #1836568 (oslo.policy:Undecided) opened by Colleen Murphy https://bugs.launchpad.net/oslo.policy/+bug/1836568 Bugs closed (4) Bug #1837061 (keystone:Wishlist) https://bugs.launchpad.net/keystone/+bug/1837061 Bug #1791111 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1791111 Bug #1836618 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1836618 Bug #1837010 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1837010 Bugs fixed (2) Bug #1724645 (keystone:Low) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1724645 Bug #1837407 (keystone:Low) fixed by Chason Chan https://bugs.launchpad.net/keystone/+bug/1837407 ## Milestone Outlook https://releases.openstack.org/train/schedule.html Feature proposal freeze happens in two weeks. Feature freeze follows four weeks after that. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From rony.khan at brilliant.com.bd Sat Aug 3 09:23:09 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Sat, 3 Aug 2019 15:23:09 +0600 Subject: Openstack IPv6 neutron confiuraton In-Reply-To: References: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> <62-5d43f000-5-50968180@101299267> Message-ID: <035701d549dd$1187cc10$34976430$@brilliant.com.bd> Hi Motoki, Thanks. I shall check the slide. 
Thanks & B'Rgds, Rony -----Original Message----- From: Akihiro Motoki [mailto:amotoki at gmail.com] Sent: Friday, August 2, 2019 6:29 PM To: rony.khan at brilliant.com.bd; OpenStack Discuss Subject: Re: Openstack IPv6 neutron confiuraton At a quick look through Jens's guide, it looks like a really nice tutorial to follow. External connectivity is really an important point on IPv6 networking with neutron. I think my presentation at Boston summit two years ago still works [1]. This is based on my experience when I was involved in IPv6 POC with our customers (around Autumn in 2016). While Jens's guide covers detail commands for all, mine helps understanding some backgrounds on neutron with IPv6. [1] https://www.slideshare.net/ritchey98/openstack-neutron-ipv6-lessons Thanks, Akihiro Motoki (irc: amotoki) On Fri, Aug 2, 2019 at 5:14 PM Jens Harbott wrote: > > On Friday, August 02, 2019 09:16 CEST, Slawek Kaplonski wrote: > > > Hi, > > > > In tenant networks IPv6 packets are going same way as IPv4 packets. There is no differences between IPv4 and IPv6 AFAIK. > > In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You can find some deployment examples and explanation when ovs mechanism driver is used and in https://docs.openstack.org/neutron/latest/admin/deploy-lb.html there is similar doc for linuxbridge driver. > > For private networking this is true, if you want public connectivity with IPv6, you need to be aware that there is no SNAT and no floating IPs with IPv6. Instead you need to assign globally routable IPv6 addresses directly to your tenant subnets and use address-scopes plus neutron-dynamic-routing in order to make sure that these addresses get indeed routed to the internet. I have written a small guide how to do this[1], feedback is welcome. > > [1] > https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/1 > 1/neutron-pike-ipv6.html > > > There are differences with e.g. how DHCP is handled for IPv6. 
Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. > > Also noting that the good reference article at the end of this doc > sadly has disappeared, though you can still find it via the web > archives. See also https://review.opendev.org/674018 > > From rony.khan at brilliant.com.bd Sat Aug 3 09:23:50 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Sat, 3 Aug 2019 15:23:50 +0600 Subject: Openstack IPv6 neutron confiuraton In-Reply-To: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> References: <027e01d548ef$543f5d00$fcbe1700$@brilliant.com.bd> <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> Message-ID: <035801d549dd$296da7f0$7c48f7d0$@brilliant.com.bd> Hi Slawek, Thanks. I shall check documents links. Thanks & B'Rgds, Rony -----Original Message----- From: Slawek Kaplonski [mailto:skaplons at redhat.com] Sent: Friday, August 2, 2019 1:16 PM To: rony.khan at brilliant.com.bd Cc: OpenStack Discuss Subject: Re: Openstack IPv6 neutron confiuraton Hi, In tenant networks IPv6 packets are going same way as IPv4 packets. There is no differences between IPv4 and IPv6 AFAIK. In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You can find some deployment examples and explanation when ovs mechanism driver is used and in https://docs.openstack.org/neutron/latest/admin/deploy-lb.html there is similar doc for linuxbridge driver. There are differences with e.g. how DHCP is handled for IPv6. Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. > On 2 Aug 2019, at 07:01, Md. Farhad Hasan Khan wrote: > > Hi, > We already have IPv4 vxLAN in our openstack. Now we want to add IPv6 for tenant network. Looking for help how to configure ipv6 tenant network in openstack neutron. Kindly help me to understand how ipv6 packet flow. Please give me some documents with network diagram. 
> > Thanks & B’Rgds, > Rony — Slawek Kaplonski Senior software engineer Red Hat From donny at fortnebula.com Sat Aug 3 17:41:33 2019 From: donny at fortnebula.com (Donny Davis) Date: Sat, 3 Aug 2019 13:41:33 -0400 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: I am using the cinder-lvm backend right now and performance is quite good. My situation is similar without the migration parts. Prior to this arrangement I was using iscsi to mount a disk in /var/lib/nova/instances and that also worked quite well. If you don't mind me asking, what kind of i/o performance are you looking for? On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo wrote: > Thank you Daniel, > > My colleague found the same solution in the meantime. And that helped us > as well. > > Kind regards, > Laszlo > > On 8/2/19 6:50 PM, Daniel Speichert wrote: > > For the case of simply using local disk mounted for /var/lib/nova and > raw disk image type, you could try adding to nova.conf: > > > > preallocate_images = space > > > > This implicitly changes the I/O method in libvirt from "threads" to > "native", which in my case improved performance a lot (10 times) and > generally is the best performance I could get. > > > > Best Regards > > Daniel > > > > On 8/2/2019 10:53, Budai Laszlo wrote: > >> Hello all, > >> > >> we have a problem with the performance of the disk IO in a KVM > instance. > >> We are trying to provision VMs with high performance SSDs. we have > investigated different possibilities with different results ... > >> > >> 1. configure Nova to use local LVM storage (images_types = lvm) - > provided the best performance, but we could not migrate our instances > (seems to be a bug). > >> 2. use cinder with lvm backend and instance locality, we could migrate > the instances, but the performance is less than half of the previous case > >> 3. mount the ssd on /var/lib/nova/instances and use the images_type = > raw in nova. 
We could migrate, but the write performance dropped to ~20% of > the images_types = lvm performance and read performance is ~65% of the lvm > case. > >> > >> do you have any idea to improve the performance for any of the cases 2 > or 3 which allows migration. > >> > >> Kind regards, > >> Laszlo > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Aug 3 18:56:44 2019 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 3 Aug 2019 20:56:44 +0200 Subject: [keystone] [stein] user_enabled_emulation config problem Message-ID: Hello all, I have an issue using user_enabled_emulation with my LDAP solution. I set: user_tree_dn = ou=Users,o=UCO user_objectclass = inetOrgPerson user_id_attribute = uid user_name_attribute = uid user_enabled_emulation = true user_enabled_emulation_dn = cn=Users,ou=Groups,o=UCO user_enabled_emulation_use_group_config = true group_tree_dn = ou=Groups,o=UCO group_objectclass = posixGroup group_id_attribute = cn group_name_attribute = cn group_member_attribute = memberUid group_members_are_ids = true Keystone properly lists members of the Users group but they all remain disabled. Did I misinterpret something? Kind regards, Radek -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjamgade at suse.de Thu Aug 1 11:20:14 2019 From: sjamgade at suse.de (Sumit Jamgade) Date: Thu, 1 Aug 2019 13:20:14 +0200 Subject: [telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics In-Reply-To: References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com> <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch> <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl> Message-ID: Hey Bernd, Can you try with just one publisher instead of 2 and also drop the archive_policy query parameter and its value. 
Then ceilometer should publish metrics based on map defined gnocchi_resources.yaml And while you are at it. Could you post a list of archive policies already defined in gnocchi, I believe this list should match what is listed in gnocchi_resources.yaml. Hope that helps Sumit On 7/31/19 3:22 AM, Bernd Bausch wrote: > > The message at the end of this email is some three months old. I have > the same problem. The question is: *How to use the new rate metrics in > Gnocchi. *I am using a Stein Devstack for my tests.* > * > > For example, I need the CPU rate, formerly named /cpu_util/. I created > a new archive policy that uses /rate:mean/ aggregation and has a 1 > minute granularity: > > $ gnocchi archive-policy show ceilometer-medium-rate > +---------------------+------------------------------------------------------------------+ > | Field               | > Value                                                            | > +---------------------+------------------------------------------------------------------+ > | aggregation_methods | rate:mean, > mean                                                  | > | back_window         | > 0                                                                | > | definition          | - points: 10080, granularity: 0:01:00, > timespan: 7 days, 0:00:00 | > | name                | > ceilometer-medium-rate                                           | > +---------------------+------------------------------------------------------------------+ > > I added the new policy to the publishers in /pipeline.yaml/: > > $ tail -n5 /etc/ceilometer/pipeline.yaml > sinks: >     - name: meter_sink >       publishers: >           - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift >           *- > gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift* > > After restarting all of Ceilometer, my hope was that the CPU rate > would magically appear in the metric list. 
But no: All metrics are > linked to archive policy /medium/, and looking at the details of an > instance, I don't detect anything rate-related: > > $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 > +-----------------------+---------------------------------------------------------------------+ > | Field                 | > Value                                                               | > +-----------------------+---------------------------------------------------------------------+ > ... > | metrics               | compute.instance.booting.time: > 76fac1f5-962e-4ff2-8790-1f497c99c17d | > |                       | cpu: > af930d9a-a218-4230-b729-fee7e3796944                           | > |                       | disk.ephemeral.size: > 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           | > |                       | disk.root.size: > 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                | > |                       | memory.resident: > 09efd98d-c848-4379-ad89-f46ec526c183               | > |                       | memory.swap.in: > 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                | > |                       | memory.swap.out: > 4d012697-1d89-4794-af29-61c01c925bb4               | > |                       | memory.usage: > 93eab625-0def-4780-9310-eceff46aab7b                  | > |                       | memory: > ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1                        | > |                       | vcpus: > e1c5acaf-1b10-4d34-98b5-3ad16de57a98                         | > | original_resource_id  | > ae3659d6-8998-44ae-a494-5248adbebe11                                | > ... > > | type                  | > instance                                                            | > | user_id               | > a9c935f52e5540fc9befae7f91b4b3ae                                    | > +-----------------------+---------------------------------------------------------------------+ > > Obviously, I am missing something. Where is the missing link? 
What do
> I have to do to get CPU usage rates? Do I have to create metrics?
> Do I have to ask Ceilometer to create metrics? How?
>
> Right now, no instructions seem to exist at all. If that is correct, I
> would be happy to write documentation once I understand how it works.
>
> Thanks a lot.
>
> Bernd
>
> On 5/10/2019 3:49 PM, info at dantalion.nl wrote:
>> Hello,
>>
>> I am working on Watcher and we are currently changing how metrics are
>> retrieved from different datasources such as Monasca or Gnocchi. Because
>> of this major overhaul I would like to validate that everything is
>> working correctly.
>>
>> Almost all of the optimization strategies in Watcher require the cpu
>> utilization of an instance as metric but with newer versions of
>> Ceilometer this has become unavailable.
>>
>> On IRC I received the information that Gnocchi could be used to
>> configure an aggregate and this aggregate would then report cpu
>> utilization, however, I have been unable to find documentation on how to
>> achieve this.
>>
>> I was also notified that cpu_util is something that could be computed
>> from other metrics. When reading
>> https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute
>> the documentation seems to agree on this as it states that cpu_util is
>> measured by using a 'rate of change' transformer. But I have not been
>> able to find how this can be computed.
>>
>> I was hoping someone could spare the time to provide documentation or
>> information on how this currently is best achieved.
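The 'rate of change' transformer mentioned in the quoted message did simple arithmetic: it divided the delta of the cumulative cpu metric (CPU time consumed, in nanoseconds) by the elapsed wall-clock time and the number of vCPUs to get a percentage. A sketch of that arithmetic in Python, with invented sample values:

```python
# cpu_util as the old Ceilometer transformer computed it (a sketch):
# rate of change of the cumulative "cpu" metric (nanoseconds of CPU
# time), normalized by wall-clock time and vCPU count, as a percentage.

def cpu_util(samples, vcpus):
    """samples: list of (timestamp_seconds, cpu_time_nanoseconds) tuples,
    in chronological order. Returns one utilization %% per interval."""
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        used_seconds = (c1 - c0) / 1e9   # CPU-seconds consumed in the interval
        elapsed = t1 - t0                # wall-clock seconds elapsed
        rates.append(100.0 * used_seconds / (elapsed * vcpus))
    return rates

# Invented sample data: a 2-vCPU instance polled every 60 seconds.
samples = [(0, 0), (60, 30_000_000_000), (120, 90_000_000_000)]
print(cpu_util(samples, vcpus=2))  # [25.0, 50.0]
```

This is only the arithmetic; from Stein on, the equivalent value is obtained via Gnocchi's rate:* aggregation methods rather than computed inside Ceilometer.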
>> >> Kind Regards, >> Corne Lukken (Dantali0n) >> From joseph.r.email at gmail.com Sat Aug 3 00:40:11 2019 From: joseph.r.email at gmail.com (Joe Robinson) Date: Sat, 3 Aug 2019 10:40:11 +1000 Subject: [nova][ops] Documenting nova tunables at scale In-Reply-To: <75119870-05f6-04c7-8610-ca6c1feabb10@gmail.com> References: <75119870-05f6-04c7-8610-ca6c1feabb10@gmail.com> Message-ID: Hi Matt, My name is Joe - docs person from years back - this looks like a good initiative and I would be up for documenting these settings at scale. Next step I can see is gathering more Info about this pain point (already started :)) and then I can draft something together for feedback. On Sat, 3 Aug. 2019, 6:25 am Matt Riedemann, wrote: > I wanted to send this to get other people's feedback if they have > particular nova configurations once they hit a certain scale (hundreds > or thousands of nodes). Every once in awhile in IRC I'll be chatting > with someone about configuration changes they've made running at large > scale to avoid, for example, hammering the control plane. I don't know > how many times I've thought, "it would be nice if we had a doc > highlighting some of these things so a new operator could come along and > see, oh I've never tried changing that value before". > > I haven't started that doc, but I've started a bug report for people to > dump some of their settings. The most common ones could go into a simple > admin doc to start. > > I know there is more I've thought about in the past that I don't have in > here but this is just a starting point so I don't make the mistake of > not taking action on this again. > > https://bugs.launchpad.net/nova/+bug/1838819 > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vungoctan252 at gmail.com Thu Aug 1 04:43:13 2019 From: vungoctan252 at gmail.com (Vu Tan) Date: Thu, 1 Aug 2019 11:43:13 +0700 Subject: [masakari] how to install masakari on centos 7 In-Reply-To: References: <35400f83-c29d-475a-8d36-d56b3cf16d30@email.android.com> Message-ID: Hi Patil, May I know how is it going ? On Tue, Jul 23, 2019 at 10:18 PM Vu Tan wrote: > Hi Patil, > Thank you for your reply, please instruct me if you successfully install > it. Thanks a lot > > On Tue, Jul 23, 2019 at 8:12 PM Patil, Tushar > wrote: > >> Hi Vu Tan, >> >> I'm trying to install Masakari using source code to reproduce the issue. >> If I hit the same issue as yours, I will troubleshoot this issue and let >> you know the solution or will update you what steps I have followed to >> bring up Masakari services successfully. >> >> Regards, >> Tushar Patil >> >> ________________________________________ >> From: Vu Tan >> Sent: Monday, July 22, 2019 12:33 PM >> To: Gaëtan Trellu >> Cc: Patil, Tushar; openstack-discuss at lists.openstack.org >> Subject: Re: [masakari] how to install masakari on centos 7 >> >> Hi Patil, >> May I know when the proper document for masakari is released ? 
I have >> configured conf file in controller and compute node, it seems running but >> it is not running as it should be, a lots of error in logs, here is a >> sample log: >> >> 2.7/site-packages/oslo_config/cfg.py:3024 >> 2019-07-19 10:25:26.360 7745 DEBUG oslo_service.service [-] bindir >> = /usr/local/bin log_opt_values /usr/lib/ >> python2.7/site-packages/oslo_config/cfg.py:3024 >> 2019-07-19 18:46:21.291 7770 ERROR masakari File >> "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 65, in >> _is_daemo >> n >> 2019-07-19 18:46:21.291 7770 ERROR masakari is_daemon = os.getpgrp() >> != os.tcgetpgrp(sys.stdout.fileno()) >> 2019-07-19 18:46:21.291 7770 ERROR masakari OSError: [Errno 5] >> Input/output error >> 2019-07-19 18:46:21.291 7770 ERROR masakari >> 2019-07-19 18:46:21.300 7745 CRITICAL masakari [-] Unhandled error: >> OSError: [Errno 5] Input/output error >> 2019-07-19 18:46:21.300 7745 ERROR masakari Traceback (most recent call >> last): >> 2019-07-19 18:46:21.300 7745 ERROR masakari File >> "/usr/bin/masakari-api", line 10, in >> >> I dont know if it is missing package or wrong configuration >> >> >> On Thu, Jul 11, 2019 at 6:14 PM Gaëtan Trellu < >> gaetan.trellu at incloudus.com> wrote: >> You will have to enable the debit, debug = true and check the APi log. >> >> Did you try to use the openstack CLi ? >> >> Gaetan >> >> On Jul 11, 2019 12:32 AM, Vu Tan > vungoctan252 at gmail.com>> wrote: >> I know it's just a warning, just take a look at this image: >> [image.png] >> it's just hang there forever, and in the log show what I have shown to you >> >> On Wed, Jul 10, 2019 at 8:07 PM Gaëtan Trellu < >> gaetan.trellu at incloudus.com> wrote: >> This is just a warning, not an error. >> >> On Jul 10, 2019 3:12 AM, Vu Tan > vungoctan252 at gmail.com>> wrote: >> Hi Gaetan, >> I follow you the guide you gave me, but the problem still persist, can >> you please take a look at my configuration to see what is wrong or what is >> missing in my config ? 
>> the error: >> 2019-07-10 14:08:46.876 17292 WARNING keystonemiddleware._common.config >> [-] The option "__file__" in conf is not known to auth_token >> 2019-07-10 14:08:46.876 17292 WARNING keystonemiddleware._common.config >> [-] The option "here" in conf is not known to auth_token >> 2019-07-10 14:08:46.882 17292 WARNING keystonemiddleware.auth_token [-] >> AuthToken middleware is set with keystone_authtoken.service_ >> >> the config: >> >> [DEFAULT] >> enabled_apis = masakari_api >> log_dir = /var/log/kolla/masakari >> state_path = /var/lib/masakari >> os_user_domain_name = default >> os_project_domain_name = default >> os_privileged_user_tenant = service >> os_privileged_user_auth_url = http://controller:5000/v3 >> os_privileged_user_name = nova >> os_privileged_user_password = P at ssword >> masakari_api_listen = controller >> masakari_api_listen_port = 15868 >> debug = False >> auth_strategy=keystone >> >> [wsgi] >> # The paste configuration file path >> api_paste_config = /etc/masakari/api-paste.ini >> >> [keystone_authtoken] >> www_authenticate_uri = http://controller:5000 >> auth_url = http://controller:5000 >> auth_type = password >> project_domain_id = default >> project_domain_name = default >> user_domain_name = default >> user_domain_id = default >> project_name = service >> username = masakari >> password = P at ssword >> region_name = RegionOne >> >> [oslo_middleware] >> enable_proxy_headers_parsing = True >> >> [database] >> connection = mysql+pymysql://masakari:P at ssword@controller/masakari >> >> >> >> On Tue, Jul 9, 2019 at 10:25 PM Vu Tan > vungoctan252 at gmail.com>> wrote: >> Thank Patil Tushar, I hope it will be available soon >> >> On Tue, Jul 9, 2019 at 8:18 AM Patil, Tushar > > wrote: >> Hi Vu and Gaetan, >> >> Gaetan, thank you for helping out Vu in setting up masakari-monitors >> service. 
>> >> As a masakari team ,we have noticed there is a need to add proper >> documentation to help the community run Masakari services in their >> environment. We are working on adding proper documentation in this 'Train' >> cycle. >> >> Will send an email on this mailing list once the patches are uploaded on >> the gerrit so that you can give your feedback on the same. >> >> If you have any trouble in setting up Masakari, please let us know on >> this mailing list or join the bi-weekly IRC Masakari meeting on the >> #openstack-meeting IRC channel. The next meeting will be held on 16th July >> 2019 @0400 UTC. >> >> Regards, >> Tushar Patil >> >> ________________________________________ >> From: Vu Tan > >> Sent: Monday, July 8, 2019 11:21:16 PM >> To: Gaëtan Trellu >> Cc: openstack-discuss at lists.openstack.org> openstack-discuss at lists.openstack.org> >> Subject: Re: [masakari] how to install masakari on centos 7 >> >> Hi Gaetan, >> Thanks for pinpoint this out, silly me that did not notice the simple >> "error InterpreterNotFound: python3". Thanks a lot, I appreciate it >> >> On Mon, Jul 8, 2019 at 9:15 PM > gaetan.trellu at incloudus.com>> gaetan.trellu at incloudus.com>>> wrote: >> Vu Tan, >> >> About "auth_token" error, you need "os_privileged_user_*" options into >> your masakari.conf for the API. >> As mentioned previously please have a look here to have an example of >> configuration working (for me at least): >> >> - masakari.conf: >> >> https://review.opendev.org/#/c/615715/42/ansible/roles/masakari/templates/masakari.conf.j2 >> - masakari-monitor.conf: >> >> https://review.opendev.org/#/c/615715/42/ansible/roles/masakari/templates/masakari-monitors.conf.j2 >> >> About your tox issue make sure you have Python3 installed. 
>> >> Gaëtan >> >> On 2019-07-08 06:08, Vu Tan wrote: >> >> > Hi Gaetan, >> > I try to generate config file by using this command tox -egenconfig on >> > top level of masakari but the output is error, is this masakari still >> > in beta version ? >> > [root at compute1 masakari-monitors]# tox -egenconfig >> > genconfig create: /root/masakari-monitors/.tox/genconfig >> > ERROR: InterpreterNotFound: python3 >> > _____________________________________________________________ summary >> > ______________________________________________________________ >> > ERROR: genconfig: InterpreterNotFound: python3 >> > >> > On Mon, Jul 8, 2019 at 3:24 PM Vu Tan > vungoctan252 at gmail.com>> vungoctan252 at gmail.com>>> wrote: >> > Hi, >> > Thanks a lot for your reply, I install pacemaker/corosync, >> > masakari-api, maskari-engine on controller node, and I run masakari-api >> > with this command: masakari-api, but I dont know whether the process is >> > running like that or is it just hang there, here is what it shows when >> > I run the command, I leave it there for a while but it does not change >> > anything : >> > [root at controller masakari]# masakari-api >> > 2019-07-08 15:21:09.946 30250 INFO masakari.api.openstack [-] Loaded >> > extensions: ['extensions', 'notifications', 'os-hosts', 'segments', >> > 'versions'] >> > 2019-07-08 15:21:09.955 30250 WARNING keystonemiddleware._common.config >> > [-] The option "__file__" in conf is not known to auth_token >> > 2019-07-08 15:21:09.955 30250 WARNING keystonemiddleware._common.config >> > [-] The option "here" in conf is not known to auth_token >> > 2019-07-08 15:21:09.960 30250 WARNING keystonemiddleware.auth_token [-] >> > AuthToken middleware is set with >> > keystone_authtoken.service_token_roles_required set to False. This is >> > backwards compatible but deprecated behaviour. Please set this to True. 
>> > 2019-07-08 15:21:09.974 30250 INFO masakari.wsgi [-] masakari_api >> > listening on 127.0.0.1:15868< >> http://127.0.0.1:15868> >> > 2019-07-08 15:21:09.975 30250 INFO oslo_service.service [-] Starting 4 >> > workers >> > 2019-07-08 15:21:09.984 30274 INFO masakari.masakari_api.wsgi.server >> > [-] (30274) wsgi starting up on http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.985 30275 INFO masakari.masakari_api.wsgi.server >> > [-] (30275) wsgi starting up on http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.992 30277 INFO masakari.masakari_api.wsgi.server >> > [-] (30277) wsgi starting up on http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.994 30276 INFO masakari.masakari_api.wsgi.server >> > [-] (30276) wsgi starting up on http://127.0.0.1:15868 >> > >> > On Sun, Jul 7, 2019 at 7:37 PM Gaëtan Trellu >> > > >>> >> wrote: >> > >> > Hi Vu Tan, >> > >> > Masakari documentation doesn't really exist... I had to figured some >> > stuff by myself to make it works into Kolla project. >> > >> > On controller nodes you need: >> > >> > - pacemaker >> > - corosync >> > - masakari-api (openstack/masakari repository) >> > - masakari- engine (openstack/masakari repository) >> > >> > On compute nodes you need: >> > >> > - pacemaker-remote (integrated to pacemaker cluster as a resource) >> > - masakari- hostmonitor (openstack/masakari-monitor repository) >> > - masakari-instancemonitor (openstack/masakari-monitor repository) >> > - masakari-processmonitor (openstack/masakari-monitor repository) >> > >> > For masakari-hostmonitor, the service needs to have access to systemctl >> > command (make sure you are not using sysvinit). >> > >> > For masakari-monitor, the masakari-monitor.conf is a bit different, you >> > will have to configure the [api] section properly. >> > >> > RabbitMQ needs to be configured (as transport_url) on masakari-api and >> > masakari-engine too. 
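For illustration, the transport_url setting mentioned above might look like this in masakari.conf on the API and engine hosts (host name and credentials are placeholders, not from this thread):

```ini
[DEFAULT]
# AMQP connection used by masakari-api and masakari-engine
# (placeholder RabbitMQ user/password/host; substitute your own)
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```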
>> > >> > Please check this review[1], you will have masakari.conf and >> > masakari-monitor.conf configuration examples. >> > >> > [1] https://review.opendev.org/#/c/615715 >> > >> > Gaëtan >> > >> > On Jul 7, 2019 12:08 AM, Vu Tan > vungoctan252 at gmail.com>> vungoctan252 at gmail.com>>> wrote: >> > >> > VU TAN > VUNGOCTAN252 at GMAIL.COM>> >> > >> > 10:30 AM (35 minutes ago) >> > >> > to openstack-discuss >> > >> > Sorry, I resend this email because I realized that I lacked of prefix >> > on this email's subject >> > >> > Hi, >> > >> > I would like to use Masakari and I'm having trouble finding a step by >> > step or other documentation to get started with. Which part should be >> > installed on controller, which is should be on compute, and what is the >> > prerequisite to install masakari, I have installed corosync and >> > pacemaker on compute and controller nodes, , what else do I need to do >> > ? step I have done so far: >> > - installed corosync/pacemaker >> > - install masakari on compute node on this github repo: >> > https://github.com/openstack/masakari >> > - add masakari in to mariadb >> > here is my configuration file of masakari.conf, do you mind to take a >> > look at it, if I have misconfigured anything? 
>> >
>> > [DEFAULT]
>> > enabled_apis = masakari_api
>> >
>> > # Enable to specify listening IP other than default
>> > masakari_api_listen = controller
>> > # Enable to specify port other than default
>> > masakari_api_listen_port = 15868
>> > debug = False
>> > auth_strategy=keystone
>> >
>> > [wsgi]
>> > # The paste configuration file path
>> > api_paste_config = /etc/masakari/api-paste.ini
>> >
>> > [keystone_authtoken]
>> > www_authenticate_uri = http://controller:5000
>> > auth_url = http://controller:5000
>> > auth_type = password
>> > project_domain_id = default
>> > user_domain_id = default
>> > project_name = service
>> > username = masakari
>> > password = P at ssword
>> >
>> > [database]
>> > connection = mysql+pymysql://masakari:P at ssword@controller/masakari

>> Disclaimer: This email and any attachments are sent in strictest
>> confidence for the sole use of the addressee and may contain legally
>> privileged, confidential, and proprietary data. If you are not the intended
>> recipient, please advise the sender by replying promptly to this email and
>> then delete and destroy this email and any attachments without any further
>> use, copying or forwarding.
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oscar.posada.sanchez at gmail.com  Fri Aug  2 21:45:03 2019
From: oscar.posada.sanchez at gmail.com (Oscar Omar Posada Sanchez)
Date: Fri, 2 Aug 2019 16:45:03 -0500
Subject: [training-labs] what is access domain
Message-ID:

Hi Team,

I am starting to study OpenStack. I am following this reference
https://github.com/openstack/training-labs but I cannot find the access
domain for the first login after installing the laboratory. Could you tell
me? Thanks.

-- Thank you for your attention and time. Have a nice day.
-------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From doug at doughellmann.com  Sun Aug  4 00:48:45 2019
From: doug at doughellmann.com (Doug Hellmann)
Date: Sat, 3 Aug 2019 20:48:45 -0400
Subject: [all][tc] U Cycle Naming Poll
Message-ID: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>

Every OpenStack development cycle and release has a code-name. As with
everything we do, the process of choosing the name is open and based on
input from community members. The name criteria are described in [1], and
this time around we were looking for names starting with U associated with
China. With some extra assistance from local community members (thank you
to everyone who helped!), we have a list of candidate names that will go
into the poll. Below is a subset of the names proposed, including those
that meet the standard criteria and some of the suggestions that do not.
Before we start the poll, the process calls for us to provide a period of
1 week so that any names removed from the proposals can be discussed and
any last-minute objections can be raised. We will start the poll next week
using this list, including any modifications based on that discussion.
乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) Ula "Miocene Baogeda Ula" (the name is in Mongolian) Uma http://www.fallingrain.com/world/CH/20/Uma.html Unicorn Urban Unique Umpire Utopia Umbrella Ultimate [1] https://governance.openstack.org/tc/reference/release-naming.html From berndbausch at gmail.com Sun Aug 4 07:50:09 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Sun, 4 Aug 2019 16:50:09 +0900 Subject: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization? Message-ID: Prior to Stein, Ceilometer issued a metric named /cpu_util/, which I could use to trigger alarms and autoscaling when CPU utilization was too high. cpu_util doesn't exist anymore. 
Instead, we are asked to use Gnocchi's /rate/ feature. However, when using rates, alarms on a group of resources require more parameters than just one metric: Both an aggregation and a reaggregation method are needed. For example, a group of instances that implement "myapp": gnocchi measures aggregation -m cpu --reaggregation mean --aggregation rate:mean --query server_group=myapp --resource-type instance Actually, this command uses a deprecated API (but from what I can see, Aodh still uses it). The new way is like this: gnocchi aggregates --resource-type instance '(aggregate rate:mean (metric cpu mean))' server_group=myapp If rate:mean is in the archive policy, it also works the other way around: gnocchi aggregates --resource-type instance '(aggregate mean (metric cpu rate:mean))' server_group=myapp Without reaggregation, I get quite unexpected numbers, including negative CPU rates. If you want to understand why, see this discussion with one of the Gnocchi maintainers [1]. *My problem*: Aodh allows me to set an aggregation method, but not a reaggregation method. How can I create alarms based on rates? The problem extends to Heat and autoscaling. Thanks much, Bernd. [1] https://github.com/gnocchixyz/gnocchi/issues/1044 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Sun Aug 4 07:57:27 2019 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sun, 4 Aug 2019 15:57:27 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: One of the most important one is missing: Urumqi: https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . The top 6 should actually not be counted since their Romanized spelling (pinyin) does not start with U. For cities like Urumqi, Ulanqab, Ulanhot, there are no need for the pinyin spelling since the name is following native's language. 
On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann wrote: > Every OpenStack development cycle and release has a code-name. As with > everything we do, the process of choosing the name is open and based on > input from communty members. The name critera are described in [1], and > this time around we were looking for names starting with U associated with > China. With some extra assistance from local community members (thank you > to everyone who helped!), we have a list of candidate names that will go > into the poll. Below is a subset of the names propsed, including those that > meet the standard criteria and some of the suggestions that do not. Before > we start the poll, the process calls for us to provide a period of 1 week > so that any names removed from the proposals can be discussed and any > last-minute objections can be raised. We will start the poll next week > using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared > among Mongolian/Manchu/Russian; this is a common Latin-alphabet > transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. 
> Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is > in Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Aug 4 13:47:40 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 4 Aug 2019 13:47:40 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <20190804134740.vjaje7mtmrun7vyw@yuggoth.org> On 2019-08-04 15:57:27 +0800 (+0800), Zhipeng Huang wrote: > One of the most important one is missing: Urumqi: > https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . [...] I had suggested we exclude it for reasons of cultural sensitivity, since this is the 10-year anniversary of the July 2009 Ürümqi riots and September 2009 Xinjiang unrest there and thought it would probably be best not to seem like we're commemorating that. If most folks in China don't see it as an insensitive choice then we could presumably readd Urumqi as an option, but it was omitted out of caution. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From doug at doughellmann.com Sun Aug 4 14:11:27 2019
From: doug at doughellmann.com (Doug Hellmann)
Date: Sun, 4 Aug 2019 10:11:27 -0400
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To:
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>
Message-ID:

> On Aug 4, 2019, at 3:57 AM, Zhipeng Huang wrote:
>
> One of the most important ones is missing: Urumqi: https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . The top 6 should actually not be counted since their Romanized spelling (pinyin) does not start with U.

Jeremy has already addressed the reason for dropping Urumqi. I made similar judgement calls on some of the suggestions not related to geography because I easily found negative connotations for them.

For the other items you refer to, I do see spellings starting with U there in the list we were given. I do not claim to understand the differences in the way those names have been translated into those forms, though. Are you saying those are invalid spellings?

> For cities like Urumqi, Ulanqab, Ulanhot, there are no need for the pinyin spelling since the name is following native's language.
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Tim.Bell at cern.ch Sun Aug 4 15:04:49 2019
From: Tim.Bell at cern.ch (Tim Bell)
Date: Sun, 4 Aug 2019 15:04:49 +0000
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To:
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>
Message-ID: <40516BD8-B533-4D73-ADA0-C9D5C10C0AFC@cern.ch>

I would also prefer not to establish too much of a precedent for non-geographical names. I feel Train should remain a special case (as it was related to the conference location, although not a geographical relation).

We’ve got some good choices in the U* place names (although I’ll need some help with pronunciation, like Bexar).

Tim

On 4 Aug 2019, at 16:11, Doug Hellmann wrote: [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhipengh512 at gmail.com Sun Aug 4 16:18:15 2019
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Mon, 5 Aug 2019 00:18:15 +0800
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To:
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>
Message-ID:

I don't quite understand what [GR] stands for, but at least for the top 6 the correct romanized pinyin spelling does not start with U :)

All the rest looks fine :)

On Sun, Aug 4, 2019 at 10:11 PM Doug Hellmann wrote: [...]

--
Zhipeng (Howard) Huang
Principle Engineer
OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From doug at doughellmann.com Sun Aug 4 16:19:26 2019
From: doug at doughellmann.com (Doug Hellmann)
Date: Sun, 4 Aug 2019 12:19:26 -0400
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To: <40516BD8-B533-4D73-ADA0-C9D5C10C0AFC@cern.ch>
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <40516BD8-B533-4D73-ADA0-C9D5C10C0AFC@cern.ch>
Message-ID: <200E2ADB-52B2-427A-B5D3-022C0416A3B6@doughellmann.com>

> On Aug 4, 2019, at 11:04 AM, Tim Bell wrote:
>
> I would also prefer not to establish too much of a precedent for non-geographical names.
> [...]

My understanding is that most of the other names were suggested by the Chinese contributor community, so I felt comfortable leaving them on the list even though I will be voting for one of the place names.

Doug

> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From doug at doughellmann.com Sun Aug 4 16:36:27 2019
From: doug at doughellmann.com (Doug Hellmann)
Date: Sun, 4 Aug 2019 12:36:27 -0400
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To:
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>
Message-ID:

> On Aug 4, 2019, at 12:18 PM, Zhipeng Huang wrote:
>
> Not quite understand what [GR] stands for but at least for top 6 ones the correct Romanized Pinyin spelling does not start with U :)
> [...]

It is another romanization system [1]. During the brainstorming process we were told that using it was acceptable, although less common than Pinyin.

[1] https://en.wikipedia.org/wiki/Gwoyeu_Romatzyh

> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org Sun Aug 4 16:37:39 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sun, 4 Aug 2019 16:37:39 +0000
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To:
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>
Message-ID: <20190804163739.sbczsg3zny7hyyfx@yuggoth.org>

On 2019-08-05 00:18:15 +0800 (+0800), Zhipeng Huang wrote:
> Not quite understand what [GR] stands for
[...]

"Gwoyeu Romatzyh (pinyin: Guóyǔ Luómǎzì, literally "National Language Romanization"), abbreviated GR, is a system for writing Mandarin Chinese in the Latin alphabet. The system was conceived by Yuen Ren Chao and developed by a group of linguists including Chao and Lin Yutang from 1925 to 1926. Chao himself later published influential works in linguistics using GR. In addition a small number of other textbooks and dictionaries in GR were published in Hong Kong and overseas from 1942 to 2000."

https://en.wikipedia.org/wiki/Gwoyeu_Romatzyh
--
Jeremy Stanley

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From miguel at mlavalle.com Sun Aug 4 18:52:07 2019
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Sun, 4 Aug 2019 13:52:07 -0500
Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core
Message-ID:

Dear Neutrinos,

I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation and evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, has driven the adoption of Pyroute2 and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with those of the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team.

I will keep this nomination open for a week as customary.

Best regards

Miguel

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhipengh512 at gmail.com Mon Aug 5 00:17:00 2019
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Mon, 5 Aug 2019 08:17:00 +0800
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To: <20190804163739.sbczsg3zny7hyyfx@yuggoth.org>
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <20190804163739.sbczsg3zny7hyyfx@yuggoth.org>
Message-ID:

Fascinating! All good choices :)

On Mon, Aug 5, 2019 at 12:40 AM Jeremy Stanley wrote:
> On 2019-08-05 00:18:15 +0800 (+0800), Zhipeng Huang wrote:
> > Not quite understand what [GR] stands for
> [...]

--
Zhipeng (Howard) Huang
Principle Engineer
OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhangbailin at inspur.com Mon Aug 5 02:37:04 2019
From: zhangbailin at inspur.com (Brin Zhang(张百林))
Date: Mon, 5 Aug 2019 02:37:04 +0000
Subject: reply: [lists.openstack.org代发]Re: [nova][ops] Documenting nova tunables at scale
Message-ID: <3d06c3e9d83d4cc7a0fe12df15091130@inspur.com>

Agree with this approach. A configuration manual for large-scale scenarios would be very valuable. When an OpenStack deployment reaches a certain scale (for example, 200 or 500+ nodes), various exception scenarios start to occur, such as RabbitMQ blocking, and the success rate of creating a batch of servers drops. There are configuration options to take care of, e.g. scheduling policies, RPC wait times, the number of workers, etc. A reference manual for these would be very helpful.

> On Sat, 3 Aug. 2019, 6:25 am Matt Riedemann, wrote:
> I wanted to send this to get other people's feedback if they have
> particular nova configurations once they hit a certain scale (hundreds
> or thousands of nodes). Every once in awhile in IRC I'll be chatting
> with someone about configuration changes they've made running at large
> scale to avoid, for example, hammering the control plane. I don't know
I don't know > how many times I've thought, "it would be nice if we had a doc > highlighting some of these things so a new operator could come along and > see, oh I've never tried changing that value before". > > I haven't started that doc, but I've started a bug report for people to > dump some of their settings. The most common ones could go into a simple > admin doc to start. > > I know there is more I've thought about in the past that I don't have in > here but this is just a starting point so I don't make the mistake of > not taking action on this again. > > https://bugs.launchpad.net/nova/+bug/1838819 > > -- > > Thanks, > > Matt Hi Matt, My name is Joe - docs person from years back - this looks like a good initiative and I would be up for documenting these settings at scale. Next step I can see is gathering more Info about this pain point (already started :)) and then I can draft something together for feedback. -------------- next part -------------- An HTML attachment was scrubbed... URL: From i at liuyulong.me Mon Aug 5 06:18:19 2019 From: i at liuyulong.me (=?utf-8?B?TElVIFl1bG9uZw==?=) Date: Mon, 5 Aug 2019 14:18:19 +0800 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Big +1, Rodolfo always give us valuable review comments and keep producing high quality code. Welcome to the core team! ------------------ Original ------------------ From: "Miguel Lavalle"; Date: Mon, Aug 5, 2019 02:52 AM To: "openstack-discuss"; Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core Dear Neutrinos, I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation an evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. 
Recently he has been working on improving the interaction with Nova during the port binding process, driven the adoption of Pyroute2 and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. I will keep this nomination open for a week as customary. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Mon Aug 5 06:39:21 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Mon, 5 Aug 2019 18:39:21 +1200 Subject: [openstack-dev][magnum] Project updates In-Reply-To: <45d4effd-ae55-cfe4-60dd-3635d7558eb5@catalyst.net.nz> References: <20190731171049.gayatjbtjvgxya25@yuggoth.org> <45d4effd-ae55-cfe4-60dd-3635d7558eb5@catalyst.net.nz> Message-ID: Hi all, The issue of Magnum being "Certified Kubernetes Installer" has been fixed, see https://landscape.cncf.io/organization=open-stack&selected=magnum Thanks. On 1/08/19 6:45 AM, feilong wrote: > On 1/08/19 5:10 AM, Jeremy Stanley wrote: >> On 2019-07-31 21:03:39 +1200 (+1200), feilong wrote: >> [...] >>> So far, we have done some great work in this cycle which make >>> Magnum to achieve to a higher level. >> [...] >> >> This is all great stuff, thanks for the update! >> >>> Kubernetes is still evolving very fast >> [...] >> >> On that note, the Stein release announcement[0] mentioned that >> Magnum was a "Certified Kubernetes Installer." I don't see it listed >> on the Kubernetes Conformance site[1] now, but it was apparently >> still on there as recently as early July[2]. It seemed like this was >> a big deal at one point, but wasn't kept up. 
Is there any interest >> from the Magnum maintainers in adding support for recent versions of >> Kubernetes and reacquiring that certification in time for the Train >> release? > TBH, I don't know why it was removed from the list and I didn't get any > notice about that. But now I'm working with Chris to get it back. Thanks > for the reminder. > > >> [0] https://www.openstack.org/software/stein/ >> [1] https://www.cncf.io/certification/software-conformance/ >> [2] https://web.archive.org/web/20190705004545/https://www.cncf.io/certification/software-conformance/ -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- From skaplons at redhat.com Mon Aug 5 07:04:08 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 5 Aug 2019 09:04:08 +0200 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: <833D103F-BEFF-4173-9138-4184082F6D65@redhat.com> Yes! That is great news. And of course +1 (or even +100) from me :) > On 4 Aug 2019, at 20:52, Miguel Lavalle wrote: > > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation and evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, has driven the adoption of Pyroute2 and has become very active in fixing all kinds of bugs.
The quality and number of his code reviews during the Train cycle are comparable with those of the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel — Slawek Kaplonski Senior software engineer Red Hat From rico.lin.guanyu at gmail.com Mon Aug 5 07:40:59 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 5 Aug 2019 15:40:59 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann wrote: > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from community members. The name criteria are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names proposed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion.
> > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake For the above options, it's not common to use the [GR] system in Shanghai (or in almost the entire China area). So if we would like to reduce confusion and unnecessary arguments, and also to be recognized by as wide an audience as we can, I don't think these are good choices. As for all the geographic options below, most of them originally come from other languages like Mongolian or Russian, so generally speaking, most people won't use the Pinyin system for those names. And I don't think it helps to put the Pinyin on top either. > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) I think this might be my fault here, because it's *Ussuri*! So let's s/Ussri/Ussuri/ (bad Rico! bad!) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name.
Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Mon Aug 5 07:55:57 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 5 Aug 2019 16:55:57 +0900 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Big +1 from me. He provides valuable reviews and good-quality code and is also responsible. On Mon, Aug 5, 2019 at 3:53 AM Miguel Lavalle wrote: > > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation and evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, has driven the adoption of Pyroute2 and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with those of the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary.
> > Best regards > > Miguel From merlin.blom at bertelsmann.de Mon Aug 5 08:43:08 2019 From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI) Date: Mon, 5 Aug 2019 08:43:08 +0000 Subject: [telemetry] Gnocchi: Aggregates Operation Syntax Message-ID: Hey, I would like to aggregate data from the gnocchi database by using the gnocchi aggregates function of the CLI/API. The documentation does not cover the operations that are available nor the syntax that has to be used: https://gnocchi.xyz/gnocchiclient/shell.html?highlight=reaggregation#aggregates Searching for more information I found a GitHub issue: https://github.com/gnocchixyz/gnocchi/issues/393 But I cannot use the syntax from that either. My use case: I want to aggregate the vcpu hours per month, vram hours per month, . per server or project. - when an instance is stopped, only storage is counted - the exact usage is used, e.g. 2 vcpus between the 1st and 7th day, 4 vcpus between the 8th and the last day of the month; no mean calculations Do you have detailed documentation about the gnocchi aggregates operation syntax? Do you have complex examples for gnocchi aggregations? Especially when using the Python bindings: conn_gnocchi.metric.aggregation(metrics="memory", query=[XXXXXXXX], resource_type='instance', groupby='original_resource_id') Can you give me advice regarding my use case? Do's and don'ts. Thank you for your help in advance! Merlin Blom -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From bcafarel at redhat.com Mon Aug 5 08:52:22 2019 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 5 Aug 2019 10:52:22 +0200 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Big +1 on everything Miguel said!
QoS, nova/neutron fixes, all the work to switch to Pyroute2, great reviews to learn from, … And of course +1 for seeing him added to the core list! On Sun, 4 Aug 2019 at 20:54, Miguel Lavalle wrote: > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the > Neutron core team. Rodolfo has been an active contributor to Neutron since > the Mitaka cycle. He has been a driving force over these years in the > implementation and evolution of Neutron's QoS feature, currently leading the > sub-team dedicated to it. Recently he has been working on improving the > interaction with Nova during the port binding process, has driven the adoption > of Pyroute2 and has become very active in fixing all kinds of bugs. The > quality and number of his code reviews during the Train cycle are > comparable with those of the leading members of the core team: > https://www.stackalytics.com/?release=train&module=neutron-group. In my > opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel > -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.budai at gmail.com Mon Aug 5 08:54:41 2019 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Mon, 5 Aug 2019 11:54:41 +0300 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: Hi, well, we used the same command to measure the different storage possibilities (sudo iozone -e -I -t 32 -s 100M -r 4k -i 0 -i 1 -i 2), we have measured the disk mounted directly on the host, and we have used the same command to measure the performance in the guests using different ways to attach the storage to the VM. For instance, on the host we were able to measure 408MB/s initial writes, 420MB/s rewrites, 397MB/s random writes, and 700MB/s random reads; on the guest we got the following, using the different technologies: 1.
Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, images type =raw, without preallocate images) Initial writes 60MB/s, rewrites 70MB/s, random writes 73MB/s, random reads 427MB/s. 2. Ephemeral served by nova (images type = lvm, without preallocate images) Initial writes 332MB/s, rewrites 416MB/s, random writes 417MB/s, random reads 550MB/s. 3. Cinder attached LVM with instance locality Initial writes 148MB/s, rewrites 151MB/s, random writes 149MB/s, random reads 160MB/s. 4. Cinder attached LVM without instance locality Initial writes 103MB/s, rewrites 109MB/s, random writes 103MB/s, random reads 105MB/s. 5. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, images type =raw, with preallocate images) Initial writes 348MB/s, rewrites 400MB/s, random writes 393MB/s, random reads 553MB/s. So points 3 and 4 are using iSCSI. As you can see, those numbers are far below the local volume-based or the local file-based (with preallocate images) results. Could you share some numbers about the performance of your iSCSI-based setup? That would allow us to see whether we are doing something wrong related to iSCSI. Thank you. Kind regards, Laszlo On 8/3/19 8:41 PM, Donny Davis wrote: > I am using the cinder-lvm backend right now and performance is quite good. My situation is similar without the migration parts. Prior to this arrangement I was using iSCSI to mount a disk in /var/lib/nova/instances and that also worked quite well.  > > If you don't mind me asking, what kind of i/o performance are you looking for? > > On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo > wrote: > > Thank you Daniel, > > My colleague found the same solution in the meantime. And that helped us as well.
> > Kind regards, > Laszlo > > On 8/2/19 6:50 PM, Daniel Speichert wrote: > > For the case of simply using local disk mounted for /var/lib/nova and raw disk image type, you could try adding to nova.conf: > > > >     preallocate_images = space > > > > This implicitly changes the I/O method in libvirt from "threads" to "native", which in my case improved performance a lot (10 times) and generally is the best performance I could get. > > > > Best Regards > > Daniel > > > > On 8/2/2019 10:53, Budai Laszlo wrote: > >> Hello all, > >> > >> we have a problem with the performance of the disk IO in a KVM instance. > >> We are trying to provision VMs with high performance SSDs. we have investigated different possibilities with different results ... > >> > >> 1. configure Nova to use local LVM storage (images_types = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). > >> 2. use cinder with lvm backend  and instance locality, we could migrate the instances, but the performance is less than half of the previous case > >> 3. mount the ssd on /var/lib/nova/instances and use the images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_types = lvm performance and read performance is ~65% of the lvm case. > >> > >> do you have any idea to improve the performance for any of the cases 2 or 3 which allows migration. > >> > >> Kind regards, > >> Laszlo > >> > > From donny at fortnebula.com Mon Aug 5 12:08:11 2019 From: donny at fortnebula.com (Donny Davis) Date: Mon, 5 Aug 2019 08:08:11 -0400 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: I am happy to share numbers from my iscsi setup. However these numbers probably won't mean much for your workloads. 
I tuned my openstack to perform as well as possible for a specific workload (Openstack CI), so some of the things I have put my efforts into are for CI work and not really relevant to general purpose. Also your cinder performance hinges greatly on your network's capabilities. I use a dedicated nic for iscsi traffic, and MTUs are set at 9000 for every device in the iscsi path. *Only* that nic is set at MTU 9000, because if the rest of the openstack network is, it can create more problems than it solves. My network spine is 40G, and each compute node has 4 10G nics. I only use one nic for iscsi traffic. The block storage node has two 40G nics. With that said, I use the fio tool to benchmark performance on linux systems. Here is the command I use to run the benchmark: fio --numjobs=16 --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=32 --size=10G --readwrite=randrw --rwmixread=50 From the block storage node locally: Run status group 0 (all jobs): READ: bw=2960MiB/s (3103MB/s), 185MiB/s-189MiB/s (194MB/s-198MB/s), io=79.9GiB (85.8GB), run=26948-27662msec WRITE: bw=2963MiB/s (3107MB/s), 185MiB/s-191MiB/s (194MB/s-200MB/s), io=80.1GiB (85.0GB), run=26948-27662msec From inside a VM: Run status group 0 (all jobs): READ: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=30.0GiB (32.2GB), run=69242-69605msec WRITE: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=29.0GiB (32.2GB), run=69242-69605msec The vm side of the test is able to push pretty close to the limits of the nic. My cloud also currently has a full workload on it, as I have learned in working to get an optimized for CI cloud... it does matter if there is a workload or not. Are you using RAID for your SSDs? If so, what type? Do you mind sharing what workload will go on your Openstack deployment? Is it DB, web, general purpose, etc.?
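One caveat when reading the fio output above: the bw figures are binary MiB/s, and the values in parentheses are decimal MB/s, so the two differ by a constant factor of 2**20 / 10**6 (about 4.9%). A quick sketch of the conversion, using the aggregate READ numbers quoted above:

```python
# fio reports bandwidth in binary MiB/s and, in parentheses, decimal MB/s.
# 1 MiB = 2**20 bytes; 1 MB = 10**6 bytes.

def mib_to_mb(mib_per_s: float) -> float:
    """Convert a binary MiB/s figure to decimal MB/s."""
    return mib_per_s * 2**20 / 10**6

# Aggregate READ bandwidth from the two runs quoted above.
local_read = 2960.0  # MiB/s on the block storage node
vm_read = 441.0      # MiB/s from inside a VM

print(f"local: {mib_to_mb(local_read):.0f} MB/s")  # close to the 3103 MB/s fio printed
print(f"vm:    {mib_to_mb(vm_read):.0f} MB/s")     # close to the 463 MB/s fio printed
```

The one-unit differences against fio's own printout presumably come from fio rounding the MiB/s figure before display; the conversion factor itself is exact.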
~/Donny D On Mon, Aug 5, 2019 at 4:54 AM Budai Laszlo wrote: > Hi, > > well, we used the same command to measure the different storage > possibilities (sudo iozone -e -I -t 32 -s 100M -r 4k -i 0 -i 1 -i 2), we > have measured the disk mounted directly on the host, and we have used the > same command to measure the performance in the guests using different ways > to attach the storage to the VM. > > for instance on the host we were able to measure 408MB/s initial writes, > 420MB/s rewrites, 397MB/s Random writes, and 700MB/s random reads, on the > guest we got the following, using the different technologies: > > 1. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, > images type =raw, without preallocate images) > Initial writes 60Mb/s, rewrites 70Mb/s, random writes 73MB/s, random reads > 427MB/s. > > 2. Ephemeral served by nova (images type = lvm, without preallocate images) > Initial writes 332Mb/s, rewrites 416Mb/s, random writes 417MB/s, random > reads 550MB/s. > > 3. Cinder attached LVM with instance locality > Initial writes 148Mb/s, rewrites 151Mb/s, random writes 149MB/s, random > reads 160MB/s. > > 4. Cinder attached LVM without instance locality > Initial writes 103Mb/s, rewrites 109Mb/s, random writes 103MB/s, random > reads 105MB/s. > > 5. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, > images type =raw, witht preallocate images) > Initial writes 348Mb/s, rewrites 400Mb/s, random writes 393MB/s, random > reads 553MB/s > > > So points 3,4 are using ISCSI. As you can see those numbers are far below > the local volume based or the local file based with preallocate images. > > Could you share some nubers about the performance of your ISCSI based > setup? that would allow us to see whether we are doing something wrong > related to the iscsi. Thank you. > > Kind regards, > Laszlo > > > On 8/3/19 8:41 PM, Donny Davis wrote: > > I am using the cinder-lvm backend right now and performance is quite > good. 
My situation is similar without the migration parts. Prior to this > arrangement I was using iscsi to mount a disk in /var/lib/nova/instances > and that also worked quite well. > > > > If you don't mind me asking, what kind of i/o performance are > you looking for? > > > > On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo > wrote: > > > > Thank you Daniel, > > > > My colleague found the same solution in the meantime. And that > helped us as well. > > > > Kind regards, > > Laszlo > > > > On 8/2/19 6:50 PM, Daniel Speichert wrote: > > > For the case of simply using local disk mounted for /var/lib/nova > and raw disk image type, you could try adding to nova.conf: > > > > > > preallocate_images = space > > > > > > This implicitly changes the I/O method in libvirt from "threads" > to "native", which in my case improved performance a lot (10 times) and > generally is the best performance I could get. > > > > > > Best Regards > > > Daniel > > > > > > On 8/2/2019 10:53, Budai Laszlo wrote: > > >> Hello all, > > >> > > >> we have a problem with the performance of the disk IO in a KVM > instance. > > >> We are trying to provision VMs with high performance SSDs. we > have investigated different possibilities with different results ... > > >> > > >> 1. configure Nova to use local LVM storage (images_types = lvm) - > provided the best performance, but we could not migrate our instances > (seems to be a bug). > > >> 2. use cinder with lvm backend and instance locality, we could > migrate the instances, but the performance is less than half of the > previous case > > >> 3. mount the ssd on /var/lib/nova/instances and use the > images_type = raw in nova. We could migrate, but the write performance > dropped to ~20% of the images_types = lvm performance and read performance > is ~65% of the lvm case. > > >> > > >> do you have any idea to improve the performance for any of the > cases 2 or 3 which allows migration. 
> > >> > > >> Kind regards, > > >> Laszlo > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Aug 5 12:22:16 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 5 Aug 2019 08:22:16 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <3F0801EC-9EDA-4AC0-A12F-7B2FF30952B0@doughellmann.com> > On Aug 5, 2019, at 3:40 AM, Rico Lin wrote: > > > > On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann > wrote: > > > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > > For the above options, it's not common to use [GR] system, in Shanghai (or almost entire China area). 
So if we would like to reduce confusion and unnecessary arguments, and also to be recognized by as wide an audience as we can, I don't think these are good choices. OK, based on your input and Howard's I will just drop these options from the proposed list. > > As for all the geographic options below, most of them originally come from other languages like Mongolian or Russian, so generally speaking, most people won't use the Pinyin system for those names. And I don't think it helps to put the Pinyin on top either. Are you saying we should not include any of these names either, or just that when we present the poll we should not include the Pinyin spelling? > > > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > > I think this might be my fault here, because it's *Ussuri*! So let's s/Ussri/Ussuri/ (bad Rico! bad!) I will update the wiki page and ensure this is correct in the poll. > > > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name.
Pinyin would be Wūzhūmùqìn) > > > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > > Uma http://www.fallingrain.com/world/CH/20/Uma.html > > Unicorn > > Urban > > Unique > > Umpire > > Utopia > > Umbrella > > Ultimate > > > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > > > > > > -- > May The Force of OpenStack Be With You, > Rico Lin > irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Mon Aug 5 13:03:27 2019 From: ed at leafe.com (Ed Leafe) Date: Mon, 5 Aug 2019 08:03:27 -0500 Subject: [uc]The UC Nomination Period is now open! Message-ID: As the subject says, the nomination period for the August 2019 User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common, no third party nomination is required. They do so by sending an email to the user-committee at lists.openstack.org mailing-list, with the subject: “UC candidacy” by August 16, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. -- Ed Leafe From fungi at yuggoth.org Mon Aug 5 13:15:13 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Aug 2019 13:15:13 +0000 Subject: [openstack-dev][magnum] Project updates In-Reply-To: References: <20190731171049.gayatjbtjvgxya25@yuggoth.org> <45d4effd-ae55-cfe4-60dd-3635d7558eb5@catalyst.net.nz> Message-ID: <20190805131512.uxmktws5nhymuxrx@yuggoth.org> On 2019-08-05 18:39:21 +1200 (+1200), Feilong Wang wrote: > The issue of Magnum being "Certified Kubernetes Installer" has been > fixed, see > https://landscape.cncf.io/organization=open-stack&selected=magnum Thanks. [...] That's great news--congratulations to the Magnum team! 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kevin at cloudnull.com Mon Aug 5 13:26:54 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Mon, 5 Aug 2019 08:26:54 -0500 Subject: [tripleo] Proposing Kevin Carter (cloudnull) for tripleo-ansible core In-Reply-To: <486e840c-c07b-861d-ea6d-5dd475dd0f6c@redhat.com> References: <486e840c-c07b-861d-ea6d-5dd475dd0f6c@redhat.com> Message-ID: Thank you, everyone. It's an honor to be considered for a core reviewer position on this team and I will strive not to let you down. -- Kevin Carter IRC: Cloudnull On Mon, Jul 29, 2019 at 7:32 AM Giulio Fidente wrote: > On 7/26/19 11:00 PM, Alex Schultz wrote: > > Hey folks, > > > > I'd like to propose Kevin as a core for the tripleo-ansible repo (e.g. > > tripleo-ansible-core). He has made excellent progress centralizing our > > ansible roles and improving the testing around them. > > > > Please reply with your approval/objections. If there are no objections, > > we'll add him to tripleo-ansible-core next Friday Aug 2, 2019. > > thanks Kevin for igniting a long-awaited transformation > > +1 > -- > Giulio Fidente > GPG KEY: 08D733BA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Aug 5 13:28:43 2019 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 5 Aug 2019 07:28:43 -0600 Subject: [tripleo] Proposing Kevin Carter (cloudnull) for tripleo-ansible core In-Reply-To: References: <486e840c-c07b-861d-ea6d-5dd475dd0f6c@redhat.com> Message-ID: Since there were no objections, Kevin has been added to tripleo-ansible-core. Thanks, -Alex On Mon, Aug 5, 2019 at 7:27 AM Carter, Kevin wrote: > Thank you, everyone. It's an honor to be considered for a core reviewer > position on this team and I will strive not to let you down.
> > -- > > Kevin Carter > IRC: Cloudnull > > > On Mon, Jul 29, 2019 at 7:32 AM Giulio Fidente > wrote: > >> On 7/26/19 11:00 PM, Alex Schultz wrote: >> > Hey folks, >> > >> > I'd like to propose Kevin as a core for the tripleo-ansible repo (e.g. >> > tripleo-ansible-core). He has made excellent progress centralizing our >> > ansible roles and improving the testing around them. >> > >> > Please reply with your approval/objections. If there are no objections, >> > we'll add him to tripleo-ansible-core next Friday Aug 2, 2019. >> >> thanks Kevin for igniting a long waited transformation >> >> +1 >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Mon Aug 5 13:36:53 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 5 Aug 2019 09:36:53 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: You can drop the Pinyin equivalents from the Mongolian/Russian options when drawing up the poll. The Pinyin spelling for those options was added to the Wiki document (by me) to match what was done for the GR options. I found it important to note that the options which conveniently began with U under the typical transliteration systems of those languages only did so because those systems are, essentially, not Chinese. But, all that said, it's pretty rare to see Mongolian/Russian place names written in Pinyin so it would come off as a bit of a confusing distraction to actually include the Pinyin equivalents in the poll itself. Now, my two cents about the poll options: - I'm against using GR spelling... regardless of the actual history of the use of GR, I've already seen some Chinese contributors state that using GR is, at best, weird. - I'm against using Mongolian/Manchu/Russian names... 
I do not see how a name from one of these languages is at all representative of Shanghai. - I'm against using arbitrary English words... "Train" was fine because it represented the conference itself and something about our community. "Umpire" and "Umbrella" don't represent anything. - I'm in favor of using English words that describe something about the host city or country... for example I think "Urban" is a great choice because Shanghai is, by certain metrics, one of the most urban areas in the world, China has many of the world's largest cities, etc. Deciding whether certain options should even appear on the poll is outside my responsibility. On Sat, Aug 3, 2019 at 8:51 PM Doug Hellmann wrote: > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from community members. The name criteria are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names proposed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion.
> > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > From mnaser at vexxhost.com Mon Aug 5 13:38:55 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 5 Aug 2019 09:38:55 -0400 Subject: [tc] fast-approve change to update voting schedule for u release Message-ID: Hi everyone, Doug has proposed a change to make changes to the voting schedule for the U release. 
I will be fast tracking this change and this serves as a notification to the TC (and the community as a whole). It's a trivial change which adjusts scheduling only, nothing more. The current date has already passed, so we wouldn't have documentation that makes sense in the first place. https://review.opendev.org/#/c/674465/1 If anyone objects, feel free to push a revert to that patch. Thanks, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From laszlo.budai at gmail.com Mon Aug 5 13:45:53 2019 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Mon, 5 Aug 2019 16:45:53 +0300 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: <603c5a17-ebce-1107-3071-e07323f62f3c@gmail.com> Thank you for the info. Ours is a generic OpenStack deployment with the main storage on Ceph. We had a requirement from one tenant to provide very fast storage for a NoSQL database, so we came up with the idea of adding some NVMe storage to a few compute nodes and providing the storage from those to the specific tenant. We have investigated different options for providing this: 1. The SSD managed by nova as LVM. 2. The SSD managed by cinder, using the instance locality filter. 3. The SSD mounted on /var/lib/nova/instances, with the ephemeral disk managed by nova. Kind regards, Laszlo On 8/5/19 3:08 PM, Donny Davis wrote: > I am happy to share numbers from my iscsi setup. However these numbers probably won't mean much for your workloads. I tuned my OpenStack to perform as well as possible for a specific workload (OpenStack CI), so some of the things I have put my efforts into are for CI work and not really relevant to general purpose use. Also, your cinder performance hinges greatly on your network's capabilities. I use a dedicated NIC for iscsi traffic, and MTUs are set at 9000 for every device in the iscsi path. 
*Only* that NIC is set at MTU 9000, because if the rest of the OpenStack network is, it can create more problems than it solves. My network spine is 40G, and each compute node has 4 10G NICs. I only use one NIC for iscsi traffic. The block storage node has two 40G NICs.  > > With that said, I use the fio tool to benchmark performance on Linux systems.  > Here is the command I use to run the benchmark: > > fio --numjobs=16 --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=32 --size=10G --readwrite=randrw --rwmixread=50 > > From the block storage node locally: > > Run status group 0 (all jobs): >    READ: bw=2960MiB/s (3103MB/s), 185MiB/s-189MiB/s (194MB/s-198MB/s), io=79.9GiB (85.8GB), run=26948-27662msec >   WRITE: bw=2963MiB/s (3107MB/s), 185MiB/s-191MiB/s (194MB/s-200MB/s), io=80.1GiB (85.0GB), run=26948-27662msec > > From inside a VM: > > Run status group 0 (all jobs): >    READ: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=30.0GiB (32.2GB), run=69242-69605msec >   WRITE: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=29.0GiB (32.2GB), run=69242-69605msec > > The VM side of the test is able to push pretty close to the limits of the NIC. My cloud also currently has a full workload on it; as I have learned while building a cloud optimized for CI... it does matter whether there is a workload or not.  > > > Are you using RAID for your SSDs? If so, what type? > > Do you mind sharing what workload will go on your OpenStack deployment? > Is it DB, web, general purpose, etc.? 
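For anyone scripting comparisons like the one above, the aggregate bandwidth can be pulled out of a saved fio summary. This is a minimal sketch, assuming the fio output was saved to a file; the `fio_read_bw` helper name is my own, not from this thread, and it relies only on the "READ: bw=..." summary line format shown above:

```shell
# Sketch: extract the aggregate READ bandwidth (in MiB/s) from saved fio
# output. Assumes the "Run status" summary format quoted above, e.g.
#   READ: bw=2960MiB/s (3103MB/s), ...
fio_read_bw() {
  # $1: file containing fio output; prints the first aggregate READ figure
  grep -o 'READ: bw=[0-9]*MiB/s' "$1" | head -n1 | sed -E 's/[^0-9]*([0-9]+).*/\1/'
}
```

Used as `fio ... | tee results.txt` followed by `fio_read_bw results.txt`, it would print 2960 for the block-storage-node run quoted above.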
> > ~/Donny D > > > > > > > > > On Mon, Aug 5, 2019 at 4:54 AM Budai Laszlo > wrote: > > Hi, > > well, we used the same command to measure the different storage possibilities (sudo iozone -e -I -t 32 -s 100M -r 4k -i 0 -i 1 -i 2); we measured the disk mounted directly on the host, and we used the same command to measure the performance in the guests using different ways to attach the storage to the VM. > > For instance, on the host we were able to measure 408MB/s initial writes, 420MB/s rewrites, 397MB/s random writes, and 700MB/s random reads. On the guest we got the following, using the different technologies: > > 1. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, images_type = raw, without preallocated images) > Initial writes 60MB/s, rewrites 70MB/s, random writes 73MB/s, random reads 427MB/s. > > 2. Ephemeral served by nova (images_type = lvm, without preallocated images) > Initial writes 332MB/s, rewrites 416MB/s, random writes 417MB/s, random reads 550MB/s. > > 3. Cinder-attached LVM with instance locality > Initial writes 148MB/s, rewrites 151MB/s, random writes 149MB/s, random reads 160MB/s. > > 4. Cinder-attached LVM without instance locality > Initial writes 103MB/s, rewrites 109MB/s, random writes 103MB/s, random reads 105MB/s. > > 5. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, images_type = raw, with preallocated images) > Initial writes 348MB/s, rewrites 400MB/s, random writes 393MB/s, random reads 553MB/s. > > > So points 3 and 4 are using iSCSI. As you can see, those numbers are far below the local-volume-based or the local-file-based (with preallocated images) results. > > Could you share some numbers about the performance of your iSCSI-based setup? That would allow us to see whether we are doing something wrong related to iSCSI. Thank you. > > Kind regards, > Laszlo > > > On 8/3/19 8:41 PM, Donny Davis wrote: > > I am using the cinder-lvm backend right now and performance is quite good. 
My situation is similar without the migration parts. Prior to this arrangement I was using iscsi to mount a disk in /var/lib/nova/instances and that also worked quite well.  > > > > If you don't mind me asking, what kind of i/o performance are you looking for? > > > > On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo >> wrote: > > > >     Thank you Daniel, > > > >     My colleague found the same solution in the meantime. And that helped us as well. > > > >     Kind regards, > >     Laszlo > > > >     On 8/2/19 6:50 PM, Daniel Speichert wrote: > >     > For the case of simply using local disk mounted for /var/lib/nova and raw disk image type, you could try adding to nova.conf: > >     > > >     >     preallocate_images = space > >     > > >     > This implicitly changes the I/O method in libvirt from "threads" to "native", which in my case improved performance a lot (10 times) and generally is the best performance I could get. > >     > > >     > Best Regards > >     > Daniel > >     > > >     > On 8/2/2019 10:53, Budai Laszlo wrote: > >     >> Hello all, > >     >> > >     >> we have a problem with the performance of the disk IO in a KVM instance. > >     >> We are trying to provision VMs with high performance SSDs. we have investigated different possibilities with different results ... > >     >> > >     >> 1. configure Nova to use local LVM storage (images_types = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). > >     >> 2. use cinder with lvm backend  and instance locality, we could migrate the instances, but the performance is less than half of the previous case > >     >> 3. mount the ssd on /var/lib/nova/instances and use the images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_types = lvm performance and read performance is ~65% of the lvm case. > >     >> > >     >> do you have any idea to improve the performance for any of the cases 2 or 3 which allows migration. 
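The "~20%", "less than half", and "~65%" figures above can be checked against the iozone numbers quoted earlier in this thread. A small awk sketch; the `pct` helper name is my own, and the MB/s figures are the random-write/read results reported in the thread:

```shell
# Sketch: percentage helper for comparing the iozone results quoted in
# this thread (random writes/reads, MB/s). The pct name is made up.
pct() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%d\n", a * 100 / b }' ; }

pct 73 417    # raw file on SSD vs. images_type=lvm, random writes -> 17
pct 149 417   # cinder LVM w/ instance locality vs. images_type=lvm -> 35
pct 427 550   # raw file on SSD vs. images_type=lvm, random reads   -> 77
```

These truncated ratios (roughly 17%, 35%, and 77%) line up with the "~20%", "less than half", and "~65%" observations above.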
> >     >> > >     >> Kind regards, > >     >> Laszlo > >     >> > > > > > From doug at doughellmann.com Mon Aug 5 13:47:48 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 5 Aug 2019 09:47:48 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: > On Aug 5, 2019, at 9:36 AM, Jeremy Freudberg wrote: > > You can drop the Pinyin equivalents from the Mongolian/Russian options > when drawing up the poll. The Pinyin spelling for those options was > added to the Wiki document (by me) to match what was done for the GR > options. I found it important to note that the options which > conveniently began with U under the typical transliteration systems of > those languages only did so because those systems are, essentially, > not Chinese. But, all that said, it's pretty rare to see > Mongolian/Russian place names written in Pinyin so it would come off > as a bit of a confusing distraction to actually include the Pinyin > equivalents in the poll itself. > > Now, my two cents about the poll options: > - I'm against using GR spelling... regardless of the actual history of > the use of GR, I've already seen some Chinese contributors state that > using GR is, at best, weird. Those will be dropped. > - I'm against using Mongolian/Manchu/Russian names... I do not see how > a name from one of these languages is at all representative of > Shanghai. The geographic area under consideration was expanded to "all of China" because it was proving too difficult to come up with place names starting with U from within the narrower area surrounding Shanghai. > - I'm against using arbitrary English words... "Train" was fine > because it represented the conference itself and something about our > community. "Umpire" and "Umbrella" don't represent anything. My understanding is those items were part of the list produced by the Chinese contributor community via their discussions on wechat. 
I only removed the ones I thought might have obvious negative connotations, and I’m content to leave the rest in the list and have the voters decide if they’re suitable names. > - I'm in favor of using English words that describe something about > the host city or country... for example I think "Urban" is a great > because Shanghai is by certain metrics one of the most urban areas in > the world, China has many of the world's largest cities, etc. > > Deciding whether certain options should even appear on the poll is > outside my responsibility. Lucky you. ;-) > > > > On Sat, Aug 3, 2019 at 8:51 PM Doug Hellmann wrote: >> >> Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. 
>> >> 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen >> 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou >> 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane >> 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling >> 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai >> 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake >> 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) >> 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) >> 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) >> 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) >> 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) >> 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. 
Pinyin would be Wūzhūmùqìn) >> Ula "Miocene Baogeda Ula" (the name is in Mongolian) >> Uma http://www.fallingrain.com/world/CH/20/Uma.html >> Unicorn >> Urban >> Unique >> Umpire >> Utopia >> Umbrella >> Ultimate >> >> [1] https://governance.openstack.org/tc/reference/release-naming.html >> >> From donny at fortnebula.com Mon Aug 5 14:01:46 2019 From: donny at fortnebula.com (Donny Davis) Date: Mon, 5 Aug 2019 10:01:46 -0400 Subject: [nova] local ssd disk performance In-Reply-To: <603c5a17-ebce-1107-3071-e07323f62f3c@gmail.com> References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> <603c5a17-ebce-1107-3071-e07323f62f3c@gmail.com> Message-ID: If it was me, I would use cinder. Reasons why are as follows: DB data doesn't belong on an ephemeral disk; it probably belongs on block storage. When you remove the requirement for data to be stored on ephemeral disks, live migration is less of an issue. You can tune the c-vol node providing the block storage to meet the performance requirements of the end user's application. To start with, I would look at local performance of the disks on the block storage node. In my case, raid5/6 with NVMe disks put the CPU at 100% because mdadm isn't multithreaded. Raid 10 provides enough performance, but only because I created separate raid1 volumes, then put raid0 across them. This way I get more threads from mdadm. Once local performance meets expectations, I would move to creating a test volume and then mounting it on the hypervisor manually. Then tune network performance to meet your goals. Finally, test in-VM performance to make sure it's doing what you want. Donny Davis c: 805 814 6800 irc: donnyd On Mon, Aug 5, 2019, 9:45 AM Budai Laszlo wrote: > Thank you for the info. Our is a generic openstack having the main storage > on CEPH. We had a requirement from one tenant to provide a very fast > storage for a no-sql database. 
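The nested layout described above (separate raid1 mirrors with raid0 striped across them) would look roughly like this with mdadm. This is an illustrative sketch only: the NVMe device names are hypothetical, the commands are destructive and need root, and a plain `mdadm --level=10` with a suitable layout may serve equally well.

```shell
# Hypothetical devices /dev/nvme0n1 .. /dev/nvme3n1 -- DO NOT run as-is.
# Two RAID1 pairs, each serviced by its own mdadm write thread...
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nvme2n1 /dev/nvme3n1
# ...then RAID0 striped across the two mirrors (nested RAID 1+0).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
```

The point of the nesting, per the message above, is to get more mdadm threads involved than a single array would provide.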
So it came the idea to add some nvme storage > to a few computing nodes, and to provide the storage from those to the > specific tenant. > > We have investigated different options in providing this. > > 1. The ssd managed by nova as LVM > 2. the ssd managed by cinder and use the instance locality filter > 3. the ssd mounted on the /var/liv/instances and the ephemeral disk > managed by nova. > > Kind regards, > Laszlo > > On 8/5/19 3:08 PM, Donny Davis wrote: > > I am happy to share numbers from my iscsi setup. However these numbers > probably won't mean much for your workloads. I tuned my openstack to > perform as well as possible for a specific workload (Openstack CI), so some > of the things I have put my efforts into are for CI work and not really > relevant to general purpose. Also your cinder performance hinges greatly on > your networks capabilities. I use a dedicated nic for iscsi traffic, and > MTU's are set at 9000 for every device in the iscsi path. *Only* that nic > is set at MTU 9000, because if the rest of the openstack network is, it can > create more problems than it solves. My network spine is 40G, and each > compute node has 4 10G nics. I only use one nic for iscsi traffic. The > block storage node has two 40G nics. > > > > With that said, I use the fio tool to benchmark performance on linux > systems. 
> > Here is the command i use to run the benchmark > > > > fio --numjobs=16 --randrepeat=1 --ioengine=libaio --direct=1 > --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=32 > --size=10G --readwrite=randrw --rwmixread=50 > > > > From the block storage node locally > > > > Run status group 0 (all jobs): > > READ: bw=2960MiB/s (3103MB/s), 185MiB/s-189MiB/s (194MB/s-198MB/s), > io=79.9GiB (85.8GB), run=26948-27662msec > > WRITE: bw=2963MiB/s (3107MB/s), 185MiB/s-191MiB/s (194MB/s-200MB/s), > io=80.1GiB (85.0GB), run=26948-27662msec > > > > From inside a vm > > > > Run status group 0 (all jobs): > > READ: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), > io=30.0GiB (32.2GB), run=69242-69605msec > > WRITE: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), > io=29.0GiB (32.2GB), run=69242-69605msec > > > > The vm side of the test is able to push pretty close to the limits of > the nic. My cloud also currently has a full workload on it, as I have > learned in working to get an optimized for CI cloud... it does matter if > there is a workload or not. > > > > > > Are you using raid for your ssd's, if so what type? > > > > Do you mind sharing what workload will go on your Openstack deployment? > > Is it DB, web, general purpose, etc. > > > > ~/Donny D > > > > > > > > > > > > > > > > > > On Mon, Aug 5, 2019 at 4:54 AM Budai Laszlo > wrote: > > > > Hi, > > > > well, we used the same command to measure the different storage > possibilities (sudo iozone -e -I -t 32 -s 100M -r 4k -i 0 -i 1 -i 2), we > have measured the disk mounted directly on the host, and we have used the > same command to measure the performance in the guests using different ways > to attach the storage to the VM. > > > > for instance on the host we were able to measure 408MB/s initial > writes, 420MB/s rewrites, 397MB/s Random writes, and 700MB/s random reads, > on the guest we got the following, using the different technologies: > > > > 1. 
Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, > images type =raw, without preallocate images) > > Initial writes 60Mb/s, rewrites 70Mb/s, random writes 73MB/s, random > reads 427MB/s. > > > > 2. Ephemeral served by nova (images type = lvm, without preallocate > images) > > Initial writes 332Mb/s, rewrites 416Mb/s, random writes 417MB/s, > random reads 550MB/s. > > > > 3. Cinder attached LVM with instance locality > > Initial writes 148Mb/s, rewrites 151Mb/s, random writes 149MB/s, > random reads 160MB/s. > > > > 4. Cinder attached LVM without instance locality > > Initial writes 103Mb/s, rewrites 109Mb/s, random writes 103MB/s, > random reads 105MB/s. > > > > 5. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, > images type =raw, witht preallocate images) > > Initial writes 348Mb/s, rewrites 400Mb/s, random writes 393MB/s, > random reads 553MB/s > > > > > > So points 3,4 are using ISCSI. As you can see those numbers are far > below the local volume based or the local file based with preallocate > images. > > > > Could you share some nubers about the performance of your ISCSI > based setup? that would allow us to see whether we are doing something > wrong related to the iscsi. Thank you. > > > > Kind regards, > > Laszlo > > > > > > On 8/3/19 8:41 PM, Donny Davis wrote: > > > I am using the cinder-lvm backend right now and performance is > quite good. My situation is similar without the migration parts. Prior to > this arrangement I was using iscsi to mount a disk in > /var/lib/nova/instances and that also worked quite well. > > > > > > If you don't mind me asking, what kind of i/o performance are > you looking for? > > > > > > On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo < > laszlo.budai at gmail.com laszlo.budai at gmail.com >> wrote: > > > > > > Thank you Daniel, > > > > > > My colleague found the same solution in the meantime. And that > helped us as well. 
> > > > > > Kind regards, > > > Laszlo > > > > > > On 8/2/19 6:50 PM, Daniel Speichert wrote: > > > > For the case of simply using local disk mounted for > /var/lib/nova and raw disk image type, you could try adding to nova.conf: > > > > > > > > preallocate_images = space > > > > > > > > This implicitly changes the I/O method in libvirt from > "threads" to "native", which in my case improved performance a lot (10 > times) and generally is the best performance I could get. > > > > > > > > Best Regards > > > > Daniel > > > > > > > > On 8/2/2019 10:53, Budai Laszlo wrote: > > > >> Hello all, > > > >> > > > >> we have a problem with the performance of the disk IO in a > KVM instance. > > > >> We are trying to provision VMs with high performance SSDs. > we have investigated different possibilities with different results ... > > > >> > > > >> 1. configure Nova to use local LVM storage (images_types = > lvm) - provided the best performance, but we could not migrate our > instances (seems to be a bug). > > > >> 2. use cinder with lvm backend and instance locality, we > could migrate the instances, but the performance is less than half of the > previous case > > > >> 3. mount the ssd on /var/lib/nova/instances and use the > images_type = raw in nova. We could migrate, but the write performance > dropped to ~20% of the images_types = lvm performance and read performance > is ~65% of the lvm case. > > > >> > > > >> do you have any idea to improve the performance for any of > the cases 2 or 3 which allows migration. > > > >> > > > >> Kind regards, > > > >> Laszlo > > > >> > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dmellado at redhat.com Mon Aug 5 14:49:20 2019 From: dmellado at redhat.com (Daniel Mellado) Date: Mon, 5 Aug 2019 16:49:20 +0200 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL Message-ID: As I have taken on a new role in my company, I won't have the time to dedicate to Kuryr to continue as its PTL for the current cycle. I started working on the project more than two cycles ago and it has been a real pleasure for me. Helping a project grow from an idea and a set of diagrams to a production-grade service was an awesome experience, and I got help from my awesome team and upstream contributors! I would like to take this opportunity to thank everyone who contributed to the success of Kuryr – either by writing code, suggesting new use cases, participating in our discussions, or helping out with Infra! Michal Dulko (irc: dulek) has been kind enough to accept replacing me as the new Kuryr PTL [1]. I’m sure he'll do excellent work, as he's knowledgeable about every piece of the code and is tightly connected to the community. I will still be around to help if needed. Please join me in congratulating Michal on his new role! Best! Daniel [1] https://review.opendev.org/674624 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From tbechtold at suse.com Mon Aug 5 15:01:49 2019 From: tbechtold at suse.com (Thomas Bechtold) Date: Mon, 5 Aug 2019 17:01:49 +0200 Subject: [manila] Enabling CephFS snapshot support by default In-Reply-To: References: Message-ID: Hi Goutham, On 7/26/19 5:41 AM, Goutham Pacha Ravi wrote: > Hi Zorillas and interested parties, > > (Copying a couple of folks explicitly as they may not be part of > openstack-discuss) > > Manila's CephFS driver has a configuration option > "cephfs_enable_snapshots" that administrators can toggle in > manila.conf to allow project users to take snapshots of their shares. 
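For reference, the option under discussion is set per back end in manila.conf. A minimal sketch, assuming a CephFS-native back end; the section name is made up here, and the `share_driver` path should be checked against your manila release:

```ini
# manila.conf sketch -- the [cephfsnative1] section name is hypothetical.
# Before the change proposed in this thread, cephfs_enable_snapshots
# defaults to False.
[cephfsnative1]
share_backend_name = CEPHFSNATIVE1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_enable_snapshots = True
```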
> [1][2] This option has defaulted to False, since the time the CephFS > driver was introduced (Mitaka release of OpenStack Manila) > > IIUC, CephFS snapshots have been "stable" since the Mimic release of > CephFS, which debuted in June 2018. [3] Since then, ceph > deployers/administrators don't have to toggle anything on the backend > to enable snapshots. > > So, can we consider changing the default value of the config opt in > manila to "True" in the Train release? +1 . I would also vote for deprecating the option directly. [...] Cheers, Tom From mdemaced at redhat.com Mon Aug 5 15:18:11 2019 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Mon, 5 Aug 2019 17:18:11 +0200 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: Congratulations, Michał!! I'm sure you'll do great. Cheers, Maysa. On Mon, Aug 5, 2019 at 4:49 PM Daniel Mellado wrote: > As I have taken on a new role in my company I won't be having the time > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > I started working on the project more than two cycles ago and it has > been a real pleasure for me. > > Helping a project grow from an idea and a set of diagrams to a > production-grade service was an awesome experience and I got help from > my awesome team and upstream contributors! > > I would like to take this opportunity to thank everyone who contributed > to the success of Kuryr – either by writing code, suggesting new use > cases, participating in our discussions, or helping out with Infra! > > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > knowledgeable about every piece of the code and is tightly connected to > the community. I will still be around to help if needed. > > Please join me congratulating Michal on his new role! > > Best! 
> > Daniel > > > [1] https://review.opendev.org/674624 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Mon Aug 5 15:31:13 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 5 Aug 2019 11:31:13 -0400 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: <582fd354-3871-6cc8-4949-cd306d66c23a@gmail.com> Big +1 from me as well, keep up the great work Rodolfo! On 8/4/19 2:52 PM, Miguel Lavalle wrote: > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the > Neutron core team. Rodolfo has been an active contributor to Neutron > since the Mitaka cycle. He has been a driving force over these years in > the implementation and evolution of Neutron's QoS feature, currently > leading the sub-team dedicated to it. Recently he has been working on > improving the interaction with Nova during the port binding process, > has driven the adoption of Pyroute2, and has become very active in fixing all > kinds of bugs. The quality and number of his code reviews during the > Train cycle are comparable with the leading members of the core team: > https://www.stackalytics.com/?release=train&module=neutron-group. In my > opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel From hongbin.lu at huawei.com Mon Aug 5 15:36:33 2019 From: hongbin.lu at huawei.com (Hongbin Lu) Date: Mon, 5 Aug 2019 15:36:33 +0000 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: 00370D08-B149-4873-BCCC-9B4877FBA7FA Big +1 from me as well. 
-------------------------------------------------- Hongbin Lu Hongbin Lu Mobile: Email: hongbin.lu at huawei.com From:Miguel Lavalle To:openstack-discuss Date:2019-08-04 14:53:45 Subject:[openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core Dear Neutrinos, I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation and evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, has driven the adoption of Pyroute2, and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. I will keep this nomination open for a week as customary. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Mon Aug 5 16:22:43 2019 From: mdulko at redhat.com (Michał Dulko) Date: Mon, 05 Aug 2019 18:22:43 +0200 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: On Mon, 2019-08-05 at 16:49 +0200, Daniel Mellado wrote: > As I have taken on a new role in my company I won't be having the time > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > I started working on the project more than two cycles ago and it has > been a real pleasure for me. > > Helping a project grow from an idea and a set of diagrams to a > production-grade service was an awesome experience and I got help from > my awesome team and upstream contributors! 
> > I would like to take this opportunity to thank everyone who contributed > to the success of Kuryr – either by writing code, suggesting new use > cases, participating in our discussions, or helping out with Infra! Thanks for leading the project for 2.5 cycles, you did a great job! > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > knowledgeable about every piece of the code and is tightly connected to > the community. I will still be around to help if needed. > > Please join me congratulating Michal on his new role! I actually thought a formal election process was required here, but I can confirm that we agreed that I should run in those elections. If using a simpler path is possible here, I'm totally fine with it. > Best! > > Daniel > > > [1] https://review.opendev.org/674624 > From i at liuyulong.me Mon Aug 5 16:29:08 2019 From: i at liuyulong.me (LIU Yulong) Date: Tue, 6 Aug 2019 00:29:08 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: Hi, Please allow me to re-post my earlier proposal here again [1]. Let me quote some of it: There is no standard Chinese Pinyin syllable that starts with 'U'. So I have a suggestion: since we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on, how about we form the OpenStack release name by rotating the letter order? The first syllable of the Pinyin name would have its letters switched. For instance, we can use 'Uw' and 'Uy' to represent the standard Pinyin 'Wu' and 'Yu'. Then we will have a lot of choices. 
Here is my list: 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 浦东区: Updong,Pudong District, Shanghai 徐汇区: Uxhui,Xuhui District, Shanghai 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic Belt of China, Shanghai 武功: Uwgong, town of Shaanxi Province, the birthplace of farming civilization of the China; pun for Kongfu 乌镇: Uwzhen, yes, again 榆林: Uylin, City of China, Shaanxi province 无锡: Uwxi, City of China, Jiangsu province 玉溪: Uyxi, City of China, Yunnan province. 湖南:Uhnan, Hunan Province 鲁:Ul, the abbreviation of Shandong Province Thank you [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html LIU Yulong ------------------ Original ------------------ From: "Doug Hellmann"; Date: Sun, Aug 4, 2019 08:48 AM To: "openstack-discuss"; Subject: [all][tc] U Cycle Naming Poll Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. 
乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen
温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou
乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane
温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling
威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai
微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake
乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ)
乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù)
乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè)
乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian)
乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè)
东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn)
Ula "Miocene Baogeda Ula" (the name is in Mongolian)
Uma http://www.fallingrain.com/world/CH/20/Uma.html
Unicorn
Urban
Unique
Umpire
Utopia
Umbrella
Ultimate

[1] https://governance.openstack.org/tc/reference/release-naming.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jeremyfreudberg at gmail.com Mon Aug 5 17:30:26 2019
From: jeremyfreudberg at gmail.com (Jeremy Freudberg)
Date: Mon, 5 Aug 2019 13:30:26 -0400
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To: 
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>
Message-ID: 

Hi -- That's an interesting proposal.
Do you believe that your suggestion (non-standard modification of Pinyin) is more appropriate than the alternatives (using Mongolian/Russian place names, or using English words)? Would Chinese contributors understand these names with switched letters? On Mon, Aug 5, 2019 at 12:31 PM LIU Yulong wrote: > > Hi, > > Please allow me to re-post my former proposal here again [1]. Let me quote some contents: > There is no Standard Chinese Pinyin starts with 'U'. So I have a suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. How about we give the OpenStack version name with letters order of rotation? The first word of Pinyin OpenStack version Name will have a sequence switching. > For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin 'Wu' and 'Yu'. Then we will have a lot of choices. > Here is my list: > 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 > 浦东区: Updong,Pudong District, Shanghai > 徐汇区: Uxhui,Xuhui District, Shanghai > 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic Belt of China, Shanghai > 武功: Uwgong, town of Shaanxi Province, the birthplace of farming civilization of the China; pun for Kongfu > 乌镇: Uwzhen, yes, again > 榆林: Uylin, City of China, Shaanxi province > 无锡: Uwxi, City of China, Jiangsu province 玉溪: Uyxi, City of China, Yunnan province. > 湖南:Uhnan, Hunan Province > 鲁:Ul, the abbreviation of Shandong Province > > Thank you > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html > > > LIU Yulong > > > ------------------ Original ------------------ > From: "Doug Hellmann"; > Date: Sun, Aug 4, 2019 08:48 AM > To: "openstack-discuss"; > Subject: [all][tc] U Cycle Naming Poll > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. 
The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. 
Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > From tpb at dyncloud.net Mon Aug 5 17:32:31 2019 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 5 Aug 2019 13:32:31 -0400 Subject: [Manila] Python3 support in Manila 3rd Party CI Message-ID: <20190805173231.xkqrjbwl4wjwvd2k@barron.net> We worked in the Pike release to get Manila unit tests running under Python3. In Train we have completed the work begun in Stein to get all first party functional test jobs running under Python 3. Now we need to push to get third party jobs converted to Python 3 since Train will be the last OpenStack release to keep support for Python2 and it will itself support Python3 first [1]. None of this will be news to those who attend the weekly Manila team meeting at 1500 UTC on Thursdays on #openstack-meetings-alt on freenode [2], but not every back end driver has regular representation at those meetings so we are communicating the need for third party CI job Python 3 support here as well. We are tracking this work on the Manila wiki [3], where you may also find or add tips to help other CI maintainers. Also, please feel free to follow up with me via email or on freenode #openstack-manila. 
-- Tom Barron email: tpb at dyncloudnet irc: tbarron [1] https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html [2] https://wiki.openstack.org/wiki/Manila/Meetings [3] https://wiki.openstack.org/wiki/Manila/TrainCycle#Python3_Testing From jim at jimrollenhagen.com Mon Aug 5 17:40:42 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 5 Aug 2019 13:40:42 -0400 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: On Mon, Aug 5, 2019 at 12:24 PM Michał Dulko wrote: > On Mon, 2019-08-05 at 16:49 +0200, Daniel Mellado wrote: > > As I have taken on a new role in my company I won't be having the time > > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > > > I started working on the project more than two cycles ago and it has > > been a real pleasure for me. > > > > Helping a project grow from an idea and a set of diagrams to a > > production-grade service was an awesome experience and I got help from > > my awesome team and upstream contributors! > > > > I would like to take this opportunity to thank everyone who contributed > > to the success of Kuryr – either by writing code, suggesting new use > > cases, participating in our discussions, or helping out with Infra! > > Thanks for leading the project for 2.5 cycles, you did a great job! > > > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > > knowledgeable about every piece of the code and is tightly connected to > > the community. I will still be around to help if needed. > > > > Please join me congratulating Michal on his new role! > > I actually thought a formal election process is required here, but I > can confirm that we agreed that I should run in those elections. If > using a simpler path is possible here I'm totally fine with it. > Nope! 
The TC appoints a replacement[0], but if the outgoing PTL appoints a replacement, that makes it easier for us, so we generally accept that. :) [0] https://governance.openstack.org/tc/reference/charter.html#election-for-ptl-seats // jim > > > Best! > > > > Daniel > > > > > > [1] https://review.opendev.org/674624 > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Aug 5 17:48:36 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 5 Aug 2019 13:48:36 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <8a579f53-6a8d-53a9-0c7c-a3eecb791443@redhat.com> On 5/08/19 3:40 AM, Rico Lin wrote: > it's *Ussuri*! so let's s/Ussri/Ussuir/ (bad Rico! bad!) and after that we can s/Ussuir/Ussuri/ ;) From haleyb.dev at gmail.com Mon Aug 5 18:05:22 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 5 Aug 2019 14:05:22 -0400 Subject: [neutron] Bug deputy report for week of July 29th Message-ID: <658331ab-ebb4-e944-a8d2-33683e0b4db9@gmail.com> Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. 
-Brian

Critical bugs
-------------
None

High bugs
---------
* https://bugs.launchpad.net/neutron/+bug/1838699 - Removing a subnet from DVR router also removes DVR MAC flows for other router on network
  - Confirmed, needs owner
* https://bugs.launchpad.net/neutron/+bug/1838760 - Security groups don't work for trunk ports with iptables_hybrid fw driver
  - Confirmed, needs owner
* https://bugs.launchpad.net/neutron/+bug/1838793 - "KeepalivedManagerTestCase" tests failing during namespace deletion
  - Rodolfo took ownership
* https://bugs.launchpad.net/devstack/+bug/1838811 - /opt/stack/devstack/tools/outfilter.py failing in neutron functional jobs since 8/2
  - https://review.opendev.org/#/c/674426/ merged

Medium bugs
-----------
* https://bugs.launchpad.net/neutron/+bug/1838396 - update port receive 500
  - https://review.opendev.org/#/c/673486/
* https://bugs.launchpad.net/neutron/+bug/1838403 - Asymmetric floating IP notifications
  - Seems events for floating IPs/routers are not sent in some cases
  - Additional information was supplied in bug, need to reproduce
  - Need neutron version information
  - Needs owner
* https://bugs.launchpad.net/neutron/+bug/1838449 - Router migrations failing in the gate
  - Miguel assigned to himself, needs further investigation
* https://bugs.launchpad.net/neutron/+bug/1838431 - [scale issue] ovs-agent port processing time increases linearly and eventually timeouts
  - Confirmed, needs owner
* https://bugs.launchpad.net/neutron/+bug/1838563 - Timeout in executing ovs command crash ovs agent
  - https://review.opendev.org/#/c/674085/
* https://bugs.launchpad.net/neutron/+bug/1838587 - request neutron with Incorrect body key return 500
  - https://review.opendev.org/#/c/674153/
* https://bugs.launchpad.net/neutron/+bug/1838689 - rpc_workers default value ignores setting of api_workers
  - https://review.opendev.org/674125 merged, backport in progress

Low bugs
--------

Wishlist bugs
-------------
* https://bugs.launchpad.net/neutron/+bug/1838621 - [RFE] Configure extra dhcp options via API and per network
  - Discussed at neutron drivers meeting 8/2
  - Slawek will investigate questions that came up at meeting

Invalid bugs
------------

Further triage required
-----------------------
* https://bugs.launchpad.net/neutron/+bug/1838617 - ssh connection getting dropped frequently
  - Pike, but not much info given, asked for logs and/or a reproducer on more recent code.
* https://bugs.launchpad.net/neutron/+bug/1838697 - DVR Mac conversion rules are only added for the first router a network is attached to
  - Asked for additional information
* https://bugs.launchpad.net/neutron/+bug/1839004 - Rocky DVR-SNAT seems missing entries for conntrack marking
  - Looks like a possible mis-configuration
  - Asked for additional information

From mtreinish at kortar.org Mon Aug 5 18:21:44 2019
From: mtreinish at kortar.org (Matthew Treinish)
Date: Mon, 5 Aug 2019 14:21:44 -0400
Subject: stestr Python 2 Support
In-Reply-To: <20190803001018.GA2352@thor.bakeyournoodle.com>
References: <20190724131136.GA11582@sinanju.localdomain> <20190803001018.GA2352@thor.bakeyournoodle.com>
Message-ID: <20190805182144.GA10102@zeong>

On Sat, Aug 03, 2019 at 10:10:18AM +1000, Tony Breeds wrote:
> On Wed, Jul 24, 2019 at 09:11:36AM -0400, Matthew Treinish wrote:
> > Hi Everyone,
> >
> > I just wanted to send a quick update about the state of python 2 support in
> > stestr, since OpenStack is the largest user of the project. With the recent
> > release of stestr 2.4.0 we've officially deprecated the python 2.7 support
> > in stestr. It will emit a DeprecationWarning whenever it's run from the CLI
> > with python 2.7 now. The plan (which is not set in stone) is that we will be
> > pushing a 3.0.0 release that removes the python 2 support and compat code from
> > stestr sometime in early 2020 (definitely after the Python 2.7 EoL on Jan.
I don't believe this conflicts with the current Python version support > > plans in OpenStack [1] but just wanted to make sure people were aware so that > > there are no surprises when stestr stops working with Python 2 in 3.0.0. > > Thanks Matt. I know it's a little meta but if something really strange > were to happen would you be open to doing 2.X.Y releases while we still > have maintained branches that use it? Sure, I'm open to doing that if the need arises. The normal release process for stestr doesn't involve branching. But, if there is a critical issue in 2.x.y that requires a fix we can easily create a branch and push a bugfix release when/if that happens. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From thuanlk at viettel.com.vn Mon Aug 5 10:01:36 2019 From: thuanlk at viettel.com.vn (thuanlk at viettel.com.vn) Date: Mon, 5 Aug 2019 17:01:36 +0700 (ICT) Subject: [neutron] OpenvSwitch firewall sctp getting dropped References: <000001d54623$a91ae750$fb50b5f0$@viettel.com.vn> Message-ID: <030101d54b74$c4be6800$4e3b3800$@viettel.com.vn> I have tried any version of OpenvSwitch but problem continue happened. Is Openvswitch firewall support sctp? Thanks and best regards ! --------------------------------------- Lăng Khắc Thuận OCS Cloud | OCS (VTTEK) +(84)- 966463589 -----Original Message----- From: Lang Khac Thuan [mailto:thuanlk at viettel.com.vn] Sent: Tuesday, July 30, 2019 11:22 AM To: 'smooney at redhat.com' ; 'openstack-discuss at lists.openstack.org' Subject: RE: [neutron] OpenvSwitch firewall sctp getting dropped I have tried config SCTP but nothing change! 
openstack security group rule create --ingress --remote-ip 0.0.0.0/0 --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp
openstack security group rule create --egress --remote-ip 0.0.0.0/0 --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp

Displaying 2 items
Direction | Ether Type | IP Protocol | Port Range   | Remote IP Prefix | Remote Security Group | Actions
----------|------------|-------------|--------------|------------------|-----------------------|--------
Egress    | IPv4       | 132         | 2000 - 10000 | 0.0.0.0/0        | -                     |
Ingress   | IPv4       | 132         | 2000 - 10000 | 0.0.0.0/0        | -                     |

Thanks and best regards !
---------------------------------------
Lăng Khắc Thuận
OCS Cloud | OCS (VTTEK)
+(84)- 966463589

-----Original Message-----
From: smooney at redhat.com [mailto:smooney at redhat.com]
Sent: Tuesday, July 30, 2019 1:27 AM
To: thuanlk at viettel.com.vn; openstack-discuss at lists.openstack.org
Subject: Re: [neutron] OpenvSwitch firewall sctp getting dropped

On Mon, 2019-07-29 at 22:38 +0700, thuanlk at viettel.com.vn wrote:
> I have installed Openstack Queens on CentOs 7 with OvS and I recently
> used the native openvswitch firewall to implement SecurityGroup. The
> native OvS firewall seems to work just fine with TCP/UDP traffic but
> it does not forward any SCTP traffic going to the VMs no matter how I
> change the security groups, but it runs if I disable port security
> completely or use the iptables_hybrid firewall driver. What do I have to
> do to allow SCTP packets to reach the VMs?

the security groups api is a whitelist model so all traffic is dropped by default. if you want to allow sctp you would have to create a new security group rule with ip_protocol set to the protocol number for sctp.

e.g. openstack security group rule create --protocol sctp ...

I'm not sure if neutron supports --dst-port for sctp but you can still filter on --remote-ip or --remote-group and can specify the rule as an --ingress or --egress rule as normal.
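A minimal sketch of the rule described above, assuming a security group named "sctp" as in the report earlier in the thread, and assuming the installed python-openstackclient accepts the protocol name "sctp" (older releases may need the numeric form --protocol 132):

```shell
# Whitelist SCTP (IP protocol 132) on ports 2000-10000 from any source.
# The security group name "sctp" is taken from the original report.
openstack security group rule create --ingress \
    --protocol sctp --dst-port 2000:10000 \
    --remote-ip 0.0.0.0/0 sctp
openstack security group rule create --egress \
    --protocol sctp --dst-port 2000:10000 \
    --remote-ip 0.0.0.0/0 sctp
```

Either spelling of the protocol should produce the same rule once the deployed neutron accepts port ranges for it.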
https://docs.openstack.org/python-openstackclient/stein/cli/command-objects/security-group-rule.html

based on this commit
https://github.com/openstack/neutron/commit/f711ad78c5c0af44318c6234957590c91592b984
it looks like neutron now validates the port ranges for sctp, implying it supports setting them, so I guess it's just a gap in the documentation.

>

From ks3019 at att.com Mon Aug 5 15:56:00 2019
From: ks3019 at att.com (SKELS, KASPARS)
Date: Mon, 5 Aug 2019 15:56:00 +0000
Subject: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment
In-Reply-To: 
References: 
Message-ID: <2ADBF0C373B7E84E944B1E06D3CDDFC91E808AE8@MOKSCY3MSGUSRGI.ITServices.sbc.com>

Hi Anirudh,

Airship Seaworthy is a bare-metal production reference implementation of an Airship deployment, i.e. a deployment that has 3 control servers (to carry HA and Ceph data replication), as well as Ceph setup/replication for tenant data/VMs and redundant/bonded networks; there are also things such as DNS/TLS requirements to get it up and running.

We also have Airsloop, which is meant for 2 bare-metal servers (1 control node, and 1 compute). From your description this might fit better with your HW – and it is also much simpler to install; here are some simplifications for it compared to the full setup:
https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html

I would definitely recommend to first get familiar with Airsloop and get it up and running. The software/components are all the same but configured in a non-redundant way.

For virtual setups we have 2 options available right now:
* You can very simply get AIAB running – it’s a 1 VM setup and will give you a feel for what Airship is https://github.com/airshipit/treasuremap/tree/master/tools/deployment/aiab
* There is also a virtual multi-node environment that was available in the airship-in-a-bottle repo (https://github.com/airshipit/airship-in-a-bottle/tree/master/tools/multi_nodes_gate).
This is now being moved to treasuremap and I would wait a bit since it’s slightly outdated on the old airship-in-a-bottle repo. Kind regards, Kaspars From: Anirudh Gupta Sent: Wednesday, May 29, 2019 2:31 AM To: airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: [Airship-discuss] [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment Hi Team, We want to test Production Ready Airship-Seaworthy in our virtual environment The link followed is https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html As per the document we need 6 DELL R720xd bare-metal servers: 3 control, and 3 compute nodes. But we need to deploy our setup on Virtual Environment. Does Airship-Seaworthy support Installation on Virtual Environment? We have 2 Rack Servers with Dual-CPU Intel® Xeon® E5 26xx with 16 cores each and 128 GB RAM. Is it possible that we can create Virtual Machines on them and set up the complete environment. In that case, what possible infrastructure do we require for setting up the complete setup. Looking forward for your response. Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. 
Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ks3019 at att.com Mon Aug 5 16:09:50 2019
From: ks3019 at att.com (SKELS, KASPARS)
Date: Mon, 5 Aug 2019 16:09:50 +0000
Subject: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment
In-Reply-To: 
References: 
Message-ID: <2ADBF0C373B7E84E944B1E06D3CDDFC91E808B71@MOKSCY3MSGUSRGI.ITServices.sbc.com>

Hi, this is great to hear!! Here are a few additional tips on the direction you are going:

* It is actually possible to configure Drydock/MAAS to talk to libvirt; it’s a bit tricky to set up all the SSH keys – but it’s possible https://github.com/airshipit/treasuremap/blob/master/site/seaworthy-virt/profiles/host/gate-vm-cp.yaml#L27
* There is a framework called multinode-gate or seaworthy-virt that aims to create more of a testing/gating environment using KVM and a single host (launching at this moment 4 VMs). Most of that code was in the older airship-in-a-bottle repo but is now being moved to the treasuremap (site/seaworthy-virt) site, and docs/scripts will come later https://review.opendev.org/#/c/655517/. This uses a very similar idea of automating the launch of KVM VMs and then fully automated deployment of Airship on top. Have a look!
* For the disk name – ‘bootdisk’ is an alias based on a SCSI ID; you can look it up with ‘sudo lshw -c disk’ and then update the HW profile. Using the direct name sda is also OK, as you found https://github.com/airshipit/treasuremap/blob/master/site/airsloop/profiles/hardware/dell_r720xd.yaml#L45
* Yes, LMA (logging, monitoring, alerting) is on the heavier side, and in fact I believe in our current virtual setups (AIAB and virtual seaworthy) we disable deployment of them as well. It’s needed in production, though.
The Airsloop itself was more meant as bare-metal; you may use it to run proper VMs to test virtual workloads/VNFs in a more realistic setting since the compute is bare-metal, but it’s great you got it running!!

Cheers, Kaspars

From: Li, Cheng1
Sent: Sunday, June 9, 2019 12:59 AM
To: Anirudh Gupta ; airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org
Subject: Re: [Airship-discuss] [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment

Finally, I have been able to deploy airsloop on a virtual env. I created two VMs (libvirt/kvm driven), one for genesis and the other for the compute node. These two VMs were on the same host. As the compute node VM is supposed to be provisioned by maas via ipmi/pxe, I used virtualbmc to simulate the ipmi. I authored the site by following these two guides[1][2]. It’s a mix of guide[1] and guide[2]. The commands I used are all these ones[3]. After fixing several issues, I have deployed the virtual airsloop env. I list here some issues I met:

1. Node identification failed. At the beginning of the ‘prepare_and_deploy_nodes’ step, drydock powers on the compute node VM via ipmi. Once the compute VM starts up via pxe boot, it runs a script to detect local network interfaces and sends the info back to drydock, so drydock can identify the node based on the received info. But the compute VM doesn’t have a real ILO interface, so drydock can’t identify it. What I did to work around this was to manually fill in the ipmi info on the maas web page.
2. My host doesn’t have enough CPU cores, and neither do the VMs. So I had to increase --pods-per-core in kubelet.yaml.
3. The disk name in the compute VM is vda, instead of sda. Drydock can’t map the alias device name to vda, so I had to use the fixed alias name ‘vda’ which is the same as its real device name (it was ‘bootdisk’).
4.
My host doesn’t have enough resource(CPU, memory), so I removed some resource consuming components(logging, monitoring). Besides, I disabled the neutron rally test. As it failed with timeout error because of the resource limits. I also paste my site changes[4] for reference. [1] https://airship-treasuremap.readthedocs.io/en/latest/authoring_and_deployment.html [2] https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html [3] https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html#getting-started [4] https://github.com/cheng1li/treasuremap/commit/7a8287720dacc6dc1921948aaddec96b8cf2645e Thanks, Cheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Thursday, May 30, 2019 7:29 PM To: Li, Cheng1 >; airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: RE: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment Hi Team, I am trying to create Airship-Seaworthy from the link https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html It requires 6 DELL R720xd bare-metal servers: 3 control, and 3 compute nodes to be configured, but there is no documentation of how to install and getting started with Airship-Seaworthy. Do we need to follow the “Getting Started” section mentioned in Airsloop or will there be any difference in case of Seaworthy. https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html#getting-started Also what all configurations need to be run from the 3 controller nodes and what needs to be run from 3 computes? Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) From: Li, Cheng1 > Sent: 30 May 2019 08:29 To: Anirudh Gupta >; airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: RE: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment I have the same question. 
I haven’t seen any docs which guides how to deploy airsloop/air-seaworthy in virtual env. I am trying to deploy airsloop on libvirt/kvm driven virtual env. Two VMs, one for genesis, the other for compute. Virtualbmc for ipmi simulation. The genesis.sh scripts has been run on genesis node without error. But deploy_site fails at prepare_and_deploy_nodes task(action ‘set_node_boot’ timeout). I am still investigating this issue. It will be great if we have official document for this scenario. Thanks, Cheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Wednesday, May 29, 2019 3:31 PM To: airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment Hi Team, We want to test Production Ready Airship-Seaworthy in our virtual environment The link followed is https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html As per the document we need 6 DELL R720xd bare-metal servers: 3 control, and 3 compute nodes. But we need to deploy our setup on Virtual Environment. Does Airship-Seaworthy support Installation on Virtual Environment? We have 2 Rack Servers with Dual-CPU Intel® Xeon® E5 26xx with 16 cores each and 128 GB RAM. Is it possible that we can create Virtual Machines on them and set up the complete environment. In that case, what possible infrastructure do we require for setting up the complete setup. Looking forward for your response. Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hongbin034 at gmail.com Mon Aug 5 19:19:46 2019
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Mon, 5 Aug 2019 15:19:46 -0400
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To: 
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>
Message-ID: 

Interesting idea. That makes me wonder if we have to pick a name starting with 'U'. If the rule can be relaxed to allow a name with 'U' as the second letter, it would be much easier.

Best regards,
Hongbin

On Mon, Aug 5, 2019 at 12:34 PM LIU Yulong wrote:
>
> Hi,
>
> Please allow me to re-post my former proposal here again [1]. Let me quote
> some contents:
> There is no Standard Chinese Pinyin starts with 'U'.
So I have a > suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. How > about we give the OpenStack version name with letters order of rotation? The > first word of Pinyin OpenStack version Name will have a sequence > switching. > For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin 'Wu' > and 'Yu'. Then we will have a lot of choices. > Here is my list: > 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 > 浦东区: Updong,Pudong District, Shanghai > 徐汇区: Uxhui,Xuhui District, Shanghai > 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic > Belt of China, Shanghai > 武功: Uwgong, town of Shaanxi Province, the birthplace of farming > civilization of the China; pun for Kongfu > 乌镇: Uwzhen, yes, again > 榆林: Uylin, City of China, Shaanxi province > 无锡: Uwxi, City of China, Jiangsu province 玉溪: Uyxi, City of China, Yunnan > province. > 湖南:Uhnan, Hunan Province > 鲁:Ul, the abbreviation of Shandong Province > > Thank you > > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html > > > LIU Yulong > > > ------------------ Original ------------------ > *From: * "Doug Hellmann"; > *Date: * Sun, Aug 4, 2019 08:48 AM > *To: * "openstack-discuss"; > *Subject: * [all][tc] U Cycle Naming Poll > > Every OpenStack development cycle and release has a code-name. As with > everything we do, the process of choosing the name is open and based on > input from communty members. The name critera are described in [1], and > this time around we were looking for names starting with U associated with > China. With some extra assistance from local community members (thank you > to everyone who helped!), we have a list of candidate names that will go > into the poll. Below is a subset of the names propsed, including those that > meet the standard criteria and some of the suggestions that do not. 
Before > we start the poll, the process calls for us to provide a period of 1 week > so that any names removed from the proposals can be discussed and any > last-minute objections can be raised. We will start the poll next week > using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared > among Mongolian/Manchu/Russian; this is a common Latin-alphabet > transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is > in Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fungi at yuggoth.org Mon Aug 5 19:28:42 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Aug 2019 19:28:42 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <20190805192842.qgzj377epvubrvyv@yuggoth.org> On 2019-08-05 15:19:46 -0400 (-0400), Hongbin Lu wrote: > Interesting idea. That makes me wonder if we have to pick a name > starting with 'U'. If the rule can be relaxed to allow a name with > 'U' as the second letter, it would be much easier. [...] Only if we also actually type the name starting with a "u" in the places we use it, so that it will ASCII sort between "train" and whatever name we come up with for the "v" cycle. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Mon Aug 5 19:32:45 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 5 Aug 2019 15:32:45 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> I don’t think we want to start changing the rules at this point. For one thing, we have a lot of automation built up around the idea that our release names come alphabetically, so we really do need to choose something that starts with U this time around. I would rather we choose something that naturally starts with U, rather than flipping the first two letters of some other words that have U in them. 
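The sorting constraint Jeremy describes above can be sketched in a few lines of Python. This is an illustrative toy, not actual OpenStack release tooling, and "victoria" stands in for a hypothetical V-cycle name:

```python
# Release tooling assumes that code names compared as plain ASCII strings
# sort in chronological release order (a sketch of the constraint only).
names = ["stein", "train", "ussuri", "victoria"]
assert sorted(names) == names  # alphabetical order matches release order

# A Pinyin name such as "wuzhen" starts with 'w', so it would sort *after*
# the eventual V-cycle name and break the ordering the tooling relies on:
broken = ["stein", "train", "wuzhen", "victoria"]
assert sorted(broken) != broken
```

This is why the thread insists the name must actually be typed with a leading "u", not merely contain one.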
Based on other feedback in this thread, we have dropped some of the proposed names so the list of candidates is now: 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name) 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) Ula "Miocene Baogeda Ula" (the name is in Mongolian) Uma http://www.fallingrain.com/world/CH/20/Uma.html Unicorn Urban Unique Umpire Utopia umbrella ultimate Unless we hear more feedback that those are invalid or inadequate by the end of the week, we will make up the poll using those names. Thanks, Doug > On Aug 5, 2019, at 3:19 PM, Hongbin Lu wrote: > > Interesting idea. That makes me wonder if we have to pick a name starting with 'U'. If the rule can be relaxed to allow a name with 'U' as the second letter, it would be much easier. > > Best regards, > Hongbin > > On Mon, Aug 5, 2019 at 12:34 PM LIU Yulong wrote: > Hi, > > Please allow me to re-post my former proposal here again [1]. Let me quote some contents: > There is no Standard Chinese Pinyin starts with 'U'. So I have a suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. > How about we give the OpenStack version name with letters order of rotation? > The first word of Pinyin OpenStack version Name will have a sequence switching. 
> For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin 'Wu' and 'Yu'. Then we will have a lot of choices. > Here is my list: > 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 > 浦东区: Updong,Pudong District, Shanghai > 徐汇区: Uxhui,Xuhui District, Shanghai > 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic Belt of China, Shanghai > 武功: Uwgong, town of Shaanxi Province, the birthplace of farming civilization of the China; pun for Kongfu > 乌镇: Uwzhen, yes, again > 榆林: Uylin, City of China, Shaanxi province > 无锡: > Uwxi, City of China, Jiangsu province > > 玉溪: Uyxi, City of China, Yunnan province. > 湖南:Uhnan, Hunan Province > 鲁:Ul, the abbreviation of Shandong Province > > Thank you > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html > > > LIU Yulong > > > ------------------ Original ------------------ > From: "Doug Hellmann"; > Date: Sun, Aug 4, 2019 08:48 AM > To: "openstack-discuss"; > Subject: [all][tc] U Cycle Naming Poll > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from community members. The name criteria are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names proposed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion.
> > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > From jeremyfreudberg at gmail.com Mon Aug 5 19:41:30 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 5 Aug 2019 15:41:30 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: Responding to myself with a different approach, as seen elsewhere in the thread.
I think switching the position of letters is not ideal (e.g. "Uwzhen" instead of "Wuzhen"). I think emphasizing the second letter is okay (e.g. "wUzhen" instead of "Wuzhen", with the branch name called "stable/u-wuzhen" to satisfy tooling). On Mon, Aug 5, 2019 at 1:30 PM Jeremy Freudberg wrote: > > Hi -- That's an interesting proposal. Do you believe that your > suggestion (non-standard modification of Pinyin) is more appropriate > than the alternatives (using Mongolian/Russian place names, or using > English words)? Would Chinese contributors understand these names with > switched letters? > > On Mon, Aug 5, 2019 at 12:31 PM LIU Yulong wrote: > > > > Hi, > > > > Please allow me to re-post my former proposal here again [1]. Let me quote some contents: > > There is no Standard Chinese Pinyin starts with 'U'. So I have a suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. How about we give the OpenStack version name with letters order of rotation? The first word of Pinyin OpenStack version Name will have a sequence switching. > > For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin 'Wu' and 'Yu'. Then we will have a lot of choices. > > Here is my list: > > 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 > > 浦东区: Updong,Pudong District, Shanghai > > 徐汇区: Uxhui,Xuhui District, Shanghai > > 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic Belt of China, Shanghai > > 武功: Uwgong, town of Shaanxi Province, the birthplace of farming civilization of the China; pun for Kongfu > > 乌镇: Uwzhen, yes, again > > 榆林: Uylin, City of China, Shaanxi province > > 无锡: Uwxi, City of China, Jiangsu province 玉溪: Uyxi, City of China, Yunnan province. 
> > 湖南:Uhnan, Hunan Province > > 鲁:Ul, the abbreviation of Shandong Province > > > > Thank you > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html > > > > > > LIU Yulong > > > > > > ------------------ Original ------------------ > > From: "Doug Hellmann"; > > Date: Sun, Aug 4, 2019 08:48 AM > > To: "openstack-discuss"; > > Subject: [all][tc] U Cycle Naming Poll > > > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from community members. The name criteria are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names proposed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > > 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name.
Pinyin would be Wūsūlǐ) > > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > > Uma http://www.fallingrain.com/world/CH/20/Uma.html > > Unicorn > > Urban > > Unique > > Umpire > > Utopia > > Umbrella > > Ultimate > > > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > From irenab.dev at gmail.com Mon Aug 5 20:38:15 2019 From: irenab.dev at gmail.com (Irena Berezovsky) Date: Mon, 5 Aug 2019 23:38:15 +0300 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: Good luck Michal! Thank you Daniel for leading the project, it was a pleasure to work with you. On Monday, August 5, 2019, Daniel Mellado wrote: > As I have taken on a new role in my company I won't be having the time > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > I started working on the project more than two cycles ago and it has > been a real pleasure for me. > > Helping a project grow from an idea and a set of diagrams to a > production-grade service was an awesome experience and I got help from > my awesome team and upstream contributors! 
> > I would like to take this opportunity to thank everyone who contributed > to the success of Kuryr – either by writing code, suggesting new use > cases, participating in our discussions, or helping out with Infra! > > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > knowledgeable about every piece of the code and is tightly connected to > the community. I will still be around to help if needed. > > Please join me congratulating Michal on his new role! > > Best! > > Daniel > > > [1] https://review.opendev.org/674624 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Aug 5 20:49:23 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Aug 2019 20:49:23 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <20190805204923.z7h4ei6w7n7v7azj@yuggoth.org> On 2019-08-05 15:41:30 -0400 (-0400), Jeremy Freudberg wrote: > Responding to myself with a different approach, as seen elsewhere > in the thread. [...] While I appreciate everyone's innovative ideas for coming up with additional names, the time for adding entries to the list has passed. We need a name for the U cycle, like, yesterday. Not having one is holding up a variety of governance and event planning tasks and generally making things more complicated for everyone involved. There were threads on this mailing list in February and April asking for help putting together a solution. People pitched in and the list Doug sent is what they came up with (minus some which we removed in advance for a variety of reasons). The remaining list still seems to have a number of viable options, and this is the phase where we're asking if anyone objects to any of what's there before we put together a community poll to rank them a few days from now. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From michael at the-davies.net Mon Aug 5 21:10:47 2019 From: michael at the-davies.net (Michael Davies) Date: Tue, 6 Aug 2019 06:40:47 +0930 Subject: stestr Python 2 Support In-Reply-To: <20190805182144.GA10102@zeong> References: <20190724131136.GA11582@sinanju.localdomain> <20190803001018.GA2352@thor.bakeyournoodle.com> <20190805182144.GA10102@zeong> Message-ID: Thanks Matt - you're a team player! On Tue, Aug 6, 2019 at 3:52 AM Matthew Treinish wrote: > On Sat, Aug 03, 2019 at 10:10:18AM +1000, Tony Breeds wrote: > > On Wed, Jul 24, 2019 at 09:11:36AM -0400, Matthew Treinish wrote: > > > Hi Everyone, > > > > > > I just wanted to send a quick update about the state of python 2 > support in > > > stestr, since OpenStack is the largest user of the project. With the > recent > > > release of stestr 2.4.0 we've officially deprecated the python 2.7 > support > > > in stestr. It will emit a DeprecationWarning whenever it's run from > the CLI > > > with python 2.7 now. The plan (which is not set in stone) is that we > will be > > > pushing a 3.0.0 release that removes the python 2 support and compat > code from > > > stestr sometime in early 2020 (definitely after the Python 2.7 EoL on > Jan. > > > 1st). I don't believe this conflicts with the current Python version > support > > > plans in OpenStack [1] but just wanted to make sure people were aware > so that > > > there are no surprises when stestr stops working with Python 2 in > 3.0.0. > > > > Thanks Matt. I know it's a little meta but if something really strange > > were to happen would you be open to doing 2.X.Y releases while we still > > have maintained branches that use it? > > Sure, I'm open to doing that if the need arises. The normal release > process for > stestr doesn't involve branching. 
But, if there is a critical issue in > 2.x.y > that requires a fix we can easily create a branch and push a bugfix release > when/if that happens. > > -Matt Treinish > -- Michael Davies michael at the-davies.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacob.anders.au at gmail.com Mon Aug 5 23:42:14 2019 From: jacob.anders.au at gmail.com (Jacob Anders) Date: Tue, 6 Aug 2019 09:42:14 +1000 Subject: [ironic] Moving to office hours as opposed to weekly meetings for the next month In-Reply-To: References: Message-ID: Hi Julia, Thank you for your email and apologies for delayed response on my side. It is tricky indeed. I see two potential ways going forward: - going back to the weekly meeting convention and alternating between two time slots (similarly to what Scientific SIG and Neutron folks do) - an additional "sync up" time for the APACs, as you suggested. It could be a smaller weekly meeting or just an agreed time window (or windows) when the APAC contributors can reach out to the key team members for direction etc. From my perspective the key bit is being able to reach out to someone who will be able to guide me on how best go about the work packages I've taken up etc. What are your thoughts on these? Best Regards, Jacob On Mon, Jul 29, 2019 at 9:56 PM Julia Kreger wrote: > Hi Jacob, > > Sorry for the delay. My hope was that APAC contributors would coalesce > around a time, but it really seems that has not happened, and I am > starting to think that the office hours experiment has not really > helped as there has not been a regular reminder each week. :( > > Happy to discuss more, but perhaps a establishing a dedicated APAC > sync-up meeting is what is required? > > Thoughts? > > -Julia > > On Wed, Jul 17, 2019 at 5:44 AM Jacob Anders > wrote: > > > > Hi Julia, > > > > Do we have more clarity regarding the second (APAC) session? 
I see the > polls have been open for some time, but haven't seen a mention of a > specific time. > > > > Thank you, > > Jacob > > > > On Wed, Jul 3, 2019 at 9:39 AM Julia Kreger > wrote: > >> > >> Greetings Everyone! > >> > >> This week, during the weekly meeting, we seemed to reach consensus that > >> we would try taking a break from meetings[0] and moving to orienting > >> around using the mailing list[1] and our etherpad "whiteboard" [2]. > >> With this, we're going to want to re-evaluate in about a month. > >> I suspect it would be a good time for us to have a "mid-cycle" style > >> set of topical calls. I've gone ahead and created a poll to try and > >> identify a couple days that might be ideal for contributors[3]. > >> > >> But in the mean time, we want to ensure that we have some times for > >> office hours. The suggestion was also made during this week's meeting > >> that we may want to make the office hours window a little larger to > >> enable more discussion. > >> > >> So when will we have office hours? > >> ---------------------------------- > >> > >> Ideally we'll start with two time windows. One to provide coverage > >> to US and Europe friendly time zones, and another for APAC contributors. > >> > >> * I think 2-4 PM UTC on Mondays would be ideal. This translates to > >> 7-9 AM US-Pacific or 10 AM to 12 PM US-Eastern. > >> * We need to determine a time window that would be ideal for APAC > >> contributors. I've created a poll to help facilitate discussion[4]. > >> > >> So what is Office Hours? > >> ------------------------ > >> > >> Office hours are a time window when we expect some contributors to be > >> on IRC and able to partake in higher bandwidth discussions. > >> These times are not absolute. They can change and evolve, > >> and that is the most important thing for us to keep in mind. > >> > >> -- > >> > >> If there are any questions, Please let me know! > >> Otherwise I'll send a summary email out on next Monday. 
> >> > >> -Julia > >> > >> [0]: > http://eavesdrop.openstack.org/meetings/ironic/2019/ironic.2019-07-01-15.00.log.html#l-123 > >> [1]: > http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007038.html > >> [2]: https://etherpad.openstack.org/p/IronicWhiteBoard > >> [3]: https://doodle.com/poll/652gzta6svsda343 > >> [4]: https://doodle.com/poll/2ta5vbskytpntmgv > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Tue Aug 6 00:15:47 2019 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 6 Aug 2019 08:15:47 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <20190805204923.z7h4ei6w7n7v7azj@yuggoth.org> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <20190805204923.z7h4ei6w7n7v7azj@yuggoth.org> Message-ID: agree with Jeremy and Doug, let's stick with the list (it already has an amazing list of names of beautiful cities/places in China) On Tue, Aug 6, 2019 at 4:53 AM Jeremy Stanley wrote: > On 2019-08-05 15:41:30 -0400 (-0400), Jeremy Freudberg wrote: > > Responding to myself with a different approach, as seen elsewhere > > in the thread. > [...] > > While I appreciate everyone's innovative ideas for coming up with > additional names, the time for adding entries to the list has > passed. We need a name for the U cycle, like, yesterday. Not having > one is holding up a variety of governance and event planning tasks > and generally making things more complicated for everyone involved. > There were threads on this mailing list in February and April asking > for help putting together a solution. People pitched in and the list > Doug sent is what they came up with (minus some which we removed in > advance for a variety of reasons). The remaining list still seems to > have a number of viable options, and this is the phase where we're > asking if anyone objects to any of what's there before we put > together a community poll to rank them a few days from now. 
> -- > Jeremy Stanley > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Aug 6 02:52:36 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 5 Aug 2019 22:52:36 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> Message-ID: On 5/08/19 3:32 PM, Doug Hellmann wrote: > Unicorn > Urban > Unique > Umpire > Utopia > umbrella > ultimate These names are the ones that don't meet the criteria, which should mean that by default they're not included in the poll. The TC has the discretion to include one or more of them if we think they're exceptionally good. Candidly, I don't think any of them are. None of them, with the questionable exception of 'Urban', have any relation to Shanghai that I am aware of. And there's no shortage of decent local names on the list, even after applying questionable criteria on which sorts of words beginning with W in the Pinyin system are acceptable to use other transliterations on. So to be clear, I expect the TC to get a vote on whether any names not meeting the criteria are added to the poll, and I am personally inclined to vote -1. 
- ZB From rico.lin.guanyu at gmail.com Tue Aug 6 03:11:31 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 6 Aug 2019 11:11:31 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <3F0801EC-9EDA-4AC0-A12F-7B2FF30952B0@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <3F0801EC-9EDA-4AC0-A12F-7B2FF30952B0@doughellmann.com> Message-ID: On Mon, Aug 5, 2019 at 8:22 PM Doug Hellmann wrote: > > As for all below geographic options, most of them originally from different languages like Mongolian or Russian, so generally speaking, most people won't use Pingyi system for that name. And I don't think it helps to put it's Pinyin on top too. > > Are you saying we should not include any of these names either, or just that when we present the poll we should not include the Pinyin spelling? > Just clarify here, I mean we should not include Pinyin for these options -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Aug 6 03:40:14 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 6 Aug 2019 11:40:14 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> Message-ID: On Tue, Aug 6, 2019 at 10:56 AM Zane Bitter wrote: > > So to be clear, I expect the TC to get a vote on whether any names not > meeting the criteria are added to the poll, and I am personally inclined > to vote -1. +1. Since we still got some days before we officially start the poll, we can run a quick inner poll within TCs by irc/CIVS to final the list. So maybe during office hours on Thursday? > > - ZB > -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jungleboyj at gmail.com Tue Aug 6 03:41:02 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 5 Aug 2019 22:41:02 -0500 Subject: [cinder] [3rd party ci] Deadline Has Past for Python3 Migration In-Reply-To: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> References: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> Message-ID: <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> All, This e-mail has multiple purposes.  First, I have expanded the mail audience to go beyond just openstack-discuss to a mailing list I have created for all 3rd Party CI Maintainers associated with Cinder.  I apologize to those of you who are getting this as a duplicate e-mail. For all 3rd Party CI maintainers who have already migrated your systems to using Python3.7...Thank you!  We appreciate you keeping up-to-date with Cinder's requirements and maintaining your CI systems. If this is the first time you are hearing of the Python3.7 requirement please continue reading. It has been decided by the OpenStack TC that support for Py2.7 would be deprecated [1].  The Train development cycle is the last cycle that will support Py2.7 and therefore all vendor drivers need to demonstrate support for Py3.7. It was discussed at the Train PTG that we would require all 3rd Party CIs to be running using Python3 by the Train milestone 2: [2]  We have been communicating the importance of getting 3rd Party CI running with py3 in meetings and e-mail for quite some time now, but it still appears that nearly half of all vendors are not yet running with Python 3. [3] If you are a vendor who has not yet moved to using Python 3 please take some time to review this document [4] as it has guidance on how to get your CI system updated.  It also includes some additional details as to why this requirement has been set and the associated background.  Also, please update the py3-ci-review etherpad with notes indicating that you are working on adding py3 support. 
I would also ask all vendors to review the etherpad I have created as it indicates a number of other drivers that have been marked unsupported due to CI systems not running properly.  If you are not planning to continue to support a driver adding such a note in the etherpad would be appreciated. Thanks! Jay [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008255.html [2] https://wiki.openstack.org/wiki/CinderTrainSummitandPTGSummary#3rd_Party_CI [3] https://etherpad.openstack.org/p/cinder-py3-ci-review [4] https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update From gmann at ghanshyammann.com Tue Aug 6 04:33:09 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Aug 2019 13:33:09 +0900 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: <16c6533cc55.acfc25ea739772.5697560946632489083@ghanshyammann.com> ---- On Mon, 05 Aug 2019 23:49:20 +0900 Daniel Mellado wrote ---- > As I have taken on a new role in my company I won't be having the time > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > I started working on the project more than two cycles ago and it has > been a real pleasure for me. > > Helping a project grow from an idea and a set of diagrams to a > production-grade service was an awesome experience and I got help from > my awesome team and upstream contributors! > > I would like to take this opportunity to thank everyone who contributed > to the success of Kuryr – either by writing code, suggesting new use > cases, participating in our discussions, or helping out with Infra! > > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > knowledgeable about every piece of the code and is tightly connected to > the community. I will still be around to help if needed. > > Please join me congratulating Michal on his new role! 
Thanks Daniel for all your hard work and leadership in Kuryr project. Congrats and best of luck Michal for your new role. Feel free to reach out to TC anytime you need help. -gmann > > Best! > > Daniel > > > [1] https://review.opendev.org/674624 > > From gmann at ghanshyammann.com Tue Aug 6 05:13:32 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Aug 2019 14:13:32 +0900 Subject: [tc][uc][all] Starting goal selection for U series Message-ID: <16c6558c543.f38f0196740077.4057984990144645350@ghanshyammann.com> Hello everyone, We are in R10 week of Train cycle and not so far from the start of U cycle. It's time to start the discussions about community-wide goals ideas for the U series. We are little late to start this thread for U series which usually happened during Summit Forum. But it is not mandatory to wait for f2f meetup to kick off the goal discussions, we can always start the same via ML or ad-hoc meetings. During Shanghai Summit Forum, we will be having the f2f discussion for U (continue the discussion from ML threads) as well as V cycle goals ideas. Community-wide goals are important in term of solving and improving a technical area across OpenStack as a whole. It has lot more benefits to be considered from users as well from a developers perspective. See [1] for more details about community-wide goals and process. Also, you can refer to the backlogs of community-wide goals from this[2] and train cycle goals[3]. If you are interested in proposing a goal, please write down the idea on this etherpad[4] - https://etherpad.openstack.org/p/PVG-u-series-goals Accordingly, we will start the separate ML discussion over each goal idea. 
[1] https://governance.openstack.org/tc/goals/index.html
[2] https://etherpad.openstack.org/p/community-goals
[3] https://etherpad.openstack.org/p/BER-t-series-goals
[4] https://etherpad.openstack.org/p/PVG-u-series-goals

-gmann

From berndbausch at gmail.com  Tue Aug 6 05:48:42 2019
From: berndbausch at gmail.com (Bernd Bausch)
Date: Tue, 6 Aug 2019 14:48:42 +0900
Subject: [telemetry] Gnocchi: Aggregates Operation Syntax
In-Reply-To: 
References: 
Message-ID: 

Yes, aggregate syntax documentation has room for improvement. However, Gnocchi's API documentation has a rather useful list of supported operations at https://gnocchi.xyz/rest.html#list-of-supported-operations. See also my recent issue https://github.com/gnocchixyz/gnocchi/issues/1044, which helped me understand how aggregation works in Gnocchi.

Note that you can write an aggregate operation as a string using prefix notation, or as a JSON structure. On the command line, the string version is easier to use in my opinion.

Regarding your use case, allow me to focus on CPU. Ceilometer's /cpu/ metric accumulates the nanoseconds an instance consumes. Try /max/ aggregation to look at the CPU usage of a single instance:

    gnocchi measures show --aggregation max --resource-id SERVER_UUID cpu

which is equivalent to

    gnocchi aggregates '(metric cpu max)' id=SERVER_UUID

then use /sum/ aggregation over all instances of a project:

    gnocchi aggregates '(aggregate sum (metric cpu max))' project_id=PROJECT_UUID

You can even divide the figures by one billion, which converts nanoseconds to seconds:

    gnocchi aggregates '(/ (aggregate sum (metric cpu max)) 1000000000)' project_id=PROJECT_UUID

If that works, it should not be too hard to do something equivalent for memory and storage.

Bernd.
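The arithmetic Bernd describes — the cumulative /cpu/ counter grows in nanoseconds, a rate aggregation differences consecutive samples, and dividing by one billion converts to seconds — can be sketched in plain Python. This is an illustration of the arithmetic only; the function names are invented and nothing here calls the Gnocchi API:

```python
# Illustrative sketch of the rate-of-change computation behind the old
# cpu_util / Gnocchi's rate:* aggregations. Not Gnocchi or Ceilometer code.

# (unix_timestamp, cumulative_cpu_ns) samples, 60 s apart:
samples = [(0, 0), (60, 30_000_000_000), (120, 90_000_000_000)]

def rate_of_change(samples):
    """Difference consecutive cumulative samples: ns consumed per interval."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rates.append((t1, v1 - v0))
    return rates

def cpu_util_percent(samples, vcpus):
    """Average CPU utilisation (%) per interval, like the old cpu_util:
    CPU-ns consumed divided by wall-clock ns elapsed times vCPU count."""
    utils = []
    for (t0, _), (t1, delta_ns) in zip(samples, rate_of_change(samples)):
        wall_ns = (t1 - t0) * 1e9  # elapsed wall-clock time in nanoseconds
        utils.append(100.0 * delta_ns / (wall_ns * vcpus))
    return utils
```

For the samples above, the instance consumed 30 s of CPU in the first minute (50 % of one vCPU) and 60 s in the second (100 %).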
On 8/5/2019 5:43 PM, Blom, Merlin, NMU-OI wrote: > > Hey, > > I would like to aggregate data from the gnocchi database by using the > gnocchi aggregates function of the CLI/API > > The documentation does not cover the operations that are available nor > the syntax that has to be used: > > https://gnocchi.xyz/gnocchiclient/shell.html?highlight=reaggregation#aggregates > > Searching for more information I found a GitHub Issue: > > https://github.com/gnocchixyz/gnocchi/issues/393 > > But I cannot use the syntax from that ether. > > *My use case:* > > I want to aggregate the vcpus hours per month, vram hours per month, … > per server or project. > > -when an instance is stopped only storage is counted > > -the exact usage is used e.g. 2 vcpus between 1^st and 7^th day 4vcpus > between 8^th and last month no mean calculations > > Do you have detailed documentation about the gnocchi Aggregates > Operation Syntax? > > Do you have complex examples for gnocchi aggregations? Especially when > using the python bindings: > > /conn_gnocchi.metric.aggregation(metrics="memory", query=[XXXXXXXX], > resource_type='instance', groupby='original_resource_id') / > > Can you give me advice regarding my use case? Do's and don'ts… > > Thank you for your help in advance! > > Merlin Blom > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Tue Aug 6 05:59:55 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 6 Aug 2019 14:59:55 +0900 Subject: [telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics In-Reply-To: References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com> <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch> <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl> Message-ID: Thanks much, Sumit. I did not detect your reply until now. Still hard to manage the openstack-discuss mailing list. 
In the mean time, I have made a lot of progress and understand how Ceilometer creates its own archive policies and adds resources and their metrics to Gnocchi - based on gnocchi_resources.yaml, as you correctly remarked. Thanks to help from the Gnocchi team, I also know how to generate CPU utilization figures. See this issue on Github if you are interested: https://github.com/gnocchixyz/gnocchi/issues/1044. My ultimate goal is autoscaling based on CPU utilization. I have not solved that problem that, but it's a different question. One question at a time! Thanks again, this immediate question is answered. Bernd. On 8/1/2019 8:20 PM, Sumit Jamgade wrote: > Hey Bernd, > > Can you try with just one publisher instead of 2 and also drop the > archive_policy query parameter and its value. > > Then ceilometer should publish metrics based on map defined > gnocchi_resources.yaml > > And while you are at it. Could you post a list of archive policies > already defined in gnocchi, I believe this list should match what > is listed in gnocchi_resources.yaml. > > Hope that helps > Sumit > > > > On 7/31/19 3:22 AM, Bernd Bausch wrote: >> The message at the end of this email is some three months old. I have >> the same problem. The question is: *How to use the new rate metrics in >> Gnocchi. *I am using a Stein Devstack for my tests.* >> * >> >> For example, I need the CPU rate, formerly named /cpu_util/. 
I created >> a new archive policy that uses /rate:mean/ aggregation and has a 1 >> minute granularity: >> >> $ gnocchi archive-policy show ceilometer-medium-rate >> +---------------------+------------------------------------------------------------------+ >> | Field               | >> Value                                                            | >> +---------------------+------------------------------------------------------------------+ >> | aggregation_methods | rate:mean, >> mean                                                  | >> | back_window         | >> 0                                                                | >> | definition          | - points: 10080, granularity: 0:01:00, >> timespan: 7 days, 0:00:00 | >> | name                | >> ceilometer-medium-rate                                           | >> +---------------------+------------------------------------------------------------------+ >> >> I added the new policy to the publishers in /pipeline.yaml/: >> >> $ tail -n5 /etc/ceilometer/pipeline.yaml >> sinks: >>     - name: meter_sink >>       publishers: >>           - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift >>           *- >> gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift* >> >> After restarting all of Ceilometer, my hope was that the CPU rate >> would magically appear in the metric list. But no: All metrics are >> linked to archive policy /medium/, and looking at the details of an >> instance, I don't detect anything rate-related: >> >> $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 >> +-----------------------+---------------------------------------------------------------------+ >> | Field                 | >> Value                                                               | >> +-----------------------+---------------------------------------------------------------------+ >> ... 
>> | metrics               | compute.instance.booting.time: >> 76fac1f5-962e-4ff2-8790-1f497c99c17d | >> |                       | cpu: >> af930d9a-a218-4230-b729-fee7e3796944                           | >> |                       | disk.ephemeral.size: >> 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           | >> |                       | disk.root.size: >> 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                | >> |                       | memory.resident: >> 09efd98d-c848-4379-ad89-f46ec526c183               | >> |                       | memory.swap.in: >> 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                | >> |                       | memory.swap.out: >> 4d012697-1d89-4794-af29-61c01c925bb4               | >> |                       | memory.usage: >> 93eab625-0def-4780-9310-eceff46aab7b                  | >> |                       | memory: >> ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1                        | >> |                       | vcpus: >> e1c5acaf-1b10-4d34-98b5-3ad16de57a98                         | >> | original_resource_id  | >> ae3659d6-8998-44ae-a494-5248adbebe11                                | >> ... >> >> | type                  | >> instance                                                            | >> | user_id               | >> a9c935f52e5540fc9befae7f91b4b3ae                                    | >> +-----------------------+---------------------------------------------------------------------+ >> >> Obviously, I am missing something. Where is the missing link? What do >> I have to do to get CPU usage rates? Do I have to create metrics? >> Do//I have to ask Ceilometer to create metrics? How? >> >> Right now, no instructions seem to exist at all. If that is correct, I >> would be happy to write documentation once I understand how it works. >> >> Thanks a lot. 
>> >> Bernd >> >> On 5/10/2019 3:49 PM, info at dantalion.nl wrote: >>> Hello, >>> >>> I am working on Watcher and we are currently changing how metrics are >>> retrieved from different datasources such as Monasca or Gnocchi. Because >>> of this major overhaul I would like to validate that everything is >>> working correctly. >>> >>> Almost all of the optimization strategies in Watcher require the cpu >>> utilization of an instance as metric but with newer versions of >>> Ceilometer this has become unavailable. >>> >>> On IRC I received the information that Gnocchi could be used to >>> configure an aggregate and this aggregate would then report cpu >>> utilization, however, I have been unable to find documentation on how to >>> achieve this. >>> >>> I was also notified that cpu_util is something that could be computed >>> from other metrics. When reading >>> https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute >>> the documentation seems to agree on this as it states that cpu_util is >>> measured by using a 'rate of change' transformer. But I have not been >>> able to find how this can be computed. >>> >>> I was hoping someone could spare the time to provide documentation or >>> information on how this currently is best achieved. >>> >>> Kind Regards, >>> Corne Lukken (Dantali0n) >>> > From gmann at ghanshyammann.com Tue Aug 6 06:48:58 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Aug 2019 15:48:58 +0900 Subject: [goals][IPv6-Only Deployments and Testing] Week R-11 Update Message-ID: <16c65b023b0.125a44579741777.2847331246825225821@ghanshyammann.com> Hello Everyone, Below is the progress on Ipv6 goal during R11 week. At the first step, I am preparing the ipv6 jobs for the projects having zuulv3 jobs. The projects having zuulv2 jobs will be my second take. It seems like, there are a lot of works required to IPv6 deployment and testing than expected initially. 
11 projects out of the initial 17 are failing on "IPv6-only as listen address".

Summary:
# of IPv6 jobs proposed: 17
# of passing projects: 6
# of failing projects: 11

Storyboard:
=========
- https://storyboard.openstack.org/#!/story/2005477

Current status:
============
1. Base jobs 'devstack-tempest-ipv6' and 'tempest-ipv6-only' are merged.
2. 'tempest-ipv6-only' has been proposed to run on the gate side of 5 service projects [2].
3. Neutron stadium project jobs have been prepared (at first only for projects having a zuulv3 job).
4. New project ipv6 job patches and status:
- Congress:
  link: https://review.opendev.org/#/c/671908/
  status: job is failing; I will ping Eric to help with debugging.
- Monasca:
  links: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged)+projects:openstack/monasca
  status: jobs are failing. I fixed some of the IPv6 parsing on the monasca-api and monasca-notification side, but it seems kafka addresses still have an issue with IPv6.
- Murano:
  link: https://review.opendev.org/#/c/673291/
  status: job is failing. I have not debugged it yet. I need project help on this, whether it is a bug or failing for some other reason.
- cloudkitty:
  link: https://review.opendev.org/#/c/671909/
  status: job is passing.
- qinling:
  link: https://review.opendev.org/#/c/673506/
  status: job is failing; need to debug this.
- networking-odl:
  link: https://review.opendev.org/#/c/673501/
  status: working with Lajos to define the job on top of their new zuulv3 job set.
- networking-ovn:
  link: https://review.opendev.org/#/c/673488/
  status: job is failing; Lucas and Brian are already checking it on review. I will debug with them.

IPv6 missing support found:
=====================
1. https://review.opendev.org/#/c/673397/
2. https://review.opendev.org/#/c/673449/
3. https://review.opendev.org/#/c/673266/

How you can help:
==============
- Each project needs to look for and review the ipv6 job patch.
- Verify it works fine on IPv6 and that no IPv4 is used in conf files etc.
- Any other specific scenario needs to be added as part of project IPv6 verification.
- Help with debugging and fixing the bug if the IPv6 job is failing.

Everything related to this goal can be found under this topic:
Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged)

How to define and run a new IPv6 job on the project side:
=======================================
- I prepared a wiki page describing this - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing
- I am adding this wiki info to the goal doc also[4].

Review suggestions:
==============
- The main goal of these jobs is to check whether your service is able to listen on IPv6 and can communicate with any other services (either OpenStack or DB or rabbitmq etc.) on IPv6. So check your proposed job with that point of view. If anything is missing, comment on the patch.
- One example was that I missed configuring the novnc address to IPv6 - https://review.opendev.org/#/c/672493/
- The base script that is part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var settings. But if your project needs more specific verification, it can be added in the project-side job as post-run playbooks, as described in the wiki page[5].
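The "listen on IPv6" review point above comes down to two details that commonly trip services up: literal IPv6 addresses must be bracketed when they appear in URLs (this is exactly the novnc-address mistake mentioned), and a service bound to the IPv6 wildcard can still serve IPv4 clients. A minimal generic illustration — the helper names are invented, this is not Devstack or Tempest code:

```python
import ipaddress
import socket

def to_url_host(host):
    """Bracket a literal IPv6 address for use inside a URL (RFC 3986)."""
    try:
        if ipaddress.ip_address(host).version == 6:
            return "[%s]" % host
    except ValueError:
        pass  # not an IP literal; a hostname needs no brackets
    return host

def listen_v6(port):
    """Listen on the IPv6 wildcard '::'. With IPV6_V6ONLY cleared, the
    socket also accepts IPv4 clients as IPv4-mapped IPv6 addresses."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.bind(("::", port))
    sock.listen(5)
    return sock

# e.g. building a novnc-style endpoint from a configured address:
base_url = "http://%s:6080/vnc_auto.html" % to_url_host("fd00::10")
```

A config value written without brackets ("http://fd00::10:6080/...") is ambiguous and will not parse as intended, which is why the base job checks endpoint URLs.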
[1] https://review.opendev.org/#/c/671231/
[2] https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged)
[3] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2019-07-26.log.html#t2019-07-26T08:56:20
[4] https://review.opendev.org/#/c/671898/
[5] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing

-gmann

From dangtrinhnt at gmail.com  Tue Aug 6 07:49:15 2019
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Tue, 6 Aug 2019 16:49:15 +0900
Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture
Message-ID: 

Hi,

Are there any documents somewhere describing the current architecture of the CI/CD system that OpenStack Infrastructure is running?

Best regards,

-- 
*Trinh Nguyen*
*www.edlab.xyz *

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From merlin.blom at bertelsmann.de  Tue Aug 6 08:21:01 2019
From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI)
Date: Tue, 6 Aug 2019 08:21:01 +0000
Subject: [telemetry] Gnocchi: Aggregates Operation Syntax
Message-ID: 

Thanks Bernd, now I understand the new aggregation syntax. :-)

In my setup max is not a valid aggregation method, so I would use mean instead. The problem with my use case is that I don't want the actual CPU nanoseconds used, but the reserved vcpus per hour. Using the metric vcpus is not an option, because it does not reflect the status of the VM: when you shut down the VM, it is recorded anyway. So I decided to correlate the cpu with the vcpus metrics. For that I have to write my own python scripts.

Thanks again for your supportive answer!

Merlin

From: Bernd Bausch
Sent: Tuesday, 6 August 2019 07:49
To: openstack-discuss at lists.openstack.org
Subject: Re: [telemetry] Gnocchi: Aggregates Operation Syntax

Yes, aggregate syntax documentation has room for improvement.
However, Gnocchi's API documentation has a rather useful list of supported operations at https://gnocchi.xyz/rest.html#list-of-supported-operations . See also my recent issue https://github.com/gnocchixyz/gnocchi/issues/1044 , which helped me understand how aggregation works in Gnocchi. Note that you can write an aggregate operation as a string using prefix notation, or as a JSON structure. On the command line, the string version is easier to use in my opinion. Regarding your use case, allow me to focus on CPU. Ceilometer's cpu metric accumulates the nanoseconds an instance consumes. Try max aggregation to look at the CPU usage of a single instance: gnocchi measures show --aggregation max --resource-id SERVER_UUID cpu which is equivalent to gnocchi aggregates '(metric cpu max)' id=SERVER_UUID then use sum aggregation over all instances of a project: gnocchi aggregates '(aggregate sum (metric cpu max))' project_id=PROJECT_UUID You can even divide the figures by one billion, which converts nanoseconds to seconds: gnocchi aggregates '(/ (aggregate sum (metric cpu max)) 1000000000)' project_id=PROJECT_UUID If that works, it should not be too hard to do something equivalent for memory and storage. Bernd. On 8/5/2019 5:43 PM, Blom, Merlin, NMU-OI wrote: Hey, I would like to aggregate data from the gnocchi database by using the gnocchi aggregates function of the CLI/API The documentation does not cover the operations that are available nor the syntax that has to be used: https://gnocchi.xyz/gnocchiclient/shell.html?highlight=reaggregation#aggrega tes Searching for more information I found a GitHub Issue: https://github.com/gnocchixyz/gnocchi/issues/393 But I cannot use the syntax from that ether. My use case: I want to aggregate the vcpus hours per month, vram hours per month, . per server or project. - when an instance is stopped only storage is counted - the exact usage is used e.g. 
2 vcpus between the 1st and 7th day, 4 vcpus between the 8th and the last day of the month; no mean calculations.

Do you have detailed documentation about the gnocchi Aggregates Operation Syntax? Do you have complex examples for gnocchi aggregations? Especially when using the python bindings:

conn_gnocchi.metric.aggregation(metrics="memory", query=[XXXXXXXX], resource_type='instance', groupby='original_resource_id')

Can you give me advice regarding my use case? Do's and don'ts...

Thank you for your help in advance!

Merlin Blom

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5195 bytes
Desc: not available
URL: 

From merlin.blom at bertelsmann.de  Tue Aug 6 08:52:49 2019
From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI)
Date: Tue, 6 Aug 2019 08:52:49 +0000
Subject: [telemetry] ceilometer: Octavia Loadbalancer
Message-ID: 

Hey,

Has anybody got experience with amphora/Octavia load balancers in ceilometer/gnocchi with OpenStack Ansible on Stein? The metrics for load balancing are not pushed into gnocchi:

https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html#load-balancer-as-a-service-lbaas-v2

Looking for the amphora instances directly showed that the instances in the service tenant don't show up in gnocchi either. The integration in neutron for Octavia is deactivated because it is deprecated.

Is there a workaround? Maybe creating the measurements via a custom polling script?

Thank you for your help in advance!

Merlin Blom

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From thierry at openstack.org Tue Aug 6 09:12:19 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 6 Aug 2019 11:12:19 +0200 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> Message-ID: <20675865-2d74-014c-8f7e-3129343b3cb4@openstack.org> Zane Bitter wrote: > [...] > So to be clear, I expect the TC to get a vote on whether any names not > meeting the criteria are added to the poll, and I am personally inclined > to vote -1. That sounds fair. We can do a quick vote at the TC meeting this week (or earlier on the channel). For me, only two names feel sufficently-compelling: the above-mentioned 'Urban' (Shanghai is definitely urban), and 'Unicorn' to celebrate how difficult it was to find a name this time around. That said I'd only support addition of 'urban'... Because 'unicorn' would likely win if added, passing on the opportunity to get a name that is more representative of China. -- Thierry Carrez (ttx) From anlin.kong at gmail.com Tue Aug 6 09:46:15 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Tue, 6 Aug 2019 21:46:15 +1200 Subject: [telemetry] ceilometer: Octavia Loadbalancer In-Reply-To: References: Message-ID: The link you mentioned is related to LBaaS v2, not for Octavia. Currently, I don't think there is any pollster for Octavia in upstream Ceilometer. Best regards, Lingxian Kong Catalyst Cloud On Tue, Aug 6, 2019 at 9:02 PM Blom, Merlin, NMU-OI < merlin.blom at bertelsmann.de> wrote: > Hey, > > Has anybody experiences with amphora/Octavia Loadbalancers in > ceilometer/gnoochi with OpenStack Ansible on Stein? > > > > The Metrics for load balancing are not pushed into gnocchi. 
> > > https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html#load-balancer-as-a-service-lbaas-v2 > > Looking for the amphora instances directly showed, that the instances in > the service tenant don’t show up in gnocchi ether. > > > > The integration in neutron for Octavia is deactivated because it is > deprecated. > > > > Is there a workaround? > > Maybe creating the Measurements via a custom polling script? > > > > Thanks you for your help in advance! > > Merlin Blom > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.engelmann at everyware.ch Tue Aug 6 10:47:10 2019 From: florian.engelmann at everyware.ch (Engelmann Florian) Date: Tue, 6 Aug 2019 10:47:10 +0000 Subject: [nova] edit flavor Message-ID: Hi, I would like to edit flavors which we created with 0 disk size. Flavors with a disk size (root volume) of 0 are not handled well with, eg. Magnum. So I would like to add a root disk size to those flavors. The default precedure is to delete them and recreate them. But that's not what I want. I would like to edit them, eg in the database. Is there any possible impact on running instances using those flavors? I guess resize will work? All the best, Florian -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Tue Aug 6 11:12:15 2019 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 6 Aug 2019 13:12:15 +0200 Subject: [keystone] [stein] user_enabled_emulation config problem In-Reply-To: References: Message-ID: Hello all, I investigated the case. My issue arises from group_members_are_ids ignored for user_enabled_emulation_use_group_config. I reported a bug in keystone: https://bugs.launchpad.net/keystone/+bug/1839133 and will submit a patch. Hopefully it helps someone else as well. 
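Since Lingxian notes there is no Octavia pollster in upstream Ceilometer, Merlin's "custom polling script" idea is the pragmatic route: query Octavia's loadbalancer list and push measures to Gnocchi yourself. A rough sketch of the payload-building half is below — the metric name 'loadbalancer.active' and the overall approach are our own assumptions for illustration, not upstream code, and the actual HTTP calls (GET /v2/lbaas/loadbalancers on Octavia, POST /v1/batch/resources/metrics/measures?create_metrics=true on Gnocchi) are left out:

```python
import datetime

def build_measures(loadbalancers, now=None):
    """Turn an Octavia loadbalancer list into a Gnocchi batch-measures
    payload, keyed by resource id. Records 1 for an ACTIVE loadbalancer
    and 0 otherwise; 'loadbalancer.active' is our own metric name."""
    ts = (now or datetime.datetime.utcnow()).isoformat()
    payload = {}
    for lb in loadbalancers:
        active = 1 if lb.get("provisioning_status") == "ACTIVE" else 0
        payload[lb["id"]] = {
            "loadbalancer.active": [{"timestamp": ts, "value": active}]
        }
    return payload
```

Run from cron with a valid Keystone token, summing this metric over time would give loadbalancer-hours, in the same spirit as the vcpu-hours discussion in the other thread.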
Kind regards, Radek sob., 3 sie 2019 o 20:56 Radosław Piliszek napisał(a): > Hello all, > > I have an issue using user_enabled_emulation with my LDAP solution. > > I set: > user_tree_dn = ou=Users,o=UCO > user_objectclass = inetOrgPerson > user_id_attribute = uid > user_name_attribute = uid > user_enabled_emulation = true > user_enabled_emulation_dn = cn=Users,ou=Groups,o=UCO > user_enabled_emulation_use_group_config = true > group_tree_dn = ou=Groups,o=UCO > group_objectclass = posixGroup > group_id_attribute = cn > group_name_attribute = cn > group_member_attribute = memberUid > group_members_are_ids = true > > Keystone properly lists members of the Users group but they all remain > disabled. > Did I misinterpret something? > > Kind regards, > Radek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Aug 6 11:38:44 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 06 Aug 2019 12:38:44 +0100 Subject: [nova] edit flavor In-Reply-To: References: Message-ID: On Tue, 2019-08-06 at 10:47 +0000, Engelmann Florian wrote: > Hi, > > I would like to edit flavors which we created with 0 disk size. Flavors with a disk size (root volume) of 0 are not > handled well with, eg. Magnum. So I would like to add a root disk size to those flavors. The default precedure is to > delete them and recreate them. But that's not what I want. I would like to edit them, eg in the database. Is there any > possible impact on running instances using those flavors? I guess resize will work? Flavor with disk 0 are intended to be used with boot form volume guests if mangnum does not support boot form volume guest then it should not use those flavors. if you edit it in the db it will not update the embeded flavor in the instnce record. it may result in host being over substibed and it will not update the allocations in placement to reflect the new size. 
a resize should fix that but if you are using boot form volume be aware that instance will be schdule based on the local disk space available on the compute nodes so if you are using boot form volume it will not work as expected. your best solution assuming you are using local storage is to create a new flavor and do a resize. > > All the best, > Florian From mark at stackhpc.com Tue Aug 6 12:18:45 2019 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 6 Aug 2019 13:18:45 +0100 Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached. In-Reply-To: References: Message-ID: On Thu, 18 Jul 2019 at 09:54, Eddie Yen wrote: > Hi everyone, I met an issue when try to evacuate host. > The platform is stable/rocky and using kolla-ansible to deploy. > And all storage backends are connected to Ceph. > > Before I try to evacuate host, the source host had about 24 VMs running. > When I shutdown the node and execute evacuation, there're few VMs failed. > The error code is 504. > Strange is those VMs are all attach its own volume. > > Then I check nova-compute log, a detailed error has pasted at below link; > https://pastebin.com/uaE7YrP1 > > Does anyone have any experience with this? I googled but no enough > information about this. > > Thanks! > Gateway timeout suggests the server timeout in haproxy is too low, and the server (cinder-api) has not responded to the request in time. The default timeout is 60s, and is configured via haproxy_server_timeout (and possibly haproxy_client_timeout). You could try increasing this in globals.yml. We do use a larger timeout for glance-api (haproxy_glance_api_client_timeout and haproxy_glance_api_server_timeout, both 6h). Perhaps we need something similar for cinder-api. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juliaashleykreger at gmail.com Tue Aug 6 12:45:20 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 6 Aug 2019 08:45:20 -0400 Subject: [ironic] Moving to office hours as opposed to weekly meetings for the next month In-Reply-To: References: Message-ID: Indeed this is tricky. :( Is there a magic easy button out there? Anyway! I think the weekly meeting serves as a good checkpoint. Almost an interrupt that forces people to stop and context switch to the meeting. I think if APAC contributors are able to agree on a mutual time that would be better or work for them, then I think we can consider alternating, or try to perform an additional APAC focused sync time if we know a good time for it. Contributor availability being the key, without knowing a time, it is a little difficult to determine a forward path. :( Want to just arbitrarily toss out a time and lets see how that looks? -Julia On Mon, Aug 5, 2019 at 7:42 PM Jacob Anders wrote: > > Hi Julia, > > Thank you for your email and apologies for delayed response on my side. > > It is tricky indeed. I see two potential ways going forward: > > - going back to the weekly meeting convention and alternating between two time slots (similarly to what Scientific SIG and Neutron folks do) > - an additional "sync up" time for the APACs, as you suggested. It could be a smaller weekly meeting or just an agreed time window (or windows) when the APAC contributors can reach out to the key team members for direction etc. From my perspective the key bit is being able to reach out to someone who will be able to guide me on how best go about the work packages I've taken up etc. > > What are your thoughts on these? > > Best Regards, > Jacob > > On Mon, Jul 29, 2019 at 9:56 PM Julia Kreger wrote: >> >> Hi Jacob, >> >> Sorry for the delay. 
My hope was that APAC contributors would coalesce >> around a time, but it really seems that has not happened, and I am >> starting to think that the office hours experiment has not really >> helped as there has not been a regular reminder each week. :( >> >> Happy to discuss more, but perhaps a establishing a dedicated APAC >> sync-up meeting is what is required? >> >> Thoughts? >> >> -Julia >> >> On Wed, Jul 17, 2019 at 5:44 AM Jacob Anders wrote: >> > >> > Hi Julia, >> > >> > Do we have more clarity regarding the second (APAC) session? I see the polls have been open for some time, but haven't seen a mention of a specific time. >> > >> > Thank you, >> > Jacob >> > >> > On Wed, Jul 3, 2019 at 9:39 AM Julia Kreger wrote: >> >> >> >> Greetings Everyone! >> >> >> >> This week, during the weekly meeting, we seemed to reach consensus that >> >> we would try taking a break from meetings[0] and moving to orienting >> >> around using the mailing list[1] and our etherpad "whiteboard" [2]. >> >> With this, we're going to want to re-evaluate in about a month. >> >> I suspect it would be a good time for us to have a "mid-cycle" style >> >> set of topical calls. I've gone ahead and created a poll to try and >> >> identify a couple days that might be ideal for contributors[3]. >> >> >> >> But in the mean time, we want to ensure that we have some times for >> >> office hours. The suggestion was also made during this week's meeting >> >> that we may want to make the office hours window a little larger to >> >> enable more discussion. >> >> >> >> So when will we have office hours? >> >> ---------------------------------- >> >> >> >> Ideally we'll start with two time windows. One to provide coverage >> >> to US and Europe friendly time zones, and another for APAC contributors. >> >> >> >> * I think 2-4 PM UTC on Mondays would be ideal. This translates to >> >> 7-9 AM US-Pacific or 10 AM to 12 PM US-Eastern. 
>> >> * We need to determine a time window that would be ideal for APAC >> >> contributors. I've created a poll to help facilitate discussion[4]. >> >> >> >> So what is Office Hours? >> >> ------------------------ >> >> >> >> Office hours are a time window when we expect some contributors to be >> >> on IRC and able to partake in higher bandwidth discussions. >> >> These times are not absolute. They can change and evolve, >> >> and that is the most important thing for us to keep in mind. >> >> >> >> -- >> >> >> >> If there are any questions, please let me know! >> >> Otherwise I'll send a summary email out next Monday. >> >> >> >> -Julia >> >> >> >> [0]: http://eavesdrop.openstack.org/meetings/ironic/2019/ironic.2019-07-01-15.00.log.html#l-123 >> >> [1]: http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007038.html >> >> [2]: https://etherpad.openstack.org/p/IronicWhiteBoard >> >> [3]: https://doodle.com/poll/652gzta6svsda343 >> >> [4]: https://doodle.com/poll/2ta5vbskytpntmgv >> >> From mnaser at vexxhost.com Tue Aug 6 13:41:36 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 6 Aug 2019 09:41:36 -0400 Subject: [nova] edit flavor In-Reply-To: References: Message-ID: On Tue, Aug 6, 2019 at 7:44 AM Sean Mooney wrote: > > On Tue, 2019-08-06 at 10:47 +0000, Engelmann Florian wrote: > > Hi, > > > > I would like to edit flavors which we created with 0 disk size. Flavors with a disk size (root volume) of 0 are not > > handled well by, e.g., Magnum. So I would like to add a root disk size to those flavors. The default procedure is to > > delete them and recreate them. But that's not what I want. I would like to edit them, e.g. in the database. Is there any > > possible impact on running instances using those flavors? I guess resize will work? > > Flavors with disk 0 are intended to be used with boot-from-volume guests; if Magnum does not support boot-from-volume > guests then it should not use those flavors.
FYI: https://review.opendev.org/#/c/621734/ > if you edit it in the db it will not update the embedded flavor in the instance record. > it may result in the host being oversubscribed and it will not update the allocations in placement to reflect the new size. > > a resize should fix that, but be aware that instances will be scheduled based > on the local disk space available on the compute nodes, so if you are using boot from volume it will not work as > expected. your best solution, assuming you are using local storage, is to create a new flavor and do a resize. > > > > > All the best, > > Florian > > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Tue Aug 6 14:05:12 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 6 Aug 2019 10:05:12 -0400 Subject: [openstack-ansible] Shanghai Summit Planning In-Reply-To: References: Message-ID: Hi everyone, I have not seen any updates to the etherpad, nor have I seen any names added there. I have added a section for "who's not going": https://etherpad.openstack.org/p/PVG-OSA-PTG I just want to make sure to have an idea of attendance. Thanks, Mohammed On Thu, Aug 1, 2019 at 4:10 PM Mohammed Naser wrote: > > Hey everyone! > > Here's the link to the Etherpad for this year's Shanghai summit > initial planning. You can put your name if you're attending and also > write down your topic of discussion ideas. Looking forward to seeing > you there! > > https://etherpad.openstack.org/p/PVG-OSA-PTG > > Regards, > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E.
mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Tue Aug 6 14:06:06 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 6 Aug 2019 10:06:06 -0400 Subject: [tc] Shanghai Summit Planning In-Reply-To: References: Message-ID: Bumping this email: I have not seen any traction on this, and even if you don't have any ideas, please just add your name whether you're attending or not! Thanks, Mohammed On Thu, Aug 1, 2019 at 4:11 PM Mohammed Naser wrote: > > Hey everyone! > > Here's the link to the Etherpad for this year's Shanghai summit > initial planning. You can put your name if you're attending and also > write down your topic of discussion ideas. Looking forward to seeing > you there! > > https://etherpad.openstack.org/p/PVG-TC-PTG > > Regards, > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From fungi at yuggoth.org Tue Aug 6 14:37:43 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Aug 2019 14:37:43 +0000 Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture In-Reply-To: References: Message-ID: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> On 2019-08-06 16:49:15 +0900 (+0900), Trinh Nguyen wrote: > Are there any documents somewhere describing the current > architecture of the CI/CD system that OpenStack Infrastructure is > running? OpenStack uses the OpenDev deployment of the Zuul project gating system for CI/CD.
The OpenDev infrastructure sysadmins maintain some operational Zuul deployment documentation for their own purposes at https://docs.openstack.org/infra/system-config/zuul.html and also some information which was written separately during the v3 migration at https://docs.openstack.org/infra/system-config/zuulv3.html which is due to get rolled into the first. Zuul itself already has excellent documentation (written by many of the same people) and its main architecture can be found described in the Admin Guide, with the diagram at https://zuul-ci.org/docs/zuul/admin/components.html providing a nice component overview. What specifically are you looking for? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Tue Aug 6 15:10:40 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 6 Aug 2019 10:10:40 -0500 Subject: Slow instance launch times due to RabbitMQ In-Reply-To: References: <0723e410-6029-fcf7-2bd0-8c38e4586cdd@civo.com> <1c6078ab-bc1d-7a84-cc87-1b470235ccc0@civo.com> Message-ID: <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com> Another thing to check if you're having seemingly inexplicable messaging issues is that there isn't a notification queue filling up somewhere. If notifications are enabled somewhere but nothing is consuming them, the growing queue will eventually grind rabbit to a halt. I used to check queue sizes through the rabbit web ui, so I have to admit I'm not sure how to do it through the cli. On 7/31/19 10:48 AM, Gabriele Santomaggio wrote: > Hi, > Are you using SSL connections? > > Could this be the issue?
> https://bugs.launchpad.net/ubuntu/+source/oslo.messaging/+bug/1800957 > > > ------------------------------------------------------------------------ > *From:* Laurent Dumont > *Sent:* Wednesday, July 31, 2019 4:20 PM > *To:* Grant Morley > *Cc:* openstack-operators at lists.openstack.org > *Subject:* Re: Slow instance launch times due to RabbitMQ > That is a bit strange, list_queues should return stuff. Couple of ideas : > > * Are the Rabbit connection failure logs on the compute pointing to a > specific controller? > * Are there any logs within Rabbit on the controller that would point > to a transient issue? > * cluster_status is a snapshot of the cluster at the time you ran the > command. If the alarms have cleared, you won't see anything. > * If you have the RabbitMQ management plugin activated, I would > recommend a quick look to see the historical metrics and overall status. > > > On Wed, Jul 31, 2019 at 9:35 AM Grant Morley > wrote: > > Hi guys, > > We are using Ubuntu 16 and OpenStack ansible to do our setup. 
> > rabbitmqctl list_queues > Listing queues > > (Doesn't appear to be any queues ) > > rabbitmqctl cluster_status > > Cluster status of node > 'rabbit at management-1-rabbit-mq-container-b4d7791f' > [{nodes,[{disc,['rabbit at management-1-rabbit-mq-container-b4d7791f', >                 'rabbit at management-2-rabbit-mq-container-b455e77d', >                 'rabbit at management-3-rabbit-mq-container-1d6ae377']}]}, >  {running_nodes,['rabbit at management-3-rabbit-mq-container-1d6ae377', >                  'rabbit at management-2-rabbit-mq-container-b455e77d', >                  'rabbit at management-1-rabbit-mq-container-b4d7791f']}, >  {cluster_name,<<"openstack">>}, >  {partitions,[]}, >  {alarms,[{'rabbit at management-3-rabbit-mq-container-1d6ae377',[]}, >           {'rabbit at management-2-rabbit-mq-container-b455e77d',[]}, >           {'rabbit at management-1-rabbit-mq-container-b4d7791f',[]}]}] > > Regards, > > On 31/07/2019 11:49, Laurent Dumont wrote: >> Could you forward the output of the following commands on a >> controller node? : >> >> rabbitmqctl cluster_status >> rabbitmqctl list_queues >> >> You won't necessarily see a high load on a Rabbit cluster that is >> in a bad state. >> >> On Wed, Jul 31, 2019 at 5:19 AM Grant Morley > > wrote: >> >> Hi all, >> >> We are randomly seeing slow instance launch / deletion times >> and it appears to be because of RabbitMQ. We are seeing a lot >> of these messages in the logs for Nova and Neutron: >> >> ERROR oslo.messaging._drivers.impl_rabbit [-] >> [f4ab3ca0-b837-4962-95ef-dfd7d60686b6] AMQP server on >> 10.6.2.212:5671 is unreachable: Too >> many heartbeats missed. Trying again in 1 seconds. Client >> port: 37098: ConnectionForced: Too many heartbeats missed >> >> The RabbitMQ cluster isn't under high load and I am not seeing >> any packets drop over the network when I do some tracing. >> >> We are only running 15 compute nodes currently and have >1000 >> instances so it isn't a large deployment. 
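For context, the "Too many heartbeats missed" errors quoted above are governed by oslo.messaging's RabbitMQ heartbeat options. A sketch of the relevant knobs (option names as they appeared in oslo.messaging around the Queens era; the values shown are the usual defaults, listed for orientation rather than as tuning advice):

```ini
# nova.conf / neutron.conf (illustrative; defaults may differ by release)
[DEFAULT]
# How long an RPC caller waits for a reply before raising MessagingTimeout
rpc_response_timeout = 60

[oslo_messaging_rabbit]
# Seconds without a heartbeat before the connection is considered dead
heartbeat_timeout_threshold = 60
# How many times per timeout interval the heartbeat is checked
heartbeat_rate = 2
```

Raising heartbeat_timeout_threshold can mask network or load problems rather than fix them, so it is worth ruling those out first.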
>> >> Are there any good configuration tweaks for RabbitMQ running >> on OpenStack Queens? >> >> Many Thanks, >> >> -- >> >> Grant Morley >> Cloud Lead, Civo Ltd >> www.civo.com | Signup for an account! >> >> > -- > > Grant Morley > Cloud Lead, Civo Ltd > www.civo.com | Signup for an account! > > From openstack at nemebean.com Tue Aug 6 15:24:59 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 6 Aug 2019 10:24:59 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> Message-ID: <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> Just a reminder that there is also http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html which was intended to address this same issue. I toyed around with it a bit for TripleO installs back then and it did seem to speed things up, but at the time there was a bug in our client plugin where it was triggering a prompt for input that was problematic with the server running in the background. I never really got back to it once that was fixed. :-/ On 7/26/19 6:53 PM, Clark Boylan wrote: > Today I have been digging into devstack runtime costs to help Donny Davis understand why tempest jobs sometimes timeout on the FortNebula cloud. One thing I discovered was that the keystone user, group, project, role, and domain setup [0] can take many minutes [1][2] (in the examples here almost 5). > > I've rewritten create_keystone_accounts to be a python tool [3] and get the runtime for that subset of setup from ~100s to ~9s [4]. I imagine that if we applied this to the other create_X_accounts functions we would see similar results. > > I think this is so much faster because we avoid repeated costs in openstack client including: python process startup, pkg_resource disk scanning to find entrypoints, and needing to convert names to IDs via the API every time osc is run. 
Given my change shows this can be so much quicker is there any interest in modifying devstack to be faster here? And if so what do we think an appropriate approach would be? > > [0] https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 > [1] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 > [2] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 > [3] https://review.opendev.org/#/c/673108/ > [4] http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 > > Note the jobs compared above all ran on rax-dfw. > > Clark > From mriedemos at gmail.com Tue Aug 6 15:31:29 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Aug 2019 10:31:29 -0500 Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached. In-Reply-To: References: Message-ID: On 8/6/2019 7:18 AM, Mark Goddard wrote: > We do use a larger timeout for glance-api > (haproxy_glance_api_client_timeout > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need > something similar for cinder-api. A 6 hour timeout for cinder API calls would be nuts IMO. The thing that was failing was a volume attachment delete/create from what I recall, which is the newer version (as of Ocata?) for the old initialize_connection/terminate_connection APIs. These are synchronous RPC calls from cinder-api to cinder-volume to do things on the storage backend and we have seen them take longer than 60 seconds in the gate CI runs with the lvm driver. I think the investigation normally turned up lvchange taking over 60 seconds on some concurrent operation locking out the RPC call which eventually results in the MessagingTimeout from oslo.messaging. 
That's unrelated to your gateway timeout from HAProxy but the point is yeah you likely want to bump up those timeouts since cinder-api has these synchronous calls to the cinder-volume service. I just don't think you need to go to 6 hours :). I think the keystoneauth1 default http response timeout is 10 minutes so maybe try that. -- Thanks, Matt From cboylan at sapwetik.org Tue Aug 6 15:49:17 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Aug 2019 08:49:17 -0700 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> Message-ID: On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: > Just a reminder that there is also > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > which was intended to address this same issue. > > I toyed around with it a bit for TripleO installs back then and it did > seem to speed things up, but at the time there was a bug in our client > plugin where it was triggering a prompt for input that was problematic > with the server running in the background. I never really got back to it > once that was fixed. :-/ I'm not tied to any particular implementation. Mostly I wanted to show that we can take this ~5 minute portion of devstack and turn it into a 15 second portion of devstack by improving our use of the service APIs (and possibly even further if we apply it to all of the api interaction). Any idea how difficult it would be to get your client as a service stuff running in devstack again? I do not think we should make a one off change like I've done in my POC. That will just end up being harder to understand and debug in the future since it will be different than all of the other API interaction. 
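The repeated name-to-ID translation cost mentioned earlier in this thread lends itself to memoization. A minimal sketch of that idea (hypothetical code for illustration, not the actual devstack POC):

```python
# Hypothetical sketch of memoizing name -> id lookups; not the devstack POC.
import time

class NameToIdCache:
    """Remember name -> id mappings, with a short TTL to limit staleness."""

    def __init__(self, lookup, ttl=60.0):
        self._lookup = lookup  # lookup(kind, name) -> id, e.g. a REST API call
        self._ttl = ttl
        self._cache = {}       # (kind, name) -> (id, expiry)

    def get_id(self, kind, name):
        now = time.monotonic()
        hit = self._cache.get((kind, name))
        if hit is not None and hit[1] > now:
            return hit[0]  # cache hit: no API round trip
        resource_id = self._lookup(kind, name)
        self._cache[(kind, name)] = (resource_id, now + self._ttl)
        return resource_id

# Stub lookup standing in for a real API call:
calls = []
def fake_lookup(kind, name):
    calls.append((kind, name))
    return f"{kind}-{name}-id"

cache = NameToIdCache(fake_lookup)
assert cache.get_id("project", "demo") == "project-demo-id"
assert cache.get_id("project", "demo") == "project-demo-id"
assert len(calls) == 1  # second call was served from the cache
```

A short TTL bounds how long a stale ID can be served if a resource is deleted and recreated under the same name, which is the invalidation concern raised elsewhere in this thread.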
I like the idea of a manifest or feeding a longer lived process api update commands as we can then avoid requesting new tokens as well as pkg_resource startup time. Such a system could be used by all of devstack as well (avoiding the "this bit is special" problem). Is there any interest from the QA team in committing to an approach and working to do a conversion? I don't want to commit any more time to this myself unless there is strong interest in getting changes merged (as I expect it will be a slow process weeding out places where we've made bad assumptions particularly around plugins). One of the things I found was that using names with osc results in name to id lookups as well. We can avoid these entirely if we remember name to id mappings instead (which my POC does). Any idea if your osc as a service tool does or can do that? Probably have to be more careful for scoping things in a tool like that as it may be reused by people with name collisions across projects/users/groups/domains. > > On 7/26/19 6:53 PM, Clark Boylan wrote: > > Today I have been digging into devstack runtime costs to help Donny Davis understand why tempest jobs sometimes timeout on the FortNebula cloud. One thing I discovered was that the keystone user, group, project, role, and domain setup [0] can take many minutes [1][2] (in the examples here almost 5). > > > > I've rewritten create_keystone_accounts to be a python tool [3] and get the runtime for that subset of setup from ~100s to ~9s [4]. I imagine that if we applied this to the other create_X_accounts functions we would see similar results. > > > > I think this is so much faster because we avoid repeated costs in openstack client including: python process startup, pkg_resource disk scanning to find entrypoints, and needing to convert names to IDs via the API every time osc is run. Given my change shows this can be so much quicker is there any interest in modifying devstack to be faster here? 
And if so what do we think an appropriate approach would be? > > > > [0] https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 > > [1] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 > > [2] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 > > [3] https://review.opendev.org/#/c/673108/ > > [4] http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 > > > > Note the jobs compared above all ran on rax-dfw. > > > > Clark > > > > From mark at stackhpc.com Tue Aug 6 15:59:25 2019 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 6 Aug 2019 16:59:25 +0100 Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached. In-Reply-To: References: Message-ID: On Tue, 6 Aug 2019 at 16:33, Matt Riedemann wrote: > On 8/6/2019 7:18 AM, Mark Goddard wrote: > > We do use a larger timeout for glance-api > > (haproxy_glance_api_client_timeout > > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need > > something similar for cinder-api. > > A 6 hour timeout for cinder API calls would be nuts IMO. The thing that > was failing was a volume attachment delete/create from what I recall, > which is the newer version (as of Ocata?) for the old > initialize_connection/terminate_connection APIs. These are synchronous > RPC calls from cinder-api to cinder-volume to do things on the storage > backend and we have seen them take longer than 60 seconds in the gate CI > runs with the lvm driver. I think the investigation normally turned up > lvchange taking over 60 seconds on some concurrent operation locking out > the RPC call which eventually results in the MessagingTimeout from > oslo.messaging. 
That's unrelated to your gateway timeout from HAProxy > but the point is yeah you likely want to bump up those timeouts since > cinder-api has these synchronous calls to the cinder-volume service. I > just don't think you need to go to 6 hours :). I think the keystoneauth1 > default http response timeout is 10 minutes so maybe try that. > > Yeah, wasn't advocating for 6 hours - just showing which knobs are available :) > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Aug 6 16:16:01 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Aug 2019 16:16:01 +0000 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> Message-ID: <20190806161601.jjf5ar6hczy5533i@yuggoth.org> On 2019-08-06 08:49:17 -0700 (-0700), Clark Boylan wrote: [...] > One of the things I found was that using names with osc results in > name to id lookups as well. We can avoid these entirely if we > remember name to id mappings instead (which my POC does). Any idea > if your osc as a service tool does or can do that? Probably have > to be more careful for scoping things in a tool like that as it > may be reused by people with name collisions across > projects/users/groups/domains. [...] Out of curiosity, could OSC/SDK cache those relationships so they're only looked up once (or at least infrequently)? I guess there are cache invalidation concerns if an entity is deleted and another created out-of-band using the same name, but if it's all done through the same persistent daemon then that's less of a risk right? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Tue Aug 6 16:34:28 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 6 Aug 2019 11:34:28 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> Message-ID: <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> On 8/6/19 10:49 AM, Clark Boylan wrote: > On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: >> Just a reminder that there is also >> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html >> which was intended to address this same issue. >> >> I toyed around with it a bit for TripleO installs back then and it did >> seem to speed things up, but at the time there was a bug in our client >> plugin where it was triggering a prompt for input that was problematic >> with the server running in the background. I never really got back to it >> once that was fixed. :-/ > > I'm not tied to any particular implementation. Mostly I wanted to show that we can take this ~5 minute portion of devstack and turn it into a 15 second portion of devstack by improving our use of the service APIs (and possibly even further if we apply it to all of the api interaction). Any idea how difficult it would be to get your client as a service stuff running in devstack again? I wish I could take credit, but this is actually Dan Berrange's work. :-) > > I do not think we should make a one off change like I've done in my POC. That will just end up being harder to understand and debug in the future since it will be different than all of the other API interaction. I like the idea of a manifest or feeding a longer lived process api update commands as we can then avoid requesting new tokens as well as pkg_resource startup time. 
Such a system could be used by all of devstack as well (avoiding the "this bit is special" problem). > > Is there any interest from the QA team in committing to an approach and working to do a conversion? I don't want to commit any more time to this myself unless there is strong interest in getting changes merged (as I expect it will be a slow process weeding out places where we've made bad assumptions particularly around plugins). > > One of the things I found was that using names with osc results in name to id lookups as well. We can avoid these entirely if we remember name to id mappings instead (which my POC does). Any idea if your osc as a service tool does or can do that? Probably have to be more careful for scoping things in a tool like that as it may be reused by people with name collisions across projects/users/groups/domains. I don't believe this would handle name to id mapping. It's a very thin wrapper around the regular client code that just makes it persistent so we don't pay the startup costs every call. On the plus side that means it basically works like the vanilla client, on the minus side that means it may not provide as much improvement as a more targeted solution. IIRC it's pretty easy to use, so I can try it out again and make sure it still works and still provides a performance benefit. > >> >> On 7/26/19 6:53 PM, Clark Boylan wrote: >>> Today I have been digging into devstack runtime costs to help Donny Davis understand why tempest jobs sometimes timeout on the FortNebula cloud. One thing I discovered was that the keystone user, group, project, role, and domain setup [0] can take many minutes [1][2] (in the examples here almost 5). >>> >>> I've rewritten create_keystone_accounts to be a python tool [3] and get the runtime for that subset of setup from ~100s to ~9s [4]. I imagine that if we applied this to the other create_X_accounts functions we would see similar results. 
>>> >>> I think this is so much faster because we avoid repeated costs in openstack client including: python process startup, pkg_resource disk scanning to find entrypoints, and needing to convert names to IDs via the API every time osc is run. Given my change shows this can be so much quicker is there any interest in modifying devstack to be faster here? And if so what do we think an appropriate approach would be? >>> >>> [0] https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 >>> [1] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 >>> [2] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 >>> [3] https://review.opendev.org/#/c/673108/ >>> [4] http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 >>> >>> Note the jobs compared above all ran on rax-dfw. >>> >>> Clark >>> >> >> > From cdent+os at anticdent.org Tue Aug 6 16:38:48 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 6 Aug 2019 17:38:48 +0100 (BST) Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190806161601.jjf5ar6hczy5533i@yuggoth.org> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org> Message-ID: On Tue, 6 Aug 2019, Jeremy Stanley wrote: > On 2019-08-06 08:49:17 -0700 (-0700), Clark Boylan wrote: > [...] >> One of the things I found was that using names with osc results in >> name to id lookups as well. We can avoid these entirely if we >> remember name to id mappings instead (which my POC does). Any idea >> if your osc as a service tool does or can do that? 
Probably have >> to be more careful for scoping things in a tool like that as it >> may be reused by people with name collisions across >> projects/users/groups/domains. > [...] > > Out of curiosity, could OSC/SDK cache those relationships so they're > only looked up once (or at least infrequently)? I guess there are > cache invalidation concerns if an entity is deleted and another > created out-of-band using the same name, but if it's all done > through the same persistent daemon then that's less of a risk right? If we are in a situation where name to id and id to name translations are slow at the services' API layer, isn't that a really big bug? One where the fixing is beneficial to everyone, including devstack users? (Yes, I'm aware of TCP overhead and all that, but I reckon that's way down on the list of contributing factors here?) -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From cboylan at sapwetik.org Tue Aug 6 16:40:50 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Aug 2019 09:40:50 -0700 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190806161601.jjf5ar6hczy5533i@yuggoth.org> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org> Message-ID: <339c12a2-a9bc-4938-b9c0-4f48ef846ad8@www.fastmail.com> On Tue, Aug 6, 2019, at 9:17 AM, Jeremy Stanley wrote: > On 2019-08-06 08:49:17 -0700 (-0700), Clark Boylan wrote: > [...] > > One of the things I found was that using names with osc results in > > name to id lookups as well. We can avoid these entirely if we > > remember name to id mappings instead (which my POC does). Any idea > > if your osc as a service tool does or can do that? Probably have > > to be more careful for scoping things in a tool like that as it > > may be reused by people with name collisions across > > projects/users/groups/domains. > [...] 
> > Out of curiosity, could OSC/SDK cache those relationships so they're > only looked up once (or at least infrequently)? I guess there are > cache invalidation concerns if an entity is deleted and another > created out-of-band using the same name, but if it's all done > through the same persistent daemon then that's less of a risk right? They could cache these things too. The concern is a valid one too; however, a relatively short TTL may address that, as these resources tend to all be used near each other. For example, create a router, network, and subnet in neutron, or a user, role, and group/domain in keystone. That said, I think a bigger win would be caching tokens if we want to make changes to caching for osc (I think it can cache tokens, but we don't set it up properly in devstack?). Every invocation of osc first hits the pkg_resources cost, then hits the catalog and token lookup costs, then does name to id translations, then does the actual thing you requested. Addressing the first two upfront costs likely has a bigger impact than name to id translations. Clark From brentonpoke at outlook.com Tue Aug 6 18:44:24 2019 From: brentonpoke at outlook.com (Brenton Poke) Date: Tue, 6 Aug 2019 18:44:24 +0000 Subject: [scientific]How are orgs using openstack for research? Message-ID: One of the answers I'm seeking is whether or not some orgs create shareable configurations for software stacks that might do things like collect data into sinks, similar to what Amazon is attempting to offer with the AWS research cloud. The limitation I see with AWS is that everything is built specifically around AWS services, like using S3 to store everything. For example, if your output would be better represented as a graph, it would make more sense to use a Kafka-OrientDB sink that can be reused. Make it a Helm chart? I'm not sure. I was told that CERN uses OpenStack for research, but I have no idea as to what extent or if they contribute anything back.
Does anyone know how a research org is using the infrastructure right now? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6530 bytes Desc: not available URL: From ildiko.vancsa at gmail.com Tue Aug 6 19:04:22 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 6 Aug 2019 21:04:22 +0200 Subject: [edge][all] Edge Hacking Days - August 9, 16 In-Reply-To: <7966B87C-E600-4681-83FB-FD947250012A@gmail.com> References: <7966B87C-E600-4681-83FB-FD947250012A@gmail.com> Message-ID: <2BB743BA-5BE8-4DC3-BA7C-D7ACDA3CBED1@gmail.com> Hi, Based on the Doodle poll results __August 9 and August 16__ got the most votes. You can find the dial in details on this etherpad: https://etherpad.openstack.org/p/osf-edge-hacking-days If you’re interested in joining please __add your name and the time period (with time zone) when you will be available__ on these dates. You can also add topics that you would be interested in working on. Potential topics to work on: * Building and testing edge reference architectures * Keystone testing and bug fixing Please let me know if you have any questions. See you on Friday! :) Thanks and Best Regards, Ildikó > On 2019. Jul 30., at 14:04, Ildiko Vancsa wrote: > > Hi, > > I’m reaching out with an attempt to organize hacking days to work on edge related tasks. > > The idea is to get together remotely on IRC/Zoom or any other platform that supports remote communication and work on items like building and testing our reference architectures or work on some project specific items like in Keystone or Ironic. 
> > Here are Doodle polls for the next three months: > > August: https://doodle.com/poll/ucfc9w7iewe6gdp4 > September: https://doodle.com/poll/3cyqxzr9vd82pwtr > October: https://doodle.com/poll/6nzziuihs65hwt7b > > Please mark any day when you have some availability to dedicate to hack even if it’s not a full day. > > Please let me know if you have any questions. > > As a reminder, you can find the edge computing group’s resources and information about latest activities here: https://wiki.openstack.org/wiki/Edge_Computing_Group > > Thanks and Best Regards, > Ildikó > (IRC: ildikov) > > From mnaser at vexxhost.com Tue Aug 6 19:18:52 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 6 Aug 2019 15:18:52 -0400 Subject: [openstack-ansible] office hours update Message-ID: Hey everyone, Here’s the update of what happened in this week’s OpenStack Ansible Office Hours. We talked about who is attending the Shanghai Summit this year, but not many of us are compared to past years. There was an issue with Manila still failing, but it's been fixed and there's good progress on that front. There’s also an issue with adding a novnc test to os_nova because the tempest plugin used isn’t from master, which brought up some discussion about perhaps starting to use the Zuul checked-out roles. Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com From fungi at yuggoth.org Tue Aug 6 19:19:09 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Aug 2019 19:19:09 +0000 Subject: [OSSA-2019-003] Nova Server Resource Faults Leak External Exception Details (CVE-2019-14433) Message-ID: <20190806191908.o52es6mbavyle2k4@yuggoth.org> ========================================================================== OSSA-2019-003: Nova Server Resource Faults Leak External Exception Details ========================================================================== :Date: August 06, 2019 :CVE: CVE-2019-14433 Affects ~~~~~~~ - Nova: <17.0.12,>=18.0.0<18.2.2,>=19.0.0<19.0.2 Description ~~~~~~~~~~~ Donny Davis with Intel reported a vulnerability in Nova Compute resource fault handling. If an API request from an authenticated user ends in a fault condition due to an external exception, details of the underlying environment may be leaked in the response and could include sensitive configuration or other data. Patches ~~~~~~~ - https://review.openstack.org/674908 (Ocata) - https://review.openstack.org/674877 (Pike) - https://review.openstack.org/674859 (Queens) - https://review.openstack.org/674848 (Rocky) - https://review.openstack.org/674828 (Stein) - https://review.openstack.org/674821 (Train) Credits ~~~~~~~ - Donny Davis from Intel (CVE-2019-14433) References ~~~~~~~~~~ - https://launchpad.net/bugs/1837877 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14433 Notes ~~~~~ - The stable/ocata and stable/pike branches are under extended maintenance and will receive no new point releases, but patches for them are provided as a courtesy. -- Jeremy Stanley OpenStack Vulnerability Management Team -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtroyer at gmail.com Tue Aug 6 19:44:36 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 6 Aug 2019 14:44:36 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org> Message-ID: On Tue, Aug 6, 2019 at 11:42 AM Chris Dent wrote: > If we are in a situation where name to id and id to name > translations are slow at the services' API layer, isn't that a > really big bug? One where the fixing is beneficial to everyone, > including devstack users? While the name->ID lookup is an additional API round trip, it does not cause an additional python startup scan, which is the major killer here. In fact, it is possible that there is more than one lookup and that at least one will always be done because we do not know if that value is a name or an ID. The GET is done in any case because nearly every time (in non-create operations) we probably want the full object anyway. I also played with starting OSC as a background process a while back, it actually does work pretty well and with a bit more error handling would have been good enough(tm)[0]. The major concern with it then was it was not representative of how people actually use OSC and changed the testing value we get from doing that. dt [0] Basically run interactive mode in background, plumb up stdin/stdout to some descriptors and off to the races. 
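[Editorial note: the background-process idea in Dean's footnote can be sketched as a small wrapper that launches one long-lived child process and writes commands to its stdin, so the interpreter startup and pkg_resources scan costs are paid only once. This is a hypothetical illustration, not the actual OSC experiment; the stand-in child below just upper-cases each line, whereas a real test would launch `openstack` in interactive mode.]

```python
import subprocess

class PersistentCLI:
    """Keep one long-lived child process and feed it commands via stdin,
    reading one line of output per command. In the real experiment the
    child would be `openstack` in interactive mode; any line-oriented
    program works for the sketch."""

    def __init__(self, argv):
        # The startup cost (interpreter boot, plugin scan) is paid
        # exactly once, when the child is launched here.
        self.proc = subprocess.Popen(
            argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            text=True, bufsize=1)

    def run(self, command):
        # Each subsequent command is just a pipe write plus a read.
        self.proc.stdin.write(command + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().rstrip("\n")

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()

# Stand-in child that upper-cases each input line; replace with
# ["openstack"] (interactive mode) to try the real thing.
child = ["python3", "-u", "-c",
         "import sys\nfor line in sys.stdin: print(line.strip().upper())"]

cli = PersistentCLI(child)
out1 = cli.run("server list")
out2 = cli.run("network list")
cli.close()
print(out1)  # SERVER LIST
print(out2)  # NETWORK LIST
```

The error handling Dean mentions (child crashes, partial reads, commands that emit more than one line) is exactly what this sketch omits.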
-- Dean Troyer dtroyer at gmail.com From fungi at yuggoth.org Tue Aug 6 20:00:14 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Aug 2019 20:00:14 +0000 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org> Message-ID: <20190806200014.5rkpjvg3uyhm2rkx@yuggoth.org> On 2019-08-06 14:44:36 -0500 (-0500), Dean Troyer wrote: [...] > The major concern with it then was it was not representative of > how people actually use OSC and changed the testing value we get > from doing that. [...] In an ideal world, OSC would have explicit functional testing independent of the side effect of calling it when standing up DevStack. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Tue Aug 6 20:00:15 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 6 Aug 2019 16:00:15 -0400 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org> Message-ID: > On Aug 6, 2019, at 3:44 PM, Dean Troyer wrote: > > On Tue, Aug 6, 2019 at 11:42 AM Chris Dent wrote: >> If we are in a situation where name to id and id to name >> translations are slow at the services' API layer, isn't that a >> really big bug? One where the fixing is beneficial to everyone, >> including devstack users? > > While the name->ID lookup is an additional API round trip, it does not > cause an additional python startup scan, which is the major killer > here. 
In fact, it is possible that there is more than one lookup > and that at least one will always be done because we do not know if that > value is a name or an ID. The GET is done in any case because nearly > every time (in non-create operations) we probably want the full object > anyway. > > I also played with starting OSC as a background process a while back, > it actually does work pretty well and with a bit more error handling > would have been good enough(tm)[0]. The major concern with it then > was it was not representative of how people actually use OSC and > changed the testing value we get from doing that. > > dt > > [0] Basically run interactive mode in background, plumb up > stdin/stdout to some descriptors and off to the races. > > -- > Dean Troyer > dtroyer at gmail.com > I made some notes about the plugin lookup issue a while back [1] and I looked at that again the most recent time we were in Denver [2], and came to the conclusion that the implementation was going to require more changes in osc-lib than I was going to have time to figure out on my own. Unfortunately, it’s not a simple matter of choosing between looking at one internal cache or doing the pkg_resources scan because of the plugin version management layer osc-lib added. In any case, I think we’ve discussed the fact many times that the way to fix this is to not scan for plugins unless we have to do so. We just need someone to sit down and work on figuring out how to make that work. Doug [1] https://etherpad.openstack.org/p/mFsAgTZggf [2] https://etherpad.openstack.org/p/train-ptg-osc From stig.openstack at telfer.org Tue Aug 6 20:08:31 2019 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 6 Aug 2019 21:08:31 +0100 Subject: [scientific-sig] IRC meeting today - experiences with accounting and chargeback Message-ID: <3E29D363-66BC-47D7-A137-9B86FDBD434E@telfer.org> Hi all - We have a Scientific SIG IRC meeting today at 2100 UTC (in about an hour’s time) in channel #openstack-meeting. 
Everyone is welcome. Today’s agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_6th_2019 We’d like to continue with last week’s discussion, gathering notes on experiences with accounting and chargeback for scientific OpenStack deployments. Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Tue Aug 6 22:04:14 2019 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 6 Aug 2019 23:04:14 +0100 Subject: [scientific]How are orgs using openstack for research? In-Reply-To: References: Message-ID: Hi Brenton - This is the kind of discussion that goes on within the Scientific SIG (https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meetings ), particularly your final question. It would be great to have you join the meetings and raise some of these topics for discussion. The meetings alternate between EMEA and Americas timezones. Where are you based? Cheers, Stig (oneswig) > On 6 Aug 2019, at 19:44, Brenton Poke wrote: > > One of the answers I’m seeking is whether or not some orgs create shareable configurations for software stacks that might do things like collect data into sinks like that of what amazon is attempting to offer with aws research cloud. The limitation I see with aws is that everything is built specifically with aws systems, like using S3 to store everything. For example, if your output would be better represented as a graph, it would make more sense to use a kafka-orientDB sink that can be reused. Make it a helm chart? I’m not sure. I was told that CERN uses openstack for research, but I have no idea as to what extent or if they contribute anything back. Does anyone know how a research org is using the infrastructure right now? -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed Aug 7 00:01:11 2019 From: corvus at inaugust.com (James E. 
Blair) Date: Tue, 06 Aug 2019 17:01:11 -0700 Subject: Zuul log location changing Message-ID: <87y305onco.fsf@meyer.lemoncheese.net> Hi, We've been working for some time[1] to retire our current static log server in favor of storing Zuul job logs in Swift. We're just about ready to do that. This means that not only do we get to use another great OpenStack project, but we also can stop worrying about fscking our 14TB log partition whenever it gets hit with a cosmic ray. The change will happen in two phases, only the first of which should be readily apparent. Phase 1: On Monday August 12, 2019 we will change the URL that Zuul reports back to Gerrit. Instead of being a direct link to the log server, it will be a link to the Zuul Build page for the job. This is part of Zuul's web interface which has been around for a while, but isn't well known since we haven't linked to it yet. The build page shows a summary of information about the build, including output snippets for failed tasks. Next to the "Summary" tab, you'll find the "Logs" tab. This contains an expandable index of all the log files uploaded for the build. If they are text files, they can be rendered in-app with line-number hyperlinks and severity filtering. There are also direct links to the files on the log server. Links to preview sites (e.g., for docs builds) will show up in the "Artifacts" section. We also plan to further enhance the build page with additional features. Here are some links to sample build pages so you can see what it's like: https://zuul.opendev.org/t/openstack/build/a6e13a8098fc4a1fbff43d8f2c27ad29 https://zuul.opendev.org/t/openstack/build/75d1e8d4ffaf477db00520d7bfd77246 This step is necessary because our static log server implements a number of features as WSGI middleware in Apache. We have re-implemented these on the Zuul build page, so there should be no loss in functionality (in fact, we think this is an improvement). 
Once in place, we can change the backend storage options without impacting the user interface. Phase 2: Shortly afterwards (depending on how phase 1 goes), we will configure jobs to upload logs to Swift instead of the static log server. At this point, there should be no user-visible change, since the main interface for interacting with logs is now the Zuul build page. However, you may notice that log URLs have changed from our static log server to one of six different Swift instances. The Swift instance used to store the logs for any given build is chosen at random from among our providers, and is yet another really cool multi-cloud feature we get by using OpenStack. Thanks to our amazing providers and all of the folks who have helped with this effort over the years[1]. Please let us know if you have any questions or encounter any issues, either here, or in #openstack-infra on IRC. -Jim [1] Years. So. Many. Years. From corvus at inaugust.com Wed Aug 7 00:32:38 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 06 Aug 2019 17:32:38 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> (Doug Hellmann's message of "Sat, 3 Aug 2019 20:48:45 -0400") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <87blx1olw9.fsf@meyer.lemoncheese.net> Doug Hellmann writes: > Every OpenStack development cycle and release has a code-name. As with > everything we do, the process of choosing the name is open and based > on input from community members. The name criteria are described in [1], > and this time around we were looking for names starting with U > associated with China. With some extra assistance from local community > members (thank you to everyone who helped!), we have a list of > candidate names that will go into the poll. Below is a subset of the > names proposed, including those that meet the standard criteria and > some of the suggestions that do not. 
Before we start the poll, the > process calls for us to provide a period of 1 week so that any names > removed from the proposals can be discussed and any last-minute > objections can be raised. We will start the poll next week using this > list, including any modifications based on that discussion. Hi, I had previously added an entry to the suggestions wiki page, but I did not see it in this email: * University https://en.wikipedia.org/wiki/List_of_universities_and_colleges_in_Shanghai (Shanghai is famous for its universities) To pick one at random, the "University of Shanghai for Science and Technology" is a place in Shanghai; I think that meets the requirement for "physical or human geography". It's a point of pride that Shanghai has so many renowned universities, so I think it's a good choice and one well worth considering. -Jim From dangtrinhnt at gmail.com Wed Aug 7 00:48:25 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 7 Aug 2019 09:48:25 +0900 Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture In-Reply-To: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> References: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> Message-ID: Hi Jeremy, Thanks for pointing that out. They're pretty helpful. Sorry for not clarifying the purpose of my question in the first email. Right now my company is using Jenkins for CI/CD which is not scalable and for me it's hard to define job pipeline because of XML. I'm about to build a demonstration for my company using Zuul with Github as a replacement and trying to make sense of the OpenStack deployment of Zuul. I have been working with OpenStack projects for a couple of cycles in which Zuul has shown me its greatness and I think I can bring that power to the company. 
Bests, On Tue, Aug 6, 2019 at 11:42 PM Jeremy Stanley wrote: > On 2019-08-06 16:49:15 +0900 (+0900), Trinh Nguyen wrote: > > Is there any documents somewhere describing the current > > architecture of the CI/CD system that OpenStack Infrastructure is > > running? > > OpenStack uses the OpenDev deployment of the Zuul project gating > system for CI/CD. The OpenDev infrastructure sysadmins maintain some > operational Zuul deployment documentation for their own purposes at > https://docs.openstack.org/infra/system-config/zuul.html and also > some information which was written separately during v3 migration at > https://docs.openstack.org/infra/system-config/zuulv3.html which is > due to get rolled into the first. Zuul itself already has excellent > documentation (written by many of the same people) and its main > architecture can be found described in the Admin Guide, with the > diagram at https://zuul-ci.org/docs/zuul/admin/components.html > providing a nice component overview. > > What specifically are you looking for? > -- > Jeremy Stanley > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Wed Aug 7 01:06:16 2019 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 6 Aug 2019 20:06:16 -0500 Subject: Shanghai Summit Schedule Live Message-ID: <4AB2799E-0E12-4B98-8681-0939DF6D8218@openstack.org> Hi everyone, The agenda for the Open Infrastructure Summit (formerly the OpenStack Summit) is now live! If you need a reason to join the Summit in Shanghai, November 4-6, here’s what you can expect: Breakout sessions spanning 30+ open source projects from technical community leaders and organizations including ARM, WalmartLabs, China Mobile, China Railway, Shanghai Electric Power Company, China UnionPay, Haitong Securities Company, CERN, and more. Project updates and onboarding from OSF projects: Airship, Kata Containers, OpenStack, StarlingX, and Zuul. 
Join collaborative sessions at the Forum , where open infrastructure operators and upstream developers will gather to jointly chart the future of open source infrastructure, discussing topics ranging from upgrades to networking models and how to get started contributing. Get hands on training around open source technologies directly from the developers and operators building the software. Now what? Register before prices increase on August 14 at 11:59pm PT (August 15 at 2:59pm China Standard Time). Recruiting new talent? Pitching a new product? Enhance the visibility of your organization by sponsoring the Summit ! Questions? Reach out to summit at openstack.org Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Aug 7 01:25:16 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Aug 2019 01:25:16 +0000 Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture In-Reply-To: References: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> Message-ID: <20190807012515.zkjfmrrnffpx3rql@yuggoth.org> On 2019-08-07 09:48:25 +0900 (+0900), Trinh Nguyen wrote: [...] > Right now my company is using Jenkins for CI/CD which is not > scalable and for me it's hard to define job pipeline because of > XML. Not steer you away from Zuul, but back when we originally used Jenkins we noticed the same things. We conquered the XML problem by inventing jenkins-job-builder, which allows you to define Jenkins jobs via templated YAML and then use that to generate the XML it expects. The scalability issue was worked around by creating the jenkins-gearman plugin and having (an earlier incarnation of) Zuul distribute jobs across multiple Jenkins masters via the Gearman protocol. 
You'll notice that current Zuul versions retain some of this heritage by continuing to use YAML (much of which is for Ansible) and multiple executors (which are no longer Jenkins masters, just servers invoking Ansible) communicating with the scheduler via Gearman. For us it's been a natural evolution. > I'm about to build a demonstration for my company using Zuul with > Github as a replacement and trying to make sense of the OpenStack > deployment of Zuul. I have been working with OpenStack projects > for a couple of cycles in which Zuul has shown me its greatness > and I think I can bring that power to the company. [...] If you're looking for inspiration, check out some of the user stories at https://zuul-ci.org/users.html and visit the Zuul community in the #zuul channel on the Freenode IRC network or maybe subscribe to the mailing lists here: http://lists.zuul-ci.org/cgi-bin/mailman/listinfo Zuul has some remarkably thorough documentation, and helpful folks who are always happy to answer your questions. Good luck with your demo and let us know if you need any help! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Aug 7 02:06:08 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 06 Aug 2019 19:06:08 -0700 Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture In-Reply-To: (Trinh Nguyen's message of "Wed, 7 Aug 2019 09:48:25 +0900") References: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> Message-ID: <87o911n2zz.fsf@meyer.lemoncheese.net> Trinh Nguyen writes: > Hi Jeremy, > > Thanks for pointing that out. They're pretty helpful. > > Sorry for not clarifying the purpose of my question in the first email. > Right now my company is using Jenkins for CI/CD which is not scalable and > for me it's hard to define job pipeline because of XML. 
I'm about to build > a demonstration for my company using Zuul with Github as a replacement and > trying to make sense of the OpenStack deployment of Zuul. I have been > working with OpenStack projects for a couple of cycles in which Zuul has > shown me its greatness and I think I can bring that power to the company. > > Bests, In addition to the excellent information that Jeremy provided, since you're talking about setting up a proof of concept, you may find it simpler to start with the Zuul Quick-Start: https://zuul-ci.org/start That's a container-based tutorial that will set you up with a complete Zuul system running on a single host, along with a private Gerrit instance. Once you have that running, it's fairly straightforward to take that and update the configuration to use GitHub instead of Gerrit. -Jim From sundar.nadathur at intel.com Wed Aug 7 06:06:52 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 7 Aug 2019 06:06:52 +0000 Subject: [cyborg] Poll for new weekly IRC meeting time Message-ID: <1CC272501B5BC543A05DB90AA509DED5275E2395@fmsmsx122.amr.corp.intel.com> The current Cyborg weekly IRC meeting time [1] is a conflict for many. We are looking for a better time that works for more people, with the understanding that no time is perfect for all. Please fill out this poll: https://doodle.com/poll/6t279f9y6msztz7x Be sure to indicate which times do not work for you. You can propose a new timeslot beyond what I included in the poll. [1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Weekly_IRC_Cyborg_team_meeting Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rico.lin.guanyu at gmail.com Wed Aug 7 06:46:35 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 7 Aug 2019 14:46:35 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87blx1olw9.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Aug 7, 2019 at 9:41 AM James E. Blair wrote: > I had previously added an entry to the suggestions wiki page, but I did > not see it in this email: > > * University > https://en.wikipedia.org/wiki/List_of_universities_and_colleges_in_Shanghai > (Shanghai is famous for its universities) > > To pick one at random, the "University of Shanghai for Science and > Technology" is a place in Shanghai; I think that meets the requirement > for "physical or human geography". > > It's a point of pride that Shanghai has so many renowned universities, > so I think it's a good choice and one well worth considering. > Just added it to https://wiki.openstack.org/wiki/Release_Naming/U_Proposals and will make sure the TCs evaluate this one when considering names that do not meet the criteria. Thanks for the idea! > -Jim > -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vladimir.blando at gmail.com Wed Aug 7 07:34:42 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Wed, 7 Aug 2019 15:34:42 +0800 Subject: [kolla-ansible] Failed to complete TASK [keystone: Creating the keystone service and enpoint] Message-ID: OS: CentOS 7 on Baremetal nodes (3x controller, 1x compute, 2 bonded 10G interfaces) Ansible: 2.8.3 kolla-ansible: 8.0.0 kolla_base_distro: "centos" kolla_install_type: "source" # I also tried "binary" openstack_release: "stein" I have already tried redeploying (fresh OS reinstallation) 3 times, and kolla-ansible deploy always fails on this TASK ( http://paste.openstack.org/show/755599/) and won't continue and finish the deployment. I think the issue is that the admin_url was not created ( http://paste.openstack.org/show/755600/), but why? Which task failed, leaving the admin_url uncreated? Kolla-ansible only specified that 1 task failed. The keystone logs (http://paste.openstack.org/show/755601/) say that the admin endpoint was created. The 3 keystone containers (keystone_fernet, keystone_ssh and keystone) are running without errors in their logs, though. - Vlad ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Wed Aug 7 08:23:55 2019 From: iurygregory at gmail.com (Iury Gregory) Date: Wed, 7 Aug 2019 10:23:55 +0200 Subject: Zuul log location changing In-Reply-To: <87y305onco.fsf@meyer.lemoncheese.net> References: <87y305onco.fsf@meyer.lemoncheese.net> Message-ID: Congratulations to everyone involved! I really liked the new build page, pretty cool! On Wed, Aug 7, 2019 at 02:05, James E. Blair wrote: > Hi, > > We've been working for some time[1] to retire our current static log > server in favor of storing Zuul job logs in Swift. We're just about > ready to do that. 
> > This means that not only do we get to use another great OpenStack > project, but we also can stop worrying about fscking our 14TB log > partition whenever it gets hit with a cosmic ray. > > The change will happen in two phases, only the first of which should be > readily apparent. > > Phase 1: > > On Monday August 12, 2019 we will change the URL that Zuul reports back > to Gerrit. Instead of being a direct link to the log server, it will be > a link to the Zuul Build page for the job. This is part of Zuul's web > interface which has been around for a while, but isn't well known since > we haven't linked to it yet. > > The build page shows a summary of information about the build, including > output snippets for failed tasks. Next to the "Summary" tab, you'll > find the "Logs" tab. This contains an expandable index of all the log > files uploaded for the build. If they are text files, they can be > rendered in-app with line-number hyperlinks and severity filtering. > There are also direct links to the files on the log server. > > Links to preview sites (e.g., for docs builds) will show up in the > "Artifacts" section. > > We also plan to further enhance the build page with additional features. > > Here are some links to sample build pages so you can see what it's like: > > > https://zuul.opendev.org/t/openstack/build/a6e13a8098fc4a1fbff43d8f2c27ad29 > > https://zuul.opendev.org/t/openstack/build/75d1e8d4ffaf477db00520d7bfd77246 > > This step is necessary because our static log server implements a number > of features as WSGI middleware in Apache. We have re-implemented > these on the Zuul build page, so there should be no loss in > functionality (in fact, we think this is an improvement). Once in > place, we can change the backend storage options without impacting the > user interface. > > Phase 2: > > Shortly afterwards (depending on how phase 1 goes), we will configure > jobs to upload logs to Swift instead of the static log server. 
At this > point, there should be no user-visible change, since the main interface > for interacting with logs is now the Zuul build page. However, you may > notice that log urls have changed from our static log server to one of > six different Swift instances. > > The Swift instance used to store the logs for any given build is chosen > at random from among our providers, and is yet another really cool > multi-cloud feature we get by using OpenStack. > > Thanks to our amazing providers and all of the folks who have helped > with this effort over the years[1]. > > Please let us know if you have any questions or encounter any issues, > either here, or in #openstack-infra on IRC. > > -Jim > > [1] Years. So. Many. Years. > > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vladimir.blando at gmail.com Wed Aug 7 08:40:26 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Wed, 7 Aug 2019 16:40:26 +0800 Subject: [kolla-ansible] Failed to complete TASK [keystone: Creating the keystone service and enpoint] In-Reply-To: References: Message-ID: I was intrigued by the error so I tried redeploying but this time on a non-HA deploy (enable_haproxy=no) and it also error'd on that same TASK ( http://paste.openstack.org/show/755605/ ) - Vlad ᐧ On Wed, Aug 7, 2019 at 3:34 PM vladimir franciz blando < vladimir.blando at gmail.com> wrote: > OS: CentOS 7 on Baremetal nodes (3x controller, 1xcompute, 2 bonded 10G > interfaces) > Ansible: 2.8.3 > kolla-ansible: 8.0.0 > > kolla_base_distro: "centos" > kolla_install_type: "source" # I also tried "binary" > openstack_release: "stein" > > I already tried redeploying (fresh OS reinstallation) 3 times already and > kolla-ansible deploy always fails on this TASK ( > http://paste.openstack.org/show/755599/) and won't continue and finish > the deployment. And I think the issue is that the admin_url was not > created (http://paste.openstack.org/show/755600/) but why? Which task > failed to not create the admin_url? Kolla-ansible only specified that 1 > task failed. On keystone logs (http://paste.openstack.org/show/755601/) > it says that the admin endpoint was created. The 3 keystone containers > (keystone_fernet, keystone_ssh and keystone) are running without error on > their logs though. > > - Vlad > > > ᐧ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Wed Aug 7 08:43:49 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 7 Aug 2019 08:43:49 +0000 Subject: [scientific]How are orgs using openstack for research? 
In-Reply-To: References: Message-ID: <4788571F-B1F9-4044-AEAA-DA9B96D1656E@cern.ch> Brenton, In addition to Stig’s recommendation on the Scientific SIG, CERN does use OpenStack extensively, including for running Kafka/Hadoop/Spark (some details at https://indico.cern.ch/event/728770/contributions/3001750/attachments/1653323/2645467/HadoopatCERN_Hepix2018spring.pdf) with all relevant changes contributed back to the open source communities. Some other scientific use cases were covered at the recent CERN OpenStack day - Slides/Video at https://indico.cern.ch/event/776411/timetable/#20190527.detailed Summit talks on the CERN cloud are at https://www.openstack.org/videos/search?search=cern Tim On 7 Aug 2019, at 00:04, Stig Telfer > wrote: Hi Brenton - This is the kind of discussion that goes on within the Scientific SIG (https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meetings), particularly your final question. It would be great to have you join the meetings and raise some of these topics for discussion. The meetings alternate between EMEA and Americas timezones. Where are you based? Cheers, Stig (oneswig) On 6 Aug 2019, at 19:44, Brenton Poke > wrote: One of the answers I’m seeking is whether or not some orgs create shareable configurations for software stacks that might do things like collect data into sinks like that of what amazon is attempting to offer with aws research cloud. The limitation I see with aws is that everything is built specifically with aws systems, like using S3 to store everything. For example, if your output would be better represented as a graph, it would make more sense to use a kafka-orientDB sink that can be reused. Make it a helm chart? I’m not sure. I was told that CERN uses openstack for research, but I have no idea as to what extent or if they contribute anything back. Does anyone know how a research org is using the infrastructure right now? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnasiadka at gmail.com Wed Aug 7 09:16:41 2019 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Wed, 7 Aug 2019 11:16:41 +0200 Subject: [kolla-ansible] Failed to complete TASK [keystone: Creating the keystone service and enpoint] In-Reply-To: References: Message-ID: Hi Vlad, I think the message: "Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Internal Server Error (HTTP 500)” is the key here Can you please raise a bug in launchpad (https://launchpad.net/kolla-ansible) and attach: kolla-ansible package version /etc/kolla/globals.yml full log from kolla-ansible -vvv deploy as a starter? Best regards, Michal śr., 7 sie 2019 o 10:41 vladimir franciz blando napisał(a): > I was intrigued by the error so I tried redeploying but this time on a > non-HA deploy (enable_haproxy=no) and it also error'd on that same TASK ( > http://paste.openstack.org/show/755605/ ) > > - Vlad > > ᐧ > > On Wed, Aug 7, 2019 at 3:34 PM vladimir franciz blando < > vladimir.blando at gmail.com> wrote: > >> OS: CentOS 7 on Baremetal nodes (3x controller, 1xcompute, 2 bonded 10G >> interfaces) >> Ansible: 2.8.3 >> kolla-ansible: 8.0.0 >> >> kolla_base_distro: "centos" >> kolla_install_type: "source" # I also tried "binary" >> openstack_release: "stein" >> >> I already tried redeploying (fresh OS reinstallation) 3 times already and >> kolla-ansible deploy always fails on this TASK ( >> http://paste.openstack.org/show/755599/) and won't continue and finish >> the deployment. And I think the issue is that the admin_url was not >> created (http://paste.openstack.org/show/755600/) but why? Which task >> failed to not create the admin_url? Kolla-ansible only specified that 1 >> task failed. On keystone logs (http://paste.openstack.org/show/755601/) >> it says that the admin endpoint was created. 
The 3 keystone containers >> (keystone_fernet, keystone_ssh and keystone) are running without error on >> their logs though. >> >> - Vlad >> >> >> ᐧ >> > -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladimir.blando at gmail.com Wed Aug 7 09:24:40 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Wed, 7 Aug 2019 17:24:40 +0800 Subject: [kolla-ansible] Failed to complete TASK [keystone: Creating the keystone service and enpoint] In-Reply-To: References: Message-ID: Sure. - Vlad ᐧ On Wed, Aug 7, 2019 at 5:17 PM Michał Nasiadka wrote: > Hi Vlad, > > I think the message: > "Could not find versioned identity endpoints when attempting to > authenticate. Please check that your auth_url is correct. Internal Server > Error (HTTP 500)” > is the key here > > Can you please raise a bug in launchpad ( > https://launchpad.net/kolla-ansible) and attach: > kolla-ansible package version > /etc/kolla/globals.yml > full log from kolla-ansible -vvv deploy > > as a starter? > > Best regards, > Michal > > śr., 7 sie 2019 o 10:41 vladimir franciz blando > napisał(a): > >> I was intrigued by the error so I tried redeploying but this time on a >> non-HA deploy (enable_haproxy=no) and it also error'd on that same TASK ( >> http://paste.openstack.org/show/755605/ ) >> >> - Vlad >> >> ᐧ >> >> On Wed, Aug 7, 2019 at 3:34 PM vladimir franciz blando < >> vladimir.blando at gmail.com> wrote: >> >>> OS: CentOS 7 on Baremetal nodes (3x controller, 1xcompute, 2 bonded 10G >>> interfaces) >>> Ansible: 2.8.3 >>> kolla-ansible: 8.0.0 >>> >>> kolla_base_distro: "centos" >>> kolla_install_type: "source" # I also tried "binary" >>> openstack_release: "stein" >>> >>> I already tried redeploying (fresh OS reinstallation) 3 times already >>> and kolla-ansible deploy always fails on this TASK ( >>> http://paste.openstack.org/show/755599/) and won't continue and finish >>> the deployment. 
And I think the issue is that the admin_url was not >>> created (http://paste.openstack.org/show/755600/) but why? Which task >>> failed to not create the admin_url? Kolla-ansible only specified that 1 >>> task failed. On keystone logs (http://paste.openstack.org/show/755601/) >>> it says that the admin endpoint was created. The 3 keystone containers >>> (keystone_fernet, keystone_ssh and keystone) are running without error on >>> their logs though. >>> >>> - Vlad >>> >>> >>> ᐧ >>> >> > > -- > Michał Nasiadka > mnasiadka at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Aug 7 11:01:14 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Aug 2019 20:01:14 +0900 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190801085818.GD2077@fedora19.localdomain> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <20190801085818.GD2077@fedora19.localdomain> Message-ID: <16c6bbd778a.fdd737fb22757.8067378431087320764@ghanshyammann.com> ---- On Thu, 01 Aug 2019 17:58:18 +0900 Ian Wienand wrote ---- > On Fri, Jul 26, 2019 at 04:53:28PM -0700, Clark Boylan wrote: > > Given my change shows this can be so much quicker is there any > > interest in modifying devstack to be faster here? And if so what do > > we think an appropriate approach would be? > > My first concern was if anyone considered openstack-client setting > these things up as actually part of the testing. I'd say not, > comments in [1] suggest similar views. > > My second concern is that we do keep sufficient track of complexity v > speed; obviously doing things in a sequential manner via a script is > pretty simple to follow and as we start putting things into scripts we > make it harder to debug when a monoscript dies and you have to start > pulling apart where it was. 
With just a little json fiddling we can > currently pull good stats from logstash ([2]) so I think as we go it > would be good to make sure we account for the time using appropriate > wrappers, etc. I agree with this concern about maintainability and debugging with scripts. Nowadays, very few people have good knowledge of the devstack code, and debugging failures on the job side is much harder for most developers. IMO maintainability and ease of debugging are needed as the first priority. If we wanted to replace the OSC usage with something faster, the Tempest service clients come to mind. They are very straightforward direct calls to the API, but a token is requested for each API call. That is something that would need a PoC, especially around the speed improvement. > > Then the third concern is not to break anything for plugins -- > devstack has a very very loose API which basically relies on plugin > authors using a combination of good taste and copying other code to > decide what's internal or not. > > Which made me start thinking I wonder if we look at this closely, even > without replacing things we might make inroads? > > For example [3]; it seems like SERVICE_DOMAIN_NAME is never not > default, so the get_or_create_domain call is always just overhead (the > result is never used). > > Then it seems that in the gate, basically all of the "get_or_create" > calls will really just be "create" calls? Because we're always > starting fresh. So we could cut out about half of the calls there > pre-checking if we know we're under zuul (proof-of-concept [4]). > > Then we have blocks like: > > get_or_add_user_project_role $member_role $demo_user $demo_project > get_or_add_user_project_role $admin_role $admin_user $demo_project > get_or_add_user_project_role $another_role $demo_user $demo_project > get_or_add_user_project_role $member_role $demo_user $invis_project > > If we wrapped that in something like > > start_osc_session > ...
> end_osc_session > > which sets a variable that means instead of calling directly, those > functions write their arguments to a tmp file. Then at the end call, > end_osc_session does > > $ osc "$(< tmpfile)" > > and uses the inbuilt batching? If that had half the calls by skipping > the "get_or" bit, and used common authentication from batching, would > that help? > > And then I don't know if all the projects and groups are required for > every devstack run? Maybe someone skilled in the art could do a bit > of an audit and we could cut more of that out too? Yeah, improving such unused or not-required calls with an audit is a good call. For example, in most places devstack needs just the resource id, name, or a few fields of a created resource, so a get call that returns the complete resource fields might not be needed; for async calls we can make an exception and get the resource ('addresses' in server). -gmann > > So I guess my point is that maybe we could tweak what we have a bit to > make some immediate wins, before anyone has to rewrite too much? > > -i > > [1] https://review.opendev.org/673018 > [2] https://ethercalc.openstack.org/rzuhevxz7793 > [3] https://review.opendev.org/673941 > [4] https://review.opendev.org/673936 > > From nate.johnston at redhat.com Wed Aug 7 13:13:25 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 7 Aug 2019 08:13:25 -0500 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Big +1 from me! Nate > On Aug 4, 2019, at 1:52 PM, Miguel Lavalle wrote: > > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation and evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it.
Recently he has been working on improving the interaction with Nova during the port binding process, driven the adoption of Pyroute2 and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Aug 7 13:33:48 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 7 Aug 2019 08:33:48 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> Message-ID: <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> On 8/6/19 11:34 AM, Ben Nemec wrote: > > > On 8/6/19 10:49 AM, Clark Boylan wrote: >> On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: >>> Just a reminder that there is also >>> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html >>> >>> which was intended to address this same issue. >>> >>> I toyed around with it a bit for TripleO installs back then and it did >>> seem to speed things up, but at the time there was a bug in our client >>> plugin where it was triggering a prompt for input that was problematic >>> with the server running in the background. I never really got back to it >>> once that was fixed. :-/ >> >> I'm not tied to any particular implementation. 
Mostly I wanted to show >> that we can take this ~5 minute portion of devstack and turn it into a >> 15 second portion of devstack by improving our use of the service APIs >> (and possibly even further if we apply it to all of the api >> interaction). Any idea how difficult it would be to get your client as >> a service stuff running in devstack again? > > I wish I could take credit, but this is actually Dan Berrange's work. :-) > >> >> I do not think we should make a one off change like I've done in my >> POC. That will just end up being harder to understand and debug in the >> future since it will be different than all of the other API >> interaction. I like the idea of a manifest or feeding a longer lived >> process api update commands as we can then avoid requesting new tokens >> as well as pkg_resource startup time. Such a system could be used by >> all of devstack as well (avoiding the "this bit is special" problem). >> >> Is there any interest from the QA team in committing to an approach >> and working to do a conversion? I don't want to commit any more time >> to this myself unless there is strong interest in getting changes >> merged (as I expect it will be a slow process weeding out places where >> we've made bad assumptions particularly around plugins). >> >> One of the things I found was that using names with osc results in >> name to id lookups as well. We can avoid these entirely if we remember >> name to id mappings instead (which my POC does). Any idea if your osc >> as a service tool does or can do that? Probably have to be more >> careful for scoping things in a tool like that as it may be reused by >> people with name collisions across projects/users/groups/domains. > > I don't believe this would handle name to id mapping. It's a very thin > wrapper around the regular client code that just makes it persistent so > we don't pay the startup costs every call. 
On the plus side that means > it basically works like the vanilla client, on the minus side that means > it may not provide as much improvement as a more targeted solution. > > IIRC it's pretty easy to use, so I can try it out again and make sure it > still works and still provides a performance benefit. It still works and it still helps. Using the osc service cut about 3 minutes off my 21 minute devstack run. Subjectively I would say that most of the time was being spent cloning and installing services and their deps. I guess the downside is that working around the OSC slowness in CI will reduce developer motivation to fix the problem, which affects all users too. Then again, this has been a problem for years and no one has fixed it, so apparently that isn't a big enough lever to get things moving anyway. :-/ > >> >>> >>> On 7/26/19 6:53 PM, Clark Boylan wrote: >>>> Today I have been digging into devstack runtime costs to help Donny >>>> Davis understand why tempest jobs sometimes timeout on the >>>> FortNebula cloud. One thing I discovered was that the keystone user, >>>> group, project, role, and domain setup [0] can take many minutes >>>> [1][2] (in the examples here almost 5). >>>> >>>> I've rewritten create_keystone_accounts to be a python tool [3] and >>>> get the runtime for that subset of setup from ~100s to ~9s [4].  I >>>> imagine that if we applied this to the other create_X_accounts >>>> functions we would see similar results. >>>> >>>> I think this is so much faster because we avoid repeated costs in >>>> openstack client including: python process startup, pkg_resource >>>> disk scanning to find entrypoints, and needing to convert names to >>>> IDs via the API every time osc is run. Given my change shows this >>>> can be so much quicker is there any interest in modifying devstack >>>> to be faster here? And if so what do we think an appropriate >>>> approach would be? 
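The "client as a service" idea discussed above amounts to paying the expensive client bootstrap (interpreter startup, pkg_resources entry-point scanning, token fetch) once, then dispatching many cheap command invocations against the resident client. The sketch below is not the actual wrapper referenced in the thread; it is a minimal stdlib-only illustration of the pattern, with the costly setup and the OSC commands replaced by hypothetical stand-ins (`make_session`, `project_create`).

```python
import shlex


class PersistentClient:
    """Pay the expensive bootstrap exactly once, then dispatch many
    cheap command invocations that reuse the initialized session."""

    def __init__(self, bootstrap):
        # bootstrap() stands in for the costly one-time setup that a
        # fresh `openstack` process would otherwise repeat per call.
        self.session = bootstrap()
        self.calls = 0

    def run(self, command_line, handlers):
        # Parse one "openstack ..."-style command line and dispatch it
        # to a handler, reusing the already-authenticated session.
        argv = shlex.split(command_line)
        self.calls += 1
        return handlers[argv[0]](self.session, argv[1:])


# Hypothetical stand-ins for real client setup and commands.
def make_session():
    return {"token": "cached-token"}  # fetched once, reused for every call


def project_create(session, args):
    return ("created", args[0], session["token"])


client = PersistentClient(make_session)
handlers = {"project-create": project_create}

# Three invocations, one bootstrap: the per-call cost is just dispatch.
results = [client.run(f"project-create demo{i}", handlers) for i in range(3)]
```

The real wrapper does the same thing over a socket so that shell scripts such as devstack can keep calling a thin `openstack` shim per command while the heavy Python process stays resident.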
>>>> >>>> [0] >>>> https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 >>>> >>>> [1] >>>> http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 >>>> >>>> [2] >>>> http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 >>>> >>>> [3] https://review.opendev.org/#/c/673108/ >>>> [4] >>>> http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 >>>> >>>> >>>> Note the jobs compared above all ran on rax-dfw. >>>> >>>> Clark >>>> >>> >>> >> > From mnaser at vexxhost.com Wed Aug 7 13:41:01 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 7 Aug 2019 09:41:01 -0400 Subject: Agenda for TC Meeting 8 August 2019 at 1400 UTC Message-ID: Hi everyone, Here’s the agenda for our monthly TC meeting. It will happen tomorrow (Thursday the 8th) at 1400 UTC in #openstack-tc and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee * Follow up on past action items ** fungi to add himself as TC liaison for Image Encryption popup team ** fungi to draft a resolution on proper retirement procedures * Active initiatives ** Python 3: mnaser to sync up with swift team on python3 migration and mugsie to sync with dhellmann or release-team to find the code for the proposal bot ** Forum follow-up: ttx to organise Milestone 2 forum meeting with tc-members (done) ** Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/) * Discussion ** Attendance for leadership meeting during Shanghai Summit on 3 November ** Reviving Performance WG / Large deployment team into a Large scale SIG (ttx) Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 
800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From fungi at yuggoth.org Wed Aug 7 14:22:11 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Aug 2019 14:22:11 +0000 Subject: Agenda for TC Meeting 8 August 2019 at 1400 UTC In-Reply-To: References: Message-ID: <20190807142211.busjg5ike6q7pgkd@yuggoth.org> On 2019-08-07 09:41:01 -0400 (-0400), Mohammed Naser wrote: [...] > ** fungi to add himself as TC liaison for Image Encryption popup team Done: https://governance.openstack.org/tc/reference/popup-teams.html#image-encryption > ** fungi to draft a resolution on proper retirement procedures [...] Latest revision has been under review since 2019-07-22 but is still a few votes shy of quorum. > find the code for the proposal bot [...] I didn't know anyone had lost it? https://opendev.org/openstack/project-config/src/branch/master/playbooks/proposal/propose_update.sh -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Aug 7 14:30:00 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 07 Aug 2019 07:30:00 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Rico Lin's message of "Wed, 7 Aug 2019 14:46:35 +0800") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> Message-ID: <871rxxm4k7.fsf@meyer.lemoncheese.net> Rico Lin writes: > On Wed, Aug 7, 2019 at 9:41 AM James E. 
Blair wrote: > >> I had previously added an entry to the suggestions wiki page, but I did >> not see it in this email: >> >> * University >> > https://en.wikipedia.org/wiki/List_of_universities_and_colleges_in_Shanghai >> (Shanghai is famous for its universities) >> >> To pick one at random, the "University of Shanghai for Science and >> Technology" is a place in Shanghai; I think that meets the requirement >> for "physical or human geography". >> >> It's a point of pride that Shanghai has so many renowned universities, >> so I think it's a good choice and one well worth considering. >> > Just added it in https://wiki.openstack.org/wiki/Release_Naming/U_Proposals > Will make sure TCs evaluate on this one when evaluating names that do not > meet the criteria > Thanks for the idea Sorry if I wasn't clear, I had already added it to the wiki page more than a week ago -- you can still see my entry there at the bottom of the list of names that do meet the criteria. Here's the diff: https://wiki.openstack.org/w/index.php?title=Release_Naming%2FU_Proposals&type=revision&diff=171231&oldid=171132 Also, I do think this meets the criteria, since there is a place in Shanghai with "University" in the name. This is similar to "Pike" which is short for the "Massachusetts Turnpike", which was deemed to meet the criteria for the P naming poll. Of course, as the coordinator it's up to you to determine whether it meets the criteria, but I believe it does, and hope you agree. 
Thanks, Jim From smooney at redhat.com Wed Aug 7 14:37:42 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 07 Aug 2019 15:37:42 +0100 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> Message-ID: On Wed, 2019-08-07 at 08:33 -0500, Ben Nemec wrote: > > On 8/6/19 11:34 AM, Ben Nemec wrote: > > > > > > On 8/6/19 10:49 AM, Clark Boylan wrote: > > > On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: > > > > Just a reminder that there is also > > > > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > > > > > > > > which was intended to address this same issue. > > > > > > > > I toyed around with it a bit for TripleO installs back then and it did > > > > seem to speed things up, but at the time there was a bug in our client > > > > plugin where it was triggering a prompt for input that was problematic > > > > with the server running in the background. I never really got back to it > > > > once that was fixed. :-/ > > > > > > I'm not tied to any particular implementation. Mostly I wanted to show > > > that we can take this ~5 minute portion of devstack and turn it into a > > > 15 second portion of devstack by improving our use of the service APIs > > > (and possibly even further if we apply it to all of the api > > > interaction). Any idea how difficult it would be to get your client as > > > a service stuff running in devstack again? > > > > I wish I could take credit, but this is actually Dan Berrange's work. :-) > > > > > > > > I do not think we should make a one off change like I've done in my > > > POC. 
That will just end up being harder to understand and debug in the > > > future since it will be different than all of the other API > > > interaction. I like the idea of a manifest or feeding a longer lived > > > process api update commands as we can then avoid requesting new tokens > > > as well as pkg_resource startup time. Such a system could be used by > > > all of devstack as well (avoiding the "this bit is special" problem). > > > > > > Is there any interest from the QA team in committing to an approach > > > and working to do a conversion? I don't want to commit any more time > > > to this myself unless there is strong interest in getting changes > > > merged (as I expect it will be a slow process weeding out places where > > > we've made bad assumptions particularly around plugins). > > > > > > One of the things I found was that using names with osc results in > > > name to id lookups as well. We can avoid these entirely if we remember > > > name to id mappings instead (which my POC does). Any idea if your osc > > > as a service tool does or can do that? Probably have to be more > > > careful for scoping things in a tool like that as it may be reused by > > > people with name collisions across projects/users/groups/domains. > > > > I don't believe this would handle name to id mapping. It's a very thin > > wrapper around the regular client code that just makes it persistent so > > we don't pay the startup costs every call. On the plus side that means > > it basically works like the vanilla client, on the minus side that means > > it may not provide as much improvement as a more targeted solution. > > > > IIRC it's pretty easy to use, so I can try it out again and make sure it > > still works and still provides a performance benefit. > > It still works and it still helps. Using the osc service cut about 3 > minutes off my 21 minute devstack run. Subjectively I would say that > most of the time was being spent cloning and installing services and > their deps. 
> > I guess the downside is that working around the OSC slowness in CI will > > reduce developer motivation to fix the problem, which affects all users > > too. Then again, this has been a problem for years and no one has fixed > > it, so apparently that isn't a big enough lever to get things moving > > anyway. :-/ Using osc directly, I don't think the slowness is really perceptible from a human standpoint, but it adds up in a CI run. There are larger problems to tackle with gate slowness than the ones fixing osc would solve, but every little helps. I do agree, however, that the gate is not a big enough motivator for people to fix osc slowness: we can wait hours in some cases for jobs to start, so 3 minutes is not really a concern from a latency perspective. But if we saved 3 minutes on every run, that might in aggregate reduce the latency problems we have. > > > > > > > > > > > > > > On 7/26/19 6:53 PM, Clark Boylan wrote: > > > > > Today I have been digging into devstack runtime costs to help Donny > > > > > Davis understand why tempest jobs sometimes timeout on the > > > > > FortNebula cloud. One thing I discovered was that the keystone user, > > > > > group, project, role, and domain setup [0] can take many minutes > > > > > [1][2] (in the examples here almost 5). > > > > > > > > > > I've rewritten create_keystone_accounts to be a python tool [3] and > > > > > get the runtime for that subset of setup from ~100s to ~9s [4]. I > > > > > imagine that if we applied this to the other create_X_accounts > > > > > functions we would see similar results. > > > > > > > > > > I think this is so much faster because we avoid repeated costs in > > > > > openstack client including: python process startup, pkg_resource > > > > > disk scanning to find entrypoints, and needing to convert names to > > > > > IDs via the API every time osc is run. Given my change shows this > > > > > can be so much quicker is there any interest in modifying devstack > > > > > to be faster here?
And if so what do we think an appropriate > > > > > approach would be? > > > > > > > > > > [0] > > > > > https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 > > > > > > > > > > > > > > > [1] > > > > > http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 > > > > > > > > > > > > > > > [2] > > > > > http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 > > > > > > > > > > > > > > > [3] https://review.opendev.org/#/c/673108/ > > > > > [4] > > > > > http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 > > > > > > > > > > > > > > > > > > > > Note the jobs compared above all ran on rax-dfw. > > > > > > > > > > Clark > > > > > > > > > > > > > > > From rico.lin.guanyu at gmail.com Wed Aug 7 14:58:53 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 7 Aug 2019 22:58:53 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <871rxxm4k7.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Aug 7, 2019 at 10:30 PM James E. Blair wrote: > > Sorry if I wasn't clear, I had already added it to the wiki page more > than a week ago -- you can still see my entry there at the bottom of the > list of names that do meet the criteria. Here's the diff: > > https://wiki.openstack.org/w/index.php?title=Release_Naming%2FU_Proposals&type=revision&diff=171231&oldid=171132 > > Also, I do think this meets the criteria, since there is a place in > Shanghai with "University" in the name. This is similar to "Pike" which > is short for the "Massachusetts Turnpike", which was deemed to meet the > criteria for the P naming poll. 
> As we discussed in IRC:#openstack-tc, change the reference from general Universities to specific University will make it meet the criteria "The name must refer to the physical or human geography" Added it back to the 'meet criteria' list and update it with reference to specific university "University of Shanghai for Science and Technology". feel free to correct me, if I misunderstand the criteria rule. :) > Of course, as the coordinator it's up to you to determine whether it > meets the criteria, but I believe it does, and hope you agree. > > Thanks, > > Jim -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Aug 7 15:10:20 2019 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 7 Aug 2019 17:10:20 +0200 Subject: Slow instance launch times due to RabbitMQ In-Reply-To: <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com> References: <0723e410-6029-fcf7-2bd0-8c38e4586cdd@civo.com> <1c6078ab-bc1d-7a84-cc87-1b470235ccc0@civo.com> <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com> Message-ID: Le mar. 6 août 2019 à 17:14, Ben Nemec a écrit : > Another thing to check if you're having seemingly inexplicable messaging > issues is that there isn't a notification queue filling up somewhere. If > notifications are enabled somewhere but nothing is consuming them the > size of the queue will eventually grind rabbit to a halt. > > I used to check queue sizes through the rabbit web ui, so I have to > admit I'm not sure how to do it through the cli. 
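[Editor's note: the queue sizes Ben mentions checking through the web UI are also exposed as JSON by the same management plugin, via `GET /api/queues` (e.g. `curl -u user:pass http://controller:15672/api/queues`). The sketch below shows one way to flag runaway queues from that payload; the queue names and counts in `SAMPLE` are made up for illustration, while the field names (`name`, `messages`, `messages_unacknowledged`, `consumers`) are what the management API returns.]

```python
import json

# Illustrative payload shaped like the management plugin's GET /api/queues
# response; the names and numbers here are invented for the example.
SAMPLE = json.dumps([
    {"name": "notifications.info", "messages": 482193,
     "messages_unacknowledged": 0, "consumers": 0},
    {"name": "reply_a1b2c3", "messages": 3,
     "messages_unacknowledged": 1, "consumers": 1},
])


def suspicious_queues(payload, backlog_threshold=10000):
    """Flag queues that are piling up messages -- for example a
    notification queue that nothing consumes, which will eventually
    grind rabbit to a halt."""
    return [q["name"] for q in json.loads(payload)
            if q["messages"] > backlog_threshold
            or (q["messages"] > 0 and q["consumers"] == 0)]


flagged = suspicious_queues(SAMPLE)  # -> ["notifications.info"]
```

A cron job or monitoring check built on this gives the historical view Laurent recommends below without clicking through the web UI.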
> You can use the following command to monitor your queues and observe their size and growth: ``` watch -c "rabbitmqctl list_queues name messages_unacknowledged" ``` Or also something like this: ``` rabbitmqctl list_queues messages consumers name message_bytes messages_unacknowledged messages_ready head_message_timestamp consumer_utilisation memory state | grep reply ``` > > On 7/31/19 10:48 AM, Gabriele Santomaggio wrote: > > Hi, > > Are you using ssl connections ? > > Can be this issue ? > > https://bugs.launchpad.net/ubuntu/+source/oslo.messaging/+bug/1800957 > > > > ------------------------------------------------------------------------ > > *From:* Laurent Dumont > > *Sent:* Wednesday, July 31, 2019 4:20 PM > > *To:* Grant Morley > > *Cc:* openstack-operators at lists.openstack.org > > *Subject:* Re: Slow instance launch times due to RabbitMQ > > That is a bit strange, list_queues should return stuff. Couple of ideas : > > > > * Are the Rabbit connection failure logs on the compute pointing to a > > specific controller? > > * Are there any logs within Rabbit on the controller that would point > > to a transient issue? > > * cluster_status is a snapshot of the cluster at the time you ran the > > command. If the alarms have cleared, you won't see anything. > > * If you have the RabbitMQ management plugin activated, I would > > recommend a quick look to see the historical metrics and overall status. > > > > > > On Wed, Jul 31, 2019 at 9:35 AM Grant Morley > > wrote: > > > > Hi guys, > > > > We are using Ubuntu 16 and OpenStack ansible to do our setup.
> > > > rabbitmqctl list_queues > > Listing queues > > > > (Doesn't appear to be any queues ) > > > > rabbitmqctl cluster_status > > > > Cluster status of node > > 'rabbit at management-1-rabbit-mq-container-b4d7791f' > > [{nodes,[{disc,['rabbit at management-1-rabbit-mq-container-b4d7791f', > > 'rabbit at management-2-rabbit-mq-container-b455e77d', > > 'rabbit at management-3-rabbit-mq-container-1d6ae377 > ']}]}, > > {running_nodes,['rabbit at management-3-rabbit-mq-container-1d6ae377 > ', > > 'rabbit at management-2-rabbit-mq-container-b455e77d > ', > > 'rabbit at management-1-rabbit-mq-container-b4d7791f > ']}, > > {cluster_name,<<"openstack">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at management-3-rabbit-mq-container-1d6ae377',[]}, > > {'rabbit at management-2-rabbit-mq-container-b455e77d',[]}, > > {'rabbit at management-1-rabbit-mq-container-b4d7791f > ',[]}]}] > > > > Regards, > > > > On 31/07/2019 11:49, Laurent Dumont wrote: > >> Could you forward the output of the following commands on a > >> controller node? : > >> > >> rabbitmqctl cluster_status > >> rabbitmqctl list_queues > >> > >> You won't necessarily see a high load on a Rabbit cluster that is > >> in a bad state. > >> > >> On Wed, Jul 31, 2019 at 5:19 AM Grant Morley >> > wrote: > >> > >> Hi all, > >> > >> We are randomly seeing slow instance launch / deletion times > >> and it appears to be because of RabbitMQ. We are seeing a lot > >> of these messages in the logs for Nova and Neutron: > >> > >> ERROR oslo.messaging._drivers.impl_rabbit [-] > >> [f4ab3ca0-b837-4962-95ef-dfd7d60686b6] AMQP server on > >> 10.6.2.212:5671 is unreachable: Too > >> many heartbeats missed. Trying again in 1 seconds. Client > >> port: 37098: ConnectionForced: Too many heartbeats missed > >> > >> The RabbitMQ cluster isn't under high load and I am not seeing > >> any packets drop over the network when I do some tracing. 
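[Editor's note: the "Too many heartbeats missed" errors quoted above come from oslo.messaging's client-side heartbeat giving up on the AMQP connection. The relevant knobs live in the `[oslo_messaging_rabbit]` section of each service's config; the values below are simply the oslo.messaging defaults shown for orientation, not a recommended tuning, and whether raising the threshold helps depends on why heartbeats are being missed in the first place.]

```
[oslo_messaging_rabbit]
# Seconds without a heartbeat before the connection is considered
# dead (0 disables the heartbeat feature entirely).
heartbeat_timeout_threshold = 60
# Heartbeats exchanged per timeout interval; 2 means roughly one
# heartbeat every 30 seconds with the threshold above.
heartbeat_rate = 2
```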
> >> > >> We are only running 15 compute nodes currently and have >1000 > >> instances so it isn't a large deployment. > >> > >> Are there any good configuration tweaks for RabbitMQ running > >> on OpenStack Queens? > >> > >> Many Thanks, > >> > >> -- > >> > >> Grant Morley > >> Cloud Lead, Civo Ltd > >> www.civo.com | Signup for an account! > >> > >> > > -- > > > > Grant Morley > > Cloud Lead, Civo Ltd > > www.civo.com | Signup for an account! > > > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Wed Aug 7 15:11:20 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 7 Aug 2019 10:11:20 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> Message-ID: <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> On 8/7/19 9:37 AM, Sean Mooney wrote: > On Wed, 2019-08-07 at 08:33 -0500, Ben Nemec wrote: >> >> On 8/6/19 11:34 AM, Ben Nemec wrote: >>> >>> >>> On 8/6/19 10:49 AM, Clark Boylan wrote: >>>> On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: >>>>> Just a reminder that there is also >>>>> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html >>>>> >>>>> which was intended to address this same issue. >>>>> >>>>> I toyed around with it a bit for TripleO installs back then and it did >>>>> seem to speed things up, but at the time there was a bug in our client >>>>> plugin where it was triggering a prompt for input that was problematic >>>>> with the server running in the background. I never really got back to it >>>>> once that was fixed. :-/ >>>> >>>> I'm not tied to any particular implementation. Mostly I wanted to show >>>> that we can take this ~5 minute portion of devstack and turn it into a >>>> 15 second portion of devstack by improving our use of the service APIs >>>> (and possibly even further if we apply it to all of the api >>>> interaction). Any idea how difficult it would be to get your client as >>>> a service stuff running in devstack again? >>> >>> I wish I could take credit, but this is actually Dan Berrange's work. :-) >>> >>>> >>>> I do not think we should make a one off change like I've done in my >>>> POC. 
That will just end up being harder to understand and debug in the >>>> future since it will be different than all of the other API >>>> interaction. I like the idea of a manifest or feeding a longer lived >>>> process api update commands as we can then avoid requesting new tokens >>>> as well as pkg_resource startup time. Such a system could be used by >>>> all of devstack as well (avoiding the "this bit is special" problem). >>>> >>>> Is there any interest from the QA team in committing to an approach >>>> and working to do a conversion? I don't want to commit any more time >>>> to this myself unless there is strong interest in getting changes >>>> merged (as I expect it will be a slow process weeding out places where >>>> we've made bad assumptions particularly around plugins). >>>> >>>> One of the things I found was that using names with osc results in >>>> name to id lookups as well. We can avoid these entirely if we remember >>>> name to id mappings instead (which my POC does). Any idea if your osc >>>> as a service tool does or can do that? Probably have to be more >>>> careful for scoping things in a tool like that as it may be reused by >>>> people with name collisions across projects/users/groups/domains. >>> >>> I don't believe this would handle name to id mapping. It's a very thin >>> wrapper around the regular client code that just makes it persistent so >>> we don't pay the startup costs every call. On the plus side that means >>> it basically works like the vanilla client, on the minus side that means >>> it may not provide as much improvement as a more targeted solution. >>> >>> IIRC it's pretty easy to use, so I can try it out again and make sure it >>> still works and still provides a performance benefit. >> >> It still works and it still helps. Using the osc service cut about 3 >> minutes off my 21 minute devstack run. Subjectively I would say that >> most of the time was being spent cloning and installing services and >> their deps. 
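The per-invocation overhead being discussed here is easy to observe in isolation. A rough micro-benchmark (illustrative only, measuring just the bare interpreter, not osc's entry-point scanning or token fetch, which add more on top):

```python
import subprocess
import sys
import time

# Time a bare Python interpreter start/exit. Every one-shot CLI
# invocation pays at least this much before plugin scanning and
# authentication even begin; a persistent wrapper pays it once.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed = time.perf_counter() - start
print(f"bare interpreter startup: {elapsed * 1000:.1f} ms")
```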
>> >> I guess the downside is that working around the OSC slowness in CI will >> reduce developer motivation to fix the problem, which affects all users >> too. Then again, this has been a problem for years and no one has fixed >> it, so apparently that isn't a big enough lever to get things moving >> anyway. :-/ > using osc diretly i dont think the slowness is really perceptable from a human > stand point but it adds up in a ci run. there are large problems to kill with gate > slowness then fixing osc will solve be every little helps. i do agree however > that the gage is not a big enough motivater for people to fix osc slowness as > we can wait hours in some cases for jobs to start so 3 minutes is not really a consern > form a latency perspective but if we saved 3 mins on every run that might > in aggreaget reduce the latency problems we have. I find the slowness very noticeable in interactive use. It adds something like 2 seconds to a basic call like image list that returns almost instantly in the OSC interactive shell where there is no startup overhead. From my performance days, any latency over 1 second was considered unacceptable for an interactive call. The interactive shell does help with that if I'm doing a bunch of calls in a row though. That said, you're right that 3 minutes multiplied by the number of jobs we run per day is significant. Picking 1000 as a round number (and I'm pretty sure we run a _lot_ more than that per day), a 3 minute decrease in runtime per job would save about 50 hours of CI time in total. Small things add up at scale. :-) >> >>> >>>> >>>>> >>>>> On 7/26/19 6:53 PM, Clark Boylan wrote: >>>>>> Today I have been digging into devstack runtime costs to help Donny >>>>>> Davis understand why tempest jobs sometimes timeout on the >>>>>> FortNebula cloud. One thing I discovered was that the keystone user, >>>>>> group, project, role, and domain setup [0] can take many minutes >>>>>> [1][2] (in the examples here almost 5). 
>>>>>> >>>>>> I've rewritten create_keystone_accounts to be a python tool [3] and >>>>>> get the runtime for that subset of setup from ~100s to ~9s [4]. I >>>>>> imagine that if we applied this to the other create_X_accounts >>>>>> functions we would see similar results. >>>>>> >>>>>> I think this is so much faster because we avoid repeated costs in >>>>>> openstack client including: python process startup, pkg_resource >>>>>> disk scanning to find entrypoints, and needing to convert names to >>>>>> IDs via the API every time osc is run. Given my change shows this >>>>>> can be so much quicker is there any interest in modifying devstack >>>>>> to be faster here? And if so what do we think an appropriate >>>>>> approach would be? >>>>>> >>>>>> [0] >>>>>> > https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 >>>>>> >>>>>> >>>>>> [1] >>>>>> http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 >>>>>> >>>>>> >>>>>> [2] >>>>>> http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 >>>>>> >>>>>> >>>>>> [3] https://review.opendev.org/#/c/673108/ >>>>>> [4] >>>>>> > http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 >>>>>> >>>>>> >>>>>> >>>>>> Note the jobs compared above all ran on rax-dfw. 
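The savings Clark and Ben describe are consistent with each other; a quick sanity check of the arithmetic, using the thread's own round numbers (these are estimates from the discussion, not measured CI figures):

```python
# Clark: keystone account setup dropped from ~100s to ~9s per devstack run.
speedup = 100 / 9
print(f"~{speedup:.1f}x faster")  # roughly 11x

# Ben: 1000 jobs/day (a deliberately low round number) at
# 3 minutes saved per job.
hours_saved_per_day = 1000 * 3 / 60
print(hours_saved_per_day)  # 50.0 hours of CI time per day
```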
>>>>>> >>>>>> Clark >>>>>> >>>>> >>>>> >> >> > > From smooney at redhat.com Wed Aug 7 17:16:07 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 07 Aug 2019 18:16:07 +0100 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> Message-ID: On Wed, 2019-08-07 at 10:11 -0500, Ben Nemec wrote: > > On 8/7/19 9:37 AM, Sean Mooney wrote: > > On Wed, 2019-08-07 at 08:33 -0500, Ben Nemec wrote: > > > > > > On 8/6/19 11:34 AM, Ben Nemec wrote: > > > > > > > > > > > > On 8/6/19 10:49 AM, Clark Boylan wrote: > > > > > On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: > > > > > > Just a reminder that there is also > > > > > > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > > > > > > > > > > > > which was intended to address this same issue. > > > > > > > > > > > > I toyed around with it a bit for TripleO installs back then and it did > > > > > > seem to speed things up, but at the time there was a bug in our client > > > > > > plugin where it was triggering a prompt for input that was problematic > > > > > > with the server running in the background. I never really got back to it > > > > > > once that was fixed. :-/ > > > > > > > > > > I'm not tied to any particular implementation. Mostly I wanted to show > > > > > that we can take this ~5 minute portion of devstack and turn it into a > > > > > 15 second portion of devstack by improving our use of the service APIs > > > > > (and possibly even further if we apply it to all of the api > > > > > interaction). Any idea how difficult it would be to get your client as > > > > > a service stuff running in devstack again? 
> > > > > > > > I wish I could take credit, but this is actually Dan Berrange's work. :-) > > > > > > > > > > > > > > I do not think we should make a one off change like I've done in my > > > > > POC. That will just end up being harder to understand and debug in the > > > > > future since it will be different than all of the other API > > > > > interaction. I like the idea of a manifest or feeding a longer lived > > > > > process api update commands as we can then avoid requesting new tokens > > > > > as well as pkg_resource startup time. Such a system could be used by > > > > > all of devstack as well (avoiding the "this bit is special" problem). > > > > > > > > > > Is there any interest from the QA team in committing to an approach > > > > > and working to do a conversion? I don't want to commit any more time > > > > > to this myself unless there is strong interest in getting changes > > > > > merged (as I expect it will be a slow process weeding out places where > > > > > we've made bad assumptions particularly around plugins). > > > > > > > > > > One of the things I found was that using names with osc results in > > > > > name to id lookups as well. We can avoid these entirely if we remember > > > > > name to id mappings instead (which my POC does). Any idea if your osc > > > > > as a service tool does or can do that? Probably have to be more > > > > > careful for scoping things in a tool like that as it may be reused by > > > > > people with name collisions across projects/users/groups/domains. > > > > > > > > I don't believe this would handle name to id mapping. It's a very thin > > > > wrapper around the regular client code that just makes it persistent so > > > > we don't pay the startup costs every call. On the plus side that means > > > > it basically works like the vanilla client, on the minus side that means > > > > it may not provide as much improvement as a more targeted solution. 
> > > > > > > > IIRC it's pretty easy to use, so I can try it out again and make sure it > > > > still works and still provides a performance benefit. > > > > > > It still works and it still helps. Using the osc service cut about 3 > > > minutes off my 21 minute devstack run. Subjectively I would say that > > > most of the time was being spent cloning and installing services and > > > their deps. > > > > > > I guess the downside is that working around the OSC slowness in CI will > > > reduce developer motivation to fix the problem, which affects all users > > > too. Then again, this has been a problem for years and no one has fixed > > > it, so apparently that isn't a big enough lever to get things moving > > > anyway. :-/ > > > > using osc diretly i dont think the slowness is really perceptable from a human > > stand point but it adds up in a ci run. there are large problems to kill with gate > > slowness then fixing osc will solve be every little helps. i do agree however > > that the gage is not a big enough motivater for people to fix osc slowness as > > we can wait hours in some cases for jobs to start so 3 minutes is not really a consern > > form a latency perspective but if we saved 3 mins on every run that might > > in aggreaget reduce the latency problems we have. > > I find the slowness very noticeable in interactive use. It adds > something like 2 seconds to a basic call like image list that returns > almost instantly in the OSC interactive shell where there is no startup > overhead. From my performance days, any latency over 1 second was > considered unacceptable for an interactive call. The interactive shell > does help with that if I'm doing a bunch of calls in a row though. well that was kind of my point when we write sripts we invoke it over and over again. if i need to use osc to do lots of commands for some reason i generaly enter the interactive mode. 
The interactive mode already masks the pain, so anytime it has bothered me in the past I have just ended up using it instead. It's been a long time since I looked at this, but I think there were two reasons it is slow on startup: one is the need to get the token for each request, and the other was related to the way we scan for plugins. I honestly don't know if either has improved, but the interactive shell eliminates both as issues. > > That said, you're right that 3 minutes multiplied by the number of jobs > we run per day is significant. Picking 1000 as a round number (and I'm > pretty sure we run a _lot_ more than that per day), a 3 minute decrease > in runtime per job would save about 50 hours of CI time in total. Small > things add up at scale. :-) Yep, it definitely does. > > > > > > > > > > > > > > > > > > > > > > > > > On 7/26/19 6:53 PM, Clark Boylan wrote: > > > > > > > Today I have been digging into devstack runtime costs to help Donny > > > > > > > Davis understand why tempest jobs sometimes timeout on the > > > > > > > FortNebula cloud. One thing I discovered was that the keystone user, > > > > > > > group, project, role, and domain setup [0] can take many minutes > > > > > > > [1][2] (in the examples here almost 5). > > > > > > > > > > > > > > I've rewritten create_keystone_accounts to be a python tool [3] and > > > > > > > get the runtime for that subset of setup from ~100s to ~9s [4]. I > > > > > > > imagine that if we applied this to the other create_X_accounts > > > > > > > functions we would see similar results. > > > > > > > > > > > > > > I think this is so much faster because we avoid repeated costs in > > > > > > > openstack client including: python process startup, pkg_resource > > > > > > > disk scanning to find entrypoints, and needing to convert names to > > > > > > > IDs via the API every time osc is run. Given my change shows this > > > > > > > can be so much quicker is there any interest in modifying devstack > > > > > > > to be faster here?
And if so what do we think an appropriate > > > > > > > approach would be? > > > > > > > > > > > > > > [0] > > > > > > > > > > > https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 > > > > > > > > > > > > > > > > > > > > > [2] > > > > > > > http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 > > > > > > > > > > > > > > > > > > > > > [3] https://review.opendev.org/#/c/673108/ > > > > > > > [4] > > > > > > > > > > > http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 > > > > > > > > > > > > > > > > > > > > > > > > > > > > Note the jobs compared above all ran on rax-dfw. > > > > > > > > > > > > > > Clark > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From melwittt at gmail.com Wed Aug 7 18:48:23 2019 From: melwittt at gmail.com (melanie witt) Date: Wed, 7 Aug 2019 11:48:23 -0700 Subject: [nova] Hyper-V CI broken on stable branches Message-ID: Dear Hyper-V CI maintainers, We noticed upstream that the Hyper-V CI has been failing on the stable/stein and stable/rocky (and perhaps older branches too) since this tempest change merged to unskip test_stamp_pattern: https://review.opendev.org/615434 Here are links to sample failed CI runs: stable/stein: http://cloudbase-ci.com/nova/674828/1/tempest/subunit.html.gz stable/rocky: http://cloudbase-ci.com/nova/674916/1/tempest/subunit.html.gz It looks like all is well on the master branch (looks like test_stamp_pattern does not run on master). Just wanted to alert you about the failures and see if anyone is available to fix it or add test_stamp_pattern to a skip list for Hyper-V CI. 
Cheers, -melanie From donny at fortnebula.com Wed Aug 7 19:45:56 2019 From: donny at fortnebula.com (Donny Davis) Date: Wed, 7 Aug 2019 15:45:56 -0400 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> Message-ID: Just for reference, FortNebula does 73-80 jobs an hour, so that's 1700(ish) jobs a day at 3 minutes per job. That is 5200(ish) cycle minutes a day, or about 4 days' worth of computing time. If there can be a fix that saves minutes, it's surely worth it. On Wed, Aug 7, 2019 at 1:18 PM Sean Mooney wrote: > On Wed, 2019-08-07 at 10:11 -0500, Ben Nemec wrote: > > > > On 8/7/19 9:37 AM, Sean Mooney wrote: > > > On Wed, 2019-08-07 at 08:33 -0500, Ben Nemec wrote: > > > > > > > > On 8/6/19 11:34 AM, Ben Nemec wrote: > > > > > > > > > > > > > > > On 8/6/19 10:49 AM, Clark Boylan wrote: > > > > > > On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: > > > > > > > Just a reminder that there is also > > > > > > > > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > > > > > > > > > > > > > > which was intended to address this same issue. > > > > > > > > > > > > > > I toyed around with it a bit for TripleO installs back then > and it did > > > > > > > seem to speed things up, but at the time there was a bug in > our client > > > > > > > plugin where it was triggering a prompt for input that was > problematic > > > > > > > with the server running in the background. I never really got > back to it > > > > > > > once that was fixed. :-/ > > > > > > > > > > > > I'm not tied to any particular implementation.
Mostly I wanted > to show > > > > > > that we can take this ~5 minute portion of devstack and turn it > into a > > > > > > 15 second portion of devstack by improving our use of the > service APIs > > > > > > (and possibly even further if we apply it to all of the api > > > > > > interaction). Any idea how difficult it would be to get your > client as > > > > > > a service stuff running in devstack again? > > > > > > > > > > I wish I could take credit, but this is actually Dan Berrange's > work. :-) > > > > > > > > > > > > > > > > > I do not think we should make a one off change like I've done in > my > > > > > > POC. That will just end up being harder to understand and debug > in the > > > > > > future since it will be different than all of the other API > > > > > > interaction. I like the idea of a manifest or feeding a longer > lived > > > > > > process api update commands as we can then avoid requesting new > tokens > > > > > > as well as pkg_resource startup time. Such a system could be > used by > > > > > > all of devstack as well (avoiding the "this bit is special" > problem). > > > > > > > > > > > > Is there any interest from the QA team in committing to an > approach > > > > > > and working to do a conversion? I don't want to commit any more > time > > > > > > to this myself unless there is strong interest in getting changes > > > > > > merged (as I expect it will be a slow process weeding out places > where > > > > > > we've made bad assumptions particularly around plugins). > > > > > > > > > > > > One of the things I found was that using names with osc results > in > > > > > > name to id lookups as well. We can avoid these entirely if we > remember > > > > > > name to id mappings instead (which my POC does). Any idea if > your osc > > > > > > as a service tool does or can do that? 
Probably have to be more > > > > > > careful for scoping things in a tool like that as it may be > reused by > > > > > > people with name collisions across projects/users/groups/domains. > > > > > > > > > > I don't believe this would handle name to id mapping. It's a very > thin > > > > > wrapper around the regular client code that just makes it > persistent so > > > > > we don't pay the startup costs every call. On the plus side that > means > > > > > it basically works like the vanilla client, on the minus side that > means > > > > > it may not provide as much improvement as a more targeted solution. > > > > > > > > > > IIRC it's pretty easy to use, so I can try it out again and make > sure it > > > > > still works and still provides a performance benefit. > > > > > > > > It still works and it still helps. Using the osc service cut about 3 > > > > minutes off my 21 minute devstack run. Subjectively I would say that > > > > most of the time was being spent cloning and installing services and > > > > their deps. > > > > > > > > I guess the downside is that working around the OSC slowness in CI > will > > > > reduce developer motivation to fix the problem, which affects all > users > > > > too. Then again, this has been a problem for years and no one has > fixed > > > > it, so apparently that isn't a big enough lever to get things moving > > > > anyway. :-/ > > > > > > using osc diretly i dont think the slowness is really perceptable from > a human > > > stand point but it adds up in a ci run. there are large problems to > kill with gate > > > slowness then fixing osc will solve be every little helps. i do agree > however > > > that the gage is not a big enough motivater for people to fix osc > slowness as > > > we can wait hours in some cases for jobs to start so 3 minutes is not > really a consern > > > form a latency perspective but if we saved 3 mins on every run that > might > > > in aggreaget reduce the latency problems we have. 
> > > > I find the slowness very noticeable in interactive use. It adds > > something like 2 seconds to a basic call like image list that returns > > almost instantly in the OSC interactive shell where there is no startup > > overhead. From my performance days, any latency over 1 second was > > considered unacceptable for an interactive call. The interactive shell > > does help with that if I'm doing a bunch of calls in a row though. > well that was kind of my point when we write sripts we invoke it over and > over again. > if i need to use osc to do lots of commands for some reason i generaly > enter the interactive > mode. the interactive mode already masks the pain so anytime it has bother > me in the past > i have just ended up using it instead. > > its been a long time since i looked at this but it think there were two > reasons it is slow on startup. > one is the need to get the token for each request and the other was > related to the way we scan > for plugins. i honestly dont know if either have imporved but the > interactive shell elimiates both > as issues. > > > > That said, you're right that 3 minutes multiplied by the number of jobs > > we run per day is significant. Picking 1000 as a round number (and I'm > > pretty sure we run a _lot_ more than that per day), a 3 minute decrease > > in runtime per job would save about 50 hours of CI time in total. Small > > things add up at scale. :-) > yep it defintly does. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On 7/26/19 6:53 PM, Clark Boylan wrote: > > > > > > > > Today I have been digging into devstack runtime costs to > help Donny > > > > > > > > Davis understand why tempest jobs sometimes timeout on the > > > > > > > > FortNebula cloud. One thing I discovered was that the > keystone user, > > > > > > > > group, project, role, and domain setup [0] can take many > minutes > > > > > > > > [1][2] (in the examples here almost 5). 
> > > > > > > > > > > > > > > > I've rewritten create_keystone_accounts to be a python tool > [3] and > > > > > > > > get the runtime for that subset of setup from ~100s to ~9s > [4]. I > > > > > > > > imagine that if we applied this to the other > create_X_accounts > > > > > > > > functions we would see similar results. > > > > > > > > > > > > > > > > I think this is so much faster because we avoid repeated > costs in > > > > > > > > openstack client including: python process startup, > pkg_resource > > > > > > > > disk scanning to find entrypoints, and needing to convert > names to > > > > > > > > IDs via the API every time osc is run. Given my change shows > this > > > > > > > > can be so much quicker is there any interest in modifying > devstack > > > > > > > > to be faster here? And if so what do we think an appropriate > > > > > > > > approach would be? > > > > > > > > > > > > > > > > [0] > > > > > > > > > > > > > > > https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 > > > > > > > > > > > > > > > > > > > > > > > > [1] > > > > > > > > > > http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 > > > > > > > > > > > > > > > > > > > > > > > > [2] > > > > > > > > > > http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 > > > > > > > > > > > > > > > > > > > > > > > > [3] https://review.opendev.org/#/c/673108/ > > > > > > > > [4] > > > > > > > > > > > > > > > http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Note the jobs compared above all ran on rax-dfw. > > > > > > > > > > > > > > > > Clark > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From melwittt at gmail.com Wed Aug 7 19:50:56 2019 From: melwittt at gmail.com (melanie witt) Date: Wed, 7 Aug 2019 12:50:56 -0700 Subject: [nova] Hyper-V CI broken on stable branches In-Reply-To: References: Message-ID: <85954248-d3f4-d095-899e-d7725b98d615@gmail.com> -hyper-v_ci at microsoft.com, +nova_hyperv_ci at cloudbasesolutions.com (used the wrong third party CI page contact info earlier, sorry) Dear Hyper-V CI maintainers, We noticed upstream that the Hyper-V CI has been failing on the stable/stein and stable/rocky (and perhaps older branches too) since this tempest change merged to unskip test_stamp_pattern: https://review.opendev.org/615434 Here are links to sample failed CI runs: stable/stein: http://cloudbase-ci.com/nova/674828/1/tempest/subunit.html.gz stable/rocky: http://cloudbase-ci.com/nova/674916/1/tempest/subunit.html.gz It looks like all is well on the master branch (looks like test_stamp_pattern does not run on master). Just wanted to alert you about the failures and see if anyone is available to fix it or add test_stamp_pattern to a skip list for Hyper-V CI. Cheers, -melanie From donny at fortnebula.com Wed Aug 7 19:52:51 2019 From: donny at fortnebula.com (Donny Davis) Date: Wed, 7 Aug 2019 15:52:51 -0400 Subject: Slow instance launch times due to RabbitMQ In-Reply-To: References: <0723e410-6029-fcf7-2bd0-8c38e4586cdd@civo.com> <1c6078ab-bc1d-7a84-cc87-1b470235ccc0@civo.com> <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com> Message-ID: I am curious how your system is set up. Are you using nova with local storage? Are you using ceph? How long does it take to launch an instance when you are seeing this message? On Wed, Aug 7, 2019 at 11:12 AM Herve Beraud wrote: > > On Tue, Aug 6, 2019 at 5:14 PM, Ben Nemec wrote: > >> Another thing to check if you're having seemingly inexplicable messaging >> issues is that there isn't a notification queue filling up somewhere.
If >> notifications are enabled somewhere but nothing is consuming them the >> size of the queue will eventually grind rabbit to a halt. >> >> I used to check queue sizes through the rabbit web ui, so I have to >> admit I'm not sure how to do it through the cli. >> > > You can use the following command to monitor your queues and observe size > and growing: > > ``` > watch -c "rabbitmqctl list_queues name messages_unacknowledged" > ``` > > Or also something like that: > > ``` > rabbitmqctl list_queues messages consumers name message_bytes > messages_unacknowledged > messages_ready head_message_timestamp > consumer_utilisation memory state | grep reply > ``` > > >> >> On 7/31/19 10:48 AM, Gabriele Santomaggio wrote: >> > Hi, >> > Are you using ssl connections ? >> > >> > Can be this issue ? >> > https://bugs.launchpad.net/ubuntu/+source/oslo.messaging/+bug/1800957 >> > >> > >> > ------------------------------------------------------------------------ >> > *From:* Laurent Dumont >> > *Sent:* Wednesday, July 31, 2019 4:20 PM >> > *To:* Grant Morley >> > *Cc:* openstack-operators at lists.openstack.org >> > *Subject:* Re: Slow instance launch times due to RabbitMQ >> > That is a bit strange, list_queues should return stuff. Couple of ideas >> : >> > >> > * Are the Rabbit connection failure logs on the compute pointing to a >> > specific controller? >> > * Are there any logs within Rabbit on the controller that would point >> > to a transient issue? >> > * cluster_status is a snapshot of the cluster at the time you ran the >> > command. If the alarms have cleared, you won't see anything. >> > * If you have the RabbitMQ management plugin activated, I would >> > recommend a quick look to see the historical metrics and overall >> status. >> > >> > >> > On Wed, Jul 31, 2019 at 9:35 AM Grant Morley > > > wrote: >> > >> > Hi guys, >> > >> > We are using Ubuntu 16 and OpenStack ansible to do our setup. 
>> > >> > rabbitmqctl list_queues >> > Listing queues >> > >> > (Doesn't appear to be any queues ) >> > >> > rabbitmqctl cluster_status >> > >> > Cluster status of node >> > 'rabbit at management-1-rabbit-mq-container-b4d7791f' >> > [{nodes,[{disc,['rabbit at management-1-rabbit-mq-container-b4d7791f', >> > 'rabbit at management-2-rabbit-mq-container-b455e77d >> ', >> > 'rabbit at management-3-rabbit-mq-container-1d6ae377 >> ']}]}, >> > {running_nodes,['rabbit at management-3-rabbit-mq-container-1d6ae377 >> ', >> > 'rabbit at management-2-rabbit-mq-container-b455e77d >> ', >> > 'rabbit at management-1-rabbit-mq-container-b4d7791f >> ']}, >> > {cluster_name,<<"openstack">>}, >> > {partitions,[]}, >> > {alarms,[{'rabbit at management-3-rabbit-mq-container-1d6ae377',[]}, >> > {'rabbit at management-2-rabbit-mq-container-b455e77d',[]}, >> > {'rabbit at management-1-rabbit-mq-container-b4d7791f >> ',[]}]}] >> > >> > Regards, >> > >> > On 31/07/2019 11:49, Laurent Dumont wrote: >> >> Could you forward the output of the following commands on a >> >> controller node? : >> >> >> >> rabbitmqctl cluster_status >> >> rabbitmqctl list_queues >> >> >> >> You won't necessarily see a high load on a Rabbit cluster that is >> >> in a bad state. >> >> >> >> On Wed, Jul 31, 2019 at 5:19 AM Grant Morley > >> > wrote: >> >> >> >> Hi all, >> >> >> >> We are randomly seeing slow instance launch / deletion times >> >> and it appears to be because of RabbitMQ. We are seeing a lot >> >> of these messages in the logs for Nova and Neutron: >> >> >> >> ERROR oslo.messaging._drivers.impl_rabbit [-] >> >> [f4ab3ca0-b837-4962-95ef-dfd7d60686b6] AMQP server on >> >> 10.6.2.212:5671 is unreachable: Too >> >> many heartbeats missed. Trying again in 1 seconds. Client >> >> port: 37098: ConnectionForced: Too many heartbeats missed >> >> >> >> The RabbitMQ cluster isn't under high load and I am not seeing >> >> any packets drop over the network when I do some tracing. 
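Donny's question above about actual launch times is worth answering with a number rather than an impression. A small sketch for timing a blocking boot from the CLI; the image, flavor, and network names are placeholders you would substitute for your own, and the timing helper itself works for any command:

```python
import shlex
import subprocess
import time

def timed(cmd: str) -> float:
    """Run a command and return the wall-clock seconds it took."""
    start = time.perf_counter()
    subprocess.run(shlex.split(cmd), check=True)
    return time.perf_counter() - start

# Placeholder resource names -- substitute your own before running
# against a real cloud:
# seconds = timed(
#     "openstack server create --wait "
#     "--image cirros --flavor m1.tiny --network private probe-vm"
# )
# print(f"boot took {seconds:.1f}s")
```

Comparing that number during quiet periods against the periods when the heartbeat errors appear would show whether the RabbitMQ symptoms and the slow launches actually correlate.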
>> >> >> >> We are only running 15 compute nodes currently and have >1000 >> >> instances so it isn't a large deployment. >> >> >> >> Are there any good configuration tweaks for RabbitMQ running >> >> on OpenStack Queens? >> >> >> >> Many Thanks, >> >> >> >> -- >> >> >> >> Grant Morley >> >> Cloud Lead, Civo Ltd >> >> www.civo.com | Signup for an account! >> >> >> >> >> > -- >> > >> > Grant Morley >> > Cloud Lead, Civo Ltd >> > www.civo.com | Signup for an account! >> > >> > >> >> > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Tue Aug 6 03:38:14 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 5 Aug 2019 22:38:14 -0500 Subject: [cinder] [3rd party ci] Deadline Has Past for Python3 Migration Message-ID: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> All, This e-mail has multiple purposes.  First, I have expanded the mail audience to go beyond just openstack-discuss to a mailing list I have created for all 3rd Party CI Maintainers associated with Cinder.  
I apologize to those of you who are getting this as a duplicate e-mail. For all 3rd Party CI maintainers who have already migrated your systems to using Python3.7...Thank you!  We appreciate you keeping up-to-date with Cinder's requirements and maintaining your CI systems. If this is the first time you are hearing of the Python3.7 requirement please continue reading. It has been decided by the OpenStack TC that support for Py2.7 would be deprecated [1].  The Train development cycle is the last cycle that will support Py2.7 and therefore all vendor drivers need to demonstrate support for Py3.7. It was discussed at the Train PTG that we would require all 3rd Party CIs to be running using Python3 by the Train milestone 2: [2]  We have been communicating the importance of getting 3rd Party CI running with py3 in meetings and e-mail for quite some time now, but it still appears that nearly half of all vendors are not yet running with Python 3. [3] If you are a vendor who has not yet moved to using Python 3 please take some time to review this document [4] as it has guidance on how to get your CI system updated.  It also includes some additional details as to why this requirement has been set and the associated background.  Also, please update the py3-ci-review etherpad with notes indicating that you are working on adding py3 support. I would also ask all vendors to review the etherpad I have created as it indicates a number of other drivers that have been marked unsupported due to CI systems not running properly.  If you are not planning to continue to support a driver adding such a note in the etherpad would be appreciated. Thanks! 
Jay [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008255.html [2] https://wiki.openstack.org/wiki/CinderTrainSummitandPTGSummary#3rd_Party_CI [3] https://etherpad.openstack.org/p/cinder-py3-ci-review [4] https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update From jlibosva at redhat.com Tue Aug 6 15:42:29 2019 From: jlibosva at redhat.com (Jakub Libosvar) Date: Tue, 6 Aug 2019 17:42:29 +0200 Subject: [neutron] OpenvSwitch firewall sctp getting dropped In-Reply-To: <030101d54b74$c4be6800$4e3b3800$@viettel.com.vn> References: <000001d54623$a91ae750$fb50b5f0$@viettel.com.vn> <030101d54b74$c4be6800$4e3b3800$@viettel.com.vn> Message-ID: <23a26029-9324-47e1-4ffa-bf1602b95493@redhat.com> On 05/08/2019 12:01, thuanlk at viettel.com.vn wrote: > I have tried any version of OpenvSwitch but problem continue happened. > Is Openvswitch firewall support sctp? Yes, as long as you have sctp conntrack support in kernel. Can you paste output of 'ovs-ofctl dump-flows br-int | grep +inv' on the node where the VM using sctp is running? If the counters are not 0 it's likely that you're missing the sctp conntrack kernel module. Jakub > > Thanks and best regards ! > > --------------------------------------- > Lăng Khắc Thuận > OCS Cloud | OCS (VTTEK) > +(84)- 966463589 > > > -----Original Message----- > From: Lang Khac Thuan [mailto:thuanlk at viettel.com.vn] > Sent: Tuesday, July 30, 2019 11:22 AM > To: 'smooney at redhat.com' ; 'openstack-discuss at lists.openstack.org' > Subject: RE: [neutron] OpenvSwitch firewall sctp getting dropped > > I have tried config SCTP but nothing change! 
> >
> openstack security group rule create --ingress --remote-ip 0.0.0.0/0 --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp
> openstack security group rule create --egress --remote-ip 0.0.0.0/0 --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp
>
> Displaying 2 items
> Direction Ether Type IP Protocol Port Range Remote IP Prefix Remote Security Group Actions
> Egress IPv4 132 2000 - 10000 0.0.0.0/0 -
> Ingress IPv4 132 2000 - 10000 0.0.0.0/0 -
>
> Thanks and best regards !
>
> ---------------------------------------
> Lăng Khắc Thuận
> OCS Cloud | OCS (VTTEK)
> +(84)- 966463589
>
>
> -----Original Message-----
> From: smooney at redhat.com [mailto:smooney at redhat.com]
> Sent: Tuesday, July 30, 2019 1:27 AM
> To: thuanlk at viettel.com.vn; openstack-discuss at lists.openstack.org
> Subject: Re: [neutron] OpenvSwitch firewall sctp getting dropped
>
> On Mon, 2019-07-29 at 22:38 +0700, thuanlk at viettel.com.vn wrote:
>> I have installed OpenStack Queens on CentOS 7 with OvS and I recently
>> used the native openvswitch firewall to implement SecurityGroup. The
>> native OvS firewall seems to work just fine with TCP/UDP traffic, but
>> it does not forward any SCTP traffic going to the VMs no matter how I
>> change the security groups. It does work if I disable port security
>> completely or use the iptables_hybrid firewall driver. What do I have to
>> do to allow SCTP packets to reach the VMs?
> The security groups API is a whitelist model, so all traffic is dropped by default.
>
> If you want to allow sctp you would have to create a new security group rule with ip_protocol set to the protocol number for sctp.
>
> e.g.
> openstack security group rule create --protocol sctp ...
>
> I'm not sure if neutron supports --dst-port for sctp, but you can still filter on --remote-ip or --remote-group and can specify the rule as an --ingress or --egress rule as normal.
> >
> https://docs.openstack.org/python-openstackclient/stein/cli/command-objects/security-group-rule.html
>
> based on this commit
> https://github.com/openstack/neutron/commit/f711ad78c5c0af44318c6234957590c91592b984
>
> it looks like neutron now validates the port ranges for sctp, implying it supports setting them, so I guess it's just a gap in the documentation.
>
>
>>
>

From julen at larrucea.eu Wed Aug 7 13:20:04 2019
From: julen at larrucea.eu (Julen Larrucea.)
Date: Wed, 7 Aug 2019 15:20:04 +0200
Subject: [training-labs] what is access domain
In-Reply-To: 
References: 
Message-ID: 

Hi Oscar,

Sorry for the delay. Here is the file you need:
https://github.com/openstack/training-labs/blob/master/labs/osbash/config/credentials

So:
User: admin
Project: admin
Password: admin_user_secret

Best regards
Julen

On Fri, Aug 2, 2019 at 11:45 PM Oscar Omar Posada Sanchez <
oscar.posada.sanchez at gmail.com> wrote:

> Hi Team,
> I am starting to study OpenStack. I am following this reference
> https://github.com/openstack/training-labs but I cannot find the access
> domain for the first login after installing the laboratory. Could you tell
> me? Thanks.
>
> --
> Thank you for your attention and time. Have a great day.
> -------
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org Wed Aug 7 23:34:25 2019
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 8 Aug 2019 01:34:25 +0200
Subject: Slow instance launch times due to RabbitMQ
In-Reply-To: <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com>
References: <0723e410-6029-fcf7-2bd0-8c38e4586cdd@civo.com> <1c6078ab-bc1d-7a84-cc87-1b470235ccc0@civo.com> <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com>
Message-ID: 

On 8/6/19 5:10 PM, Ben Nemec wrote:
> Another thing to check if you're having seemingly inexplicable messaging
> issues is that there isn't a notification queue filling up somewhere.
If > notifications are enabled somewhere but nothing is consuming them the > size of the queue will eventually grind rabbit to a halt. > > I used to check queue sizes through the rabbit web ui, so I have to > admit I'm not sure how to do it through the cli. On the cli. Purging Rabbit notification queues: rabbitmqctl purge_queue versioned_notifications.info rabbitmqctl purge_queue notifications.info Getting the total number of messages in Rabbit: NUM_MESSAGE=$(curl -k -uuser:pass https://192.168.0.1:15671/api/overview 2>/dev/null | jq '.["queue_totals"]["messages"]') The same way, you can get a json output of all queues using this URL: https://192.168.0.1:15671/api/queues and playing with jq, you can do many things like: jq '.[] | select(.name == "versioned_notifications.info") | .messages' jq '.[] | select(.name == "notifications.info") | .messages' jq '.[] | select(.name == "versioned_notifications.error") | .messages' jq '.[] | select(.name == "notifications.error") | .messages' If sum add the output of all of the above 4 queues, you get the total number of notification messages. What I did is outputing to graphite like this: echo "`hostname`.rabbitmq.notifications ${NUM_TOTAL_NOTIF} `date +%s`" \ | nc -w 2 graphite-node-hostname 2003 for the amount of notif + the other types of messages. Doing this every minute makes it possible to graph the number of messages in Grafana, which gives me a nice overview of what's going on with notifications and the rest. I hope this will help someone, Cheers, Thomas Goirand (zigo) From dangtrinhnt at gmail.com Thu Aug 8 01:00:27 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 8 Aug 2019 10:00:27 +0900 Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture In-Reply-To: <87o911n2zz.fsf@meyer.lemoncheese.net> References: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> <87o911n2zz.fsf@meyer.lemoncheese.net> Message-ID: Thanks, Jeremy and James for the super helpful information. I will try. 
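As a footnote to the jq filters above: they can be sanity-checked offline before being pointed at a live broker. In this sketch, a small sample file stands in for the management API's /api/queues response:

```shell
# sample of what /api/queues returns (heavily trimmed); the jq filter
# totals messages across all notification queues
cat > /tmp/queues.json <<'EOF'
[{"name": "notifications.info", "messages": 12},
 {"name": "versioned_notifications.info", "messages": 30},
 {"name": "reply_4f2a", "messages": 0}]
EOF
jq '[.[] | select(.name | test("notifications")) | .messages] | add' /tmp/queues.json
# -> 42
```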
On Wed, Aug 7, 2019 at 11:06 AM James E. Blair wrote: > Trinh Nguyen writes: > > > Hi Jeremy, > > > > Thanks for pointing that out. They're pretty helpful. > > > > Sorry for not clarifying the purpose of my question in the first email. > > Right now my company is using Jenkins for CI/CD which is not scalable and > > for me it's hard to define job pipeline because of XML. I'm about to > build > > a demonstration for my company using Zuul with Github as a replacement > and > > trying to make sense of the OpenStack deployment of Zuul. I have been > > working with OpenStack projects for a couple of cycles in which Zuul has > > shown me its greatness and I think I can bring that power to the company. > > > > Bests, > > In addition to the excellent information that Jeremy provided, since > you're talking about setting up a proof of concept, you may find it > simpler to start with the Zuul Quick-Start: > > https://zuul-ci.org/start > > That's a container-based tutorial that will set you up with a complete > Zuul system running on a single host, along with a private Gerrit > instance. Once you have that running, it's fairly straightforward to > take that and update the configuration to use GitHub instead of Gerrit. > > -Jim > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From khacthuan.hut at gmail.com Thu Aug 8 02:09:24 2019 From: khacthuan.hut at gmail.com (KhacThuan Bk) Date: Thu, 8 Aug 2019 09:09:24 +0700 Subject: [neutron] OpenvSwitch firewall sctp getting dropped In-Reply-To: <23a26029-9324-47e1-4ffa-bf1602b95493@redhat.com> References: <000001d54623$a91ae750$fb50b5f0$@viettel.com.vn> <030101d54b74$c4be6800$4e3b3800$@viettel.com.vn> <23a26029-9324-47e1-4ffa-bf1602b95493@redhat.com> Message-ID: I saw the counter is not 0. But no sctp conntrack module in my system. How can i find it? 
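My understanding (an assumption on my part) is that on these kernels the helper is built as nf_conntrack_proto_sctp, the old ip_conntrack_* naming being long gone, so the check would look like:

```shell
# assumption: RHEL/CentOS 7-era kernel, where the SCTP conntrack helper
# module is named nf_conntrack_proto_sctp (ip_conntrack_proto_sctp is
# the pre-2.6.20 name and no longer exists)
find "/lib/modules/$(uname -r)" -name 'nf_conntrack_proto_sctp*' 2>/dev/null || true
# if the module file is listed, load it (root required):
#   modprobe nf_conntrack_proto_sctp
#   lsmod | grep sctp
```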
[root at compute02 ~]# ovs-ofctl dump-flows br-int | grep +inv cookie=0x46c226b6d9a3ff8f, duration=229312.185s, table=72, n_packets=13, n_bytes=1274, idle_age=65534, hard_age=65534, priority=50,ct_state=+inv+trk actions=resubmit(,93) cookie=0x46c226b6d9a3ff8f, duration=229312.186s, table=82, n_packets=2517, n_bytes=925218, idle_age=65534, hard_age=65534, priority=50,ct_state=+inv+trk actions=resubmit(,93) [root at compute02 ~]# [root at compute02 ~]# [root at compute02 ~]# lsmod | grep sctp [root at compute02 ~]# [root at compute02 ~]# [root at compute02 ~]# modprobe ip_conntrack_proto_sctp modprobe: FATAL: Module ip_conntrack_proto_sctp not found. [root at compute02 ~]# [root at compute02 ~]# cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core) [root at compute02 ~]# [root at compute02 ~]# uname -r 3.10.0-957.el7.x86_64 Vào Th 5, 8 thg 8, 2019 lúc 04:37 Jakub Libosvar đã viết: > On 05/08/2019 12:01, thuanlk at viettel.com.vn wrote: > > I have tried any version of OpenvSwitch but problem continue happened. > > Is Openvswitch firewall support sctp? > > Yes, as long as you have sctp conntrack support in kernel. Can you paste > output of 'ovs-ofctl dump-flows br-int | grep +inv' on the node where > the VM using sctp is running? If the counters are not 0 it's likely that > you're missing the sctp conntrack kernel module. > > Jakub > > > > > Thanks and best regards ! > > > > --------------------------------------- > > Lăng Khắc Thuận > > OCS Cloud | OCS (VTTEK) > > +(84)- 966463589 > > > > > > -----Original Message----- > > From: Lang Khac Thuan [mailto:thuanlk at viettel.com.vn] > > Sent: Tuesday, July 30, 2019 11:22 AM > > To: 'smooney at redhat.com' ; ' > openstack-discuss at lists.openstack.org' < > openstack-discuss at lists.openstack.org> > > Subject: RE: [neutron] OpenvSwitch firewall sctp getting dropped > > > > I have tried config SCTP but nothing change! 
> > > > openstack security group rule create --ingress --remote-ip 0.0.0.0/0 > --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp openstack > security group rule create --egress --remote-ip 0.0.0.0/0 --protocol 132 > --dst-port 2000:10000 --description "SCTP" sctp > > > > Displaying 2 items > > Direction Ether Type IP Protocol Port Range Remote IP > Prefix Remote Security Group Actions > > Egress IPv4 132 2000 - 10000 0.0.0.0/0 - > > Ingress IPv4 132 2000 - 10000 0.0.0.0/0 - > > > > > > Thanks and best regards ! > > > > --------------------------------------- > > Lăng Khắc Thuận > > OCS Cloud | OCS (VTTEK) > > +(84)- 966463589 > > > > > > -----Original Message----- > > From: smooney at redhat.com [mailto:smooney at redhat.com] > > Sent: Tuesday, July 30, 2019 1:27 AM > > To: thuanlk at viettel.com.vn; openstack-discuss at lists.openstack.org > > Subject: Re: [neutron] OpenvSwitch firewall sctp getting dropped > > > > On Mon, 2019-07-29 at 22:38 +0700, thuanlk at viettel.com.vn wrote: > >> I have installed Openstack Queens on CentOs 7 with OvS and I recently > >> used the native openvswitch firewall to implement SecusiryGroup. The > >> native OvS firewall seems to work just fine with TCP/UDP traffic but > >> it does not forward any SCTP traffic going to the VMs no matter how I > >> change the security groups, But it run if i disable port security > >> completely or use iptables_hybrid firewall driver. What do I have to > >> do to allow SCTP packets to reach the VMs? > > the security groups api is a whitelist model so all traffic is droped by > default. > > > > if you want to allow sctp you would ihave to create an new security > group rule with ip_protocol set to the protocol number for sctp. > > > > e.g. > > openstack security group rule create --protocol sctp ... > > > > im not sure if neutron support --dst-port for sctp but you can still > filter on --remote-ip or --remote-group and can specify the rule as an > --ingress or --egress rule as normal. 
> > > > > https://docs.openstack.org/python-openstackclient/stein/cli/command-objects/security-group-rule.html > > > > based on this commit > https://github.com/openstack/neutron/commit/f711ad78c5c0af44318c6234957590c91592b984 > > > > it looks like neutron now validates the prot ranges for sctp impligying > it support setting them so i gues its just a gap in the documentation. > > > > > > > >> > > > > > > > -- *Lăng Khắc Thuận* *Phone*: 01649729889 *Email: khacthuan.hut at gmail.com * *Skype: khacthuan_bk* *Student at Applied Mathematics and Informatics* *Center for training of excellent students* *Hanoi University of Science and Technology. * -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Aug 8 04:11:59 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Aug 2019 14:11:59 +1000 Subject: [ptl][release] Stepping down as Release Management PTL Message-ID: <20190808041159.GK2352@thor.bakeyournoodle.com> Hello all, I'm sorry to say that I have insufficient time and resources to dedicate to being the kind of PTL the community deserves. With that in mind I've asked if Sean has the time to see out the role for the remainder of the Train cycle and he's agreed. I have proposed https://review.opendev.org/675246 up update the governance repo. I'm not going any where but instead trying to take on less and do *that* better. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From nicolas.ghirlanda at everyware.ch Thu Aug 8 08:44:56 2019
From: nicolas.ghirlanda at everyware.ch (Nicolas Ghirlanda)
Date: Thu, 8 Aug 2019 10:44:56 +0200
Subject: [nova/neutron/openvswitch] "No such device" after failed live migrations
Message-ID: <30fa7330-e758-10fe-d380-f19eb7fab264@everyware.ch>

Hello all,

after a misconfigured setup of a couple of VMs, the live migration of those VMs failed.

Now we have some "No such device" listings in openvswitch for those VMs on some compute nodes.

(openvswitch-vswitchd)[root at ewos1-com1-prod /]# ovs-vsctl show
b5034213-9b15-45f5-8ce0-edcf32d16c57
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "qvo35487c68-23"
            tag: 5
            Interface "qvo35487c68-23"
        Port "qvo5b1aac7e-d4"
            Interface "qvo5b1aac7e-d4"
                error: "could not open network device qvo5b1aac7e-d4 (No such device)"
        Port "qvod9cdff27-9c"
            Interface "qvod9cdff27-9c"
                error: "could not open network device qvod9cdff27-9c (No such device)"
        Port "qvoed2da602-02"
            tag: 32
            Interface "qvoed2da602-02"
        Port "qvoa63378f6-d9"

for example qvo5b1aac7e-d4

(openvswitch-vswitchd)[root at ewos1-com1-prod /]# ovs-vsctl list Interface ca7e771b-88a0-4cf6-b9be-6be77816baff
_uuid               : ca7e771b-88a0-4cf6-b9be-6be77816baff
admin_state         : []
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : []
error               : "could not open network device qvo5b1aac7e-d4 (No such device)"
external_ids        :
{attached-mac="fa:16:3e:4b:f0:4b", iface-id="5b1aac7e-d45b-4a40-8f30-1275b07ffc0b", iface-status=active, vm-uuid="c3fbcc6b-2dfe-4aa1-82bd-522b161a37a9"} ifindex             : [] ingress_policing_burst: 0 ingress_policing_rate: 0 lacp_current        : [] link_resets         : [] link_speed          : [] link_state          : [] lldp                : {} mac                 : [] mac_in_use          : [] mtu                 : [] mtu_request         : [] name                : "qvo5b1aac7e-d4" ofport              : -1 ofport_request      : [] options             : {} other_config        : {} statistics          : {} status              : {} type                : "" there are no tap devices or bridge devices on the compute node, but still the entry in openvswitch root at computenode5:~# brctl show | grep 5b1aac7e root at computenode5:~# Is that an issue? if yes, should we remove the ports manually or reboot the compute nodes? Could that lead to networking issues? kind regards Nicolas ... -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5230 bytes Desc: not available URL: From missile0407 at gmail.com Thu Aug 8 10:38:15 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 8 Aug 2019 18:38:15 +0800 Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached. In-Reply-To: References: Message-ID: Hi Mark, thanks for suggestion. I think this, too. Cinder-api may normal but HAproxy could be very busy since one controller down. I'll try to increase the value about cinder-api timeout. Mark Goddard 於 2019年8月7日 週三 上午12:06寫道: > > > On Tue, 6 Aug 2019 at 16:33, Matt Riedemann wrote: > >> On 8/6/2019 7:18 AM, Mark Goddard wrote: >> > We do use a larger timeout for glance-api >> > (haproxy_glance_api_client_timeout >> > and haproxy_glance_api_server_timeout, both 6h). 
Perhaps we need >> > something similar for cinder-api. >> >> A 6 hour timeout for cinder API calls would be nuts IMO. The thing that >> was failing was a volume attachment delete/create from what I recall, >> which is the newer version (as of Ocata?) for the old >> initialize_connection/terminate_connection APIs. These are synchronous >> RPC calls from cinder-api to cinder-volume to do things on the storage >> backend and we have seen them take longer than 60 seconds in the gate CI >> runs with the lvm driver. I think the investigation normally turned up >> lvchange taking over 60 seconds on some concurrent operation locking out >> the RPC call which eventually results in the MessagingTimeout from >> oslo.messaging. That's unrelated to your gateway timeout from HAProxy >> but the point is yeah you likely want to bump up those timeouts since >> cinder-api has these synchronous calls to the cinder-volume service. I >> just don't think you need to go to 6 hours :). I think the keystoneauth1 >> default http response timeout is 10 minutes so maybe try that. >> >> > Yeah, wasn't advocating for 6 hours - just showing which knobs are > available :) > > >> -- >> >> Thanks, >> >> Matt >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Thu Aug 8 11:43:45 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 8 Aug 2019 20:43:45 +0900 Subject: [glance] glance-cache-management hardcodes URL with port Message-ID: <4b4d09d3-21ae-b2bf-e888-faff196d3aec@gmail.com> Stein re-introduces Glance cache management, but I have not been able to use the glance-cache-manage command. I always get errno 111, connection refused. It turns out that the command tries to access http://localhost:9292. It has options for non-default IP address and port, but unfortunately on my (devstack) cloud, the Glance endpoint is http://192.168.1.200/image. No port. Is there a way to tell glance-cache-manage to use this endpoint? Bernd. 
From sean.mcginnis at gmx.com Thu Aug 8 11:50:50 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 8 Aug 2019 06:50:50 -0500 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <20190808041159.GK2352@thor.bakeyournoodle.com> References: <20190808041159.GK2352@thor.bakeyournoodle.com> Message-ID: <20190808115050.GA28237@sm-workstation> On Thu, Aug 08, 2019 at 02:11:59PM +1000, Tony Breeds wrote: > Hello all, > I'm sorry to say that I have insufficient time and resources to > dedicate to being the kind of PTL the community deserves. With that in > mind I've asked if Sean has the time to see out the role for the > remainder of the Train cycle and he's agreed. > > I have proposed https://review.opendev.org/675246 up update the > governance repo. > > I'm not going any where but instead trying to take on less and do *that* > better. > > Yours Tony. Thanks for leading the team over the last several months Tony! From satish.txt at gmail.com Thu Aug 8 12:39:18 2019 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 8 Aug 2019 08:39:18 -0400 Subject: [tripleo] Proposing Kevin Carter (cloudnull) for tripleo-ansible core In-Reply-To: References: Message-ID: <5703B450-62BB-4648-A8AD-3252B63FBC12@gmail.com> +1 Without any single doubt. in past, I worked with him on openstack-ansible project and he is freaking awesome! Sent from my iPhone > On Jul 26, 2019, at 5:00 PM, Alex Schultz wrote: > > Hey folks, > > I'd like to propose Kevin as a core for the tripleo-ansible repo (e.g. tripleo-ansible-core). He has made excellent progress centralizing our ansible roles and improving the testing around them. > > Please reply with your approval/objections. If there are no objections, we'll add him to tripleo-ansible-core next Friday Aug 2, 2019. 
> > Thanks, > -Alex From ssbarnea at redhat.com Thu Aug 8 12:49:02 2019 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Thu, 8 Aug 2019 13:49:02 +0100 Subject: [tripleo] Proposing Kevin Carter (cloudnull) for tripleo-ansible core In-Reply-To: <5703B450-62BB-4648-A8AD-3252B63FBC12@gmail.com> References: <5703B450-62BB-4648-A8AD-3252B63FBC12@gmail.com> Message-ID: <337F9564-2253-42CE-AC32-96BD9FAD0C30@redhat.com> While I am not a core myself, I do support the proposal as I happened to watch and review many changes made by him around ansible roles. Cheers, Sorin > On 8 Aug 2019, at 13:39, Satish Patel wrote: > > +1 Without any single doubt. in past, I worked with him on openstack-ansible project and he is freaking awesome! > > Sent from my iPhone > >> On Jul 26, 2019, at 5:00 PM, Alex Schultz wrote: >> >> Hey folks, >> >> I'd like to propose Kevin as a core for the tripleo-ansible repo (e.g. tripleo-ansible-core). He has made excellent progress centralizing our ansible roles and improving the testing around them. >> >> Please reply with your approval/objections. If there are no objections, we'll add him to tripleo-ansible-core next Friday Aug 2, 2019. >> >> Thanks, >> -Alex > From doug at doughellmann.com Thu Aug 8 13:16:32 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 8 Aug 2019 09:16:32 -0400 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <20190808115050.GA28237@sm-workstation> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <20190808115050.GA28237@sm-workstation> Message-ID: <6713DC6D-0297-4705-89D7-B7CBF4F99D85@doughellmann.com> > On Aug 8, 2019, at 7:50 AM, Sean McGinnis wrote: > > On Thu, Aug 08, 2019 at 02:11:59PM +1000, Tony Breeds wrote: >> Hello all, >> I'm sorry to say that I have insufficient time and resources to >> dedicate to being the kind of PTL the community deserves. 
With that in >> mind I've asked if Sean has the time to see out the role for the >> remainder of the Train cycle and he's agreed. >> >> I have proposed https://review.opendev.org/675246 up update the >> governance repo. >> >> I'm not going any where but instead trying to take on less and do *that* >> better. >> >> Yours Tony. > > Thanks for leading the team over the last several months Tony! > > Thank you, Tony & Sean. I know how time-consuming the role can be, so I appreciate the work both of you are doing. Doug From thierry at openstack.org Thu Aug 8 13:31:37 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 8 Aug 2019 15:31:37 +0200 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <20190808041159.GK2352@thor.bakeyournoodle.com> References: <20190808041159.GK2352@thor.bakeyournoodle.com> Message-ID: <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> Tony Breeds wrote: > Hello all, > I'm sorry to say that I have insufficient time and resources to > dedicate to being the kind of PTL the community deserves. With that in > mind I've asked if Sean has the time to see out the role for the > remainder of the Train cycle and he's agreed. Thanks Tony for all your work and for driving the train up to this station ! Taking the opportunity to recruit: if you are interested in learning how we do release management at scale, help openstack as a whole and are not afraid of train-themed dadjokes, join us in #openstack-release ! 
-- Thierry Carrez (ttx) From dtantsur at redhat.com Thu Aug 8 13:41:40 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 8 Aug 2019 15:41:40 +0200 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> Message-ID: <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> On 8/8/19 3:31 PM, Thierry Carrez wrote: > Tony Breeds wrote: >> Hello all, >>     I'm sorry to say that I have insufficient time and resources to >> dedicate to being the kind of PTL the community deserves.  With that in >> mind I've asked if Sean has the time to see out the role for the >> remainder of the Train cycle and he's agreed. > > Thanks Tony for all your work and for driving the train up to this station ! > > Taking the opportunity to recruit: if you are interested in learning how we do > release management at scale, help openstack as a whole and are not afraid of > train-themed dadjokes, join us in #openstack-release ! > After having quite some (years of) experience with release and stable affairs for ironic, I think I could help here. Dmitry From gmann at ghanshyammann.com Thu Aug 8 14:01:21 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Aug 2019 23:01:21 +0900 Subject: [nova] API updates week 19-32 Message-ID: <16c7188b80b.d6b84f7161156.3889376610277260569@ghanshyammann.com> Hello Everyone, Please find the Nova API updates of this week. API Related BP : ============ COMPLETED: 1. Support adding description while locking an instance: - https://blueprints.launchpad.net/nova/+spec/add-locked-reason 2. Add host and hypervisor_hostname flag to create server - https://blueprints.launchpad.net/nova/+spec/add-host-and-hypervisor-hostname-flag-to-create-server Code Ready for Review: ------------------------------ 1. 
Specifying az when restore shelved server - Topic: https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged) - Weekly Progress: It is ready for re-review. Patch is updated. 2. Nova API cleanup - Topic: https://review.opendev.org/#/q/topic:bp/api-consistency-cleanup+(status:open+OR+status:merged) - Weekly Progress: This is on runway. stephenfin has +2 on nova patch. I will work on novalcient and osc patch tomorrow. 3. Show Server numa-topology - Topic: https://review.opendev.org/#/q/topic:bp/show-server-numa-topology+(status:open+OR+status:merged) - Weekly Progress: Under review. 4. Nova API policy improvement - Topic: https://review.openstack.org/#/q/topic:bp/policy-default-refresh+(status:open+OR+status:merged) - Weekly Progress: First set of os-service API policy series is ready to review - https://review.opendev.org/#/c/648480/7 Specs are merged and code in-progress: ------------------------------ ------------------ 5. Detach and attach boot volumes: - Topic: https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged) - Weekly Progress: No Progress. Patches are in merge conflict. Spec Ready for Review: ----------------------------- 1. Support for changing deleted_on_termination after boot -Spec: https://review.openstack.org/#/c/580336/ - Weekly Progress: No update this week. Pending on Lee Yarwood proposal after PTG discussion. 3. Support delete_on_termination in volume attach api -Spec: https://review.openstack.org/#/c/612949/ - Weekly Progress: No updates this week. matt recommend to merging this with 580336 which is pending on Lee Yarwood proposal. Previously approved Spec needs to be re-proposed for Train: --------------------------------------------------------------------------- 1. 
Servers Ips non-unique network names : - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - I remember I planned this to re-propose but could not get time. If anyone would like to help on this, please re-propose it; otherwise I will start this in the U cycle. 2. Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - This also needs a volunteer - http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007411.html Others: 1. Add API ref guideline for body text - 2 api-ref are left to fix. Bugs: ==== No progress report this week. NOTE- There might be some bugs which are not tagged as 'api' or 'api-ref'; those are not in the above list. Tag such bugs so that we can keep our eyes on them. From mark at stackhpc.com Thu Aug 8 14:36:27 2019 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 8 Aug 2019 15:36:27 +0100 Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached. In-Reply-To: References: Message-ID: On Thu, 8 Aug 2019 at 11:39, Eddie Yen wrote: > Hi Mark, thanks for the suggestion. > > I think this, too. Cinder-api may be normal but HAProxy could be very busy > since one controller is down. > I'll try to increase the cinder-api timeout value. > Will you be proposing this fix upstream? > > Mark Goddard wrote on Wed, 7 Aug 2019 at 00:06: > >> >> >> On Tue, 6 Aug 2019 at 16:33, Matt Riedemann wrote: >> >>> On 8/6/2019 7:18 AM, Mark Goddard wrote: >>> > We do use a larger timeout for glance-api >>> > (haproxy_glance_api_client_timeout >>> > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need >>> > something similar for cinder-api. >>> >>> A 6 hour timeout for cinder API calls would be nuts IMO.
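For readers wanting to try Mark's suggestion, the glance timeout knobs he quotes are kolla-ansible variables set in globals.yml. A minimal sketch follows; note the cinder-specific names are hypothetical analogues of the glance ones and may not exist in your kolla-ansible release, so check the haproxy role defaults before relying on them:

```yaml
# /etc/kolla/globals.yml (sketch only)
# Real knobs quoted in this thread:
haproxy_glance_api_client_timeout: "6h"
haproxy_glance_api_server_timeout: "6h"
# Hypothetical cinder analogues, set to ~10 minutes as a less
# extreme value than the 6 hours used for image uploads:
haproxy_cinder_api_client_timeout: "10m"
haproxy_cinder_api_server_timeout: "10m"
```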
The thing that >>> was failing was a volume attachment delete/create from what I recall, >>> which is the newer version (as of Ocata?) for the old >>> initialize_connection/terminate_connection APIs. These are synchronous >>> RPC calls from cinder-api to cinder-volume to do things on the storage >>> backend and we have seen them take longer than 60 seconds in the gate CI >>> runs with the lvm driver. I think the investigation normally turned up >>> lvchange taking over 60 seconds on some concurrent operation locking out >>> the RPC call which eventually results in the MessagingTimeout from >>> oslo.messaging. That's unrelated to your gateway timeout from HAProxy >>> but the point is yeah you likely want to bump up those timeouts since >>> cinder-api has these synchronous calls to the cinder-volume service. I >>> just don't think you need to go to 6 hours :). I think the keystoneauth1 >>> default http response timeout is 10 minutes so maybe try that. >>> >>> >> Yeah, wasn't advocating for 6 hours - just showing which knobs are >> available :) >> >> >>> -- >>> >>> Thanks, >>> >>> Matt >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From thiagocmartinsc at gmail.com Thu Aug 8 19:24:27 2019 From: thiagocmartinsc at gmail.com (=?UTF-8?B?TWFydGlueCAtIOOCuOOCp+ODvOODoOOCug==?=) Date: Thu, 8 Aug 2019 15:24:27 -0400 Subject: [nova-lxd] no one uses nova-lxd? In-Reply-To: References: Message-ID: Hey Adrian, I was playing with Nova LXD with OpenStack Ansible, I have an example here: https://github.com/tmartinx/openstack_deploy/tree/master/group_vars It's too bad that Nova LXD is gone... :-( I use LXD a lot in my Ubuntu servers (not openstack based), so, next step would be to deploy bare-metal OpenStack clouds with it but, I canceled my plans. Cheers! 
Thiago On Fri, 26 Jul 2019 at 15:37, Adrian Andreias wrote: > Hey, > > We were planing to migrate some thousand containers from OpenVZ 6 to > Nova-LXD this fall and I know at least one company with the same plans. > > I read the message about current team retiring from the project. > > Unfortunately we don't have the manpower to invest heavily in the project > development. > We would however be able to allocate a few hours per month, at least for > bug fixing. > > So I'm curios if there are organizations using or planning to use Nova-LXD > in production and they have the know-how and time to contribute. > > It would be a pity if the the project dies. > > > Cheers! > - Adrian Andreias > https://fleio.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Thu Aug 8 20:15:58 2019 From: donny at fortnebula.com (Donny Davis) Date: Thu, 8 Aug 2019 16:15:58 -0400 Subject: [nova-lxd] no one uses nova-lxd? In-Reply-To: References: Message-ID: https://docs.openstack.org/nova/stein/configuration/config.html Looks like you can still use LXC if that fits your use case. It also looks like there are images that are current here https://us.images.linuxcontainers.org/ Not sure the state of the driver, but maybe give it a whirl and let us know how it goes. On Thu, Aug 8, 2019 at 3:28 PM Martinx - ジェームズ wrote: > Hey Adrian, > > I was playing with Nova LXD with OpenStack Ansible, I have an example > here: > > https://github.com/tmartinx/openstack_deploy/tree/master/group_vars > > It's too bad that Nova LXD is gone... :-( > > I use LXD a lot in my Ubuntu servers (not openstack based), so, next step > would be to deploy bare-metal OpenStack clouds with it but, I canceled my > plans. > > Cheers! > Thiago > > On Fri, 26 Jul 2019 at 15:37, Adrian Andreias wrote: > >> Hey, >> >> We were planing to migrate some thousand containers from OpenVZ 6 to >> Nova-LXD this fall and I know at least one company with the same plans. 
>> >> I read the message about current team retiring from the project. >> >> Unfortunately we don't have the manpower to invest heavily in the project >> development. >> We would however be able to allocate a few hours per month, at least for >> bug fixing. >> >> So I'm curios if there are organizations using or planning to use >> Nova-LXD in production and they have the know-how and time to contribute. >> >> It would be a pity if the the project dies. >> >> >> Cheers! >> - Adrian Andreias >> https://fleio.com >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Aug 8 21:22:50 2019 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 8 Aug 2019 17:22:50 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core Message-ID: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Hey all, I'd like to propose Eric Fried be made core on SDK. This is slightly different than a normal core proposal, so I'd like to say a few more words about it than normal- largely because I think it's a pattern we might want to explore in SDK land. Eric is obviously super smart and capable - he's PTL of Nova after all. He's one of the few people in OpenStack that has a handle on version discovery, having helped write the keystoneauth support for it. And he's core on os-service-types, which is another piece of client-side arcana. However, he's a busy human, what with being Nova core, and his interaction with SDK has been limited to the intersection of it needed for Nova integration. I don't expect that to change. Basically, as it stands now, Eric only votes on SDK patches that have impact on the use of SDK in Nova - but when he does they are thorough reviews and they indicate "this makes things better for Nova". So I'd like to start recognizing such a vote. 
As our overall numbers diminish, I think we need to be more efficient with the use of our human time - and along with that we need to find new ways to trust each other to act on behalf of the project. I'd like to give a stab at doing that here. Thoughts? Monty From openstack at nemebean.com Thu Aug 8 21:38:52 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 8 Aug 2019 16:38:52 -0500 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: <2d7fed2d-9057-87ee-1646-ee1701494cbe@nemebean.com> On 8/8/19 4:22 PM, Monty Taylor wrote: > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. > He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And he's > core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". So I'd > like to start recognizing such a vote. > > As our overall numbers diminish, I think we need to be more efficient > with the use of our human time - and along with that we need to find new > ways to trust each other to act on behalf of the project. I'd like to > give a stab at doing that here. > > Thoughts? +1 from me. 
I've adopted essentially the same philosophy for Oslo. > Monty > From flux.adam at gmail.com Thu Aug 8 22:05:39 2019 From: flux.adam at gmail.com (Adam Harwell) Date: Fri, 9 Aug 2019 07:05:39 +0900 Subject: [nova-lxd] no one uses nova-lxd? In-Reply-To: References: Message-ID: Octavia was looking at doing a proof of concept container based backend driver using nova-lxd, and some work had been slowly ongoing for the past couple of years. But, it looks like we also will have to completely abandon that effort if the driver is dead. Shame. :( --Adam On Fri, Aug 9, 2019, 05:19 Donny Davis wrote: > https://docs.openstack.org/nova/stein/configuration/config.html > > Looks like you can still use LXC if that fits your use case. It also looks > like there are images that are current here > https://us.images.linuxcontainers.org/ > > Not sure the state of the driver, but maybe give it a whirl and let us > know how it goes. > > > > > On Thu, Aug 8, 2019 at 3:28 PM Martinx - ジェームズ > wrote: > >> Hey Adrian, >> >> I was playing with Nova LXD with OpenStack Ansible, I have an example >> here: >> >> https://github.com/tmartinx/openstack_deploy/tree/master/group_vars >> >> It's too bad that Nova LXD is gone... :-( >> >> I use LXD a lot in my Ubuntu servers (not openstack based), so, next >> step would be to deploy bare-metal OpenStack clouds with it but, I canceled >> my plans. >> >> Cheers! >> Thiago >> >> On Fri, 26 Jul 2019 at 15:37, Adrian Andreias wrote: >> >>> Hey, >>> >>> We were planing to migrate some thousand containers from OpenVZ 6 to >>> Nova-LXD this fall and I know at least one company with the same plans. >>> >>> I read the message about current team retiring from the project. >>> >>> Unfortunately we don't have the manpower to invest heavily in the >>> project development. >>> We would however be able to allocate a few hours per month, at least for >>> bug fixing. 
>>> >>> So I'm curios if there are organizations using or planning to use >>> Nova-LXD in production and they have the know-how and time to contribute. >>> >>> It would be a pity if the the project dies. >>> >>> >>> Cheers! >>> - Adrian Andreias >>> https://fleio.com >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Aug 8 22:18:16 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 08 Aug 2019 23:18:16 +0100 Subject: [nova-lxd] no one uses nova-lxd? In-Reply-To: References: Message-ID: On Fri, 2019-08-09 at 07:05 +0900, Adam Harwell wrote: > Octavia was looking at doing a proof of concept container based backend > driver using nova-lxd, and some work had been slowly ongoing for the past > couple of years. But, it looks like we also will have to completely abandon > that effort if the driver is dead. Shame. :( you could try the nova libvirt driver with virt_type=lxc or zun instead > > --Adam > > On Fri, Aug 9, 2019, 05:19 Donny Davis wrote: > > > https://docs.openstack.org/nova/stein/configuration/config.html > > > > Looks like you can still use LXC if that fits your use case. It also looks > > like there are images that are current here > > https://us.images.linuxcontainers.org/ > > > > Not sure the state of the driver, but maybe give it a whirl and let us > > know how it goes. > > > > > > > > > > On Thu, Aug 8, 2019 at 3:28 PM Martinx - ジェームズ > > wrote: > > > > > Hey Adrian, > > > > > > I was playing with Nova LXD with OpenStack Ansible, I have an example > > > here: > > > > > > https://github.com/tmartinx/openstack_deploy/tree/master/group_vars > > > > > > It's too bad that Nova LXD is gone... :-( > > > > > > I use LXD a lot in my Ubuntu servers (not openstack based), so, next > > > step would be to deploy bare-metal OpenStack clouds with it but, I canceled > > > my plans. > > > > > > Cheers!
> > > Thiago > > > > > > On Fri, 26 Jul 2019 at 15:37, Adrian Andreias wrote: > > > > > > > Hey, > > > > > > > > We were planing to migrate some thousand containers from OpenVZ 6 to > > > > Nova-LXD this fall and I know at least one company with the same plans. > > > > > > > > I read the message about current team retiring from the project. > > > > > > > > Unfortunately we don't have the manpower to invest heavily in the > > > > project development. > > > > We would however be able to allocate a few hours per month, at least for > > > > bug fixing. > > > > > > > > So I'm curios if there are organizations using or planning to use > > > > Nova-LXD in production and they have the know-how and time to contribute. > > > > > > > > It would be a pity if the the project dies. > > > > > > > > > > > > Cheers! > > > > - Adrian Andreias > > > > https://fleio.com > > > > > > > > From mriedemos at gmail.com Thu Aug 8 23:31:24 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 8 Aug 2019 18:31:24 -0500 Subject: [nova] Intermittent gate failures in functional tests Message-ID: <7817edaa-57e7-4fe4-231c-e9214827c301@gmail.com> In case you're seeing a bunch of nova versioned notification functional tests failing this week, it's being tracked [1] and there is a skip patch approved [2] to hopefully resolve it while a long-term fix is worked. [1] http://status.openstack.org/elastic-recheck/#1839515 [2] https://review.opendev.org/#/c/675417 -- Thanks, Matt From colleen at gazlene.net Fri Aug 9 00:29:44 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 08 Aug 2019 17:29:44 -0700 Subject: [keystone] [stein] [ops] user_enabled_emulation config problem In-Reply-To: References: Message-ID: <15c29286-de12-4dfe-be36-393f425c57bf@www.fastmail.com> Hi Radosław, On Tue, Aug 6, 2019, at 04:13, Radosław Piliszek wrote: > Hello all, > > I investigated the case. > My issue arises from group_members_are_ids ignored for > user_enabled_emulation_use_group_config. 
> I reported a bug in keystone: > https://bugs.launchpad.net/keystone/+bug/1839133 > and will submit a patch. > Hopefully it helps someone else as well. > > Kind regards, > Radek Thanks for the bug report and the patch. I've added the [ops] tag to the subject line of this thread because I'm curious how many other people have tried to use the user_enabled_emulation feature and whether anyone else has run into this problem. I'm seeing similar behavior even when using the groupOfNames objectclass and not using group_members_are_ids, so I'm hesitant to add conditionals based on that configuration. Have you tried this on any other versions of keystone besides Stein? Colleen > > On Sat, 3 Aug 2019 at 20:56, Radosław Piliszek > wrote: > > Hello all, > > > > I have an issue using user_enabled_emulation with my LDAP solution. > > > > I set: > > user_tree_dn = ou=Users,o=UCO > > user_objectclass = inetOrgPerson > > user_id_attribute = uid > > user_name_attribute = uid > > user_enabled_emulation = true > > user_enabled_emulation_dn = cn=Users,ou=Groups,o=UCO > > user_enabled_emulation_use_group_config = true > > group_tree_dn = ou=Groups,o=UCO > > group_objectclass = posixGroup > > group_id_attribute = cn > > group_name_attribute = cn > > group_member_attribute = memberUid > > group_members_are_ids = true > > > > Keystone properly lists members of the Users group but they all remain disabled. > > Did I misinterpret something? > > > > Kind regards, > > Radek From jordan.ansell at catalyst.net.nz Fri Aug 9 03:31:52 2019 From: jordan.ansell at catalyst.net.nz (Jordan Ansell) Date: Fri, 9 Aug 2019 15:31:52 +1200 Subject: [nova][entropy] what are your rate limits?? Message-ID: Hello Openstack Discuss, I am doing some investigation into instance entropy and was wondering what settings others are using with regard to rate limiting entropy supplied by the hypervisor.
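To put concrete numbers on such a cap, here is a back-of-the-envelope sketch. It assumes the libvirt-style rate-limit semantics of "N bytes per period, with the period in milliseconds" (as exposed through nova's hw_rng:rate_bytes / hw_rng:rate_period flavor extra specs; verify the exact spec names against your nova release):

```python
# Back-of-the-envelope arithmetic for virtio-rng rate limiting.
# Assumes the rate limit means "rate_bytes allowed per rate_period
# milliseconds", matching libvirt's <rng><rate bytes=... period=.../>.

def entropy_bits_per_second(rate_bytes: int, rate_period_ms: int) -> float:
    """Upper bound on entropy a single guest can draw, in bits/s."""
    return rate_bytes * 8 * 1000.0 / rate_period_ms

# The proposed floor of 100 bytes/s:
floor = entropy_bits_per_second(rate_bytes=100, rate_period_ms=1000)
print(floor)  # 800.0 bits/s

# Time for one guest at that cap to pull a full 4096-bit kernel pool:
pool_bits = 4096
print(pool_bits / floor)  # 5.12 seconds
```

Whether 800 bits/s per guest is "greedy" then depends mostly on how many instances per hypervisor share the host's entropy source.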
Specifically, we're adding the "hw_rng:allowed=True" nova flavor property to pass libvirt the relevant config, but need to decide the appropriate rate limiting settings to prevent instances from being greedy with entropy but still retain a comfortable level for themselves. I've done some experimenting (100 bytes/s is possibly a minimum, but still allows a comfortable value of ~1000 for free entropy in instances) I'm also curious to hear other's experiences when it comes to entropy in Openstack: * What sources of entropy did you use in the hypervisor? * Issues you've faced which was caused by insufficient entropy (instance or host) Note, this is for a public cloud scenario, should that impact any suggestions you have. Regards, Jordan From artem.goncharov at gmail.com Fri Aug 9 06:04:46 2019 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 9 Aug 2019 08:04:46 +0200 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <2d7fed2d-9057-87ee-1646-ee1701494cbe@nemebean.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> <2d7fed2d-9057-87ee-1646-ee1701494cbe@nemebean.com> Message-ID: +1 from me. Would be great if so we would be able to address all our discovery challenges. Artem ---- typed from mobile, auto-correct typos assumed ---- On Thu, 8 Aug 2019, 23:42 Ben Nemec, wrote: > > > On 8/8/19 4:22 PM, Monty Taylor wrote: > > Hey all, > > > > I'd like to propose Eric Fried be made core on SDK. > > > > This is slightly different than a normal core proposal, so I'd like to > > say a few more words about it than normal- largely because I think it's > > a pattern we might want to explore in SDK land. > > > > Eric is obviously super smart and capable - he's PTL of Nova after all. > > He's one of the few people in OpenStack that has a handle on version > > discovery, having helped write the keystoneauth support for it. And he's > > core on os-service-types, which is another piece of client-side arcana. 
> > > > However, he's a busy human, what with being Nova core, and his > > interaction with SDK has been limited to the intersection of it needed > > for Nova integration. I don't expect that to change. > > > > Basically, as it stands now, Eric only votes on SDK patches that have > > impact on the use of SDK in Nova - but when he does they are thorough > > reviews and they indicate "this makes things better for Nova". So I'd > > like to start recognizing such a vote. > > > > As our overall numbers diminish, I think we need to be more efficient > > with the use of our human time - and along with that we need to find new > > ways to trust each other to act on behalf of the project. I'd like to > > give a stab at doing that here. > > > > Thoughts? > > +1 from me. I've adopted essentially the same philosophy for Oslo. > > > Monty > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Aug 9 06:05:55 2019 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Fri, 9 Aug 2019 08:05:55 +0200 Subject: [keystone] [stein] [ops] user_enabled_emulation config problem In-Reply-To: <15c29286-de12-4dfe-be36-393f425c57bf@www.fastmail.com> References: <15c29286-de12-4dfe-be36-393f425c57bf@www.fastmail.com> Message-ID: Hi Colleen, at least Rocky is affected too. The issue is posixGroup is not a list of DNs (unlike groupOfNames, the default, which is) but IDs - the listing code already took that into account (by group_members_are_ids being on), the emulation code did not. It does not make sense for the two to behave differently when you ask them to behave the same (by user_enabled_emulation_use_group_config being on). Kind regards, Radek On Fri, 9 Aug 2019 at 02:31, Colleen Murphy wrote: > Hi Radosław, > > On Tue, Aug 6, 2019, at 04:13, Radosław Piliszek wrote: > > Hello all, > > > > I investigated the case.
> > My issue arises from group_members_are_ids ignored for > > user_enabled_emulation_use_group_config. > > I reported a bug in keystone: > > https://bugs.launchpad.net/keystone/+bug/1839133 > > and will submit a patch. > > Hopefully it helps someone else as well. > > > > Kind regards, > > Radek > > Thanks for the bug report and the patch. I've added the [ops] tag to the > subject line of this thread because I'm curious how many other people have > tried to use the user_enabled_emulation feature and whether anyone else has > run into this problem. > > I'm seeing similar behavior even when using the groupOfNames objectclass > and not using group_members_are_ids, so I'm hesitant to add conditionals > based on that configuration. > > Have you tried this on any other versions of keystone besides Stein? > > Colleen > > > > > sob., 3 sie 2019 o 20:56 Radosław Piliszek > > napisał(a): > > > Hello all, > > > > > > I have an issue using user_enabled_emulation with my LDAP solution. > > > > > > I set: > > > user_tree_dn = ou=Users,o=UCO > > > user_objectclass = inetOrgPerson > > > user_id_attribute = uid > > > user_name_attribute = uid > > > user_enabled_emulation = true > > > user_enabled_emulation_dn = cn=Users,ou=Groups,o=UCO > > > user_enabled_emulation_use_group_config = true > > > group_tree_dn = ou=Groups,o=UCO > > > group_objectclass = posixGroup > > > group_id_attribute = cn > > > group_name_attribute = cn > > > group_member_attribute = memberUid > > > group_members_are_ids = true > > > > > > Keystone properly lists members of the Users group but they all remain > disabled. > > > Did I misinterpret something? > > > > > > Kind regards, > > > Radek > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Fri Aug 9 06:47:21 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 9 Aug 2019 08:47:21 +0200 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: On 8/8/19 11:22 PM, Monty Taylor wrote: > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to say a few > more words about it than normal- largely because I think it's a pattern we might > want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. He's one > of the few people in OpenStack that has a handle on version discovery, having > helped write the keystoneauth support for it. And he's core on os-service-types, > which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his interaction with > SDK has been limited to the intersection of it needed for Nova integration. I > don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have impact on > the use of SDK in Nova - but when he does they are thorough reviews and they > indicate "this makes things better for Nova". So I'd like to start recognizing > such a vote. +2, makes a lot of sense. > > As our overall numbers diminish, I think we need to be more efficient with the > use of our human time - and along with that we need to find new ways to trust > each other to act on behalf of the project. I'd like to give a stab at doing > that here. > > Thoughts? 
> Monty > From skaplons at redhat.com Fri Aug 9 06:50:02 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 9 Aug 2019 08:50:02 +0200 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: <73581B23-6A15-45C7-A4A3-D688539D5388@redhat.com> +1 from me. I also feel that my role in the SDK team is similar but from Neutron’s point of view. And I think that it’s a good approach for OpenStack SDK. > On 8 Aug 2019, at 23:22, Monty Taylor wrote: > > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to say a few more words about it than normal- largely because I think it's a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. He's one of the few people in OpenStack that has a handle on version discovery, having helped write the keystoneauth support for it. And he's core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his interaction with SDK has been limited to the intersection of it needed for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have impact on the use of SDK in Nova - but when he does they are thorough reviews and they indicate "this makes things better for Nova". So I'd like to start recognizing such a vote. > > As our overall numbers diminish, I think we need to be more efficient with the use of our human time - and along with that we need to find new ways to trust each other to act on behalf of the project. I'd like to give a stab at doing that here. > > Thoughts?
> Monty > — Slawek Kaplonski Senior software engineer Red Hat From tim.j.culhane at gmail.com Fri Aug 9 08:18:04 2019 From: tim.j.culhane at gmail.com (tim.j.culhane at gmail.com) Date: Fri, 9 Aug 2019 09:18:04 +0100 Subject: inaccessibility of instances page in my openstack installation Message-ID: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> Hi, I'm a blind programmer and we use Openstack in our organisation to create and manage servers for development. In the past when I wanted to launch or delete an instance I'd go to the Instances page. Just above the table listing the current instances I have, would be a series of buttons which allowed you to launch, delete instances or carry out more actions. When using Firefox I can access these items via the keyboard by hitting enter on them. Up to very recently this was also the case in Chrome (which is my preferred browser). However, in the last week or so I've noticed that I can no longer use the keyboard to interact with these controls; you need to click them with the mouse. I'm not sure if this is an issue with Chrome or with Openstack. Have there been any recent changes to Openstack which might explain this? Many thanks, Tim Instance ID = Filter Launch Instance Delete Instances More Actions From jean-philippe at evrard.me Fri Aug 9 09:28:21 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 09 Aug 2019 11:28:21 +0200 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: <088df2047f999214a64207a4fd283c72f2ed8816.camel@evrard.me> On Thu, 2019-08-08 at 17:22 -0400, Monty Taylor wrote: > As our overall numbers diminish, I think we need to be more > efficient > with the use of our human time - and along with that we need to find > new > ways to trust each other to act on behalf of the project. I'd like > to > give a stab at doing that here. > I like this.
Regards, JP From doug at stackhpc.com Fri Aug 9 10:38:05 2019 From: doug at stackhpc.com (Doug Szumski) Date: Fri, 9 Aug 2019 11:38:05 +0100 Subject: [monasca] Enable 'Review-Priority' voting in Gerrit Message-ID: <6152dbcd-14d8-c359-7214-d09e032dfb62@stackhpc.com> A number of projects have added a 'Review-Priority' vote alongside the usual 'Code-Review' and 'Workflow' radio buttons [1]. The idea is to make it easier to direct reviewer attention towards high priority patches. As such, Gerrit dashboards can filter based on the 'Review-Priority' rating. Please vote on whether to enable this feature in Monasca here: https://review.opendev.org/#/c/675574 [1]: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001304.html From donny at fortnebula.com Fri Aug 9 11:55:29 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 9 Aug 2019 07:55:29 -0400 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> Message-ID: I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official Build) (64-bit) and just tested this functionality and it seems to be working as expected. I went to the instances page, tabbed to launch instance and hit enter. This brings up the launch instance dialog for me. What version of chrome are you using? On Fri, Aug 9, 2019 at 4:23 AM wrote: > Hi, > > I'm a blind programmer and we use Openstack in our organisation to create > and manage servers for development. > > In the past when I wanted to launch or delete an instance I'd go to the > Instances page. > > Just above the table listing the current instances I have, would be a > series of buttons which allowed you to launch, delete instances or carry > out more actions. > > > When using Firefox I can access these items via the keyboard by hitting > enter on them. 
> > Up to very recently this was also the case in Chrome (which is my preferred > browser). > > However, in the last week of so I've noticed that I can no longer use the > keyboard to interact with these controls, you need to click them with the > mouse. > > I'm not sure if this is an issue with Chrome or with Openstack. > Has there been any recent changes to Openstack which might explain this? > > Many thanks, > > Tim > > Instance ID = Filter Launch Instance Delete Instances More Actions > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Aug 9 11:57:41 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 9 Aug 2019 07:57:41 -0400 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> Message-ID: Hit enter too soon, Also are you trying to use the arrow buttons on your keyboard to navigate around? On Fri, Aug 9, 2019 at 7:55 AM Donny Davis wrote: > I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official > Build) (64-bit) and just tested this functionality and it seems to be > working as expected. > > I went to the instances page, tabbed to launch instance and hit enter. > This brings up the launch instance dialog for me. > > What version of chrome are you using? > > > > > On Fri, Aug 9, 2019 at 4:23 AM wrote: > >> Hi, >> >> I'm a blind programmer and we use Openstack in our organisation to >> create >> and manage servers for development. >> >> In the past when I wanted to launch or delete an instance I'd go to the >> Instances page. >> >> Just above the table listing the current instances I have, would be a >> series of buttons which allowed you to launch, delete instances or carry >> out more actions. >> >> >> When using Firefox I can access these items via the keyboard by hitting >> enter on them. 
>> >> Up to very recently this was also the case in Chrome (which is my >> preferred >> browser). >> >> However, in the last week of so I've noticed that I can no longer use the >> keyboard to interact with these controls, you need to click them with the >> mouse. >> >> I'm not sure if this is an issue with Chrome or with Openstack. >> Has there been any recent changes to Openstack which might explain this? >> >> Many thanks, >> >> Tim >> >> Instance ID = Filter Launch Instance Delete Instances More Actions >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.j.culhane at gmail.com Fri Aug 9 11:59:34 2019 From: tim.j.culhane at gmail.com (tim.j.culhane at gmail.com) Date: Fri, 9 Aug 2019 12:59:34 +0100 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> Message-ID: <002901d54ea9$e959be70$bc0d3b50$@gmail.com> Yes. I’m using the Jaws screen reader to access the site. Notice you’ll need to use a screen reader to access the site to reproduce the issue. Chrome is latest version (it automatically updates). Tim From: Donny Davis Sent: Friday 9 August 2019 12:58 To: tim.j.culhane at gmail.com Cc: OpenStack Discuss Subject: Re: inaccessibility of instances page in my openstack installation Hit enter too soon, Also are you trying to use the arrow buttons on your keyboard to navigate around? On Fri, Aug 9, 2019 at 7:55 AM Donny Davis > wrote: I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official Build) (64-bit) and just tested this functionality and it seems to be working as expected. I went to the instances page, tabbed to launch instance and hit enter. This brings up the launch instance dialog for me. What version of chrome are you using? On Fri, Aug 9, 2019 at 4:23 AM > wrote: Hi, I'm a blind programmer and we use Openstack in our organisation to create and manage servers for development. 
In the past when I wanted to launch or delete an instance I'd go to the Instances page. Just above the table listing the current instances I have, would be a series of buttons which allowed you to launch, delete instances or carry out more actions. When using Firefox I can access these items via the keyboard by hitting enter on them. Up to very recently this was also the case in Chrome (which is my preferred browser). However, in the last week of so I've noticed that I can no longer use the keyboard to interact with these controls, you need to click them with the mouse. I'm not sure if this is an issue with Chrome or with Openstack. Has there been any recent changes to Openstack which might explain this? Many thanks, Tim Instance ID = Filter Launch Instance Delete Instances More Actions -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Fri Aug 9 12:13:40 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 9 Aug 2019 08:13:40 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: On Thu, Aug 8, 2019 at 5:24 PM Monty Taylor wrote: > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. > He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And he's > core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. 
I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". So I'd > like to start recognizing such a vote. > > As our overall numbers diminish, I think we need to be more efficient > with the use of our human time - and along with that we need to find new > ways to trust each other to act on behalf of the project. I'd like to > give a stab at doing that here. > This is something we need to do more of. I know some teams are doing it well, but thank you for saying it publicly and explicitly! :) // jim > > Thoughts? > Monty > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Aug 9 12:19:13 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 9 Aug 2019 08:19:13 -0400 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: <002901d54ea9$e959be70$bc0d3b50$@gmail.com> References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> <002901d54ea9$e959be70$bc0d3b50$@gmail.com> Message-ID: I don't have windows to test jaws with, I am using linux which jaws does not seem to be packaged for. On Fri, Aug 9, 2019 at 7:59 AM wrote: > Yes. > > > > I’m using the Jaws screen reader to access the site. > > > > Notice you’ll need to use a screen reader to access the site to reproduce > the issue. > > > > Chrome is latest version (it automatically updates). > > > > Tim > > > > > > *From:* Donny Davis > *Sent:* Friday 9 August 2019 12:58 > *To:* tim.j.culhane at gmail.com > *Cc:* OpenStack Discuss > *Subject:* Re: inaccessibility of instances page in my openstack > installation > > > > Hit enter too soon, > > > > Also are you trying to use the arrow buttons on your keyboard to navigate > around? 
> > > > > > > > On Fri, Aug 9, 2019 at 7:55 AM Donny Davis wrote: > > I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official > Build) (64-bit) and just tested this functionality and it seems to be > working as expected. > > > > I went to the instances page, tabbed to launch instance and hit enter. > This brings up the launch instance dialog for me. > > > > What version of chrome are you using? > > > > > > > > > > On Fri, Aug 9, 2019 at 4:23 AM wrote: > > Hi, > > I'm a blind programmer and we use Openstack in our organisation to create > and manage servers for development. > > In the past when I wanted to launch or delete an instance I'd go to the > Instances page. > > Just above the table listing the current instances I have, would be a > series of buttons which allowed you to launch, delete instances or carry > out more actions. > > > When using Firefox I can access these items via the keyboard by hitting > enter on them. > > Up to very recently this was also the case in Chrome (which is my preferred > browser). > > However, in the last week of so I've noticed that I can no longer use the > keyboard to interact with these controls, you need to click them with the > mouse. > > I'm not sure if this is an issue with Chrome or with Openstack. > Has there been any recent changes to Openstack which might explain this? > > Many thanks, > > Tim > > Instance ID = Filter Launch Instance Delete Instances More Actions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.j.culhane at gmail.com Fri Aug 9 12:55:11 2019 From: tim.j.culhane at gmail.com (tim.j.culhane at gmail.com) Date: Fri, 9 Aug 2019 13:55:11 +0100 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> <002901d54ea9$e959be70$bc0d3b50$@gmail.com> Message-ID: <004001d54eb1$ae6bdca0$0b4395e0$@gmail.com> Yes, Jaws is only a Windows screen reader. 
Tim From: Donny Davis Sent: Friday 9 August 2019 13:19 To: tim.j.culhane at gmail.com Cc: OpenStack Discuss Subject: Re: inaccessibility of instances page in my openstack installation I don't have windows to test jaws with, I am using linux which jaws does not seem to be packaged for. On Fri, Aug 9, 2019 at 7:59 AM > wrote: Yes. I’m using the Jaws screen reader to access the site. Notice you’ll need to use a screen reader to access the site to reproduce the issue. Chrome is latest version (it automatically updates). Tim From: Donny Davis > Sent: Friday 9 August 2019 12:58 To: tim.j.culhane at gmail.com Cc: OpenStack Discuss > Subject: Re: inaccessibility of instances page in my openstack installation Hit enter too soon, Also are you trying to use the arrow buttons on your keyboard to navigate around? On Fri, Aug 9, 2019 at 7:55 AM Donny Davis > wrote: I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official Build) (64-bit) and just tested this functionality and it seems to be working as expected. I went to the instances page, tabbed to launch instance and hit enter. This brings up the launch instance dialog for me. What version of chrome are you using? On Fri, Aug 9, 2019 at 4:23 AM > wrote: Hi, I'm a blind programmer and we use Openstack in our organisation to create and manage servers for development. In the past when I wanted to launch or delete an instance I'd go to the Instances page. Just above the table listing the current instances I have, would be a series of buttons which allowed you to launch, delete instances or carry out more actions. When using Firefox I can access these items via the keyboard by hitting enter on them. Up to very recently this was also the case in Chrome (which is my preferred browser). However, in the last week of so I've noticed that I can no longer use the keyboard to interact with these controls, you need to click them with the mouse. I'm not sure if this is an issue with Chrome or with Openstack. 
Has there been any recent changes to Openstack which might explain this? Many thanks, Tim Instance ID = Filter Launch Instance Delete Instances More Actions -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Aug 9 13:03:40 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 9 Aug 2019 15:03:40 +0200 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint Message-ID: Hi folks! I'd like to propose adding Riccardo to our stable team. He's been consistently checking stable patches [1], and we're clearly understaffed when it comes to stable reviews. Thoughts? Dmitry [1] https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master From juliaashleykreger at gmail.com Fri Aug 9 13:21:08 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 9 Aug 2019 09:21:08 -0400 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: +2 :) On Fri, Aug 9, 2019 at 9:04 AM Dmitry Tantsur wrote: > > Hi folks! > > I'd like to propose adding Riccardo to our stable team. He's been consistently > checking stable patches [1], and we're clearly understaffed when it comes to > stable reviews. Thoughts? > > Dmitry > > [1] > https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master > From ssbarnea at redhat.com Fri Aug 9 14:04:36 2019 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Fri, 9 Aug 2019 15:04:36 +0100 Subject: [monasca] Enable 'Review-Priority' voting in Gerrit In-Reply-To: <6152dbcd-14d8-c359-7214-d09e032dfb62@stackhpc.com> References: <6152dbcd-14d8-c359-7214-d09e032dfb62@stackhpc.com> Message-ID: <1B4560BA-E93B-4FCD-A766-53FC448D767C@redhat.com> I like the idea and I seen it used in other projects. My question is: why not implementing it on all openstack projects? 
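(For context on what enabling this looks like: the label is defined in the project's Gerrit ACL file, project.config. A rough sketch of a typical definition — the exact function, values and descriptions are per-project choices, not necessarily what the Monasca review proposes:

```ini
[label "Review-Priority"]
    function = NoBlock
    defaultValue = 0
    copyAllScoresOnTrivialRebase = true
    value = -1 Branch Freeze
    value = 0 No Priority
    value = +1 Important Change
    value = +2 Priority Review
```

Dashboards can then surface rated patches with queries such as `label:Review-Priority=2`.)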
> On 9 Aug 2019, at 11:38, Doug Szumski wrote: > > A number of projects have added a 'Review-Priority' vote alongside the usual 'Code-Review' and 'Workflow' radio buttons [1]. The idea is to make it easier to direct reviewer attention towards high priority patches. As such, Gerrit dashboards can filter based on the 'Review-Priority' rating. > > Please vote on whether to enable this feature in Monasca here: > > https://review.opendev.org/#/c/675574 > > [1]: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001304.html > > From cdent+os at anticdent.org Fri Aug 9 14:09:28 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 9 Aug 2019 15:09:28 +0100 (BST) Subject: [placement] update 19-31 Message-ID: HTML: https://anticdent.org/placement-update-19-31.html Pupdate 19-31. No bromides today. # Most Important Same as last week: The main things on the Placement radar are implementing Consumer Types and cleanups, performance analysis, and documentation related to nested resource providers. We need to decide how much of a priority consumer types support is. I've taken the task of asking around with the various interested parties. # What's Changed * A more complex nested topology is now being used in the nested-perfload check job, and both that and the non-nest perfload run [apache benchmark](https://review.opendev.org/#/c/673540/) at the end. When you make changes you can have a look at the results of the `placement-perfload` and `placement-nested-perfload` gate jobs to see if there has been a performance impact. Keep in mind the numbers are only a guide. The performance characteristics of VMs from different CI providers varies _wildly_. * A stack of several performance related improvements has merged, with still more to come. I've written a separate [Placement Performance Analysis](https://anticdent.org/placement-performance-analysis.html) that summarizes some of the changes. Many of these may be useful for other services. 
Each iteration reveals another opportunity.

* In some environments placement will receive a URL of '' when '/' is
  expected. Auth handling for version control needs to
  [handle this](https://review.opendev.org/674543).

* osc-placement 1.6.0 is in the process of being
  [released](https://review.opendev.org/675311).

# Stories/Bugs

(Numbers in () are the change since the last pupdate.)

There are 22 (-1) stories in
[the placement group](https://storyboard.openstack.org/#!/project_group/placement).
0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580).
3 (0) are [bugs](https://storyboard.openstack.org/#!/worklist/574).
4 (-1) are [cleanups](https://storyboard.openstack.org/#!/worklist/575).
11 (0) are [rfes](https://storyboard.openstack.org/#!/worklist/594).
4 (0) are [docs](https://storyboard.openstack.org/#!/worklist/637).

If you're interested in helping out with placement, those stories are
good places to look.

* Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb)
  on launchpad: 17 (0).
* Placement related nova [in progress bugs](https://goo.gl/vzGGDQ)
  on launchpad: 5 (1).

# osc-placement

osc-placement is currently behind by 12 microversions.

* Add support for multiple member_of. There's been some useful
  discussion about how to achieve this, and a consensus has emerged on
  how to get the best results.
* Adds a new '--amend' option which can update resource provider
  inventory without requiring the user to pass a full replacement for
  inventory. This has been broken up into three patches to help with
  review.

# Main Themes

## Consumer Types

Adding a type to consumers will allow them to be grouped for various
purposes, including quota accounting.

* A WIP, as microversion 1.37, has started.

## Cleanup

Cleanup is an overarching theme related to improving documentation,
performance and the maintainability of the code.
The changes we are making this cycle are fairly complex to use and are
fairly complex to write, so it is good that we're going to have plenty
of time to clean and clarify all these things.

As said above, there's lots of performance work in progress. We'll need
to make a similar effort with regard to docs. One outcome of this work
will be something like a _Deployment Considerations_ document to help
people choose how to tweak their placement deployment to match their
needs. The simple answer is use more web servers and more database
servers, but that's often very wasteful.

# Other Placement

Miscellaneous changes can be found in
[the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open).

There are two
[os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open)
being discussed. And zero
[os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open).

# Other Service Users

New discoveries are added to the end. Merged stuff is removed. Anything
that has had no activity in 4 weeks has been removed.
* Nova: nova-manage: heal port allocations * Cyborg: Placement report * helm: add placement chart * libvirt: report pmem namespaces resources by provider tree * Nova: Remove PlacementAPIConnectFailure handling from AggregateAPI * Nova: WIP: Add a placement audit command * blazar: Fix placement operations in multi-region deployments * Nova: libvirt: Start reporting PCPU inventory to placement A part of Nova: support move ops with qos ports * Blazar: Create placement client for each request * nova: Support filtering of hosts by forbidden aggregates * blazar: Send global_request_id for tracing calls * Nova: Update HostState.\*\_allocation_ratio earlier * tempest: Add placement API methods for testing routed provider nets * openstack-helm: Build placement in OSH-images * Correct global_request_id sent to Placement * Nova: cross cell resize * Watcher: Remove resource used fields from ComputeNode * Nova: Scheduler translate properties to traits # End Somewhere in this performance work is a lesson for life: Every time I think we've reached the bottom of the "easy stuff", I find yet another bit of easy stuff. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From jim at jimrollenhagen.com Fri Aug 9 14:25:05 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 9 Aug 2019 10:25:05 -0400 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: On Fri, Aug 9, 2019 at 9:22 AM Julia Kreger wrote: > +2 :) > Same! > > On Fri, Aug 9, 2019 at 9:04 AM Dmitry Tantsur wrote: > > > > Hi folks! > > > > I'd like to propose adding Riccardo to our stable team. He's been > consistently > > checking stable patches [1], and we're clearly understaffed when it > comes to > > stable reviews. Thoughts? 
> > > > Dmitry > > > > [1] > > > https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elmiko at redhat.com Fri Aug 9 14:29:18 2019 From: elmiko at redhat.com (Michael McCune) Date: Fri, 9 Aug 2019 10:29:18 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: On Thu, Aug 8, 2019 at 5:27 PM Monty Taylor wrote: > > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. > He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And he's > core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". So I'd > like to start recognizing such a vote. > > As our overall numbers diminish, I think we need to be more efficient > with the use of our human time - and along with that we need to find new > ways to trust each other to act on behalf of the project. I'd like to > give a stab at doing that here. > > Thoughts? 
sounds entirely reasonable to me, ++ > Monty > From thomas at goirand.fr Fri Aug 9 09:34:20 2019 From: thomas at goirand.fr (Thomas Goirand) Date: Fri, 9 Aug 2019 11:34:20 +0200 Subject: [nova][entropy] what are your rate limits?? In-Reply-To: References: Message-ID: <5c0864c3-d00b-f944-d8e9-7e2e2b3df229@goirand.fr> On 8/9/19 5:31 AM, Jordan Ansell wrote: > * What sources of entropy did you use in the hypervisor? When we need a real, trust-able, source of entropy, we use a ChaosKey. https://altusmetrum.org/ChaosKey/ Otherwise, we just install haveged on the host. That's not ideal, but costs nothing, and better having entropy starvation. I hope this helps, Cheers, Thomas Goirand (zigo) From missile0407 at gmail.com Fri Aug 9 15:24:43 2019 From: missile0407 at gmail.com (Eddie Yen) Date: Fri, 9 Aug 2019 23:24:43 +0800 Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached. In-Reply-To: References: Message-ID: Perhaps, but not so fast. I still need more investigation. Mark Goddard 於 2019年8月8日 週四 下午10:36寫道: > > > On Thu, 8 Aug 2019 at 11:39, Eddie Yen wrote: > >> Hi Mark, thanks for suggestion. >> >> I think this, too. Cinder-api may normal but HAproxy could be very busy >> since one controller down. >> I'll try to increase the value about cinder-api timeout. >> > > Will you be proposing this fix upstream? > >> >> Mark Goddard 於 2019年8月7日 週三 上午12:06寫道: >> >>> >>> >>> On Tue, 6 Aug 2019 at 16:33, Matt Riedemann wrote: >>> >>>> On 8/6/2019 7:18 AM, Mark Goddard wrote: >>>> > We do use a larger timeout for glance-api >>>> > (haproxy_glance_api_client_timeout >>>> > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need >>>> > something similar for cinder-api. >>>> >>>> A 6 hour timeout for cinder API calls would be nuts IMO. The thing that >>>> was failing was a volume attachment delete/create from what I recall, >>>> which is the newer version (as of Ocata?) 
for the old >>>> initialize_connection/terminate_connection APIs. These are synchronous >>>> RPC calls from cinder-api to cinder-volume to do things on the storage >>>> backend and we have seen them take longer than 60 seconds in the gate >>>> CI >>>> runs with the lvm driver. I think the investigation normally turned up >>>> lvchange taking over 60 seconds on some concurrent operation locking >>>> out >>>> the RPC call which eventually results in the MessagingTimeout from >>>> oslo.messaging. That's unrelated to your gateway timeout from HAProxy >>>> but the point is yeah you likely want to bump up those timeouts since >>>> cinder-api has these synchronous calls to the cinder-volume service. I >>>> just don't think you need to go to 6 hours :). I think the >>>> keystoneauth1 >>>> default http response timeout is 10 minutes so maybe try that. >>>> >>>> >>> Yeah, wasn't advocating for 6 hours - just showing which knobs are >>> available :) >>> >>> >>>> -- >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Aug 9 16:33:02 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 9 Aug 2019 11:33:02 -0500 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> Message-ID: <20190809163302.GA29942@sm-workstation> > > > > Taking the opportunity to recruit: if you are interested in learning how > > we do release management at scale, help openstack as a whole and are not > > afraid of train-themed dadjokes, join us in #openstack-release ! > > > > After having quite some (years of) experience with release and stable > affairs for ironic, I think I could help here. > > Dmitry > We'd love to have you Dmitry! 
All of the tasks to do over the course of a release cycle should now be captured here: https://releases.openstack.org/reference/process.html We have a weekly meeting that is currently on Thursday: http://eavesdrop.openstack.org/#Release_Team_Meeting Doug recorded a nice introductory walk through of how to review release requests and some common things to look for. I can't find the link to that at the moment, but will see if we can track that down. Sean From shrews at redhat.com Fri Aug 9 17:08:47 2019 From: shrews at redhat.com (David Shrewsbury) Date: Fri, 9 Aug 2019 13:08:47 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: On Thu, Aug 8, 2019 at 5:26 PM Monty Taylor wrote: > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. > He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And he's > core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". So I'd > like to start recognizing such a vote. 
> > As our overall numbers diminish, I think we need to be more efficient > with the use of our human time - and along with that we need to find new > ways to trust each other to act on behalf of the project. I'd like to > give a stab at doing that here. > > Thoughts? > Monty > > I whole-heartedly embrace this experiment. Having worked with shade since the beginning of time, and now sdk, I personally know how tremendously difficult it is to be (or even pretend to be) knowledgeable in all of the OpenStack services and code interacting with them. Let's bring in experts with a narrow focus to help with the pieces they know well. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Aug 9 17:23:03 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 9 Aug 2019 13:23:03 -0400 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <20190809163302.GA29942@sm-workstation> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> <20190809163302.GA29942@sm-workstation> Message-ID: > On Aug 9, 2019, at 12:33 PM, Sean McGinnis wrote: > >>> >>> Taking the opportunity to recruit: if you are interested in learning how >>> we do release management at scale, help openstack as a whole and are not >>> afraid of train-themed dadjokes, join us in #openstack-release ! >>> >> >> After having quite some (years of) experience with release and stable >> affairs for ironic, I think I could help here. >> >> Dmitry >> > > We'd love to have you Dmitry! 
> > All of the tasks to do over the course of a release cycle should now be > captured here: > > https://releases.openstack.org/reference/process.html > > We have a weekly meeting that is currently on Thursday: > > http://eavesdrop.openstack.org/#Release_Team_Meeting > > Doug recorded a nice introductory walk through of how to review release > requests and some common things to look for. I can't find the link to that at > the moment, but will see if we can track that down. > > Sean > I think that IRC transcript became https://releases.openstack.org/reference/reviewer_guide.html Doug From mordred at inaugust.com Fri Aug 9 17:28:14 2019 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 9 Aug 2019 13:28:14 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: <76f53ed8-b6d3-5f85-80f7-18fa72920ac6@inaugust.com> On 8/9/19 1:08 PM, David Shrewsbury wrote: > > On Thu, Aug 8, 2019 at 5:26 PM Monty Taylor > wrote: > > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. > He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And > he's > core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". 
So I'd > like to start recognizing such a vote. > > As our overall numbers diminish, I think we need to be more efficient > with the use of our human time - and along with that we need to find > new > ways to trust each other to act on behalf of the project. I'd like to > give a stab at doing that here. > > Thoughts? > Monty > > > I whole-heartedly embrace this experiment. Having worked with shade since > the beginning of time, and now sdk, I personally know how tremendously > difficult > it is to be (or even pretend to be) knowledgeable in all of the > OpenStack services > and code interacting with them. Let's bring in experts with a narrow > focus to help > with the pieces they know well. Sweet. That seems like a good number of agreement and no dissent. efried - you have been en-core-ified. From johnsomor at gmail.com Fri Aug 9 17:39:35 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 9 Aug 2019 10:39:35 -0700 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> <20190809163302.GA29942@sm-workstation> Message-ID: Thank you Tony for all of your help and work on releases! The Octavia team appreciated it. Michael On Fri, Aug 9, 2019 at 10:24 AM Doug Hellmann wrote: > > > > > On Aug 9, 2019, at 12:33 PM, Sean McGinnis wrote: > > > >>> > >>> Taking the opportunity to recruit: if you are interested in learning how > >>> we do release management at scale, help openstack as a whole and are not > >>> afraid of train-themed dadjokes, join us in #openstack-release ! > >>> > >> > >> After having quite some (years of) experience with release and stable > >> affairs for ironic, I think I could help here. > >> > >> Dmitry > >> > > > > We'd love to have you Dmitry! 
> > > > All of the tasks to do over the course of a release cycle should now be > > captured here: > > > > https://releases.openstack.org/reference/process.html > > > > We have a weekly meeting that is currently on Thursday: > > > > http://eavesdrop.openstack.org/#Release_Team_Meeting > > > > Doug recorded a nice introductory walk through of how to review release > > requests and some common things to look for. I can't find the link to that at > > the moment, but will see if we can track that down. > > > > Sean > > > > I think that IRC transcript became https://releases.openstack.org/reference/reviewer_guide.html > > Doug > > From dirk at dmllr.de Fri Aug 9 18:14:47 2019 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Fri, 9 Aug 2019 20:14:47 +0200 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient Message-ID: Hi, For a while the requirements team is trying to go through the process of removing the upper cap on jsonschema to allow the update to jsonschema 3.x. The update for that is becoming more urgent as more and more other (non-OpenStack) projects are going with requiring jsonschema >= 3, so we need to move forward as well to keep co-installability and be able to consume updates of packages to versions that depend on jsonschema >= 3. The current blocker seems to be tripleo-common / os-collect-config depending on python-zaqarclient, which has a broken gate since the merge of: http://specs.openstack.org/openstack/zaqar-specs/specs/stein/remove-pool-group-totally.html on the server side, which was done here: https://review.opendev.org/#/c/628723/ The python-zaqarclient functional tests have not been correspondingly adjusted, and are failing for more than 5 months meanwhile, in consequence many patches for zaqarclient, including the one uncapping jsonschema are piling up. It looks like no real merge activity happened since https://review.opendev.org/#/c/607553/ which is a bit more than 6 months ago. 
How should we move forward? Doing a release of zaqarclient that uses an implementation of an API that got removed server side doesn't seem to be a terribly great idea, plus we still need to merge one of my patches (one that makes functional testing non-voting, or the brutal "let's drop all tests that fail" patch). On the other side, I don't know how feasible it is for Triple-O to drop the dependency on os-collect-config, or for os-collect-config to drop the dependency on zaqar. Any suggestion on how to move forward? TIA, Dirk

From mnaser at vexxhost.com Fri Aug 9 18:15:13 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 9 Aug 2019 14:15:13 -0400 Subject: [tc] meeting summary for aug. 8 2019 Message-ID: Hi everyone, The TC held its monthly meeting on the 8th of August 2019 and this email provides a summary of that meeting. Jeremy Stanley (fungi) was added as TC Liaison for the Image Encryption pop-up team, and their proposed resolution on proper retirement procedures was approved and merged. Swift is now working in Python 3 and inside DevStack, so this puts us in a really good place to continue with Python 3 efforts in Swift. Graham Hayes (mugsie) is currently working on the code for the proposal bot so that when we cut a branch, it automatically pushes up a patch to add the ‘python3 jobs’ for that series. Thierry Carrez (ttx) organized the milestone 2 forum meeting with TC members. We have Jim Rollenhagen (jroll) and maybe Graham Hayes (mugsie) volunteering for the programming committee. The proposal for making goal selection a two-step process has been needing reviews for a while, so I encourage other TC members as well as the rest of the community to have a look at it. We also talked about who’s attending the Shanghai Summit (here’s the Etherpad with the list of who’s going: https://etherpad.openstack.org/p/PVG-TC-PTG) and who will be attending the leadership meeting.
We think that everybody already on the TC going to the PTG will be there, but we’ll only be able to know the exact number at the end of the election, so towards the end of September. We’re also thinking about starting a “Large Scale” SIG so people could collaborate in tackling more of the scaling issues together. Thierry Carrez (ttx) and I will be looking into that by mentioning the idea to LINE and YahooJapan (as some prospective at-scale operators) to see what they think and also make a list of organizations that could be interested. Rico Lin (ricolin) will also update the SIG guidelines documents to make the whole process easier and Jim Rollenhagen (jroll) will try and bring this up at Verizon Media. Finally, we talked about an issue with the CI maintainers associated with Cinder drivers not keeping their systems up to date, specifically not migrating them to Python 3.7. Half of those drivers will be deprecated since they still run on Python 2.7, which won’t be supported by the next ‘U’ release. Jay Bryant tried contacting them all individually but most didn’t answer (a lot of contact info isn’t up to date). If you know someone who maintains a Cinder driver in-tree, please have them double-check on this. I hope that I covered most of what we discussed; for the full meeting logs, you can find them here: http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-08-08-14.00.log.html Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com

From openstack at nemebean.com Fri Aug 9 18:59:51 2019 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 9 Aug 2019 13:59:51 -0500 Subject: [tc] meeting summary for aug.
8 2019 In-Reply-To: References: Message-ID: <624207ba-b55e-768e-4caf-36995908ca2a@nemebean.com> On 8/9/19 1:15 PM, Mohammed Naser wrote: > Hi everyone, > > The TC held its monthly meeting on the 8th of August 2019 and this > email provides a summary of that meeting > > Jeremy Stanley (fungi) was added as TC Liaison for the Image > Encryption pop-up team and their proposed resolution on proper > retirement procedures was approved and merged. > > Swift is now working in Python 3 and inside DevStack so this puts us > in a really good place to continue with Python 3 efforts in Swift. \o/ It's been a long journey, nice work to everyone who made it happen! Also, this feels like something someone should successbot (I would do it myself, but it feels weird to, since I had nothing to do with it).

From corey.bryant at canonical.com Fri Aug 9 19:12:56 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 9 Aug 2019 15:12:56 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-5) Message-ID: This is the goal-5 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. There are 5 weeks remaining for completion of Train community goals [2]. == How can you help? == If your project has failing tests please take a look and help fix them. Python 3.7 unit tests will be self-testing in Zuul. If your project has patches with successful tests please help get them merged. Failing patches: https://review.openstack.org/#/q/topic:python3-train+status:open+(+label:Verified-1+OR+label:Verified-2+) Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-train+is:open Patch automation scripts needing review: https://review.opendev.org/#/c/666934 == Ongoing Work == Today I reached out to all PTLs who have projects with failing patches, to ask for their help with getting tests to pass. == Completed Work == All patches have been submitted to all applicable projects for this goal.
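For anyone curious what these patches actually change: switching a project to the Train template is a small .zuul.yaml edit, roughly like the sketch below (the project stanza is illustrative; the 'openstack-python3-train-jobs' line is the part the goal adds):

```yaml
# Illustrative .zuul.yaml fragment. The goal patches replace the older
# per-release unit test template with the Train one, which provides the
# py36 and py37 unit test jobs.
- project:
    templates:
      - openstack-python3-train-jobs
```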
Merged patches: https://review.openstack.org/#/q/topic:python3-train+is:merged == What's the Goal? == To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that those tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1]. == Reference Material == [1] Goal description: https://governance.openstack.org/tc/goals/train/python3-updates.html [2] Train release schedule: https://releases.openstack.org/train/schedule.html (see R-5 for "Train Community Goals Completed") Storyboard: https://storyboard.openstack.org/#!/story/2005924 Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7 Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/train.rst Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL:

From Arkady.Kanevsky at dell.com Fri Aug 9 19:45:45 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Fri, 9 Aug 2019 19:45:45 +0000 Subject: [Edge-computing] [ironic][ops] Taking ironic nodes out of production In-Reply-To: References: <08cb8294-04c8-e4ba-78c0-dec00f87156a@redhat.com> <6A205BFA-881E-4D2D-9A7D-E35935F6631B@est.tech> <09e4bfaa95404bcfba37ee63f6bf1189@AUSX13MPS304.AMER.DELL.COM> Message-ID: <0a38187f191c4a739fc7ed106a4188c9@AUSX13MPS308.AMER.DELL.COM> Julia, For #3, what I was trying to cover is the case where Ironic is used to manage servers for multiple different platform clusters. Like 2 different OpenStack clusters that share a single Ironic.
Or one OpenStack and one Kubernetes cluster with shared Ironic between them. This use case supports taking a node from one platform cluster, cleaning it up, and allocating it to another platform cluster. Thanks, Arkady -----Original Message----- From: Julia Kreger Sent: Tuesday, May 21, 2019 12:33 PM To: Kanevsky, Arkady Cc: Christopher Price; Bogdan Dobrelya; openstack-discuss; edge-computing at lists.openstack.org Subject: Re: [Edge-computing] [ironic][ops] Taking ironic nodes out of production [EXTERNAL EMAIL] On Tue, May 21, 2019 at 5:55 AM wrote: > > Let's dig deeper into requirements. > I see three distinct use cases: > 1. Put node into maintenance mode. Say to upgrade FW/BIOS or any other life-cycle event. It stays in the ironic cluster but it is no longer in use by the rest of openstack, like Nova. > 2. Put node into "fail" state. That is, remove from usage, remove from the Ironic cluster. What cleanup the operator would like/can do is subject to the failure. Depending on the node type it may need to be "replaced". Or troubleshooted by a human, and could be returned to a non-failure state. I think largely the only way we as developers could support that is to allow for hook scripts to be called upon entering/exiting such a state. That being said, at least from what Beth was saying at the PTG, this seems to be one of the most important states. > 3. Put node into "available" for other usage. What cleanup the operator wants to do will need to be defined. This is a very similar step to the one used for Baremetal as a Service as a node is reassigned back into the available pool. Depending on the next usage of a node it may stay in the Ironic cluster or may be removed from it. Once removed it can be "retired" or used for any other purpose. Do you mean "unprovision" a node and move it through cleaning? I'm not sure I understand what you're trying to get across. There is a case where a node would have been moved to a "failed" state, and could be "unprovisioned".
If we reach the point where we are able to unprovision, it seems like we might be able to re-deploy, so maybe the option is to automatically move to a state which is kind of like a bucket for broken nodes? > > Thanks, > Arkady > > -----Original Message----- > From: Christopher Price > Sent: Tuesday, May 21, 2019 3:26 AM > To: Bogdan Dobrelya; openstack-discuss at lists.openstack.org; > edge-computing at lists.openstack.org > Subject: Re: [Edge-computing] [ironic][ops] Taking ironic nodes out of > production > > > [EXTERNAL EMAIL] > > I would add that something as simple as an operator policy could/should be able to remove hardware from an operational domain. It does not specifically need to be a fault or retirement; it may be as simple as repurposing to a different operational domain. From an OpenStack perspective this should not require any special handling from "retirement", it's just to know that there may be time constraints implied in a policy change that could potentially be ignored in a "retirement scenario". > > Further, at least in my imagination, one might be reallocating > hardware from one Ironic domain to another which may have implications > on how we best bring a new node online. (or not, I'm no expert) <end dubious thought stream> > > / Chris > > On 2019-05-21, 09:16, "Bogdan Dobrelya" wrote: > > [CC'ed edge-computing at lists.openstack.org] > > On 20.05.2019 18:33, Arne Wiebalck wrote: > > Dear all, > > > > One of the discussions at the PTG in Denver raised the need for > > a mechanism to take ironic nodes out of production (a task for > > which the currently available 'maintenance' flag does not seem > > appropriate [1]). > > > > The use case there is an unhealthy physical node in state 'active', > > i.e. associated with an instance. The request is then to enable an > > admin to mark such a node as 'faulty' or 'in quarantine' with the > > aim of not returning the node to the pool of available nodes once > > the hosted instance is deleted.
> > > > A very similar use case which came up independently is node > > retirement: it should be possible to mark nodes ('active' or not) > > as being 'up for retirement' to prepare for the eventual removal from > > ironic. As in the example above, ('active') nodes marked this way > > should not become eligible for instance scheduling again, but > > automatic cleaning, for instance, should still be possible. > > > > In an effort to cover these use cases with a more general > > "quarantine/retirement" feature: > > > > - are there additional use cases which could profit from such a > > "take a node out of service" mechanism? > > There are security related examples described in the Edge Security > Challenges whitepaper [0] drafted by the k8s IoT SIG [1], like in > chapter 2, Trusting hardware, whereby "GPS coordinate changes can be used > to force a shutdown of an edge node". So a node may be taken out of > service as an indicator of a particular condition of the edge hardware. > > [0] > https://docs.google.com/document/d/1iSIk8ERcheehk0aRG92dfOvW5NjkdedN8F7mSUTr-r0/edit#heading=h.xf8mdv7zexgq > [1] > https://github.com/kubernetes/community/tree/master/wg-iot-edge > > > > > - would these use cases put additional constraints on what the > > feature should look like (e.g.: "should not prevent cleaning") > > > > - are there other characteristics such a feature should have > > (e.g.: "finding these nodes should be supported by the cli") > > > > Let me know if you have any thoughts on this. > > > > Cheers, > > Arne > > > > > > [1] https://etherpad.openstack.org/p/DEN-train-ironic-ptg, l.
360 > > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing > > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing _______________________________________________ Edge-computing mailing list Edge-computing at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From Tim.Bell at cern.ch Fri Aug 9 20:21:54 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 9 Aug 2019 20:21:54 +0000 Subject: [tc] meeting summary for aug. 8 2019 In-Reply-To: References: Message-ID: > On 9 Aug 2019, at 20:15, Mohammed Naser wrote: > > Hi everyone, > > The TC held it’s monthly meeting on the 8th of August 2019 and this > email provides a summary of that meeting > > ... > > We’re also thinking about starting a “Large Scale” SIG so people could > collaborate in tackling more of the scaling issues together. Thierry > Carrez (ttx) and I will be looking into that by mentioning the idea to > LINE and YahooJapan (as some perspective at-scale operators)to see > what they think and also make a list of organizations that could be > interested. Rico Lin (ricolin) will also update the SIG guidelines > documents to make the whole process easier and Jim Rollenhagen (jroll) > will try and bring this up at Verizon Media. > How about a forum brainstorm in Shanghai ? Tim > > > Thanks for tuning in! > > Regards, > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. 
http://vexxhost.com >

From alifshit at redhat.com Fri Aug 9 21:11:02 2019 From: alifshit at redhat.com (Artom Lifshitz) Date: Fri, 9 Aug 2019 17:11:02 -0400 Subject: [nova] NUMA live migration is ready for review and testing Message-ID: tl;dr If you care about NUMA live migration, check out [1] and test it in your env(s), or review it. Over the months that I've worked on NUMA LM, I've been pinged by various folks that were interested in helping out. At this point I've addressed all the issues that were found at the end of the Stein cycle, and the series is ready for review and testing, with the aim of getting it merged in Train (for real this time). So if you care about NUMA-aware live migration and have some spare time and hardware (if you're in the former category I don't think I need to explain what kind of hardware - though I'll try to answer questions as best I can), I would greatly appreciate it if you deployed the patches and tested them. I've done that myself, of course, but, as at the end of Stein, I'm sure there are edge cases that I didn't think of (though I'm selfishly hoping that there aren't). I believe the series is also ready for review, though I haven't put it in the runway queue just yet because the last functional test patch is still a WIP, as I need to fiddle with it to assert more things. Thanks in advance, cheers! [1] https://review.opendev.org/#/c/672595/8

From sundar.nadathur at intel.com Fri Aug 9 21:25:33 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Fri, 9 Aug 2019 21:25:33 +0000 Subject: [cyborg] [release] Process to discontinue os-acc Message-ID: <1CC272501B5BC543A05DB90AA509DED5275FE6AE@fmsmsx122.amr.corp.intel.com> Hi, A project called os-acc [1] was created in the Stein cycle based on an expectation that it would be used for Cyborg - Nova integration. It is not relevant anymore and we have no plans to support it in Train. It needs to be discontinued. What is the process for doing that?
A part of that is presumably to update or delete the os-acc release yaml [2]. What else is needed? [1] https://opendev.org/openstack/os-acc/ [2] https://opendev.org/openstack/releases/src/branch/master/deliverables/train/os-acc.yaml Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Aug 9 21:58:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Aug 2019 21:58:10 +0000 Subject: [cyborg] [release] Process to discontinue os-acc In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5275FE6AE@fmsmsx122.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED5275FE6AE@fmsmsx122.amr.corp.intel.com> Message-ID: <20190809215809.ufqvysqsax4uthah@yuggoth.org> On 2019-08-09 21:25:33 +0000 (+0000), Nadathur, Sundar wrote: > A project called os-acc [1] was created in Stein cycle based on an > expectation that it will be used for Cyborg - Nova integration. It > is not relevant anymore and we have no plans to support it in > Train. > > It needs to be discontinued. What is the process for doing that? A > part of that is presumably to update or delete the os-acc release > yaml [2]. What else is needed? [...] For instructions on retiring a repository, see: https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From colleen at gazlene.net Fri Aug 9 23:44:12 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 09 Aug 2019 16:44:12 -0700 Subject: [keystone] Keystone Team Update - Week of 5 August 2019 Message-ID: <6dee223e-2752-4628-b70f-d7d81d19d235@www.fastmail.com>

# Keystone Team Update - Week of 5 August 2019

## News

### CI instability update

To follow up from this topic from last week, we came up with a solution[1][2] that at least reduces the size of the unit test log files to an acceptable, non-browser-crashing size. Unfortunately that didn't seem to be the root cause of the frequent timeouts, so it's unclear (to me, at least) whether the issue stems from a problem in our unit tests or if we're just getting unlucky with noisy neighbors. It could be as simple as needing to raise the timeout to account for all the additional protection tests we've added in the past few months.

[1] https://review.opendev.org/673932
[2] https://review.opendev.org/673933

### Call for help

We need help completing the system-scope and default roles policy updates[3][4] before the end of the cycle, as operators cannot safely enable [oslo_policy]/enforce_scope until all of them are completed. For the most part, the task involves updating the scope_types option in the policy and adding a ton of unit tests. The already completed work[5][6] can serve as an example for what's needed.
[3] https://bugs.launchpad.net/keystone/+bugs?field.tag=default-roles
[4] https://bugs.launchpad.net/keystone/+bugs?field.tag=system-scope
[5] https://bugs.launchpad.net/keystone/+bugs?field.status%3Alist=FIXRELEASED&field.tag=default-roles
[6] https://bugs.launchpad.net/keystone/+bugs?field.status%3Alist=FIXRELEASED&field.tag=system-scope

### PTG attendance and Forum Planning

Based on our poll[7] it's looking like there are not enough keystone-minded people planning to attend the Shanghai PTG to warrant requesting a room, so I will likely tell Kendall that we don't need a room unless something changes very soon. Even if you won't be attending, please use that etherpad to add topics you would like to see discussed at the Forum. We can use those discussions as a jumping off point for our pre- and post-PTG virtual gatherings.

[7] https://etherpad.openstack.org/p/keystone-shanghai-ptg

## Office Hours

When there are topics to cover, the keystone team holds office hours on Tuesdays at 17:00 UTC. We will skip next week's office hours since we don't have a topic planned. Add topics you would like to see covered during office hours to the etherpad: https://etherpad.openstack.org/p/keystone-office-hours-topics

## Open Specs

Train specs: https://bit.ly/2uZ2tRl
Ongoing specs: https://bit.ly/2OyDLTh

## Recently Merged Changes

Search query: https://bit.ly/2pquOwT
We merged 9 changes this week.

## Changes that need Attention

Search query: https://bit.ly/2tymTje
There are 43 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots.
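(As an aside for anyone picking up the system-scope policy work mentioned above: the effect of [oslo_policy]/enforce_scope can be pictured with a toy sketch. This is not oslo.policy's real code — just the idea that a rule's scope_types restricts which token scopes may use the rule at all, before the check string is even evaluated.)

```python
# Toy sketch of scope enforcement; NOT oslo.policy's implementation.
# With enforce_scope on, a token whose scope is not listed in a rule's
# scope_types is rejected outright.
def scope_allowed(rule_scope_types, token_scope):
    if rule_scope_types is None:  # rule not yet updated: no restriction
        return True
    return token_scope in rule_scope_types

# A policy rule updated for the system-scope work might carry:
updated_rule = {"check_str": "role:reader", "scope_types": ["system"]}

print(scope_allowed(updated_rule["scope_types"], "system"))   # True
print(scope_allowed(updated_rule["scope_types"], "project"))  # False
```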
### Priority Reviews

* Train Roadmap Stories
- https://review.opendev.org/#/q/topic:bug/1818734 (system scope and default roles)
- https://review.opendev.org/#/q/topic:implement-default-roles+is:open (system scope and default roles)
- https://review.opendev.org/#/q/topic:bp/whitelist-extension-for-app-creds+is:open (application credential access rules)
- https://review.opendev.org/672120 (caching guide)
- https://review.opendev.org/#/q/project:openstack/oslo.limit+topic:rewrite+is:open (oslo.limit)
* Needs Discussion
- https://review.opendev.org/618144 (Reparent Projects)
- https://review.opendev.org/674940 (Make policy deprecation reasons less verbose)
- https://review.opendev.org/675303 (Allows LDAP extra attributes to be exposed to the end user)

## Bugs

This week we opened 4 new bugs and closed 4.

Bugs opened (4)
Bug #1839393 (keystone:Low) opened by Matthew Thode https://bugs.launchpad.net/keystone/+bug/1839393
Bug #1839133 (keystone:Undecided) opened by Radosław Piliszek https://bugs.launchpad.net/keystone/+bug/1839133
Bug #1839441 (keystone:Undecided) opened by Jose Castro Leon https://bugs.launchpad.net/keystone/+bug/1839441
Bug #1839577 (keystone:Undecided) opened by Adrian Turjak https://bugs.launchpad.net/keystone/+bug/1839577

Bugs fixed (4)
Bug #1773967 (keystone:High) fixed by Jose Castro Leon https://bugs.launchpad.net/keystone/+bug/1773967
Bug #1838592 (keystone:High) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1838592
Bug #1709344 (keystone:Low) fixed by Adrian Turjak https://bugs.launchpad.net/keystone/+bug/1709344
Bug #1837741 (oslo.policy:High) fixed by no one https://bugs.launchpad.net/oslo.policy/+bug/1837741

## Milestone Outlook

https://releases.openstack.org/train/schedule.html

Feature proposal freeze is NEXT WEEK (August 12-August 16). Spec implementations that are not submitted or still in a WIP state by the end of the week will need to be postponed until next cycle unless we agree on an exception.
Code implementing system scope and default roles policy work will be accepted until feature freeze week (September 9-September 13). If you are able, please help by picking up some of these tasks[7][8] or helping to review them (thanks Vishakha for jumping on the endpoint groups policies!). Final release of non-client libraries is the week of September 2, which allows us about three weeks to both implement and review library changes needed for this cycle.

[7] https://bugs.launchpad.net/keystone/+bugs?field.tag=default-roles
[8] https://bugs.launchpad.net/keystone/+bugs?field.tag=system-scope

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter

From gagehugo at gmail.com Fri Aug 9 23:56:13 2019 From: gagehugo at gmail.com (Gage Hugo) Date: Fri, 9 Aug 2019 18:56:13 -0500 Subject: [Security SIG] Weekly Newsletter Aug 08 2019 Message-ID: Of note, OSSA-2019-003 was released this week

#Week of: 08 Aug 2019

- Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting
- Weekly on Thursday at 1500 UTC in #openstack-meeting
- Agenda: https://etherpad.openstack.org/p/security-agenda
- https://security.openstack.org/
- https://wiki.openstack.org/wiki/Security-SIG

#Meeting Notes

- Summary: http://eavesdrop.openstack.org/meetings/security/2019/security.2019-08-08-15.00.html
- Announced OSSA-2019-003 release on Tuesday August 06th 2019
- Image Encryption Spec
- image encryption spec for nova unlikely to get a freeze exception
- Will likely polish it up, target an early 'U' release

# VMT Reports

- A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/
- OSSA-2019-003 was released this week: https://security.openstack.org/ossa/OSSA-2019-003.html

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From flux.adam at gmail.com Sat Aug 10 12:09:41 2019 From: flux.adam at gmail.com (Adam Harwell) Date: Sat, 10 Aug 2019 21:09:41 +0900 Subject: [nova-lxd] no one uses nova-lxd? In-Reply-To: References: Message-ID: Yeah we're definitely looking into Zun, which is probably a better approach going forward, but it's a very different implementation and we were pretty close with the old one. Didn't mean to make it sound like we'd stop working on containerization, it's just a setback. :D On Fri, Aug 9, 2019, 07:18 Sean Mooney wrote: > On Fri, 2019-08-09 at 07:05 +0900, Adam Harwell wrote: > > Octavia was looking at doing a proof of concept container based backend > > driver using nova-lxd, and some work had been slowly ongoing for the past > > couple of years. But, it looks like we also will have to completely > abandon > > that effort if the driver is dead. Shame. :( > > you could try the nova libvirt driver with virt_type=lxc or zun instead > > > > --Adam > > > > On Fri, Aug 9, 2019, 05:19 Donny Davis wrote: > > > > > https://docs.openstack.org/nova/stein/configuration/config.html > > > > > > Looks like you can still use LXC if that fits your use case. It also > looks > > > like there are images that are current here > > > https://us.images.linuxcontainers.org/ > > > > > > Not sure the state of the driver, but maybe give it a whirl and let us > > > know how it goes. > > > > > > > > > > > > > > > On Thu, Aug 8, 2019 at 3:28 PM Martinx - ジェームズ < > thiagocmartinsc at gmail.com> > > > wrote: > > > > > > > Hey Adrian, > > > > > > > > I was playing with Nova LXD with OpenStack Ansible, I have an > example > > > > here: > > > > > > > > https://github.com/tmartinx/openstack_deploy/tree/master/group_vars > > > > > > > > It's too bad that Nova LXD is gone... :-( > > > > > > > > I use LXD a lot in my Ubuntu servers (not openstack based), so, next > > > > step would be to deploy bare-metal OpenStack clouds with it but, I canceled
> > > > Thiago > > > > > > > > On Fri, 26 Jul 2019 at 15:37, Adrian Andreias > wrote: > > > > > > > > > Hey, > > > > > > > > > > We were planing to migrate some thousand containers from OpenVZ 6 > to > > > > > Nova-LXD this fall and I know at least one company with the same > plans. > > > > > > > > > > I read the message about current team retiring from the project. > > > > > > > > > > Unfortunately we don't have the manpower to invest heavily in the > > > > > project development. > > > > > We would however be able to allocate a few hours per month, at > least for > > > > > bug fixing. > > > > > > > > > > So I'm curios if there are organizations using or planning to use > > > > > Nova-LXD in production and they have the know-how and time to > contribute. > > > > > > > > > > It would be a pity if the the project dies. > > > > > > > > > > > > > > > Cheers! > > > > > - Adrian Andreias > > > > > https://fleio.com > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Sat Aug 10 17:45:06 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sun, 11 Aug 2019 01:45:06 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> Message-ID: After discussion in this ML and in irc [1], I will finalize the U release name candidate list [2] and will go forward to create a public poll at 2019-08-12. 
Here is the finalized name list from the etherpad:

- 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name)
- 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name)
- 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name)
- 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian)
- 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name)
- 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name)
- Ula "Miocene Baogeda Ula" (the name is in Mongolian)
- Uma http://www.fallingrain.com/world/CH/20/Uma.html

So thanks to all who helped propose names, provide solutions, or join discussions. And big thanks to Doug, who put a significant amount of effort into this.

[1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01
[2] https://etherpad.openstack.org/p/u-name-poll-email

On Wed, Aug 7, 2019 at 10:58 PM Rico Lin wrote: > > > On Wed, Aug 7, 2019 at 10:30 PM James E. Blair > wrote: > > > > Sorry if I wasn't clear, I had already added it to the wiki page more > > than a week ago -- you can still see my entry there at the bottom of the > > list of names that do meet the criteria. Here's the diff: > > > > > https://wiki.openstack.org/w/index.php?title=Release_Naming%2FU_Proposals&type=revision&diff=171231&oldid=171132 > > > > Also, I do think this meets the criteria, since there is a place in > > Shanghai with "University" in the name. This is similar to "Pike" which > > is short for the "Massachusetts Turnpike", which was deemed to meet the > > criteria for the P naming poll.
> > As we discussed in IRC:#openstack-tc, changing the reference from general > universities to a specific university will make it meet the criteria "The > name must refer to the physical or human geography" > I added it back to the 'meet criteria' list and updated it with a reference to the > specific university "University of Shanghai for Science and Technology". > Feel free to correct me if I misunderstood the criteria rule. :) > > Of course, as the coordinator it's up to you to determine whether it > meets the criteria, but I believe it does, and hope you agree. > > > Thanks, > > Jim > > -- > May The Force of OpenStack Be With You, > Rico Lin > irc: ricolin > -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL:

From mnaser at vexxhost.com Sat Aug 10 20:30:14 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 10 Aug 2019 16:30:14 -0400 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient In-Reply-To: References: Message-ID: On Fri, Aug 9, 2019 at 2:18 PM Dirk Müller wrote: > > Hi, > > For a while the requirements team is trying to go through the process > of removing the upper cap > on jsonschema to allow the update to jsonschema 3.x. The update > for that is becoming more urgent as more and more other > (non-OpenStack) projects are going > with requiring jsonschema >= 3, so we need to move forward as well to > keep co-installability > and be able to consume updates of packages to versions that depend on > jsonschema >= 3.
> > The current blocker seems to be tripleo-common / os-collect-config > depending on python-zaqarclient, > which has a broken gate since the merge of: > > http://specs.openstack.org/openstack/zaqar-specs/specs/stein/remove-pool-group-totally.html > > on the server side, which was done here: > > https://review.opendev.org/#/c/628723/ > > The python-zaqarclient functional tests have not been correspondingly > adjusted, and have been failing > for more than 5 months now; as a consequence, many patches for > zaqarclient, including > the one uncapping jsonschema, are piling up. It looks like no real > merge activity has happened since > > https://review.opendev.org/#/c/607553/ > > which is a bit more than 6 months ago. How should we move forward? > Doing a release of zaqarclient > using an implementation of an API that got removed server side > doesn't seem to be a terribly great > idea, plus we still need to merge one of my patches (either the one > that makes functional testing non-voting > or the brutal "let's drop all tests that fail" patch). On the other > side, I don't know how feasible it is for Triple-O > to drop the dependency on os-collect-config, or for os-collect-config to > drop the dependency on zaqar. > > Any suggestions on how to move forward? I'm going to reach out to the PTL, who seems to be active, as their last proposed change was 5 days ago. > TIA, > Dirk > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From miguel at mlavalle.com Sun Aug 11 16:51:02 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 11 Aug 2019 11:51:02 -0500 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Dear Neutrinos, It has been a week since this nomination was sent out to the community and it has only received positive feedback.
As a consequence, I have added Rodolfo to the Neutron core team. Congratulations and keep up the good work! Best regards Miguel On Sun, Aug 4, 2019 at 1:52 PM Miguel Lavalle wrote: > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the > Neutron core team. Rodolfo has been an active contributor to Neutron since > the Mitaka cycle. He has been a driving force over these years in the > implementation and evolution of Neutron's QoS feature, currently leading the > sub-team dedicated to it. Recently he has been working on improving the > interaction with Nova during the port binding process, has driven the adoption > of Pyroute2, and has become very active in fixing all kinds of bugs. The > quality and number of his code reviews during the Train cycle are > comparable with those of the leading members of the core team: > https://www.stackalytics.com/?release=train&module=neutron-group. In my > opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel From corvus at inaugust.com Sun Aug 11 17:30:32 2019 From: corvus at inaugust.com (James E. Blair) Date: Sun, 11 Aug 2019 10:30:32 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Rico Lin's message of "Sun, 11 Aug 2019 01:45:06 +0800") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> Message-ID: <87pnlbehjb.fsf@meyer.lemoncheese.net> Rico Lin writes: > After discussion in this ML and in irc [1], I will finalize the U release > name candidate list [2] and will go forward to create a public poll > at 2019-08-12.
> Here is finalized name list from etherpad: > > - 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is > shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet > transcription of the name) > - 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name) > - 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name) > - 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > - 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name > is in Mongolian; this is a common Latin-alphabet transcription of the name) > - 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name) > - Ula "Miocene Baogeda Ula" (the name is in Mongolian) > - Uma http://www.fallingrain.com/world/CH/20/Uma.html > > So thanks to all who help with propose names, provide solutions or join > discussions. And big thanks for Doug who put a significant amount of effort > on this. Hi, I object to the omission of University (which I thought, based on the previous email, had been determined to have met the criteria). If I had known there would be a followup conversation, I would have participated. I still do believe that it meets all of the criteria. In particular, it meets this: * The name must refer to the physical or human geography of the region encompassing the location of the OpenStack summit for the corresponding release. It is short for "University of Shanghai for Science and Technology", which is a place in Shanghai. Here is their website: http://en.usst.edu.cn/ Moreover, it met the criteria *before* it was enlarged to include all of China. The subtext of this name is that Shanghai is famous for its Universities, and it has a lot of them. Wikipedia lists 36.
The most famous of which is Fudan -- the first institution of higher education to be founded by a Chinese person. It is, in short, a name to honor the unique qualities of our host city. It deserves to be considered. -Jim From fungi at yuggoth.org Sun Aug 11 18:03:05 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 11 Aug 2019 18:03:05 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87pnlbehjb.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> Message-ID: <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: [...] > I still do believe that it meets all of the criteria. In particular, it > meets this: > > * The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack summit for the > corresponding release. > > It is short for "University of Shanghai for Science and Technology", > which is a place in Shanghai. Here is their website: > http://en.usst.edu.cn/ [...] This got discussed after last week's TC meeting during Thursday office hours, and I'm sorry I didn't think to give you a heads-up when the topic arose: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 One of the objections raised was that "University" in the name "University of Shanghai for Science and Technology" was a general class of place or feature and not a particular place or feature. But as you pointed out in IRC a while back (and which I should have remembered), there is precedent with the Pike cycle name: Pike (the Massachusetts Turnpike, also the Mass Pike...) 
https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names Another objection raised is that "OpenStack University" was the old name for what we now call the OpenStack Upstream Institute and that it could lead to name confusion if chosen. A search of the Web for that name last week turned up only two occurrences for me on the first page of results, both of which were lingering references in our wiki which I immediately corrected, so I don't think that argument holds. Then there was the suggestion that "University" might somehow be a trademark risk, though in my opinion that's why we have the OSF vet the preliminary winning results after the community ranks them (so that the TC doesn't need to concern itself with trademark issues). It was also pointed out that each time we have a poll with a mix of English and non-English names/words, an English name inevitably wins. Since this concern isn't backed up by the documented process[*] we're ostensibly following, I'm not really sure how to address it. Ultimately I was unable to convince my colleagues on the TC that "University" was a qualifying name, and so it was handled as a possible exception to the normal rules which, following a poll of most TC members, was decided would not be granted. [*] https://governance.openstack.org/tc/reference/release-naming.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Sun Aug 11 18:15:00 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 12 Aug 2019 02:15:00 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> Message-ID: Just make sure more information is awarded by all, here's some discussion on irc: openstack-tc during this mail is send. http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-11.log.html#t2019-08-11T16:37:03 On Mon, Aug 12, 2019 at 2:07 AM Jeremy Stanley wrote: > On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: > [...] > > I still do believe that it meets all of the criteria. In particular, it > > meets this: > > > > * The name must refer to the physical or human geography of the region > > encompassing the location of the OpenStack summit for the > > corresponding release. > > > > It is short for "University of Shanghai for Science and Technology", > > which is a place in Shanghai. Here is their website: > > http://en.usst.edu.cn/ > [...] > > This got discussed after last week's TC meeting during Thursday > office hours, and I'm sorry I didn't think to give you a heads-up > when the topic arose: > > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 > > One of the objections raised was that "University" in the name > "University of Shanghai for Science and Technology" was a general > class of place or feature and not a particular place or feature. 
But > as you pointed out in IRC a while back (and which I should have > remembered), there is precedent with the Pike cycle name: > > Pike (the Massachusetts Turnpike, also the Mass Pike...) > > > https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names > > Another objection raised is that "OpenStack University" was the old > name for what we now call the OpenStack Upstream Institute and that > it could lead to name confusion if chosen. A search of the Web for > that name last week turned up only two occurrences for me on the > first page of results, both of which were lingering references in > our wiki which I immediately corrected, so I don't think that > argument holds. > > Then there was the suggestion that "University" might somehow be a > trademark risk, though in my opinion that's why we have the OSF vet > the preliminary winning results after the community ranks them (so > that the TC doesn't need to concern itself with trademark issues). > > It was also pointed out that each time we have a poll with a mix of > English and non-English names/words, an English name inevitably > wins. Since this concern isn't backed up by the documented > process[*] we're ostensibly following, I'm not really sure how to > address it. > > Ultimately I was unable to convince my colleagues on the TC that > "University" was a qualifying name, and so it was handled as a > possible exception to the normal rules which, following a poll of > most TC members, was decided would not be granted. > > [*] https://governance.openstack.org/tc/reference/release-naming.html > -- > Jeremy Stanley > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yikunkero at gmail.com Mon Aug 12 03:03:01 2019 From: yikunkero at gmail.com (Yikun Jiang) Date: Mon, 12 Aug 2019 11:03:01 +0800 Subject: [cinder] [3rd party ci] Deadline Has Past for Python3 Migration In-Reply-To: <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> References: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> Message-ID: Hi, Jay Thanks for the reminder. I have cc'd this mail to futaotao, who is working on the python3 migration of the Huawei volume and FusionStorage drivers. @futaotao will make sure that the Huawei volume and Huawei FusionStorage drivers have good python3 support in the T release. Regards, Yikun ---------------------------------------- Jiang Yikun(Kero) Mail: yikunkero at gmail.com Jay Bryant wrote on Tue, Aug 6, 2019 at 11:45 AM: > All, > > This e-mail has multiple purposes. First, I have expanded the mail > audience to go beyond just openstack-discuss to a mailing list I have > created for all 3rd Party CI Maintainers associated with Cinder. I > apologize to those of you who are getting this as a duplicate e-mail. > > For all 3rd Party CI maintainers who have already migrated your systems > to using Python3.7...Thank you! We appreciate you keeping up-to-date > with Cinder's requirements and maintaining your CI systems. > > If this is the first time you are hearing of the Python3.7 requirement > please continue reading. > > It has been decided by the OpenStack TC that support for Py2.7 would be > deprecated [1]. The Train development cycle is the last cycle that will > support Py2.7 and therefore all vendor drivers need to demonstrate > support for Py3.7. > > It was discussed at the Train PTG that we would require all 3rd Party > CIs to be running using Python3 by the Train milestone 2: [2] We have > been communicating the importance of getting 3rd Party CI running with > py3 in meetings and e-mail for quite some time now, but it still appears > that nearly half of all vendors are not yet running with Python 3.
[3] > > If you are a vendor who has not yet moved to using Python 3 please take > some time to review this document [4] as it has guidance on how to get > your CI system updated. It also includes some additional details as to > why this requirement has been set and the associated background. Also, > please update the py3-ci-review etherpad with notes indicating that you > are working on adding py3 support. > > I would also ask all vendors to review the etherpad I have created as it > indicates a number of other drivers that have been marked unsupported > due to CI systems not running properly. If you are not planning to > continue to support a driver adding such a note in the etherpad would be > appreciated. > > Thanks! > > Jay > > > [1] > > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008255.html > > [2] > https://wiki.openstack.org/wiki/CinderTrainSummitandPTGSummary#3rd_Party_CI > > [3] https://etherpad.openstack.org/p/cinder-py3-ci-review > > [4] https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From madhuri.kumari at intel.com Mon Aug 12 09:41:59 2019 From: madhuri.kumari at intel.com (Kumari, Madhuri) Date: Mon, 12 Aug 2019 09:41:59 +0000 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: <0512CBBECA36994BAA14C7FEDE986CA614AA454D@BGSMSX102.gar.corp.intel.com> +1 if my vote counts ☺ Regards, Madhuri From: Jim Rollenhagen [mailto:jim at jimrollenhagen.com] Sent: Friday, August 9, 2019 7:55 PM To: Julia Kreger Cc: Dmitry Tantsur ; openstack-discuss Subject: Re: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint On Fri, Aug 9, 2019 at 9:22 AM Julia Kreger > wrote: +2 :) Same! On Fri, Aug 9, 2019 at 9:04 AM Dmitry Tantsur > wrote: > > Hi folks! > > I'd like to propose adding Riccardo to our stable team. 
He's been consistently > checking stable patches [1], and we're clearly understaffed when it comes to > stable reviews. Thoughts? > > Dmitry > > [1] > https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arxcruz at redhat.com Mon Aug 12 10:32:34 2019 From: arxcruz at redhat.com (Arx Cruz) Date: Mon, 12 Aug 2019 12:32:34 +0200 Subject: [tripleo][openstack-ansible] Integrating ansible-role-collect-logs in OSA In-Reply-To: References: <71d568ee47ce516a5a4cab1422290da2be1baff6.camel@evrard.me> Message-ID: Hello, I've started to split the logs collection tasks in small tasks [1] in order to allow other users to choose what exactly they want to collect. For example, if you don't need the openstack information, or if you don't care about networking, etc. Please take a look. I'll also add it on the OSA agenda for tomorrow's meeting. Kind regards, 1 - https://review.opendev.org/#/c/675858/ On Mon, Jul 22, 2019 at 8:44 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Sorry for the late answer... > > On Wed, 2019-07-10 at 12:12 -0600, Wesley Hayutin wrote: > > > > These are of course just passed in as extra-config. I think each > > project would want to define their own list of files and maintain it > > in their own project. WDYT? > > Looks good. We can either clean up the defaults, or OSA can just > override the defaults, and it would be good enough. I would say that > this can still be improved later, after OSA has started using the role > too. > > > It simple enough. But I am happy to see a different approach. > > Simple is good! > > > Any thoughts on additional work that I am not seeing? > > None :) > > > > > Thanks for responding! I know our team is very excited about the > > continued collaboration with other upstream projects, so thanks!! > > > > Likewise. Let's reduce tech debt/maintain more code together! 
> > Regards, > Jean-Philippe Evrard (evrardjp) > > > > > -- Arx Cruz Software Engineer Red Hat EMEA arxcruz at redhat.com @RedHat Red Hat Red Hat From novy at ondrej.org Mon Aug 12 10:56:15 2019 From: novy at ondrej.org (Ondrej Novy) Date: Mon, 12 Aug 2019 12:56:15 +0200 Subject: [swauth][swift] Retiring swauth Message-ID: Hi, because swauth is not compatible with current Swift and doesn't support Python 3, and because I don't have time to maintain it and my employer is not interested in swauth, I'm going to retire the swauth project. If nobody takes it over, I will start removing swauth from opendev on 08/24. Thanks. -- Best regards Ondřej Nový From juliaashleykreger at gmail.com Mon Aug 12 11:55:07 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 12 Aug 2019 07:55:07 -0400 Subject: [ironic] Resuming having weekly meetings Message-ID: All, I meant to send this specific email last week, but got distracted and $life. I believe we need to go back to having weekly meetings. The couple of times I floated this in the past two weeks, there didn't seem to be any objections, but I also did not perceive any real thoughts on the subject. While the concept and use of office hours has seemingly helped bring some more activity to our IRC channel, we don't have a check-point/sync-up mechanism without an explicit meeting. With that being said, I'm going to start the meeting today and, if we have quorum, try to proceed with it today. -Julia From ralf.teckelmann at bertelsmann.de Mon Aug 12 12:59:20 2019 From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP) Date: Mon, 12 Aug 2019 12:59:20 +0000 Subject: [masakari] pacemaker-remote Setup Overview Message-ID: Hello, Utilizing openstack-ansible we successfully installed all the masakari services. Besides masakari-hostmonitor, all are running fine.
For the hostmonitor a pacemaker cluster is missing. Can anyone give me an overview of how the pacemaker cluster setup would look? Which (pacemaker) services are running where (compute nodes, something on any other node, ...)? Best regards, Ralf Teckelmann From dtantsur at redhat.com Mon Aug 12 13:43:14 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 12 Aug 2019 15:43:14 +0200 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> <20190809163302.GA29942@sm-workstation> Message-ID: <3cbf7440-9ef8-f491-bff8-00e315827c01@redhat.com> On 8/9/19 7:23 PM, Doug Hellmann wrote: > > >> On Aug 9, 2019, at 12:33 PM, Sean McGinnis wrote: >> >>>> >>>> Taking the opportunity to recruit: if you are interested in learning how >>>> we do release management at scale, help openstack as a whole and are not >>>> afraid of train-themed dadjokes, join us in #openstack-release ! >>>> >>> >>> After having quite some (years of) experience with release and stable >>> affairs for ironic, I think I could help here. >>> >>> Dmitry >>> >> >> We'd love to have you Dmitry! >> >> All of the tasks to do over the course of a release cycle should now be >> captured here: >> >> https://releases.openstack.org/reference/process.html >> >> We have a weekly meeting that is currently on Thursday: >> >> http://eavesdrop.openstack.org/#Release_Team_Meeting 9pm my time is a bit difficult :( >> >> Doug recorded a nice introductory walk through of how to review release >> requests and some common things to look for. I can't find the link to that at >> the moment, but will see if we can track that down. >> >> Sean >> > > I think that IRC transcript became https://releases.openstack.org/reference/reviewer_guide.html Cool, thanks!
I'll have a PTO this week, I'll start jumping on reviews next week. Dmitry > > Doug > From corvus at inaugust.com Mon Aug 12 14:08:49 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 12 Aug 2019 07:08:49 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> (Jeremy Stanley's message of "Sun, 11 Aug 2019 18:03:05 +0000") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> Message-ID: <87mugecw7i.fsf@meyer.lemoncheese.net> Jeremy Stanley writes: > On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: > [...] >> I still do believe that it meets all of the criteria. In particular, it >> meets this: >> >> * The name must refer to the physical or human geography of the region >> encompassing the location of the OpenStack summit for the >> corresponding release. >> >> It is short for "University of Shanghai for Science and Technology", >> which is a place in Shanghai. Here is their website: >> http://en.usst.edu.cn/ > [...] > > This got discussed after last week's TC meeting during Thursday > office hours, and I'm sorry I didn't think to give you a heads-up > when the topic arose: > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 > > One of the objections raised was that "University" in the name > "University of Shanghai for Science and Technology" was a general > class of place or feature and not a particular place or feature. But > as you pointed out in IRC a while back (and which I should have > remembered), there is precedent with the Pike cycle name: > > Pike (the Massachusetts Turnpike, also the Mass Pike...) 
> > https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names > > Another objection raised is that "OpenStack University" was the old > name for what we now call the OpenStack Upstream Institute and that > it could lead to name confusion if chosen. A search of the Web for > that name last week turned up only two occurrences for me on the > first page of results, both of which were lingering references in > our wiki which I immediately corrected, so I don't think that > argument holds. > > Then there was the suggestion that "University" might somehow be a > trademark risk, though in my opinion that's why we have the OSF vet > the preliminary winning results after the community ranks them (so > that the TC doesn't need to concern itself with trademark issues). > > It was also pointed out that each time we have a poll with a mix of > English and non-English names/words, an English name inevitably > wins. Since this concern isn't backed up by the documented > process[*] we're ostensibly following, I'm not really sure how to > address it. > > Ultimately I was unable to convince my colleagues on the TC that > "University" was a qualifying name, and so it was handled as a > possible exception to the normal rules which, following a poll of > most TC members, was decided would not be granted. > > [*] https://governance.openstack.org/tc/reference/release-naming.html Thanks for the clarification. The only point raised which should have any bearing on the process at this time is the first one, and I think that has been addressed. The process is designed to collect the widest range of names, and let the *community* decide. It is not the function of the TC to vet the names for suitability before the poll. The community itself is to do that, in the poll. And because vetting for trademark is a specialized and costly task, that happens *after* the poll, so that we don't waste time and money on it.
It was exactly the kind of seemingly arbitrary process of producing the names for the poll which is on display here that prompted us to write down this more open process in the first place. It's unfortunate that the last three objections that you cite are clearly in contradiction to that. We pride ourselves on fairness and openness, but we seem to have lost the enthusiasm for that here. I would rather we not do this at all than to do it poorly, so I have proposed we simply stop naming releases. It's more trouble than it's worth. Here's my proposed TC resolution for that: https://review.opendev.org/675788 -Jim From opensrloo at gmail.com Mon Aug 12 14:59:52 2019 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 12 Aug 2019 10:59:52 -0400 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: +2. Good idea! :) --ruby On Fri, Aug 9, 2019 at 9:06 AM Dmitry Tantsur wrote: > Hi folks! > > I'd like to propose adding Riccardo to our stable team. He's been > consistently > checking stable patches [1], and we're clearly understaffed when it comes > to > stable reviews. Thoughts? > > Dmitry > > [1] > > https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Mon Aug 12 16:50:00 2019 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 12 Aug 2019 17:50:00 +0100 Subject: [glance] glance-cache-management hardcodes URL with port In-Reply-To: <4b4d09d3-21ae-b2bf-e888-faff196d3aec@gmail.com> References: <4b4d09d3-21ae-b2bf-e888-faff196d3aec@gmail.com> Message-ID: On Thu, Aug 8, 2019 at 12:49 PM Bernd Bausch wrote: > Stein re-introduces Glance cache management, but I have not been able to > use the glance-cache-manage command. I always get errno 111, connection > refused. > > It turns out that the command tries to access http://localhost:9292. 
It > has options for non-default IP address and port, but unfortunately on my > (devstack) cloud, the Glance endpoint is http://192.168.1.200/image. No > port. > > Is there a way to tell glance-cache-manage to use this endpoint? > > Bernd. > > Hi Bernd, You can always give it the port 80, the real problem likely is the prefix /image you have there. Are you running glance-api as wsgi app under some http-server or is that reverse-proxy/loadbalancer you're directing the glance-cache-manage towards? Remember that the management is not currently cluster wide so you should be always targeting single service process at the time. - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Mon Aug 12 17:02:32 2019 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 12 Aug 2019 18:02:32 +0100 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87mugecw7i.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> Message-ID: On Mon, Aug 12, 2019 at 3:12 PM James E. Blair wrote: > Jeremy Stanley writes: > > > On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: > > [...] > >> I still do believe that it meets all of the criteria. In particular, it > >> meets this: > >> > >> * The name must refer to the physical or human geography of the region > >> encompassing the location of the OpenStack summit for the > >> corresponding release. > >> > >> It is short for "University of Shanghai for Science and Technology", > >> which is a place in Shanghai. Here is their website: > >> http://en.usst.edu.cn/ > > [...] 
> > > > This got discussed after last week's TC meeting during Thursday > > office hours, and I'm sorry I didn't think to give you a heads-up > > when the topic arose: > > > > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 > > > > One of the objections raised was that "University" in the name > > "University of Shanghai for Science and Technology" was a general > > class of place or feature and not a particular place or feature. But > > as you pointed out in IRC a while back (and which I should have > > remembered), there is precedent with the Pike cycle name: > > > > Pike (the Massachusetts Turnpike, also the Mass Pike...) > > > > > https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names > > > > Another objection raised is that "OpenStack University" was the old > > name for what we now call the OpenStack Upstream Institute and that > > it could lead to name confusion if chosen. A search of the Web for > > that name last week turned up only two occurrences for me on the > > first page of results, both of which were lingering references in > > our wiki which I immediately corrected, so I don't think that > > argument holds. > > > > Then there was the suggestion that "University" might somehow be a > > trademark risk, though in my opinion that's why we have the OSF vet > > the preliminary winning results after the community ranks them (so > > that the TC doesn't need to concern itself with trademark issues). > > > > It was also pointed out that each time we have a poll with a mix of > > English and non-English names/words, an English name inevitably > > wins. Since this concern isn't backed up by the documented > > process[*] we're ostensibly following, I'm not really sure how to > > address it. 
> > > > Ultimately I was unable to convince my colleagues on the TC that > > "University" was a qualifying name, and so it was handled as a > > possible exception to the normal rules which, following a poll of > > most TC members, was decided would not be granted. > > > > [*] https://governance.openstack.org/tc/reference/release-naming.html > > Thanks for the clarification. The only point raised which should have > any bearing on the process at this time is is the first one, and I think > that has been addressed. > > The process is designed to collect the widest range of names, and let > the *community* decide. It is not the function of the TC to vet the > names for suitability before the poll. The community itself is to do > that, in the poll. And because vetting for trademark is a specialized > and costly task, that happens *after* the poll, so that we don't waste > time and money on it. > > It was exactly the kind of seemingly arbitrary process of producing the > names for the poll which is on display here that prompted us to write > down this more open process in the first place. It's unfortunate that > the last three objections that you cite are clearly in contradiction to > that. > > We pride ourselves on fairness and openness, but we seem to have lost > the enthusiasm for that here. I would rather we not do this at all than > to do it poorly, so I have proposed we simply stop naming releases. > It's more trouble than it's worth. > > Here's my proposed TC resolution for that: > > https://review.opendev.org/675788 > > -Jim > > I'm with Jim on this, specially would like to highlight couple of points from the governance: """ #. The marketing community may identify any names of particular concern from a marketing standpoint and discuss such issues publicly on the Marketing mailing list. The marketing community may produce a list of problematic items (with citations to the mailing list discussion of the rationale) to the election official. 
This information will be communicated during the election, but the names will not be removed from the poll. #. After the close of nominations, the election official will finalize the list of proposed names and publicize it. In general, the official should strive to make objective determinations as to whether a name meets the `Release Name Criteria`_, but if subjective evaluation is required, should be generous in interpreting the rules. It is not necessary to reduce the list of proposed names to a small number. #. Once the list is finalized and publicized, a one-week period shall elapse before the start of the election so that any names removed from consideration because they did not meet the `Release Name Criteria`_ may be discussed. Names erroneously removed may be re-added during this period, and the Technical Committee may vote to add exceptional names (which do not meet the standard criteria). """ The marketing community concerns will be communicated, "but the names will not be removed from the poll." Officials should be objective about whether the name meets the criteria, "but if subjective evaluation is required, should be generous in interpreting the rules. It is not necessary to reduce the list of proposed names to a small number." "Technical Committee may vote to add exceptional names", not to remove qualifying names for personal preference. I think if we take the route taken here, we had better just stop naming things. - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aschultz at redhat.com Mon Aug 12 17:29:16 2019 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 12 Aug 2019 11:29:16 -0600 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient In-Reply-To: References: Message-ID: On Fri, Aug 9, 2019 at 12:20 PM Dirk Müller wrote: > Hi, > > For a while now the requirements team has been trying to remove the upper cap > on jsonschema to allow the update to jsonschema 3.x. The update > is becoming more urgent as more and more other > (non-OpenStack) projects are going > with requiring jsonschema >= 3, so we need to move forward as well to > keep co-installability > and be able to consume updates of packages to versions that depend on > jsonschema >= 3. > > The current blocker seems to be tripleo-common / os-collect-config > depending on python-zaqarclient, > which has had a broken gate since the merge of: > > > http://specs.openstack.org/openstack/zaqar-specs/specs/stein/remove-pool-group-totally.html > > on the server side, which was done here: > > https://review.opendev.org/#/c/628723/ > > The python-zaqarclient functional tests have not been correspondingly > adjusted, and have been failing > for more than five months; as a consequence, many patches for > zaqarclient, including > the one uncapping jsonschema, are piling up. It looks like no real > merge activity has happened since > > https://review.opendev.org/#/c/607553/ > > which is a bit more than 6 months ago. How should we move forward? > Doing a release of zaqarclient > using an implementation of an API that got removed server side > doesn't seem to be a terribly great > idea, plus we still need to merge one of my patches (either the one > that makes functional testing non-voting > or the brutal "let's drop all tests that fail" patch). On the other > hand, I don't know how feasible it is for TripleO > to drop the dependency on os-collect-config or for os-collect-config to > drop the dependency on zaqar. 
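The co-installability problem Dirk describes can be sketched with a toy version-specifier check (a hypothetical illustration, not the requirements team's actual tooling; the version numbers are examples only):

```python
# Toy model of pip-style version specifiers: an OpenStack-style upper cap
# ("jsonschema>=2.6.0,<3.0.0") vs. an external project requiring ">=3.0.0".
def parse(version):
    return tuple(int(part) for part in version.split("."))

def satisfies(version, spec):
    """spec is a list of (operator, bound) pairs, e.g. [(">=", "2.6.0"), ("<", "3.0.0")]."""
    ops = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b}
    return all(ops[op](parse(version), parse(bound)) for op, bound in spec)

capped = [(">=", "2.6.0"), ("<", "3.0.0")]   # with the upper cap in place
uncapped = [(">=", "2.6.0")]                 # cap removed
external = [(">=", "3.0.0")]                 # non-OpenStack project's requirement

candidates = ["2.6.0", "2.6.1", "3.0.1", "3.0.2"]
# With the cap, no single jsonschema version satisfies both requirement sets:
print([v for v in candidates if satisfies(v, capped) and satisfies(v, external)])    # -> []
# Removing the cap restores co-installability:
print([v for v in candidates if satisfies(v, uncapped) and satisfies(v, external)])  # -> ['3.0.1', '3.0.2']
```

Real resolution is of course done by pip against full PEP 440 specifiers; the point is only that an upper cap and a lower bound of >= 3 have an empty intersection, so every consumer must uncap before any of them can co-install with jsonschema 3.x.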
> Do you have an example of what the issue with tripleo/os-collect-config is? It looks like os-collect-config has support for using zaqarclient as a notification mechanism for work but I don't think it's currently used. That being said, can we just fix whatever the issue is? I don't see os-collect-config using pool_group anywhere > > Any suggestion on how to move forward? > > TIA, > Dirk > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From debayan.ray at gmail.com Mon Aug 12 13:55:18 2019 From: debayan.ray at gmail.com (Debayan Ray) Date: Mon, 12 Aug 2019 19:25:18 +0530 Subject: [ironic] [sushy] Stepping down from Sushy core Message-ID: Hey all, it's not easy to send this email. With a heavy heart, I announce that I'll be stepping down from the Sushy core team effective today. Almost six months back, I left HPE and joined Oracle. Although earlier I thought I could set aside some time to remain an effective Sushy core reviewer, I gradually found that "spare" time very elusive and near impossible to find. Now, after all these years of working together, I can't completely disassociate myself. So you can expect me to review stuff from time to time, and I will continue to follow Ironic and its projects and other interesting ones in OpenStack. Thanks, everyone, for everything. It has been great, to say the least, to work with you all these years. Cheers! Debayan (deray) From juliaashleykreger at gmail.com Mon Aug 12 19:12:57 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 12 Aug 2019 15:12:57 -0400 Subject: [ironic] [sushy] Stepping down from Sushy core In-Reply-To: References: Message-ID: Debayan, Thank you for your heartfelt email. It is always difficult when life gets in the way. Accordingly, I have removed you from the sushy-core group in gerrit. 
Thank you for your service as a core reviewer on sushy, and we'll see you around. :) -Julia On Mon, Aug 12, 2019 at 2:25 PM Debayan Ray wrote: > > Hey all, it's not easy to send this email. With a heavy heart, I announce that I'll be stepping down from the Sushy core team effective today. > > Almost six months back, I left HPE and joined Oracle. Although earlier I thought of keeping out some time to dedicate to being an effective Sushy core reviewer, gradually I found those "spare" time very much elusive and near impossible to find. > > Now after all these years of working together, I really can't completely disassociate myself altogether. So you can expect me reviewing stuff from time to time and I will continue to follow Ironic and its projects and other interesting ones in OpenStack. > > Thanks, everyone for everything. It has been great, to say the least, to work with you all these years. Cheers! > > Debayan (deray) From peter.matulis at canonical.com Mon Aug 12 20:34:01 2019 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 12 Aug 2019 16:34:01 -0400 Subject: [charms] OpenStack Charms 19.07 release is now available Message-ID: The OpenStack Charms team is delighted to announce the 19.07 charms release. This release brings several new features and improvements, including some for existing releases (Queens, Rocky, Stein) and many stable combinations of Ubuntu and OpenStack. Please see the Release Notes for full details: https://docs.openstack.org/charm-guide/latest/1907.html == Highlights == * Percona Cluster Cold Start The percona-cluster charm now contains logic and actions to assist with operational tasks surrounding a database shutdown scenario. * DVR SNAT The neutron-openvswitch charm now supports deployment of DVR based routers with combined SNAT functionality, removing the need to use the neutron-gateway charm in some types of deployment. 
* Octavia Image Lifecycle Management A new octavia-diskimage-retrofit charm provides a tool for retrofitting cloud images for use as Octavia Amphora. * Nova Live Migration: Streamline SSH Host Key Handling The nova-cloud-controller charm has improved the host key discovery and distribution algorithm. This will make the addition of a nova-compute unit faster and the nova-cloud-controller upgrade-charm hook will be significantly improved for large deployments. == OpenStack Charms team == The OpenStack Charms development team can be contacted on the #openstack-charms IRC channel on Freenode. The team will also be represented at the Open Infrastructure Summit and PTG events in Shanghai (November, 2019). == Thank you == Huge thanks to the below 46 charm contributors who worked together to squash 48 bugs, to enable an entirely new charmed version of OpenStack, and to move the line forward on several key features! Frode Nordahl Chris MacNaughton David Ames Liam Young Alex Kavanagh James Page Corey Bryant Ryan Beisner Tytus Kurek Edward Hope-Morley Sahid Orentino Ferdjaoui Dmitrii Shcherbakov Rodrigo Barbieri Peter Matulis Ghanshyam Mann Jorge Niedbalski Nicolas Pochet Andrea Ieri Andreas Jaeger Zachary Zehring Trent Lloyd Tiago Pasqualini David Coronel Hua Zhang Ian Wienand Dan Ackerson Ramon Grullon George Kraft Alvaro Uria Michael Skalka Nikolay Vinogradov melissaml Nobuto Murata Andrew McLeod Frank Kloeker Tim Burke Cory Johns Marian Gasparovic sunnyve Felipe Reyes Pete Vander Giessen Ryan Farrell Mark Maglana Levente Tamas Alexander Litvinov Marcelo Subtil Marcal -- OpenStack Charms Team From ed at leafe.com Mon Aug 12 21:44:33 2019 From: ed at leafe.com (Ed Leafe) Date: Mon, 12 Aug 2019 16:44:33 -0500 Subject: [uc] Less than 4 days left to nominate for the UC! Message-ID: A week has gone by since nominations opened, and we have yet to receive a single nomination! 
Now I’m sure everyone’s waiting until the last minute in order to create a dramatic moment, but don’t put it off for *too* long! If you missed the initial announcement [0], here’s the information you need: Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common; no third-party nomination is required. They do so by sending an email to the user-committee at lists.openstack.org mailing list, with the subject “UC candidacy”, by August 16, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. -- Ed Leafe [0] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html From zbitter at redhat.com Mon Aug 12 21:57:42 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 12 Aug 2019 17:57:42 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87mugecw7i.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> Message-ID: <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> On 12/08/19 10:08 AM, James E. Blair wrote: > Jeremy Stanley writes: > >> On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: >> [...] >>> I still do believe that it meets all of the criteria. In particular, it >>> meets this: >>> >>> * The name must refer to the physical or human geography of the region >>> encompassing the location of the OpenStack summit for the >>> corresponding release. >>> >>> It is short for "University of Shanghai for Science and Technology", >>> which is a place in Shanghai. 
Here is their website: >>> http://en.usst.edu.cn/ >> [...] >> >> This got discussed after last week's TC meeting during Thursday >> office hours, and I'm sorry I didn't think to give you a heads-up >> when the topic arose: >> >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 >> >> One of the objections raised was that "University" in the name >> "University of Shanghai for Science and Technology" was a general >> class of place or feature and not a particular place or feature. But >> as you pointed out in IRC a while back (and which I should have >> remembered), there is precedent with the Pike cycle name: >> >> Pike (the Massachusetts Turnpike, also the Mass Pike...) >> >> https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names >> >> Another objection raised is that "OpenStack University" was the old >> name for what we now call the OpenStack Upstream Institute and that >> it could lead to name confusion if chosen. A search of the Web for >> that name last week turned up only two occurrences for me on the >> first page of results, both of which were lingering references in >> our wiki which I immediately corrected, so I don't think that >> argument holds. >> >> Then there was the suggestion that "University" might somehow be a >> trademark risk, though in my opinion that's why we have the OSF vet >> the preliminary winning results after the community ranks them (so >> that the TC doesn't need to concern itself with trademark issues). >> >> It was also pointed out that each time we have a poll with a mix of >> English and non-English names/words, an English name inevitably >> wins. Since this concern isn't backed up by the documented >> process[*] we're ostensibly following, I'm not really sure how to >> address it. 
>> >> Ultimately I was unable to convince my colleagues on the TC that >> "University" was a qualifying name, and so it was handled as a >> possible exception to the normal rules which, following a poll of >> most TC members, was decided would not be granted. >> >> [*] https://governance.openstack.org/tc/reference/release-naming.html > > Thanks for the clarification. The only point raised which should have > any bearing on the process at this time is the first one, and I think > that has been addressed. To be clear, the thing that stopped us from automatically including it was that there was no consensus that it met the criteria, which exclude words that describe a general class of Geographic feature. I regret that you didn't get an opportunity to discuss this; I initially raised it in response to you and I both being pinged[1], but we probably should have tried to ping you again when discussions resumed during office hours the next day. FWIW I never thought that Pike should have been automatically included either, but nobody asked me at the time ;) Once it's treated as an exception put to a TC vote, it's up to TC members to decide if it "sounds really cool"[2] enough to make an exception for. I think we can all agree that this is an extremely subjective decision, and I'd expect that people took all of the factors mentioned in this thread (both for and against) into account in their vote. In the end, a majority of the TC decided not to add it to the list. I hope that helps clarify the process that led us here. cheers, Zane. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-07.log.html#t2019-08-07T15:21:10 [2] actual words From corvus at inaugust.com Mon Aug 12 23:18:07 2019 From: corvus at inaugust.com (James E. 
Blair) Date: Mon, 12 Aug 2019 16:18:07 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> (Zane Bitter's message of "Mon, 12 Aug 2019 17:57:42 -0400") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> Message-ID: <875zn27z2o.fsf@meyer.lemoncheese.net> Zane Bitter writes: > To be clear, the thing that stopped us from automatically including it > was that there was no consensus that it met the criteria, which > exclude words that describe a general class of Geographic feature. I > regret that you didn't get an opportunity to discuss this; I initially > raised it in response to you and I both being pinged[1], but we > probably should have tried to ping you again when discussions resumed > during office hours the next day. FWIW I never thought that Pike > should have been automatically included either, but nobody asked me at > the time ;) Thanks, I suppose it's better late than never to have this discussion. Happily, the process does not require that the TC come to a consensus on whether a name fits the criteria. In establishing the process, this was a deliberate decision to avoid the TC having exactly that kind of discussion because we all have better things to be doing. That is why this is the sole purview of the election official. We should remember that the purpose of this process is to collect as many names as possible, weeding out only the obvious non-conforming candidates, so that the whole community may decide on the name. As I understand it, the sequence of events that led us here was: A) Doug (as interim unofficial election official) removed the name for unspecified reasons. [1] B) I objected to the removal. 
This is in accordance with step 5 of the process: Once the list is finalized and publicized, a one-week period shall elapse before the start of the election so that any names removed from consideration because they did not meet the Release Name Criteria may be discussed. Names erroneously removed may be re-added during this period, and the Technical Committee may vote to add exceptional names (which do not meet the standard criteria). C) Rico (the election official at the time) agreed with my reasoning that it was erroneously removed and re-added the name. [2] D) The list was re-issued and the name was once again missing. Four reasons were cited, three of which have no place being considered prior to voting, and the fourth is a claim that it does not meet the criteria. Aside from no explanation being given for (A) (and assuming that the explanation, if offered, would have been that the name does not meet the criteria) the events A through C are fairly in accordance with the documented process. I believe the following: * It was incorrect for the name to have been removed in the first place (but that's fine, it's an appeal-able decision and I have appealed it). * It was correct for Rico to re-add the name. There are several reasons for this: * Points 1 and 2 of the Release Name Criteria are not at issue. * The name refers to the human geography of the area around the summit (it is a name of a place you can find on the map), and so satisfies point 3. * I believe that point 4, which it has been recently asserted the name does not satisfy, was not intended to exclude names which describe features. It was a point of clarification that should a feature have a descriptive term, it should not be included, for the sake of brevity. Point 4 begins with the length limitation, and therefore should be considered as a discussion primarily of length. It states: The name must be a single word with a maximum of 10 characters. 
Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo". Note that the examples in the text are "Foo City" and "Foo Peak" for "Foo". Obviously, that example would be for the "F" release where "City" and "Peak" would not be candidates. Therefore, point 4 is effectively silent on whether words like "City" and "Peak" would be permitted for the "C" and "P" releases. * The name "Pike" was accepted as meeting the criteria. It is short for "Massachusetts Turnpike". It serves the same function as a descriptive name and serves as precedent. * I will absolutely agree that point 4 could provide more clarity on this and therefore a subjective evaluation must be made. On this point, we should refer to step 4 of the Release Naming Process: In general, the official should strive to make objective determinations as to whether a name meets the Release Name Criteria, but if subjective evaluation is required, should be generous in interpreting the rules. It is not necessary to reduce the list of proposed names to a small number. This indicates again that Rico was correct to accept the name, because of the "generous interpretation" clause. The ambiguity in point 4 combined with the precedent set by Pike is certainly sufficient reason to be "generous". * While the election official is free to consult with whomever they wish, including the rest of the TC, there is no formal role for the TC in reducing the names before voting begins (in fact, the process clearly indicates that is an anti-goal). So after Rico re-added the name, it was not necessary to further review or reverse the decision. I appreciate that the TC proactively considered the name under the "really cool" exception, even though I had not requested it (deeming it to be unnecessary). Thank you for that. 
Given the above reasoning, I hope that I have made a compelling case that the name meets the criteria (or at least, warrants "generous interpretation") and would appreciate it if the name were added back to the poll. Thanks, Jim [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T15:02:46 [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008334.html From rico.lin.guanyu at gmail.com Tue Aug 13 02:01:03 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 13 Aug 2019 10:01:03 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <875zn27z2o.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> Message-ID: IMO, it's good to release the whole thing out of TC's responsibility, and do hope we can do these in an automatic way, so like people can just raise whatever cool name it's and see if that pass a CI job. :) As long as the whole naming process is still under TC's governance and words like *the process should consider potential issues of trademark* still in [1] (which I think we should specific put down as a more formal rule, or remove it out of that docs), I believe TCs still need to confirm the final list. And that's why I'm the one asking TCs to put their final confirm with it through the inner TC poll during office hour. Maybe the process will change though all these discussions and patches you proposed on governance repo (kind of hope it will, at least we should improve the docs to make more clear info. 
for all), but as long as the inner TC poll result does not turn over, I will respect the result, and hope that's a good enough reason to call that list final. This discussion is definitely worth continuing, but as I promised when postponing for 24 hours yesterday, it's time to bring the public poll up. [1] https://governance.openstack.org/tc/reference/release-naming.html On Tue, Aug 13, 2019 at 7:22 AM James E. Blair wrote: > Zane Bitter writes: > > > To be clear, the thing that stopped us from automatically including it > > was that there was no consensus that it met the criteria, which > > exclude words that describe a general class of Geographic feature. I > > regret that you didn't get an opportunity to discuss this; I initially > > raised it in response to you and I both being pinged[1], but we > > probably should have tried to ping you again when discussions resumed > > during office hours the next day. FWIW I never thought that Pike > > should have been automatically included either, but nobody asked me at > > the time ;) > > Thanks, I suppose it's better late than never to have this discussion. > > Happily, the process does not require that the TC come to a consensus on > whether a name fits the criteria. In establishing the process, this was > a deliberate decision to avoid the TC having exactly that kind of > discussion because we all have better things to be doing. That is why > this is the sole purview of the election official. > > We should remember that the purpose of this process is to collect as > many names as possible, weeding out only the obvious non-conforming > candidates, so that the whole community may decide on the name. > > As I understand it, the sequence of events that led us here was: > > A) Doug (as interim unofficial election official) removed the name for > unspecified reasons. [1] > > B) I objected to the removal. 
This is in accordance with step 5 of the > process: > > Once the list is finalized and publicized, a one-week period shall > elapse before the start of the election so that any names removed > from consideration because they did not meet the Release Name > Criteria may be discussed. Names erroneously removed may be > re-added during this period, and the Technical Committee may vote > to add exceptional names (which do not meet the standard criteria). > > C) Rico (the election official at the time) agreed with my reasoning > that it was erroneously removed and re-added the name. [2] > > D) The list was re-issued and the name was once again missing. Four > reasons were cited, three of which have no place being considered > prior to voting, and the fourth is a claim that it does not meet the > criteria. > > Aside from no explanation being given for (A) (and assuming that the > explanation, if offered, would have been that the name does not meet the > criteria) the events A through C are fairly in accordance with the > documented process. > > I believe the following: > > * It was incorrect for the name to have been removed in the first place > (but that's fine, it's an appeal-able decision and I have appealed > it). > > * It was correct for Rico to re-add the name. There are several reasons > for this: > > * Points 1 and 2 of the Release Name Criteria are not at issue. > > * The name refers to the human geography of the area around the summit > (it is a name of a place you can find on the map), and so satisfies > point 3. > > * I believe that point 4, which it has been recently asserted the name > does not satisfy, was not intended to exclude names which describe > features. It was a point of clarification that should a feature > have a descriptive term, it should not be included, for the sake of > brevity. Point 4 begins with the length limitation, and therefore > should be considered as a discussion primarily of length. 
It > states: > > The name must be a single word with a maximum of 10 characters. > Words that describe the feature should not be included, so "Foo > City" or "Foo Peak" would both be eligible as "Foo". > > Note that the examples in the text are "Foo City" and "Foo Peak" for > "Foo". Obviously, that example would be for the "F" release where > "City" and "Peak" would not be candidates. Therefore, point 4 is > effectively silent on whether words like "City" and "Peak" would be > permitted for the "C" and "P" releases. > > * The name "Pike" was accepted as meeting the criteria. It is short > for "Massachusetts Turnpike". It serves the same function as a > descriptive name and serves and precedent. > > * I will absolutely agree that point 4 could provide more clarity on > this and therefore a subjective evaluation must be made. On this > point, we should refer to step 4 of the Release Naming Process: > > In general, the official should strive to make objective > determinations as to whether a name meets the Release Name > Criteria, but if subjective evaluation is required, should be > generous in interpreting the rules. It is not necessary to reduce > the list of proposed names to a small number. > > This indicates again that Rico was correct to accept the name, > because of the "generous interpretation" clause. The ambiguity in > point 4 combined with the precedent set by Pike is certainly > sufficient reason to be "generous". > > * While the election official is free to consult with whomever they > wish, including the rest of the TC, there is no formal role for the TC > in reducing the names before voting begins (in fact, the process > clearly indicates that is an anti-goal). So after Rico re-added the > name, it was not necessary to further review or reverse the decision. > > I appreciate that the TC proactively considered the name under the > "really cool" exception, even though I had not requested it (deeming it > to be unnecessary). Thank you for that. 
> > Given the above reasoning, I hope that I have made a compelling case > that the name meets the criteria (or at least, warrants "generous > interpretation") and would appreciate it if the name were added back to > the poll. > > Thanks, > > Jim > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T15:02:46 > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008334.html > > -- May The Force of OpenStack Be With You, Rico Lin (irc: ricolin) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Aug 13 04:56:08 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 13 Aug 2019 12:56:08 +0800 Subject: [all][tc]Naming the U release of OpenStack -- Poll open Message-ID: Hi, all OpenStackers, It's time to vote for the naming of the U release!! The official U release naming poll is now open!! First, big thanks to all the people who took their own time to propose names on [2] or helped push and improve the naming process. Thank you. We'll use a public polling option over per-user private URLs for voting. This means everybody should proceed to use the following URL to cast their vote: https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_19e5119b14f86294&akey=0cde542cb3de1b12 We've selected a public poll to ensure that the whole community, not just Gerrit change owners, gets a vote. Also, the size of our community has grown such that we can overwhelm CIVS if using private URLs. A public poll can mean that users behind NAT, proxy servers, or firewalls may receive a message saying that their vote has already been lodged; if this happens, please try another IP. Because this is a public poll, results will currently be viewable only by me until the poll closes. Once closed, I'll post the URL making the results viewable to everybody. This was done to avoid everybody seeing the results while the public poll is running. 
The poll will officially end on 2019-08-20 23:59:00+00:00 (UTC time)[1], and results will be posted shortly after. [1] https://governance.openstack.org/tc/reference/release-naming.html [2] https://wiki.openstack.org/wiki/Release_Naming/U_Proposals -- May The Force of OpenStack Be With You, Rico Lin (irc: ricolin) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dirk at dmllr.de Tue Aug 13 09:29:49 2019 From: dirk at dmllr.de (Dirk Müller) Date: Tue, 13 Aug 2019 11:29:49 +0200 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient In-Reply-To: References: Message-ID: Hi Alex, On Mon, 12 Aug 2019 at 19:29, Alex Schultz wrote: > Do you have an example of what the issue with tripleo/os-collect-config is? It looks like os-collect-config has support for using zaqarclient as a notification mechanism for work but I don't think it's currently used. it depends on it (has it in its requirements.txt), so if it's not used, maybe we can remove it? > That being said, can we just fix whatever the issue is? I don't see > os-collect-config using pool_group anywhere sure, let's see if we can fix zaqarclient and release a new version. Hao Wang recently started responding to this (in a private conversation), so I'll give him and the team a few days to sort the issues out. Greetings, Dirk From mriedemos at gmail.com Tue Aug 13 12:25:52 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 13 Aug 2019 07:25:52 -0500 Subject: [nova] The race for 2.76 Message-ID: <339e43ff-1183-d1b7-0a25-9ddf4274116d@gmail.com> There are several compute API microversion changes that are conflicting and will be fighting for 2.76, but I think we're trying to prioritize this one [1] for the ironic power sync external event handling since (1) Surya is going to be on vacation soon, (2) there is an ironic change that depends on it which has had review [2] and (3) the nova change has had quite a bit of review already. 
As such I think others waiting to rebase from 2.75 to 2.76 should probably hold off until [1] is approved, which should happen today or tomorrow. [1] https://review.opendev.org/#/c/645611/ [2] https://review.opendev.org/#/c/664842/ -- Thanks, Matt From mnaser at vexxhost.com Tue Aug 13 12:29:23 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 13 Aug 2019 08:29:23 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the weekly update for what happened in the OpenStack TC. You can get more information by checking for changes in the openstack/governance repository. # General changes - Changed the dates for the ‘U’ naming poll: https://review.opendev.org/#/c/674465/ - Rico Lin volunteered to be the naming poll coordinator instead of Tony Breeds: https://review.opendev.org/#/c/674494/ - Added a wiki link to the rpm-packaging project: https://review.opendev.org/#/c/673837/ - Updated policy regarding project retirements: https://review.opendev.org/#/c/670741/ Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From amotoki at gmail.com Tue Aug 13 13:30:05 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 13 Aug 2019 22:30:05 +0900 Subject: [neutron] bug deputy report (the week of Aug 5) Message-ID: Hi neutrinos, I was a bug deputy for last week (Aug 5 to Aug 11). We got a few new bugs last week. Two bugs are in the undecided state. https://bugs.launchpad.net/neutron/+bug/1834045 Live-migration double binding doesn't work with OVN New, Undecided, No assignee (in neutron) NOTE: 'neutron' was added to the affected projects last week. This is related to the double binding feature on non-agent based drivers like networking-ovn. networking-ovn already has a workaround for this. Is any further work needed on the neutron side? More input would be appreciated.
https://bugs.launchpad.net/neutron/+bug/1839658 "subnet" register in the DB can have network_id=NULL New, Undecided, Assigned to ralonsoh NOTE: I could not reproduce it yet. It looks like a corner case and more code investigation might be needed. The following new bugs have been fixed. - https://bugs.launchpad.net/bugs/1839595 Thanks, Akihiro From aschultz at redhat.com Tue Aug 13 13:51:43 2019 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 13 Aug 2019 07:51:43 -0600 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient In-Reply-To: References: Message-ID: On Tue, Aug 13, 2019 at 3:30 AM Dirk Müller wrote: > Hi Alex, > > Am Mo., 12. Aug. 2019 um 19:29 Uhr schrieb Alex Schultz < > aschultz at redhat.com>: > > > Do you have an example of what the issue with tripleo/os-collect-config > is? It looks like os-collect-config has support for using zaqarclient as a > notification mechanism for work but I don't think it's currently used. > > it depends on it (has it in its requirements.txt) so if its not used, > maybe we can remove it? > > Well code exists that calls zaqarclient, I just don't know if anyone has deployed the functionality. > > That being said, can we just fix whatever issue is? I don't see > os-collect-config using pool_group anywhere > > sure, lets see if we can fix zaqarclient and release a new version. > hao wang recently started responding to this (in a private > conversation), so I'll give him and the team a few days to sort the > issues out. > > Ok let us know if you have any specific issues with os-collect-config and we can take a look. Thanks > Greetings, > Dirk > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Aug 13 14:44:07 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 13 Aug 2019 15:44:07 +0100 Subject: [neutron] bug deputy report (the week of Aug 5) In-Reply-To: References: Message-ID: On Tue, 2019-08-13 at 22:30 +0900, Akihiro Motoki wrote: > Hi neutrinos, > > I was a bug deputy for last week (Aug 5 to Aug 11). > We got a few new bug last week. > > Two bugs are in the undecided state. > > https://bugs.launchpad.net/neutron/+bug/1834045 > Live-migration double binding doesn't work with OVN > New, Undecided, No assignee (in neutron) > NOTE: 'neutron' was added to the affected projects last week. > This is related to the double binding feature on non-agent based > drivers like networking-ovn. > networking-ovn already has a workaround for this. > Is any further work needed in neutron side? > More input would be appreciated. From a nova and neutron perspective, my understanding is that all drivers are required to implement this feature. The nova implementation certainly requires that all neutron backends support it if neutron reports support for the portbinding-extended extension. Given that neutron does not support disabling this extension, that in turn implies that this has been a required extension for all ml2 drivers to support. It is my understanding, at least, that this was not intended to be optional. As an interim workaround, nova supports disabling treating the lack of a vif plugged event as fatal for live migrations: https://docs.openstack.org/nova/latest/configuration/config.html#compute.live_migration_wait_for_vif_plug However, we do want to eventually remove that, and it should also be noted that we intend to use the multiple port binding in other code paths, which means some nova features will not be available with OVN until support is added. I have not updated the bug, but I personally tend to believe it should be marked as invalid against nova. The issue to me seems to be that neutron is reporting support for an extension that is not supported by networking-ovn, and networking-ovn is not implementing a mandatory extension that cannot be disabled. If you look at the nova spec https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html#proposed-change it states: "There is no additional configuration for deployers. The use of multiple bindings will be enabled automatically. We decide whether to use the new or old API flow, if both compute nodes support this feature and based on the available Neutron API extensions. We cache extensions support in the usual way utilizing the existing neutron_extensions_cache. Note: The new neutron API extension will be implemented in the ml2 plugin layer, above the ml2 driver layer so if the extension is exposed it will be supported for all ml2 drivers. Monolithic plugins will have to implement the extension separately and will continue to use the old workflow until their maintainers support the new neutron extension." This statement about implementing the extension in the ml2 plugin layer came organically from conversations with Miguel Lavalle when he took over the implementation of https://review.opendev.org/#/c/414251/ and we discussed this; the existence of the extension being reported became the contract by which nova detects support for this feature, before codifying it in the nova spec. This contract was established specifically to ensure that out-of-tree drivers would still work by falling back to the old flow, with the expectation that they would all be updated eventually.
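The detection contract described above can be sketched roughly as follows. This is an illustrative sketch, not nova's actual code: the helper names and the sample response bodies are invented, and the alias string simply follows the "portbinding-extended" wording used in this thread.

```python
# Illustrative sketch of extension-based feature detection (not nova's
# actual implementation). A client fetches the extension list from
# neutron once, caches the aliases, and picks the new port-binding flow
# only when the extension is advertised.

def extension_aliases(extensions_body: dict) -> set:
    """Collect the alias of every extension in an extension-list body."""
    return {ext["alias"] for ext in extensions_body["extensions"]}

def use_new_port_binding_flow(extensions_body: dict) -> bool:
    """Fall back to the old flow unless the extension is advertised."""
    return "portbinding-extended" in extension_aliases(extensions_body)

# Invented sample bodies: one backend without the extension, one with it.
legacy = {"extensions": [{"alias": "port-security"}, {"alias": "qos"}]}
modern = {"extensions": [{"alias": "port-security"},
                         {"alias": "portbinding-extended"}]}
```

With data like this, the legacy body selects the old flow and the modern one selects the new flow, which is exactly why a driver that does not actually implement the feature, while the plugin layer above it still advertises the extension, breaks the contract.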
> > https://bugs.launchpad.net/neutron/+bug/1839658 > "subnet" register in the DB can have network_id=NULL > New, Undecided, Assigned to ralonsoh > NOTE: I could not reproduce it yet. It looks like a corner case and > more code investigation might be needed. > > The following new bugs have been fixed. > - https://bugs.launchpad.net/bugs/1839595 > > Thanks, > Akihiro > From skaplons at redhat.com Tue Aug 13 14:46:17 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 13 Aug 2019 16:46:17 +0200 Subject: [neutron] QoS meeting cancelled today Message-ID: <9956FAC6-390D-4EE3-86A8-A29781EE1348@redhat.com> Hi, As there is no agenda for today and Rodolfo is not available, today’s QoS meeting will be cancelled. Sorry for very late announcement of that :) — Slawek Kaplonski Senior software engineer Red Hat From berndbausch at gmail.com Tue Aug 13 14:47:50 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 13 Aug 2019 23:47:50 +0900 Subject: [glance] glance-cache-management hardcodes URL with port In-Reply-To: References: <4b4d09d3-21ae-b2bf-e888-faff196d3aec@gmail.com> Message-ID: <712e178b-0862-2cad-a9f3-f1c92729295c@gmail.com> Hi Erno, yes, the /image is the problem. This is a Devstack (stable/Stein), which by default deploys Glance and all(?) other services as WSGI applications. I know that this is not recommended for Glance, but it's not illegal either, as far as I understand it. Not illegal, but it disables the glance-cache-manage client in its current form. Bernd On 8/13/2019 1:50 AM, Erno Kuvaja wrote: > On Thu, Aug 8, 2019 at 12:49 PM Bernd Bausch > wrote: > > Stein re-introduces Glance cache management, but I have not been > able to > use the glance-cache-manage command. I always get errno 111, > connection > refused. > > It turns out that the command tries to access > http://localhost:9292. It > has options for non-default IP address and port, but unfortunately > on my > (devstack) cloud, the Glance endpoint is > http://192.168.1.200/image. 
No > port. > Is there a way to tell glance-cache-manage to use this endpoint? > Bernd. > Hi Bernd, > You can always give it the port 80, the real problem likely is the > prefix /image you have there. Are you running glance-api as wsgi app > under some http-server or is that reverse-proxy/loadbalancer you're > directing the glance-cache-manage towards? Remember that the > management is not currently cluster wide so you should be always > targeting single service process at the time. > - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Aug 13 14:57:29 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Aug 2019 07:57:29 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Rico Lin's message of "Tue, 13 Aug 2019 10:01:03 +0800") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> Message-ID: <8736i56rl2.fsf@meyer.lemoncheese.net> Rico Lin writes: > IMO, it's good to release the whole thing out of TC's responsibility, and > do hope we can do these in an automatic way, so like people can just raise > whatever cool name it's and see if that pass a CI job. :) I agree, and in fact, that's why I wrote this process originally, to do exactly that. If we were to simply follow the steps described in [1] (it is a 7-step process, each one clearly saying what should be done), I don't think we would have so much confusion. The only responsibilities that the TC has in that document are to set the dates, the region, appoint the coordinator, and vote on adding "really cool" names. That's it.
The process also says that in the rare event that a subjective evaluation of whether a name meets the criteria needs to be made, the coordinator should be generous. That means that the coordinator should accept names, even if they are not certain they meet the criteria. > As long as the whole naming process is still under TC's governance and > words like *the process should consider potential issues of trademark* > still in [1] (which I think we should specific put down as a more formal > rule, or remove it out of that docs), I believe TCs still need to confirm > the final list. I disagree here. That quote is from the preamble. It is general introductory material, but is not part of the specific step-by-step process which should be followed. There *is* more specific detail about that, it is step 7: The Foundation will perform a trademark check on the winning name. If there is a trademark conflict, then the Foundation will proceed down the ranked list of Condorcet results until a name without a trademark conflict is found. This will be the selected name. Therefore, trademark considerations are explicitly out of the purview of the TC. Several folks, including you, have said that they wish the process were out of the TC's hands. The fact is that it already is, but unfortunately people seem to keep wanting to manipulate the list before it goes out for a vote. I believe that the current process as written is as straightforward and fair as we can make it and still have community involvement. This is not the first time we, as a community, have not been able to follow it. I think that's because not enough of us care. This election had, at least, three coordinators, it was run late, dates were missed, and something like 10 names were dropped from the poll before it went out, simply due to personal preference of various folks on the TC. 
Since we take particular pride in our community participation, the fact that we have not been able or willing to do this correctly reflects very poorly on us. I would rather that we not do this at all than do it badly, so I think this should be the last release with a name. I've proposed that change here: https://review.opendev.org/675788 -Jim [1] https://governance.openstack.org/tc/reference/release-naming.html From skaplons at redhat.com Tue Aug 13 14:58:20 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 13 Aug 2019 16:58:20 +0200 Subject: [neutron] bug deputy report (the week of Aug 5) In-Reply-To: References: Message-ID: <9218FFFD-0D55-44C8-938B-8751C63643C7@redhat.com> Hi, > On 13 Aug 2019, at 16:44, Sean Mooney wrote: > > On Tue, 2019-08-13 at 22:30 +0900, Akihiro Motoki wrote: >> Hi neutrinos, >> >> I was a bug deputy for last week (Aug 5 to Aug 11). >> We got a few new bug last week. >> >> Two bugs are in the undecided state. >> >> https://bugs.launchpad.net/neutron/+bug/1834045 >> Live-migration double binding doesn't work with OVN >> New, Undecided, No assignee (in neutron) >> NOTE: 'neutron' was added to the affected projects last week. >> This is related to the double binding feature on non-agent based >> drivers like networking-ovn. >> networking-ovn already has a workaround for this. >> Is any further work needed in neutron side? >> More input would be appreciated. > Form a nova and neutron perspective my understanding is that all drivers are > required to implement this feature. The nova implementation certenly requires > that all neutron backends support it if neutron reports support for the > portbinding-extended extension. Given that neutron does not support disabling this > extension that in trun implies that this has been a required extension for all ml2 drivers > to support. It is my understanding at least that this was not intended to be optional. 
> > As an interim workaround nova supports disabling treating the lack of a vif plugged event as fatal for live migrations. > https://docs.openstack.org/nova/latest/configuration/config.html#compute.live_migration_wait_for_vif_plug > however we do want to eventually remove that and it should also be noted that we intent to use the multiple port binding > in other code paths which means some nova feature will not be available with ovn until support is added. > > i have not updated the bug but i personal tend to be live it should be marked as invalid against nova. > the issue to me seams to be that neutron is reporting support for an extension that not supported by networking-ovn. > and networking-ovn is not implementing a mandatory extention that cannot be disabled. IIUC the bug report correctly, the problem is that during live migration nova does an inactive binding on the destination host and waits for a "network-vif-plugged" event about the port on the destination host before it will start the migration. And that is IMO wrong, as this event should IMO be sent after the migration, as then the vif is really plugged and configured on the dest host. It works currently for the ML2/OVS case because during creation of the inactive binding neutron does a port update which triggers the neutron-ovs-agent on the source host, and this triggers sending this notification. IMO there should be a different notification for this case, or nova should maybe check in some other way whether the inactive binding is done on the destination host or not. But I wasn’t the one who was debugging it, so my understanding might be wrong here. Adding Maciek in CC as he was debugging this issue on the networking-ovn side and he reported this bug originally.
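A toy model of the wait described above may make the ordering problem clearer. All names here are illustrative and this is not nova's actual code: migration may only start once a "network-vif-plugged" event for the destination port arrives, unless the operator disabled the wait via the workaround option mentioned earlier in the thread.

```python
import queue

def may_start_migration(events: queue.Queue, port_id: str,
                        wait_for_vif_plug: bool = True,
                        timeout: float = 0.2) -> bool:
    """Toy pre-live-migration gate: block until the expected event shows up.

    With wait_for_vif_plug=False (the workaround knob), a missing event
    is not treated as fatal and migration proceeds immediately.
    """
    if not wait_for_vif_plug:
        return True
    try:
        while True:
            name, pid = events.get(timeout=timeout)
            if name == "network-vif-plugged" and pid == port_id:
                return True
    except queue.Empty:
        return False  # no event arrived in time: treated as fatal

# In the ML2/OVS case the source-host port update happens to emit the
# event in time; with a backend that never sends it, the gate times out.
bus = queue.Queue()
bus.put(("network-vif-plugged", "dest-port"))
```

Running the gate against `bus` succeeds immediately, while an empty queue only succeeds when the wait is disabled, which is the behavior the workaround config option trades on.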
> > i you look at the nova spec > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html#proposed-change > it states > > "Note: The new neutron API extension will be implemented in the ml2 plugin layer, above the ml2 driver layer so if the > extension is exposed it will be supported for all ml2 drivers. Monolithic plugins will have tThere is no additional > configuration for deployers. The use of multiple bindings will be enabled automatically. We decide whether to use the > new or old API flow, if both compute nodes support this feature and based on the available Neutron API extensions. We > cache extensions support in the usual way utilizing the existing neutron_extensions_cache. > > Note: The new neutron API extension will be implemented in the ml2 plugin layer, above the ml2 driver layer so if the > extension is exposed it will be supported for all ml2 drivers. Monolithic plugins will have to implement the extension > separately and will continue to use the old workflow until their maintainers support the new neutron extension." > > This statement about implementing the exteion in the ml2 plugin layer came organically form conversations with miguel > lavalle when he took over the implemantion of https://review.opendev.org/#/c/414251/ and we discuss this and the > existence of the extention being report being the contract by which nova detect support of this feature at before > codifying it at the nova spec. > > > this contract was estrablish specificaly to ensure that we could ensure out of tree drivers would still work by falling > back to the old flow with the expectation that they would all be updated eventurally. > >> >> https://bugs.launchpad.net/neutron/+bug/1839658 >> "subnet" register in the DB can have network_id=NULL >> New, Undecided, Assigned to ralonsoh >> NOTE: I could not reproduce it yet. It looks like a corner case and >> more code investigation might be needed. 
>> >> The following new bugs have been fixed. >> - https://bugs.launchpad.net/bugs/1839595 >> >> Thanks, >> Akihiro — Slawek Kaplonski Senior software engineer Red Hat From fungi at yuggoth.org Tue Aug 13 15:38:28 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 13 Aug 2019 15:38:28 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <8736i56rl2.fsf@meyer.lemoncheese.net> References: <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: <20190813153828.ifzdhhxeviz5svs2@yuggoth.org> On 2019-08-13 07:57:29 -0700 (-0700), James E. Blair wrote: [...] > Several folks, including you, have said that they wish the process > were out of the TC's hands. The fact is that it already is, but > unfortunately people seem to keep wanting to manipulate the list > before it goes out for a vote. [...] You've also convinced me that I should not have requested removal of politically-sensitive choices from the list, since we could have done that as part of the community discussion instead of prior to it. I feel like the goal in narrowing the list of options was to speed up the public review period so we could get on with the vote quickly and have a name sooner, but in retrospect that raised as much or more discussion than leaving them in likely would have done. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Tue Aug 13 16:14:41 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 13 Aug 2019 17:14:41 +0100 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <8736i56rl2.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote: > Since we take particular pride in our community participation, the fact > that we have not been able or willing to do this correctly reflects very > poorly on us. I would rather that we not do this at all than do it > badly, so I think this should be the last release with a name. I've > proposed that change here: > > https://review.opendev.org/675788 Not to take this out of context, but it is a rather long thread, so I have snipped the bit I wanted to comment on. I think not naming releases would be problematic on two fronts. One, without a common community name, I think codenames or other convenient names are going to crop up; many have been referring to the U release as the unicorn release just to avoid the confusion between "U" and "you" when speaking about the release until we have an official name. If we had no official names, I think we would keep using those placeholders at least on IRC or in person (granted, we would not use them for code or docs). That is a minor thing, but the more disruptive issue I see is that nova's U release will be 21.0.0 and neutron's U release will be 16.0.0?
Without a name to refer to the set of compatible projects for a given version, we would only have the letter, and from a marketing perspective, and even from a development perspective, I think that will be problematic. We could just have the V release, but I think it loses something in clarity. From thierry at openstack.org Tue Aug 13 16:19:36 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Aug 2019 18:19:36 +0200 Subject: [tc] meeting summary for aug. 8 2019 In-Reply-To: References: Message-ID: <0b62399f-29f2-37dd-c133-7e7eec2a9cc7@openstack.org> Tim Bell wrote: >> We’re also thinking about starting a “Large Scale” SIG so people could >> collaborate in tackling more of the scaling issues together. Thierry >> Carrez (ttx) and I will be looking into that by mentioning the idea to >> LINE and YahooJapan (as some perspective at-scale operators)to see >> what they think and also make a list of organizations that could be >> interested. Rico Lin (ricolin) will also update the SIG guidelines >> documents to make the whole process easier and Jim Rollenhagen (jroll) >> will try and bring this up at Verizon Media. > > How about a forum brainstorm in Shanghai ? Yes, my goal is to get a number of interested stakeholders to meet in Shanghai and see if there is enough alignment in goals to openly collaborate on those questions. We have had several groups tackling facets of the "large scale" problem in the past (performance WG, large deployments workgroup, LCOO...) -- my thinking here would be to narrow it down to addressing scaling limitations in cluster sizes (think: RabbitMQ falling down after a given number of compute nodes). -- Thierry Carrez (ttx) From corvus at inaugust.com Tue Aug 13 16:34:33 2019 From: corvus at inaugust.com (James E.
Blair) Date: Tue, 13 Aug 2019 09:34:33 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> (Sean Mooney's message of "Tue, 13 Aug 2019 17:14:41 +0100") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> Message-ID: <87imr13tye.fsf@meyer.lemoncheese.net> Sean Mooney writes: > On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote: >> Since we take particular pride in our community participation, the fact >> that we have not been able or willing to do this correctly reflects very >> poorly on us. I would rather that we not do this at all than do it >> badly, so I think this should be the last release with a name. I've >> proposed that change here: >> >> https://review.opendev.org/675788 > > not to takethis out of context but it is rather long thread so i have sniped > the bit i wanted to comment on. > > i thnik not nameing release would be problemeatic on two fronts. > one without a common comunity name i think codename or other conventint names > are going to crop up as many have been refering to the U release as the unicorn > release just to avoid the confusion between "U" and "you" when speak about the release > untill we have an offical name. if we had no offical names i think we woudl keep using > those placeholders at least on irc or in person. (granted we would not use them for code > or docs) > > that is a minor thing but the more distributive issue i see is that nova's U release > will be 21.0.0? and neutorns U release will be 16.0.0? 
without a name to refer to the > set of compatiable project for a given version we woudl only have the > letter and form a marketing > perspective and even from development perspective i think that will be problematic. > > we could just have the V release but i think it loses something in clarity. That's a good point. Maybe we could just number them? V would be "OpenStack Release 22". Or we could refer to them by date, as we used to, but without attempting to use dates as actual version numbers. -Jim From thierry at openstack.org Tue Aug 13 16:46:21 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Aug 2019 18:46:21 +0200 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> Message-ID: <63ecc3a8-bb9a-0155-6b0b-f84cfa3327ef@openstack.org> Sean Mooney wrote: > On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote: >> Since we take particular pride in our community participation, the fact >> that we have not been able or willing to do this correctly reflects very >> poorly on us. I would rather that we not do this at all than do it >> badly, so I think this should be the last release with a name. I've >> proposed that change here: >> >> https://review.opendev.org/675788 > > not to takethis out of context but it is rather long thread so i have sniped > the bit i wanted to comment on. > > i thnik not nameing release would be problemeatic on two fronts. 
> one without a common comunity name i think codename or other conventint names > are going to crop up as many have been refering to the U release as the unicorn > release just to avoid the confusion between "U" and "you" when speak about the release > untill we have an offical name. if we had no offical names i think we woudl keep using > those placeholders at least on irc or in person. (granted we would not use them for code > or docs) > > that is a minor thing but the more distributive issue i see is that nova's U release > will be 21.0.0? and neutorns U release will be 16.0.0? without a name to refer to the > set of compatiable project for a given version we woudl only have the letter and form a marketing > perspective and even from development perspective i think that will be problematic. > > we could just have the V release but i think it loses something in clarity. So... I agree the naming process is creating a lot of problems (the reason I decided a long time ago to stop handling it myself, the moment it stopped being a fun exercise). But I still think we need a way to refer to a given series, and we have lots of tooling that is based on the fact that it's alpha-ordered. Ideally we'd have a way to name releases that removes the subjectivity and polling parts, which seems to be the painful part. Just have some objective way of ranking a limited number of options for trademark analysis, and be done with it. 
-- Thierry Carrez (ttx) From thierry at openstack.org Tue Aug 13 16:55:20 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Aug 2019 18:55:20 +0200 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <3cbf7440-9ef8-f491-bff8-00e315827c01@redhat.com> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> <20190809163302.GA29942@sm-workstation> <3cbf7440-9ef8-f491-bff8-00e315827c01@redhat.com> Message-ID: <9d81d7c9-a131-96dc-a34b-432eada956bb@openstack.org> Dmitry Tantsur wrote: >>> We have a weekly meeting that is currently on Thursday: >>> >>> http://eavesdrop.openstack.org/#Release_Team_Meeting > > 9pm my time is a bit difficult :( NB: We are talking of moving it to 16utc on Thursdays. -- Thierry Carrez (ttx) From aj at suse.com Tue Aug 13 17:09:11 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 13 Aug 2019 19:09:11 +0200 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <8736i56rl2.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: On 8/13/19 4:57 PM, James E. Blair wrote: > [...] > Since we take particular pride in our community participation, the fact > that we have not been able or willing to do this correctly reflects very > poorly on us. I would rather that we not do this at all than do it > badly, so I think this should be the last release with a name. 
I've > proposed that change here: > > https://review.opendev.org/675788 The names were fun initially - but sometimes a joke grows old. I agree, it's time to change the process. And giving up names is fine. But then we need another way to sequence. The U release is the 21st release, so let's use that as the overall number (even if different projects have fewer than 20 releases). Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From aj at suse.com Tue Aug 13 17:09:31 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 13 Aug 2019 19:09:31 +0200 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <63ecc3a8-bb9a-0155-6b0b-f84cfa3327ef@openstack.org> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> <63ecc3a8-bb9a-0155-6b0b-f84cfa3327ef@openstack.org> Message-ID: <5f5b6edb-4256-be00-1a40-81dbf9273b3f@suse.com> On 8/13/19 6:46 PM, Thierry Carrez wrote: > Sean Mooney wrote: >> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote: >>> Since we take particular pride in our community participation, the fact >>> that we have not been able or willing to do this correctly reflects very >>> poorly on us. I would rather that we not do this at all than do it >>> badly, so I think this should be the last release with a name.
I've >>> proposed that change here: >>> >>> https://review.opendev.org/675788 >> >> not to take this out of context, but it is a rather long thread, so I have snipped >> the bit I wanted to comment on. >> >> I think not naming releases would be problematic on two fronts. >> First, without a common community name, I think codenames or other convenient names >> are going to crop up, as many have been referring to the U release as the unicorn >> release just to avoid the confusion between "U" and "you" when speaking about the release >> until we have an official name. If we had no official names, I think we would keep using >> those placeholders, at least on IRC or in person (granted, we would not use them for code >> or docs). >> >> That is a minor thing, but the more disruptive issue I see is that nova's U release >> will be 21.0.0? and neutron's U release will be 16.0.0? Without a name to refer to the >> set of compatible projects for a given version, we would only have the letter, and from a marketing >> perspective, and even from a development perspective, I think that will be problematic. >> >> We could just have the V release, but I think it loses something in clarity. > > So... I agree the naming process is creating a lot of problems (the > reason I decided a long time ago to stop handling it myself, the moment > it stopped being a fun exercise). But I still think we need a way to > refer to a given series, and we have lots of tooling that is based on > the fact that it's alpha-ordered. And we need to change this anyhow once we get to release 26 (Z is the end of the alphabet), so we just hit the problem a few releases earlier now ;) > > Ideally we'd have a way to name releases that removes the subjectivity > and polling parts, which seems to be the painful part. Just have some > objective way of ranking a limited number of options for trademark > analysis, and be done with it. Or use numbers, years,... Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From cdent+os at anticdent.org Tue Aug 13 17:25:42 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 13 Aug 2019 18:25:42 +0100 (BST) Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: On Tue, 13 Aug 2019, Andreas Jaeger wrote: > But then we need another way to sequence. The U release is the 21th > release, so let's use that as overall number (even if different projects > have less than 20 releases). How about "U"? No tooling changes required. And then we get a couple years of not having to have this discussion again. 
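The ordinal arithmetic quoted above (series names have been alphabetical since Austin, so A is the 1st release and U the 21st) can be sanity-checked in a couple of lines of shell. `series_number` is a hypothetical helper name for illustration, not an existing tool:

```shell
# Map a series letter to its overall release number, assuming the
# alphabetical naming that started with Austin (A=1 ... Z=26).
# Uses POSIX printf's leading-quote conversion to get the character code.
series_number() {
  printf '%d\n' $(( $(printf '%d' "'$1") - 64 ))
}

series_number U   # prints 21
series_number Z   # prints 26 -- where the alphabet (and the scheme) runs out
```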
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From rico.lin.guanyu at gmail.com Tue Aug 13 17:50:21 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 14 Aug 2019 01:50:21 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: (Put my whatever hat on) Here's my suggestion: we can either make a patch to clarify the process step by step (no exceptions) or simply move everything out of https://governance.openstack.org/tc That actually leads to the current discussion here, about whether to just use versions or not. Personally, I'm interested in improving the document and not that much interested in using only versions. I do like the idea that we could use whatever alphabet we like, so this version can be *cool* and the next version can be *awesome*. Doesn't that sound cool and awesome? :) And I like the idea Chris Dent proposed, to just use *U* or *V*, etc., to save us from having to have this discussion again (I'm actually the one who proposed *U* in the list this time :) ) And if we're going to use any new naming system, I strongly suggest we remove the *Geographic Region* constraint if we plan to have a poll. It's always easy to find conflict between what local people think about a name and what the entire community thinks about it. (Put my official hat on) And for the problem of the *University* part: back in the proposal period, I found a way to add *University* back to the meets-criteria list, in the hope that people would get to discuss whether or not it could be in the poll.
And (regardless of the ongoing discussion about whether or not the TC has any role in governing this process) I did turn to the TC and ask for advice on the final answer (isn't it the TC's responsibility to guide?), so I guess we can say I'm the one who removed it from the final list. Therefore I'm taking the responsibility to say I'm the one who omitted *University*. During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, Ueishan from the meets-criteria list because they're not from the most popular spelling system in China. And we omitted Urumqi from the meets-criteria list because of a potential political issue. Those omissions happened before I was an official, and we should consider them all when discussing *University* here. I guess we should also define more clearly at which stage the official proposes the final list, and whether all names that meet the criteria should automatically be part of the final list. On Wed, Aug 14, 2019 at 1:29 AM Chris Dent wrote: > On Tue, 13 Aug 2019, Andreas Jaeger wrote: > > > But then we need another way to sequence. The U release is the 21st > > release, so let's use that as the overall number (even if different projects > > have fewer than 20 releases). > > How about "U"? > > No tooling changes required. And then we get a couple of years of not > having to have this discussion again. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ed at leafe.com Tue Aug 13 18:05:33 2019 From: ed at leafe.com (Ed Leafe) Date: Tue, 13 Aug 2019 13:05:33 -0500 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <5f5b6edb-4256-be00-1a40-81dbf9273b3f@suse.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> <63ecc3a8-bb9a-0155-6b0b-f84cfa3327ef@openstack.org> <5f5b6edb-4256-be00-1a40-81dbf9273b3f@suse.com> Message-ID: <041B5C41-39BB-4056-B249-DDBB9A718521@leafe.com> On Aug 13, 2019, at 12:09 PM, Andreas Jaeger wrote: > >> Ideally we'd have a way to name releases that removes the subjectivity >> and polling parts, which seems to be the painful part. Just have some >> objective way of ranking a limited number of options for trademark >> analysis, and be done with it. > > Or use numbers, years,... The whole release cycle was based on Ubuntu patterns, since many early OpenStackers came from Ubuntu. OpenStack, though, just used the alphabetical names for releases, rather than also using the YYYY.MM pattern. The Ubuntu animal names are cute, but most people refer to a release by the year/month name, as it's simpler. The problem with this naming cycle is the convergence of the next letter on a letter that doesn't occur natively in the language where the summit is held. That possibility was not considered when the naming requirements were adopted, and it is the root cause of all these naming discussions. It seems rather square-peg-round-hole to force it with this release by using English-like renderings of Chinese words to force compliance with a requirement that wasn't fully thought out.
So since the requirements assume the English alphabet, which doesn’t fit well with Chinese, what about suspending the requirements for geographic relevance, and instead select English words beginning with “U” that have some relevance to Shanghai. I don’t have any ideas along these lines; just pointing out that blind adherence to a poor rule will usually produce poor results. -- Ed Leafe From Tim.Bell at cern.ch Tue Aug 13 18:54:45 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Tue, 13 Aug 2019 18:54:45 +0000 Subject: [tc] meeting summary for aug. 8 2019 In-Reply-To: <0b62399f-29f2-37dd-c133-7e7eec2a9cc7@openstack.org> References: <0b62399f-29f2-37dd-c133-7e7eec2a9cc7@openstack.org> Message-ID: <0F54F6FE-4A27-494F-BB36-54BCF5043FA2@cern.ch> On 13 Aug 2019, at 18:19, Thierry Carrez > wrote: Tim Bell wrote: We’re also thinking about starting a “Large Scale” SIG so people could collaborate in tackling more of the scaling issues together. Thierry Carrez (ttx) and I will be looking into that by mentioning the idea to LINE and YahooJapan (as some perspective at-scale operators)to see what they think and also make a list of organizations that could be interested. Rico Lin (ricolin) will also update the SIG guidelines documents to make the whole process easier and Jim Rollenhagen (jroll) will try and bring this up at Verizon Media. How about a forum brainstorm in Shanghai ? Yes, my goal is to get a number of interested stakeholders to meet in Shanghai and see if there is enough alignment in goals to openly collaborate on those questions. We have had several groups tackling facets of the "large scale" problem in the past (performance WG, large deployments workgroup, LCOO...) -- my thinking here would be to narrow it down to addressing scaling limitations in cluster sizes (think: RabbitMQ falling down after a given number of compute nodes). We’ll have some CERN people in Shanghai and would be happy to participate. 
We’re following up on a number of scaling issues (such as https://techblog.web.cern.ch/techblog/post/nova-ironic-at-scale/ and Neutron) and would be happy to share best practise with other large deployments. Tim -- Thierry Carrez (ttx) -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Aug 13 19:01:26 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Aug 2019 12:01:26 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Rico Lin's message of "Wed, 14 Aug 2019 01:50:21 +0800") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: <87v9v028l5.fsf@meyer.lemoncheese.net> Rico Lin writes: > (Put my whatever hat on) > Here's my suggestion, we can either make a patch to clarify the process > step by step (no exception) or simply move everything out of > https://governance.openstack.org/tc > That actually leads to the current discussion here, to just use versions or > not. Personally, I'm interested in improving the document and not that much > interested in making only versions. I do like to see if we can use whatever > alphabet we like so this version can be *cool*, and the next version can be > *awesome*. Isn't that sounds cool and awesome? :) I'm happy to help improve it if that's what folks want. I already think it says what you and several other people want it to say. But I wrote it, and so the fact that people keep reading it and coming away with different understandings means I did a bad job. So I'll need help to figure out which parts I wasn't clear on. But I'm serious about the suggestion to scrap names altogether. 
Every time we have an issue with this, it's because people start making their own judgments when the job of the coordinator is basically just to send some emails. The process is 7 very clear steps. Many of them were definitely not followed this time. We can try to make it more clear, but we have done that before, and it still didn't prevent things from going wrong this time. As a community, we just don't care enough to get it right, and getting it wrong only produces bad feelings and wastes all our time. I'm looking forward to OpenStack Release 22. That sounds cool. That's a big number. Way bigger than like 1.x. > And like the idea > Chris Dent propose to just use *U* or *V*, etc. to save us from having to > have this discussion again(I'm actually the one to propose *U* in the list > this time:) ) That would solve a lot of problems, and create one new one in a few years. :) > And if we're going to use any new naming system, I strongly suggest we > should remove the *Geographic Region* constraint if we plan to have a poll. > It's always easy to find conflict between what local people think about the > name and what the entire community thinks about it. We will have a very long list if we do that. I'm not sure I agree with you about that problem though. In practice, deciding whether a river is within a state boundary is not that contentious. That's pretty much all that's ever been asked. > (Put my official hat on) > And for the problem of *University* part: > back in the proposal period, I find a way to add *University* back to the > meet criteria list so hope people get to discuss whether or not it can be > in the poll. And (regardless for the ongoing discussion about whether or > not TC got any role to govern this process) I did turn to TCs and ask for > the advice for the final answer (isn't that is the responsibility for TCs > to guide?), so I guess we can say I'm the one to remove it out of the final > list. 
Therefore I'm taking the responsibility to say I'm the one to omit > *University*. Thanks. I don't fault you personally for this, I think we got into this situation because no one wanted to do it and so a confusing set of people on the TC ended up performing various tasks ad-hoc. That you stepped up and took action and responsibility is commendable. You have my respect for that. I do think the conversation about University could have been more clear. Specific yes/no answers and reasons would have been nice. Instead of a single decision about whether it was included, I received 3 decisions with 4 rationales from several different people. Either of the following would have been perfectly fine outcomes: Me: Can has University, plz? Coordinator: Violates criterion 4 Me: But Pike Coordinator: Questionable, but process says be "generous" so, okay, it's in. or Coordinator: . Sorry, it's still out. However, reasons around trademark or the suitability of English words are not appropriate reasons to exclude a name. Nor is "the TC didn't like it". There is only one reason to exclude a name, and that is that it violates one of the 4 criteria. Of course it's fine to ask the TC, or anyone else for guidance. However, it's clear from the IRC log that many members of the TC did not appreciate what was being asked of them. It would be okay to ask them "Do you think this meets the criteria?" But instead, a long discussion about whether the names were *good choices* ensued. That's not one of the steps in the process. In fact, it's the exact thing that the process is supposed to avoid. No matter what the members of the TC thought about whether a name was a good idea, if it met the criteria it should be in. > During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, Ueishan > from the meet criteria list because they're not the most popular spelling > system in China. And we omitted Urumqi from the meet criteria list because > of the potential political issue. 
Those are before I was an official. And > we should consider them all during discuss about *University* here. I guess > we should define more about in which stage should the official propose the > final list of all names that meet criteria should all automatically be part > of the final list. None of those should have been removed. They, even more so than University, clearly meet the criteria, and were only removed due to personal preference. I want to be clear, there *is* a place for consideration of all of these things. That is step 3: The marketing community may identify any names of particular concern from a marketing standpoint and discuss such issues publicly on the Marketing mailing list. The marketing community may produce a list of problematic items (with citations to the mailing list discussion of the rationale) to the election official. This information will be communicated during the election, but the names will not be removed from the poll. That is where we would identify things like "this name uses an unusual romanization system" or "this name has political ramifications". We don't remove those names from the list, but we let the community know about the issues, so that when people vote, they have all the information. We trust our community to make good (or hilariously bad) decisions. That's what this all comes down to. The process as written is supposed to collect a lot of names, with a lot of information, and present them to our community and let us all decide together. That's what has been lost. 
-Jim From mnaser at vexxhost.com Tue Aug 13 19:19:36 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 13 Aug 2019 15:19:36 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87v9v028l5.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: > I do think the conversation about University could have been more clear. > Specific yes/no answers and reasons would have been nice. Instead of a > single decision about whether it was included, I received 3 decisions > with 4 rationales from several different people. Either of the > following would have been perfectly fine outcomes: For transparency, I did not feel comfortable vetoing options, and I expressed that I don't think it's my business to be picking what's in and what's out. To me, the steps that come *after* the poll were the ones that decided which options we ended up picking; that's why we have voting that lets you pick more than one option, so we can fall back to second/third choices if need be. I chose to cast a vote in the TC poll with all options tied equally. Having said that, I'm pretty disappointed in the state that we are currently in, and I'm starting to lean towards simplifying the process of name selection for releases. However, I think it's probably way too late to make these types of changes. I feel partly responsible, because I spent a significant amount of time trying to work with our Chinese community members (and OSF staff in China) to make sure that we got the name choices right, but it seems that added too much delay and process to the system.
In retrospect, this was hard from the start, and I think we should have seen this coming much earlier, because the issue that no romanization starts with "U" was brought up really early on, but we didn't really take action on it. As much as I am disappointed in the outcome, I'd like us to turn it around and resolve this while everyone's interest is invested in it right now, to keep the same thing from happening again. It's clearly not the first time this has happened, and this is a good time to rework that release naming document. From mesutaygn at gmail.com Tue Aug 13 19:32:45 2019 From: mesutaygn at gmail.com (mesut aygün) Date: Tue, 13 Aug 2019 22:32:45 +0300 Subject: Heat Template Message-ID: Hi everyone; I am writing a template for a cluster, but I can't inject the cloud-init data. How can I inject the password data for the VM?

heat_template_version: 2014-10-16
new_instance:
  type: OS::Nova::Server
  properties:
    key_name: { get_param: key_name }
    image: { get_param: image_id }
    flavor: bir
    name:
      str_replace:
        template: master-$NONCE-dmind
        params:
          $NONCE: { get_resource: name_nonce }
    user_data: |
      #!/bin/bash
      #cloud-config
      password: 724365
      echo "Running boot script" >> /home/ubuntu/test
      sudo sed -i "/^[^#]*PasswordAuthentication[[:space:]]no/c\PasswordAuthentication yes" /etc/ssh/sshd_config
      sudo useradd -d /home/mesut -m mesut
      sudo usermod --password 724365 ubuntu
      /etc/init.d/ssh restart

-------------- next part -------------- An HTML attachment was scrubbed...
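For what it's worth, one likely cause of the problem in the template above is that the user_data mixes a shell-script shebang (#!/bin/bash) with cloud-config directives; cloud-init parses user_data as a single document of one type, so the cloud-config keys are never applied. A minimal, untested sketch of a cloud-config-only variant (it assumes the image runs cloud-init; `user_data_format`, `ssh_pwauth`, `chpasswd`, and `users` are standard Heat/cloud-init keys, but the values are only illustrative):

```yaml
# Sketch only: cloud-config and a shell script cannot share one
# user_data document -- this variant is pure cloud-config.
new_instance:
  type: OS::Nova::Server
  properties:
    key_name: { get_param: key_name }
    image: { get_param: image_id }
    flavor: bir
    user_data_format: RAW        # pass the payload to cloud-init unmodified
    user_data: |
      #cloud-config
      ssh_pwauth: true           # enables PasswordAuthentication in sshd
      chpasswd:
        expire: false
        list: |
          ubuntu:724365
      users:
        - default
        - name: mesut
          homedir: /home/mesut
          lock_passwd: false
```

If a boot script is also required, a `runcmd` section in the cloud-config, or a MIME multi-part user_data with one part per format, can carry both.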
URL: From jeremyfreudberg at gmail.com Tue Aug 13 19:45:28 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Tue, 13 Aug 2019 15:45:28 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87v9v028l5.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: Even though I agree the process this time around was comedy of errors (or worse), I don't think switching to numeric releases is particularly wise... for example, more than once someone has reported an issue with Sahara and stated that they are using version of Sahara. Turns out that is actually the version of the OSA playbooks being used. Let's not add another number to get confused about into the mix. Anyway: - I think that now with you having pointed out everything that went wrong and having pointed us towards the simple steps that should be followed instead, we ought to give ourselves one more try to get the process correct for "V". We really should be able to get it right next time and there's something to be said for tradition. - I did not see in the document about how to determine the geographic region (its size etc or who should determine it). This is an opportunity for confusion sometimes leading to bitterness (and it was in the case of U -- whole China versus near Shanghai). Just some thoughts. P.S.: Single letter (like "U", "V") doesn't work when we wrap the alphabet (as has already been observed), but something like "U21", "V22", ... "A27" seems to work fine. On Tue, Aug 13, 2019 at 3:03 PM James E. 
Blair wrote: > > Rico Lin writes: > > > (Put my whatever hat on) > > Here's my suggestion, we can either make a patch to clarify the process > > step by step (no exception) or simply move everything out of > > https://governance.openstack.org/tc > > That actually leads to the current discussion here, to just use versions or > > not. Personally, I'm interested in improving the document and not that much > > interested in making only versions. I do like to see if we can use whatever > > alphabet we like so this version can be *cool*, and the next version can be > > *awesome*. Isn't that sounds cool and awesome? :) > > I'm happy to help improve it if that's what folks want. I already think > it says what you and several other people want it to say. But I wrote > it, and so the fact that people keep reading it and coming away with > different understandings means I did a bad job. So I'll need help to > figure out which parts I wasn't clear on. > > But I'm serious about the suggestion to scrap names altogether. Every > time we have an issue with this, it's because people start making their > own judgments when the job of the coordinator is basically just to send > some emails. > > The process is 7 very clear steps. Many of them were definitely not > followed this time. We can try to make it more clear, but we have done > that before, and it still didn't prevent things from going wrong this > time. > > As a community, we just don't care enough to get it right, and getting > it wrong only produces bad feelings and wastes all our time. I'm > looking forward to OpenStack Release 22. > > That sounds cool. That's a big number. Way bigger than like 1.x. > > > And like the idea > > Chris Dent propose to just use *U* or *V*, etc. to save us from having to > > have this discussion again(I'm actually the one to propose *U* in the list > > this time:) ) > > That would solve a lot of problems, and create one new one in a few > years. 
:) > > > And if we're going to use any new naming system, I strongly suggest we > > should remove the *Geographic Region* constraint if we plan to have a poll. > > It's always easy to find conflict between what local people think about the > > name and what the entire community thinks about it. > > We will have a very long list if we do that. > > I'm not sure I agree with you about that problem though. In practice, > deciding whether a river is within a state boundary is not that > contentious. That's pretty much all that's ever been asked. > > > (Put my official hat on) > > And for the problem of *University* part: > > back in the proposal period, I find a way to add *University* back to the > > meet criteria list so hope people get to discuss whether or not it can be > > in the poll. And (regardless for the ongoing discussion about whether or > > not TC got any role to govern this process) I did turn to TCs and ask for > > the advice for the final answer (isn't that is the responsibility for TCs > > to guide?), so I guess we can say I'm the one to remove it out of the final > > list. Therefore I'm taking the responsibility to say I'm the one to omit > > *University*. > > Thanks. I don't fault you personally for this, I think we got into this > situation because no one wanted to do it and so a confusing set of > people on the TC ended up performing various tasks ad-hoc. That you > stepped up and took action and responsibility is commendable. You have > my respect for that. > > I do think the conversation about University could have been more clear. > Specific yes/no answers and reasons would have been nice. Instead of a > single decision about whether it was included, I received 3 decisions > with 4 rationales from several different people. Either of the > following would have been perfectly fine outcomes: > > Me: Can has University, plz? 
> Coordinator: Violates criterion 4 > Me: But Pike > Coordinator: Questionable, but process says be "generous" > so, okay, it's in. > or > Coordinator: . Sorry, it's > still out. > > However, reasons around trademark or the suitability of English words > are not appropriate reasons to exclude a name. Nor is "the TC didn't > like it". There is only one reason to exclude a name, and that is that > it violates one of the 4 criteria. > > Of course it's fine to ask the TC, or anyone else for guidance. > However, it's clear from the IRC log that many members of the TC did not > appreciate what was being asked of them. It would be okay to ask them > "Do you think this meets the criteria?" But instead, a long discussion > about whether the names were *good choices* ensued. That's not one of > the steps in the process. In fact, it's the exact thing that the > process is supposed to avoid. No matter what the members of the TC > thought about whether a name was a good idea, if it met the criteria it > should be in. > > > During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, Ueishan > > from the meet criteria list because they're not the most popular spelling > > system in China. And we omitted Urumqi from the meet criteria list because > > of the potential political issue. Those are before I was an official. And > > we should consider them all during discuss about *University* here. I guess > > we should define more about in which stage should the official propose the > > final list of all names that meet criteria should all automatically be part > > of the final list. > > None of those should have been removed. They, even more so than > University, clearly meet the criteria, and were only removed due to > personal preference. > > I want to be clear, there *is* a place for consideration of all of these > things. 
That is step 3: > > The marketing community may identify any names of particular concern > from a marketing standpoint and discuss such issues publicly on the > Marketing mailing list. The marketing community may produce a list of > problematic items (with citations to the mailing list discussion of > the rationale) to the election official. This information will be > communicated during the election, but the names will not be removed > from the poll. > > That is where we would identify things like "this name uses an unusual > romanization system" or "this name has political ramifications". We > don't remove those names from the list, but we let the community know > about the issues, so that when people vote, they have all the > information. > > We trust our community to make good (or hilariously bad) decisions. > > That's what this all comes down to. The process as written is supposed > to collect a lot of names, with a lot of information, and present them > to our community and let us all decide together. That's what has been > lost. > > -Jim > From openstack at fried.cc Tue Aug 13 20:20:50 2019 From: openstack at fried.cc (Eric Fried) Date: Tue, 13 Aug 2019 15:20:50 -0500 Subject: [nova] Shanghai Project Update - Volunteer(s) needed Message-ID: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> Hello Nova. Traditionally summit project updates are done by the PTL(s) of the foregoing and/or upcoming cycles. In this case, the former (that would be me) will not be attending the summit, and we don't yet know who the latter will be. So I am asking for a volunteer or two who is a) attending the summit [1], and b) willing [2] to deliver a Nova project update presentation. I (and others, I am sure) will be happy to help with slide content and other prep. Please respond ASAP so we can reserve a slot. 
Thanks, efried [1] it need not be definite at this point - obviously corporate, political, travel, and personal contingencies may interfere [2] blah blah, opportunity for exposure, yatta yatta, good experience, etc. From corvus at inaugust.com Tue Aug 13 20:53:28 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Aug 2019 13:53:28 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Jeremy Freudberg's message of "Tue, 13 Aug 2019 15:45:28 -0400") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: <87o90szt13.fsf@meyer.lemoncheese.net> Jeremy Freudberg writes: > - I did not see in the document about how to determine the geographic > region (its size etc or who should determine it). This is an > opportunity for confusion sometimes leading to bitterness (and it was > in the case of U -- whole China versus near Shanghai). That's a good question. The TC decides that before the process starts, along with setting the dates. It appears in the table at the end of: https://governance.openstack.org/tc/reference/release-naming.html The process kicks off with a TC resolution commit like this: https://opendev.org/openstack/governance/commit/9219939fb153857ec5f53b986f867fcf4d29ab37 Hopefully at that point everyone is on the same page. So at least once the process starts, there shouldn't be any question about the geographic area. Of course, this time, there were 3 more commits after that one changing various things, including the area. Ideally, we'd set a region and not change it. But to me, expanding a region is at least better than reducing it. 
So I don't fault the TC for making that change (and making it in a deliberative way). Specifying the region in advance was in fact a late addition to the process and document. We didn't get that right the first time. The first entry in that table (which now says "Tokyo"; this seems revisionist to me) used to say "N/A" because we did not specify a region in advance, and it caused problems. If we keep the document (I hope we don't), I agree that we should add more text explaining that. -Jim > Just some thoughts. > > P.S.: Single letter (like "U", "V") doesn't work when we wrap the > alphabet (as has already been observed), but something like "U21", > "V22", ... "A27" seems to work fine. If we can shift it by 2 to get a B-52's tribute release, I can get on board with that. From Arkady.Kanevsky at dell.com Tue Aug 13 20:59:50 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Tue, 13 Aug 2019 20:59:50 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: Suggest we stick to geographically named releases, at least till we reach Z. The community and users are well accustomed to them now, and they have marketing value, so let's keep them. Thanks, Arkady -----Original Message----- From: Jeremy Freudberg Sent: Tuesday, August 13, 2019 2:45 PM To: James E. Blair Cc: OpenStack Discuss Subject: Re: [all][tc] U Cycle Naming Poll [EXTERNAL EMAIL] Even though I agree the process this time around was a comedy of errors (or worse), I don't think switching to numeric releases is particularly wise... 
for example, more than once someone has reported an issue with Sahara and stated that they are using version of Sahara. Turns out that is actually the version of the OSA playbooks being used. Let's not add another number to get confused about into the mix. Anyway: - I think that now, with you having pointed out everything that went wrong and having pointed us towards the simple steps that should be followed instead, we ought to give ourselves one more try to get the process correct for "V". We really should be able to get it right next time, and there's something to be said for tradition. - I did not see anything in the document about how to determine the geographic region (its size, etc., or who should determine it). This is an opportunity for confusion, sometimes leading to bitterness (and it was in the case of U -- whole China versus near Shanghai). Just some thoughts. P.S.: Single letter (like "U", "V") doesn't work when we wrap the alphabet (as has already been observed), but something like "U21", "V22", ... "A27" seems to work fine. On Tue, Aug 13, 2019 at 3:03 PM James E. Blair wrote: > > Rico Lin writes: > > > (Put my whatever hat on) > > Here's my suggestion: we can either make a patch to clarify the > > process step by step (no exceptions) or simply move everything out of > > https://governance.openstack.org/tc > > That actually leads to the current discussion here, to just use > > versions or not. Personally, I'm interested in improving the > > document and not that much interested in making only versions. I would > > like to see if we can use whatever alphabet we like so this version > > can be *cool*, and the next version can be *awesome*. Doesn't that > > sound cool and awesome? :) > > I'm happy to help improve it if that's what folks want. I already > think it says what you and several other people want it to say. But I > wrote it, and so the fact that people keep reading it and coming away > with different understandings means I did a bad job. 
So I'll need > help to figure out which parts I wasn't clear on. > > But I'm serious about the suggestion to scrap names altogether. Every > time we have an issue with this, it's because people start making > their own judgments when the job of the coordinator is basically just > to send some emails. > > The process is 7 very clear steps. Many of them were definitely not > followed this time. We can try to make it clearer, but we have > done that before, and it still didn't prevent things from going wrong > this time. > > As a community, we just don't care enough to get it right, and getting > it wrong only produces bad feelings and wastes all our time. I'm > looking forward to OpenStack Release 22. > > That sounds cool. That's a big number. Way bigger than like 1.x. > > > And I like the idea > > Chris Dent proposed to just use *U* or *V*, etc., to save us from > > having to have this discussion again (I'm actually the one who proposed > > *U* in the list this time :) ) > > That would solve a lot of problems, and create one new one in a few > years. :) > > > And if we're going to use any new naming system, I strongly suggest > > we should remove the *Geographic Region* constraint if we plan to have a poll. > > It's always easy to find conflict between what local people think > > about the name and what the entire community thinks about it. > > We will have a very long list if we do that. > > I'm not sure I agree with you about that problem though. In practice, > deciding whether a river is within a state boundary is not that > contentious. That's pretty much all that's ever been asked. > > > (Put my official hat on) > > And for the problem of the *University* part: > > back in the proposal period, I found a way to add *University* back > > to the meets-criteria list, hoping people would get to discuss whether or > > not it can be in the poll. 
And (regardless of the ongoing > > discussion about whether or not the TC has any role in governing this > > process) I did turn to the TC and ask for advice on the final > > answer (isn't it the responsibility of the TC to guide?), so I > > guess we can say I'm the one who removed it from the final > > list. > > Therefore I'm taking responsibility and saying I'm the one who omitted *University*. > > Thanks. I don't fault you personally for this; I think we got into > this situation because no one wanted to do it, and so a confusing set > of people on the TC ended up performing various tasks ad hoc. That > you stepped up and took action and responsibility is commendable. You > have my respect for that. > > I do think the conversation about University could have been clearer. > Specific yes/no answers and reasons would have been nice. Instead of > a single decision about whether it was included, I received 3 > decisions with 4 rationales from several different people. Either of > the following would have been perfectly fine outcomes: > > Me: Can has University, plz? > Coordinator: Violates criterion 4 > Me: But Pike > Coordinator: Questionable, but process says be "generous" > so, okay, it's in. > or > Coordinator: . Sorry, it's > still out. > > However, reasons around trademark or the suitability of English words > are not appropriate reasons to exclude a name. Nor is "the TC didn't > like it". There is only one reason to exclude a name, and that is > that it violates one of the 4 criteria. > > Of course it's fine to ask the TC, or anyone else, for guidance. > However, it's clear from the IRC log that many members of the TC did > not appreciate what was being asked of them. It would be okay to ask > them "Do you think this meets the criteria?" But instead, a long > discussion about whether the names were *good choices* ensued. That's > not one of the steps in the process. In fact, it's the exact thing > that the process is supposed to avoid. 
No matter what the members of > the TC thought about whether a name was a good idea, if it met the > criteria it should be in. > > > During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, > > Ueishan from the meets-criteria list because they're not from the most > > popular spelling system in China. And we omitted Urumqi from the > > meets-criteria list because of the potential political issue. Those > > removals happened before I was an official. And we should consider them all > > in the discussion about *University* here. I guess we should define more clearly > > at which stage the official should propose the final list, and whether all > > names that meet the criteria should automatically be part of it. > > None of those should have been removed. They, even more so than > University, clearly meet the criteria, and were only removed due to > personal preference. > > I want to be clear, there *is* a place for consideration of all of > these things. That is step 3: > > The marketing community may identify any names of particular concern > from a marketing standpoint and discuss such issues publicly on the > Marketing mailing list. The marketing community may produce a list of > problematic items (with citations to the mailing list discussion of > the rationale) to the election official. This information will be > communicated during the election, but the names will not be removed > from the poll. > > That is where we would identify things like "this name uses an unusual > romanization system" or "this name has political ramifications". We > don't remove those names from the list, but we let the community know > about the issues, so that when people vote, they have all the > information. > > We trust our community to make good (or hilariously bad) decisions. > > That's what this all comes down to. The process as written is > supposed to collect a lot of names, with a lot of information, and > present them to our community and let us all decide together. 
That's > what has been lost. > > -Jim > From openstack at nemebean.com Tue Aug 13 21:17:21 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 13 Aug 2019 16:17:21 -0500 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: <6fccbd18-e449-bf31-cf91-bae6356d74db@nemebean.com> On 8/13/19 2:45 PM, Jeremy Freudberg wrote: > Even though I agree the process this time around was comedy of errors > (or worse), I don't think switching to numeric releases is > particularly wise... for example, more than once someone has reported > an issue with Sahara and stated that they are using > version of Sahara. Turns out that is actually the > version of the OSA playbooks being used. Let's not add another number > to get confused about into the mix. We also use version numbers for our downstream OpenStack product, and I believe others do as well. It's kind of nice to know that if someone is talking about a numerical version they mean downstream, whereas a letter means upstream. Although I guess the other side of that is if upstream went to numbers downstream could just match those. It would be a little weird because we'd skip several major versions, but it could be done. > > Anyway: > - I think that now with you having pointed out everything that went > wrong and having pointed us towards the simple steps that should be > followed instead, we ought to give ourselves one more try to get the > process correct for "V". We really should be able to get it right next > time and there's something to be said for tradition. 
> - I did not see in the document about how to determine the geographic > region (its size etc or who should determine it). This is an > opportunity for confusion sometimes leading to bitterness (and it was > in the case of U -- whole China versus near Shanghai). > > Just some thoughts. > > P.S.: Single letter (like "U", "V") doesn't work when we wrap the > alphabet (as has already been observed), but something like "U21", > "V22", ... "A27" seems to work fine. It doesn't solve the problem of tooling that assumes names will sort alphabetically though. I suppose we could go to ZA (all hail the mighty Za-Lord!*), ZB, etc., but that seems pretty hacky. I think we're ultimately going to have to make some changes to the tooling no matter what we decide now. * any Dresden Files fans here? ;-) > > On Tue, Aug 13, 2019 at 3:03 PM James E. Blair wrote: >> >> Rico Lin writes: >> >>> (Put my whatever hat on) >>> Here's my suggestion, we can either make a patch to clarify the process >>> step by step (no exception) or simply move everything out of >>> https://governance.openstack.org/tc >>> That actually leads to the current discussion here, to just use versions or >>> not. Personally, I'm interested in improving the document and not that much >>> interested in making only versions. I do like to see if we can use whatever >>> alphabet we like so this version can be *cool*, and the next version can be >>> *awesome*. Isn't that sounds cool and awesome? :) >> >> I'm happy to help improve it if that's what folks want. I already think >> it says what you and several other people want it to say. But I wrote >> it, and so the fact that people keep reading it and coming away with >> different understandings means I did a bad job. So I'll need help to >> figure out which parts I wasn't clear on. >> >> But I'm serious about the suggestion to scrap names altogether. 
Every >> time we have an issue with this, it's because people start making their >> own judgments when the job of the coordinator is basically just to send >> some emails. >> >> The process is 7 very clear steps. Many of them were definitely not >> followed this time. We can try to make it more clear, but we have done >> that before, and it still didn't prevent things from going wrong this >> time. >> >> As a community, we just don't care enough to get it right, and getting >> it wrong only produces bad feelings and wastes all our time. I'm >> looking forward to OpenStack Release 22. >> >> That sounds cool. That's a big number. Way bigger than like 1.x. >> >>> And like the idea >>> Chris Dent propose to just use *U* or *V*, etc. to save us from having to >>> have this discussion again(I'm actually the one to propose *U* in the list >>> this time:) ) >> >> That would solve a lot of problems, and create one new one in a few >> years. :) >> >>> And if we're going to use any new naming system, I strongly suggest we >>> should remove the *Geographic Region* constraint if we plan to have a poll. >>> It's always easy to find conflict between what local people think about the >>> name and what the entire community thinks about it. >> >> We will have a very long list if we do that. >> >> I'm not sure I agree with you about that problem though. In practice, >> deciding whether a river is within a state boundary is not that >> contentious. That's pretty much all that's ever been asked. >> >>> (Put my official hat on) >>> And for the problem of *University* part: >>> back in the proposal period, I find a way to add *University* back to the >>> meet criteria list so hope people get to discuss whether or not it can be >>> in the poll. 
And (regardless for the ongoing discussion about whether or >>> not TC got any role to govern this process) I did turn to TCs and ask for >>> the advice for the final answer (isn't that is the responsibility for TCs >>> to guide?), so I guess we can say I'm the one to remove it out of the final >>> list. Therefore I'm taking the responsibility to say I'm the one to omit >>> *University*. >> >> Thanks. I don't fault you personally for this, I think we got into this >> situation because no one wanted to do it and so a confusing set of >> people on the TC ended up performing various tasks ad-hoc. That you >> stepped up and took action and responsibility is commendable. You have >> my respect for that. >> >> I do think the conversation about University could have been more clear. >> Specific yes/no answers and reasons would have been nice. Instead of a >> single decision about whether it was included, I received 3 decisions >> with 4 rationales from several different people. Either of the >> following would have been perfectly fine outcomes: >> >> Me: Can has University, plz? >> Coordinator: Violates criterion 4 >> Me: But Pike >> Coordinator: Questionable, but process says be "generous" >> so, okay, it's in. >> or >> Coordinator: . Sorry, it's >> still out. >> >> However, reasons around trademark or the suitability of English words >> are not appropriate reasons to exclude a name. Nor is "the TC didn't >> like it". There is only one reason to exclude a name, and that is that >> it violates one of the 4 criteria. >> >> Of course it's fine to ask the TC, or anyone else for guidance. >> However, it's clear from the IRC log that many members of the TC did not >> appreciate what was being asked of them. It would be okay to ask them >> "Do you think this meets the criteria?" But instead, a long discussion >> about whether the names were *good choices* ensued. That's not one of >> the steps in the process. 
In fact, it's the exact thing that the >> process is supposed to avoid. No matter what the members of the TC >> thought about whether a name was a good idea, if it met the criteria it >> should be in. >> >>> During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, Ueishan >>> from the meet criteria list because they're not the most popular spelling >>> system in China. And we omitted Urumqi from the meet criteria list because >>> of the potential political issue. Those are before I was an official. And >>> we should consider them all during discuss about *University* here. I guess >>> we should define more about in which stage should the official propose the >>> final list of all names that meet criteria should all automatically be part >>> of the final list. >> >> None of those should have been removed. They, even more so than >> University, clearly meet the criteria, and were only removed due to >> personal preference. >> >> I want to be clear, there *is* a place for consideration of all of these >> things. That is step 3: >> >> The marketing community may identify any names of particular concern >> from a marketing standpoint and discuss such issues publicly on the >> Marketing mailing list. The marketing community may produce a list of >> problematic items (with citations to the mailing list discussion of >> the rationale) to the election official. This information will be >> communicated during the election, but the names will not be removed >> from the poll. >> >> That is where we would identify things like "this name uses an unusual >> romanization system" or "this name has political ramifications". We >> don't remove those names from the list, but we let the community know >> about the issues, so that when people vote, they have all the >> information. >> >> We trust our community to make good (or hilariously bad) decisions. >> >> That's what this all comes down to. 
The process as written is supposed >> to collect a lot of names, with a lot of information, and present them >> to our community and let us all decide together. That's what has been >> lost. >> >> -Jim >> > From sfinucan at redhat.com Tue Aug 13 21:20:12 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 13 Aug 2019 22:20:12 +0100 Subject: [nova] Shanghai Project Update - Volunteer(s) needed In-Reply-To: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> References: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> Message-ID: <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> On Tue, 2019-08-13 at 15:20 -0500, Eric Fried wrote: > Hello Nova. > > Traditionally summit project updates are done by the PTL(s) of the > foregoing and/or upcoming cycles. In this case, the former (that would > be me) will not be attending the summit, and we don't yet know who the > latter will be. > > So I am asking for a volunteer or two who is a) attending the summit > [1], and b) willing [2] to deliver a Nova project update presentation. I > (and others, I am sure) will be happy to help with slide content and > other prep. Unless someone else wants to do it, I think I can probably do it. Stephen > Please respond ASAP so we can reserve a slot. > > Thanks, > efried > > [1] it need not be definite at this point - obviously corporate, > political, travel, and personal contingencies may interfere > [2] blah blah, opportunity for exposure, yatta yatta, good experience, etc. 
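[Editorial aside: the sorting concern Ben raised above can be seen in a couple of lines of Python. This is a hypothetical illustration with made-up names, not a reference to any actual OpenStack tooling.]

```python
# Many release-tracking scripts assume that sorting release names
# alphabetically yields chronological order. Once the alphabet wraps
# back around to A, that assumption breaks:
releases = ["Train", "Ussuri", "Victoria", "Zed", "Alpha"]  # chronological order
print(sorted(releases))  # the post-wrap "Alpha" sorts first

# A year-suffixed scheme such as "U21", "V22", ..., "A27" has the
# same problem: "A27" still sorts before "U21".
print(sorted(["U21", "V22", "A27"]))
```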
> From sfinucan at redhat.com Tue Aug 13 21:31:42 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 13 Aug 2019 22:31:42 +0100 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> Message-ID: On Tue, 2019-08-13 at 17:14 +0100, Sean Mooney wrote: > On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote: > > Since we take particular pride in our community participation, the fact > > that we have not been able or willing to do this correctly reflects very > > poorly on us. I would rather that we not do this at all than do it > > badly, so I think this should be the last release with a name. I've > > proposed that change here: > > > > https://review.opendev.org/675788 > > not to take this out of context, but it is a rather long thread so I have snipped > the bit I wanted to comment on. > > I think not naming releases would be problematic on two fronts. > One: without a common community name, I think codenames or other convenient names > are going to crop up, as many have been referring to the U release as the unicorn > release just to avoid the confusion between "U" and "you" when speaking about the release > until we have an official name. If we had no official names, I think we would keep using > those placeholders, at least on IRC or in person (granted, we would not use them for code > or docs). > > That is a minor thing, but the more disruptive issue I see is that nova's U release > will be 21.0.0? and neutron's U release will be 16.0.0? 
Without a name to refer to the > set of compatible projects for a given version, we would only have the > letter, and from a marketing > perspective, and even from a development perspective, I think that will be problematic. > > We could just have the V release, but I think it loses something in clarity. +1. As Sean points out, and as has been pointed out elsewhere in the thread, we already have waaay too many version-related numbers floating around. If we were to opt for numbers instead of a U-based name for this release, that would mean for _nova alone_, I'd personally have to distinguish between OpenStack 22, nova 21.0 (I think) and OSP 17.0 (again, I think), and that's before I think about other projects and packages. Nooope. I haven't heard anyone objecting to the use of release names but rather the process used to choose those names. Change that process, by either loosening the constraints used in choosing it or by moving it from a community-driven decision to something the Foundation/TC just decides on, but please don't drop the alphabetic names entirely. Stephen From piotr.baranowski at osec.pl Tue Aug 13 22:21:03 2019 From: piotr.baranowski at osec.pl (Piotr Baranowski) Date: Wed, 14 Aug 2019 00:21:03 +0200 (CEST) Subject: OpenStack 14 CentOS and Nvidia driver for vgpu? Message-ID: <433243135.116623.1565734863200.JavaMail.zimbra@osec.pl> Hello list, I'm struggling to deploy Rocky with vGPU using nvidia drivers. Has anyone experienced issues loading the nvidia modules? I'm talking about the hypervisor part of the setup. There are two modules provided by nvidia. One loads correctly: the nvidia.ko one. The other, however, does not. 
The module is called nvidia-vgpu-vfio.ko. I'm trying to load it, and it seems that the 7.6 kernel is no longer compatible with it: modprobe nvidia-vgpu-vfio modprobe: ERROR: could not insert 'nvidia_vgpu_vfio': Invalid argument dmesg shows this: nvidia_vgpu_vfio: disagrees about version of symbol vfio_pin_pages nvidia_vgpu_vfio: Unknown symbol vfio_pin_pages (err -22) nvidia_vgpu_vfio: disagrees about version of symbol vfio_unpin_pages nvidia_vgpu_vfio: Unknown symbol vfio_unpin_pages (err -22) nvidia_vgpu_vfio: disagrees about version of symbol vfio_register_notifier nvidia_vgpu_vfio: Unknown symbol vfio_register_notifier (err -22) nvidia_vgpu_vfio: disagrees about version of symbol vfio_unregister_notifier nvidia_vgpu_vfio: Unknown symbol vfio_unregister_notifier (err -22) modinfo nvidia-vgpu-vfio filename: /lib/modules/3.10.0-957.27.2.el7.x86_64/weak-updates/nvidia-vgpu-vfio.ko version: 430.27 supported: external license: MIT rhelversion: 7.6 srcversion: 0A179A61A02AD500D05FB1A alias: pci:v000010DEd00000E00sv*sd*bc04sc80i00* alias: pci:v000010DEd*sv*sd*bc03sc02i00* alias: pci:v000010DEd*sv*sd*bc03sc00i00* depends: nvidia,mdev,vfio vermagic: 3.10.0-940.el7.x86_64 SMP mod_unload modversions My guess is that somewhere along the rhel/centos 7.6 lifecycle the vfio module changed and broke the compatibility. Nvidia provides those modules built against the BETA 7.6 release and assumes weak-modules will make them work. Somehow it does not. Anybody got any suggestions on how to handle this? I'm working on it with nvidia enterprise support, but maybe one of you got there first? best regards -- Piotr Baranowski -------------- next part -------------- An HTML attachment was scrubbed... 
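[Editorial aside: the "disagrees about version of symbol" messages quoted above are the classic signature of a module built for a different kernel than the one running, so the versioned symbol CRCs no longer match. A minimal sketch of how to spot it, with the version strings copied from the modinfo/dmesg output above; this is an illustration, not NVIDIA's supported diagnostic procedure.]

```python
# Compare the kernel a module was built for (the first token of its
# vermagic, per `modinfo -F vermagic <module>`) with the running kernel
# (`uname -r` / platform.release()). The hard-coded strings below are
# taken from the report above.
built = "3.10.0-940.el7.x86_64"         # vermagic from modinfo
running = "3.10.0-957.27.2.el7.x86_64"  # kernel named in the module path

if built == running:
    print("vermagic matches running kernel")
else:
    print(f"vermagic mismatch: built for {built}, running {running}")
```

A mismatch does not always prevent loading (weak-updates exists precisely to bridge compatible kernels), but it is the first thing to check when modprobe reports versioned-symbol disagreements like the ones above.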
URL: From zhangbailin at inspur.com Wed Aug 14 01:27:51 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Wed, 14 Aug 2019 01:27:51 +0000 Subject: reply: [lists.openstack.org][nova] The race for 2.76 Message-ID: <90d2716296164247bc053b09f6bbf318@inspur.com> > There are several compute API microversion changes that are conflicting and will be fighting for 2.76, but I think we're trying to prioritize this one [1] for the ironic > power sync external event handling since (1) Surya is going to be on vacation soon, (2) there is an ironic change that depends on it which has had review [2] and (3) > the nova change has had quite a bit of review already. > As such I think others waiting to rebase from 2.75 to 2.76 should probably hold off until [1] is approved which should happen today or tomorrow. > > [1] https://review.opendev.org/#/c/645611/ > [2] https://review.opendev.org/#/c/664842/ > > -- > > Thanks, > > Matt Agree with Matt. It is recommended to speed up the review of the patches on nova-runways [1]. A lot of patches have accumulated there, and some of them will be difficult to complete in one cycle. [1] https://etherpad.openstack.org/p/nova-runways-train From soulxu at gmail.com Wed Aug 14 01:36:04 2019 From: soulxu at gmail.com (Alex Xu) Date: Wed, 14 Aug 2019 09:36:04 +0800 Subject: [nova] Shanghai Project Update - Volunteer(s) needed In-Reply-To: <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> References: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> Message-ID: I'll apply for the second one. Stephen Finucane wrote on Wed, Aug 14, 2019 at 5:25 AM: > On Tue, 2019-08-13 at 15:20 -0500, Eric Fried wrote: > > Hello Nova. > > > > Traditionally summit project updates are done by the PTL(s) of the > > foregoing and/or upcoming cycles. In this case, the former (that would > > be me) will not be attending the summit, and we don't yet know who the > > latter will be. 
> > So I am asking for a volunteer or two who is a) attending the summit > [1], and b) willing [2] to deliver a Nova project update presentation. I > (and others, I am sure) will be happy to help with slide content and > other prep. > > Unless someone else wants to do it, I think I can probably do it. > > Stephen > > > Please respond ASAP so we can reserve a slot. > > > > Thanks, > > efried > > > > [1] it need not be definite at this point - obviously corporate, > > political, travel, and personal contingencies may interfere > > [2] blah blah, opportunity for exposure, yatta yatta, good experience, > etc. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Wed Aug 14 01:48:52 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Wed, 14 Aug 2019 01:48:52 +0000 Subject: Re: [lists.openstack.org代发]Re: [nova] Shanghai Project Update - Volunteer(s) needed Message-ID: <6da5507a24944535802774e96f1aeee8@inspur.com> > I'll apply for the second one. I can also provide some help if there is a need. Stephen Finucane > wrote on Wed, Aug 14, 2019 at 5:25 AM: On Tue, 2019-08-13 at 15:20 -0500, Eric Fried wrote: > Hello Nova. > > Traditionally summit project updates are done by the PTL(s) of the > foregoing and/or upcoming cycles. In this case, the former (that would > be me) will not be attending the summit, and we don't yet know who the > latter will be. > > So I am asking for a volunteer or two who is a) attending the summit > [1], and b) willing [2] to deliver a Nova project update presentation. I > (and others, I am sure) will be happy to help with slide content and > other prep. Unless someone else wants to do it, I think I can probably do it. Stephen > Please respond ASAP so we can reserve a slot. 
> > Thanks, > efried > > [1] it need not be definite at this point - obviously corporate, > political, travel, and personal contingencies may interfere > [2] blah blah, opportunity for exposure, yatta yatta, good experience, etc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Wed Aug 14 02:10:11 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 13 Aug 2019 22:10:11 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87imr13tye.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> <87imr13tye.fsf@meyer.lemoncheese.net> Message-ID: On 13/08/19 12:34 PM, James E. Blair wrote: > Sean Mooney writes: > >> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote: >>> Since we take particular pride in our community participation, the fact >>> that we have not been able or willing to do this correctly reflects very >>> poorly on us. I would rather that we not do this at all than do it >>> badly, so I think this should be the last release with a name. I've >>> proposed that change here: >>> >>> https://review.opendev.org/675788 >> >> not to take this out of context, but it is a rather long thread so I have snipped >> the bit I wanted to comment on. >> >> I think not naming releases would be problematic on two fronts. 
>> One: without a common community name, I think codenames or other convenient names >> are going to crop up, as many have been referring to the U release as the unicorn >> release just to avoid the confusion between "U" and "you" when speaking about the release >> until we have an official name. If we had no official names, I think we would keep using >> those placeholders, at least on IRC or in person (granted, we would not use them for code >> or docs). >> >> That is a minor thing, but the more disruptive issue I see is that nova's U release >> will be 21.0.0? and neutron's U release will be 16.0.0? Without a name to refer to the >> set of compatible projects for a given version, we would only have the >> letter, and from a marketing >> perspective, and even from a development perspective, I think that will be problematic. >> >> We could just have the V release, but I think it loses something in clarity. > > That's a good point. > > Maybe we could just number them? V would be "OpenStack Release 22". > > Or we could refer to them by date, as we used to, but without attempting > to use dates as actual version numbers. I propose that once we wrap back to A, the next series should be named exclusively after words that generically describe a geographic feature (Park/Quay/Road/Street/Train/University &c.) since those should be less fraught and seem to be everyone's favourites anyway :P From li.canwei2 at zte.com.cn Wed Aug 14 02:41:18 2019 From: li.canwei2 at zte.com.cn (li.canwei2 at zte.com.cn) Date: Wed, 14 Aug 2019 10:41:18 +0800 (CST) Subject: [Watcher] team meeting at 08:00 UTC today Message-ID: <201908141041182213224@zte.com.cn> Hi team, the Watcher team will have a meeting at 08:00 UTC today in the #openstack-meeting-alt channel. The agenda is available at https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda; feel free to add any additional items. Thanks! Canwei Li -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zbitter at redhat.com Wed Aug 14 04:12:22 2019 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 14 Aug 2019 00:12:22 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <875zn27z2o.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> Message-ID: On 12/08/19 7:18 PM, James E. Blair wrote: > As I understand it, the sequence of events that led us here was: > > A) Doug (as interim unofficial election official) removed the name for > unspecified reasons. [1] > > B) I objected to the removal. This is in accordance with step 5 of the > process: > > Once the list is finalized and publicized, a one-week period shall > elapse before the start of the election so that any names removed > from consideration because they did not meet the Release Name > Criteria may be discussed. Names erroneously removed may be > re-added during this period, and the Technical Committee may vote > to add exceptional names (which do not meet the standard criteria). > > C) Rico (the election official at the time) agreed with my reasoning > that it was erroneously removed and re-added the name. [2] > > D) The list was re-issued and the name was once again missing. Four > reasons were cited, three of which have no place being considered > prior to voting, and the fourth is a claim that it does not meet the > criteria. 
I'd just like to point out that Rico was placed in a very difficult position here - after he generously volunteered to step up as the co-ordinator at a time when the deadline to begin the vote had already passed, doing so from a timezone where any discussion with you, the rest of the TC, or indeed most people in the community effectively had a 24 hour round trip time. So when you pointed out that Doug's reason for dropping it from the list was not in line with the guidelines, he agreed. It was only after that that I raised the issue of it not appearing to meet the criteria. There wasn't a loud chorus of TC members (or people in general) saying that it did, so he essentially agreed that it didn't and we treated it as a proposed exception. Perhaps I gave him bad advice, but he's entitled to take advice from anyone and it's easy to see why the opinions of his fellow TC members might be influential. I must confess that I neglected to re-read the portion of the guidelines that says that in the case of questionable proposals the co-ordinator should err on the side of inclusion. Perhaps if you had been alerted to the discussion in time to raise this point then the outcome might have been different. Nevertheless, given that each step in the consultation process consumed another 12 hours following a deadline that had already passed before the process began, I think Rico handled it as well as anyone could have. My understanding (which may be wrong because it all seems to have gone down within a day that I happened to be on vacation) of how we got into that state to begin with is that after Tony did a ton of work figuring out how to get a local name beginning with U, collected a bunch of names + feedback, and was basically ready to start the poll, the Foundation implied that they would veto all of the names on the grounds that their China expert didn't feel that using the GR transliteration would be appropriate because of reasons. 
Those reasons conflicted with the interpretation of the China expert that Tony consulted and with all available information published in English, and honestly I wish somebody had pushed back on them, but at a certain point there's probably nothing else you can do but expand the geographic region, delay the poll, and start again. Which the TC did. And of course this had the knock-on effect of requiring someone to decide whether certain incandescently-hot potato options should be omitted from the poll. They were of course, and I know you think that's the wrong call but I disagree. IIRC the current process was put in place after the Lemming debacle, on the principle that in future the community should be allowed to have our fun and vote for Lemming (or not), and if the Foundation marketing want to veto that after the fact then fine, but don't let them take away our fun before the fact. I agree with that so far as it goes. (Full disclosure: I would have voted for Lemming.) However, it's just not the case that having a culturally-insensitive choice win the poll, or just do well in the poll, or even appear in the poll, cannot damage the community so long as marketing later rejects it. Nor does a public airing of dirty laundry seem conducive to _reducing_ the problem. This seems to be an issue that was not contemplated when the process was set down. (As if to prove the point, this very thing happened the very first time that the new process was used!) And quite frankly, it's not the responsibility of random people on the internet (the poll is open to anyone) to research the cultural sensitivity of all of the options. This is exactly the kind of reason we have representative governance. I agree that it's a problem that the TC has a written policy of abdicating this responsibility, and we have (mercifully) not followed it. We should change the policy if we don't believe in it. You wrote elsewhere in this thread that all of the delays and handoffs were due to nobody caring. 
I think this is completely wrong. The delays were due to people caring *a lot* under difficult circumstances (beginning with the fact that the official transliteration of local place names does not contain any syllables starting with U). Taking the Summit to Shanghai is a massive exercise and a huge opportunity to listen to the developer community there and find ways to engage with them better in the future, and nobody wants to waste that opportunity by alienating people unnecessarily. cheers, Zane. From sundar.nadathur at intel.com Wed Aug 14 04:29:58 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 14 Aug 2019 04:29:58 +0000 Subject: [cyborg] Poll for new weekly IRC meeting time In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5275E2395@fmsmsx122.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED5275E2395@fmsmsx122.amr.corp.intel.com> Message-ID: <1CC272501B5BC543A05DB90AA509DED5276005CD@fmsmsx122.amr.corp.intel.com> Based on the poll, we have chosen the new time for the Cyborg IRC weekly meeting: Thursday at UTC 0300 (China @11 am Thu; US West Coast @8 pm Wed) This will take effect from next week. The Cyborg meeting page [1] has been updated. [1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting Regards, Sundar From: Nadathur, Sundar Sent: Tuesday, August 6, 2019 11:07 PM To: openstack-discuss at lists.openstack.org Subject: [cyborg] Poll for new weekly IRC meeting time The current Cyborg weekly IRC meeting time [1] is a conflict for many. We are looking for a better time that works for more people, with the understanding that no time is perfect for all. Please fill out this poll: https://doodle.com/poll/6t279f9y6msztz7x Be sure to indicate which times do not work for you. You can propose a new timeslot beyond what I included in the poll. [1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Weekly_IRC_Cyborg_team_meeting Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sundar.nadathur at intel.com Wed Aug 14 05:37:10 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 14 Aug 2019 05:37:10 +0000 Subject: [os-acc] os-acc will be retired as a project Message-ID: <1CC272501B5BC543A05DB90AA509DED5276006C9@fmsmsx122.amr.corp.intel.com> A project called os-acc [1] was created in Stein cycle based on an expectation that it will be used for Cyborg - Nova integration. It is not relevant anymore and we have no plans to support it in Train. We are discontinuing it with immediate effect. It was never used and had no developer base to speak of. So, we do not see any issues or impact for anybody. [1] https://opendev.org/openstack/os-acc/ Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Aug 14 06:38:18 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 14 Aug 2019 15:38:18 +0900 Subject: [Telemetry][Shanghai Summit] Looking for Project Update session volunteers Message-ID: Hi team, I cannot attend the next summit in Shanghai so I'm looking for one-two volunteers who want to represent the Telemetry team at the summit. Please reply to this email by this Sunday and we will work through the presentation together. Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Wed Aug 14 07:53:12 2019 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Wed, 14 Aug 2019 07:53:12 +0000 Subject: [nova] Shanghai Project Update - Volunteer(s) needed In-Reply-To: <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> References: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> Message-ID: <1565769180.30413.0@smtp.office365.com> On Tue, Aug 13, 2019 at 11:20 PM, Stephen Finucane wrote: > On Tue, 2019-08-13 at 15:20 -0500, Eric Fried wrote: >> Hello Nova. 
>> >> Traditionally summit project updates are done by the PTL(s) of the >> foregoing and/or upcoming cycles. In this case, the former (that >> would >> be me) will not be attending the summit, and we don't yet know who >> the >> latter will be. >> >> So I am asking for a volunteer or two who is a) attending the summit >> [1], and b) willing [2] to deliver a Nova project update >> presentation. I >> (and others, I am sure) will be happy to help with slide content and >> other prep. > > Unless someone else wants to do it, I think I can probably do it. You were faster than me in noticing the IRC ping: 22:31 < mriedem> efried: i volunteer alex_xu and gibi for the project update in shanghai So you won! ;) gibi > > Stephen > >> Please respond ASAP so we can reserve a slot. >> >> Thanks, >> efried >> >> [1] it need not be definite at this point - obviously corporate, >> political, travel, and personal contingencies may interfere >> [2] blah blah, opportunity for exposure, yatta yatta, good >> experience, etc. >> > > From ralf.teckelmann at bertelsmann.de Wed Aug 14 08:41:28 2019 From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP) Date: Wed, 14 Aug 2019 08:41:28 +0000 Subject: [nova][glance][cinder] How to do consistent snapshots with qemu-guest-agent Message-ID: Hello, Working my way through documentation and articles, I am totally lost on the matter. All I want to know is whether: - issuing "openstack snapshot create ...." - clicking "Create Snapshot" in Horizon for an instance will secure a consistent snapshot (of all volumes in question). By "consistent", I mean that all the data in memory is written to disk before the snapshot starts. I hope someone can clear up whether the setup described in the following is sufficient to achieve this goal, or whether I have to do something in addition. If you have any questions I am eager to answer as fast as possible. Setup: We have a Stein-based OpenStack deployment with cinder backed by ceph.
Instances are created with cinder volumes. Boot volumes are based on an image having the properties: - hw_qemu_guest_agent='yes' - os_require_quiesce='yes' The image is ubuntu 16.04 or 18.04 with the qemu-guest-agent package installed and the service running (no additional configuration besides distro-default): qemu-guest-agent.service - LSB: QEMU Guest Agent startup script Loaded: loaded (/etc/init.d/qemu-guest-agent; bad; vendor preset: enabled) Active: active (running) since Wed 2019-08-14 07:42:21 UTC; 9min ago Docs: man:systemd-sysv-generator(8) CGroup: /system.slice/qemu-guest-agent.service └─2300 /usr/sbin/qemu-ga --daemonize -m virtio-serial -p /dev/virtio-ports/org.qemu.guest_agent.0 Aug 14 07:42:21 ulthwe systemd[1]: Starting LSB: QEMU Guest Agent startup script... Aug 14 07:42:21 ulthwe systemd[1]: Started LSB: QEMU Guest Agent startup script. I can see the socket on the compute node and send pings successfully: ~# ls /var/lib/libvirt/qemu/*.sock /var/lib/libvirt/qemu/org.qemu.guest_agent.0.instance-0000248e.sock root at pcevh2404:~# virsh qemu-agent-command instance-0000248e '{"execute":"guest-ping"}' {"return":{}} I can also send freeze and thaw successfully: ~# virsh qemu-agent-command instance-0000248e '{"execute":"guest-fsfreeze-freeze"}' {"return":1} ~# virsh qemu-agent-command instance-0000248e '{"execute":"guest-fsfreeze-thaw"}' {"return":1} Sending a simple write (echo "bla" > blub.file) in the "frozen" state will be blocked until "thaw" as expected. Best regards Ralf T. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Wed Aug 14 11:16:22 2019 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Wed, 14 Aug 2019 18:16:22 +0700 Subject: neutron-server won't start in OpenStack OVN Message-ID: Hi everyone, I try to install OpenStack with OVN enabled.
But when trying to start neutron-server, the service is always inactive (exited) I try to check neutron-server logs and gets this message: 2019-08-14 05:40:32.649 4223 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver 2019-08-14 05:40:32.651 4224 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for MaintenanceWorker with retry 2019-08-14 05:40:32.654 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connecting... 2019-08-14 05:40:32.655 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connected 2019-08-14 05:40:32.657 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 2019-08-14 05:40:32.658 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.660 4225 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for AllServicesNeutronWorker with retry 2019-08-14 05:40:32.670 4220 INFO neutron.wsgi [-] (4220) wsgi starting up on http://0.0.0.0:9696 2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.693 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 
2019-08-14 05:40:32.694 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.697 4224 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver 2019-08-14 05:40:32.844 4225 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.844 4224 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.844 4222 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.845 4223 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.845 4221 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting neutron-server full logs: https://paste.ubuntu.com/p/GHfS38KFCr/ ovsdb-server port is active: https://paste.ubuntu.com/p/MhgNs8SGdX/ neutron.conf and ml2_conf.ini: https://paste.ubuntu.com/p/4J7hTVf5qz/ Does anyone know why this error happens? Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Wed Aug 14 12:29:37 2019 From: ykarel at redhat.com (Yatin Karel) Date: Wed, 14 Aug 2019 17:59:37 +0530 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: On Wed, Mar 27, 2019 at 11:52 PM Alex Schultz wrote: > > On Tue, Sep 18, 2018 at 1:30 PM Alex Schultz wrote: > > > > On Tue, Sep 18, 2018 at 1:27 PM, Matt Riedemann wrote: > > > The release page says Ocata is planned to go into extended maintenance mode > > > on Aug 27 [1]. There really isn't much to this except it means we don't do > > > releases for Ocata anymore [2]. There is a caveat that project teams that do > > > not wish to maintain stable/ocata after this point can immediately end of > > > life the branch for their project [3]. We can still run CI using tags, e.g. 
> > > if keystone goes ocata-eol, devstack on stable/ocata can still continue to > > > install from stable/ocata for nova and the ocata-eol tag for keystone. > > > Having said that, if there is no undue burden on the project team keeping > > > the lights on for stable/ocata, I would recommend not tagging the > > > stable/ocata branch end of life at this point. > > > > > > So, questions that need answering are: > > > > > > 1. Should we cut a final release for projects with stable/ocata branches > > > before going into extended maintenance mode? I tend to think "yes" to flush > > > the queue of backports. In fact, [3] doesn't mention it, but the resolution > > > said we'd tag the branch [4] to indicate it has entered the EM phase. > > > > > > 2. Are there any projects that would want to skip EM and go directly to EOL > > > (yes this feels like a Monopoly question)? > > > > > > > I believe TripleO would like to EOL instead of EM for Ocata as > > indicated by the thead > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134671.html > > > > Bringing this backup to see what we need to do to get the stable/ocata > branches ended for the TripleO projects. I'm bringing this up > because we have https://review.openstack.org/#/c/647009/ which is for > the upcoming rename but CI is broken and we have no interest in > continue to keep the stable/ocata branches alive (or fix ci for them). > So we had a discussion yesterday in TripleO meeting regarding EOL of Ocata and Pike Branches for TripleO projects, and there was no clarity regarding the process of making the branches EOL(is just pushing a change to openstack/releases(deliverables/ocata/.yaml) creating ocata-eol tag enough or something else is also needed), can someone from Release team point us in the right direction. 
> Thanks, > -Alex > > Thanks, > > -Alex > > > [1] https://releases.openstack.org/ > > > [2] > > > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > > > [3] > > > https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance > > > [4] > > > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > > > -- > > > Thanks, > > > Matt > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Thanks and Regards Yatin Karel From merlin.blom at bertelsmann.de Wed Aug 14 12:43:28 2019 From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI) Date: Wed, 14 Aug 2019 12:43:28 +0000 Subject: [nova][stacktach] Message-ID: Hey, is anybody using stacktach with OpenStack Nova Stein? I can't stream messages to it via Nova: -> stacktach_worker_config.json {"deployments": [ { "name": "xxx.dev", "durable_queue": false, "rabbit_host": "10.x.x.x", "rabbit_port": 5672, "rabbit_userid": "nova", "rabbit_password": "xxx", "rabbit_virtual_host": "/nova", "exit_on_exception": true, "topics": { "nova": [ { "queue": "notification.info", "routing_key": "notification.info" }, { "queue": "monitor.error", "routing_key": "monitor.error" } . How do you configure it? Are there alternatives for reading RabbitMQ messages for debug/billing purposes? Greetings, Merlin Blom -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From pierre at stackhpc.com Wed Aug 14 12:48:30 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 14 Aug 2019 14:48:30 +0200 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> Message-ID: Hello, I am reviving this thread as I have never received any email notifications from starred projects in Storyboard, despite enabling them multiple times. Although the change appears to be saved correctly, if I log out and log back in, my preferences are reset to defaults (not only for email settings, but also for page size). I also noticed that increasing "Page size" doesn't have any effect within the same session, I always see 10 results per page. Is there a known issue with persisting preferences in Storyboard? Thanks, Pierre On Sun, 19 May 2019 at 06:17, Akihiro Motoki wrote: > > Thanks for the information. > > I re-enabled email notification and then started to receive notifications. I am not sure why this solved the problem but it now works for me. > > > 2019年5月15日(水) 22:43 : >> >> On 2019-05-15 13:58, Akihiro Motoki wrote: >> > Hi, >> > >> > Is there a way to get email notification on stories/tasks of >> > subscribed projects in storyboard? >> >> Yes, go to your preferences >> (https://storyboard.openstack.org/#!/profile/preferences) >> by clicking on your name in the top right, then Preferences. >> >> Scroll to the bottom and check the "Enable notification emails" >> checkbox, then >> click "Save". There's a UI bug where sometimes the displayed preferences >> will >> look like the save button didn't work, but rest assured that it did >> unless you >> get an error message. 
>> >> Once you've done this the email associated with your OpenID will receive >> notification emails for things you're subscribed to (which includes >> changes on >> stories/tasks related to projects you're subscribed to). >> >> Thanks, >> >> Adam (SotK) >> From rfolco at redhat.com Wed Aug 14 13:06:07 2019 From: rfolco at redhat.com (Rafael Folco) Date: Wed, 14 Aug 2019 10:06:07 -0300 Subject: [tripleo] TripleO CI Summary: Sprint 34 Message-ID: Greetings, The TripleO CI team has just completed Sprint 34 / Unified Sprint 13 (Jul 18 thru Aug 07). The following is a summary of completed work during this sprint cycle: - Created RHEL8 jobs to build a periodic pipeline in the RDO Software Factory and provide early feedback for CentOS8 coverage. - Fixed RHEL8 container and image build jobs in the periodic pipeline. - Bootstrapped RHEL8 standalone job and made progress on RHEL8 OVB featureset 001 job. - Completed scenario007 and featureset039 job updates upstream. - Promotion status: red on all branches at half of the sprint due to rhel8 changes and infra related issues (transient failures). - Disabled Fedora jobs from periodic pipeline. - Merged code for automatic creation of featureset matrix on TripleO quickstart documentation [3]. The planned work for the next sprint [1] are: - Create scenario1-4 jobs for RHEL8 in the periodic pipeline. - Design and test multi-arch container support. - Resume the design work for a staging environment to test changes in the promoter server for the multi-arch builds. - Continue OVB featureset001 bootstrapping on RHEL8. - Disable Fedora jobs upstream. The Ruck and Rover for this sprint are Chandan Kumar (chkumar) and Ronelle Landy (rlandy). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes are being tracked in etherpad [2]. 
Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-14 [2] https://etherpad.openstack.org/p/ruckroversprint14 [3] https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Aug 14 14:47:34 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 14 Aug 2019 10:47:34 -0400 Subject: [ansible][openstack-ansible][tripleo][kolla-ansible] Ansible SIG Message-ID: Hi everyone, One of the things that came up recently was collaborating more with other deployment tools, and this has brought up ideas like working together on our Ansible roles as more and more deployment tools start to use them. However, as we start to put this into practice, we realize that project team ownership stops making much sense (due to the fact that a project can only be under one team in governance). It gets confusing when a role is built by OSA and consumed by TripleO, so the PTL is from another team, and it all starts getting weird and odd. So we discussed the creation of an Ansible SIG for those who are interested in jointly maintaining code across our community that would be consumed together. We already have some deliverables that can live underneath it, which is pretty awesome too. I'm emailing to ask interested parties to speak up, and to mention that we're more than happy to have other co-chairs who are interested. I've submitted the initial patch here: https://review.opendev.org/676428 Thank you, Mohammed From duc.openstack at gmail.com Wed Aug 14 16:34:54 2019 From: duc.openstack at gmail.com (Duc Truong) Date: Wed, 14 Aug 2019 09:34:54 -0700 Subject: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization?
In-Reply-To: References: Message-ID: I don't know how to solve this problem in aodh, but it is possible to use Prometheus to aggregate CPU utilization and trigger scaling. I wrote up how to do this with Senlin and Prometheus here: https://medium.com/@dkt26111/auto-scaling-openstack-instances-with-senlin-and-prometheus-46100a9a14e1?source=friends_link&sk=5c0a2aa9e541e8c350963e7ec72bcbb5 You can probably do something similar with Heat and Prometheus. On Sun, Aug 4, 2019 at 12:52 AM Bernd Bausch wrote: > > Prior to Stein, Ceilometer issued a metric named cpu_util, which I could use to trigger alarms and autoscaling when CPU utilization was too high. > > cpu_util doesn't exist anymore. Instead, we are asked to use Gnocchi's rate feature. However, when using rates, alarms on a group of resources require more parameters than just one metric: Both an aggregation and a reaggregation method are needed. > > For example, a group of instances that implement "myapp": > > gnocchi measures aggregation -m cpu --reaggregation mean --aggregation rate:mean --query server_group=myapp --resource-type instance > > Actually, this command uses a deprecated API (but from what I can see, Aodh still uses it). The new way is like this: > > gnocchi aggregates --resource-type instance '(aggregate rate:mean (metric cpu mean))' server_group=myapp > > If rate:mean is in the archive policy, it also works the other way around: > > gnocchi aggregates --resource-type instance '(aggregate mean (metric cpu rate:mean))' server_group=myapp > > Without reaggregation, I get quite unexpected numbers, including negative CPU rates. If you want to understand why, see this discussion with one of the Gnocchi maintainers [1]. > > My problem: Aodh allows me to set an aggregation method, but not a reaggregation method. How can I create alarms based on rates? The problem extends to Heat and autoscaling. > > Thanks much, > > Bernd. 
> > [1] https://github.com/gnocchixyz/gnocchi/issues/1044 From openstack at nemebean.com Wed Aug 14 17:07:01 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Aug 2019 12:07:01 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> Message-ID: <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> I have a PoC patch up in devstack[0] to start using the openstack-server client. It passed the basic devstack test and looking through the logs you can see that openstack calls are now completing in fractions of a second as opposed to 2.5 to 3, so I think it's working as intended. That said, it needs quite a bit of refinement. For example, I think we should disable this on any OSC patches. I also suspect it will fall over for any projects that use an OSC plugin since the server is started before any plugins are installed. This could probably be worked around by restarting the service after a project is installed, but it's something that needs to be dealt with. Before I start taking a serious look at those things, do we want to pursue this? It does add some potential complexity to debugging if a client call fails or if the server crashes. I'm not sure I can quantify the risk there though since it's always Just Worked(tm) for me. 
-Ben 0: https://review.opendev.org/676016 From cboylan at sapwetik.org Wed Aug 14 17:20:37 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 14 Aug 2019 10:20:37 -0700 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> Message-ID: On Wed, Aug 14, 2019, at 10:07 AM, Ben Nemec wrote: > I have a PoC patch up in devstack[0] to start using the openstack-server > client. It passed the basic devstack test and looking through the logs > you can see that openstack calls are now completing in fractions of a > second as opposed to 2.5 to 3, so I think it's working as intended. > > That said, it needs quite a bit of refinement. For example, I think we > should disable this on any OSC patches. I also suspect it will fall over > for any projects that use an OSC plugin since the server is started > before any plugins are installed. This could probably be worked around > by restarting the service after a project is installed, but it's > something that needs to be dealt with. > > Before I start taking a serious look at those things, do we want to > pursue this? It does add some potential complexity to debugging if a > client call fails or if the server crashes. I'm not sure I can quantify > the risk there though since it's always Just Worked(tm) for me. Considering that our number one identified e-r bug is job timeouts [1] I think anything to reduce job time by measurable amounts is worthwhile. 
Additionally if we save 5 minutes per devstack run and then run devstack 10k times a day (not an up to date number but has been in that range in the past, someone can double check this with grafana or logstash or zuul dashboard) that is a massive savings when looked at on the whole. To me that makes it worthwhile. > > -Ben > > 0: https://review.opendev.org/676016 > [1] http://status.openstack.org/elastic-recheck/index.html#1686542 From sean.mcginnis at gmx.com Wed Aug 14 19:24:40 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 14 Aug 2019 14:24:40 -0500 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: <20190814192440.GA3048@sm-workstation> > > > > Bringing this backup to see what we need to do to get the stable/ocata > > branches ended for the TripleO projects. I'm bringing this up > > because we have https://review.openstack.org/#/c/647009/ which is for > > the upcoming rename but CI is broken and we have no interest in > > continue to keep the stable/ocata branches alive (or fix ci for them). > > > So we had a discussion yesterday in TripleO meeting regarding EOL of > Ocata and Pike Branches for TripleO projects, and there was no clarity > regarding the process of making the branches EOL(is just pushing a > change to openstack/releases(deliverables/ocata/.yaml) > creating ocata-eol tag enough or something else is also needed), can > someone from Release team point us in the right direction. > > > Thanks, > > -Alex > > It would appear we have additional information we should add to somewhere like: https://docs.openstack.org/project-team-guide/stable-branches.html or https://releases.openstack.org/#references I believe it really is just a matter of requesting the new tag in the openstack/releases repo. 
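To make that concrete (and this is only a sketch from memory; repo name and hash below are placeholders, and the exact schema should be checked against the validation in openstack/releases before submitting), the request is essentially an `ocata-eol` entry added to the relevant deliverable file:

```yaml
# deliverables/ocata/tripleo-heat-templates.yaml (illustrative only)
releases:
  - version: ocata-eol
    projects:
      - repo: openstack/tripleo-heat-templates
        hash: 0000000000000000000000000000000000000000  # placeholder: last SHA on stable/ocata
```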
There is a good example of this when Tony did it for TripleO's stable/newton branch: https://review.opendev.org/#/c/583856/ I think I recall there were some additional steps Tony took at the time, but I think everything is now covered by the automated process. Tony, please correct me if I am wrong. Not sure if it applies, but you may want to see if there are any Zuul jobs that need to be cleaned up or anything of that sort. We do say branches will be unmaintained in the Extended Maintenance phase for six months before going End of Life. Looking at Ocata, that happened April 5 of this year. Six months would put it at the beginning of October. But I think if the team knows they will not be accepting any more patches to these branches, then it is better to get them clearly marked as EOL so proper expectations are set. Sean From kendall at openstack.org Wed Aug 14 19:39:09 2019 From: kendall at openstack.org (Kendall Waters) Date: Wed, 14 Aug 2019 12:39:09 -0700 Subject: August 14 Early Bird Registration Deadline - Open Infrastructure Summit Shanghai Message-ID: <0F6C0A39-7D80-46E5-B4F7-A2D3B31F0709@openstack.org> Hi everyone, Friendly reminder that today, August 14, at 11:59pm PT (August 15 at 2:59pm China Standard Time) is the deadline to purchase passes for the Open Infrastructure Summit at the early bird price. Register now before the prices increase! There are 2 ways to register - in USD or in RMB (with e-fapiao). In case you missed it, the agenda went live last week and features sessions covering CI/CD, Edge Computing, 5G, hybrid cloud, and more. Other Summit News If you require a visa to travel to China, please apply here Hiring talent to build your open infrastructure strategy? Have a new product to share? Join the Summit as a sponsor If you have any questions, please email summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Wed Aug 14 19:45:26 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 14 Aug 2019 14:45:26 -0500 Subject: [release] Release countdown for week R-8, August 19-23 Message-ID: <20190814194526.GA6075@sm-workstation> Your long-awaited countdown email... Development Focus ----------------- It's probably a good time for teams to take stock of their library and client work that still needs to be completed. The non-client library freeze is coming up, followed closely by the client lib freeze. Please plan accordingly to avoid any last minute rushes to get key functionality in. General Information ------------------- Looking ahead to Train-3, please be aware of the feature freeze dates. Those vary depending on deliverable type: * General libraries (except client libraries) need to have their last feature release before the Non-client library freeze (September 05). Their stable branches are cut early. * Client libraries (think python-*client libraries) need to have their last feature release before the Client library freeze (September 12). * Deliverables following a cycle-with-rc model (that would be most services) observe a Feature freeze on that same date, September 12. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. * After feature freeze, cycle-with-rc deliverables need to produce a first release candidate (and a stable branch) before the RC1 deadline (September 26). * Deliverables following the cycle-with-intermediary model can release as necessary, but in all cases before the Final RC deadline (October 10). Finally, now is a good time to list contributors to your team who do not have a code contribution, and therefore won't automatically be considered an Active Technical Contributor and allowed to vote in the election. This is done by adding extra-atcs to: https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml before the Extra-ATC freeze on August 29. 
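For anyone who hasn't done this before, an extra-ATC entry in that file looks roughly like the following (illustrative fragment only; the project key, names and exact field schema should be checked against existing entries in the governance repo before submitting):

```yaml
# Illustrative fragment for reference/projects.yaml; verify the schema
# against existing extra-atcs entries in the repo.
nova:
  extra-atcs:
    - name: Jane Doe
      email: jane@example.com
      expires-in: February 2020
```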
Upcoming Deadlines & Dates -------------------------- Extra-ATC freeze: August 29 (R-7 week) Non-client library freeze: September 05 (R-6 week) Client library freeze: September 12 (R-5 week) Train-3 milestone: September 12 (R-5 week) -- Sean McGinnis (smcginnis) From openstack at fried.cc Wed Aug 14 19:52:52 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 14 Aug 2019 14:52:52 -0500 Subject: [nova] Shanghai Project Update - Volunteer(s) needed In-Reply-To: <1565769180.30413.0@smtp.office365.com> References: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> <1565769180.30413.0@smtp.office365.com> Message-ID: <04a77fa3-1c3f-efe2-e563-81bfb4554297@fried.cc> Thanks for the quick responses. I've requested a slot. efried . From aaronzhu1121 at gmail.com Thu Aug 15 00:42:43 2019 From: aaronzhu1121 at gmail.com (Rong Zhu) Date: Thu, 15 Aug 2019 08:42:43 +0800 Subject: [Telemetry][Shanghai Summit] Looking for Project Update session volunteers In-Reply-To: References: Message-ID: Hi Trinh, I will probably attend the summit; we can discuss more later. Trinh Nguyen wrote on Wednesday, 14 August 2019 at 14:41: > Hi team, > > I cannot attend the next summit in Shanghai so I'm looking for one-two > volunteers who want to represent the Telemetry team at the summit. Please > reply to this email by this Sunday and we will work through the > presentation together. > > Bests, > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- Thanks, Rong Zhu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iwienand at redhat.com Thu Aug 15 01:49:57 2019 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 15 Aug 2019 11:49:57 +1000 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> Message-ID: <20190815014957.GB5923@fedora19.localdomain> On Wed, Aug 14, 2019 at 12:07:01PM -0500, Ben Nemec wrote: > I have a PoC patch up in devstack[0] to start using the openstack-server > client. It passed the basic devstack test and looking through the logs you > can see that openstack calls are now completing in fractions of a second as > opposed to 2.5 to 3, so I think it's working as intended. I see this as having a couple of advantages: * no bespoke API interfacing code to maintain * the wrapper is custom but pretty small * plugins can benefit by using the same wrapper * we can turn the wrapper off and fall back to the same calls directly with the client (also good for local interaction) * in a similar theme, it's still pretty close to "what I'd type on the command line to do this", which is a bit of a devstack theme. So FWIW I'm positive on the direction, thanks! 
-i (some very experienced people have said "we know it's slow" and I guess we should take advice on whether this is a temporary work-around, or an actual solution) From rico.lin.guanyu at gmail.com Thu Aug 15 02:49:54 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 15 Aug 2019 10:49:54 +0800 Subject: [all][tc]Naming the U release of OpenStack -- Poll open In-Reply-To: References: Message-ID: bump and *https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_19e5119b14f86294&akey=0cde542cb3de1b12 * :) On Tue, Aug 13, 2019 at 12:56 PM Rico Lin wrote: > Hi, all OpenStackers, > > It's time to vote for the naming of the U release!! > U 版本正式命名票选开始!! (The U release naming poll is officially open!!) > > First, big thanks to all the people who took their own time to propose names > on [2] or helped to push/improve the naming process. Thank you. > > We'll use a public polling option over per-user private URLs > for voting. This means everybody should proceed to use the following URL > to > cast their vote: > > *https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_19e5119b14f86294&akey=0cde542cb3de1b12 > * > > We've selected a public poll to ensure that the whole community, not just > Gerrit > change owners, gets a vote. Also, the size of our community has grown such > that we > can overwhelm CIVS if using private URLs. A public poll can mean that users > behind NAT, proxy servers or firewalls may receive a message saying > that your vote has already been lodged; if this happens, please try > another IP. > Because this is a public poll, results will currently be viewable only by > me > until the poll closes. Once closed, I'll post the URL making the results > viewable to everybody. This was done to avoid everybody seeing the results > while > the public poll is running. > > The poll will officially end on 2019-08-20 23:59:00+00:00 (UTC time)[1], > and results will be > posted shortly after. 
> > [1] https://governance.openstack.org/tc/reference/release-naming.html > [2] https://wiki.openstack.org/wiki/Release_Naming/U_Proposals > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From chx769467092 at 163.com Thu Aug 15 06:09:25 2019 From: chx769467092 at 163.com (=?GBK?B?tN6648/j?=) Date: Thu, 15 Aug 2019 14:09:25 +0800 (CST) Subject: [question][placement][rest api][403] Message-ID: <4cd9f6c3.7db9.16c93e52b74.Coremail.chx769467092@163.com> This is my problem with placement rest api.(Token and endpoint were OK) {"errors": [{"status": 403, "title": "Forbidden", "detail": "Access was denied to this resource.\n\n Policy does not allow placement:resource_providers:list to be performed. ", "request_id": "req-5b409f22-7741-4948-be6f-ea28c2896a3f" }]} Regards, Cuihx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 20190814153351.png Type: image/png Size: 16544 bytes Desc: not available URL: From gregory.orange at pawsey.org.au Thu Aug 15 07:35:46 2019 From: gregory.orange at pawsey.org.au (Gregory Orange) Date: Thu, 15 Aug 2019 15:35:46 +0800 Subject: creating instances, haproxy eats CPU, glance eats RAM In-Reply-To: References: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au> Message-ID: <97365b12-8e36-bbd7-8a0f-badb818ac706@pawsey.org.au> Hello Ruslanas and thank you for the response. I didn't see it until now! I have given some responses inline... 
On 1/8/19 3:57 pm, Ruslanas Gžibovskis wrote: > when in newton release were introduced role separation, we divided memory hungry processes into 4 different VM's on 3 physical boxes: > 1) Networker: all Neutron agent processes (network throughput) > 2) Systemd: all services started by systemd (Neutron) > 3) pcs: all services controlled by pcs (Galera + RabbitMQ) > 4) horizon We have separated each control plane service (Glance, Neutron, Cinder, etc) onto its own VM. We are considering containers instead of VMs in future. > Gregory > do you have local storage for swift and cinder background? Our Cinder and Glance use Ceph as backend. No Swift installed. > also double check where _base image is located? is it in /var/lib/nova/instances/_base/* ? and flavor disks stored in /var/lib/nova/instances ? (can check on compute by: virsh domiflist instance-00000## ) domiflist shows the VM's interface - how does that help? Greg. From gregory.orange at pawsey.org.au Thu Aug 15 07:55:16 2019 From: gregory.orange at pawsey.org.au (Gregory Orange) Date: Thu, 15 Aug 2019 15:55:16 +0800 Subject: [glance] worker, thread, taskflow interplay Message-ID: <20a573e6-82d8-a22f-000e-ed19508a9d54@pawsey.org.au> We are trying to figure out how these two settings interplay in: [DEFAULT]/workers [taskflow_executor]/max_workers [oslo_messaging_zmq]/rpc_thread_pool_size Just setting workers makes a bit of sense, and based on our testing: =0 creates one process =1 creates 1 plus 1 child =n creates 1 plus n children Are there green threads (i.e. coroutines, per https://eventlet.net/doc/basic_usage.html) within every process, regardless of the value of workers? Does max_workers affect that? We have read some Glance doco, hunted about various bug reports[1] and other discussions online to get some insight, but I think we're not clear on it. Can anyone explain this a bit better to me? This is all in Rocky. Thank you, Greg. 
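[Regarding the workers question above:] the fork pattern observed (workers=0: one process; workers=n: one parent plus n children) is the standard pre-fork model. A toy stand-in (stdlib only; not Glance's actual oslo.service code) behaves the same way. As I understand it (worth confirming), each worker process then uses eventlet green threads internally for request concurrency, and [taskflow_executor]/max_workers sizes a separate pool used for async tasks rather than changing the process count.

```python
import multiprocessing as mp

def handle(worker_id):
    # In Glance each forked child would run its own server loop;
    # here it just reports in.
    return f"worker-{worker_id} ready"

def launch(workers):
    """Toy pre-fork model: workers=0 means the parent serves requests
    itself (one process total); workers=n forks n children, so you see
    one parent plus n children, matching the observed behaviour."""
    if workers == 0:
        return ["parent serves directly"]
    with mp.Pool(processes=workers) as pool:  # parent + n children
        return pool.map(handle, range(workers))
```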
[1] https://bugs.launchpad.net/glance/+bug/1748916 From ruslanas at lpic.lt Thu Aug 15 08:00:43 2019 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 15 Aug 2019 10:00:43 +0200 Subject: creating instances, haproxy eats CPU, glance eats RAM In-Reply-To: <97365b12-8e36-bbd7-8a0f-badb818ac706@pawsey.org.au> References: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au> <97365b12-8e36-bbd7-8a0f-badb818ac706@pawsey.org.au> Message-ID: I am bad at containers, just starting to learn them, so I am not sure how they are limited. So you are using local hard drives; I guess that is one of the causes of the slowdown, somehow. I ask my developers to use Heat to create more than 1 instance/resource. Try checking Ceph speed. I think Ceph has an option to send the "created" callback after 1 copy is created/written to HDD, and then finish duplicating or tripling the data in the background, which makes the data not so reliable but MUUUCH faster. I need to google for that, I do not remember it. Sorry, yes, my fault: not domiflist but domblklist: virsh domblklist instance-00000## Generally, I have the same issue as you have, but on an older version of OpenStack (Mitaka, Mirantis implementation). I have difficulties when an instance (instance1 on compute1) uses a Ceph-based volume and shares it over NFS to another instance (instance2 on compute2). I receive around 13 KB/s; if I re-share it from the root drive, I get around 30 KB/s, which is still too low. On Thu, 15 Aug 2019 at 09:35, Gregory Orange wrote: > Hello Ruslanas and thank you for the response. I didn't see it until now! > I have given some responses inline... 
> > On 1/8/19 3:57 pm, Ruslanas Gžibovskis wrote: > > when in newton release were introduced role separation, we divided > memory hungry processes into 4 different VM's on 3 physical boxes: > > 1) Networker: all Neutron agent processes (network throughput) > > 2) Systemd: all services started by systemd (Neutron) > > 3) pcs: all services controlled by pcs (Galera + RabbitMQ) > > 4) horizon > > We have separated each control plane service (Glance, Neutron, Cinder, > etc) onto its own VM. We are considering containers instead of VMs in > future. > > > > Gregory > do you have local storage for swift and cinder background? > > Our Cinder and Glance use Ceph as backend. No Swift installed. > > > > also double check where _base image is located? is it in > /var/lib/nova/instances/_base/* ? and flavor disks stored in > /var/lib/nova/instances ? (can check on compute by: virsh domiflist > instance-00000## ) > > domiflist shows the VM's interface - how does that help? > > Greg. > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From taoyunupt at 126.com Thu Aug 15 08:51:52 2019 From: taoyunupt at 126.com (taoyunupt) Date: Thu, 15 Aug 2019 16:51:52 +0800 (CST) Subject: neutron-server won't start in OpenStack OVN In-Reply-To: References: Message-ID: Hi, I found you have a wrong configuration of "overlay_ip_version" from ml2_conf.ini, you can check it , it could config to "6" or "4". But I am not sure it is the reason for you problem. Thanks, Yun At 2019-08-14 19:16:22, "Zufar Dhiyaulhaq" wrote: Hi everyone, I try to install OpenStack with OVN enabled. 
But when trying to start neutron-server, the service is always inactive (exited) I try to check neutron-server logs and gets this message: 2019-08-14 05:40:32.649 4223 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver 2019-08-14 05:40:32.651 4224 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for MaintenanceWorker with retry 2019-08-14 05:40:32.654 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connecting... 2019-08-14 05:40:32.655 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connected 2019-08-14 05:40:32.657 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 2019-08-14 05:40:32.658 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.660 4225 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for AllServicesNeutronWorker with retry 2019-08-14 05:40:32.670 4220 INFO neutron.wsgi [-] (4220) wsgi starting up on http://0.0.0.0:9696 2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.693 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 
2019-08-14 05:40:32.694 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.697 4224 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver 2019-08-14 05:40:32.844 4225 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.844 4224 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.844 4222 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.845 4223 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.845 4221 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting neutron-server full logs: https://paste.ubuntu.com/p/GHfS38KFCr/ ovsdb-server port is active: https://paste.ubuntu.com/p/MhgNs8SGdX/ neutron.conf and ml2_conf.ini: https://paste.ubuntu.com/p/4J7hTVf5qz/ Does anyone know why this error happens? Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Thu Aug 15 09:09:01 2019 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Thu, 15 Aug 2019 16:09:01 +0700 Subject: neutron-server won't start in OpenStack OVN In-Reply-To: References: Message-ID: Hi, Yes, I fix it yesterday, sorry for not reporting it. The error is not in the configuration but in the crudini script. Thanks Best Regards, Zufar Dhiyaulhaq On Thu, Aug 15, 2019 at 3:52 PM taoyunupt wrote: > Hi, > I found you have a wrong configuration of "overlay_ip_version" from ml2_conf.ini, > you can check it , it could config to "6" or "4". But I am not sure it is > the reason for you problem. > > Thanks, > Yun > > > > > > > At 2019-08-14 19:16:22, "Zufar Dhiyaulhaq" > wrote: > > Hi everyone, > > I try to install OpenStack with OVN enabled. 
But when trying to start > neutron-server, the service is always inactive (exited) > > I try to check neutron-server logs and gets this message: > > 2019-08-14 05:40:32.649 4223 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver > 2019-08-14 05:40:32.651 4224 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for MaintenanceWorker with retry > 2019-08-14 05:40:32.654 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connecting... > 2019-08-14 05:40:32.655 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connected > 2019-08-14 05:40:32.657 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... > 2019-08-14 05:40:32.658 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected > 2019-08-14 05:40:32.660 4225 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for AllServicesNeutronWorker with retry > 2019-08-14 05:40:32.670 4220 INFO neutron.wsgi [-] (4220) wsgi starting up on http://0.0.0.0:9696 > 2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... > 2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected > 2019-08-14 05:40:32.693 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 
> 2019-08-14 05:40:32.694 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected > 2019-08-14 05:40:32.697 4224 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver > 2019-08-14 05:40:32.844 4225 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > 2019-08-14 05:40:32.844 4224 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > 2019-08-14 05:40:32.844 4222 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > 2019-08-14 05:40:32.845 4223 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > 2019-08-14 05:40:32.845 4221 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > > > neutron-server full logs: https://paste.ubuntu.com/p/GHfS38KFCr/ > ovsdb-server port is active: https://paste.ubuntu.com/p/MhgNs8SGdX/ > neutron.conf and ml2_conf.ini: https://paste.ubuntu.com/p/4J7hTVf5qz/ > > Does anyone know why this error happens? > > Best Regards, > Zufar Dhiyaulhaq > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Aug 15 09:44:41 2019 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 15 Aug 2019 10:44:41 +0100 Subject: [ansible][openstack-ansible][tripleo][kolla-ansible] Ansible SIG In-Reply-To: References: Message-ID: On Wed, 14 Aug 2019 at 15:51, Mohammed Naser wrote: > Hi everyone, > > One of the things that came up recently was collaborating more with > other deployment tools and this has brought up things like working > together on our Ansible roles as more and more deployments tools start > to use it. However, as we start to practice this, we realize that a > project team ownership starts not making much sense (due to the fact > that a project can only be under one team in governance). 
> > It starts being confusing when a role is built by OSA, and consumed by > TripleO, so the PTL is from another team and it all starts getting > weird and odd, so we discussed the creation of an Ansible SIG for > those who are interested in maintaining code across our community > together that would be consumed together. > > We already have some deliverables that can live underneath it which is > pretty awesome too, so I'm emailing this to ask for interested parties > to speak up if they're interested and to mention that we're more than > happy to have other co-chairs that are interested. > Nice idea. We (kolla-ansible) are not involved in any role sharing right now, but I don't want to rule it out and will be interested to see where this goes. I added my name on the etherpad ( https://etherpad.openstack.org/p/ansible-sig). > I've submitted the initial patch here: https://review.opendev.org/676428 > > Thank you, > Mohammed > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Aug 15 11:48:46 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 15 Aug 2019 07:48:46 -0400 Subject: [ironic][ptl] Taking a break - i.e. disconnecting for a few weeks in September Message-ID: Greetings everyone, For my own mental health and with various things that have occurred in my life these past six months, I will be disconnecting for two weeks during the month of September, starting on the 6th. To put icing on the cake, I then have business-related travel the two weeks following my return that will inhibit regular IRC access during peak contributor days/hours. In my absence, Dmitry Tantsur has agreed to take care of my PTL responsibilities. This will include the time through requirements freeze and quite possibly include the creation of the stable/train branch if necessary. That being said, fear not! I still intend to run for Ironic's PTL for the next cycle! 
-Julia From sfinucan at redhat.com Thu Aug 15 12:21:42 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 15 Aug 2019 13:21:42 +0100 Subject: More upgrade issues with PCPUs - input wanted Message-ID: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> tl;dr: Is breaking booting of pinned instances on Stein compute nodes in a Train deployment an acceptable thing to do, and if not, how do we best handle the VCPU->PCPU migration in Train? I've been working through the cpu-resources spec [1] and have run into a tricky issue I'd like some input on. In short, this spec means that pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start consuming a new resource type, PCPU, instead of VCPU. Many things need to change to make this happen, but the key changes are: 1. The scheduler needs to start modifying requests for pinned instances to request PCPU resources instead of VCPU resources 2. The libvirt driver needs to start reporting PCPU resources 3. The libvirt driver needs to do a reshape, moving all existing allocations of VCPUs to PCPUs, if the instance holding that allocation is pinned The first two of these steps present an issue for which we have a solution, but the solutions we've chosen are now resulting in this new issue. * For (1), the translation of VCPU to PCPU in the scheduler means compute nodes must now report PCPU in order for a pinned instance to land on that host. Since controllers are upgraded before compute nodes and all compute nodes aren't necessarily upgraded in one go (particularly for edge or other large or multi-cell deployments), this can mean there will be a period of time when there are very few or no hosts available on which to schedule pinned instances. * For (2), we're hampered by the fact that there is no clear way to determine if a host is used for pinned instances or not. Because of this, we can't determine if a host should be reporting PCPU or VCPU inventory. 
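Change (1), the request translation, amounts to something like this sketch (hypothetical names and structures, not the actual nova scheduler code):

```python
def translate_cpu_request(flavor):
    """Sketch of change (1): rewrite a pinned flavor's VCPU request as a
    PCPU request before it is sent to placement."""
    resources = {"VCPU": flavor["vcpus"], "MEMORY_MB": flavor["ram"]}
    if flavor.get("extra_specs", {}).get("hw:cpu_policy") == "dedicated":
        # Pinned instances consume dedicated host cores, so ask for PCPU.
        resources["PCPU"] = resources.pop("VCPU")
    return resources

pinned = {"vcpus": 4, "ram": 2048,
          "extra_specs": {"hw:cpu_policy": "dedicated"}}
assert translate_cpu_request(pinned) == {"PCPU": 4, "MEMORY_MB": 2048}
```

The upgrade problem follows directly: once the scheduler emits PCPU requests, only computes already reporting PCPU inventory can satisfy them.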
The solution we have for the issues with (1) is to add a workaround option that would disable this translation, allowing operators time to upgrade all their compute nodes to report PCPU resources before anything starts using them. For (2), we've decided to temporarily (i.e. for one release or until configuration is updated) report both, in the expectation that everyone using pinned instances has followed the long-standing advice to separate hosts intended for pinned instances from those intended for unpinned instances using host aggregates (e.g. even if we started reporting PCPUs on a host, nothing would consume that due to 'pinned=False' aggregate metadata or similar). These actually benefit each other, since if instances are still consuming VCPUs then the hosts need to continue reporting VCPUs. However, both interfere with our ability to do the reshape. Normally, a reshape is a one-time thing. The way we'd planned to determine if a reshape was necessary was to check if PCPU inventory was registered against the host and, if not, whether there were any pinned instances on the host. If PCPU inventory was not available and there were pinned instances, we would update the allocations for these instances so that they would be consuming PCPUs instead of VCPUs and then update the inventory. This is problematic though, because our solution for the issue with (1) means pinned instances can continue to request VCPU resources, which in turn means we could end up with some pinned instances on a host consuming PCPU and others consuming VCPU. That obviously can't happen, so we need to change tack slightly. 
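The reshape check just described, as a rough sketch (hypothetical simplified structures, not the real libvirt driver code):

```python
def needs_reshape(inventory, instances):
    """Reshape only if the host has no PCPU inventory yet but still
    hosts pinned instances (whose allocations are against VCPU)."""
    if "PCPU" in inventory:
        return False  # already reshaped
    return any(inst["pinned"] for inst in instances)

def reshape_allocations(allocations):
    # Move each pinned instance's VCPU allocation to PCPU, 1:1;
    # unpinned instances keep consuming VCPU.
    for alloc in allocations.values():
        if alloc["pinned"] and "VCPU" in alloc["resources"]:
            alloc["resources"]["PCPU"] = alloc["resources"].pop("VCPU")
    return allocations
```

The conflict above is visible in this sketch: if the workaround lets new pinned instances keep landing with VCPU allocations, the one-shot reshape no longer produces a host where all pinned allocations are PCPU.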
The two obvious solutions would be to either (a) remove the workaround option so the scheduler would immediately start requesting PCPUs and just advise operators to upgrade their hosts for pinned instances asap, or (b) add a different option, defaulting to True, that would apply to both the scheduler and compute nodes and prevent not only the translation of flavors in the scheduler but also the reporting of PCPUs and reshaping of allocations until disabled. I'm currently leaning towards (a) because it's a *lot* simpler, far more robust (IMO) and lets us finish this effort in a single cycle, but I imagine this could make upgrades very painful for operators if they can't fast-track their compute node upgrades. (b) is more complex and would have some constraints, chief among them being that the option would have to be disabled at some point post-release and would have to be disabled on the scheduler first (to prevent the mishmash of VCPU and PCPU resource allocations described above). It also means this becomes a three-cycle effort at minimum, since this new option will default to True in Train, before defaulting to False and being deprecated in U and finally being removed in V. As such, I'd like some input, particularly from operators using pinned instances in larger deployments. What are your thoughts, and are there any potential solutions that I'm missing here? 
Cheers, Stephen [1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html From mnaser at vexxhost.com Thu Aug 15 12:41:58 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 15 Aug 2019 08:41:58 -0400 Subject: [ansible][openstack-ansible][tripleo][kolla-ansible] Ansible SIG In-Reply-To: References: Message-ID: On Thu, Aug 15, 2019 at 5:44 AM Mark Goddard wrote: > > > > On Wed, 14 Aug 2019 at 15:51, Mohammed Naser wrote: >> >> Hi everyone, >> >> One of the things that came up recently was collaborating more with >> other deployment tools and this has brought up things like working >> together on our Ansible roles as more and more deployments tools start >> to use it. However, as we start to practice this, we realize that a >> project team ownership starts not making much sense (due to the fact >> that a project can only be under one team in governance). >> >> It starts being confusing when a role is built by OSA, and consumed by >> TripleO, so the PTL is from another team and it all starts getting >> weird and odd, so we discussed the creation of an Ansible SIG for >> those who are interested in maintaining code across our community >> together that would be consumed together. >> >> We already have some deliverables that can live underneath it which is >> pretty awesome too, so I'm emailing this to ask for interested parties >> to speak up if they're interested and to mention that we're more than >> happy to have other co-chairs that are interested. > > Nice idea. We (kolla-ansible) are not involved in any role sharing right now, but I don't want to rule it out and will be interested to see where this goes. I added my name on the etherpad (https://etherpad.openstack.org/p/ansible-sig). Great. The governance patch has merged. I've set up an IRC channel too, so for those that are interested, please join #openstack-ansible-sig and I'm going to start organizing efforts around a meeting and bringing repos that live under it. 
>> I've submitted the initial patch here: https://review.opendev.org/676428 >> >> Thank you, >> Mohammed >> -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From a.settle at outlook.com Thu Aug 15 13:12:15 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 15 Aug 2019 13:12:15 +0000 Subject: [all] [tc] General questions on PDF community goal In-Reply-To: References: Message-ID: Thanks for responding to these questions, Doug. Appreciate you being so forward and working hard on this, Akihiro. Due to vacation and personal circumstances, I have been more-or-less offline for the last 2 months. I've been speaking with Stephen today on some action items we need to get through and work further on coordinating this goal. At this point in time, it is Stephen working on it by himself. He's been working on adding Python 3 support to rst2pdf, since he thinks that should provide a pure Python way to generate PDFs. However, he hasn't gone so far as to check whether the output matches that of Python 2. I'm going to send a separate email to see if we can get some volunteers to help work on this. Otherwise, I will include the current status in that email too. Apologies again, Alex On 27/07/2019 07:26, Doug Hellmann wrote: > Akihiro Motoki writes: > >> Hi, >> >> I have a couple of general questions on the PDF community goal. >> >> What are the criteria for completing the PDF community goal? >> Is it okay if we publish a PDF deliverable anyway? >> >> While working on it, I hit the following questions. >> We already reached the Train-2 milestone, so I think it is time to clarify >> the detailed criteria for completing the community goal. >> >> - Where should the generated PDF file be located and what is the >> recommended PDF file name? >> /.pdf? index.pdf? Any path and any >> file is okay? 
> The job will run the sphinx instructions to build PDFs, and then copy > them from build/latex (or build/pdf, I can't remember) into the > build/html directory so they are published as part of the project's > series-specific documentation set. > > Project teams should not add anything to their HTML documentation build > to move, rename, etc. the PDF files. That will all be done by the job > changes Stephen has been developing. > > Project teams should ensure there is exactly 1 PDF file being built and > that it has a meaningful name. That could be ${repository_name}.pdf as > you suggest, but it could be something else, for now. > >> - Do we create a link to the PDF file from index.html (per-project top page)? >> If needed, perhaps this would be a thing covered by openstackdocstheme. >> Otherwise, how can normal consumers know PDF documents? > Not yet. We should be able to do that automatically through the theme by > looking at the latex build parameters. If we do it in the theme, rather > than having projects add link content to their builds, we can ensure > that all projects have the link in a consistent location in their docs > pages with 1 patch. If you want to work on that, it would be good, but > it isn't part of the goal. > >> - Which repositories need to provide PDF version of documents? >> My understanding (amotoki) is that repositories with >> 'publish-openstack-docs-pti' should publish PDF doc. right? > Yes, all repositories that follow the documentation PTI. > >> - Do we need PDF version of release notes? >> - Do we need PDF version of API reference? > The goal is focused on publishing the content of the doc/source > directory in each repository. There is no need to deal with release > notes or the API reference. We may work on those later, but not for this > goal. > >> I see no coordination efforts recently and am afraid that individual >> projects cannot decide whether patches to their repositories are okay >> to merge. 
> The goal champions have been on vacation. Have a bit of patience, > please. :-) > > In the meantime, if there are questions about specific patches, please > raise those here on the mailing list. > > The most important thing to accomplish is to ensure that one PDF builds > *at all* from the content in doc/source in each repo. The goal was > purposefully scoped to this one task to allow teams to focus on getting > successful PDF builds from their content, because we weren't sure what > issues we might encounter. We can come back around later and improve the > experience of consuming the PDFs, but there is no point in making a bunch > of decisions about how to do that until we know we have the files to > publish. > From gmann at ghanshyammann.com Thu Aug 15 13:18:52 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 15 Aug 2019 22:18:52 +0900 Subject: [nova] review guide for the policy default refresh spec Message-ID: <16c956e577d.e430310375433.1480190884000484078@ghanshyammann.com> Hello Everyone, As many of you might know, in Train we are making Nova policy changes to adopt keystone's new defaults and scope types[1]. There are multiple changes required per policy, as described in the spec. I am writing this review guide to explain the patch sequence and, at the end, how each policy will look. I have prepared the first set of patches. I would like to get feedback on those so that we can modify the other policies along the same lines. My plan is to start the other policy work after we merge this first set of policy changes. Patch sequence: Example: os-services API policy: ------------------------------------------------------------- 1. Cover/improve the test coverage for existing policies: This will be the first patch. We do not have good test coverage of the policies; the current tests are not useful and do not perform real checks. The idea is to add actual test coverage for each policy as the first patch.
The new tests try to access the API with all possible contexts and check both positive and negative cases. - https://review.opendev.org/#/c/669181/ 2. Introduce scope_types: This will add the scope_type for each policy. It will be either 'system', 'project', or 'system and project'. In the same patch, with the existing tests still working as-is, new scope-type tests will be added which run with [oslo_policy] enforce_scope=True so that we can capture the real scope checks. - https://review.opendev.org/#/c/645427/ 3. Add new default roles: This will add new defaults, which can be SYSTEM_ADMIN, SYSTEM_READER, PROJECT_MEMBER_OR_SYSTEM_ADMIN, PROJECT_READER_OR_SYSTEM_READER, etc., depending on the policy. Test coverage of the new defaults, as well as the deprecated defaults, is included in the same patch. This patch will also add granularity to the policy where needed; without policy granularity, we cannot add new defaults per rule. - https://review.opendev.org/#/c/648480/ (I need to add more tests for deprecated rules) 4. Pass actual targets in policy: This is to pass the actual targets in context.can(). The main goal is to remove the default targets, which are nothing but the context's user_id and project_id. The target will be {} if no actual target data is needed in the check_str. - https://review.opendev.org/#/c/676688/ Patch sequence: Example: Admin Action API policy: 1. https://review.opendev.org/#/c/657698/ 2. https://review.opendev.org/#/c/657823/ 3. https://review.opendev.org/#/c/676682/ 4. https://review.opendev.org/#/c/663095/ There are other patches I have posted in between for common changes, fixes, framework, etc.
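To make the naming above concrete, here is a plain-Python sketch of the access semantics those new default names imply. This is an assumption-laden illustration, not the oslo.policy API: the real rules are DocumentedRuleDefault definitions enforced through context.can(), and the role-implication details below are assumed from keystone's default role hierarchy (admin implies member implies reader).

```python
# Illustrative sketch ONLY of the semantics behind names like
# SYSTEM_ADMIN, SYSTEM_READER and PROJECT_MEMBER_OR_SYSTEM_ADMIN.
# Contexts and targets are modeled as plain dicts, not real oslo
# objects; the role hierarchy is an assumption, not the Nova code.

def _roles(ctx):
    return set(ctx.get('roles', ()))

def system_admin(ctx):
    # 'admin' role on a system-scoped token.
    return ctx.get('system_scope') == 'all' and 'admin' in _roles(ctx)

def system_reader(ctx):
    # Any role implying 'reader' on a system-scoped token.
    return ctx.get('system_scope') == 'all' and bool(
        _roles(ctx) & {'reader', 'member', 'admin'})

def project_member(ctx, target):
    # A role implying 'member', scoped to the target's own project.
    return bool(_roles(ctx) & {'member', 'admin'}) and \
        ctx.get('project_id') == target.get('project_id')

def project_member_or_system_admin(ctx, target):
    return system_admin(ctx) or project_member(ctx, target)

# A system admin passes against any project; a project member passes
# only against resources in their own project.
assert project_member_or_system_admin(
    {'system_scope': 'all', 'roles': ['admin']}, {'project_id': 'p2'})
assert project_member_or_system_admin(
    {'project_id': 'p1', 'roles': ['member']}, {'project_id': 'p1'})
assert not project_member_or_system_admin(
    {'project_id': 'p1', 'roles': ['member']}, {'project_id': 'p2'})
```

In the real patches these semantics live in check strings (for example "role:admin and system_scope:all") on granular rules, alongside deprecated-rule entries that keep the old defaults working during the transition.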
[1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/policy-default-refresh.html -gmann From smooney at redhat.com Thu Aug 15 13:31:11 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 15 Aug 2019 14:31:11 +0100 Subject: More upgrade issues with PCPUs - input wanted In-Reply-To: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> References: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> Message-ID: On Thu, 2019-08-15 at 13:21 +0100, Stephen Finucane wrote: > tl;dr: Is breaking booting of pinned instances on Stein compute nodes > in a Train deployment an acceptable thing to do, and if not, how do we > best handle the VCPU->PCPU migration in Train? > > I've been working through the cpu-resources spec [1] and have run into > a tricky issue I'd like some input on. In short, this spec means that > pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start > consuming a new resources type, PCPU, instead of VCPU. Many things need > to change to make this happen but the key changes are: > > 1. The scheduler needs to start modifying requests for pinned instances > to request PCPU resources instead of VCPU resources > 2. The libvirt driver needs to start reporting PCPU resources > 3. The libvirt driver needs to do a reshape, moving all existing > allocations of VCPUs to PCPUs, if the instance holding that > allocation is pinned > > The first two of these steps presents an issue for which we have a > solution, but the solutions we've chosen are now resulting in this new > issue. > > * For (1), the translation of VCPU to PCPU in the scheduler means > compute nodes must now report PCPU in order for a pinned instance to > land on that host. 
Since controllers are upgraded before compute > nodes and all compute nodes aren't necessarily upgraded in one go > (particularly for edge or other large or multi-cell deployments), > this can mean there will be a period of time where there are very > few or no hosts available on which to schedule pinned instances. > > * For (2), we're hampered by the fact that there is no clear way to > determine if a host is used for pinned instances or not. Because of > this, we can't determine if a host should be reporting PCPU or VCPU > inventory. > > The solution we have for the issues with (1) is to add a workaround > option that would disable this translation, allowing operators time to > upgrade all their compute nodes to report PCPU resources before > anything starts using them. For (2), we've decided to temporarily (i.e. > for one release or until configuration is updated) report both, in the > expectation that everyone using pinned instances has followed the > long-standing advice to separate hosts intended for pinned instances from > those intended for unpinned instances using host aggregates (e.g. even > if we started reporting PCPUs on a host, nothing would consume that due > to 'pinned=False' aggregate metadata or similar). These actually > benefit each other, since if instances are still consuming VCPUs then > the hosts need to continue reporting VCPUs. However, both interfere > with our ability to do the reshape. > > Normally, a reshape is a one-time thing. The way we'd planned to > determine if a reshape was necessary was to check if PCPU inventory was > registered against the host and, if not, whether there were any pinned > instances on the host. If PCPU inventory was not available and there > were pinned instances, we would update the allocations for these > instances so that they would be consuming PCPUs instead of VCPUs and > then update the inventory.
This is problematic though, because our > solution for the issue with (1) means pinned instances can continue to > request VCPU resources, which in turn means we could end up with some > pinned instances on a host consuming PCPU and others consuming VCPU. > That obviously can't happen, so we need to change tacks slightly. The > two obvious solutions would be to either (a) remove the workaround > option so the scheduler would immediately start requesting PCPUs and > just advise operators to upgrade their hosts for pinned instances asap > or (b) add a different option, defaulting to True, that would apply to > both the scheduler and compute nodes and prevent not only translation > of flavors in the scheduler but also the reporting of PCPUs and reshaping > of allocations until disabled. > > I'm currently leaning towards (a) because it's a *lot* simpler, far > more robust (IMO) and lets us finish this effort in a single cycle, but > I imagine this could make upgrades very painful for operators if they > can't fast track their compute node upgrades. (b) is more complex and > would have some constraints, chief among them being that the option > would have to be disabled at some point post-release and would have to > be disabled on the scheduler first (to prevent the mishmash of VCPU and > PCPU resource allocations noted above). It also means this becomes a > three-cycle effort at minimum, since this new option will default to True in > Train, before defaulting to False and being deprecated in U and finally > being removed in V. As such, I'd like some input, particularly from > operators using pinned instances in larger deployments. What are your > thoughts, and are there any potential solutions that I'm missing here? If we go with (b), I would move the config out of the workarounds section to the default section, call it pcpus_in_placement, and have it default to False in Train, i.e. we don't enable the feature in Train by default.
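As a sketch of the hypothetical option floated above (everything here is this thread's proposal, not an existing Nova option: the name pcpus_in_placement, its section, and its default are all assumptions), an early opt-in on a new Train install might look like:

```ini
# Hypothetical nova.conf fragment. "pcpus_in_placement" is only the
# option name proposed in this thread and does not exist in Nova.
# Under the proposal it would default to False in Train, with
# installer tools setting it to True for fresh deployments.
[DEFAULT]
pcpus_in_placement = True
```

Under the proposal, leaving the option unset would keep the Stein-style VCPU behaviour until the default flips in the U release.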
In installer tools, we would update them to set the config value to True so new installs use this feature. In U we would change the default to True and deprecate the option as you said, and finally remove it in V. We should add a nova-status check too for the U upgrade so that operators can define the correct config values before the upgrade. If we go with (a), then we would want to add that check for Train, I think. Operators would need to add the new config options to all hosts before they upgrade. This could be problematic in some cases, as the meaning of cpu_shared_set changes between Stein and Train: in Stein it is used for emulator threads only; in Train it will be used for all floating VMs' vCPUs. (a) would also require you to upgrade all hosts in one go, more or less. For fast-forward upgrades this is required anyway, since we can't have the control plane managing agents that are older than N-1, but not all tools support FFU or recommend it. > > Cheers, > Stephen > > [1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html > > From a.settle at outlook.com Thu Aug 15 13:37:38 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 15 Aug 2019 13:37:38 +0000 Subject: [all] [tc] PDF Community Goal Update Message-ID: Hi all, Apologies for the radio silence regarding the PDF Community Goal. Due to vacation and personal circumstance, I've been "offline" for the better part of the last 2 months. Update * Stephen Finucane has been working on adding Python 3 support to rst2pdf * Common issues are being tracked within this etherpad [1] * Overall status: https://review.opendev.org/#/q/topic:build-pdf-docs Help needed * We would appreciate anyone comfortable with Python volunteering to help test the rst2pdf output with Python 2, working within a larger project like Neutron to see how much it can do, as they have specific styling capabilities. * NOTE: The original discussion included using the LaTeX builder instead of rst2pdf.
However, the LaTeX builder is not playing ball as nicely as we'd like, so we're trying to figure out whether rst2pdf would be easier. The LaTeX builder is still the primary plan, since we don't yet know which approach we're going with and the overlap between the two is significant. Questions? Thank you, Alex [1] https://etherpad.openstack.org/p/pdf-goal-train-common-problems From ildiko.vancsa at gmail.com Thu Aug 15 13:40:56 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 15 Aug 2019 06:40:56 -0700 Subject: [keystone][edge] Edge Hacking Days - August 16 Message-ID: <017B8545-8153-42E0-99B1-CF3775DBD4CA@gmail.com> Hi, This is a friendly reminder that we are holding the second edge hacking days of August this Friday (August 16). The dial-in information is the same; you can find the details here: https://etherpad.openstack.org/p/osf-edge-hacking-days If you’re interested in joining, please __add your name and the time period (with time zone) when you will be available__ on these dates. You can also add topics that you would be interested in working on. We will keep working on two items: * Keystone to Keystone federation testing in DevStack * Building the centralized edge reference architecture on Packet HW using TripleO Please let me know if you have any questions. See you on Friday! :) Thanks, Ildikó From gmann at ghanshyammann.com Thu Aug 15 13:45:46 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 15 Aug 2019 22:45:46 +0900 Subject: [nova] API updates week 19-33 Message-ID: <16c9586f90d.126bdddaf76391.2502147131477263201@ghanshyammann.com> Hello Everyone, Please find this week's Nova API updates. API Related BP : ============ COMPLETED: 1. Support adding description while locking an instance: - https://blueprints.launchpad.net/nova/+spec/add-locked-reason 2.
Add host and hypervisor_hostname flag to create server - https://blueprints.launchpad.net/nova/+spec/add-host-and-hypervisor-hostname-flag-to-create-server Code Ready for Review: ------------------------------ 1. Specifying az when restore shelved server - Topic: https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged) - Weekly Progress: This has been rebased since the 2.75 microversion is already merged. It will need another rebase, as 2.76 is reserved for 'Add power-update external event'. 2. Nova API cleanup - Topic: https://review.opendev.org/#/q/topic:bp/api-consistency-cleanup+(status:open+OR+status:merged) - Weekly Progress: The Nova patch is merged; the python-novaclient patch is pending. 3. Show Server numa-topology - Topic: https://review.opendev.org/#/q/topic:bp/show-server-numa-topology+(status:open+OR+status:merged) - Weekly Progress: Alex is +1 on the nova change, but it currently uses microversion 2.76. This might need to be put on hold? 4. Nova API policy improvement - Topic: https://review.openstack.org/#/q/topic:bp/policy-default-refresh+(status:open+OR+status:merged) - Weekly Progress: The first set of the os-services and Admin Action API policy series is ready for review. I have sent a review guide to the ML - http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008504.html 5. Add 'power-update' external event: - Topic: https://review.opendev.org/#/q/topic:bp/nova-support-instance-power-update+(status:open+OR+status:merged) - Weekly Progress: This is reserved for 2.76 and we should merge it soon, as many other microversion changes are waiting to grab 2.76. I do not have updates on the current state; maybe Matt or Surya can add more if needed. 6. Add User-id field in migrations table - Topic: https://review.opendev.org/#/q/topic:bp/add-user-id-field-to-the-migrations-table+(status:open+OR+status:merged) - Weekly Progress: Changes are up for review but with microversion 2.76.
The microversion number can be rebased later and is not blocking review. I will review it next week. 7. Support delete_on_termination in volume attach api - Spec: https://review.opendev.org/#/q/topic:bp/support-delete-on-termination-in-server-attach-volume+(status:open+OR+status:merged) - Weekly Progress: The spec is merged and code is up for review. This is another one from Brin. Ready for review; a rebase onto an available microversion number can be done later. Specs are merged and code in-progress: ------------------------------ ------------------ 1. Detach and attach boot volumes: - Topic: https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged) - Weekly Progress: No progress. Patches are in merge conflict. Spec Ready for Review: ----------------------------- 1. Support for changing deleted_on_termination after boot - Spec: https://review.openstack.org/#/c/580336/ - Weekly Progress: This has been added to the backlog. Previously approved Spec needs to be re-proposed for Train: --------------------------------------------------------------------------- 1. Servers Ips non-unique network names : - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - I planned to re-propose this but could not find the time. If anyone would like to help, please re-propose it; otherwise I will start on it in the U cycle. 2. Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - This also needs a volunteer - http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007411.html Others: 1. Add API ref guideline for body text - 2 api-refs are left to fix. Bugs: ==== No progress to report this week.
NOTE: There might be some bugs which are not tagged as 'api' or 'api-ref'; those are not in the above list. Tag such bugs so that we can keep an eye on them. -gmann From ed at leafe.com Thu Aug 15 13:54:37 2019 From: ed at leafe.com (Ed Leafe) Date: Thu, 15 Aug 2019 08:54:37 -0500 Subject: [User-committee] [uc] Less than 4 days left to nominate for the UC! In-Reply-To: References: Message-ID: On Aug 12, 2019, at 4:44 PM, Ed Leafe wrote: > A week has gone by since nominations opened, and we have yet to receive a single nomination! The nomination period will close in less than a day. So far we have 1 candidate, but there are two positions up for election. So if you’ve been hesitating, don’t wait any longer! The info for how to nominate from my previous email is below: > Now I’m sure everyone’s waiting until the last minute in order to make a dramatic moment, but don’t put it off for *too* long! If you missed the initial announcement [0], here’s the information you need: > > Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). > > Self-nomination is common; no third-party nomination is required. They do so by sending an email to the user-committee at lists.openstack.org mailing list, with the subject: “UC candidacy” by August 16, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. > > > -- Ed Leafe > > [0] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html -- Ed Leafe
In-Reply-To: References: Message-ID: <6AF63EFF-2845-494D-8D01-EB9902F604E6@leafe.com> (Re-sending with a more accurate subject line) On Aug 12, 2019, at 4:44 PM, Ed Leafe wrote: > A week has gone by since nominations opened, and we have yet to receive a single nomination! The nomination period will close in less than a day. So far we have 1 candidate, but there are two positions up for election. So if you’ve been hesitating, don’t wait any longer! The info for how to nominate from my previous email is below: > Now I’m sure everyone’s waiting until the last minute in order to make a dramatic moment, but don’t put it off for *too* long! If you missed the initial announcement [0], here’s the information you need: > > Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). > > Self-nomination is common, no third party nomination is required. They do so by sending an email to the user-committee at lists.openstack.orgmailing-list, with the subject: “UC candidacy” by August 16, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. > > > -- Ed Leafe > > [0] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html -- Ed Leafe _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee From pierre at stackhpc.com Thu Aug 15 14:32:55 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 15 Aug 2019 16:32:55 +0200 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> Message-ID: Some time after I posted to the list, I started to receive notifications. 
If someone fixed it, thanks a lot. As for the 10 stories limit, it appears to be specific to a project view, as I get the configured page size for global "Projects" and "Stories" lists. On Wed, 14 Aug 2019 at 14:48, Pierre Riteau wrote: > > Hello, > > I am reviving this thread as I have never received any email > notifications from starred projects in Storyboard, despite enabling > them multiple times. Although the change appears to be saved > correctly, if I log out and log back in, my preferences are reset to > defaults (not only for email settings, but also for page size). I also > noticed that increasing "Page size" doesn't have any effect within the > same session, I always see 10 results per page. > > Is there a known issue with persisting preferences in Storyboard? > > Thanks, > Pierre > > On Sun, 19 May 2019 at 06:17, Akihiro Motoki wrote: > > > > Thanks for the information. > > > > I re-enabled email notification and then started to receive notifications. I am not sure why this solved the problem but it now works for me. > > > > > > 2019年5月15日(水) 22:43 : > >> > >> On 2019-05-15 13:58, Akihiro Motoki wrote: > >> > Hi, > >> > > >> > Is there a way to get email notification on stories/tasks of > >> > subscribed projects in storyboard? > >> > >> Yes, go to your preferences > >> (https://storyboard.openstack.org/#!/profile/preferences) > >> by clicking on your name in the top right, then Preferences. > >> > >> Scroll to the bottom and check the "Enable notification emails" > >> checkbox, then > >> click "Save". There's a UI bug where sometimes the displayed preferences > >> will > >> look like the save button didn't work, but rest assured that it did > >> unless you > >> get an error message. > >> > >> Once you've done this the email associated with your OpenID will receive > >> notification emails for things you're subscribed to (which includes > >> changes on > >> stories/tasks related to projects you're subscribed to). 
> >> > >> Thanks, > >> > >> Adam (SotK) > >> From pierre at stackhpc.com Thu Aug 15 14:35:53 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 15 Aug 2019 16:35:53 +0200 Subject: [blazar] IRC meeting today Message-ID: Hello, Today we have our biweekly Blazar IRC meeting at 16:00 UTC on #openstack-meeting-alt: https://wiki.openstack.org/wiki/Meetings/Blazar#Agenda_for_15_Aug_2019_.28Americas.29 We can update on the status of upstream contributions from our user community. Everyone is welcome to join and bring up any other topic. Cheers, Pierre From colleen at gazlene.net Thu Aug 15 14:38:31 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 15 Aug 2019 07:38:31 -0700 Subject: [keystone] Feature proposal freeze exception for refreshable group membership Message-ID: <104eeb9c-3626-41d8-96a9-ad34f05f94e2@www.fastmail.com> Work is in progress to implement refreshable group membership in keystone[1].
In order to allow for some breathing room for thorough > discussion on the implementation details, we're proposing a 1-week > extension to our scheduled feature proposal freeze deadline (this week)[2]. > Please let me or the team know if you have any concerns about this. > > Colleen > > [1] > http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/expiring-group-memberships.html > [2] https://releases.openstack.org/train/schedule.html > > -- Kristi -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Aug 15 15:34:54 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 15 Aug 2019 11:34:54 -0400 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: Is there any agreement from stable-maint-core? Asking because we're unable to modify ironic's stable maintenance group directly. -Julia On Mon, Aug 12, 2019 at 11:00 AM Ruby Loo wrote: > > +2. Good idea! :) > > --ruby > > On Fri, Aug 9, 2019 at 9:06 AM Dmitry Tantsur wrote: >> >> Hi folks! >> >> I'd like to propose adding Riccardo to our stable team. He's been consistently >> checking stable patches [1], and we're clearly understaffed when it comes to >> stable reviews. Thoughts? >> >> Dmitry >> >> [1] >> https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master >> From corvus at inaugust.com Thu Aug 15 17:04:53 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 15 Aug 2019 10:04:53 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <87y305onco.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Tue, 06 Aug 2019 17:01:11 -0700") References: <87y305onco.fsf@meyer.lemoncheese.net> Message-ID: <87wofepdfu.fsf@meyer.lemoncheese.net> Hi, We have made the switch to begin storing all of the build logs from Zuul in Swift. 
Each build's logs will be stored in one of 7 randomly chosen Swift regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those providers! You'll note that the links in Gerrit to the Zuul jobs now go to a page on the Zuul web app. A lot of the features previously available on the log server are now available there, plus some new ones. If you're looking for a link to a docs preview build, you'll find that on the build page under the "Artifacts" section now. If you're curious about where your logs ended up, you can see the Swift hostname under the "logs_url" row in the summary table. Please let us know if you have any questions or encounter any issues, either here, or in #openstack-infra on IRC. -Jim From openstack at nemebean.com Thu Aug 15 17:45:43 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 15 Aug 2019 12:45:43 -0500 Subject: [glance] worker, thread, taskflow interplay In-Reply-To: <20a573e6-82d8-a22f-000e-ed19508a9d54@pawsey.org.au> References: <20a573e6-82d8-a22f-000e-ed19508a9d54@pawsey.org.au> Message-ID: <47aab914-f9b5-e8ee-bf1e-37b7b761f9c6@nemebean.com> On 8/15/19 2:55 AM, Gregory Orange wrote: > We are trying to figure out how these two settings interplay in: > > [DEFAULT]/workers > [taskflow_executor]/max_workers This depends on the executor being used. There are both thread and process executors, so it will either affect the number of threads or processes started. > [oslo_messaging_zmq]/rpc_thread_pool_size This is a zeromq-specific config opt that has been removed from recent versions of oslo.messaging (along with the zeromq driver). More generally, the thread_pool options in oslo.messaging will affect how many threads are created to handle messages. I believe that would be per-process, not per-service (but someone correct me if I'm wrong). > > Just setting workers makes a bit of sense, and based on our testing: > =0 creates one process > =1 creates 1 plus 1 child > =n creates 1 plus n children > > Are there green threads (i.e. 
coroutines, per https://eventlet.net/doc/basic_usage.html) within every process, regardless of the value of workers? Does max_workers affect that? > > We have read some Glance doco, hunted about various bug reports[1] and other discussions online to get some insight, but I think we're not clear on it. Can anyone explain this a bit better to me? This is all in Rocky. > > Thank you, > Greg. > > > [1] https://bugs.launchpad.net/glance/+bug/1748916 > From openstack at fried.cc Thu Aug 15 19:11:15 2019 From: openstack at fried.cc (Eric Fried) Date: Thu, 15 Aug 2019 14:11:15 -0500 Subject: [all][infra] Zuul logs are in swift Message-ID: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> Hi infra. I wanted to blast out a handful of issues I've had since the cutover to swift. I'm using Chrome Version 76.0.3809.100 (Official Build) (64-bit) on bionic (18.04.3). - Hot tip: if you want the dynamic logs (with timestamp links and sev filters), use the twisties. Clicking through gets you to the raw files. The former was not obvious to me. - Some in-app logs aren't working. E.g. when I try to look at controller=>logs=>screen-n-cpu.txt.gz from [1], it redirects me to [2]. The hover [3] has a double slash in it, not sure if that's related, but when I try squashing to one slash, I get an error... sometimes [4]. - When the in-app logs do render, they don't wrap. There's a horizontal scroll bar, but it's at the bottom of an inner frame, so it's off the screen most of the time and therefore not useful. (I don't have horizontal mouse scroll capabilities; maybe I should look into that.) - The timestamp links in app anchor the line at the "top" - which (for me, anyway) is "underneath" the header menu (Status Projects Jobs Labels Nodes Builds Buildsets), so I have to scroll up to get the anchored line and a few of its successors. Thanks as always for all your hard work. 
efried [1] https://zuul.opendev.org/t/openstack/build/402a73a9238643c2b893d53b37a6ce27/logs [2] https://zuul.opendev.org/tenants [3] https://zuul.opendev.org/t/openstack/build/402a73a9238643c2b893d53b37a6ce27/log/controller//logs/screen-n-cpu.txt.gz [4] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2019-08-15.log.html#t2019-08-15T18:49:04 From mriedemos at gmail.com Thu Aug 15 19:22:00 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Aug 2019 14:22:00 -0500 Subject: [nova] The race for 2.76 In-Reply-To: <339e43ff-1183-d1b7-0a25-9ddf4274116d@gmail.com> References: <339e43ff-1183-d1b7-0a25-9ddf4274116d@gmail.com> Message-ID: <8a2dfaa7-5a79-c1dc-49d6-cbdcf4e543ce@gmail.com> On 8/13/2019 7:25 AM, Matt Riedemann wrote: > There are several compute API microversion changes that are conflicting > and will be fighting for 2.76, but I think we're trying to prioritize > this one [1] for the ironic power sync external event handling since (1) > Surya is going to be on vacation soon, (2) there is an ironic change > that depends on it which has had review [2] and (3) the nova change has > had quite a bit of review already. > > As such I think others waiting to rebase from 2.75 to 2.76 should > probably hold off until [1] is approved which should happen today or > tomorrow. > > [1] https://review.opendev.org/#/c/645611/ > [2] https://review.opendev.org/#/c/664842/ These are both approved now so let the rebasing begin! 
-- Thanks, Matt From mriedemos at gmail.com Thu Aug 15 19:24:05 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Aug 2019 14:24:05 -0500 Subject: [question][placement][rest api][403] In-Reply-To: <4cd9f6c3.7db9.16c93e52b74.Coremail.chx769467092@163.com> References: <4cd9f6c3.7db9.16c93e52b74.Coremail.chx769467092@163.com> Message-ID: <048dbc78-b205-5aea-fe90-73b0e6854946@gmail.com> On 8/15/2019 1:09 AM, 崔恒香 wrote: > This is my problem with the placement REST API. (Token and endpoint were OK) > >         {"errors": [{"status": 403, >                          "title": "Forbidden", > "detail": "Access was denied to this resource.\n\n Policy does not allow > placement:resource_providers:list to be performed.  ", >                          "request_id": > "req-5b409f22-7741-4948-be6f-ea28c2896a3f" >                         }]} This doesn't give much information. Does the token have the admin role in it? Has the placement:resource_providers:list policy rule been changed from the default (rule:admin_api)? -- Thanks, Matt From mriedemos at gmail.com Thu Aug 15 19:28:42 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Aug 2019 14:28:42 -0500 Subject: [nova] NUMA live migration is ready for review and testing In-Reply-To: References: Message-ID: On 8/9/2019 4:11 PM, Artom Lifshitz wrote: > tl;dr If you care about NUMA live migration, check out [1] and test it > in your env(s), or review it. > As I've said in IRC a few times, this feature was mentioned (at the last summit/PTG in Denver) as being critical for the next StarlingX release so I'd really hope the StarlingX community can help review and test this. I know there was some help from WindRiver in Stein which uncovered some issues, so it would be good to have that same kind of attention here. Feature freeze for Train is less than a month away (Sept 12).
> > So if you care about NUMA-aware live migration and have some spare > time and hardware (if you're in the former category I don't think I > need to explain what kind of hardware - though I'll try to answer > questions as best I can), I would greatly appreciate it if you > deployed the patches and tested them. I've done that myself, of > course, but, as at the end of Stein, I'm sure there are edge cases > that I didn't think of (though I'm selfishly hoping that there > aren't). Again the testing here with real hardware is key, and that's something I'd hope Intel/WindRiver/StarlingX folk can help with since I personally don't have a lab sitting around available for NUMA testing. Since we won't have third party CI for this feature, it's going to be important that at least someone is hitting this with a real environment, ideally with mixed Stein and Train compute services as well to make sure it behaves properly during rolling upgrades. -- Thanks, Matt From dtroyer at gmail.com Thu Aug 15 20:23:11 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 15 Aug 2019 15:23:11 -0500 Subject: [nova] NUMA live migration is ready for review and testing In-Reply-To: References: Message-ID: On Thu, Aug 15, 2019 at 2:31 PM Matt Riedemann wrote: > As I've said in IRC a few times, this feature was mentioned (at the last > summit/PTG in Denver) as being critical for the next StarlingX release > so I'd really hope the StarlingX community can help review and test > this. I know there was some help from WindRiver in Stein which uncovered > some issues, so it would be good to have that same kind of attention > here. Feature freeze for Train is less than a month away (Sept 12). StarlingX does have time built in for this testing, intending to be complete before the STX 2.0 release at the end of August. I've suggested that we need to test both Train and our Stein backport but I am not the one with the resources to allocate. 
> Again the testing here with real hardware is key, and that's something > I'd hope Intel/WindRiver/StarlingX folk can help with since I personally > don't have a lab sitting around available for NUMA testing. Since we > won't have third party CI for this feature, it's going to be important > that at least someone is hitting this with a real environment, ideally > with mixed Stein and Train compute services as well to make sure it > behaves properly during rolling upgrades. Oddly enough, in my $OTHER_DAY_JOB Intel's new Third Party CI is at the top of my list and we are getting dangerously close there in general, but this testing is unfortunately not first in line. dt -- Dean Troyer dtroyer at gmail.com From openstack at nemebean.com Thu Aug 15 20:57:36 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 15 Aug 2019 15:57:36 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190815014957.GB5923@fedora19.localdomain> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> <20190815014957.GB5923@fedora19.localdomain> Message-ID: <0754e889-1e5d-94de-1ebb-646e995eff0e@nemebean.com> On 8/14/19 8:49 PM, Ian Wienand wrote: > On Wed, Aug 14, 2019 at 12:07:01PM -0500, Ben Nemec wrote: >> I have a PoC patch up in devstack[0] to start using the openstack-server >> client. It passed the basic devstack test and looking through the logs you >> can see that openstack calls are now completing in fractions of a second as >> opposed to 2.5 to 3, so I think it's working as intended. 
> > I see this as having a couple of advantages > > * no bespoke API interfacing code to maintain > * the wrapper is custom but pretty small > * plugins can benefit by using the same wrapper > * we can turn the wrapper off and fall back to the same calls directly > with the client (also good for local interaction) > * in a similar theme, it's still pretty close to "what I'd type on the > command line to do this" which is a bit of a devstack theme > > So FWIW I'm positive on the direction, thanks! > > -i > > (some very experienced people have said "we know it's slow" and I > guess we should take advice on if this is a temporary work-around, or > an actual solution) > Okay, I've got https://review.opendev.org/#/c/676016/ passing devstack ci now and I think it's ready for initial review. I don't know if everything I'm doing will fly with the devstack folks, but the reasons why should be covered in the commit message. I'm open to suggestions on alternate ways to accomplish the same things. From corvus at inaugust.com Thu Aug 15 22:23:56 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 15 Aug 2019 15:23:56 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> (Eric Fried's message of "Thu, 15 Aug 2019 14:11:15 -0500") References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> Message-ID: <875zmym5j7.fsf@meyer.lemoncheese.net> Eric Fried writes: > - Hot tip: if you want the dynamic logs (with timestamp links and sev > filters), use the twisties. Clicking through gets you to the raw files. > The former was not obvious to me. Good point; we should work on the UI for that. > - Some in-app logs aren't working. E.g. when I try to look at > controller=>logs=>screen-n-cpu.txt.gz from [1], it redirects me to [2]. > The hover [3] has a double slash in it, not sure if that's related, but > when I try squashing to one slash, I get an error... sometimes [4]. > - When the in-app logs do render, they don't wrap. 
There's a horizontal > scroll bar, but it's at the bottom of an inner frame, so it's off the > screen most of the time and therefore not useful. (I don't have > horizontal mouse scroll capabilities; maybe I should look into that.) > - The timestamp links in app anchor the line at the "top" - which (for > me, anyway) is "underneath" the header menu (Status Projects Jobs Labels > Nodes Builds Buildsets), so I have to scroll up to get the anchored line > and a few of its successors. Thanks! Fixes for all of these (plus one more: making the log text easier to select for copy/paste) are in-flight. -Jim From smooney at redhat.com Fri Aug 16 01:20:03 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Aug 2019 02:20:03 +0100 Subject: [nova] NUMA live migration is ready for review and testing In-Reply-To: References: Message-ID: <45af5fc47a54217f6756bc854d452e7164b30757.camel@redhat.com> On Thu, 2019-08-15 at 15:23 -0500, Dean Troyer wrote: > On Thu, Aug 15, 2019 at 2:31 PM Matt Riedemann wrote: > > As I've said in IRC a few times, this feature was mentioned (at the last > > summit/PTG in Denver) as being critical for the next StarlingX release > > so I'd really hope the StarlingX community can help review and test > > this. I know there was some help from WindRiver in Stein which uncovered > > some issues, so it would be good to have that same kind of attention > > here. Feature freeze for Train is less than a month away (Sept 12). > > StarlingX does have time built in for this testing, intending to be > complete before the STX 2.0 release at the end of August. I've > suggested that we need to test both Train and our Stein backport but I > am not the one with the resources to allocate. i doubt you will be able to safely backport this to Stein as it contains RPC/object changes which would normally break things on upgrade, e.g.
if you backport this to Stein in STX 1.Y.Z, going from 1.0 to 1.Y.Z would require you to treat it like a major upgrade and upgrade all your controllers first, followed by the computes, to ensure you never generate a copy of the updated object before the nodes that receive them are updated. if you don't do that then services will start exploding. we did a partial backport of numa aware vswitch internally and had to drop all the object changes and scheduler changes and only backport the virt driver changes, as we could not figure out a safe way to backport ovo changes that would not break deployments unless you sequenced the update like a major version upgrade, which we can't assume for z releases (x.y.z). but glad to hear that in either case ye do plan to test it in some capacity. i have dual numa hardware i own that i plan to test it on personally, but the more the better. > > > Again the testing here with real hardware is key, and that's something > > I'd hope Intel/WindRiver/StarlingX folk can help with since I personally > > don't have a lab sitting around available for NUMA testing. Since we > > won't have third party CI for this feature, it's going to be important > > that at least someone is hitting this with a real environment, ideally > > with mixed Stein and Train compute services as well to make sure it > > behaves properly during rolling upgrades. > > Oddly enough, in my $OTHER_DAY_JOB Intel's new Third Party CI is at > the top of my list and we are getting dangerously close there in > general, but this testing is unfortunately not first in line. speaking of that, i see igor rebased https://review.opendev.org/#/c/652197/ - i haven't really looked at that since may, and it looks like some file permissions have changed so it's currently broken. i'm not sure if he/ye planned on taking that over or if he was just interested; either is fine.
my first party ci solution has kind of stalled since i just have not had time to work on it (given it's not part of my $OTHER_DAY_JOB), so i'm looking forward to the third party ci you are working on. if i find time to work on that again i will, but it still didn't have full parity with what the intel nfv ci was testing, as it was running with the single numa node guest we have in the gate; it would still be nice to have even basic first party testing of pinning/hugepages at some point. even though i wrote it, i don't like the fact that i was forced to use fedora with the virt-preview repos enabled to get a new enough qemu/libvirt to do even partial testing without nested virt, so i would still guess the third party ci would be more reliable since it can actually use nested virt, provided you replace the default ubuntu kernel with something based on 4.19 > > dt > From zhangbailin at inspur.com Fri Aug 16 02:44:41 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Fri, 16 Aug 2019 02:44:41 +0000 Subject: [lists.openstack.org]Re: [nova] The race for 2.76 Message-ID: > On 8/13/2019 7:25 AM, Matt Riedemann wrote: > > There are several compute API microversion changes that are > > conflicting and will be fighting for 2.76, but I think we're trying to > > prioritize this one [1] for the ironic power sync external event > > handling since (1) Surya is going to be on vacation soon, (2) there is > > an ironic change that depends on it which has had review [2] and (3) > > the nova change has had quite a bit of review already. > > > > As such I think others waiting to rebase from 2.75 to 2.76 should > > probably hold off until [1] is approved which should happen today or > > tomorrow. > > > > [1] https://review.opendev.org/#/c/645611/ > > [2] https://review.opendev.org/#/c/664842/ > These are both approved now so let the rebasing begin! "Specifying az when restore shelved server" has been updated and is now in the nova runways; please review, thanks.
Links: https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged) -- Thanks, Matt From soulxu at gmail.com Fri Aug 16 04:09:01 2019 From: soulxu at gmail.com (Alex Xu) Date: Fri, 16 Aug 2019 12:09:01 +0800 Subject: More upgrade issues with PCPUs - input wanted In-Reply-To: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> References: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> Message-ID: On Thu, Aug 15, 2019 at 8:25 PM, Stephen Finucane wrote: > tl;dr: Is breaking booting of pinned instances on Stein compute nodes > in a Train deployment an acceptable thing to do, and if not, how do we > best handle the VCPU->PCPU migration in Train? > > I've been working through the cpu-resources spec [1] and have run into > a tricky issue I'd like some input on. In short, this spec means that > pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start > consuming a new resources type, PCPU, instead of VCPU. Many things need > to change to make this happen but the key changes are: > > 1. The scheduler needs to start modifying requests for pinned instances > to request PCPU resources instead of VCPU resources > 2. The libvirt driver needs to start reporting PCPU resources > 3. The libvirt driver needs to do a reshape, moving all existing > allocations of VCPUs to PCPUs, if the instance holding that > allocation is pinned > > The first two of these steps presents an issue for which we have a > solution, but the solutions we've chosen are now resulting in this new > issue. > > * For (1), the translation of VCPU to PCPU in the scheduler means > compute nodes must now report PCPU in order for a pinned instance to > land on that host.
Since controllers are upgraded before compute > nodes and all compute nodes aren't necessarily upgraded in one go > (particularly for edge or other large or multi-cell deployments), > this can mean there will be a period of time where there are very > few or no hosts available on which to schedule pinned instances. > > * For (2), we're hampered by the fact that there is no clear way to > determine if a host is used for pinned instances or not. Because of > this, we can't determine if a host should be reporting PCPU or VCPU > inventory. > > The solution we have for the issues with (1) is to add a workaround > option that would disable this translation, allowing operators time to > upgrade all their compute nodes to report PCPU resources before > anything starts using them. For (2), we've decided to temporarily (i.e. > for one release or until configuration is updated) report both, in the > expectation that everyone using pinned instances has followed the long- > standing advice to separate hosts intended for pinned instances from > those intended for unpinned instances using host aggregates (e.g. even > if we started reporting PCPUs on a host, nothing would consume that due > to 'pinned=False' aggregate metadata or similar). These actually > benefit each other, since if instances are still consuming VCPUs then > the hosts need to continue reporting VCPUs. However, both interfere > with our ability to do the reshape. > > Normally, a reshape is a one time thing. The way we'd planned to > determine if a reshape was necessary was to check if PCPU inventory was > registered against the host and, if not, whether there were any pinned > instances on the host. If PCPU inventory was not available and there > were pinned instances, we would update the allocations for these > instances so that they would be consuming PCPUs instead of VCPUs and > then update the inventory. 
This is problematic though, because our > solution for the issue with (1) means pinned instances can continue to > request VCPU resources, which in turn means we could end up with some > pinned instances on a host consuming PCPU and other consuming VCPU. > That obviously can't happen, so we need to change tacks slightly. The > two obvious solutions would be to either (a) remove the workaround > option so the scheduler would immediately start requesting PCPUs and > just advise operators to upgrade their hosts for pinned instances asap > or (b) add a different option, defaulting to True, that would apply to > both the scheduler and compute nodes and prevent not only translation > of flavors in the scheduler but also the reporting PCPUs and reshaping > of allocations until disabled. > > The steps I'm thinking of are: 1. Upgrade the control plane; disable requesting PCPU, still request VCPU. 2. Rolling upgrade of the compute nodes; compute nodes begin to report both PCPU and VCPU, but requests still go to VCPU. 3. Enable the PCPU request; new requests now ask for PCPU. At this point, some instances are using VCPU and some instances are using PCPU on the same node, and the sum VCPU + PCPU will double the available cpu resources. The NUMATopologyFilter is responsible for stopping over-consumption of the total number of cpus. 4. Rolling update of the compute nodes' configuration to use cpu_dedicated_set, which triggers the reshape of existing VCPU consumption to PCPU consumption. New requests go to PCPU as of step 3, so no more VCPU requests arrive at this point; the rolling upgrade of the nodes gets rid of the existing VCPU consumption. 5. Done > I'm currently leaning towards (a) because it's a *lot* simpler, far > more robust (IMO) and lets us finish this effort in a single cycle, but > I imagine this could make upgrades very painful for operators if they > can't fast track their compute node upgrades.
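Step 4 in the list above amounts to a compute-node configuration change. A minimal nova.conf sketch of the before/after state (option names follow the cpu-resources spec; the CPU ranges are purely illustrative and not taken from the thread):

```ini
# Stein-era compute node hosting pinned instances (illustrative values):
[DEFAULT]
vcpu_pin_set = 4-15

# Train, after step 4: host CPUs split into a dedicated (PCPU) set
# and a shared (VCPU) set:
[compute]
cpu_dedicated_set = 4-15
cpu_shared_set = 0-3
```

As I read the spec, defining cpu_dedicated_set is what gives the libvirt driver an unambiguous signal that the host serves pinned instances, which is what makes the reshape in step 4 possible.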
(b) is more complex and > would have some constraints, chief among them being that the option > would have to be disabled at some point post-release and would have to > be disabled on the scheduler first (to prevent the mishmash of VCPU and > PCPU resource allocations) above. It also means this becomes a three > cycle effort at minimum, since this new option will default to True in > Train, before defaulting to False and being deprecated in U and finally > being removed in V. As such, I'd like some input, particularly from > operators using pinned instances in larger deployments. What are your > thoughts, and are there any potential solutions that I'm missing here? > > Cheers, > Stephen > > [1] > https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Fri Aug 16 04:43:32 2019 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Fri, 16 Aug 2019 11:43:32 +0700 Subject: [neutron][OVN] Instance not getting metadata Message-ID: Hi, I have set up OpenStack with OVN enabled (manual install) and I can create an instance, associate a floating IP, and test the ping. But my instance is not getting metadata from OpenStack. I checked the reference architecture, https://docs.openstack.org/networking-ovn/queens/admin/refarch/refarch.html - the compute nodes should have the ovn-metadata-agent installed, but I don't find any configuration document about the ovn-metadata-agent. I have configured the configuration files like this: [DEFAULT] nova_metadata_ip = 10.100.100.10 [ovs] ovsdb_connection = unix:/var/run/openvswitch/db.sock [ovn] ovn_sb_connection = tcp:10.101.101.10:6642 But the agent always fails. Full logs: http://paste.openstack.org/show/757805/ Does anyone know what is happening and how to fix this error? Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tony at bakeyournoodle.com Fri Aug 16 05:19:34 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 16 Aug 2019 15:19:34 +1000 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: <20190816051934.GD15862@thor.bakeyournoodle.com> On Thu, Aug 15, 2019 at 11:34:54AM -0400, Julia Kreger wrote: > Is there any agreement from stable-maint-core? Asking because we're > unable to modify ironic's stable maintenance group directly. Sorry. +2 +W Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mark at stackhpc.com Fri Aug 16 08:32:41 2019 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 16 Aug 2019 09:32:41 +0100 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <87wofepdfu.fsf@meyer.lemoncheese.net> References: <87y305onco.fsf@meyer.lemoncheese.net> <87wofepdfu.fsf@meyer.lemoncheese.net> Message-ID: On Thu, 15 Aug 2019 at 18:05, James E. Blair wrote: > > Hi, > > We have made the switch to begin storing all of the build logs from Zuul > in Swift. > > Each build's logs will be stored in one of 7 randomly chosen Swift > regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those > providers! > > You'll note that the links in Gerrit to the Zuul jobs now go to a page > on the Zuul web app. A lot of the features previously available on the > log server are now available there, plus some new ones. > > If you're looking for a link to a docs preview build, you'll find that > on the build page under the "Artifacts" section now. > > If you're curious about where your logs ended up, you can see the Swift > hostname under the "logs_url" row in the summary table. > > Please let us know if you have any questions or encounter any issues, > either here, or in #openstack-infra on IRC. 
One minor thing I noticed is that the emails to openstack-stable-maint list no longer reference the branch. It was previously visible in the URL, e.g. - openstack-tox-py27 https://logs.opendev.org/periodic-stable/opendev.org/openstack/networking-midonet/stable/pike/openstack-tox-py27/649bbb2/ : RETRY_LIMIT in 3m 08s However now it is not: openstack-tox-py27 https://zuul.opendev.org/t/openstack/build/464ae8b594cf4dc5b6da532c4ea179a7 : RETRY_LIMIT in 3m 31s I can see the branch if I click through to the linked Zuul build page. > > -Jim > From sfinucan at redhat.com Fri Aug 16 09:58:50 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 16 Aug 2019 10:58:50 +0100 Subject: More upgrade issues with PCPUs - input wanted In-Reply-To: References: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> Message-ID: On Fri, 2019-08-16 at 12:09 +0800, Alex Xu wrote: > Stephen Finucane 于2019年8月15日周四 下午8:25写道: > > tl;dr: Is breaking booting of pinned instances on Stein compute > > nodes > > > > in a Train deployment an acceptable thing to do, and if not, how do > > we > > > > best handle the VCPU->PCPU migration in Train? > > > > > > > > I've been working through the cpu-resources spec [1] and have run > > into > > > > a tricky issue I'd like some input on. In short, this spec means > > that > > > > pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start > > > > consuming a new resources type, PCPU, instead of VCPU. Many things > > need > > > > to change to make this happen but the key changes are: > > > > > > > > 1. The scheduler needs to start modifying requests for pinned > > instances > > > > to request PCPU resources instead of VCPU resources > > > > 2. The libvirt driver needs to start reporting PCPU resources > > > > 3. 
The libvirt driver needs to do a reshape, moving all existing > > > > allocations of VCPUs to PCPUs, if the instance holding that > > > > allocation is pinned > > > > > > > > The first two of these steps presents an issue for which we have a > > > > solution, but the solutions we've chosen are now resulting in this > > new > > > > issue. > > > > > > > > * For (1), the translation of VCPU to PCPU in the scheduler means > > > > compute nodes must now report PCPU in order for a pinned > > instance to > > > > land on that host. Since controllers are upgraded before compute > > > > nodes and all compute nodes aren't necessarily upgraded in one > > go > > > > (particularly for edge or other large or multi-cell > > deployments), > > > > this can mean there will be a period of time where there are > > very > > > > few or no hosts available on which to schedule pinned instances. > > > > > > > > * For (2), we're hampered by the fact that there is no clear way > > to > > > > determine if a host is used for pinned instances or not. Because > > of > > > > this, we can't determine if a host should be reporting PCPU or > > VCPU > > > > inventory. > > > > > > > > The solution we have for the issues with (1) is to add a workaround > > > > option that would disable this translation, allowing operators time > > to > > > > upgrade all their compute nodes to report PCPU resources before > > > > anything starts using them. For (2), we've decided to temporarily > > (i.e. > > > > for one release or until configuration is updated) report both, in > > the > > > > expectation that everyone using pinned instances has followed the > > long- > > > > standing advice to separate hosts intended for pinned instances > > from > > > > those intended for unpinned instances using host aggregates (e.g. > > even > > > > if we started reporting PCPUs on a host, nothing would consume that > > due > > > > to 'pinned=False' aggregate metadata or similar). 
These actually > > > > benefit each other, since if instances are still consuming VCPUs > > then > > > > the hosts need to continue reporting VCPUs. However, both interfere > > > > with our ability to do the reshape. > > > > > > > > Normally, a reshape is a one time thing. The way we'd planned to > > > > determine if a reshape was necessary was to check if PCPU inventory > > was > > > > registered against the host and, if not, whether there were any > > pinned > > > > instances on the host. If PCPU inventory was not available and > > there > > > > were pinned instances, we would update the allocations for these > > > > instances so that they would be consuming PCPUs instead of VCPUs > > and > > > > then update the inventory. This is problematic though, because our > > > > solution for the issue with (1) means pinned instances can continue > > to > > > > request VCPU resources, which in turn means we could end up with > > some > > > > pinned instances on a host consuming PCPU and other consuming VCPU. > > > > That obviously can't happen, so we need to change tacks slightly. > > The > > > > two obvious solutions would be to either (a) remove the workaround > > > > option so the scheduler would immediately start requesting PCPUs > > and > > > > just advise operators to upgrade their hosts for pinned instances > > asap > > > > or (b) add a different option, defaulting to True, that would apply > > to > > > > both the scheduler and compute nodes and prevent not only > > translation > > > > of flavors in the scheduler but also the reporting PCPUs and > > reshaping > > > > of allocations until disabled. > > > > > > The step I'm thinking is: > > 1. upgrade control plane, disable request PCPU, still request VCPU. > 2. rolling upgrade compute node, compute nodes begin to report both > PCPU and VCPU. But the request still add to VCPU. > 3. enabling the PCPU request, the new request is request PCPU. 
> In this point, some of instances are using VCPU, some of > instances are using PCPU on same node. And the amount VCPU + PCPU > will double the available cpu resources. The NUMATopology filter is > responsible for stop over-consuming the total number of cpu. > 4. rolling update compute node's configure to use cpu_dedicated_set, > that trigger the reshape existed VCPU consuming to PCPU consuming. > New request is going to PCPU at step3, no more VCPU request at > this point. Roll upgrade node to get rid of existed VCPU consuming. > 5. done This had been my initial plan. The issue is that by reporting both PCPU and VCPU in (2), our compute node's resource provider will now have PCPU inventory available (though it won't be used). This is problematic since "does this resource provider have PCPU inventory" is one of the questions I need to ask to determine if I should do a reshape. If I can't rely on this heuristic, I need to start querying for allocation information (so I can ask "does this resource provider have PCPU *allocations*") every time I start a compute node. I'm guessing this is expensive, since we don't do it by default. Stephen > > I'm currently leaning towards (a) because it's a *lot* simpler, far > > > > more robust (IMO) and lets us finish this effort in a single cycle, > > but > > > > I imagine this could make upgrades very painful for operators if > > they > > > > can't fast track their compute node upgrades. (b) is more complex > > and > > > > would have some constraints, chief among them being that the option > > > > would have to be disabled at some point post-release and would have > > to > > > > be disabled on the scheduler first (to prevent the mismash or VCPU > > and > > > > PCPU resource allocations) above. It also means this becomes a > > three > > > > cycle effort at minimum, since this new option will default to True > > in > > > > Train, before defaulting to False and being deprecated in U and > > finally > > > > being removed in V. 
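Stephen's heuristic, and the way option (b) breaks it, can be sketched in a few lines of plain Python. This is a stand-in with hypothetical names and data shapes, not actual nova or placement code:

```python
# Sketch of the startup-time "do we need a reshape?" check described
# above. Inputs are plain data standing in for placement inventory and
# instance queries; all names here are hypothetical.

def needs_reshape(inventories, pinned_instance_count):
    """Return True if pinned allocations must be moved VCPU -> PCPU.

    inventories: mapping of resource class -> total on this host's
        resource provider, e.g. {"VCPU": 16} or {"VCPU": 4, "PCPU": 12}.
    pinned_instance_count: number of pinned instances on the host.
    """
    if "PCPU" in inventories:
        # PCPU inventory already registered, so assume the reshape has
        # already happened. Under option (b) this assumption is wrong:
        # the host may report PCPU while pinned instances still hold
        # VCPU allocations.
        return False
    # No PCPU inventory yet: reshape only if there is something to move.
    return pinned_instance_count > 0


# The failure mode: both classes reported (option (b)) while one pinned
# instance still consumes VCPU -- the heuristic wrongly answers False.
print(needs_reshape({"VCPU": 4, "PCPU": 12}, pinned_instance_count=1))
# The case the heuristic was designed for: a VCPU-only host with a
# pinned instance genuinely needs the reshape.
print(needs_reshape({"VCPU": 16}, pinned_instance_count=1))
```

Avoiding the wrong False in the first case is exactly why the allocation-level query ("does this provider have PCPU *allocations*") would be needed instead, at the per-startup cost Stephen mentions.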
As such, I'd like some input, particularly from > > > > operators using pinned instances in larger deployments. What are > > your > > > > thoughts, and are there any potential solutions that I'm missing > > here? > > > > > > > > Cheers, > > > > Stephen > > > > > > > > [1] > > https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Aug 16 10:44:04 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Aug 2019 10:44:04 +0000 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> Message-ID: <20190816104403.txlg6sbqdroz2ghm@yuggoth.org> On 2019-08-15 16:32:55 +0200 (+0200), Pierre Riteau wrote: > Some time after I posted to the list, I started to receive > notifications. If someone fixed it, thanks a lot. [...] I had flagged your report to look into once I wasn't bouncing between airplanes, but had not done so yet. I still intend to check the MTA logs for any earlier delivery failures to your address once I get home. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 16 12:04:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Aug 2019 12:04:10 +0000 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> Message-ID: <20190816120409.t5cyb345cygytrgj@yuggoth.org> On 2019-08-14 14:48:30 +0200 (+0200), Pierre Riteau wrote: > I am reviving this thread as I have never received any email > notifications from starred projects in Storyboard, despite > enabling them multiple times. [...] 
It looks from the MTA logs like it began to send you notifications on 2019-08-14 at 14:56:23 UTC. I don't see any indication of any messages getting rejected. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From no-reply at openstack.org Fri Aug 16 12:52:54 2019 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 16 Aug 2019 12:52:54 -0000 Subject: kayobe 6.0.0.0rc1 (stein) Message-ID: Hello everyone, A new release candidate for kayobe for the end of the Stein cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/kayobe/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Stein release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/stein release branch at: https://opendev.org/openstack/kayobe/log/?h=stable/stein Release notes for kayobe can be found at: https://docs.openstack.org/releasenotes/kayobe/ From lucasagomes at gmail.com Fri Aug 16 14:20:58 2019 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Fri, 16 Aug 2019 15:20:58 +0100 Subject: [neutron][OVN] Instance not getting metadata In-Reply-To: References: Message-ID: Hi Zufar, The ovn-metadata-agent is trying to connect to the local OVSDB instance via a UNIX socket. The same socket is used by the "ovn-controller" process running on your compute nodes. For example: $ ps ax | grep ovn-controller 1640 ? S wrote: > > Hi, > > I have set up OpenSt