From anlin.kong at gmail.com  Thu Aug  1 00:01:30 2019
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Thu, 1 Aug 2019 12:01:30 +1200
Subject: [telemetry][ceilometer][gnocchi] How to configure aggregate
 for cpu_util or calculate from metrics
In-Reply-To: 
References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com>
 <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch>
 <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl>
Message-ID: 

Hi Bernd,

A lot of people have asked the same question before; unfortunately, I
don't know the answer either (we are still using an old version of
Ceilometer). The original cpu_util support has been removed from
Ceilometer in favor of Gnocchi, but AFAIK there is no doc in Gnocchi
that mentions how to achieve the same thing, and no clear answer from
the Gnocchi maintainers.

It'd be much appreciated if you could share the answer once you find
it -- or perhaps someone who has already solved the issue will chime in.

Best regards,
Lingxian Kong
Catalyst Cloud


On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch wrote:

> The message at the end of this email is some three months old. I have the
> same problem. The question is: *How to use the new rate metrics in
> Gnocchi.* I am using a Stein Devstack for my tests.
>
> For example, I need the CPU rate, formerly named *cpu_util*. I created a
> new archive policy that uses *rate:mean* aggregation and has a 1 minute
> granularity:
>
> $ gnocchi archive-policy show ceilometer-medium-rate
> +---------------------+------------------------------------------------------------------+
> | Field               | Value                                                            |
> +---------------------+------------------------------------------------------------------+
> | aggregation_methods | rate:mean, mean                                                  |
> | back_window         | 0                                                                |
> | definition          | - points: 10080, granularity: 0:01:00, timespan: 7 days, 0:00:00 |
> | name                | ceilometer-medium-rate                                           |
> +---------------------+------------------------------------------------------------------+
>
> I added the new policy to the publishers in *pipeline.yaml*:
>
> $ tail -n5 /etc/ceilometer/pipeline.yaml
> sinks:
>     - name: meter_sink
>       publishers:
>           - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift
>           - gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift
>
> After restarting all of Ceilometer, my hope was that the CPU rate would
> magically appear in the metric list. But no: All metrics are linked to
> archive policy *medium*, and looking at the details of an instance, I
> don't detect anything rate-related:
>
> $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11
> +-----------------------+---------------------------------------------------------------------+
> | Field                 | Value                                                               |
> +-----------------------+---------------------------------------------------------------------+
> ...
> | metrics               | compute.instance.booting.time: 76fac1f5-962e-4ff2-8790-1f497c99c17d |
> |                       | cpu: af930d9a-a218-4230-b729-fee7e3796944                           |
> |                       | disk.ephemeral.size: 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           |
> |                       | disk.root.size: 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                |
> |                       | memory.resident: 09efd98d-c848-4379-ad89-f46ec526c183               |
> |                       | memory.swap.in: 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                |
> |                       | memory.swap.out: 4d012697-1d89-4794-af29-61c01c925bb4               |
> |                       | memory.usage: 93eab625-0def-4780-9310-eceff46aab7b                  |
> |                       | memory: ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1                        |
> |                       | vcpus: e1c5acaf-1b10-4d34-98b5-3ad16de57a98                         |
> | original_resource_id  | ae3659d6-8998-44ae-a494-5248adbebe11                                |
> ...
> | type                  | instance                                                            |
> | user_id               | a9c935f52e5540fc9befae7f91b4b3ae                                    |
> +-----------------------+---------------------------------------------------------------------+
>
> Obviously, I am missing something. Where is the missing link? What do I
> have to do to get CPU usage rates? Do I have to create metrics? Do I have
> to ask Ceilometer to create metrics? How?
>
> Right now, no instructions seem to exist at all. If that is correct, I
> would be happy to write documentation once I understand how it works.
>
> Thanks a lot.
>
> Bernd
>
> On 5/10/2019 3:49 PM, info at dantalion.nl wrote:
>
> Hello,
>
> I am working on Watcher and we are currently changing how metrics are
> retrieved from different datasources such as Monasca or Gnocchi. Because
> of this major overhaul I would like to validate that everything is
> working correctly.
>
> Almost all of the optimization strategies in Watcher require the cpu
> utilization of an instance as metric but with newer versions of
> Ceilometer this has become unavailable.
>
> On IRC I received the information that Gnocchi could be used to
> configure an aggregate and this aggregate would then report cpu
> utilization, however, I have been unable to find documentation on how to
> achieve this.
>
> I was also notified that cpu_util is something that could be computed
> from other metrics. When reading
> https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute
> the documentation seems to agree on this as it states that cpu_util is
> measured by using a 'rate of change' transformer. But I have not been
> able to find how this can be computed.
>
> I was hoping someone could spare the time to provide documentation or
> information on how this currently is best achieved.
>
> Kind Regards,
> Corne Lukken (Dantali0n)
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From berndbausch at gmail.com  Thu Aug  1 01:16:25 2019
From: berndbausch at gmail.com (Bernd Bausch)
Date: Thu, 1 Aug 2019 10:16:25 +0900
Subject: [telemetry][ceilometer][gnocchi] How to configure aggregate
 for cpu_util or calculate from metrics
In-Reply-To: 
References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com>
 <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch>
 <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl>
Message-ID: 

Lingxian,

Thanks for "bumping" my request and keeping it alive. The reason I need
an answer: I am updating courseware to Stein that includes autoscaling
based on CPU and disk I/O rates. Looks like I am "cutting edge" :)

I don't think the problem is in the Gnocchi camp, but rather Ceilometer.
To store rates of measures in Gnocchi, the following is needed:

  * A /metric/. Raw measures are sent to the metric.
  * An /archive policy/. The metric has an archive policy.
  * The archive policy includes one or more /rate aggregates/.

My cloud has archive policies with rate aggregates, but the question is
about the first bullet: *How can I configure Ceilometer so that it
creates the corresponding metrics and sends measures to them?* In other
words, how is Ceilometer's output connected to my archive policy? From
my experience, just adding the archive policy to Ceilometer's publishers
is not sufficient.

Ceilometer's source code includes
/.../publisher/data/gnocchi_resources.yaml/, which might well be the
place where this can be configured. I am not sure how to do it though,
and this file is not documented. I can read the source, but my developer
skills are insufficient for understanding how everything fits together.
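(In case it helps others poke at this: the Gnocchi half can be exercised
without Ceilometer by creating a throwaway metric against the rate policy
and pushing measures by hand -- the metric name and values below are
arbitrary, and gnocchi-metricd needs a moment to process the measures
before rates show up:

$ gnocchi metric create --archive-policy-name ceilometer-medium-rate test-cpu
$ gnocchi measures add -m 2019-08-01T10:00:00@1000000 test-cpu
$ gnocchi measures add -m 2019-08-01T10:01:00@4000000 test-cpu
$ gnocchi measures show --aggregation rate:mean test-cpu

If rate:mean values come back here, whatever is missing is purely on the
Ceilometer side.)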
Bernd On 8/1/2019 9:01 AM, Lingxian Kong wrote: > Hi Bernd, > > There were a lot of people asked the same question before, > unfortunately, I don't know the answer either(we are still using an > old version of Ceilometer). The original cpu_util support has been > removed from Ceilometer in favor of Gnocchi, but AFAIK, there is no > doc in Gnocchi mentioned how to achieve the same thing and no clear > answer from the Gnocchi maintainers. > > It'd be much appreciated if you could find the answer in the end, or > there will be someone who has the already solved the issue. > > Best regards, > Lingxian Kong > Catalyst Cloud > > > On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch > wrote: > > The message at the end of this email is some three months old. I > have the same problem. The question is: *How to use the new rate > metrics in Gnocchi. *I am using a Stein Devstack for my tests.* > * > > For example, I need the CPU rate, formerly named /cpu_util/. I > created a new archive policy that uses /rate:mean/ aggregation and > has a 1 minute granularity: > > $ gnocchi archive-policy show ceilometer-medium-rate > +---------------------+------------------------------------------------------------------+ > | Field               | Value | > +---------------------+------------------------------------------------------------------+ > | aggregation_methods | rate:mean, > mean                                                  | > | back_window         | 0 | > | definition          | - points: 10080, granularity: 0:01:00, > timespan: 7 days, 0:00:00 | > | name                | ceilometer-medium-rate | > +---------------------+------------------------------------------------------------------+ > > I added the new policy to the publishers in /pipeline.yaml/: > > $ tail -n5 /etc/ceilometer/pipeline.yaml > sinks: >     - name: meter_sink >       publishers: >           - > gnocchi://?archive_policy=medium&filter_project=gnocchi_swift > *- > gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift* > > After restarting all of Ceilometer, my hope was that the CPU rate > would magically appear in the metric list. But no: All metrics are > linked to archive policy /medium/, and looking at the details of > an instance, I don't detect anything rate-related: > > $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 > +-----------------------+---------------------------------------------------------------------+ > | Field                 | Value | > +-----------------------+---------------------------------------------------------------------+ > ... 
> | metrics               | compute.instance.booting.time: > 76fac1f5-962e-4ff2-8790-1f497c99c17d | > |                       | cpu: af930d9a-a218-4230-b729-fee7e3796944 | > |                       | disk.ephemeral.size: > 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           | > |                       | disk.root.size: > 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                | > |                       | memory.resident: > 09efd98d-c848-4379-ad89-f46ec526c183               | > |                       | memory.swap.in : > 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                | > |                       | memory.swap.out: > 4d012697-1d89-4794-af29-61c01c925bb4               | > |                       | memory.usage: > 93eab625-0def-4780-9310-eceff46aab7b                  | > |                       | memory: > ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1 | > |                       | vcpus: > e1c5acaf-1b10-4d34-98b5-3ad16de57a98 | > | original_resource_id  | ae3659d6-8998-44ae-a494-5248adbebe11 | > ... > > | type                  | instance | > | user_id               | a9c935f52e5540fc9befae7f91b4b3ae | > +-----------------------+---------------------------------------------------------------------+ > > Obviously, I am missing something. Where is the missing link? What > do I have to do to get CPU usage rates? Do I have to create > metrics? Do//I have to ask Ceilometer to create metrics? How? > > Right now, no instructions seem to exist at all. If that is > correct, I would be happy to write documentation once I understand > how it works. > > Thanks a lot. > > Bernd > > On 5/10/2019 3:49 PM, info at dantalion.nl > wrote: >> Hello, >> >> I am working on Watcher and we are currently changing how metrics are >> retrieved from different datasources such as Monasca or Gnocchi. Because >> of this major overhaul I would like to validate that everything is >> working correctly. >> >> Almost all of the optimization strategies in Watcher require the cpu >> utilization of an instance as metric but with newer versions of >> Ceilometer this has become unavailable. >> >> On IRC I received the information that Gnocchi could be used to >> configure an aggregate and this aggregate would then report cpu >> utilization, however, I have been unable to find documentation on how to >> achieve this. >> >> I was also notified that cpu_util is something that could be computed >> from other metrics. When reading >> https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute >> the documentation seems to agree on this as it states that cpu_util is >> measured by using a 'rate of change' transformer. But I have not been >> able to find how this can be computed. >> >> I was hoping someone could spare the time to provide documentation or >> information on how this currently is best achieved. >> >> Kind Regards, >> Corne Lukken (Dantali0n) >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Thu Aug 1 01:23:37 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 1 Aug 2019 10:23:37 +0900 Subject: [rdo] [packstack] failing to setup Stein with Openvswitch/VXLAN networking In-Reply-To: References: Message-ID: <173d57de-7282-4600-d230-191b9b17ac31@gmail.com> Thanks Alfredo. Yes, a single eth0: $ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 ... 
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000     link/ether 52:54:00:dd:2c:f3 brd ff:ff:ff:ff:ff:ff     inet 192.168.1.202/24 brd 192.168.1.255 scope global eth0        valid_lft forever preferred_lft forever     inet6 240d:1a:3c5:3d00:5054:ff:fedd:2cf3/64 scope global mngtmpaddr dynamic        valid_lft 10790sec preferred_lft 10790sec     inet6 fe80::5054:ff:fedd:2cf3/64 scope link        valid_lft forever preferred_lft forever and ifcfg-eth0 is unchanged since installation (perhaps I should clean it up a little; could IPv6 be causing problems?): $ cat /etc/sysconfig/network-scripts/ifcfg-eth0 TYPE=Ethernet PROXY_METHOD=none BROWSER_ONLY=no BOOTPROTO=none DEFROUTE=yes IPV4_FAILURE_FATAL=no IPV6INIT=yes IPV6_AUTOCONF=yes IPV6_DEFROUTE=yes IPV6_FAILURE_FATAL=no IPV6_ADDR_GEN_MODE=stable-privacy NAME=eth0 UUID=75c26b23-f3fb-4d3f-a473-5354075d5a25 DEVICE=eth0 ONBOOT=yes IPADDR=192.168.1.202 PREFIX=24 GATEWAY=192.168.1.1 DNS1=192.168.1.16 DNS2=1.1.1.1 DOMAIN=home IPV6_PRIVACY=no More info: This is a VM running /CentOS 7.6.1810 /on a Fedora KVM host. eth0 is bridged. I did disable and stop network manager and firewalld on the guest. Bernd. On 7/31/2019 10:13 PM, Alfredo Moralejo Alonso wrote: > Hi, > > So, IIUC, your server has a single NIC, eth0, right? > > Could you provide the configuration file for eth0 before running > packstack?, i guess you are using ifcfg files? > > Best regards, > > Alfredo > > > On Wed, Jul 31, 2019 at 2:20 AM Bernd Bausch > wrote: > > Trying to set up a Stein cloud with Packstack. I want the > Openvswitch mech driver and VXLAN type driver. A few weeks ago, > the following invocation was successful: > > |sudo packstack --debug --allinone --default-password pw \ > --os-neutron-ovs-bridge-interfaces=br-ex:eth0 \ > --os-neutron-ml2-tenant-network-types=vxlan \ > --os-neutron-ml2-mechanism-drivers=openvswitch \ > --os-neutron-ml2-type-drivers=vxlan,flat \ > --os-neutron-l2-agent=openvswitch \ > --provision-demo-floatrange=10.1.1.0/24\ > --provision-demo-allocation-pools > '["start=10.1.1.10,end=10.1.1.50"]'\ --os-heat-install=y > --os-heat-cfn-install=y||| > > Now, it fails during network setup. My network connection to the > Packstack server is severed, and it turns out that its only > network interface /eth0 /has no IP address and is down. No bridge > exists. > > In the /network.pp.finished /file, I find various /ovs-vsctl > /commands including /add-br/, and a command /ifdown eth0 /which > fails with exit code 1 (no error message from the /ifdown /command > is logged). > > *Can somebody recommend the options required to successfully > deploy a Stein cloud based on the Openvswitch and VXLAN drivers?* > > Thanks much, > > Bernd > > > > || > > || > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Thu Aug 1 02:39:05 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 1 Aug 2019 11:39:05 +0900 Subject: [rdo] [packstack] failing to setup Stein with Openvswitch/VXLAN networking In-Reply-To: <173d57de-7282-4600-d230-191b9b17ac31@gmail.com> References: <173d57de-7282-4600-d230-191b9b17ac31@gmail.com> Message-ID: <4ab53229-9c7a-70e4-7c95-8e466d6463ca@gmail.com> Hmmmmm. I just succeeded in setting up my Packstack after removing IPv6 from the kernel and cleaning out ifgcfg-eth0. Since I don't need IPv6, I am good. 
My config: $ *cat /etc/sysconfig/network-scripts/ifcfg-eth0* TYPE=Ethernet BOOTPROTO=none DEFROUTE=yes NAME=eth0 DEVICE=eth0 ONBOOT=yes IPADDR=192.168.1.202 PREFIX=24 GATEWAY=192.168.1.1 DNS1=192.168.1.16 DNS2=1.1.1.1 DOMAIN=home IPV6INIT=no $ *cat /etc/sysctl.conf* net.ipv6.conf.eth0.disable_ipv6=1 Thanks again, Alfredo. On 8/1/2019 10:23 AM, Bernd Bausch wrote: > > Thanks Alfredo. Yes, a single eth0: > > $ ip a > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > group default qlen 1000 > ... > 2: eth0: mtu 1500 qdisc pfifo_fast > state UP group default qlen 1000 >     link/ether 52:54:00:dd:2c:f3 brd ff:ff:ff:ff:ff:ff >     inet 192.168.1.202/24 brd 192.168.1.255 scope global eth0 >        valid_lft forever preferred_lft forever >     inet6 240d:1a:3c5:3d00:5054:ff:fedd:2cf3/64 scope global > mngtmpaddr dynamic >        valid_lft 10790sec preferred_lft 10790sec >     inet6 fe80::5054:ff:fedd:2cf3/64 scope link >        valid_lft forever preferred_lft forever > > and ifcfg-eth0 is unchanged since installation (perhaps I should clean > it up a little; could IPv6 be causing problems?): > > $ cat /etc/sysconfig/network-scripts/ifcfg-eth0 > TYPE=Ethernet > PROXY_METHOD=none > BROWSER_ONLY=no > BOOTPROTO=none > DEFROUTE=yes > IPV4_FAILURE_FATAL=no > IPV6INIT=yes > IPV6_AUTOCONF=yes > IPV6_DEFROUTE=yes > IPV6_FAILURE_FATAL=no > IPV6_ADDR_GEN_MODE=stable-privacy > NAME=eth0 > UUID=75c26b23-f3fb-4d3f-a473-5354075d5a25 > DEVICE=eth0 > ONBOOT=yes > IPADDR=192.168.1.202 > PREFIX=24 > GATEWAY=192.168.1.1 > DNS1=192.168.1.16 > DNS2=1.1.1.1 > DOMAIN=home > IPV6_PRIVACY=no > > More info: This is a VM running /CentOS 7.6.1810 /on a Fedora KVM > host. eth0 is bridged. I did disable and stop network manager and > firewalld on the guest. > > Bernd. > > On 7/31/2019 10:13 PM, Alfredo Moralejo Alonso wrote: >> Hi, >> >> So, IIUC, your server has a single NIC, eth0, right? >> >> Could you provide the configuration file for eth0 before running >> packstack?, i guess you are using ifcfg files? >> >> Best regards, >> >> Alfredo >> >> >> On Wed, Jul 31, 2019 at 2:20 AM Bernd Bausch > > wrote: >> >> Trying to set up a Stein cloud with Packstack. I want the >> Openvswitch mech driver and VXLAN type driver. A few weeks ago, >> the following invocation was successful: >> >> |sudo packstack --debug --allinone --default-password pw \ >> --os-neutron-ovs-bridge-interfaces=br-ex:eth0 \ >> --os-neutron-ml2-tenant-network-types=vxlan \ >> --os-neutron-ml2-mechanism-drivers=openvswitch \ >> --os-neutron-ml2-type-drivers=vxlan,flat \ >> --os-neutron-l2-agent=openvswitch \ >> --provision-demo-floatrange=10.1.1.0/24\ >> --provision-demo-allocation-pools >> '["start=10.1.1.10,end=10.1.1.50"]'\ --os-heat-install=y >> --os-heat-cfn-install=y||| >> >> Now, it fails during network setup. My network connection to the >> Packstack server is severed, and it turns out that its only >> network interface /eth0 /has no IP address and is down. No bridge >> exists. >> >> In the /network.pp.finished /file, I find various /ovs-vsctl >> /commands including /add-br/, and a command /ifdown eth0 /which >> fails with exit code 1 (no error message from the /ifdown >> /command is logged). >> >> *Can somebody recommend the options required to successfully >> deploy a Stein cloud based on the Openvswitch and VXLAN drivers?* >> >> Thanks much, >> >> Bernd >> >> >> >> || >> >> || >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gregory.orange at pawsey.org.au Thu Aug 1 03:12:19 2019 From: gregory.orange at pawsey.org.au (Gregory Orange) Date: Thu, 1 Aug 2019 11:12:19 +0800 Subject: creating instances, haproxy eats CPU, glance eats RAM Message-ID: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> Hi everyone, We have a Queens/Rocky environment with haproxy in front of most services. Recently we've found a problem when creating multiple instances (2 VCPUs, 6GB RAM) from large images. The behaviour is the same whether we use Horizon or Terraform, so I've continued on with Terraform since it's easier to repeat attempts. WORKS: 340MB image, create 80x instances, 40 at a time FAILS: 20GB image, create 40x instances, 20 at a time ... Changed haproxy config, added "balance roundrobin" to glance, cinder, nova, neutron stanzas (there was no 'balance' config before, not sure what it would have been doing) ... WORKS sometimes[1]: 20GB image, create 40x instances, 20 at a time FAILS: 20GB image, create 80x instances, 40 at a time The failure condition: * The active haproxy server has a single core go to 100% usage (the rest are idle) * One glance server's RAM usage grows rapidly and continuously * Some instances that are building complete * Creating new instances fails (BUILD state forever) * Horizon becomes unresponsive * Ceph and Cinder don't appear to be overloaded (ceph -s, logs, system state) * These states do not recover until we take the following actions... To recover: * Kill any remaining (Terraform) attempts to launch instances * Stop haproxy on the active server * Wait a few seconds * Start haproxy again [1] When we create enough to not quite overload it, haproxy server goes to 100% on one core but recovers once the instances are (slooowly) created. The cause of the problem is not clear (e.g. from haproxy and glance logs, system state), and I'm looking for pointers on where to look or what to try next. Can you help? Thank you, Greg. From gregory.orange at pawsey.org.au Thu Aug 1 07:19:46 2019 From: gregory.orange at pawsey.org.au (Gregory Orange) Date: Thu, 1 Aug 2019 15:19:46 +0800 Subject: creating instances, haproxy eats CPU, glance eats RAM In-Reply-To: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> References: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> Message-ID: <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au> Hi again everyone, On 1/8/19 11:12 am, Gregory Orange wrote: > We have a Queens/Rocky environment with haproxy in front of most services. Recently we've found a problem when creating multiple instances (2 VCPUs, 6GB RAM) from large images. The behaviour is the same whether we use Horizon or Terraform, so I've continued on with Terraform since it's easier to repeat attempts. As a followup, I found a neutron server stuck with one of its cores consumed to 100%, and RAM and swap exhausted. After rebooting that server, everything worked fine. Over the next hour, RAM and swap was exhausted again by lots of spawning processes (a few hundred neutron-rootwrap-daemon), and oom-killer cleaned it up, resulting in a loop where it fills and empties RAM every 20-60 minutes. We have some other neutron changes planned, so for now we have left that one turned off, and the other two (which have less RAM) are working fine without these symptoms. Strange, but I'm glad to have found something, and that it's working for now. Regards, Greg. 
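P.S. For anyone wanting to try the haproxy change mentioned in my first
message: the "balance roundrobin" addition was of roughly this shape in
haproxy.cfg (the backend name, server addresses and ports here are
illustrative, not our real config):

backend glance_api
    balance roundrobin
    server controller1 192.0.2.11:9292 check
    server controller2 192.0.2.12:9292 check
    server controller3 192.0.2.13:9292 check

As far as I can tell haproxy defaults to roundrobin when no "balance"
directive is given, so this mostly makes the intent explicit.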
From ruslanas at lpic.lt  Thu Aug  1 07:57:39 2019
From: ruslanas at lpic.lt (Ruslanas Gžibovskis)
Date: Thu, 1 Aug 2019 09:57:39 +0200
Subject: creating instances, haproxy eats CPU, glance eats RAM
In-Reply-To: <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au>
References: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au>
 <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au>
Message-ID: 

When role separation was introduced in the Newton release, we divided
memory-hungry processes into 4 different VMs on 3 physical boxes:
1) Networker: all Neutron agent processes (network throughput)
2) Systemd: all services started by systemd (Neutron)
3) pcs: all services controlled by pcs (Galera + RabbitMQ)
4) horizon

Not sure how to do it now; I think I will go for VMs again, and those
VMs will include containers. It is easier to recover and rebuild the
whole OpenStack that way.

Gregory, do you have local storage as the Swift and Cinder backend?
if local; then do you use RAID? if yes; then which RAID?; fi
do you use SSD?
do you use Ceph as the backend for Cinder and Swift?
fi

Also double check where the _base image is located: is it in
/var/lib/nova/instances/_base/* ? And are flavor disks stored in
/var/lib/nova/instances ? (You can check on a compute node with:
virsh domblklist instance-00000## )

On Thu, 1 Aug 2019 at 09:25, Gregory Orange wrote:

> Hi again everyone,
>
> On 1/8/19 11:12 am, Gregory Orange wrote:
> > We have a Queens/Rocky environment with haproxy in front of most
> > services. Recently we've found a problem when creating multiple instances
> > (2 VCPUs, 6GB RAM) from large images. The behaviour is the same whether we
> > use Horizon or Terraform, so I've continued on with Terraform since it's
> > easier to repeat attempts.
>
> As a followup, I found a neutron server stuck with one of its cores
> consumed to 100%, and RAM and swap exhausted. After rebooting that server,
> everything worked fine. Over the next hour, RAM and swap was exhausted
> again by lots of spawning processes (a few hundred
> neutron-rootwrap-daemon), and oom-killer cleaned it up, resulting in a loop
> where it fills and empties RAM every 20-60 minutes. We have some other
> neutron changes planned, so for now we have left that one turned off, and
> the other two (which have less RAM) are working fine without these symptoms.
>
> Strange, but I'm glad to have found something, and that it's working for
> now.
>
> Regards,
> Greg.
>

-- 
Ruslanas Gžibovskis
+370 6030 7030
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From iwienand at redhat.com  Thu Aug  1 08:58:18 2019
From: iwienand at redhat.com (Ian Wienand)
Date: Thu, 1 Aug 2019 18:58:18 +1000
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com>
References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com>
Message-ID: <20190801085818.GD2077@fedora19.localdomain>

On Fri, Jul 26, 2019 at 04:53:28PM -0700, Clark Boylan wrote:
> Given my change shows this can be so much quicker is there any
> interest in modifying devstack to be faster here? And if so what do
> we think an appropriate approach would be?

My first concern was whether anyone considered openstack-client setting
these things up as actually part of the testing. I'd say not; comments
in [1] suggest similar views.
My second concern is that we do keep sufficient track of complexity v speed; obviously doing things in a sequential manner via a script is pretty simple to follow and as we start putting things into scripts we make it harder to debug when a monoscript dies and you have to start pulling apart where it was. With just a little json fiddling we can currently pull good stats from logstash ([2]) so I think as we go it would be good to make sure we account for the time using appropriate wrappers, etc. Then the third concern is not to break anything for plugins -- devstack has a very very loose API which basically relies on plugin authors using a combination of good taste and copying other code to decide what's internal or not. Which made me start thinking I wonder if we look at this closely, even without replacing things we might make inroads? For example [3]; it seems like SERVICE_DOMAIN_NAME is never not default, so the get_or_create_domain call is always just overhead (the result is never used). Then it seems that in the gate, basically all of the "get_or_create" calls will really just be "create" calls? Because we're always starting fresh. So we could cut out about half of the calls there pre-checking if we know we're under zuul (proof-of-concept [4]). Then we have blocks like: get_or_add_user_project_role $member_role $demo_user $demo_project get_or_add_user_project_role $admin_role $admin_user $demo_project get_or_add_user_project_role $another_role $demo_user $demo_project get_or_add_user_project_role $member_role $demo_user $invis_project If we wrapped that in something like start_osc_session ... end_osc_session which sets a variable that means instead of calling directly, those functions write their arguments to a tmp file. Then at the end call, end_osc_session does $ osc "$(< tmpfile)" and uses the inbuilt batching? If that had half the calls by skipping the "get_or" bit, and used common authentication from batching, would that help? And then I don't know if all the projects and groups are required for every devstack run? Maybe someone skilled in the art could do a bit of an audit and we could cut more of that out too? So I guess my point is that maybe we could tweak what we have a bit to make some immediate wins, before anyone has to rewrite too much? -i [1] https://review.opendev.org/673018 [2] https://ethercalc.openstack.org/rzuhevxz7793 [3] https://review.opendev.org/673941 [4] https://review.opendev.org/673936 From ralf.teckelmann at bertelsmann.de Thu Aug 1 09:11:55 2019 From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP) Date: Thu, 1 Aug 2019 09:11:55 +0000 Subject: AW: [telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics In-Reply-To: References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com> <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch> <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl> , Message-ID: Hello Bernd, Hello Lingxian, +1 You are not alone in your fruitless endeavor. Sadly, I can not come up with a solution. We are stuck at the same point. Maybe some day a dedicated member of the OpenStack community give the ceilometer guys a push to explain their service. For us, also using Stein, it is in the state of "not production ready". Cheers, Ralf T. ________________________________ Von: Bernd Bausch Gesendet: Donnerstag, 1. 
August 2019 03:16:25 An: Lingxian Kong Cc: openstack-discuss Betreff: Re: [telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics Lingxian, Thanks for "bumping" my request and keeping it alive. The reason I need an answer: I am updating courseware to Stein that includes autoscaling based on CPU and disk I/O rates. Looks like I am "cutting edge" :) I don't think the problem is in the Gnocchi camp, but rather Ceilometer. To store rates of measures in z, the following is needed: * A metric. Raw measures are sent to the metric. * An archive policy. The metric has an archive policy. * The archive policy includes one or more rate aggregates My cloud has archive policies with rate aggregates, but the question is about the first bullet: How can I configure Ceilometer so that it creates the corresponding metrics and sends measures to them. In other words, how is Ceilometer's output connected to my archive policy. From my experience, just adding the archive policy to Ceilometer's publishers is not sufficient. Ceilometer's source code includes .../publisher/data/gnocchi_resources.yaml, which might well be the place where this can be configured. I am not sure how to do it though, and this file is not documented. I can read the source, but my developer skills are insufficient for understanding how everything fits together. Bernd On 8/1/2019 9:01 AM, Lingxian Kong wrote: Hi Bernd, There were a lot of people asked the same question before, unfortunately, I don't know the answer either(we are still using an old version of Ceilometer). The original cpu_util support has been removed from Ceilometer in favor of Gnocchi, but AFAIK, there is no doc in Gnocchi mentioned how to achieve the same thing and no clear answer from the Gnocchi maintainers. It'd be much appreciated if you could find the answer in the end, or there will be someone who has the already solved the issue. Best regards, Lingxian Kong Catalyst Cloud On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch > wrote: The message at the end of this email is some three months old. I have the same problem. The question is: How to use the new rate metrics in Gnocchi. I am using a Stein Devstack for my tests. For example, I need the CPU rate, formerly named cpu_util. I created a new archive policy that uses rate:mean aggregation and has a 1 minute granularity: $ gnocchi archive-policy show ceilometer-medium-rate +---------------------+------------------------------------------------------------------+ | Field | Value | +---------------------+------------------------------------------------------------------+ | aggregation_methods | rate:mean, mean | | back_window | 0 | | definition | - points: 10080, granularity: 0:01:00, timespan: 7 days, 0:00:00 | | name | ceilometer-medium-rate | +---------------------+------------------------------------------------------------------+ I added the new policy to the publishers in pipeline.yaml: $ tail -n5 /etc/ceilometer/pipeline.yaml sinks: - name: meter_sink publishers: - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift - gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift After restarting all of Ceilometer, my hope was that the CPU rate would magically appear in the metric list. 
But no: All metrics are linked to archive policy medium, and looking at the details of an instance, I don't detect anything rate-related: $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 +-----------------------+---------------------------------------------------------------------+ | Field | Value | +-----------------------+---------------------------------------------------------------------+ ... | metrics | compute.instance.booting.time: 76fac1f5-962e-4ff2-8790-1f497c99c17d | | | cpu: af930d9a-a218-4230-b729-fee7e3796944 | | | disk.ephemeral.size: 0e838da3-f78f-46bf-aefb-aeddf5ff3a80 | | | disk.root.size: 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e | | | memory.resident: 09efd98d-c848-4379-ad89-f46ec526c183 | | | memory.swap.in: 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb | | | memory.swap.out: 4d012697-1d89-4794-af29-61c01c925bb4 | | | memory.usage: 93eab625-0def-4780-9310-eceff46aab7b | | | memory: ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1 | | | vcpus: e1c5acaf-1b10-4d34-98b5-3ad16de57a98 | | original_resource_id | ae3659d6-8998-44ae-a494-5248adbebe11 | ... | type | instance | | user_id | a9c935f52e5540fc9befae7f91b4b3ae | +-----------------------+---------------------------------------------------------------------+ Obviously, I am missing something. Where is the missing link? What do I have to do to get CPU usage rates? Do I have to create metrics? Do I have to ask Ceilometer to create metrics? How? Right now, no instructions seem to exist at all. If that is correct, I would be happy to write documentation once I understand how it works. Thanks a lot. Bernd On 5/10/2019 3:49 PM, info at dantalion.nl wrote: Hello, I am working on Watcher and we are currently changing how metrics are retrieved from different datasources such as Monasca or Gnocchi. Because of this major overhaul I would like to validate that everything is working correctly. Almost all of the optimization strategies in Watcher require the cpu utilization of an instance as metric but with newer versions of Ceilometer this has become unavailable. On IRC I received the information that Gnocchi could be used to configure an aggregate and this aggregate would then report cpu utilization, however, I have been unable to find documentation on how to achieve this. I was also notified that cpu_util is something that could be computed from other metrics. When reading https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute the documentation seems to agree on this as it states that cpu_util is measured by using a 'rate of change' transformer. But I have not been able to find how this can be computed. I was hoping someone could spare the time to provide documentation or information on how this currently is best achieved. Kind Regards, Corne Lukken (Dantali0n) -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Aug 1 11:26:19 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 1 Aug 2019 07:26:19 -0400 Subject: [ironic] Shanghai Planning Message-ID: Greetings everyone, I just wanted to remind my fellow ironic contributors that the planning and coordination etherpad for Shanghai[0] is available. Please add any topics and indicate your attendance as soon as possible. 
Thanks,
-Julia

[0]: https://etherpad.openstack.org/p/PVG-Ironic-Planning

From dtantsur at redhat.com  Thu Aug  1 11:27:48 2019
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Thu, 1 Aug 2019 13:27:48 +0200
Subject: Re: [release] (a bit belated) Release countdown for week R-11,
 July 29 - August 2
In-Reply-To: 
References: 
Message-ID: 

On 7/31/19 8:21 PM, Kendall Nelson wrote:
> Hello Everyone!
>
> Development Focus
> -----------------
> We are now past the Train-2 milestone, and entering the last development phase
> of the cycle. Teams should be focused on implementing planned work for the
> cycle. Now is a good time to review those plans and reprioritize anything if
> needed based on what progress has been made and what looks realistic to
> complete in the next few weeks.
>
> General Information
> -------------------
> The following cycle-with-intermediary deliverables have not done any
> intermediary release yet during this cycle. The cycle-with-rc release model is
> more suited for deliverables that plan to be released only once per cycle.

I respectfully disagree. I will reserve my opinion on whether cycle-with-rc
suits *anyone*, but in our case I'd prefer to have the option of releasing
something in the middle of a cycle even if we don't exercise this option very
often.

I'm not an ironic PTL, but anyway please note that I'm -1 on the change for any
of our projects.

Dmitry

> As a
> result, we will be proposing to change the release model for the following
> deliverables:
>
> blazar-dashboard
> cloudkitty-dashboard
> ec2-api
> freezer-web-ui
> freezer
> heat-agents
> heat-dashboard
> ironic-ui
> karbor-dashboard
> karbor
> kuryr-kubernetes
> magnum-ui
> manila-ui
> masakari-dashboard
> monasca-agent
> monasca-api
> monasca-ceilometer
> monasca-events-api
> monasca-kibana-plugin
> monasca-log-api
> monasca-notification
> monasca-persister
> monasca-thresh
> monasca-transform
> monasca-ui
> murano-agent
> networking-baremetal
> networking-generic-switch
> networking-hyperv
> neutron-fwaas-dashboard
> neutron-vpnaas-dashboard
> requirements
> sahara-extra
> senlin-dashboard
> solum-dashboard
> tacker-horizon
> tricircle
> vitrage-dashboard
> vitrage
> watcher-dashboard
>
> PTLs and release liaisons for each of those deliverables can either +1 the
> release model change when we get them pushed, or propose an intermediary
> release for that deliverable. In the absence of an answer by the start of
> R-10 week we'll consider that the switch to cycle-with-rc is preferable.
>
> Upcoming Deadlines & Dates
> --------------------------
> Non-client library freeze: September 05 (R-6 week)
> Client library freeze: September 12 (R-5 week)
> Train-3 milestone: September 12 (R-5 week)
>
> -Kendall (diablo_rojo) + the Release Management Team

From alex.kavanagh at canonical.com  Thu Aug  1 11:49:30 2019
From: alex.kavanagh at canonical.com (Alex Kavanagh)
Date: Thu, 1 Aug 2019 12:49:30 +0100
Subject: [horizon] Stein, multi-domain, admin, can't list users, projects
 (maybe networks) (bug#1830782)
Message-ID: 

Hi

I'm trying to resolve the issue described in bug 1830782 [1], and I'm
looking for help with how it might be fixed.

To recap the bug quickly:

1. horizon, multi-domain enabled.
2. 'admin' user is in 'admin_domain' and 'admin' project.
3. Log in as that 'admin' user in 'admin_domain'.
4. Create test domain.
5. Set domain context to 'test' domain.
6. Create a user in the 'test' domain.
7. Can't see that user in the user list.
8.
Do same for project; can't see the project. In the bug comments at [2] (comment 38) I've recorded the results after adding some debug code to keystone and horizon and came to the following tentative conclusion: 1. Horizon uses a domain scoped token for listing users when the domain context is set. In this case that token is domain-scoped to 'admin_domain' 2. Keystone at the stein release, due to a change introduced in [3] for the users (detail in [4]) filters users that are not in the domain of the domain scoped token. 3. Thus, the domains for the 'test' domain are filtered out and are not seen in the horizon dashboard. 4. I believe this is the same for projects. In order to solve this, I suspect one or more of the following would need to be done. However, I'm not familiar enough with the horizon codebase to know where to start. 1. In horizon, if the user is an admin user, then don't use a domain-scoped token for listing users, projects, or anything else. 2. Alternatively, obtain a domain scoped token for the domain context that is set. (I'm not familiar enough with keystone to know whether it's possible for the admin user to get 'any' domain scoped token for any domain???) Incidentally, the openstack CLI doesn't use domain scoped tokens for list users in a domain; I don't know whether this is an appropriate approach to take in horizon. Thanks very much in advance. Happy to chat on IRC if that's useful (I'm UTC TZ). Best regards Alex. [1] https://bugs.launchpad.net/openstack-bundles/+bug/1830782 [2] https://bugs.launchpad.net/openstack-bundles/+bug/1830782/comments/38 [3] https://review.opendev.org/#/c/647587/ [4] https://review.opendev.org/#/c/647587/3/keystone/api/users.py -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Thu Aug 1 12:20:49 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 1 Aug 2019 21:20:49 +0900 Subject: AW: [telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics In-Reply-To: References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com> <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch> <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl> Message-ID: <5af391d8-e907-0622-5275-d40297d12818@gmail.com> I have a solution. At least it works for me. Be aware that this is Devstack, but I think nothing I did to solve my problem is Devstack-specific. Also, I don't know whether there are more efficient or canonical ways to reconfigure Ceilometer. But it's good enough for me. These are my steps - you may not need all of them. * in *pipeline.yaml*, set publisher to gnocchi:// * in *the resource definition file*, define my new archive policy. By default, this file resides in the Ceilometer source tree .../ceilometer/publisher/data/gnocchi_resources.yaml, but you can use config parameter resources_definition_file to change the default (I didn't try). Example:         - name: ceilometer-medium-rate           aggregation_methods:           - mean           - rate:mean          back_window: 0          definition:            - granularity: 1 minute              timespan: 7 days            - granularity: 1 hour              timespan: 365 days * in the same resource definition file, *adjust the archive policy *of rate metrics. Example:        - resource_type: instance          metrics:          ...            
cpu:              archive_policy_name: ceilometer-medium-rate * *delete all existing metrics and resources *from Gnocchi Probably only necessary when Ceilometer is running, and not needed if you reconfigure it before its first start. This is a drastic measure, but if you do it at the beginning of a deployment, it won't cause loss of much data. Why is this required? A metric contains an archive policy that can't be changed. Thus existing metrics need to be recreated. Why remove resources? Because they reference the metrics that I removed. * *restart all Ceilometer services* This is required for re-reading the pipeline and the resource definition files. Ceilometer will create resources and metrics as needed when it sends its samples to Gnocchi. I tested this by running a CPU hogging instance and listing its measures after a few minutes:     gnocchi measures show --resource f28f6b78-9dd5-49cc-a6ac-28cb14477bf0                           --aggregation rate:mean cpu +---------------------------+-------------+---------------+     | timestamp                 | granularity |         value |     +---------------------------+-------------+---------------+     | 2019-08-01T20:23:00+09:00 |        60.0 |  1810000000.0 |     | 2019-08-01T20:24:00+09:00 |        60.0 | 39940000000.0 |     | 2019-08-01T20:25:00+09:00 |        60.0 | 40110000000.0 | This means that the instance accumulated 39940000000 nanoseconds of CPU time in the 60 seconds at 20:24:00. Note that the old /cpu_util /was expressed in percent, so that Aodh alarms and Heat autoscaling definitions must be adapted. Good luck. Hire me as Ceilometer consultant if you get stuck :) Bernd On 8/1/2019 6:11 PM, Teckelmann, Ralf, NMU-OIP wrote: > > Hello Bernd, Hello Lingxian, > > > +1 > > > You are not alone in your fruitless endeavor. Sadly, I can not come up > with a solution. > > We are stuck at the same point. > > > Maybe some day a dedicated member of the OpenStack community give the > ceilometer guys a push to explain their service. > For us, also using Stein, it is in the state of "not production ready". > > Cheers, > > Ralf T. > ------------------------------------------------------------------------ > *Von:* Bernd Bausch > *Gesendet:* Donnerstag, 1. August 2019 03:16:25 > *An:* Lingxian Kong > *Cc:* openstack-discuss > *Betreff:* Re: [telemetry][ceilometer][gnocchi] How to configure > aggregate for cpu_util or calculate from metrics > > Lingxian, > > Thanks for "bumping" my request and keeping it alive. The reason I > need an answer: I am updating courseware to Stein that includes > autoscaling based on CPU and disk I/O rates. Looks like I am "cutting > edge" :) > > I don't think the problem is in the Gnocchi camp, but rather > Ceilometer. To store rates of measures in z, the following is needed: > > * A /metric/. Raw measures are sent to the metric. > * An /archive policy/. The metric has an archive policy. > * The archive policy includes one or more /rate aggregates/ > > My cloud has archive policies with rate aggregates, but the question > is about the first bullet: *How can I configure Ceilometer so that it > creates the corresponding metrics and sends measures to them. *In > other words, how is Ceilometer's output connected to my archive > policy. From my experience, just adding the archive policy to > Ceilometer's publishers is not sufficient. > > Ceilometer's source code includes > /.../publisher/data/gnocchi_resources.yaml/, which might well be the > place where this can be configured. 
I am not sure how to do it though, > and this file is not documented. I can read the source, but my > developer skills are insufficient for understanding how everything > fits together. > > Bernd > > On 8/1/2019 9:01 AM, Lingxian Kong wrote: >> Hi Bernd, >> >> There were a lot of people asked the same question before, >> unfortunately, I don't know the answer either(we are still using an >> old version of Ceilometer). The original cpu_util support has been >> removed from Ceilometer in favor of Gnocchi, but AFAIK, there is no >> doc in Gnocchi mentioned how to achieve the same thing and no clear >> answer from the Gnocchi maintainers. >> >> It'd be much appreciated if you could find the answer in the end, or >> there will be someone who has the already solved the issue. >> >> Best regards, >> Lingxian Kong >> Catalyst Cloud >> >> >> On Wed, Jul 31, 2019 at 1:28 PM Bernd Bausch > > wrote: >> >> The message at the end of this email is some three months old. I >> have the same problem. The question is: *How to use the new rate >> metrics in Gnocchi. *I am using a Stein Devstack for my tests.* >> * >> >> For example, I need the CPU rate, formerly named /cpu_util/. I >> created a new archive policy that uses /rate:mean/ aggregation >> and has a 1 minute granularity: >> >> $ gnocchi archive-policy show ceilometer-medium-rate >> +---------------------+------------------------------------------------------------------+ >> | Field               | Value | >> +---------------------+------------------------------------------------------------------+ >> | aggregation_methods | rate:mean, mean | >> | back_window         | 0 | >> | definition          | - points: 10080, granularity: 0:01:00, >> timespan: 7 days, 0:00:00 | >> | name                | ceilometer-medium-rate | >> +---------------------+------------------------------------------------------------------+ >> >> I added the new policy to the publishers in /pipeline.yaml/: >> >> $ tail -n5 /etc/ceilometer/pipeline.yaml >> sinks: >>     - name: meter_sink >>       publishers: >>           - >> gnocchi://?archive_policy=medium&filter_project=gnocchi_swift >> *- >> gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift* >> >> After restarting all of Ceilometer, my hope was that the CPU rate >> would magically appear in the metric list. But no: All metrics >> are linked to archive policy /medium/, and looking at the details >> of an instance, I don't detect anything rate-related: >> >> $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 >> +-----------------------+---------------------------------------------------------------------+ >> | Field                 | Value | >> +-----------------------+---------------------------------------------------------------------+ >> ... 
>> | metrics               | compute.instance.booting.time: >> 76fac1f5-962e-4ff2-8790-1f497c99c17d | >> |                       | cpu: af930d9a-a218-4230-b729-fee7e3796944 | >> |                       | disk.ephemeral.size: >> 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           | >> |                       | disk.root.size: >> 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e | >> |                       | memory.resident: >> 09efd98d-c848-4379-ad89-f46ec526c183               | >> |                       | memory.swap.in >> : >> 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb | >> |                       | memory.swap.out: >> 4d012697-1d89-4794-af29-61c01c925bb4               | >> |                       | memory.usage: >> 93eab625-0def-4780-9310-eceff46aab7b | >> |                       | memory: >> ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1 | >> |                       | vcpus: >> e1c5acaf-1b10-4d34-98b5-3ad16de57a98 | >> | original_resource_id  | ae3659d6-8998-44ae-a494-5248adbebe11 | >> ... >> >> | type                  | instance | >> | user_id               | a9c935f52e5540fc9befae7f91b4b3ae | >> +-----------------------+---------------------------------------------------------------------+ >> >> Obviously, I am missing something. Where is the missing link? >> What do I have to do to get CPU usage rates? Do I have to create >> metrics? Do//I have to ask Ceilometer to create metrics? How? >> >> Right now, no instructions seem to exist at all. If that is >> correct, I would be happy to write documentation once I >> understand how it works. >> >> Thanks a lot. >> >> Bernd >> >> On 5/10/2019 3:49 PM, info at dantalion.nl >> wrote: >>> Hello, >>> >>> I am working on Watcher and we are currently changing how metrics are >>> retrieved from different datasources such as Monasca or Gnocchi. Because >>> of this major overhaul I would like to validate that everything is >>> working correctly. >>> >>> Almost all of the optimization strategies in Watcher require the cpu >>> utilization of an instance as metric but with newer versions of >>> Ceilometer this has become unavailable. >>> >>> On IRC I received the information that Gnocchi could be used to >>> configure an aggregate and this aggregate would then report cpu >>> utilization, however, I have been unable to find documentation on how to >>> achieve this. >>> >>> I was also notified that cpu_util is something that could be computed >>> from other metrics. When reading >>> https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute >>> the documentation seems to agree on this as it states that cpu_util is >>> measured by using a 'rate of change' transformer. But I have not been >>> able to find how this can be computed. >>> >>> I was hoping someone could spare the time to provide documentation or >>> information on how this currently is best achieved. >>> >>> Kind Regards, >>> Corne Lukken (Dantali0n) >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Thu Aug 1 14:33:10 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 1 Aug 2019 10:33:10 -0400 Subject: =?UTF-8?Q?Re=3A_=E2=80=8B=5Brelease=5D_=28a_bit_belated=29_Release_countdown_f?= =?UTF-8?Q?or_week_R=2D11=2C_July_29_=2D_August_2?= In-Reply-To: References: Message-ID: On Thu, Aug 1, 2019 at 7:28 AM Dmitry Tantsur wrote: > On 7/31/19 8:21 PM, Kendall Nelson wrote: > > Hello Everyone! 
> > > > Development Focus > > ----------------- > > We are now past the Train-2 milestone, and entering the last development > phase > > of the cycle. Teams should be focused on implementing planned work for > the > > cycle.Now is a good time to review those plans and reprioritize > anything if > > needed based on the what progress has been made and what looks realistic > to > > complete in the next few weeks. > > > > General Information > > ------------------- > > The following cycle-with-intermediary deliverables have not done any > > intermediary release yet during this cycle. The cycle-with-rc release > model is > > more suited for deliverables that plan to be released only once per > cycle. > > I respectfully disagree. I will reserve my opinion on whether > cycle-with-rc > suits *anyone*, but in our case I'd prefer to have an option of releasing > something in the middle of a cycle even if we don't exercise this option > way too > often. > +1. Kendall's key phrase is "plan to be released once per cycle". I agree these teams should ask themselves if they plan to only release at the end of each cycle. However, it isn't unreasonable to not plan cycle-based releases, but release once per cycle by chance. For example, I think we would just release ironic-ui whenever there was something substantial. There just (sadly) hasn't been anything substantial yet this cycle. Let's not force anyone to change the model or release a project, please. // jim > I'm not an ironic PTL, bit anyway please note that I'm -1 on the change > for any > of our projects. > > Dmitry > > > As a > > result, we will be proposing to change the release model for the > following > > deliverables: > > > > blazar-dashboard > > > > cloudkitty-dashboard > > ec2-api > > freezer-web-ui > > freezer > > heat-agents > > heat-dashboard > > ironic-ui > > karbor-dashboard > > karbor > > kuryr-kubernetes > > magnum-ui > > manila-ui > > masakari-dashboard > > monasca-agent > > monasca-api > > monasca-ceilometer > > monasca-events-api > > monasca-kibana-plugin > > monasca-log-api > > monasca-notification > > monasca-persister > > monasca-thresh > > monasca-transform > > monasca-ui > > murano-agent > > networking-baremetal > > networking-generic-switch > > networking-hyperv > > neutron-fwaas-dashboard > > neutron-vpnaas-dashboard > > requirements > > sahara-extra > > senlin-dashboard > > solum-dashboard > > tacker-horizon > > tricircle > > vitrage-dashboard > > vitrage > > watcher-dashboard > > > > > > PTLs and release liaisons for each of those deliverables can either +1 > the > > release model changewhen we get them pushed, or propose an intermediary > release > > for that deliverable. In absence of answer by the start of R-10 week > we'll > > consider that the switch to cycle-with-rc is preferable. > > > > Upcoming Deadlines & Dates > > -------------------------- > > Non-client library freeze: September 05 (R-6 week) > > Client library freeze: September 12 (R-5 week) > > Train-3 milestone: September 12 (R-5 week) > > > > -Kendall (diablo_rojo) + the Release Management Team > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donny at fortnebula.com Thu Aug 1 15:00:10 2019 From: donny at fortnebula.com (Donny Davis) Date: Thu, 1 Aug 2019 11:00:10 -0400 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190801085818.GD2077@fedora19.localdomain> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <20190801085818.GD2077@fedora19.localdomain> Message-ID: These jobs seem to timeout from every provider on the regular[1], but the issue is surely more apparent with tempest on FN. The result is quite a bit of lost time. 361 jobs that run for several hours results in a little over a 1000 hours of lost cycles. [1] http://logstash.openstack.org/#/dashboard/file/logstash.json?query=filename:%5C%22job-output.txt%5C%22%20AND%20message:%5C%22RUN%20END%20RESULT_TIMED_OUT%5C%22&from=7d On Thu, Aug 1, 2019 at 5:01 AM Ian Wienand wrote: > On Fri, Jul 26, 2019 at 04:53:28PM -0700, Clark Boylan wrote: > > Given my change shows this can be so much quicker is there any > > interest in modifying devstack to be faster here? And if so what do > > we think an appropriate approach would be? > > My first concern was if anyone considered openstack-client setting > these things up as actually part of the testing. I'd say not, > comments in [1] suggest similar views. > > My second concern is that we do keep sufficient track of complexity v > speed; obviously doing things in a sequential manner via a script is > pretty simple to follow and as we start putting things into scripts we > make it harder to debug when a monoscript dies and you have to start > pulling apart where it was. With just a little json fiddling we can > currently pull good stats from logstash ([2]) so I think as we go it > would be good to make sure we account for the time using appropriate > wrappers, etc. > > Then the third concern is not to break anything for plugins -- > devstack has a very very loose API which basically relies on plugin > authors using a combination of good taste and copying other code to > decide what's internal or not. > > Which made me start thinking I wonder if we look at this closely, even > without replacing things we might make inroads? > > For example [3]; it seems like SERVICE_DOMAIN_NAME is never not > default, so the get_or_create_domain call is always just overhead (the > result is never used). > > Then it seems that in the gate, basically all of the "get_or_create" > calls will really just be "create" calls? Because we're always > starting fresh. So we could cut out about half of the calls there > pre-checking if we know we're under zuul (proof-of-concept [4]). > > Then we have blocks like: > > get_or_add_user_project_role $member_role $demo_user $demo_project > get_or_add_user_project_role $admin_role $admin_user $demo_project > get_or_add_user_project_role $another_role $demo_user $demo_project > get_or_add_user_project_role $member_role $demo_user $invis_project > > If we wrapped that in something like > > start_osc_session > ... > end_osc_session > > which sets a variable that means instead of calling directly, those > functions write their arguments to a tmp file. Then at the end call, > end_osc_session does > > $ osc "$(< tmpfile)" > > and uses the inbuilt batching? If that had half the calls by skipping > the "get_or" bit, and used common authentication from batching, would > that help? > > And then I don't know if all the projects and groups are required for > every devstack run? Maybe someone skilled in the art could do a bit > of an audit and we could cut more of that out too? 
> > So I guess my point is that maybe we could tweak what we have a bit to > make some immediate wins, before anyone has to rewrite too much? > > -i > > [1] https://review.opendev.org/673018 > [2] https://ethercalc.openstack.org/rzuhevxz7793 > [3] https://review.opendev.org/673941 > [4] https://review.opendev.org/673936 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Thu Aug 1 16:08:10 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 1 Aug 2019 12:08:10 -0400 Subject: [tc][forum] documenting "forum 101" Message-ID: Hi everyone, We've discussed the idea of building a document which provides information on how to host a good forum session: tips and tricks, and how to make the best out of them. It seems there's content all over the place, but it's not aggregated in a single good spot that we can point attendees/hosts to. context: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-01.log.html#t2019-08-01T15:40:09 Do we have any volunteers from the TC (or even community members) who are interested in this effort? Thanks, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From amotoki at gmail.com Thu Aug 1 16:52:13 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 2 Aug 2019 01:52:13 +0900 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: References: Message-ID: On Thu, Aug 1, 2019 at 8:29 PM Dmitry Tantsur wrote: > > On 7/31/19 8:21 PM, Kendall Nelson wrote: > > Hello Everyone! > > > > Development Focus > > ----------------- > > We are now past the Train-2 milestone, and entering the last development phase > > of the cycle. Teams should be focused on implementing planned work for the > > cycle. Now is a good time to review those plans and reprioritize anything if > > needed based on what progress has been made and what looks realistic to > > complete in the next few weeks. > > > > General Information > > ------------------- > > The following cycle-with-intermediary deliverables have not done any > > intermediary release yet during this cycle. The cycle-with-rc release model is > > more suited for deliverables that plan to be released only once per cycle. > > I respectfully disagree. I will reserve my opinion on whether cycle-with-rc > suits *anyone*, but in our case I'd prefer to have an option of releasing > something in the middle of a cycle even if we don't exercise this option way too > often. > > I'm not an ironic PTL, but anyway please note that I'm -1 on the change for any > of our projects. I agree with Dmitry. The cycle-with-intermediary model allows project teams to release something at any time during a cycle when they want. On the other hand, cycle-with-intermediary means at least one release along with a release cycle. "cycle-with-rc" means such a deliverable can have only *one* release per cycle. "cycle-with-rc" might be a good option for some projects, but I think it should not be forced. If some deliverable tends to have fewer changes and it is not worth cutting a release, another option might be "independent". My understanding is that the "independent" release model does not allow us to have stable branches, so it is a thing to consider carefully when we switch some deliverable to "independent".
Talking about horizon plugins, as a neutron release liaison, neutron-fwaas/vpnaas-dashboard hit a similar situation to ironic-ui. We don't have any substantial changes so far in this cycle. I guess this situation may continue in future releases for most horizon plugins. I am not sure which release model is appropriate. Horizon adopts the cycle-with-rc model now, and horizon plugins are usually assumed to work with a specific release of horizon, so "independent" might not fit. cycle-with-intermediary or cycle-with-rc may fit, but there are cases where they have only infra-related changes in a cycle. Thanks, Akihiro Motoki > > Dmitry > > > As a > > result, we will be proposing to change the release model for the following > > deliverables: > > > > blazar-dashboard > > > > cloudkitty-dashboard > > ec2-api > > freezer-web-ui > > freezer > > heat-agents > > heat-dashboard > > ironic-ui > > karbor-dashboard > > karbor > > kuryr-kubernetes > > magnum-ui > > manila-ui > > masakari-dashboard > > monasca-agent > > monasca-api > > monasca-ceilometer > > monasca-events-api > > monasca-kibana-plugin > > monasca-log-api > > monasca-notification > > monasca-persister > > monasca-thresh > > monasca-transform > > monasca-ui > > murano-agent > > networking-baremetal > > networking-generic-switch > > networking-hyperv > > neutron-fwaas-dashboard > > neutron-vpnaas-dashboard > > requirements > > sahara-extra > > senlin-dashboard > > solum-dashboard > > tacker-horizon > > tricircle > > vitrage-dashboard > > vitrage > > watcher-dashboard > > > > > > PTLs and release liaisons for each of those deliverables can either +1 the > > release model change when we get them pushed, or propose an intermediary release > > for that deliverable. In the absence of an answer by the start of R-10 week we'll > > consider that the switch to cycle-with-rc is preferable. > > > > Upcoming Deadlines & Dates > > -------------------------- > > Non-client library freeze: September 05 (R-6 week) > > Client library freeze: September 12 (R-5 week) > > Train-3 milestone: September 12 (R-5 week) > > > > -Kendall (diablo_rojo) + the Release Management Team > > > > From mnaser at vexxhost.com Thu Aug 1 17:28:15 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 1 Aug 2019 13:28:15 -0400 Subject: [tc] monthly meeting Message-ID: Hi everyone, I've prepared the planning for the next TC meeting, which will happen on Thursday, August 8th at 1400 UTC. If you'd like to add a topic of discussion, please feel free to do so. We will be cutting off the agenda on Wednesday the 7th, so any subject added afterwards won't appear in the plan. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From doug at doughellmann.com Thu Aug 1 19:02:32 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 1 Aug 2019 15:02:32 -0400 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: References: Message-ID: > On Aug 1, 2019, at 12:52 PM, Akihiro Motoki wrote: > > On Thu, Aug 1, 2019 at 8:29 PM Dmitry Tantsur wrote: >> >> On 7/31/19 8:21 PM, Kendall Nelson wrote: >>> Hello Everyone! >>> >>> Development Focus >>> ----------------- >>> We are now past the Train-2 milestone, and entering the last development phase >>> of the cycle.
Teams should be focused on implementing planned work for the >>> cycle. Now is a good time to review those plans and reprioritize anything if >>> needed based on what progress has been made and what looks realistic to >>> complete in the next few weeks. >>> >>> General Information >>> ------------------- >>> The following cycle-with-intermediary deliverables have not done any >>> intermediary release yet during this cycle. The cycle-with-rc release model is >>> more suited for deliverables that plan to be released only once per cycle. >> >> I respectfully disagree. I will reserve my opinion on whether cycle-with-rc >> suits *anyone*, but in our case I'd prefer to have an option of releasing >> something in the middle of a cycle even if we don't exercise this option way too >> often. >> >> I'm not an ironic PTL, but anyway please note that I'm -1 on the change for any >> of our projects. > > I agree with Dmitry. The cycle-with-intermediary model allows project > teams to release > something at any time during a cycle when they want. On the other hand, > cycle-with-intermediary means at least one release along with a release cycle. > "cycle-with-rc" means such a deliverable can have only *one* release per cycle. > "cycle-with-rc" might be a good option for some projects, but I think > it should not be forced. > > If some deliverable tends to have fewer changes and it is not worth > cutting a release, > another option might be "independent". My understanding is that > the "independent" release > model does not allow us to have stable branches, so it is a > thing to consider carefully > when we switch some deliverable to "independent". That's not quite right. Independent deliverables can have stable branches, but they are not considered part of the OpenStack release because they are not managed by the release team. > > Talking about horizon plugins, as a neutron release liaison, > neutron-fwaas/vpnaas-dashboard > hit a similar situation to ironic-ui. We don't have any substantial > changes so far in this cycle. > I guess this situation may continue in future releases for most > horizon plugins. > I am not sure which release model is appropriate. > Horizon adopts the cycle-with-rc model now, and horizon plugins > are usually assumed to work with a specific release of horizon, so > "independent" might not fit. > cycle-with-intermediary or cycle-with-rc may fit, but there are > cases where they have > only infra-related changes in a cycle. There are far, far too many deliverables for our small release team to keep up with everyone following different procedures for branching, and branching incorrectly has too many bad ramifications to leave it to chance. We have therefore tried to describe several release models to meet teams' needs, and to allow the release team to automate managing the deliverables in groups that all follow the same procedures, so we end up with consistent results. The fact that most of the rest of the community has not needed to pay much attention to issues around branch management says to me that this approach has been working. As Thierry pointed out on IRC, there are reasons to require a release beyond the software having significant features or bug fixes. The reason we need a release for cycle-with-intermediary projects before the end of the cycle is that when we reach the final release deadline we need something to use as a place to create the stable branch (we only branch from tagged releases).
In the past, we used the last release from the previous cycle as a fallback when teams missed other cycle deadlines. That resulted in creating a new stable branch that had none of the bug fixes or CI changes that had been on master, and which was therefore broken and required extra effort to fix. So, we now ask for an early release to give us a relatively recent one from the current cycle, rather than using the final release from the previous cycle. The alternative, using the cycle-with-rc release model, means that the release team will automatically generate release candidates and a final release for the team. In cases where the team does not intend to release more than one version in a cycle, this is easier for the project team and not much more work for the release team, since the deliverable is handled as part of the batch of all similar deliverables. Updating the release model is the default when there are no releases, because it reflects what is actually happening with the deliverable and the release team can manage the change on its own. Kendall's email is the notification which is supposed to trigger the conversation for each deliverable so that project teams can decide how to proceed down one of the two paths proposed. Doing nothing isn't really an option, though. So, if you have a cycle-with-intermediary deliverable with changes that you haven't considered "substantial" enough to trigger a release previously, and you do not want to change the release model, this is the point at which you should do a release anyway to avoid issues at the end of the cycle. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Thu Aug 1 20:10:16 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 1 Aug 2019 16:10:16 -0400 Subject: [openstack-ansible] Shanghai Summit Planning Message-ID: Hey everyone! Here's the link to the Etherpad for this year's Shanghai summit initial planning. You can put your name down if you're attending and also write down your discussion topic ideas. Looking forward to seeing you there! https://etherpad.openstack.org/p/PVG-OSA-PTG Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Thu Aug 1 20:11:11 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 1 Aug 2019 16:11:11 -0400 Subject: [tc] Shanghai Summit Planning Message-ID: Hey everyone! Here's the link to the Etherpad for this year's Shanghai summit initial planning. You can put your name down if you're attending and also write down your discussion topic ideas. Looking forward to seeing you there! https://etherpad.openstack.org/p/PVG-TC-PTG Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From kennelson11 at gmail.com Fri Aug 2 00:49:49 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 1 Aug 2019 17:49:49 -0700 Subject: Re: [tc][forum] documenting "forum 101" In-Reply-To: References: Message-ID: I am happy to help review/comment on it (assuming it's going to live in a repo somewhere) and could probably help brainstorm/draft something, but I'd rather not be the first one to raise a hand.
Also, once it's done, we should probably point to it from here: https://wiki.openstack.org/wiki/Forum#Forum_tips -Kendall (diablo_rojo) On Thu, Aug 1, 2019 at 9:12 AM Mohammed Naser wrote: > Hi everyone, > > We've discussed the idea of building a document which provides > information on how to host a good forum session: tips and tricks, and how > to make the best out of them. It seems there's content all over the > place, but it's not aggregated in a single good spot that we can point > attendees/hosts to. > > context: > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-01.log.html#t2019-08-01T15:40:09 > > Do we have any volunteers from the TC (or even community members) who > are interested in this effort? > > Thanks, > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Fri Aug 2 01:36:48 2019 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 1 Aug 2019 20:36:48 -0500 Subject: [Security SIG] Weekly Newsletter - Aug 1st 2019 Message-ID: Apologies for the lack of a newsletter last week; back to regularly scheduled programming. #Week of: 01 Aug 2019 - Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting - Weekly on Thursday at 1500 UTC in #openstack-meeting - Agenda: https://etherpad.openstack.org/p/security-agenda - https://security.openstack.org/ - https://wiki.openstack.org/wiki/Security-SIG #Meeting Notes - Summary: http://eavesdrop.openstack.org/meetings/security/2019/security.2019-08-01-15.00.html - Security Guide Docs Update - The first set of changes has been made - https://review.opendev.org/#/q/project:openstack/security-doc - Decided to link directly to the keystone federation setup page instead of maintaining an out-of-date copy - Currently looking to update the various checklists for each service - Nova/Cinder Policy - Currently checking out inconsistencies between the documentation on generating policy files for both nova and cinder - https://review.opendev.org/#/c/673349/ # VMT Reports - A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/ - IFLA_BR_AGEING_TIME of 0 causes flooding across bridges: https://bugs.launchpad.net/os-vif/+bug/1837252 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rony.khan at brilliant.com.bd Fri Aug 2 05:01:19 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Fri, 2 Aug 2019 11:01:19 +0600 Subject: Openstack IPv6 neutron configuration Message-ID: <027e01d548ef$543f5d00$fcbe1700$@brilliant.com.bd> Hi, We already have IPv4 VXLAN in our OpenStack deployment. Now we want to add IPv6 for tenant networks. We are looking for help on how to configure an IPv6 tenant network in OpenStack Neutron. Kindly help me understand how IPv6 packets flow, and please point me to some documents with network diagrams. Thanks & B'Rgds, Rony -------------- next part -------------- An HTML attachment was scrubbed...
URL: From skaplons at redhat.com Fri Aug 2 07:16:20 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 2 Aug 2019 09:16:20 +0200 Subject: Re: Openstack IPv6 neutron configuration In-Reply-To: <027e01d548ef$543f5d00$fcbe1700$@brilliant.com.bd> References: <027e01d548ef$543f5d00$fcbe1700$@brilliant.com.bd> Message-ID: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> Hi, In tenant networks, IPv6 packets go the same way as IPv4 packets. There are no differences between IPv4 and IPv6 AFAIK. In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html you can find some deployment examples and explanations for when the OVS mechanism driver is used, and https://docs.openstack.org/neutron/latest/admin/deploy-lb.html is a similar doc for the linuxbridge driver. There are differences in e.g. how DHCP is handled for IPv6. Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. > On 2 Aug 2019, at 07:01, Md. Farhad Hasan Khan wrote: > > Hi, > We already have IPv4 VXLAN in our OpenStack deployment. Now we want to add IPv6 for tenant networks. We are looking for help on how to configure an IPv6 tenant network in OpenStack Neutron. Kindly help me understand how IPv6 packets flow, and please point me to some documents with network diagrams. > > Thanks & B'Rgds, > Rony — Slawek Kaplonski Senior software engineer Red Hat From frickler at x-ion.de Fri Aug 2 08:10:15 2019 From: frickler at x-ion.de (Jens Harbott) Date: Fri, 02 Aug 2019 10:10:15 +0200 Subject: Re: Openstack IPv6 neutron configuration In-Reply-To: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> Message-ID: <62-5d43f000-5-50968180@101299267> On Friday, August 02, 2019 09:16 CEST, Slawek Kaplonski wrote: > Hi, > > In tenant networks, IPv6 packets go the same way as IPv4 packets. There are no differences between IPv4 and IPv6 AFAIK. > In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html you can find some deployment examples and explanations for when the OVS mechanism driver is used, and https://docs.openstack.org/neutron/latest/admin/deploy-lb.html is a similar doc for the linuxbridge driver. For private networking this is true. If you want public connectivity with IPv6, however, you need to be aware that there is no SNAT and there are no floating IPs with IPv6. Instead you need to assign globally routable IPv6 addresses directly to your tenant subnets and use address scopes plus neutron-dynamic-routing in order to make sure that these addresses do indeed get routed to the internet. I have written a small guide on how to do this [1]; feedback is welcome. [1] https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/11/neutron-pike-ipv6.html > There are differences in e.g. how DHCP is handled for IPv6. Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. Also noting that the good reference article at the end of this doc has sadly disappeared, though you can still find it via the web archives. See also https://review.opendev.org/674018 From dtantsur at redhat.com Fri Aug 2 08:10:38 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 2 Aug 2019 10:10:38 +0200 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: References: Message-ID:
On 8/1/19 9:02 PM, Doug Hellmann wrote: > > >> On Aug 1, 2019, at 12:52 PM, Akihiro Motoki > > wrote: >> >> On Thu, Aug 1, 2019 at 8:29 PM Dmitry Tantsur > > wrote: >>> >>> On 7/31/19 8:21 PM, Kendall Nelson wrote: >>>> Hello Everyone! >>>> >>>> Development Focus >>>> ----------------- >>>> We are now past the Train-2 milestone, and entering the last development phase >>>> of the cycle. Teams should be focused on implementing planned work for the >>>> cycle.Now is  a good time to review those plans and reprioritize anything if >>>> needed based on the what progress has been made and what looks realistic to >>>> complete in the next few weeks. >>>> >>>> General Information >>>> ------------------- >>>> The following cycle-with-intermediary deliverables have not done any >>>> intermediary release yet during this cycle. The cycle-with-rc release model is >>>> more suited for deliverables that plan to be released only once per cycle. >>> >>> I respectfully disagree. I will reserve my opinion on whether cycle-with-rc >>> suits *anyone*, but in our case I'd prefer to have an option of releasing >>> something in the middle of a cycle even if we don't exercise this option way too >>> often. >>> >>> I'm not an ironic PTL, bit anyway please note that I'm -1 on the change for any >>> of our projects. >> >> I agree with Dmitry. cycle-with-intermediary model allows project >> teams to release >> somethings at any time during a release when they want. On the other hand, >> cycle-with-intermediary means at least one release along with a release cycle. >> "cycle-with-rc" means such deliverable can only *one* release per cycle. >> "cycle-with-rc" might be a good option for some projects but I think >> it is not forced. >> >> If some deliverable tends to have less changes and it is not worth >> cutting a release, >> another option might be "independent". My understanding is that >> "independent" release >> model does not allow us to have stable branches, so it might be a >> thing considered carefully >> when we switch some deliverable to "independent”. > > That’s not quite right. Independent deliverables can have stable branches, but > they are not considered part of the OpenStack release because they are not > managed by the release team. > >> >> Talking about horizon plugins, as a neutron release liaison, >> neutron-fwaas/vpnaas-dashboard >> hit similar situation  to ironic-ui. we don't have any substantial >> changes till now in this cycle. >> I guess this situation may continues in further releases in most >> horizon plugins. >> I am not sure which release model is appropriate. >> horizon adopts release-with-rc model now and horizon plugins >> are usually assumed to work with a specific release of horizon, so >> "independent" might not fit. >> release-with-intermediary or release-with-rc may fit, but there are >> cases where they have >> only infra related changes in a cycle. > > There are far far too many deliverables for our small release team to keep up > with everyone following different procedures for branching, and branching > incorrectly has too many bad ramifications to leave it to chance. We have > therefore tried to describe several release models to meet teams’ needs, and to > allow the release team to automate managing the deliverables in groups that all > follow the same procedures so we end up with consistent results. The fact that > most of the rest of the community have not needed to pay much attention to > issues around branch management says to me that this approach has been working. 
> > As Thierry pointed out on IRC, there are reasons to require a release beyond the > software having significant features or bug fixes. The reason we need a release > for cycle-with-intermediary projects before the end of the cycle is that when we > reach the final release deadline we need something to use as a place to create > the stable branch (we only branch from tagged releases). In the past, we used > the last release from the previous cycle as a fallback when teams missed other > cycle deadlines. That resulted in creating a new stable branch that had none of > the bug fixes or CI changes that had been on master, and which was therefore > broken and required extra effort to fix. So, we now ask for an early release to > give us a relatively recent one from the current cycle, rather than using the > final release from the previous cycle. > > The alternative, using the cycle-with-rc release model, means that the release > team will automatically generate release candidates and a final release for the > team. In cases where the team does not intend to release more than one version > in a cycle, this is easier for the project team and not much more work for the > release team, since the deliverable is handled as part of the batch of all > similar deliverables. Updating the release model is the default when there are > no releases, because it reflects what is actually happening with the deliverable > and the release team can manage the change on its own. Kendall's email is > the notification which is supposed to trigger the conversation for each > deliverable so that project teams can decide how to proceed down one of the two > paths proposed. Doing nothing isn't really an option, though. > > So, if you have a cycle-with-intermediary deliverable with changes that you > haven't considered "substantial" enough to trigger a release previously, and you > do not want to change the release model, this is the point at which you should > do a release anyway to avoid issues at the end of the cycle. > > Doug > From laszlo.budai at gmail.com Fri Aug 2 08:53:14 2019 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Fri, 2 Aug 2019 11:53:14 +0300 Subject: [nova] local ssd disk performance Message-ID: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Hello all, we have a problem with the performance of disk IO in a KVM instance. We are trying to provision VMs with high-performance SSDs. We have investigated different possibilities, with different results ... 1. configure Nova to use local LVM storage (images_type = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). 2. use Cinder with the LVM backend and instance locality; we could migrate the instances, but the performance is less than half of the previous case. 3. mount the SSD on /var/lib/nova/instances and use images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_type = lvm performance and read performance is ~65% of the LVM case. Do you have any ideas to improve the performance for case 2 or 3, which allow migration?
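For anyone wanting to reproduce the setups Laszlo compares, the knob in question lives in the [libvirt] section of nova.conf; a minimal sketch, where the option names are nova's libvirt driver options and the volume group name is just an example (the commented lines show the option 3 variant):

    [libvirt]
    # option 1: LVM-backed local ephemeral disks
    images_type = lvm
    images_volume_group = nova-ssd-vg

    # option 3: raw files under /var/lib/nova/instances
    # images_type = raw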
Kind regards, Laszlo From thierry at openstack.org Fri Aug 2 09:33:30 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 11:33:30 +0200 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: References: Message-ID: <9397f0b9-df46-b2ef-906b-7eba7660784d@openstack.org> The release management team discussed this topic at the meeting yesterday. The current process works well in the case where you *know* you will only do one release (release-once), or you *know* you will do more than one release (release-often). We agree that it does not handle well the case where you actually have no idea how many releases you will do (release-if-needed). We need to add a bit of flexibility there, but: - the release team still needs to use a very limited number of standard release models, and know as much as possible in advance. We handle hundreds of OpenStack deliverables; we can't have everyone use their own release variant. - we don't want to disconnect project teams from their releases (we still want teams to trigger release points and feel responsible for the resulting artifact). Here is the proposal we came up with: - The general idea is, by milestone-2, you should have picked your release model. If you plan to release-once, you should use the cycle-with-rcs model. If you plan to release-often, you should be cycle-with-intermediary. In the hopefully rare case where you have no idea and would like to release-if-needed, continue to read. - Between milestone-2 and milestone-3, we look up cycle-with-intermediary things that have not done a release yet. For those, we propose a switch to cycle-with-rcs, and use that to start a discussion.
-- Thierry Carrez (ttx) From thierry at openstack.org Fri Aug 2 09:59:34 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 11:59:34 +0200 Subject: [tc][forum] Shanghai Forum selection committee Message-ID: <5c6896d1-6e86-ad66-5351-35fefdac08f5@openstack.org> Hi, TC members! We need two TC members to serve on the Shanghai forum selection committee, and help select, refine and potentially merge the forum session proposals from the wider community. Beyond encouraging people to submit proposals, the bulk of the selection committee work happens after the submission deadline (planned for Sept 16th) and the Forum program final selection (planned for Oct 7th). Since we'll have TC renewal elections in progress, it's simpler to pick between members that are not standing for reelection in September. That would be: asettle mugsie jroll mnaser ricolin, ttx and zaneb. Anyone interested? -- Thierry Carrez (ttx) From thierry at openstack.org Fri Aug 2 12:17:42 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 14:17:42 +0200 Subject: [neutron][release] neutron-interconnection release in Train Message-ID: <2acc2939-77f4-b5cb-bbb8-b87b991ad38d@openstack.org> Hi Neutron folks, We are now past the Train membership freeze[1] and neutron-interconnection is not listed as a train deliverable yet. Unless you act very quickly (and add a deliverables/train/neutron-interconnection.yaml file to the openstack/releases repository), we will not include a release of neutron-interconnection in OpenStack Train. This may just be fine, for example if neutron-interconnection needs more time before a release, or if it is released independently from OpenStack releases. [1] https://releases.openstack.org/train/schedule.html#t-mf -- Thierry Carrez (ttx) From thierry at openstack.org Fri Aug 2 12:17:44 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 14:17:44 +0200 Subject: [winstackers][release] compute-hyperv release in Train Message-ID: <3b557e42-25ea-4c74-d119-3815cdb95a0b@openstack.org> Hi winstackers, We are now past the Train membership freeze[1] and compute-hyperv is not listed as a train deliverable yet. Unless you act very quickly (and add a deliverables/train/compute-hyperv.yaml file to the openstack/releases repository), we will not include a release of compute-hyperv in OpenStack Train. This may just be fine, for example if compute-hyperv needs more time before a release, or if it is released independently from OpenStack releases. [1] https://releases.openstack.org/train/schedule.html#t-mf -- Thierry Carrez (ttx) From amotoki at gmail.com Fri Aug 2 12:29:17 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 2 Aug 2019 21:29:17 +0900 Subject: Openstack IPv6 neutron confiuraton In-Reply-To: <62-5d43f000-5-50968180@101299267> References: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> <62-5d43f000-5-50968180@101299267> Message-ID: At a quick look through Jens's guide, it looks like a really nice tutorial to follow. External connectivity is really an important point on IPv6 networking with neutron. I think my presentation at Boston summit two years ago still works [1]. This is based on my experience when I was involved in IPv6 POC with our customers (around Autumn in 2016). While Jens's guide covers detail commands for all, mine helps understanding some backgrounds on neutron with IPv6. 
[1] https://www.slideshare.net/ritchey98/openstack-neutron-ipv6-lessons Thanks, Akihiro Motoki (irc: amotoki) On Fri, Aug 2, 2019 at 5:14 PM Jens Harbott wrote: > > On Friday, August 02, 2019 09:16 CEST, Slawek Kaplonski wrote: > > > Hi, > > > > In tenant networks IPv6 packets are going same way as IPv4 packets. There is no differences between IPv4 and IPv6 AFAIK. > > In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You can find some deployment examples and explanation when ovs mechanism driver is used and in https://docs.openstack.org/neutron/latest/admin/deploy-lb.html there is similar doc for linuxbridge driver. > > For private networking this is true, if you want public connectivity with IPv6, you need to be aware that there is no SNAT and no floating IPs with IPv6. Instead you need to assign globally routable IPv6 addresses directly to your tenant subnets and use address-scopes plus neutron-dynamic-routing in order to make sure that these addresses get indeed routed to the internet. I have written a small guide how to do this[1], feedback is welcome. > > [1] https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/11/neutron-pike-ipv6.html > > > There are differences with e.g. how DHCP is handled for IPv6. Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. > > Also noting that the good reference article at the end of this doc sadly has disappeared, though you can still find it via the web archives. See also https://review.opendev.org/674018 > > From cdent+os at anticdent.org Fri Aug 2 12:37:41 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 2 Aug 2019 13:37:41 +0100 (BST) Subject: [placement] update 19-30 Message-ID: HTML: https://anticdent.org/placement-update-19-30.html Pupdate 19-30 is brought to you by the letter P for Performance. # Most Important The main things on the Placement radar are implementing Consumer Types and cleanups, performance analysis, and documentation related to nested resource providers. # What's Changed * [os-traits 0.16.0 was released](https://review.opendev.org/673294) was released, with a corresponding [canary update](https://review.opendev.org/673964). * The [ProviderIds namedtuple was removed](https://review.opendev.org/673788). This is the first in a series of performance optimizations discovered as part of the analysis described below in the Cleanup section. # Stories/Bugs (Numbers in () are the change since the last pupdate.) There are 23 (2) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 3 (1) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 5 (0) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 11 (1) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 4 (0) are [docs](https://storyboard.openstack.org/#!/worklist/637). If you're interested in helping out with placement, those stories are good places to look. * Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 17 (0). * Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 4 (0). # osc-placement osc-placement is currently behind by 12 microversions. * Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results. 
* Adds a new '--amend' option which can update resource provider inventory without requiring the user to pass a full replacement for inventory # Main Themes ## Consumer Types Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting. * A WIP, as microversion 1.37, has started. ## Cleanup Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things. I started some performance analysis this week. Initially I started working with [placement master in a container](https://anticdent.org/profiling-placement-in-docker.html) but as I started making changes I moved back to [container-less](https://anticdent.org/profiling-wsgi-apps.html). What I discovered was that there is quite a bit of redundancy in the code in the `objects` package that I was able to remove. For example we were creating at least twice as many ProviderSummary objects than required in a situation with multiple request groups. It's likely there would have been more duplicates with more request groups. That's improved in [this change](https://review.opendev.org/674254), which is at the end of a stack of several other like-minded improvements. The improvements in that stack will not be obvious until the [more complex nested topology](https://review.opendev.org/#/c/673513/) is generally available. My analysis was based on that topology. Not to put too fine a point on it, but this kind of incremental analysis and improvement is something I think we (the we that is the community of OpenStack) should be doing far more often. It is _incredibly_ revealing about how the system works and opportunities for making the code both work better and be easier to maintain. One outcome of this work will be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is use more web servers and more database servers, but that's often very wasteful. # Other Placement Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open). There is one [os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And two [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open). # Other Service Users New discoveries are added to the end. Merged stuff is removed. Anything that has had no activity in 4 weeks has been removed. 
* Nova: nova-manage: heal port allocations * Cyborg: Placement report * helm: add placement chart * libvirt: report pmem namespaces resources by provider tree * Nova: Remove PlacementAPIConnectFailure handling from AggregateAPI * Nova: WIP: Add a placement audit command * zun: [WIP] Use placement for unified resource management * Kayobe: Build placement images by default * blazar: Fix placement operations in multi-region deployments * Nova: libvirt: Start reporting PCPU inventory to placement * Nova: support move ops with qos ports * Nova: get_ksa_adapter: nix by-service-type confgrp hack * Blazar: Create placement client for each request * nova: Support filtering of hosts by forbidden aggregates * blazar: Send global_request_id for tracing calls * Nova: Update HostState.\*\_allocation_ratio earlier * tempest: Add placement API methods for testing routed provider nets * openstack-helm: Build placement in OSH-images * Correct global_request_id sent to Placement # End I started working with approximately 20,000 providers this week. Only 980,000 to go. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From massimo.sgaravatto at gmail.com Fri Aug 2 13:20:59 2019 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 2 Aug 2019 15:20:59 +0200 Subject: [ops] [nova] Problems migrating an instance with libvirt if the image got deleted ? Message-ID: Hi, I remember I had problems in the past trying to resize or migrate instances launched using an image that was then deleted (I use KVM as the hypervisor). This is indeed confirmed e.g. here: https://storyboard.openstack.org/#!/story/2004892 https://docs.openstack.org/operations-guide/ops-user-facing-operations.html#deleting-images Now I am not able to reproduce the problem anymore, i.e.: 1- I create an image 2- I launch an instance using this image 3- I delete the image 4- I migrate the instance (nova migrate From jim at jimrollenhagen.com Fri Aug 2 13:57:18 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 2 Aug 2019 09:57:18 -0400 Subject: Re: [release] (a bit belated) Release countdown for week R-11, July 29 - August 2 In-Reply-To: <9397f0b9-df46-b2ef-906b-7eba7660784d@openstack.org> References: <9397f0b9-df46-b2ef-906b-7eba7660784d@openstack.org> Message-ID:
In the hopefully rare case where you have no > idea and would like to release-if-needed, continue to read. > > - Between milestone-2 and milestone-3, we look up > cycle-with-intermediary things that have not done a release yet. For > those, we propose a switch to cycle-with-rcs, and use that to start a > discussion. > > At that point four things can happen: > > (1) you realize you could do an intermediary release, and do one now. > Patch to change release model is abandoned. > > (2) you realize you only want to do one release this cycle, and +1 the > patch. > > (3) you still have no idea where you're going for this deliverable this > cycle and would like to release as-needed: you -1 the patch. You > obviously commit to producing a release before RC1 freeze. If by RC1 > freeze we still have no release, we'll force one. > > (4) you realize that the deliverable should be abandoned, or should be > disconnected from the "OpenStack" release and be made independent, or > some other solution. You -1 the patch and propose an alternative. > > In all cases that initial patch is the occasion to raise the discussion > and cover that blind spot, well in advance of the final weeks of the > release where we don't have time to handle differently each of our > hundreds of deliverables. > This process seems reasonable, thanks Thierry! :) // jim > > Dmitry Tantsur wrote: > > Have you considered allowing intermediary releases with cycle-with-rc? > > Essentially combining the two models into one? > > You really only have two different scenarios. > > A- the final release is more usable and important than the others > B- the final release is just another release, just happens to have a > stable branch cut from it > > In scenario (A), you use RCs to apply more care and make sure the one > and only release works well. You can totally do other "releases" during > the cycle, but since those are not using RCs and are not as carefully > vetted, they use "beta" numbering. > > In scenario (B), all releases are equally important and usable. There is > no reason to use RCs for one and not the others. > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Aug 2 13:58:42 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 2 Aug 2019 15:58:42 +0200 Subject: [neutron][release] neutron-interconnection release in Train In-Reply-To: <2acc2939-77f4-b5cb-bbb8-b87b991ad38d@openstack.org> References: <2acc2939-77f4-b5cb-bbb8-b87b991ad38d@openstack.org> Message-ID: <18632AA7-E5D1-4191-ACB5-077D09090503@redhat.com> Hi, I don’t think that there is anything to release in this project currently. It has only some basic stuff and it’s not functional yet. Please check commits history for the project: https://opendev.org/openstack/neutron-interconnection/commits/branch/master > On 2 Aug 2019, at 14:17, Thierry Carrez wrote: > > Hi Neutron folks, > > We are now past the Train membership freeze[1] and neutron-interconnection is not listed as a train deliverable yet. Unless you act very quickly (and add a deliverables/train/neutron-interconnection.yaml file to the openstack/releases repository), we will not include a release of neutron-interconnection in OpenStack Train. > > This may just be fine, for example if neutron-interconnection needs more time before a release, or if it is released independently from OpenStack releases. 
> > [1] https://releases.openstack.org/train/schedule.html#t-mf > > -- > Thierry Carrez (ttx) > — Slawek Kaplonski Senior software engineer Red Hat From jim at jimrollenhagen.com Fri Aug 2 13:59:00 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 2 Aug 2019 09:59:00 -0400 Subject: Re: [tc][forum] Shanghai Forum selection committee In-Reply-To: <5c6896d1-6e86-ad66-5351-35fefdac08f5@openstack.org> References: <5c6896d1-6e86-ad66-5351-35fefdac08f5@openstack.org> Message-ID: On Fri, Aug 2, 2019 at 6:07 AM Thierry Carrez wrote: > Hi, TC members! > > We need two TC members to serve on the Shanghai forum selection > committee, and help select, refine and potentially merge the forum > session proposals from the wider community. > > Beyond encouraging people to submit proposals, the bulk of the selection > committee work happens between the submission deadline (planned for > Sept > 16th) and the Forum program final selection (planned for Oct 7th). > > Since we'll have TC renewal elections in progress, it's simpler to pick > from members that are not standing for reelection in September. That > would be: asettle, mugsie, jroll, mnaser, ricolin, ttx and zaneb. > > Anyone interested? I'm happy to help, but won't be attending the forum. Is that okay? // jim > > -- > Thierry Carrez (ttx) > From thierry at openstack.org Fri Aug 2 14:44:48 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 2 Aug 2019 16:44:48 +0200 Subject: Re: [tc][forum] Shanghai Forum selection committee In-Reply-To: References: <5c6896d1-6e86-ad66-5351-35fefdac08f5@openstack.org> Message-ID: <3c79e99b-ed7b-980b-aadd-d222890fefb0@openstack.org> Jim Rollenhagen wrote: > I'm happy to help, but won't be attending the forum. Is that okay? Sure, that makes you a disinterested party :) -- Thierry Carrez (ttx) From mriedemos at gmail.com Fri Aug 2 15:21:02 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 2 Aug 2019 10:21:02 -0500 Subject: Re: [ops] [nova] Problems migrating an instance with libvirt if the image got deleted ? In-Reply-To: References: Message-ID: On 8/2/2019 8:20 AM, Massimo Sgaravatto wrote: > Now I am not able to reproduce the problem anymore, i.e.: I think the bug was just fixed [1]. [1] https://review.opendev.org/#/q/Id0f05bb1275cc816d98b662820e02eae25dc57a3 The commit message on that change says live migration, but the code that was changed is used by cold migration flows as well, so those likely benefited from the fix. -- Thanks, Matt From mriedemos at gmail.com Fri Aug 2 15:27:47 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 2 Aug 2019 10:27:47 -0500 Subject: Re: [nova] local ssd disk performance In-Reply-To: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: <21341465-4761-62c3-5bf8-57cdc9efc7f9@gmail.com> On 8/2/2019 3:53 AM, Budai Laszlo wrote: > 1. configure Nova to use local LVM storage (images_type = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). Yes, it's a known bug: https://bugs.launchpad.net/nova/+bug/1831657 As noted within that bug report, WindRiver had a patch at one point to make that work, but it's long out of date; someone would have to polish it off and get it working again.
The good news is we have a nova-lvm CI job which is currently skipping resize tests, but in the patch that implements migrate for lvm we could unskip those tests and make sure everything is working in that job. We just need contributors that care about it to do the work (there seem to be several people that want this, but a dearth of people actually making it happen). > 2. use Cinder with the LVM backend and instance locality; we could migrate the instances, but the performance is less than half of the previous case. I could dredge up the ML thread on this, but while this is an option (or even using the now-either-deprecated-or-deleted cinder local block volume type driver), it could quickly become a management nightmare, since enforcing compute/volume locality with availability zones becomes a mess at scale. If you only have a half dozen computes or something then maybe that's not a problem in a private cloud shop, but it's definitely a problem at larger scale, and it's also complicated if you set the [cinder]cross_az_attach=False value in nova.conf, because of known bugs [1] with that. [1] https://bugs.launchpad.net/nova/+bug/1694844 - yes, I'm a bad person for not having cleaned up that patch yet, but I haven't felt much urgency either. -- Thanks, Matt From daniel at speichert.pl Fri Aug 2 15:50:03 2019 From: daniel at speichert.pl (Daniel Speichert) Date: Fri, 2 Aug 2019 17:50:03 +0200 Subject: Re: [nova] local ssd disk performance In-Reply-To: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: For the case of simply using a local disk mounted at /var/lib/nova and the raw disk image type, you could try adding this to nova.conf: preallocate_images = space This implicitly changes the I/O method in libvirt from "threads" to "native", which in my case improved performance a lot (10 times) and generally is the best performance I could get. Best Regards Daniel On 8/2/2019 10:53, Budai Laszlo wrote: > Hello all, > > we have a problem with the performance of disk IO in a KVM instance. > We are trying to provision VMs with high-performance SSDs. We have investigated different possibilities, with different results ... > > 1. configure Nova to use local LVM storage (images_type = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). > 2. use Cinder with the LVM backend and instance locality; we could migrate the instances, but the performance is less than half of the previous case. > 3. mount the SSD on /var/lib/nova/instances and use images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_type = lvm performance and read performance is ~65% of the LVM case. > > Do you have any ideas to improve the performance for case 2 or 3, which allow migration? -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo.budai at gmail.com Fri Aug 2 16:22:08 2019 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Fri, 2 Aug 2019 19:22:08 +0300 Subject: Re: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: Thank you, Daniel. My colleague found the same solution in the meantime, and it helped us as well.
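For anyone wanting to verify that the setting took effect, the I/O mode ends up in the disk driver element of the instance's libvirt domain XML, which can be inspected roughly like this (the instance name is hypothetical, and the exact cache attribute depends on local configuration):

    $ virsh dumpxml instance-0000002a | grep 'driver name'
        <driver name='qemu' type='raw' cache='none' io='native'/>

With preallocate_images left at its default, the same element would typically show io='threads', which matches the behavior Daniel describes.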
Kind regards, Laszlo On 8/2/19 6:50 PM, Daniel Speichert wrote: > For the case of simply using a local disk mounted at /var/lib/nova and the raw disk image type, you could try adding this to nova.conf: > > preallocate_images = space > > This implicitly changes the I/O method in libvirt from "threads" to "native", which in my case improved performance a lot (10 times) and generally is the best performance I could get. > > Best Regards > Daniel > > On 8/2/2019 10:53, Budai Laszlo wrote: >> Hello all, >> >> we have a problem with the performance of disk IO in a KVM instance. >> We are trying to provision VMs with high-performance SSDs. We have investigated different possibilities, with different results ... >> >> 1. configure Nova to use local LVM storage (images_type = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). >> 2. use Cinder with the LVM backend and instance locality; we could migrate the instances, but the performance is less than half of the previous case. >> 3. mount the SSD on /var/lib/nova/instances and use images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_type = lvm performance and read performance is ~65% of the LVM case. >> >> Do you have any ideas to improve the performance for case 2 or 3, which allow migration? >> >> Kind regards, >> Laszlo >> From kennelson11 at gmail.com Fri Aug 2 18:54:50 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 2 Aug 2019 11:54:50 -0700 Subject: Re: [winstackers][release] compute-hyperv release in Train In-Reply-To: <3b557e42-25ea-4c74-d119-3815cdb95a0b@openstack.org> References: <3b557e42-25ea-4c74-d119-3815cdb95a0b@openstack.org> Message-ID: I have a patch out to add the empty deliverable file for compute-hyperv[1] after chatting with Mark and Cao Yuan. -Kendall (diablo_rojo) [1] https://review.opendev.org/#/c/674405/ On Fri, Aug 2, 2019 at 5:19 AM Thierry Carrez wrote: > Hi winstackers, > > We are now past the Train membership freeze[1] and compute-hyperv is not > listed as a Train deliverable yet. Unless you act very quickly (and add > a deliverables/train/compute-hyperv.yaml file to the openstack/releases > repository), we will not include a release of compute-hyperv in > OpenStack Train. > > This may just be fine, for example if compute-hyperv needs more time > before a release, or if it is released independently from OpenStack > releases. > > [1] https://releases.openstack.org/train/schedule.html#t-mf > > -- > Thierry Carrez (ttx) > From corey.bryant at canonical.com Fri Aug 2 20:15:20 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 2 Aug 2019 16:15:20 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-6) Message-ID: This is the goal-6 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. There are 6 weeks remaining for the completion of Train community goals [2]. == How can you help? == If your project has failing tests, please take a look and help fix them. Python 3.7 unit tests will be self-testing in Zuul. Failing patches: https://review.openstack.org/#/q/topic:python3-train+status:open+(+label:Verified-1+OR+label:Verified-2+) == Ongoing Work == All patches have been submitted for all applicable projects. Note: I need to resubmit most of the OpenStack Charms patches, but the project is currently in release freeze, so I'm holding off on consuming 3rd-party gate resources.
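For project teams wondering what the mechanical change looks like on their side, it essentially amounts to carrying the new unit test template in the project's Zuul configuration; a minimal sketch of the relevant .zuul.yaml stanza, with any other templates a project carries omitted:

    - project:
        templates:
          - openstack-python3-train-jobs

The template name is the one defined by the goal; variants such as openstack-python3-train-jobs-neutron exist for projects with special requirements.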
Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-train+is:open Patch automation scripts needing review: https://review.opendev.org/#/c/666934 == Completed Work == Merged patches: https://review.openstack.org/#/q/topic:python3-train+is:merged == What's the Goal? == To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1]. == Reference Material == [1] Goal description: https://governance.openstack.org/tc/goals /train/python3-updates.html [2] Train release schedule: https://releases.openstack.org/train/schedule.html (see R-5 for "Train Community Goals Completed") Storyboard: https://storyboard.openstack.org/#!/story/2005924 Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7 Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/train.rst Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Aug 2 20:19:23 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 2 Aug 2019 15:19:23 -0500 Subject: [nova][ops] Documenting nova tunables at scale Message-ID: <75119870-05f6-04c7-8610-ca6c1feabb10@gmail.com> I wanted to send this to get other people's feedback if they have particular nova configurations once they hit a certain scale (hundreds or thousands of nodes). Every once in awhile in IRC I'll be chatting with someone about configuration changes they've made running at large scale to avoid, for example, hammering the control plane. I don't know how many times I've thought, "it would be nice if we had a doc highlighting some of these things so a new operator could come along and see, oh I've never tried changing that value before". I haven't started that doc, but I've started a bug report for people to dump some of their settings. The most common ones could go into a simple admin doc to start. I know there is more I've thought about in the past that I don't have in here but this is just a starting point so I don't make the mistake of not taking action on this again. https://bugs.launchpad.net/nova/+bug/1838819 -- Thanks, Matt From tony at bakeyournoodle.com Sat Aug 3 00:10:18 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Sat, 3 Aug 2019 10:10:18 +1000 Subject: stestr Python 2 Support In-Reply-To: <20190724131136.GA11582@sinanju.localdomain> References: <20190724131136.GA11582@sinanju.localdomain> Message-ID: <20190803001018.GA2352@thor.bakeyournoodle.com> On Wed, Jul 24, 2019 at 09:11:36AM -0400, Matthew Treinish wrote: > Hi Everyone, > > I just wanted to send a quick update about the state of python 2 support in > stestr, since OpenStack is the largest user of the project. With the recent > release of stestr 2.4.0 we've officially deprecated the python 2.7 support > in stestr. It will emit a DeprecationWarning whenever it's run from the CLI > with python 2.7 now. 
The plan (which is not set in stone) is that we will be > pushing a 3.0.0 release that removes the python 2 support and compat code from > stestr sometime in early 2020 (definitely after the Python 2.7 EoL on Jan. > 1st). I don't believe this conflicts with the current Python version support > plans in OpenStack [1] but just wanted to make sure people were aware so that > there are no surprises when stestr stops working with Python 2 in 3.0.0. Thanks Matt. I know it's a little meta but if something really strange were to happen would you be open to doing 2.X.Y releases while we still have maintained branches that use it? Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From colleen at gazlene.net Sat Aug 3 01:03:57 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 02 Aug 2019 18:03:57 -0700 Subject: [keystone] Keystone Team Update - Week of 29 July 2019 Message-ID: <1eb8bd54-2fb5-4665-aef0-a3259f131ba7@www.fastmail.com> # Keystone Team Update - Week of 29 July 2019 ## News ### CI instability The volume of policy deprecation warnings we generate in our unit tests has gotten to such a critical level that it appears to be causing serious instability in our unit test CI, possibly even affecting the CI infrastructure itself[1]. It's been suggested that we use the warnings module's filtering capabilities to suppress these warnings in the unit test output, but it seems that the sheer number of warnings that need to be suppressed makes the filtering so inefficient that the tests are even more likely to time out. We could do what the warnings actually suggest and override the deprecated policies in the tests, but it seems most of our unit tests aren't even ready to handle the new policies. Investigation is ongoing[2]. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2019-08-01.log.html#t2019-08-01T15:05:40 [2] https://review.opendev.org/673933 ### External auth In this week's meeting we discussed[3] how best to document external auth and agreed it's probably best to deprecate it entirely. We're seeking input from operators on how this may affect them[4]. [3] http://eavesdrop.openstack.org/meetings/keystone/2019/keystone.2019-07-30-16.00.log.html#l-38 [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-July/008127.html ## Action Items None outstanding ## Office Hours When there are topics to cover, the keystone team holds office hours on Tuesdays at 17:00 UTC. We will skip next week's office hours since we don't have a topic planned. Add topics you would like to see covered during office hours to the etherpad: https://etherpad.openstack.org/p/keystone-office-hours-topics ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 7 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 37 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. 
### Priority Reviews * Train Roadmap Stories System scope/default roles (https://trello.com/c/ERo50T7r , https://trello.com/c/RlYyb4DU) - https://review.opendev.org/#/q/status:open+topic:implement-default-roles+label:verified%253D%252B1 Application credential access rules (https://trello.com/c/XyBGhKrE) - https://review.opendev.org/#/q/status:open+topic:bp/whitelist-extension-for-app-creds+NOT+label:workflow%253D-1 Caching Guide (https://trello.com/c/UCFt3mfF) - https://review.opendev.org/672120 (Update the caching guide) Predictable IDs (https://trello.com/c/MVuu6DbU) - https://review.opendev.org/651655 (Predictable IDs for Roles) Oslo.limit (https://trello.com/c/KGGkNijR) - https://review.opendev.org/667242 (Add usage example) - https://review.opendev.org/666444 (Flush out basic enforcer and model relationship) - https://review.opendev.org/666085 (Add ksa connection logic) YAML Catalog (https://trello.com/c/Qv14G0xp) - https://review.opendev.org/483514 (Add yaml-loaded filesystem catalog backend) * Needs Discussion - https://review.opendev.org/669959 (discourage using X.509 with external auth) - https://review.opendev.org/655166 (Allows to use application credentials through group membership) * Oldest - https://review.opendev.org/448755 (Add federated support for creating a user) * Closes bugs - https://review.opendev.org/674122 (Fix websso auth loop) - https://review.opendev.org/672350 (Fixing dn_to_id function for cases were id is not in the DN) - https://review.opendev.org/674139 (Cleanup session on delete) ## Bugs This week we opened 6 new bugs and closed 6. Bugs opened (6) Bug #1838592 (keystone:High) opened by Guang Yee https://bugs.launchpad.net/keystone/+bug/1838592 Bug #1838554 (keystone:Low) opened by Mihail Milev https://bugs.launchpad.net/keystone/+bug/1838554 Bug #1836618 (keystone:Undecided) opened by Ghanshyam Mann https://bugs.launchpad.net/keystone/+bug/1836618 Bug #1838231 (keystone:Undecided) opened by Raviteja Polina https://bugs.launchpad.net/keystone/+bug/1838231 Bug #1838704 (keystoneauth:Undecided) opened by Alex Schultz https://bugs.launchpad.net/keystoneauth/+bug/1838704 Bug #1836568 (oslo.policy:Undecided) opened by Colleen Murphy https://bugs.launchpad.net/oslo.policy/+bug/1836568 Bugs closed (4) Bug #1837061 (keystone:Wishlist) https://bugs.launchpad.net/keystone/+bug/1837061 Bug #1791111 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1791111 Bug #1836618 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1836618 Bug #1837010 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1837010 Bugs fixed (2) Bug #1724645 (keystone:Low) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1724645 Bug #1837407 (keystone:Low) fixed by Chason Chan https://bugs.launchpad.net/keystone/+bug/1837407 ## Milestone Outlook https://releases.openstack.org/train/schedule.html Feature proposal freeze happens in two weeks. Feature freeze follows four weeks after that. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From rony.khan at brilliant.com.bd Sat Aug 3 09:23:09 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Sat, 3 Aug 2019 15:23:09 +0600 Subject: Openstack IPv6 neutron confiuraton In-Reply-To: References: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> <62-5d43f000-5-50968180@101299267> Message-ID: <035701d549dd$1187cc10$34976430$@brilliant.com.bd> Hi Motoki, Thanks. I shall check the slide. 
Thanks & B'Rgds, Rony -----Original Message----- From: Akihiro Motoki [mailto:amotoki at gmail.com] Sent: Friday, August 2, 2019 6:29 PM To: rony.khan at brilliant.com.bd; OpenStack Discuss Subject: Re: Openstack IPv6 neutron confiuraton At a quick look through Jens's guide, it looks like a really nice tutorial to follow. External connectivity is really an important point on IPv6 networking with neutron. I think my presentation at Boston summit two years ago still works [1]. This is based on my experience when I was involved in IPv6 POC with our customers (around Autumn in 2016). While Jens's guide covers detail commands for all, mine helps understanding some backgrounds on neutron with IPv6. [1] https://www.slideshare.net/ritchey98/openstack-neutron-ipv6-lessons Thanks, Akihiro Motoki (irc: amotoki) On Fri, Aug 2, 2019 at 5:14 PM Jens Harbott wrote: > > On Friday, August 02, 2019 09:16 CEST, Slawek Kaplonski wrote: > > > Hi, > > > > In tenant networks IPv6 packets are going same way as IPv4 packets. There is no differences between IPv4 and IPv6 AFAIK. > > In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You can find some deployment examples and explanation when ovs mechanism driver is used and in https://docs.openstack.org/neutron/latest/admin/deploy-lb.html there is similar doc for linuxbridge driver. > > For private networking this is true, if you want public connectivity with IPv6, you need to be aware that there is no SNAT and no floating IPs with IPv6. Instead you need to assign globally routable IPv6 addresses directly to your tenant subnets and use address-scopes plus neutron-dynamic-routing in order to make sure that these addresses get indeed routed to the internet. I have written a small guide how to do this[1], feedback is welcome. > > [1] > https://cloudbau.github.io/openstack/neutron/networking/ipv6/2017/09/1 > 1/neutron-pike-ipv6.html > > > There are differences with e.g. how DHCP is handled for IPv6. Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. > > Also noting that the good reference article at the end of this doc > sadly has disappeared, though you can still find it via the web > archives. See also https://review.opendev.org/674018 > > From rony.khan at brilliant.com.bd Sat Aug 3 09:23:50 2019 From: rony.khan at brilliant.com.bd (Md. Farhad Hasan Khan) Date: Sat, 3 Aug 2019 15:23:50 +0600 Subject: Openstack IPv6 neutron confiuraton In-Reply-To: <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> References: <027e01d548ef$543f5d00$fcbe1700$@brilliant.com.bd> <57C0039B-67D9-4699-B642-70C9EF7AB733@redhat.com> Message-ID: <035801d549dd$296da7f0$7c48f7d0$@brilliant.com.bd> Hi Slawek, Thanks. I shall check documents links. Thanks & B'Rgds, Rony -----Original Message----- From: Slawek Kaplonski [mailto:skaplons at redhat.com] Sent: Friday, August 2, 2019 1:16 PM To: rony.khan at brilliant.com.bd Cc: OpenStack Discuss Subject: Re: Openstack IPv6 neutron confiuraton Hi, In tenant networks IPv6 packets are going same way as IPv4 packets. There is no differences between IPv4 and IPv6 AFAIK. In https://docs.openstack.org/neutron/latest/admin/deploy-ovs.html You can find some deployment examples and explanation when ovs mechanism driver is used and in https://docs.openstack.org/neutron/latest/admin/deploy-lb.html there is similar doc for linuxbridge driver. There are differences with e.g. how DHCP is handled for IPv6. Please check https://docs.openstack.org/neutron/latest/admin/config-ipv6.html for details. 
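As a quick illustration of the kind of setup that document describes, one
common pattern is a SLAAC-addressed IPv6 subnet on an existing tenant
network; a minimal sketch (the names and the 2001:db8:: documentation
prefix are placeholders):

$ openstack subnet create --network tenant-net --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    --subnet-range 2001:db8:1234::/64 tenant-subnet-v6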
> On 2 Aug 2019, at 07:01, Md. Farhad Hasan Khan wrote: > > Hi, > We already have IPv4 vxLAN in our openstack. Now we want to add IPv6 for tenant network. Looking for help how to configure ipv6 tenant network in openstack neutron. Kindly help me to understand how ipv6 packet flow. Please give me some documents with network diagram. > > Thanks & B’Rgds, > Rony — Slawek Kaplonski Senior software engineer Red Hat From donny at fortnebula.com Sat Aug 3 17:41:33 2019 From: donny at fortnebula.com (Donny Davis) Date: Sat, 3 Aug 2019 13:41:33 -0400 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: I am using the cinder-lvm backend right now and performance is quite good. My situation is similar without the migration parts. Prior to this arrangement I was using iscsi to mount a disk in /var/lib/nova/instances and that also worked quite well. If you don't mind me asking, what kind of i/o performance are you looking for? On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo wrote: > Thank you Daniel, > > My colleague found the same solution in the meantime. And that helped us > as well. > > Kind regards, > Laszlo > > On 8/2/19 6:50 PM, Daniel Speichert wrote: > > For the case of simply using local disk mounted for /var/lib/nova and > raw disk image type, you could try adding to nova.conf: > > > > preallocate_images = space > > > > This implicitly changes the I/O method in libvirt from "threads" to > "native", which in my case improved performance a lot (10 times) and > generally is the best performance I could get. > > > > Best Regards > > Daniel > > > > On 8/2/2019 10:53, Budai Laszlo wrote: > >> Hello all, > >> > >> we have a problem with the performance of the disk IO in a KVM > instance. > >> We are trying to provision VMs with high performance SSDs. we have > investigated different possibilities with different results ... > >> > >> 1. configure Nova to use local LVM storage (images_types = lvm) - > provided the best performance, but we could not migrate our instances > (seems to be a bug). > >> 2. use cinder with lvm backend and instance locality, we could migrate > the instances, but the performance is less than half of the previous case > >> 3. mount the ssd on /var/lib/nova/instances and use the images_type = > raw in nova. We could migrate, but the write performance dropped to ~20% of > the images_types = lvm performance and read performance is ~65% of the lvm > case. > >> > >> do you have any idea to improve the performance for any of the cases 2 > or 3 which allows migration. > >> > >> Kind regards, > >> Laszlo > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Aug 3 18:56:44 2019 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 3 Aug 2019 20:56:44 +0200 Subject: [keystone] [stein] user_enabled_emulation config problem Message-ID: Hello all, I have an issue using user_enabled_emulation with my LDAP solution. I set: user_tree_dn = ou=Users,o=UCO user_objectclass = inetOrgPerson user_id_attribute = uid user_name_attribute = uid user_enabled_emulation = true user_enabled_emulation_dn = cn=Users,ou=Groups,o=UCO user_enabled_emulation_use_group_config = true group_tree_dn = ou=Groups,o=UCO group_objectclass = posixGroup group_id_attribute = cn group_name_attribute = cn group_member_attribute = memberUid group_members_are_ids = true Keystone properly lists members of the Users group but they all remain disabled. 
Did I misinterpret something? Kind regards, Radek -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjamgade at suse.de Thu Aug 1 11:20:14 2019 From: sjamgade at suse.de (Sumit Jamgade) Date: Thu, 1 Aug 2019 13:20:14 +0200 Subject: [telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics In-Reply-To: References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com> <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch> <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl> Message-ID: Hey Bernd, Can you try with just one publisher instead of 2 and also drop the archive_policy query parameter and its value. Then ceilometer should publish metrics based on map defined gnocchi_resources.yaml And while you are at it. Could you post a list of archive policies already defined in gnocchi, I believe this list should match what is listed in gnocchi_resources.yaml. Hope that helps Sumit On 7/31/19 3:22 AM, Bernd Bausch wrote: > > The message at the end of this email is some three months old. I have > the same problem. The question is: *How to use the new rate metrics in > Gnocchi. *I am using a Stein Devstack for my tests.* > * > > For example, I need the CPU rate, formerly named /cpu_util/. I created > a new archive policy that uses /rate:mean/ aggregation and has a 1 > minute granularity: > > $ gnocchi archive-policy show ceilometer-medium-rate > +---------------------+------------------------------------------------------------------+ > | Field               | > Value                                                            | > +---------------------+------------------------------------------------------------------+ > | aggregation_methods | rate:mean, > mean                                                  | > | back_window         | > 0                                                                | > | definition          | - points: 10080, granularity: 0:01:00, > timespan: 7 days, 0:00:00 | > | name                | > ceilometer-medium-rate                                           | > +---------------------+------------------------------------------------------------------+ > > I added the new policy to the publishers in /pipeline.yaml/: > > $ tail -n5 /etc/ceilometer/pipeline.yaml > sinks: >     - name: meter_sink >       publishers: >           - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift >           *- > gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift* > > After restarting all of Ceilometer, my hope was that the CPU rate > would magically appear in the metric list. But no: All metrics are > linked to archive policy /medium/, and looking at the details of an > instance, I don't detect anything rate-related: > > $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 > +-----------------------+---------------------------------------------------------------------+ > | Field                 | > Value                                                               | > +-----------------------+---------------------------------------------------------------------+ > ... 
> | metrics               | compute.instance.booting.time: 76fac1f5-962e-4ff2-8790-1f497c99c17d |
> |                       | cpu: af930d9a-a218-4230-b729-fee7e3796944                           |
> |                       | disk.ephemeral.size: 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           |
> |                       | disk.root.size: 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                |
> |                       | memory.resident: 09efd98d-c848-4379-ad89-f46ec526c183               |
> |                       | memory.swap.in: 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                |
> |                       | memory.swap.out: 4d012697-1d89-4794-af29-61c01c925bb4               |
> |                       | memory.usage: 93eab625-0def-4780-9310-eceff46aab7b                  |
> |                       | memory: ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1                        |
> |                       | vcpus: e1c5acaf-1b10-4d34-98b5-3ad16de57a98                         |
> | original_resource_id  | ae3659d6-8998-44ae-a494-5248adbebe11                                |
> ...
>
> | type                  | instance                                                            |
> | user_id               | a9c935f52e5540fc9befae7f91b4b3ae                                    |
> +-----------------------+---------------------------------------------------------------------+
>
> Obviously, I am missing something. Where is the missing link? What do
> I have to do to get CPU usage rates? Do I have to create metrics?
> Do I have to ask Ceilometer to create metrics? How?
>
> Right now, no instructions seem to exist at all. If that is correct, I
> would be happy to write documentation once I understand how it works.
>
> Thanks a lot.
>
> Bernd
>
> On 5/10/2019 3:49 PM, info at dantalion.nl wrote:
>> Hello,
>>
>> I am working on Watcher and we are currently changing how metrics are
>> retrieved from different datasources such as Monasca or Gnocchi. Because
>> of this major overhaul I would like to validate that everything is
>> working correctly.
>>
>> Almost all of the optimization strategies in Watcher require the cpu
>> utilization of an instance as a metric, but with newer versions of
>> Ceilometer this has become unavailable.
>>
>> On IRC I received the information that Gnocchi could be used to
>> configure an aggregate and this aggregate would then report cpu
>> utilization; however, I have been unable to find documentation on how to
>> achieve this.
>>
>> I was also notified that cpu_util is something that could be computed
>> from other metrics. When reading
>> https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute
>> the documentation seems to agree on this, as it states that cpu_util is
>> measured by using a 'rate of change' transformer. But I have not been
>> able to find how this can be computed.
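(For what it's worth, the arithmetic behind the old 'rate of change'
transformer can be reproduced by hand; a sketch, assuming two samples of
the cumulative cpu metric, which is CPU time in nanoseconds, with values
c1/c2 at times t1/t2 in seconds on an instance with N vCPUs:

cpu_util [%] = 100 * (c2 - c1) / ((t2 - t1) * 10^9 * N)

A Gnocchi rate:mean aggregate gives you (c2 - c1) per granularity window,
so dividing by granularity * 10^9 * N and multiplying by 100 should yield
the old cpu_util value.)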
>> >> Kind Regards, >> Corne Lukken (Dantali0n) >> From joseph.r.email at gmail.com Sat Aug 3 00:40:11 2019 From: joseph.r.email at gmail.com (Joe Robinson) Date: Sat, 3 Aug 2019 10:40:11 +1000 Subject: [nova][ops] Documenting nova tunables at scale In-Reply-To: <75119870-05f6-04c7-8610-ca6c1feabb10@gmail.com> References: <75119870-05f6-04c7-8610-ca6c1feabb10@gmail.com> Message-ID: Hi Matt, My name is Joe - docs person from years back - this looks like a good initiative and I would be up for documenting these settings at scale. Next step I can see is gathering more Info about this pain point (already started :)) and then I can draft something together for feedback. On Sat, 3 Aug. 2019, 6:25 am Matt Riedemann, wrote: > I wanted to send this to get other people's feedback if they have > particular nova configurations once they hit a certain scale (hundreds > or thousands of nodes). Every once in awhile in IRC I'll be chatting > with someone about configuration changes they've made running at large > scale to avoid, for example, hammering the control plane. I don't know > how many times I've thought, "it would be nice if we had a doc > highlighting some of these things so a new operator could come along and > see, oh I've never tried changing that value before". > > I haven't started that doc, but I've started a bug report for people to > dump some of their settings. The most common ones could go into a simple > admin doc to start. > > I know there is more I've thought about in the past that I don't have in > here but this is just a starting point so I don't make the mistake of > not taking action on this again. > > https://bugs.launchpad.net/nova/+bug/1838819 > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vungoctan252 at gmail.com Thu Aug 1 04:43:13 2019 From: vungoctan252 at gmail.com (Vu Tan) Date: Thu, 1 Aug 2019 11:43:13 +0700 Subject: [masakari] how to install masakari on centos 7 In-Reply-To: References: <35400f83-c29d-475a-8d36-d56b3cf16d30@email.android.com> Message-ID: Hi Patil, May I know how is it going ? On Tue, Jul 23, 2019 at 10:18 PM Vu Tan wrote: > Hi Patil, > Thank you for your reply, please instruct me if you successfully install > it. Thanks a lot > > On Tue, Jul 23, 2019 at 8:12 PM Patil, Tushar > wrote: > >> Hi Vu Tan, >> >> I'm trying to install Masakari using source code to reproduce the issue. >> If I hit the same issue as yours, I will troubleshoot this issue and let >> you know the solution or will update you what steps I have followed to >> bring up Masakari services successfully. >> >> Regards, >> Tushar Patil >> >> ________________________________________ >> From: Vu Tan >> Sent: Monday, July 22, 2019 12:33 PM >> To: Gaëtan Trellu >> Cc: Patil, Tushar; openstack-discuss at lists.openstack.org >> Subject: Re: [masakari] how to install masakari on centos 7 >> >> Hi Patil, >> May I know when the proper document for masakari is released ? 
I have
>> configured the conf file on the controller and compute node; it seems to
>> be running, but not as it should be. There are a lot of errors in the
>> logs; here is a sample log:
>>
>> 2.7/site-packages/oslo_config/cfg.py:3024
>> 2019-07-19 10:25:26.360 7745 DEBUG oslo_service.service [-] bindir
>> = /usr/local/bin log_opt_values /usr/lib/
>> python2.7/site-packages/oslo_config/cfg.py:3024
>> 2019-07-19 18:46:21.291 7770 ERROR masakari File
>> "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 65, in
>> _is_daemon
>> 2019-07-19 18:46:21.291 7770 ERROR masakari is_daemon = os.getpgrp()
>> != os.tcgetpgrp(sys.stdout.fileno())
>> 2019-07-19 18:46:21.291 7770 ERROR masakari OSError: [Errno 5]
>> Input/output error
>> 2019-07-19 18:46:21.291 7770 ERROR masakari
>> 2019-07-19 18:46:21.300 7745 CRITICAL masakari [-] Unhandled error:
>> OSError: [Errno 5] Input/output error
>> 2019-07-19 18:46:21.300 7745 ERROR masakari Traceback (most recent call
>> last):
>> 2019-07-19 18:46:21.300 7745 ERROR masakari File
>> "/usr/bin/masakari-api", line 10, in
>>
>> I don't know if it is a missing package or a wrong configuration.
>>
>> On Thu, Jul 11, 2019 at 6:14 PM Gaëtan Trellu <
>> gaetan.trellu at incloudus.com> wrote:
>> You will have to enable the debug option, debug = true, and check the
>> API log.
>>
>> Did you try to use the openstack CLI?
>>
>> Gaetan
>>
>> On Jul 11, 2019 12:32 AM, Vu Tan <vungoctan252 at gmail.com> wrote:
>> I know it's just a warning, just take a look at this image:
>> [image.png]
>> It just hangs there forever, and the log shows what I have shown to you.
>>
>> On Wed, Jul 10, 2019 at 8:07 PM Gaëtan Trellu <
>> gaetan.trellu at incloudus.com> wrote:
>> This is just a warning, not an error.
>>
>> On Jul 10, 2019 3:12 AM, Vu Tan <vungoctan252 at gmail.com> wrote:
>> Hi Gaetan,
>> I followed the guide you gave me, but the problem still persists. Can
>> you please take a look at my configuration to see what is wrong or what is
>> missing in my config?
>> the error: >> 2019-07-10 14:08:46.876 17292 WARNING keystonemiddleware._common.config >> [-] The option "__file__" in conf is not known to auth_token >> 2019-07-10 14:08:46.876 17292 WARNING keystonemiddleware._common.config >> [-] The option "here" in conf is not known to auth_token >> 2019-07-10 14:08:46.882 17292 WARNING keystonemiddleware.auth_token [-] >> AuthToken middleware is set with keystone_authtoken.service_ >> >> the config: >> >> [DEFAULT] >> enabled_apis = masakari_api >> log_dir = /var/log/kolla/masakari >> state_path = /var/lib/masakari >> os_user_domain_name = default >> os_project_domain_name = default >> os_privileged_user_tenant = service >> os_privileged_user_auth_url = http://controller:5000/v3 >> os_privileged_user_name = nova >> os_privileged_user_password = P at ssword >> masakari_api_listen = controller >> masakari_api_listen_port = 15868 >> debug = False >> auth_strategy=keystone >> >> [wsgi] >> # The paste configuration file path >> api_paste_config = /etc/masakari/api-paste.ini >> >> [keystone_authtoken] >> www_authenticate_uri = http://controller:5000 >> auth_url = http://controller:5000 >> auth_type = password >> project_domain_id = default >> project_domain_name = default >> user_domain_name = default >> user_domain_id = default >> project_name = service >> username = masakari >> password = P at ssword >> region_name = RegionOne >> >> [oslo_middleware] >> enable_proxy_headers_parsing = True >> >> [database] >> connection = mysql+pymysql://masakari:P at ssword@controller/masakari >> >> >> >> On Tue, Jul 9, 2019 at 10:25 PM Vu Tan > vungoctan252 at gmail.com>> wrote: >> Thank Patil Tushar, I hope it will be available soon >> >> On Tue, Jul 9, 2019 at 8:18 AM Patil, Tushar > > wrote: >> Hi Vu and Gaetan, >> >> Gaetan, thank you for helping out Vu in setting up masakari-monitors >> service. >> >> As a masakari team ,we have noticed there is a need to add proper >> documentation to help the community run Masakari services in their >> environment. We are working on adding proper documentation in this 'Train' >> cycle. >> >> Will send an email on this mailing list once the patches are uploaded on >> the gerrit so that you can give your feedback on the same. >> >> If you have any trouble in setting up Masakari, please let us know on >> this mailing list or join the bi-weekly IRC Masakari meeting on the >> #openstack-meeting IRC channel. The next meeting will be held on 16th July >> 2019 @0400 UTC. >> >> Regards, >> Tushar Patil >> >> ________________________________________ >> From: Vu Tan > >> Sent: Monday, July 8, 2019 11:21:16 PM >> To: Gaëtan Trellu >> Cc: openstack-discuss at lists.openstack.org> openstack-discuss at lists.openstack.org> >> Subject: Re: [masakari] how to install masakari on centos 7 >> >> Hi Gaetan, >> Thanks for pinpoint this out, silly me that did not notice the simple >> "error InterpreterNotFound: python3". Thanks a lot, I appreciate it >> >> On Mon, Jul 8, 2019 at 9:15 PM > gaetan.trellu at incloudus.com>> gaetan.trellu at incloudus.com>>> wrote: >> Vu Tan, >> >> About "auth_token" error, you need "os_privileged_user_*" options into >> your masakari.conf for the API. 
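(A minimal sketch of that [DEFAULT] block, with a placeholder password;
the values below simply mirror the configuration already posted earlier
in this thread:

[DEFAULT]
os_privileged_user_tenant = service
os_privileged_user_auth_url = http://controller:5000/v3
os_privileged_user_name = nova
os_privileged_user_password = ...

These are the credentials masakari-api uses to call nova as a privileged
user.)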
>> As mentioned previously please have a look here to have an example of >> configuration working (for me at least): >> >> - masakari.conf: >> >> https://review.opendev.org/#/c/615715/42/ansible/roles/masakari/templates/masakari.conf.j2 >> - masakari-monitor.conf: >> >> https://review.opendev.org/#/c/615715/42/ansible/roles/masakari/templates/masakari-monitors.conf.j2 >> >> About your tox issue make sure you have Python3 installed. >> >> Gaëtan >> >> On 2019-07-08 06:08, Vu Tan wrote: >> >> > Hi Gaetan, >> > I try to generate config file by using this command tox -egenconfig on >> > top level of masakari but the output is error, is this masakari still >> > in beta version ? >> > [root at compute1 masakari-monitors]# tox -egenconfig >> > genconfig create: /root/masakari-monitors/.tox/genconfig >> > ERROR: InterpreterNotFound: python3 >> > _____________________________________________________________ summary >> > ______________________________________________________________ >> > ERROR: genconfig: InterpreterNotFound: python3 >> > >> > On Mon, Jul 8, 2019 at 3:24 PM Vu Tan > vungoctan252 at gmail.com>> vungoctan252 at gmail.com>>> wrote: >> > Hi, >> > Thanks a lot for your reply, I install pacemaker/corosync, >> > masakari-api, maskari-engine on controller node, and I run masakari-api >> > with this command: masakari-api, but I dont know whether the process is >> > running like that or is it just hang there, here is what it shows when >> > I run the command, I leave it there for a while but it does not change >> > anything : >> > [root at controller masakari]# masakari-api >> > 2019-07-08 15:21:09.946 30250 INFO masakari.api.openstack [-] Loaded >> > extensions: ['extensions', 'notifications', 'os-hosts', 'segments', >> > 'versions'] >> > 2019-07-08 15:21:09.955 30250 WARNING keystonemiddleware._common.config >> > [-] The option "__file__" in conf is not known to auth_token >> > 2019-07-08 15:21:09.955 30250 WARNING keystonemiddleware._common.config >> > [-] The option "here" in conf is not known to auth_token >> > 2019-07-08 15:21:09.960 30250 WARNING keystonemiddleware.auth_token [-] >> > AuthToken middleware is set with >> > keystone_authtoken.service_token_roles_required set to False. This is >> > backwards compatible but deprecated behaviour. Please set this to True. >> > 2019-07-08 15:21:09.974 30250 INFO masakari.wsgi [-] masakari_api >> > listening on 127.0.0.1:15868< >> http://127.0.0.1:15868> >> > 2019-07-08 15:21:09.975 30250 INFO oslo_service.service [-] Starting 4 >> > workers >> > 2019-07-08 15:21:09.984 30274 INFO masakari.masakari_api.wsgi.server >> > [-] (30274) wsgi starting up on http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.985 30275 INFO masakari.masakari_api.wsgi.server >> > [-] (30275) wsgi starting up on http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.992 30277 INFO masakari.masakari_api.wsgi.server >> > [-] (30277) wsgi starting up on http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.994 30276 INFO masakari.masakari_api.wsgi.server >> > [-] (30276) wsgi starting up on http://127.0.0.1:15868 >> > >> > On Sun, Jul 7, 2019 at 7:37 PM Gaëtan Trellu >> > > >>> >> wrote: >> > >> > Hi Vu Tan, >> > >> > Masakari documentation doesn't really exist... I had to figured some >> > stuff by myself to make it works into Kolla project. 
>> > >> > On controller nodes you need: >> > >> > - pacemaker >> > - corosync >> > - masakari-api (openstack/masakari repository) >> > - masakari- engine (openstack/masakari repository) >> > >> > On compute nodes you need: >> > >> > - pacemaker-remote (integrated to pacemaker cluster as a resource) >> > - masakari- hostmonitor (openstack/masakari-monitor repository) >> > - masakari-instancemonitor (openstack/masakari-monitor repository) >> > - masakari-processmonitor (openstack/masakari-monitor repository) >> > >> > For masakari-hostmonitor, the service needs to have access to systemctl >> > command (make sure you are not using sysvinit). >> > >> > For masakari-monitor, the masakari-monitor.conf is a bit different, you >> > will have to configure the [api] section properly. >> > >> > RabbitMQ needs to be configured (as transport_url) on masakari-api and >> > masakari-engine too. >> > >> > Please check this review[1], you will have masakari.conf and >> > masakari-monitor.conf configuration examples. >> > >> > [1] https://review.opendev.org/#/c/615715 >> > >> > Gaëtan >> > >> > On Jul 7, 2019 12:08 AM, Vu Tan > vungoctan252 at gmail.com>> vungoctan252 at gmail.com>>> wrote: >> > >> > VU TAN > VUNGOCTAN252 at GMAIL.COM>> >> > >> > 10:30 AM (35 minutes ago) >> > >> > to openstack-discuss >> > >> > Sorry, I resend this email because I realized that I lacked of prefix >> > on this email's subject >> > >> > Hi, >> > >> > I would like to use Masakari and I'm having trouble finding a step by >> > step or other documentation to get started with. Which part should be >> > installed on controller, which is should be on compute, and what is the >> > prerequisite to install masakari, I have installed corosync and >> > pacemaker on compute and controller nodes, , what else do I need to do >> > ? step I have done so far: >> > - installed corosync/pacemaker >> > - install masakari on compute node on this github repo: >> > https://github.com/openstack/masakari >> > - add masakari in to mariadb >> > here is my configuration file of masakari.conf, do you mind to take a >> > look at it, if I have misconfigured anything? >> > >> > [DEFAULT] >> > enabled_apis = masakari_api >> > >> > # Enable to specify listening IP other than default >> > masakari_api_listen = controller >> > # Enable to specify port other than default >> > masakari_api_listen_port = 15868 >> > debug = False >> > auth_strategy=keystone >> > >> > [wsgi] >> > # The paste configuration file path >> > api_paste_config = /etc/masakari/api-paste.ini >> > >> > [keystone_authtoken] >> > www_authenticate_uri = http://controller:5000 >> > auth_url = http://controller:5000 >> > auth_type = password >> > project_domain_id = default >> > user_domain_id = default >> > project_name = service >> > username = masakari >> > password = P at ssword >> > >> > [database] >> > connection = mysql+pymysql://masakari:P at ssword@controller/masakari >> Disclaimer: This email and any attachments are sent in strictest >> confidence for the sole use of the addressee and may contain legally >> privileged, confidential, and proprietary data. If you are not the intended >> recipient, please advise the sender by replying promptly to this email and >> then delete and destroy this email and any attachments without any further >> use, copying or forwarding. >> >> >> Disclaimer: This email and any attachments are sent in strictest >> confidence for the sole use of the addressee and may contain legally >> privileged, confidential, and proprietary data. 
If you are not the intended
>> recipient, please advise the sender by replying promptly to this email and
>> then delete and destroy this email and any attachments without any further
>> use, copying or forwarding.
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oscar.posada.sanchez at gmail.com Fri Aug 2 21:45:03 2019
From: oscar.posada.sanchez at gmail.com (Oscar Omar Posada Sanchez)
Date: Fri, 2 Aug 2019 16:45:03 -0500
Subject: [training-labs] what is access domain
Message-ID:

Hi Team,

I am starting to study OpenStack and I am following this reference:
https://github.com/openstack/training-labs
However, I cannot find the access domain for the first login now that the
lab is installed. Could you tell me what it is? Thanks.

--
Thank you for your attention and time. Have a nice day.
-------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From doug at doughellmann.com Sun Aug 4 00:48:45 2019
From: doug at doughellmann.com (Doug Hellmann)
Date: Sat, 3 Aug 2019 20:48:45 -0400
Subject: [all][tc] U Cycle Naming Poll
Message-ID: <003286A6-2F8C-4827-A290-EB32277C05E2 at doughellmann.com>

Every OpenStack development cycle and release has a code-name. As with
everything we do, the process of choosing the name is open and based on
input from community members. The name criteria are described in [1], and
this time around we were looking for names starting with U associated with
China. With some extra assistance from local community members (thank you
to everyone who helped!), we have a list of candidate names that will go
into the poll. Below is a subset of the names proposed, including those
that meet the standard criteria and some of the suggestions that do not.
Before we start the poll, the process calls for us to provide a period of
1 week so that any names removed from the proposals can be discussed and
any last-minute objections can be raised. We will start the poll next week
using this list, including any modifications based on that discussion.

乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen
温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou
乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane
温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling
威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai
微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake
乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared
among Mongolian/Manchu/Russian; this is a common Latin-alphabet
transcription of the name. Pinyin would be Wūsūlǐ)
乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in
Mongolian; this is a common Latin-alphabet transcription of the name.
Pinyin would be Wūlánchábù)
乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in
Mongolian; this is a common Latin-alphabet transcription of the name.
Pinyin would be Wūlánhàotè)
乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian)
乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is
in Mongolian; this is a common Latin-alphabet transcription of the name.
Pinyin would be Wūlātè)
东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in
Mongolian; this is a common Latin-alphabet transcription of the name.
Pinyin would be Wūzhūmùqìn) Ula "Miocene Baogeda Ula" (the name is in Mongolian) Uma http://www.fallingrain.com/world/CH/20/Uma.html Unicorn Urban Unique Umpire Utopia Umbrella Ultimate [1] https://governance.openstack.org/tc/reference/release-naming.html From berndbausch at gmail.com Sun Aug 4 07:50:09 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Sun, 4 Aug 2019 16:50:09 +0900 Subject: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization? Message-ID: Prior to Stein, Ceilometer issued a metric named /cpu_util/, which I could use to trigger alarms and autoscaling when CPU utilization was too high. cpu_util doesn't exist anymore. Instead, we are asked to use Gnocchi's /rate/ feature. However, when using rates, alarms on a group of resources require more parameters than just one metric: Both an aggregation and a reaggregation method are needed. For example, a group of instances that implement "myapp": gnocchi measures aggregation -m cpu --reaggregation mean --aggregation rate:mean --query server_group=myapp --resource-type instance Actually, this command uses a deprecated API (but from what I can see, Aodh still uses it). The new way is like this: gnocchi aggregates --resource-type instance '(aggregate rate:mean (metric cpu mean))' server_group=myapp If rate:mean is in the archive policy, it also works the other way around: gnocchi aggregates --resource-type instance '(aggregate mean (metric cpu rate:mean))' server_group=myapp Without reaggregation, I get quite unexpected numbers, including negative CPU rates. If you want to understand why, see this discussion with one of the Gnocchi maintainers [1]. *My problem*: Aodh allows me to set an aggregation method, but not a reaggregation method. How can I create alarms based on rates? The problem extends to Heat and autoscaling. Thanks much, Bernd. [1] https://github.com/gnocchixyz/gnocchi/issues/1044 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Sun Aug 4 07:57:27 2019 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sun, 4 Aug 2019 15:57:27 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: One of the most important one is missing: Urumqi: https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . The top 6 should actually not be counted since their Romanized spelling (pinyin) does not start with U. For cities like Urumqi, Ulanqab, Ulanhot, there are no need for the pinyin spelling since the name is following native's language. On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann wrote: > Every OpenStack development cycle and release has a code-name. As with > everything we do, the process of choosing the name is open and based on > input from communty members. The name critera are described in [1], and > this time around we were looking for names starting with U associated with > China. With some extra assistance from local community members (thank you > to everyone who helped!), we have a list of candidate names that will go > into the poll. Below is a subset of the names propsed, including those that > meet the standard criteria and some of the suggestions that do not. Before > we start the poll, the process calls for us to provide a period of 1 week > so that any names removed from the proposals can be discussed and any > last-minute objections can be raised. 
We will start the poll next week > using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared > among Mongolian/Manchu/Russian; this is a common Latin-alphabet > transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is > in Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Aug 4 13:47:40 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 4 Aug 2019 13:47:40 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <20190804134740.vjaje7mtmrun7vyw@yuggoth.org> On 2019-08-04 15:57:27 +0800 (+0800), Zhipeng Huang wrote: > One of the most important one is missing: Urumqi: > https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . [...] I had suggested we exclude it for reasons of cultural sensitivity, since this is the 10-year anniversary of the July 2009 Ürümqi riots and September 2009 Xinjiang unrest there and thought it would probably be best not to seem like we're commemorating that. If most folks in China don't see it as an insensitive choice then we could presumably readd Urumqi as an option, but it was omitted out of caution. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Sun Aug 4 14:11:27 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 4 Aug 2019 10:11:27 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: > On Aug 4, 2019, at 3:57 AM, Zhipeng Huang wrote: > > One of the most important one is missing: Urumqi: https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . 
The top 6 should actually not be counted since their Romanized spelling (pinyin) does not start with U. Jeremy has already addressed the reason for dropping Urumqui. I made similar judgement calls on some of the suggestions not related to geography because I easily found negative connotations for them. For the other items you refer to, I do see spellings starting with U there in the list we were given. I do not claim to understand the differences in the way those names have been translated into those forms, though. Are you saying those are invalid spellings? > > For cities like Urumqi, Ulanqab, Ulanhot, there are no need for the pinyin spelling since the name is following native's language. > > On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann > wrote: > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > > > > -- > Zhipeng (Howard) Huang > > Principle Engineer > OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tim.Bell at cern.ch Sun Aug 4 15:04:49 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Sun, 4 Aug 2019 15:04:49 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <40516BD8-B533-4D73-ADA0-C9D5C10C0AFC@cern.ch> I would also prefer not to establish too much precedence for non-geographical names. I feel Train should remain a special case (as it was related to the conference location, although not a geographical relation). We’ve got some good choices in the U* place names (although I’ll need some help with pronunciation, like Bexar) Tim On 4 Aug 2019, at 16:11, Doug Hellmann > wrote: On Aug 4, 2019, at 3:57 AM, Zhipeng Huang > wrote: One of the most important one is missing: Urumqi: https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . The top 6 should actually not be counted since their Romanized spelling (pinyin) does not start with U. Jeremy has already addressed the reason for dropping Urumqui. I made similar judgement calls on some of the suggestions not related to geography because I easily found negative connotations for them. For the other items you refer to, I do see spellings starting with U there in the list we were given. I do not claim to understand the differences in the way those names have been translated into those forms, though. Are you saying those are invalid spellings? For cities like Urumqi, Ulanqab, Ulanhot, there are no need for the pinyin spelling since the name is following native's language. On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann > wrote: Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. 
Pinyin would be Wūlánhàotè) 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) Ula "Miocene Baogeda Ula" (the name is in Mongolian) Uma http://www.fallingrain.com/world/CH/20/Uma.html Unicorn Urban Unique Umpire Utopia Umbrella Ultimate [1] https://governance.openstack.org/tc/reference/release-naming.html -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Sun Aug 4 16:18:15 2019 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 5 Aug 2019 00:18:15 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: Not quite understand what [GR] stands for but at least for top 6 ones the correct Romanized Pinyin spelling does not start with U :) All the rest looks fine :) On Sun, Aug 4, 2019 at 10:11 PM Doug Hellmann wrote: > > > On Aug 4, 2019, at 3:57 AM, Zhipeng Huang wrote: > > One of the most important one is missing: Urumqi: > https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . The top 6 should > actually not be counted since their Romanized spelling (pinyin) does not > start with U. > > > Jeremy has already addressed the reason for dropping Urumqui. I made > similar judgement calls on some of the suggestions not related to geography > because I easily found negative connotations for them. > > For the other items you refer to, I do see spellings starting with U there > in the list we were given. I do not claim to understand the differences in > the way those names have been translated into those forms, though. Are you > saying those are invalid spellings? > > > For cities like Urumqi, Ulanqab, Ulanhot, there are no need for the pinyin > spelling since the name is following native's language. > > > On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann > wrote: > >> Every OpenStack development cycle and release has a code-name. As with >> everything we do, the process of choosing the name is open and based on >> input from communty members. The name critera are described in [1], and >> this time around we were looking for names starting with U associated with >> China. With some extra assistance from local community members (thank you >> to everyone who helped!), we have a list of candidate names that will go >> into the poll. Below is a subset of the names propsed, including those that >> meet the standard criteria and some of the suggestions that do not. Before >> we start the poll, the process calls for us to provide a period of 1 week >> so that any names removed from the proposals can be discussed and any >> last-minute objections can be raised. We will start the poll next week >> using this list, including any modifications based on that discussion. 
>> >> 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen >> 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou >> 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane >> 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling >> 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai >> 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake >> 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is >> shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet >> transcription of the name. Pinyin would be Wūsūlǐ) >> 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in >> Mongolian; this is a common Latin-alphabet transcription of the name. >> Pinyin would be Wūlánchábù) >> 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in >> Mongolian; this is a common Latin-alphabet transcription of the name. >> Pinyin would be Wūlánhàotè) >> 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) >> 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is >> in Mongolian; this is a common Latin-alphabet transcription of the name. >> Pinyin would be Wūlātè) >> 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in >> Mongolian; this is a common Latin-alphabet transcription of the name. >> Pinyin would be Wūzhūmùqìn) >> Ula "Miocene Baogeda Ula" (the name is in Mongolian) >> Uma http://www.fallingrain.com/world/CH/20/Uma.html >> Unicorn >> Urban >> Unique >> Umpire >> Utopia >> Umbrella >> Ultimate >> >> [1] https://governance.openstack.org/tc/reference/release-naming.html >> >> >> > > -- > Zhipeng (Howard) Huang > > Principle Engineer > OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open > Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C > > > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Sun Aug 4 16:19:26 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 4 Aug 2019 12:19:26 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <40516BD8-B533-4D73-ADA0-C9D5C10C0AFC@cern.ch> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <40516BD8-B533-4D73-ADA0-C9D5C10C0AFC@cern.ch> Message-ID: <200E2ADB-52B2-427A-B5D3-022C0416A3B6@doughellmann.com> > On Aug 4, 2019, at 11:04 AM, Tim Bell wrote: > > I would also prefer not to establish too much precedence for non-geographical names. I feel Train should remain a special case (as it was related to the conference location, although not a geographical relation). > > We’ve got some good choices in the U* place names (although I’ll need some help with pronunciation, like Bexar) > > Tim My understanding is that most of the other names were suggested by the Chinese contributor community, so I felt comfortable leaving them on the list even though I will be voting for one of the place names. Doug > >> On 4 Aug 2019, at 16:11, Doug Hellmann > wrote: >> >> >> >>> On Aug 4, 2019, at 3:57 AM, Zhipeng Huang > wrote: >>> >>> One of the most important one is missing: Urumqi: https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . The top 6 should actually not be counted since their Romanized spelling (pinyin) does not start with U. >> >> Jeremy has already addressed the reason for dropping Urumqui. 
I made similar judgement calls on some of the suggestions not related to geography because I easily found negative connotations for them. >> >> For the other items you refer to, I do see spellings starting with U there in the list we were given. I do not claim to understand the differences in the way those names have been translated into those forms, though. Are you saying those are invalid spellings? >> >>> >>> For cities like Urumqi, Ulanqab, Ulanhot, there are no need for the pinyin spelling since the name is following native's language. >>> >>> On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann > wrote: >>> Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. >>> >>> 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen >>> 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou >>> 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane >>> 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling >>> 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai >>> 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake >>> 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) >>> 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) >>> 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) >>> 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) >>> 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) >>> 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) >>> Ula "Miocene Baogeda Ula" (the name is in Mongolian) >>> Uma http://www.fallingrain.com/world/CH/20/Uma.html >>> Unicorn >>> Urban >>> Unique >>> Umpire >>> Utopia >>> Umbrella >>> Ultimate >>> >>> [1] https://governance.openstack.org/tc/reference/release-naming.html >>> >>> >>> >>> >>> -- >>> Zhipeng (Howard) Huang >>> >>> Principle Engineer >>> OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Sun Aug 4 16:36:27 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 4 Aug 2019 12:36:27 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: > On Aug 4, 2019, at 12:18 PM, Zhipeng Huang wrote: > > Not quite understand what [GR] stands for but at least for top 6 ones the correct Romanized Pinyin spelling does not start with U :) > > All the rest looks fine :) It is another romanization system [1]. During the brainstorming process we were told that using it was acceptable, although less common than Pinyin. [1] https://en.wikipedia.org/wiki/Gwoyeu_Romatzyh > > On Sun, Aug 4, 2019 at 10:11 PM Doug Hellmann > wrote: > > >> On Aug 4, 2019, at 3:57 AM, Zhipeng Huang > wrote: >> >> One of the most important one is missing: Urumqi: https://en.wikipedia.org/wiki/%C3%9Cr%C3%BCmqi . The top 6 should actually not be counted since their Romanized spelling (pinyin) does not start with U. > > Jeremy has already addressed the reason for dropping Urumqui. I made similar judgement calls on some of the suggestions not related to geography because I easily found negative connotations for them. > > For the other items you refer to, I do see spellings starting with U there in the list we were given. I do not claim to understand the differences in the way those names have been translated into those forms, though. Are you saying those are invalid spellings? > >> >> For cities like Urumqi, Ulanqab, Ulanhot, there are no need for the pinyin spelling since the name is following native's language. >> >> On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann > wrote: >> Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. >> >> 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen >> 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou >> 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane >> 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling >> 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai >> 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake >> 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) >> 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) >> 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. 
Pinyin would be Wūlánhàotè) >> 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) >> 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) >> 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) >> Ula "Miocene Baogeda Ula" (the name is in Mongolian) >> Uma http://www.fallingrain.com/world/CH/20/Uma.html >> Unicorn >> Urban >> Unique >> Umpire >> Utopia >> Umbrella >> Ultimate >> >> [1] https://governance.openstack.org/tc/reference/release-naming.html >> >> >> >> -- >> Zhipeng (Howard) Huang >> >> Principle Engineer >> OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sun Aug 4 16:37:39 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 4 Aug 2019 16:37:39 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <20190804163739.sbczsg3zny7hyyfx@yuggoth.org> On 2019-08-05 00:18:15 +0800 (+0800), Zhipeng Huang wrote: > Not quite understand what [GR] stands for [...] "Gwoyeu Romatzyh (pinyin: Guóyǔ Luómǎzì, literally "National Language Romanization"), abbreviated GR, is a system for writing Mandarin Chinese in the Latin alphabet. The system was conceived by Yuen Ren Chao and developed by a group of linguists including Chao and Lin Yutang from 1925 to 1926. Chao himself later published influential works in linguistics using GR. In addition a small number of other textbooks and dictionaries in GR were published in Hong Kong and overseas from 1942 to 2000." https://en.wikipedia.org/wiki/Gwoyeu_Romatzyh -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From miguel at mlavalle.com Sun Aug 4 18:52:07 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 4 Aug 2019 13:52:07 -0500 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core Message-ID: Dear Neutrinos, I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation and evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, has driven the adoption of Pyroute2, and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. I will keep this nomination open for a week as customary. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Mon Aug 5 00:17:00 2019 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 5 Aug 2019 08:17:00 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <20190804163739.sbczsg3zny7hyyfx@yuggoth.org> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <20190804163739.sbczsg3zny7hyyfx@yuggoth.org> Message-ID: Fascinating! All good choices :) On Mon, Aug 5, 2019 at 12:40 AM Jeremy Stanley wrote: > On 2019-08-05 00:18:15 +0800 (+0800), Zhipeng Huang wrote: > > Not quite understand what [GR] stands for > [...] > > "Gwoyeu Romatzyh (pinyin: Guóyǔ Luómǎzì, literally "National > Language Romanization"), abbreviated GR, is a system for writing > Mandarin Chinese in the Latin alphabet. The system was conceived by > Yuen Ren Chao and developed by a group of linguists including Chao > and Lin Yutang from 1925 to 1926. Chao himself later published > influential works in linguistics using GR. In addition a small > number of other textbooks and dictionaries in GR were published in > Hong Kong and overseas from 1942 to 2000." > > https://en.wikipedia.org/wiki/Gwoyeu_Romatzyh > > -- > Jeremy Stanley > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Mon Aug 5 02:37:04 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Mon, 5 Aug 2019 02:37:04 +0000 Subject: reply: [lists.openstack.org代发]Re: [nova][ops] Documenting nova tunables at scale Message-ID: <3d06c3e9d83d4cc7a0fe12df15091130@inspur.com> Agree with this approach. A configuration manual for large-scale test scenarios would be very valuable. When an OpenStack deployment reaches a certain scale (for example, nodes >= 200, 500, …), all kinds of exception scenarios start to occur: RabbitMQ blocking, the success rate of creating a batch of servers, and so on. There are some configurations to take care of there, e.g. scheduling policies, RPC wait times, the number of workers, etc. If there is a reference manual, that is very friendly to new operators. > On Sat, 3 Aug. 2019, 6:25 am Matt Riedemann, > wrote: > I wanted to send this to get other people's feedback if they have > particular nova configurations once they hit a certain scale (hundreds > or thousands of nodes). Every once in awhile in IRC I'll be chatting > with someone about configuration changes they've made running at large > scale to avoid, for example, hammering the control plane. I don't know > how many times I've thought, "it would be nice if we had a doc > highlighting some of these things so a new operator could come along and > see, oh I've never tried changing that value before". > > I haven't started that doc, but I've started a bug report for people to > dump some of their settings. The most common ones could go into a simple > admin doc to start. > > I know there is more I've thought about in the past that I don't have in > here but this is just a starting point so I don't make the mistake of > not taking action on this again. > > https://bugs.launchpad.net/nova/+bug/1838819 > > -- > > Thanks, > > Matt Hi Matt, My name is Joe - docs person from years back - this looks like a good initiative and I would be up for documenting these settings at scale. 
Next step I can see is gathering more info about this pain point (already started :)) and then I can draft something together for feedback. -------------- next part -------------- An HTML attachment was scrubbed... URL: From i at liuyulong.me Mon Aug 5 06:18:19 2019 From: i at liuyulong.me (LIU Yulong) Date: Mon, 5 Aug 2019 14:18:19 +0800 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Big +1, Rodolfo always gives us valuable review comments and keeps producing high-quality code. Welcome to the core team! ------------------ Original ------------------ From: "Miguel Lavalle"; Date: Mon, Aug 5, 2019 02:52 AM To: "openstack-discuss"; Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core Dear Neutrinos, I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation and evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, has driven the adoption of Pyroute2, and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. I will keep this nomination open for a week as customary. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Mon Aug 5 06:39:21 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Mon, 5 Aug 2019 18:39:21 +1200 Subject: [openstack-dev][magnum] Project updates In-Reply-To: <45d4effd-ae55-cfe4-60dd-3635d7558eb5@catalyst.net.nz> References: <20190731171049.gayatjbtjvgxya25@yuggoth.org> <45d4effd-ae55-cfe4-60dd-3635d7558eb5@catalyst.net.nz> Message-ID: Hi all, The issue of Magnum being "Certified Kubernetes Installer" has been fixed, see https://landscape.cncf.io/organization=open-stack&selected=magnum Thanks. On 1/08/19 6:45 AM, feilong wrote: > On 1/08/19 5:10 AM, Jeremy Stanley wrote: >> On 2019-07-31 21:03:39 +1200 (+1200), feilong wrote: >> [...] >>> So far, we have done some great work in this cycle which make >>> Magnum to achieve to a higher level. >> [...] >> >> This is all great stuff, thanks for the update! >> >>> Kubernetes is still evolving very fast >> [...] >> >> On that note, the Stein release announcement[0] mentioned that >> Magnum was a "Certified Kubernetes Installer." I don't see it listed >> on the Kubernetes Conformance site[1] now, but it was apparently >> still on there as recently as early July[2]. It seemed like this was >> a big deal at one point, but wasn't kept up. Is there any interest >> from the Magnum maintainers in adding support for recent versions of >> Kubernetes and reacquiring that certification in time for the Train >> release? > TBH, I don't know why it's removed from the list and I didn't get any notice about that. But now I'm working with Chris to get it back. Thanks > for reminding. 
> > >> [0] https://www.openstack.org/software/stein/ >> [1] https://www.cncf.io/certification/software-conformance/ >> [2] https://web.archive.org/web/20190705004545/https://www.cncf.io/certification/software-conformance/ -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- From skaplons at redhat.com Mon Aug 5 07:04:08 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 5 Aug 2019 09:04:08 +0200 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: <833D103F-BEFF-4173-9138-4184082F6D65@redhat.com> Yes! That is great news. And of course +1 (or even +100) from me :) > On 4 Aug 2019, at 20:52, Miguel Lavalle wrote: > > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation an evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, driven the adoption of Pyroute2 and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel — Slawek Kaplonski Senior software engineer Red Hat From rico.lin.guanyu at gmail.com Mon Aug 5 07:40:59 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 5 Aug 2019 15:40:59 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann wrote: > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. 
> > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake For the above options, it's not common to use the [GR] system in Shanghai (or in almost the entire China area). So if we would like to reduce confusion and unnecessary arguments, and also to be recognized by as wide an audience as we can, I don't think these are good choices. As for all the geographic options below, most of them originally come from other languages like Mongolian or Russian, so generally speaking, most people won't use the Pinyin system for those names. And I don't think it helps to put the Pinyin on top either. > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) I think this might be my fault here, because it's *Ussuri*! So let's s/Ussri/Ussuri/ (bad Rico! bad!) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
> > I will keep this nomination open for a week as customary. > > Best regards > > Miguel From merlin.blom at bertelsmann.de Mon Aug 5 08:43:08 2019 From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI) Date: Mon, 5 Aug 2019 08:43:08 +0000 Subject: [telemetry] Gnocchi: Aggregates Operation Syntax Message-ID: Hey, I would like to aggregate data from the gnocchi database by using the gnocchi aggregates function of the CLI/API The documentation does not cover the operations that are available nor the syntax that has to be used: https://gnocchi.xyz/gnocchiclient/shell.html?highlight=reaggregation#aggrega tes Searching for more information I found a GitHub Issue: https://github.com/gnocchixyz/gnocchi/issues/393 But I cannot use the syntax from that ether. My use case: I want to aggregate the vcpus hours per month, vram hours per month, . per server or project. - when an instance is stopped only storage is counted - the exact usage is used e.g. 2 vcpus between 1st and 7th day 4vcpus between 8th and last month no mean calculations Do you have detailed documentation about the gnocchi Aggregates Operation Syntax? Do you have complex examples for gnocchi aggregations? Especially when using the python bindings: conn_gnocchi.metric.aggregation(metrics="memory", query=[XXXXXXXX], resource_type='instance', groupby='original_resource_id') Can you give me advice regarding my use case? Do's and don'ts. Thank you for your help in advance! Merlin Blom -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From bcafarel at redhat.com Mon Aug 5 08:52:22 2019 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 5 Aug 2019 10:52:22 +0200 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Big +1 on everything Miguel said! QoS, nova/neutron fixes, all the work to switch to Pyroute2, great reviews to learn from, … And of course +1 for seeing him added to the core list! On Sun, 4 Aug 2019 at 20:54, Miguel Lavalle wrote: > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the > Neutron core team. Rodolfo has been an active contributor to Neutron since > the Mitaka cycle. He has been a driving force over these years in the > implementation an evolution of Neutron's QoS feature, currently leading the > sub-team dedicated to it. Recently he has been working on improving the > interaction with Nova during the port binding process, driven the adoption > of Pyroute2 and has become very active in fixing all kinds of bugs. The > quality and number of his code reviews during the Train cycle are > comparable with the leading members of the core team: > https://www.stackalytics.com/?release=train&module=neutron-group. In my > opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel > -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... 
From laszlo.budai at gmail.com Mon Aug 5 08:54:41 2019 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Mon, 5 Aug 2019 11:54:41 +0300 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: Hi, well, we used the same command to measure the different storage possibilities (sudo iozone -e -I -t 32 -s 100M -r 4k -i 0 -i 1 -i 2); we have measured the disk mounted directly on the host, and we have used the same command to measure the performance in the guests using different ways to attach the storage to the VM. For instance, on the host we were able to measure 408MB/s initial writes, 420MB/s rewrites, 397MB/s random writes, and 700MB/s random reads; on the guest we got the following, using the different technologies: 1. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, images type =raw, without preallocate images) Initial writes 60MB/s, rewrites 70MB/s, random writes 73MB/s, random reads 427MB/s. 2. Ephemeral served by nova (images type = lvm, without preallocate images) Initial writes 332MB/s, rewrites 416MB/s, random writes 417MB/s, random reads 550MB/s. 3. Cinder attached LVM with instance locality Initial writes 148MB/s, rewrites 151MB/s, random writes 149MB/s, random reads 160MB/s. 4. Cinder attached LVM without instance locality Initial writes 103MB/s, rewrites 109MB/s, random writes 103MB/s, random reads 105MB/s. 5. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, images type =raw, with preallocate images) Initial writes 348MB/s, rewrites 400MB/s, random writes 393MB/s, random reads 553MB/s So points 3 and 4 are using iSCSI. As you can see, those numbers are far below the local-volume-based or the local-file-based ones with preallocated images. Could you share some numbers about the performance of your iSCSI-based setup? That would allow us to see whether we are doing something wrong related to iSCSI. Thank you. Kind regards, Laszlo On 8/3/19 8:41 PM, Donny Davis wrote: > I am using the cinder-lvm backend right now and performance is quite good. My situation is similar without the migration parts. Prior to this arrangement I was using iscsi to mount a disk in /var/lib/nova/instances and that also worked quite well.  > > If you don't mind me asking, what kind of i/o performance are you looking for? > > On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo > wrote: > > Thank you Daniel, > > My colleague found the same solution in the meantime. And that helped us as well. > > Kind regards, > Laszlo > > On 8/2/19 6:50 PM, Daniel Speichert wrote: > > For the case of simply using local disk mounted for /var/lib/nova and raw disk image type, you could try adding to nova.conf: > > > >     preallocate_images = space > > > > This implicitly changes the I/O method in libvirt from "threads" to "native", which in my case improved performance a lot (10 times) and generally is the best performance I could get. > > > > Best Regards > > Daniel > > > > On 8/2/2019 10:53, Budai Laszlo wrote: > >> Hello all, > >> > >> we have a problem with the performance of the disk IO in a KVM instance. > >> We are trying to provision VMs with high performance SSDs. we have investigated different possibilities with different results ... > >> > >> 1. configure Nova to use local LVM storage (images_types = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). > >> 2. 
use cinder with lvm backend  and instance locality, we could migrate the instances, but the performance is less than half of the previous case > >> 3. mount the ssd on /var/lib/nova/instances and use the images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_types = lvm performance and read performance is ~65% of the lvm case. > >> > >> do you have any idea to improve the performance for any of the cases 2 or 3 which allows migration. > >> > >> Kind regards, > >> Laszlo > >> > > From donny at fortnebula.com Mon Aug 5 12:08:11 2019 From: donny at fortnebula.com (Donny Davis) Date: Mon, 5 Aug 2019 08:08:11 -0400 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: I am happy to share numbers from my iscsi setup. However these numbers probably won't mean much for your workloads. I tuned my openstack to perform as well as possible for a specific workload (Openstack CI), so some of the things I have put my efforts into are for CI work and not really relevant to general purpose. Also your cinder performance hinges greatly on your networks capabilities. I use a dedicated nic for iscsi traffic, and MTU's are set at 9000 for every device in the iscsi path. *Only* that nic is set at MTU 9000, because if the rest of the openstack network is, it can create more problems than it solves. My network spine is 40G, and each compute node has 4 10G nics. I only use one nic for iscsi traffic. The block storage node has two 40G nics. With that said, I use the fio tool to benchmark performance on linux systems. Here is the command i use to run the benchmark fio --numjobs=16 --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=32 --size=10G --readwrite=randrw --rwmixread=50 >From the block storage node locally Run status group 0 (all jobs): READ: bw=2960MiB/s (3103MB/s), 185MiB/s-189MiB/s (194MB/s-198MB/s), io=79.9GiB (85.8GB), run=26948-27662msec WRITE: bw=2963MiB/s (3107MB/s), 185MiB/s-191MiB/s (194MB/s-200MB/s), io=80.1GiB (85.0GB), run=26948-27662msec >From inside a vm Run status group 0 (all jobs): READ: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=30.0GiB (32.2GB), run=69242-69605msec WRITE: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=29.0GiB (32.2GB), run=69242-69605msec The vm side of the test is able to push pretty close to the limits of the nic. My cloud also currently has a full workload on it, as I have learned in working to get an optimized for CI cloud... it does matter if there is a workload or not. Are you using raid for your ssd's, if so what type? Do you mind sharing what workload will go on your Openstack deployment? Is it DB, web, general purpose, etc. ~/Donny D On Mon, Aug 5, 2019 at 4:54 AM Budai Laszlo wrote: > Hi, > > well, we used the same command to measure the different storage > possibilities (sudo iozone -e -I -t 32 -s 100M -r 4k -i 0 -i 1 -i 2), we > have measured the disk mounted directly on the host, and we have used the > same command to measure the performance in the guests using different ways > to attach the storage to the VM. > > for instance on the host we were able to measure 408MB/s initial writes, > 420MB/s rewrites, 397MB/s Random writes, and 700MB/s random reads, on the > guest we got the following, using the different technologies: > > 1. 
Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, > images type =raw, without preallocate images) > Initial writes 60Mb/s, rewrites 70Mb/s, random writes 73MB/s, random reads > 427MB/s. > > 2. Ephemeral served by nova (images type = lvm, without preallocate images) > Initial writes 332Mb/s, rewrites 416Mb/s, random writes 417MB/s, random > reads 550MB/s. > > 3. Cinder attached LVM with instance locality > Initial writes 148Mb/s, rewrites 151Mb/s, random writes 149MB/s, random > reads 160MB/s. > > 4. Cinder attached LVM without instance locality > Initial writes 103Mb/s, rewrites 109Mb/s, random writes 103MB/s, random > reads 105MB/s. > > 5. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, > images type =raw, with preallocate images) > Initial writes 348Mb/s, rewrites 400Mb/s, random writes 393MB/s, random > reads 553MB/s > > > So points 3,4 are using ISCSI. As you can see those numbers are far below > the local volume based or the local file based with preallocate images. > > Could you share some numbers about the performance of your iSCSI based > setup? that would allow us to see whether we are doing something wrong > related to the iscsi. Thank you. > > Kind regards, > Laszlo > > > On 8/3/19 8:41 PM, Donny Davis wrote: > > I am using the cinder-lvm backend right now and performance is quite > good. My situation is similar without the migration parts. Prior to this > arrangement I was using iscsi to mount a disk in /var/lib/nova/instances > and that also worked quite well. > > > > If you don't mind me asking, what kind of i/o performance are > you looking for? > > > > On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo > wrote: > > > > Thank you Daniel, > > > > My colleague found the same solution in the meantime. And that > helped us as well. > > > > Kind regards, > > Laszlo > > > > On 8/2/19 6:50 PM, Daniel Speichert wrote: > > > For the case of simply using local disk mounted for /var/lib/nova > and raw disk image type, you could try adding to nova.conf: > > > > > > preallocate_images = space > > > > > > This implicitly changes the I/O method in libvirt from "threads" > to "native", which in my case improved performance a lot (10 times) and > generally is the best performance I could get. > > > > > > Best Regards > > > Daniel > > > > > > On 8/2/2019 10:53, Budai Laszlo wrote: > > >> Hello all, > > >> > > >> we have a problem with the performance of the disk IO in a KVM > instance. > > >> We are trying to provision VMs with high performance SSDs. we > have investigated different possibilities with different results ... > > >> > > >> 1. configure Nova to use local LVM storage (images_types = lvm) - > provided the best performance, but we could not migrate our instances > (seems to be a bug). > > >> 2. use cinder with lvm backend and instance locality, we could > migrate the instances, but the performance is less than half of the > previous case > > >> 3. mount the ssd on /var/lib/nova/instances and use the > images_type = raw in nova. We could migrate, but the write performance > dropped to ~20% of the images_types = lvm performance and read performance > is ~65% of the lvm case. > > >> > > >> do you have any idea to improve the performance for any of the > cases 2 or 3 which allows migration. > > >> > > >> Kind regards, > > >> Laszlo > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: 
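To pull the storage options in this thread together, the three setups compared above map to a handful of nova.conf settings on the compute node. A minimal sketch, assuming the Stein libvirt driver; nova_vg is a placeholder volume group name, and the option names and sections should be verified against your release's configuration reference:

[libvirt]
# option 1: back ephemeral disks with LVM on the local SSD/NVMe
images_type = lvm
images_volume_group = nova_vg

[DEFAULT]
# option 3: with raw files under /var/lib/nova/instances, preallocating
# lets libvirt use native I/O instead of threads, as Daniel described
# earlier in the thread
preallocate_images = space

The numbers reported above suggest raw files plus preallocate_images = space get within a few percent of the LVM image backend while keeping migration working, which is the trade-off this thread is circling.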
From doug at doughellmann.com Mon Aug 5 12:22:16 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 5 Aug 2019 08:22:16 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <3F0801EC-9EDA-4AC0-A12F-7B2FF30952B0@doughellmann.com> > On Aug 5, 2019, at 3:40 AM, Rico Lin wrote: > > > > On Sun, Aug 4, 2019 at 8:53 AM Doug Hellmann > wrote: > > > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from community members. The name criteria are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names proposed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > > For the above options, it's not common to use the [GR] system in Shanghai (or in almost the entire China area). So if we would like to reduce confusion and unnecessary arguments, and also to be recognized by as wide an audience as we can, I don't think these are good choices. Ok, based on your input and Howard's I will just drop these options from the proposed list. > > As for all the geographic options below, most of them originally come from other languages like Mongolian or Russian, so generally speaking, most people won't use the Pinyin system for those names. And I don't think it helps to put the Pinyin on top either. Are you saying we should not include any of these names either, or just that when we present the poll we should not include the Pinyin spelling? > > > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > > I think this might be my fault here, because it's *Ussuri*! So let's s/Ussri/Ussuri/ (bad Rico! bad!) I will update the wiki page and ensure this is correct in the poll. > > > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. 
Pinyin would be Wūlātè) > > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in > > Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > > Uma http://www.fallingrain.com/world/CH/20/Uma.html > > Unicorn > > Urban > > Unique > > Umpire > > Utopia > > Umbrella > > Ultimate > > > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > > > > -- > May The Force of OpenStack Be With You, > Rico Lin > irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Mon Aug 5 13:03:27 2019 From: ed at leafe.com (Ed Leafe) Date: Mon, 5 Aug 2019 08:03:27 -0500 Subject: [uc]The UC Nomination Period is now open! Message-ID: As the subject says, the nomination period for the August 2019 User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common, no third party nomination is required. They do so by sending an email to the user-committee at lists.openstack.org mailing-list, with the subject: “UC candidacy” by August 16, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. -- Ed Leafe From fungi at yuggoth.org Mon Aug 5 13:15:13 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Aug 2019 13:15:13 +0000 Subject: [openstack-dev][magnum] Project updates In-Reply-To: References: <20190731171049.gayatjbtjvgxya25@yuggoth.org> <45d4effd-ae55-cfe4-60dd-3635d7558eb5@catalyst.net.nz> Message-ID: <20190805131512.uxmktws5nhymuxrx@yuggoth.org> On 2019-08-05 18:39:21 +1200 (+1200), Feilong Wang wrote: > The issue of Magnum being "Certified Kubernetes Installer" has been > fixed, see > https://landscape.cncf.io/organization=open-stack&selected=magnum Thanks. [...] That's great news--congratulations to the Magnum team! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kevin at cloudnull.com Mon Aug 5 13:26:54 2019 From: kevin at cloudnull.com (Carter, Kevin) Date: Mon, 5 Aug 2019 08:26:54 -0500 Subject: [tripleo] Proposing Kevin Carter (cloudnull) for tripleo-ansible core In-Reply-To: <486e840c-c07b-861d-ea6d-5dd475dd0f6c@redhat.com> References: <486e840c-c07b-861d-ea6d-5dd475dd0f6c@redhat.com> Message-ID: Thank you, everyone. It's an honor to be considered for a core reviewer position on this team, and I will strive not to let you down. -- Kevin Carter IRC: Cloudnull On Mon, Jul 29, 2019 at 7:32 AM Giulio Fidente wrote: > On 7/26/19 11:00 PM, Alex Schultz wrote: > > Hey folks, > > > > I'd like to propose Kevin as a core for the tripleo-ansible repo (e.g. > > tripleo-ansible-core). He has made excellent progress centralizing our > > ansible roles and improving the testing around them. > > > > Please reply with your approval/objections. If there are no objections, > > we'll add him to tripleo-ansible-core next Friday Aug 2, 2019. > > thanks Kevin for igniting a long-awaited transformation > > +1 > -- > Giulio Fidente > GPG KEY: 08D733BA > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aschultz at redhat.com Mon Aug 5 13:28:43 2019 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 5 Aug 2019 07:28:43 -0600 Subject: [tripleo] Proposing Kevin Carter (cloudnull) for tripleo-ansible core In-Reply-To: References: <486e840c-c07b-861d-ea6d-5dd475dd0f6c@redhat.com> Message-ID: Since there were no objections, Kevin has been added to tripleo-ansible-core. Thanks, -Alex On Mon, Aug 5, 2019 at 7:27 AM Carter, Kevin wrote: > Thank you, everyone. It's an honor to be considered for a core reviewer > position on this team, and I will strive not to let you down. > > -- > > Kevin Carter > IRC: Cloudnull > > > On Mon, Jul 29, 2019 at 7:32 AM Giulio Fidente > wrote: > >> On 7/26/19 11:00 PM, Alex Schultz wrote: >> > Hey folks, >> > >> > I'd like to propose Kevin as a core for the tripleo-ansible repo (e.g. >> > tripleo-ansible-core). He has made excellent progress centralizing our >> > ansible roles and improving the testing around them. >> > >> > Please reply with your approval/objections. If there are no objections, >> > we'll add him to tripleo-ansible-core next Friday Aug 2, 2019. >> >> thanks Kevin for igniting a long-awaited transformation >> >> +1 >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Mon Aug 5 13:36:53 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 5 Aug 2019 09:36:53 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: You can drop the Pinyin equivalents from the Mongolian/Russian options when drawing up the poll. The Pinyin spelling for those options was added to the Wiki document (by me) to match what was done for the GR options. I found it important to note that the options which conveniently began with U under the typical transliteration systems of those languages only did so because those systems are, essentially, not Chinese. But, all that said, it's pretty rare to see Mongolian/Russian place names written in Pinyin so it would come off as a bit of a confusing distraction to actually include the Pinyin equivalents in the poll itself. Now, my two cents about the poll options: - I'm against using GR spelling... regardless of the actual history of the use of GR, I've already seen some Chinese contributors state that using GR is, at best, weird. - I'm against using Mongolian/Manchu/Russian names... I do not see how a name from one of these languages is at all representative of Shanghai. - I'm against using arbitrary English words... "Train" was fine because it represented the conference itself and something about our community. "Umpire" and "Umbrella" don't represent anything. - I'm in favor of using English words that describe something about the host city or country... for example I think "Urban" is a great one because Shanghai is by certain metrics one of the most urban areas in the world, China has many of the world's largest cities, etc. Deciding whether certain options should even appear on the poll is outside my responsibility. On Sat, Aug 3, 2019 at 8:51 PM Doug Hellmann wrote: > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from community members. The name criteria are described in [1], and this time around we were looking for names starting with U associated with China. 
With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names proposed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > From mnaser at vexxhost.com Mon Aug 5 13:38:55 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 5 Aug 2019 09:38:55 -0400 Subject: [tc] fast-approve change to update voting schedule for u release Message-ID: Hi everyone, Doug has proposed a change to make changes to the voting schedule for the U release. I will be fast-tracking this change, and this serves as a notification to the TC (and the community as a whole). It's a trivial change which adjusts scheduling only, nothing more. The current date has already passed, so we wouldn't have documentation that makes sense in the first place. https://review.opendev.org/#/c/674465/1 If anyone objects, feel free to push a revert to that patch. Thanks, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From laszlo.budai at gmail.com Mon Aug 5 13:45:53 2019 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Mon, 5 Aug 2019 16:45:53 +0300 Subject: [nova] local ssd disk performance In-Reply-To: References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> Message-ID: <603c5a17-ebce-1107-3071-e07323f62f3c@gmail.com> Thank you for the info. Ours is a generic OpenStack with the main storage on Ceph. We had a requirement from one tenant to provide very fast storage for a NoSQL database. 
So the idea came up to add some NVMe storage to a few compute nodes, and to provide the storage from those to that specific tenant. We have investigated different options for providing this: 1. The SSD managed by Nova as LVM 2. The SSD managed by Cinder, using the instance locality filter 3. The SSD mounted on /var/lib/nova/instances, with the ephemeral disk managed by Nova. Kind regards, Laszlo On 8/5/19 3:08 PM, Donny Davis wrote: > I am happy to share numbers from my iscsi setup. However these numbers probably won't mean much for your workloads. I tuned my openstack to perform as well as possible for a specific workload (Openstack CI), so some of the things I have put my efforts into are for CI work and not really relevant to general purpose. Also your cinder performance hinges greatly on your networks capabilities. I use a dedicated nic for iscsi traffic, and MTU's are set at 9000 for every device in the iscsi path. *Only* that nic is set at MTU 9000, because if the rest of the openstack network is, it can create more problems than it solves. My network spine is 40G, and each compute node has 4 10G nics. I only use one nic for iscsi traffic. The block storage node has two 40G nics.  > > With that said, I use the fio tool to benchmark performance on linux systems.  > Here is the command i use to run the benchmark > > fio --numjobs=16 --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=32 --size=10G --readwrite=randrw --rwmixread=50 > > From the block storage node locally > > Run status group 0 (all jobs): >    READ: bw=2960MiB/s (3103MB/s), 185MiB/s-189MiB/s (194MB/s-198MB/s), io=79.9GiB (85.8GB), run=26948-27662msec >   WRITE: bw=2963MiB/s (3107MB/s), 185MiB/s-191MiB/s (194MB/s-200MB/s), io=80.1GiB (85.0GB), run=26948-27662msec > > From inside a vm > > Run status group 0 (all jobs): >    READ: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=30.0GiB (32.2GB), run=69242-69605msec >   WRITE: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), io=29.0GiB (32.2GB), run=69242-69605msec > > The vm side of the test is able to push pretty close to the limits of the nic. My cloud also currently has a full workload on it, as I have learned in working to get an optimized for CI cloud... it does matter if there is a workload or not.  > > > Are you using raid for your ssd's, if so what type? > > Do you mind sharing what workload will go on your Openstack deployment? > Is it DB, web, general purpose, etc. > > ~/Donny D > > > > > > > > > On Mon, Aug 5, 2019 at 4:54 AM Budai Laszlo > wrote: > > Hi, > > well, we used the same command to measure the different storage possibilities (sudo iozone -e -I -t 32 -s 100M -r 4k -i 0 -i 1 -i 2), we have measured the disk mounted directly on the host, and we have used the same command to measure the performance in the guests using different ways to attach the storage to the VM. > > for instance on the host we were able to measure 408MB/s initial writes, 420MB/s rewrites, 397MB/s Random writes, and 700MB/s random reads, on the guest we got the following, using the different technologies: > > 1. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, images type =raw, without preallocate images) > Initial writes 60Mb/s, rewrites 70Mb/s, random writes 73MB/s, random reads 427MB/s. > > 2. Ephemeral served by nova (images type = lvm, without preallocate images) > Initial writes 332Mb/s, rewrites 416Mb/s, random writes 417MB/s, random reads 550MB/s. > > 3. 
Cinder attached LVM with instance locality > Initial writes 148Mb/s, rewrites 151Mb/s, random writes 149MB/s, random reads 160MB/s. > > 4. Cinder attached LVM without instance locality > Initial writes 103Mb/s, rewrites 109Mb/s, random writes 103MB/s, random reads 105MB/s. > > 5. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, images type =raw, witht preallocate images) > Initial writes 348Mb/s, rewrites 400Mb/s, random writes 393MB/s, random reads 553MB/s > > > So points 3,4 are using ISCSI. As you can see those numbers are far below the local volume based or the local file based with preallocate images. > > Could you share some nubers about the performance of your ISCSI based setup? that would allow us to see whether we are doing something wrong related to the iscsi. Thank you. > > Kind regards, > Laszlo > > > On 8/3/19 8:41 PM, Donny Davis wrote: > > I am using the cinder-lvm backend right now and performance is quite good. My situation is similar without the migration parts. Prior to this arrangement I was using iscsi to mount a disk in /var/lib/nova/instances and that also worked quite well.  > > > > If you don't mind me asking, what kind of i/o performance are you looking for? > > > > On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo >> wrote: > > > >     Thank you Daniel, > > > >     My colleague found the same solution in the meantime. And that helped us as well. > > > >     Kind regards, > >     Laszlo > > > >     On 8/2/19 6:50 PM, Daniel Speichert wrote: > >     > For the case of simply using local disk mounted for /var/lib/nova and raw disk image type, you could try adding to nova.conf: > >     > > >     >     preallocate_images = space > >     > > >     > This implicitly changes the I/O method in libvirt from "threads" to "native", which in my case improved performance a lot (10 times) and generally is the best performance I could get. > >     > > >     > Best Regards > >     > Daniel > >     > > >     > On 8/2/2019 10:53, Budai Laszlo wrote: > >     >> Hello all, > >     >> > >     >> we have a problem with the performance of the disk IO in a KVM instance. > >     >> We are trying to provision VMs with high performance SSDs. we have investigated different possibilities with different results ... > >     >> > >     >> 1. configure Nova to use local LVM storage (images_types = lvm) - provided the best performance, but we could not migrate our instances (seems to be a bug). > >     >> 2. use cinder with lvm backend  and instance locality, we could migrate the instances, but the performance is less than half of the previous case > >     >> 3. mount the ssd on /var/lib/nova/instances and use the images_type = raw in nova. We could migrate, but the write performance dropped to ~20% of the images_types = lvm performance and read performance is ~65% of the lvm case. > >     >> > >     >> do you have any idea to improve the performance for any of the cases 2 or 3 which allows migration. > >     >> > >     >> Kind regards, > >     >> Laszlo > >     >> > > > > > From doug at doughellmann.com Mon Aug 5 13:47:48 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 5 Aug 2019 09:47:48 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: > On Aug 5, 2019, at 9:36 AM, Jeremy Freudberg wrote: > > You can drop the Pinyin equivalents from the Mongolian/Russian options > when drawing up the poll. 
The Pinyin spelling for those options was > added to the Wiki document (by me) to match what was done for the GR > options. I found it important to note that the options which > conveniently began with U under the typical transliteration systems of > those languages only did so because those systems are, essentially, > not Chinese. But, all that said, it's pretty rare to see > Mongolian/Russian place names written in Pinyin so it would come off > as a bit of a confusing distraction to actually include the Pinyin > equivalents in the poll itself. > > Now, my two cents about the poll options: > - I'm against using GR spelling... regardless of the actual history of > the use of GR, I've already seen some Chinese contributors state that > using GR is, at best, weird. Those will be dropped. > - I'm against using Mongolian/Manchu/Russian names... I do not see how > a name from one of these languages is at all representative of > Shanghai. The geographic area under consideration was expanded to "all of China" because it was proving too difficult to come up with place names starting with U from within the narrower area surrounding Shanghai. > - I'm against using arbitrary English words... "Train" was fine > because it represented the conference itself and something about our > community. "Umpire" and "Umbrella" don't represent anything. My understanding is those items were part of the list produced by the Chinese contributor community via their discussions on wechat. I only removed the ones I thought might have obvious negative connotations, and I’m content to leave the rest in the list and have the voters decide if they’re suitable names. > - I'm in favor of using English words that describe something about > the host city or country... for example I think "Urban" is a great > because Shanghai is by certain metrics one of the most urban areas in > the world, China has many of the world's largest cities, etc. > > Deciding whether certain options should even appear on the poll is > outside my responsibility. Lucky you. ;-) > > > > On Sat, Aug 3, 2019 at 8:51 PM Doug Hellmann wrote: >> >> Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. 
>> >> 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen >> 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou >> 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane >> 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling >> 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai >> 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake >> 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) >> 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) >> 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) >> 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) >> 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) >> 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) >> Ula "Miocene Baogeda Ula" (the name is in Mongolian) >> Uma http://www.fallingrain.com/world/CH/20/Uma.html >> Unicorn >> Urban >> Unique >> Umpire >> Utopia >> Umbrella >> Ultimate >> >> [1] https://governance.openstack.org/tc/reference/release-naming.html >> >> From donny at fortnebula.com Mon Aug 5 14:01:46 2019 From: donny at fortnebula.com (Donny Davis) Date: Mon, 5 Aug 2019 10:01:46 -0400 Subject: [nova] local ssd disk performance In-Reply-To: <603c5a17-ebce-1107-3071-e07323f62f3c@gmail.com> References: <616b2439-e3f5-f45f-ddad-efe8014e8ef0@gmail.com> <603c5a17-ebce-1107-3071-e07323f62f3c@gmail.com> Message-ID: If it was me, I would use cinder. Reasons why are as follows:
- DB data doesn't belong on an ephemeral disk; it probably belongs on block storage.
- When you remove the requirement for data to be stored on ephemeral disks, live migration is less of an issue.
- You can tune the c-vol node providing the block storage to meet the performance requirements of the end user's application.
To start with, I would look at the local performance of the disks on the block storage node. In my case, raid5/6 with nvme disks put the cpu at 100% because mdadm isn't multithreaded. Raid 10 provides enough performance, but only because I created separate raid1 volumes, then put raid0 across them. This way I get more threads from mdadm. Once local performance meets expectations, I would move to creating a test volume and then mounting it on the hypervisor manually. Then tune network performance to meet your goals. Finally, test in-VM performance to make sure it's doing what you want. Donny Davis c: 805 814 6800 irc: donnyd On Mon, Aug 5, 2019, 9:45 AM Budai Laszlo wrote: > Thank you for the info. Our is a generic openstack having the main storage > on CEPH. We had a requirement from one tenant to provide a very fast > storage for a no-sql database. So it came the idea to add some nvme storage > to a few computing nodes, and to provide the storage from those to the > specific tenant. > > We have investigated different options in providing this. > > 1. The ssd managed by nova as LVM > 2. the ssd managed by cinder and use the instance locality filter > 3.
the ssd mounted on the /var/liv/instances and the ephemeral disk > managed by nova. > > Kind regards, > Laszlo > > On 8/5/19 3:08 PM, Donny Davis wrote: > > I am happy to share numbers from my iscsi setup. However these numbers > probably won't mean much for your workloads. I tuned my openstack to > perform as well as possible for a specific workload (Openstack CI), so some > of the things I have put my efforts into are for CI work and not really > relevant to general purpose. Also your cinder performance hinges greatly on > your networks capabilities. I use a dedicated nic for iscsi traffic, and > MTU's are set at 9000 for every device in the iscsi path. *Only* that nic > is set at MTU 9000, because if the rest of the openstack network is, it can > create more problems than it solves. My network spine is 40G, and each > compute node has 4 10G nics. I only use one nic for iscsi traffic. The > block storage node has two 40G nics. > > > > With that said, I use the fio tool to benchmark performance on linux > systems. > > Here is the command i use to run the benchmark > > > > fio --numjobs=16 --randrepeat=1 --ioengine=libaio --direct=1 > --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=32 > --size=10G --readwrite=randrw --rwmixread=50 > > > > From the block storage node locally > > > > Run status group 0 (all jobs): > > READ: bw=2960MiB/s (3103MB/s), 185MiB/s-189MiB/s (194MB/s-198MB/s), > io=79.9GiB (85.8GB), run=26948-27662msec > > WRITE: bw=2963MiB/s (3107MB/s), 185MiB/s-191MiB/s (194MB/s-200MB/s), > io=80.1GiB (85.0GB), run=26948-27662msec > > > > From inside a vm > > > > Run status group 0 (all jobs): > > READ: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), > io=30.0GiB (32.2GB), run=69242-69605msec > > WRITE: bw=441MiB/s (463MB/s), 73.4MiB/s-73.0MiB/s (76.0MB/s-77.6MB/s), > io=29.0GiB (32.2GB), run=69242-69605msec > > > > The vm side of the test is able to push pretty close to the limits of > the nic. My cloud also currently has a full workload on it, as I have > learned in working to get an optimized for CI cloud... it does matter if > there is a workload or not. > > > > > > Are you using raid for your ssd's, if so what type? > > > > Do you mind sharing what workload will go on your Openstack deployment? > > Is it DB, web, general purpose, etc. > > > > ~/Donny D > > > > > > > > > > > > > > > > > > On Mon, Aug 5, 2019 at 4:54 AM Budai Laszlo > wrote: > > > > Hi, > > > > well, we used the same command to measure the different storage > possibilities (sudo iozone -e -I -t 32 -s 100M -r 4k -i 0 -i 1 -i 2), we > have measured the disk mounted directly on the host, and we have used the > same command to measure the performance in the guests using different ways > to attach the storage to the VM. > > > > for instance on the host we were able to measure 408MB/s initial > writes, 420MB/s rewrites, 397MB/s Random writes, and 700MB/s random reads, > on the guest we got the following, using the different technologies: > > > > 1. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, > images type =raw, without preallocate images) > > Initial writes 60Mb/s, rewrites 70Mb/s, random writes 73MB/s, random > reads 427MB/s. > > > > 2. Ephemeral served by nova (images type = lvm, without preallocate > images) > > Initial writes 332Mb/s, rewrites 416Mb/s, random writes 417MB/s, > random reads 550MB/s. > > > > 3. Cinder attached LVM with instance locality > > Initial writes 148Mb/s, rewrites 151Mb/s, random writes 149MB/s, > random reads 160MB/s. > > > > 4. 
Cinder attached LVM without instance locality > > Initial writes 103Mb/s, rewrites 109Mb/s, random writes 103MB/s, > random reads 105MB/s. > > > > 5. Ephemeral served by nova (SSD mounted on /var/lib/nova/instances, > images type =raw, witht preallocate images) > > Initial writes 348Mb/s, rewrites 400Mb/s, random writes 393MB/s, > random reads 553MB/s > > > > > > So points 3,4 are using ISCSI. As you can see those numbers are far > below the local volume based or the local file based with preallocate > images. > > > > Could you share some nubers about the performance of your ISCSI > based setup? that would allow us to see whether we are doing something > wrong related to the iscsi. Thank you. > > > > Kind regards, > > Laszlo > > > > > > On 8/3/19 8:41 PM, Donny Davis wrote: > > > I am using the cinder-lvm backend right now and performance is > quite good. My situation is similar without the migration parts. Prior to > this arrangement I was using iscsi to mount a disk in > /var/lib/nova/instances and that also worked quite well. > > > > > > If you don't mind me asking, what kind of i/o performance are > you looking for? > > > > > > On Fri, Aug 2, 2019 at 12:25 PM Budai Laszlo < > laszlo.budai at gmail.com laszlo.budai at gmail.com >> wrote: > > > > > > Thank you Daniel, > > > > > > My colleague found the same solution in the meantime. And that > helped us as well. > > > > > > Kind regards, > > > Laszlo > > > > > > On 8/2/19 6:50 PM, Daniel Speichert wrote: > > > > For the case of simply using local disk mounted for > /var/lib/nova and raw disk image type, you could try adding to nova.conf: > > > > > > > > preallocate_images = space > > > > > > > > This implicitly changes the I/O method in libvirt from > "threads" to "native", which in my case improved performance a lot (10 > times) and generally is the best performance I could get. > > > > > > > > Best Regards > > > > Daniel > > > > > > > > On 8/2/2019 10:53, Budai Laszlo wrote: > > > >> Hello all, > > > >> > > > >> we have a problem with the performance of the disk IO in a > KVM instance. > > > >> We are trying to provision VMs with high performance SSDs. > we have investigated different possibilities with different results ... > > > >> > > > >> 1. configure Nova to use local LVM storage (images_types = > lvm) - provided the best performance, but we could not migrate our > instances (seems to be a bug). > > > >> 2. use cinder with lvm backend and instance locality, we > could migrate the instances, but the performance is less than half of the > previous case > > > >> 3. mount the ssd on /var/lib/nova/instances and use the > images_type = raw in nova. We could migrate, but the write performance > dropped to ~20% of the images_types = lvm performance and read performance > is ~65% of the lvm case. > > > >> > > > >> do you have any idea to improve the performance for any of > the cases 2 or 3 which allows migration. > > > >> > > > >> Kind regards, > > > >> Laszlo > > > >> > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmellado at redhat.com Mon Aug 5 14:49:20 2019 From: dmellado at redhat.com (Daniel Mellado) Date: Mon, 5 Aug 2019 16:49:20 +0200 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL Message-ID: As I have taken on a new role in my company I won't be having the time to dedicate to Kuryr in order to keep as the PTL for the current cycle. I started working on the project more than two cycles ago and it has been a real pleasure for me. 
Helping a project grow from an idea and a set of diagrams to a production-grade service was an awesome experience and I got help from my awesome team and upstream contributors! I would like to take this opportunity to thank everyone who contributed to the success of Kuryr – either by writing code, suggesting new use cases, participating in our discussions, or helping out with Infra! Michal Dulko (irc: dulek) has been kind enough to accept replacing me as the new Kuryr PTL [1]. I’m sure he'll do excellent work as he's knowledgeable about every piece of the code and is tightly connected to the community. I will still be around to help if needed. Please join me congratulating Michal on his new role! Best! Daniel [1] https://review.opendev.org/674624 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From tbechtold at suse.com Mon Aug 5 15:01:49 2019 From: tbechtold at suse.com (Thomas Bechtold) Date: Mon, 5 Aug 2019 17:01:49 +0200 Subject: [manila] Enabling CephFS snapshot support by default In-Reply-To: References: Message-ID: Hi Goutham, On 7/26/19 5:41 AM, Goutham Pacha Ravi wrote: > Hi Zorillas and interested parties, > > (Copying a couple of folks explicitly as they may not be part of > openstack-discuss) > > Manila's CephFS driver has a configuration option > "cephfs_enable_snapshots" that administrators can toggle in > manila.conf to allow project users to take snapshots of their shares. > [1][2] This option has defaulted to False, since the time the CephFS > driver was introduced (Mitaka release of OpenStack Manila) > > IIUC, CephFS snapshots have been "stable" since the Mimic release of > CephFS, which debuted in June 2018. [3] Since then, ceph > deployers/administrators don't have to toggle anything on the backend > to enable snapshots. > > So, can we consider changing the default value of the config opt in > manila to "True" in the Train release? +1. I would also vote for deprecating the option directly. [...] Cheers, Tom From mdemaced at redhat.com Mon Aug 5 15:18:11 2019 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Mon, 5 Aug 2019 17:18:11 +0200 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: Congratulations, Michał!! I'm sure you'll do great. Cheers, Maysa. On Mon, Aug 5, 2019 at 4:49 PM Daniel Mellado wrote: > As I have taken on a new role in my company I won't be having the time > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > I started working on the project more than two cycles ago and it has > been a real pleasure for me. > > Helping a project grow from an idea and a set of diagrams to a > production-grade service was an awesome experience and I got help from > my awesome team and upstream contributors! > > I would like to take this opportunity to thank everyone who contributed > to the success of Kuryr – either by writing code, suggesting new use > cases, participating in our discussions, or helping out with Infra! > > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > knowledgeable about every piece of the code and is tightly connected to > the community. I will still be around to help if needed. > > Please join me congratulating Michal on his new role! > > Best!
> > Daniel > > > [1] https://review.opendev.org/674624 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haleyb.dev at gmail.com Mon Aug 5 15:31:13 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 5 Aug 2019 11:31:13 -0400 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: <582fd354-3871-6cc8-4949-cd306d66c23a@gmail.com> Big +1 from me as well, keep up the great work Rodolfo! On 8/4/19 2:52 PM, Miguel Lavalle wrote: > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the > Neutron core team. Rodolfo has been an active contributor to Neutron > since the Mitaka cycle. He has been a driving force over these years in > the implementation an evolution of Neutron's QoS feature, currently > leading the sub-team dedicated to it. Recently he has been working on > improving the interaction with Nova during the port binding process, > driven the adoption of Pyroute2 and has become very active in fixing all > kinds of bugs. The quality and number of his code reviews during the > Train cycle are comparable with the leading members of the core team: > https://www.stackalytics.com/?release=train&module=neutron-group. In my > opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel From hongbin.lu at huawei.com Mon Aug 5 15:36:33 2019 From: hongbin.lu at huawei.com (Hongbin Lu) Date: Mon, 5 Aug 2019 15:36:33 +0000 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: 00370D08-B149-4873-BCCC-9B4877FBA7FA Big +1 from me as well. -------------------------------------------------- Hongbin Lu Hongbin Lu Mobile: Email: hongbin.lu at huawei.com From:Miguel Lavalle To:openstack-discuss Date:2019-08-04 14:53:45 Subject:[openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core Dear Neutrinos, I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation an evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, driven the adoption of Pyroute2 and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. I will keep this nomination open for a week as customary. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Mon Aug 5 16:22:43 2019 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Mon, 05 Aug 2019 18:22:43 +0200 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: On Mon, 2019-08-05 at 16:49 +0200, Daniel Mellado wrote: > As I have taken on a new role in my company I won't be having the time > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > I started working on the project more than two cycles ago and it has > been a real pleasure for me. 
> > Helping a project grow from an idea and a set of diagrams to a > production-grade service was an awesome experience and I got help from > my awesome team and upstream contributors! > > I would like to take this opportunity to thank everyone who contributed > to the success of Kuryr – either by writing code, suggesting new use > cases, participating in our discussions, or helping out with Infra! Thanks for leading the project for 2.5 cycles, you did a great job! > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > knowledgeable about every piece of the code and is tightly connected to > the community. I will still be around to help if needed. > > Please join me congratulating Michal on his new role! I actually thought a formal election process was required here, but I can confirm that we agreed that I should run in those elections. If using a simpler path is possible here, I'm totally fine with it. > Best! > > Daniel > > > [1] https://review.opendev.org/674624 > From i at liuyulong.me Mon Aug 5 16:29:08 2019 From: i at liuyulong.me (=?utf-8?B?TElVIFl1bG9uZw==?=) Date: Tue, 6 Aug 2019 00:29:08 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: Hi, Please allow me to re-post my former proposal here again [1]. Let me quote some contents: There is no Standard Chinese Pinyin that starts with 'U'. So I have a suggestion: because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on, how about we give the OpenStack version name a rotated letter order? The first syllable of the Pinyin version name would have its letters switched. For instance, we can use 'Uw' and 'Uy' to represent the Standard Pinyin 'Wu' and 'Yu'. Then we will have a lot of choices. Here is my list: 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 浦东区: Updong,Pudong District, Shanghai 徐汇区: Uxhui,Xuhui District, Shanghai 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic Belt of China, Shanghai 武功: Uwgong, town of Shaanxi Province, the birthplace of farming civilization of the China; pun for Kongfu 乌镇: Uwzhen, yes, again 榆林: Uylin, City of China, Shaanxi province 无锡: Uwxi, City of China, Jiangsu province 玉溪: Uyxi, City of China, Yunnan province. 湖南:Uhnan, Hunan Province 鲁:Ul, the abbreviation of Shandong Province Thank you [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html LIU Yulong ------------------ Original ------------------ From: "Doug Hellmann"; Date: Sun, Aug 4, 2019 08:48 AM To: "openstack-discuss"; Subject: [all][tc] U Cycle Naming Poll Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised.
We will start the poll next week using this list, including any modifications based on that discussion. 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) Ula "Miocene Baogeda Ula" (the name is in Mongolian) Uma http://www.fallingrain.com/world/CH/20/Uma.html Unicorn Urban Unique Umpire Utopia Umbrella Ultimate [1] https://governance.openstack.org/tc/reference/release-naming.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Mon Aug 5 17:30:26 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 5 Aug 2019 13:30:26 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: Hi -- That's an interesting proposal. Do you believe that your suggestion (non-standard modification of Pinyin) is more appropriate than the alternatives (using Mongolian/Russian place names, or using English words)? Would Chinese contributors understand these names with switched letters? On Mon, Aug 5, 2019 at 12:31 PM LIU Yulong wrote: > > Hi, > > Please allow me to re-post my former proposal here again [1]. Let me quote some contents: > There is no Standard Chinese Pinyin starts with 'U'. So I have a suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. How about we give the OpenStack version name with letters order of rotation? The first word of Pinyin OpenStack version Name will have a sequence switching. > For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin 'Wu' and 'Yu'. Then we will have a lot of choices. > Here is my list: > 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 > 浦东区: Updong,Pudong District, Shanghai > 徐汇区: Uxhui,Xuhui District, Shanghai > 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic Belt of China, Shanghai > 武功: Uwgong, town of Shaanxi Province, the birthplace of farming civilization of the China; pun for Kongfu > 乌镇: Uwzhen, yes, again > 榆林: Uylin, City of China, Shaanxi province > 无锡: Uwxi, City of China, Jiangsu province 玉溪: Uyxi, City of China, Yunnan province. 
> 湖南:Uhnan, Hunan Province > 鲁:Ul, the abbreviation of Shandong Province > > Thank you > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html > > > LIU Yulong > > > ------------------ Original ------------------ > From: "Doug Hellmann"; > Date: Sun, Aug 4, 2019 08:48 AM > To: "openstack-discuss"; > Subject: [all][tc] U Cycle Naming Poll > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > From tpb at dyncloud.net Mon Aug 5 17:32:31 2019 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 5 Aug 2019 13:32:31 -0400 Subject: [Manila] Python3 support in Manila 3rd Party CI Message-ID: <20190805173231.xkqrjbwl4wjwvd2k@barron.net> We worked in the Pike release to get Manila unit tests running under Python3. In Train we have completed the work begun in Stein to get all first party functional test jobs running under Python 3. Now we need to push to get third party jobs converted to Python 3 since Train will be the last OpenStack release to keep support for Python2 and it will itself support Python3 first [1]. 
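For third-party CI systems that drive devstack-based jobs, the switch is often just a matter of telling devstack to use Python 3. A minimal sketch, assuming your job consumes a standard local.conf (the fragment below is illustrative, not a drop-in for any particular CI system):

  # hypothetical local.conf fragment for a Python 3 third-party CI job
  [[local|localrc]]
  USE_PYTHON3=True
  PYTHON3_VERSION=3.6

Jobs built on other tooling will have an equivalent knob; check your CI framework's docs.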
None of this will be news to those who attend the weekly Manila team meeting at 1500 UTC on Thursdays on #openstack-meetings-alt on freenode [2], but not every back end driver has regular representation at those meetings so we are communicating the need for third party CI job Python 3 support here as well. We are tracking this work on the Manila wiki [3], where you may also find or add tips to help other CI maintainers. Also, please feel free to follow up with me via email or on freenode #openstack-manila. -- Tom Barron email: tpb at dyncloud.net irc: tbarron [1] https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html [2] https://wiki.openstack.org/wiki/Manila/Meetings [3] https://wiki.openstack.org/wiki/Manila/TrainCycle#Python3_Testing From jim at jimrollenhagen.com Mon Aug 5 17:40:42 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 5 Aug 2019 13:40:42 -0400 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: On Mon, Aug 5, 2019 at 12:24 PM Michał Dulko wrote: > On Mon, 2019-08-05 at 16:49 +0200, Daniel Mellado wrote: > > As I have taken on a new role in my company I won't be having the time > > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > > > I started working on the project more than two cycles ago and it has > > been a real pleasure for me. > > > > Helping a project grow from an idea and a set of diagrams to a > > production-grade service was an awesome experience and I got help from > > my awesome team and upstream contributors! > > > > I would like to take this opportunity to thank everyone who contributed > > to the success of Kuryr – either by writing code, suggesting new use > > cases, participating in our discussions, or helping out with Infra! > > Thanks for leading the project for 2.5 cycles, you did a great job! > > > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > > knowledgeable about every piece of the code and is tightly connected to > > the community. I will still be around to help if needed. > > > > Please join me congratulating Michal on his new role! > > I actually thought a formal election process is required here, but I > can confirm that we agreed that I should run in those elections. If > using a simpler path is possible here I'm totally fine with it. > Nope! The TC appoints a replacement[0], but if the outgoing PTL appoints a replacement, that makes it easier for us, so we generally accept that. :) [0] https://governance.openstack.org/tc/reference/charter.html#election-for-ptl-seats // jim > > > Best! > > > > Daniel > > > > > > [1] https://review.opendev.org/674624 > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Aug 5 17:48:36 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 5 Aug 2019 13:48:36 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <8a579f53-6a8d-53a9-0c7c-a3eecb791443@redhat.com> On 5/08/19 3:40 AM, Rico Lin wrote: > it's *Ussuri*! so let's s/Ussri/Ussuir/ (bad Rico! bad!)
and after that we can s/Ussuir/Ussuri/ ;) From haleyb.dev at gmail.com Mon Aug 5 18:05:22 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 5 Aug 2019 14:05:22 -0400 Subject: [neutron] Bug deputy report for week of July 29th Message-ID: <658331ab-ebb4-e944-a8d2-33683e0b4db9@gmail.com> Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. -Brian Critical bugs ------------- None High bugs --------- * https://bugs.launchpad.net/neutron/+bug/1838699 - Removing a subnet from DVR router also removes DVR MAC flows for other router on network - Confirmed, needs owner * https://bugs.launchpad.net/neutron/+bug/1838760 - Security groups don't work for trunk ports with iptables_hybrid fw driver - Confirmed, needs owner * https://bugs.launchpad.net/neutron/+bug/1838793 - "KeepalivedManagerTestCase" tests failing during namespace deletion - Rodolfo took ownership * https://bugs.launchpad.net/devstack/+bug/1838811 - /opt/stack/devstack/tools/outfilter.py failing in neutron functional jobs since 8/2 - https://review.opendev.org/#/c/674426/ merged Medium bugs ----------- * https://bugs.launchpad.net/neutron/+bug/1838396 - update port receive 500 - https://review.opendev.org/#/c/673486/ * https://bugs.launchpad.net/neutron/+bug/1838403 - Asymmetric floating IP notifications - Seems events for floating IPs/routers are not sent in some cases - Additional information was supplied in bug, need to reproduce - Need neutron version information - Needs owner * https://bugs.launchpad.net/neutron/+bug/1838449 - Router migrations failing in the gate - Miguel assigned to himself, needs further investigation * https://bugs.launchpad.net/neutron/+bug/1838431 - [scale issue] ovs-agent port processing time increases linearly and eventually timeouts - Confirmed, needs owner * https://bugs.launchpad.net/neutron/+bug/1838563 - Timeout in executing ovs command crash ovs agent - https://review.opendev.org/#/c/674085/ * https://bugs.launchpad.net/neutron/+bug/1838587 - request neutron with Incorrect body key return 500 - https://review.opendev.org/#/c/674153/ * https://bugs.launchpad.net/neutron/+bug/1838689 - rpc_workers default value ignores setting of api_workers - https://review.opendev.org/674125 merged, backport in progress Low bugs -------- Wishlist bugs ------------- * https://bugs.launchpad.net/neutron/+bug/1838621 - [RFE] Configure extra dhcp options via API and per network - Discussed at neutron drivers meeting 8/2 - Slawek will investigate questions that came up at meeting Invalid bugs ------------ Further triage required ----------------------- * https://bugs.launchpad.net/neutron/+bug/1838617 - ssh connection getting dropped frequently - Pike, but not much info given, asked for logs and/or a reproducer on more recent code. 
* https://bugs.launchpad.net/neutron/+bug/1838697 - DVR Mac conversion rules are only added for the first router a network is attached to - Asked for additional information * https://bugs.launchpad.net/neutron/+bug/1839004 - Rocky DVR-SNAT seems to be missing entries for conntrack marking - Looks like a possible misconfiguration - Asked for additional information From mtreinish at kortar.org Mon Aug 5 18:21:44 2019 From: mtreinish at kortar.org (Matthew Treinish) Date: Mon, 5 Aug 2019 14:21:44 -0400 Subject: stestr Python 2 Support In-Reply-To: <20190803001018.GA2352@thor.bakeyournoodle.com> References: <20190724131136.GA11582@sinanju.localdomain> <20190803001018.GA2352@thor.bakeyournoodle.com> Message-ID: <20190805182144.GA10102@zeong> On Sat, Aug 03, 2019 at 10:10:18AM +1000, Tony Breeds wrote: > On Wed, Jul 24, 2019 at 09:11:36AM -0400, Matthew Treinish wrote: > > Hi Everyone, > > > > I just wanted to send a quick update about the state of python 2 support in > > stestr, since OpenStack is the largest user of the project. With the recent > > release of stestr 2.4.0 we've officially deprecated the python 2.7 support > > in stestr. It will emit a DeprecationWarning whenever it's run from the CLI > > with python 2.7 now. The plan (which is not set in stone) is that we will be > > pushing a 3.0.0 release that removes the python 2 support and compat code from > > stestr sometime in early 2020 (definitely after the Python 2.7 EoL on Jan. > > 1st). I don't believe this conflicts with the current Python version support > > plans in OpenStack [1] but just wanted to make sure people were aware so that > > there are no surprises when stestr stops working with Python 2 in 3.0.0. > > Thanks Matt. I know it's a little meta but if something really strange > were to happen would you be open to doing 2.X.Y releases while we still > have maintained branches that use it? Sure, I'm open to doing that if the need arises. The normal release process for stestr doesn't involve branching. But, if there is a critical issue in 2.x.y that requires a fix we can easily create a branch and push a bugfix release when/if that happens. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From thuanlk at viettel.com.vn Mon Aug 5 10:01:36 2019 From: thuanlk at viettel.com.vn (thuanlk at viettel.com.vn) Date: Mon, 5 Aug 2019 17:01:36 +0700 (ICT) Subject: [neutron] OpenvSwitch firewall sctp getting dropped References: <000001d54623$a91ae750$fb50b5f0$@viettel.com.vn> Message-ID: <030101d54b74$c4be6800$4e3b3800$@viettel.com.vn> I have tried every version of Open vSwitch but the problem continues to happen. Does the Open vSwitch firewall support SCTP? Thanks and best regards ! --------------------------------------- Lăng Khắc Thuận OCS Cloud | OCS (VTTEK) +(84)- 966463589 -----Original Message----- From: Lang Khac Thuan [mailto:thuanlk at viettel.com.vn] Sent: Tuesday, July 30, 2019 11:22 AM To: 'smooney at redhat.com' ; 'openstack-discuss at lists.openstack.org' Subject: RE: [neutron] OpenvSwitch firewall sctp getting dropped I have tried configuring SCTP but nothing changed!
openstack security group rule create --ingress --remote-ip 0.0.0.0/0 --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp
openstack security group rule create --egress --remote-ip 0.0.0.0/0 --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp

Displaying 2 items:
Direction | Ether Type | IP Protocol | Port Range   | Remote IP Prefix | Remote Security Group | Actions
Egress    | IPv4       | 132         | 2000 - 10000 | 0.0.0.0/0        | -                     |
Ingress   | IPv4       | 132         | 2000 - 10000 | 0.0.0.0/0        | -                     |

Thanks and best regards ! --------------------------------------- Lăng Khắc Thuận OCS Cloud | OCS (VTTEK) +(84)- 966463589 -----Original Message----- From: smooney at redhat.com [mailto:smooney at redhat.com] Sent: Tuesday, July 30, 2019 1:27 AM To: thuanlk at viettel.com.vn; openstack-discuss at lists.openstack.org Subject: Re: [neutron] OpenvSwitch firewall sctp getting dropped On Mon, 2019-07-29 at 22:38 +0700, thuanlk at viettel.com.vn wrote: > I have installed Openstack Queens on CentOs 7 with OvS and I recently > used the native openvswitch firewall to implement SecusiryGroup. The > native OvS firewall seems to work just fine with TCP/UDP traffic but > it does not forward any SCTP traffic going to the VMs no matter how I > change the security groups, But it run if i disable port security > completely or use iptables_hybrid firewall driver. What do I have to > do to allow SCTP packets to reach the VMs? the security groups api is a whitelist model so all traffic is dropped by default. if you want to allow sctp you would have to create a new security group rule with ip_protocol set to the protocol number for sctp. e.g. openstack security group rule create --protocol sctp ... im not sure if neutron supports --dst-port for sctp but you can still filter on --remote-ip or --remote-group and can specify the rule as an --ingress or --egress rule as normal. https://docs.openstack.org/python-openstackclient/stein/cli/command-objects/security-group-rule.html based on this commit https://github.com/openstack/neutron/commit/f711ad78c5c0af44318c6234957590c91592b984 it looks like neutron now validates the port ranges for sctp, implying it supports setting them, so i guess its just a gap in the documentation. > From ks3019 at att.com Mon Aug 5 15:56:00 2019 From: ks3019 at att.com (SKELS, KASPARS) Date: Mon, 5 Aug 2019 15:56:00 +0000 Subject: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment In-Reply-To: References: Message-ID: <2ADBF0C373B7E84E944B1E06D3CDDFC91E808AE8@MOKSCY3MSGUSRGI.ITServices.sbc.com> Hi Anirudh, The Airship Seaworthy is a bare-metal production reference implementation of Airship deployment, e.g. a deployment that has 3 control servers (to carry HA and Ceph data replication), as well as ceph setup/replication for tenant data/VMs, redundant/bonded networks, and there are also things such as DNS/TLS requirements to get this up and running. We also have Airsloop that is meant for 2 bare-metal servers (1 control node, and 1 compute). This from your description might fit better with your HW – and is also much simpler to install; here are some simplifications for it compared to the full setup https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html I would def recommend to first get familiar with Airsloop and get it up and running. The software/components are all the same but configured in a non-redundant way.
For virtual setups we have 2 options available right now * You can very simply get AIAB running – it’s a 1 VM setup and will give you a feel for what Airship is https://github.com/airshipit/treasuremap/tree/master/tools/deployment/aiab * There is also a virtual multi-node environment that was available in the airship-in-a-bottle repo (https://github.com/airshipit/airship-in-a-bottle/tree/master/tools/multi_nodes_gate). This is now being moved to treasuremap and I would wait a bit since it’s slightly outdated on the old airship-in-a-bottle repo. Kind regards, Kaspars From: Anirudh Gupta Sent: Wednesday, May 29, 2019 2:31 AM To: airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: [Airship-discuss] [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment Hi Team, We want to test Production Ready Airship-Seaworthy in our virtual environment The link followed is https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html As per the document we need 6 DELL R720xd bare-metal servers: 3 control, and 3 compute nodes. But we need to deploy our setup on Virtual Environment. Does Airship-Seaworthy support Installation on Virtual Environment? We have 2 Rack Servers with Dual-CPU Intel® Xeon® E5 26xx with 16 cores each and 128 GB RAM. Is it possible that we can create Virtual Machines on them and set up the complete environment. In that case, what possible infrastructure do we require for setting up the complete setup. Looking forward for your response. Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ks3019 at att.com Mon Aug 5 16:09:50 2019 From: ks3019 at att.com (SKELS, KASPARS) Date: Mon, 5 Aug 2019 16:09:50 +0000 Subject: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment In-Reply-To: References: Message-ID: <2ADBF0C373B7E84E944B1E06D3CDDFC91E808B71@MOKSCY3MSGUSRGI.ITServices.sbc.com> Hi, this is great to hear!! Here are a few additional tips on the direction you are going * It is actually possible to configure Drydock/MAAS to talk to libvirt; it’s a bit tricky to set up all the SSH keys – but it’s possible https://github.com/airshipit/treasuremap/blob/master/site/seaworthy-virt/profiles/host/gate-vm-cp.yaml#L27 * There is a framework called multinode-gate or seaworthy-virt that aims to create more of a testing/gating environment using KVM and a single host (launching at this moment 4 VMs).
Most of that code was in the older airship-in-a-bottle repo but is now being moved to the treasuremap (site/seaworthy-virt) site; docs/scripts will come later https://review.opendev.org/#/c/655517/. This uses a very similar idea: automating the launch of KVM VMs and then fully automating the Airship deployment on top. Have a look! * For the disk name – ‘bootdisk’ is an alias based on a SCSI ID, you can look it up with ‘sudo lshw -c disk’ and then update the HW profile; using a direct name such as sda is also OK, as you found https://github.com/airshipit/treasuremap/blob/master/site/airsloop/profiles/hardware/dell_r720xd.yaml#L45 * Yes, LMA (logging, monitoring, alerting) is on the heavier side, and in fact I believe in our current virtual setups (AIAB and virtual seaworthy) we disable deployment of them as well. It’s needed in production, though. Airsloop itself was meant more for bare metal; you may use it to run proper VMs to test virtual workloads/VNFs in a more realistic setting since the compute is bare-metal, but it’s great you got it running!! Cheers, Kaspars From: Li, Cheng1 Sent: Sunday, June 9, 2019 12:59 AM To: Anirudh Gupta ; airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: Re: [Airship-discuss] [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment Finally, I have been able to deploy airsloop in a virtual env. I created two VMs (libvirt/KVM driven), one for genesis and the other for the compute node. These two VMs were on the same host. As the compute node VM is supposed to be provisioned by MAAS via IPMI/PXE, I used virtualbmc to simulate the IPMI. I authored the site by following these two guides[1][2]. It’s a mix of guide[1] and guide[2]. The commands I used are all these ones[3]. After fixing several issues, I have deployed the virtual airsloop env. I list here some issues I met: 1. Node identify failed. At the beginning of the ‘prepare_and_deploy_nodes’ step, drydock powers on the compute node VM via IPMI. Once the compute VM starts up via PXE boot, it runs a script to detect local network interfaces and sends the info back to drydock, so drydock can identify the node based on the received info. But the compute VM doesn’t have a real iLO interface, so drydock can’t identify it. What I did to work around this was to manually fill in the IPMI info on the MAAS web page. 2. My host doesn’t have enough CPU cores, and neither do the VMs. So I had to increase --pods-per-core in kubelet.yaml. 3. The disk name in the compute VM is vda, instead of sda. Drydock can’t map the alias device name to vda, so I had to use the fixed alias name ‘vda’, which is the same as its real device name (it was ‘bootdisk’). 4. My host doesn’t have enough resources (CPU, memory), so I removed some resource-consuming components (logging, monitoring). Besides, I disabled the neutron rally test, as it failed with a timeout error because of the resource limits. I also paste my site changes[4] for reference.
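In case it helps anyone reproducing this, here is a minimal sketch of the virtualbmc part described above (the domain name 'airsloop-compute' and the host IP are examples only; vbmc defaults to admin/password credentials):

  vbmc add airsloop-compute --port 6230
  vbmc start airsloop-compute
  # sanity-check the simulated BMC before pointing drydock/MAAS at it
  ipmitool -I lanplus -H 192.168.122.1 -p 6230 -U admin -P password power status

Use whatever address your libvirt host actually listens on in place of the example IP.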
[1] https://airship-treasuremap.readthedocs.io/en/latest/authoring_and_deployment.html [2] https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html [3] https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html#getting-started [4] https://github.com/cheng1li/treasuremap/commit/7a8287720dacc6dc1921948aaddec96b8cf2645e Thanks, Cheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Thursday, May 30, 2019 7:29 PM To: Li, Cheng1 >; airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: RE: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment Hi Team, I am trying to create Airship-Seaworthy from the link https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html It requires 6 DELL R720xd bare-metal servers: 3 control, and 3 compute nodes to be configured, but there is no documentation of how to install and getting started with Airship-Seaworthy. Do we need to follow the “Getting Started” section mentioned in Airsloop or will there be any difference in case of Seaworthy. https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html#getting-started Also what all configurations need to be run from the 3 controller nodes and what needs to be run from 3 computes? Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) From: Li, Cheng1 > Sent: 30 May 2019 08:29 To: Anirudh Gupta >; airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: RE: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment I have the same question. I haven’t seen any docs which guides how to deploy airsloop/air-seaworthy in virtual env. I am trying to deploy airsloop on libvirt/kvm driven virtual env. Two VMs, one for genesis, the other for compute. Virtualbmc for ipmi simulation. The genesis.sh scripts has been run on genesis node without error. But deploy_site fails at prepare_and_deploy_nodes task(action ‘set_node_boot’ timeout). I am still investigating this issue. It will be great if we have official document for this scenario. Thanks, Cheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Wednesday, May 29, 2019 3:31 PM To: airship-discuss at lists.airshipit.org; airship-announce at lists.airshipit.org; openstack-dev at lists.openstack.org; openstack at lists.openstack.org Subject: [Airship-Seaworthy] Deployment of Airship-Seaworthy on Virtual Environment Hi Team, We want to test Production Ready Airship-Seaworthy in our virtual environment The link followed is https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html As per the document we need 6 DELL R720xd bare-metal servers: 3 control, and 3 compute nodes. But we need to deploy our setup on Virtual Environment. Does Airship-Seaworthy support Installation on Virtual Environment? We have 2 Rack Servers with Dual-CPU Intel® Xeon® E5 26xx with 16 cores each and 128 GB RAM. Is it possible that we can create Virtual Machines on them and set up the complete environment. In that case, what possible infrastructure do we require for setting up the complete setup. Looking forward for your response. Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. 
The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Mon Aug 5 19:19:46 2019 From: hongbin034 at gmail.com (Hongbin Lu) Date: Mon, 5 Aug 2019 15:19:46 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: Interesting idea. That makes me wonder if we have to pick a name starting with 'U'. If the rule can be relaxed to allow a name with 'U' as the second letter, it would be much easier. Best regards, Hongbin On Mon, Aug 5, 2019 at 12:34 PM LIU Yulong wrote: > Hi, > > Please allow me to re-post my former proposal here again [1]. Let me quote > some contents: > There is no Standard Chinese Pinyin starts with 'U'. So I have a > suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. How > about we give the OpenStack version name with letters order of rotation? The > first word of Pinyin OpenStack version Name will have a sequence > switching. > For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin 'Wu' > and 'Yu'. Then we will have a lot of choices. > Here is my list: > 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 > 浦东区: Updong,Pudong District, Shanghai > 徐汇区: Uxhui,Xuhui District, Shanghai > 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic > Belt of China, Shanghai > 武功: Uwgong, town of Shaanxi Province, the birthplace of farming > civilization of the China; pun for Kongfu > 乌镇: Uwzhen, yes, again > 榆林: Uylin, City of China, Shaanxi province > 无锡: Uwxi, City of China, Jiangsu province 玉溪: Uyxi, City of China, Yunnan > province. > 湖南:Uhnan, Hunan Province > 鲁:Ul, the abbreviation of Shandong Province > > Thank you > > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html > > > LIU Yulong > > > ------------------ Original ------------------ > *From: * "Doug Hellmann"; > *Date: * Sun, Aug 4, 2019 08:48 AM > *To: * "openstack-discuss"; > *Subject: * [all][tc] U Cycle Naming Poll > > Every OpenStack development cycle and release has a code-name.
As with > everything we do, the process of choosing the name is open and based on > input from communty members. The name critera are described in [1], and > this time around we were looking for names starting with U associated with > China. With some extra assistance from local community members (thank you > to everyone who helped!), we have a list of candidate names that will go > into the poll. Below is a subset of the names propsed, including those that > meet the standard criteria and some of the suggestions that do not. Before > we start the poll, the process calls for us to provide a period of 1 week > so that any names removed from the proposals can be discussed and any > last-minute objections can be raised. We will start the poll next week > using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared > among Mongolian/Manchu/Russian; this is a common Latin-alphabet > transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is > in Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name. > Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Aug 5 19:28:42 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Aug 2019 19:28:42 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <20190805192842.qgzj377epvubrvyv@yuggoth.org> On 2019-08-05 15:19:46 -0400 (-0400), Hongbin Lu wrote: > Interesting idea. That makes me wonder if we have to pick a name > starting with 'U'. If the rule can be relaxed to allow a name with > 'U' as the second letter, it would be much easier. [...] Only if we also actually type the name starting with a "u" in the places we use it, so that it will ASCII sort between "train" and whatever name we come up with for the "v" cycle. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Mon Aug 5 19:32:45 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 5 Aug 2019 15:32:45 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> I don’t think we want to start changing the rules at this point. For one thing, we have a lot of automation built up around the idea that our release names come alphabetically, so we really do need to choose something that starts with U this time around. I would rather we choose something that naturally starts with U, rather than flipping the first two letters of some other words that have U in them. Based on other feedback in this thread, we have dropped some of the proposed names so the list of candidates is now: 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name) 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) Ula "Miocene Baogeda Ula" (the name is in Mongolian) Uma http://www.fallingrain.com/world/CH/20/Uma.html Unicorn Urban Unique Umpire Utopia umbrella ultimate Unless we hear more feedback that those are invalid or inadequate by the end of the week, we will make up the poll using those names. Thanks, Doug > On Aug 5, 2019, at 3:19 PM, Hongbin Lu wrote: > > Interesting idea. That makes me wonder if we have to pick a name starting with 'U'. If the rule can be relaxed to allow a name with 'U' as the second letter, it would be much easier. > > Best regards, > Hongbin > > On Mon, Aug 5, 2019 at 12:34 PM LIU Yulong wrote: > Hi, > > Please allow me to re-post my former proposal here again [1]. Let me quote some contents: > There is no Standard Chinese Pinyin starts with 'U'. So I have a suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. > How about we give the OpenStack version name with letters order of rotation? > The first word of Pinyin OpenStack version Name will have a sequence switching. > For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin 'Wu' and 'Yu'. Then we will have a lot of choices. > Here is my list: > 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 > 浦东区: Updong,Pudong District, Shanghai > 徐汇区: Uxhui,Xuhui District, Shanghai > 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic Belt of China, Shanghai > 武功: Uwgong, town of Shaanxi Province, the birthplace of farming civilization of the China; pun for Kongfu > 乌镇: Uwzhen, yes, again > 榆林: Uylin, City of China, Shaanxi province > 无锡: > Uwxi, City of China, Jiangsu province > > 玉溪: Uyxi, City of China, Yunnan province. 
> 湖南:Uhnan, Hunan Province > 鲁:Ul, the abbreviation of Shandong Province > > Thank you > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html > > > LIU Yulong > > > ------------------ Original ------------------ > From: "Doug Hellmann"; > Date: Sun, Aug 4, 2019 08:48 AM > To: "openstack-discuss"; > Subject: [all][tc] U Cycle Naming Poll > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánchábù) > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > Uma http://www.fallingrain.com/world/CH/20/Uma.html > Unicorn > Urban > Unique > Umpire > Utopia > Umbrella > Ultimate > > [1] https://governance.openstack.org/tc/reference/release-naming.html > From jeremyfreudberg at gmail.com Mon Aug 5 19:41:30 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 5 Aug 2019 15:41:30 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: Responding to myself with a different approach, as seen elsewhere in the thread. I think switching the position of letters is not ideal (e.g. "Uwzhen" instead of "Wuzhen"). I think emphasizing the second letter is okay (e.g. "wUzhen" instead of "Wuzhen", with the branch name called "stable/u-wuzhen" to satisfy tooling). On Mon, Aug 5, 2019 at 1:30 PM Jeremy Freudberg wrote: > > Hi -- That's an interesting proposal. 
Do you believe that your > suggestion (non-standard modification of Pinyin) is more appropriate > than the alternatives (using Mongolian/Russian place names, or using > English words)? Would Chinese contributors understand these names with > switched letters? > > On Mon, Aug 5, 2019 at 12:31 PM LIU Yulong wrote: > > > > Hi, > > > > Please allow me to re-post my former proposal here again [1]. Let me quote some contents: > > There is no Standard Chinese Pinyin starts with 'U'. So I have a suggestion, because we have 'Wu', 'Lu', 'Hu', 'Nu', 'Yu' and so on. How about we give the OpenStack version name with letters order of rotation? The first word of Pinyin OpenStack version Name will have a sequence switching. > > For instance, we can use 'Uw', 'Uy' to represent the Standard Pinyin 'Wu' and 'Yu'. Then we will have a lot of choices. > > Here is my list: > > 普陀区: Uptuo,Putuo District, Shanghai; Can also be the Mount Putuo, 普陀山 > > 浦东区: Updong,Pudong District, Shanghai > > 徐汇区: Uxhui,Xuhui District, Shanghai > > 陆家嘴: Uljiazui,National Financial Center of the Yangtze River Economic Belt of China, Shanghai > > 武功: Uwgong, town of Shaanxi Province, the birthplace of farming civilization of the China; pun for Kongfu > > 乌镇: Uwzhen, yes, again > > 榆林: Uylin, City of China, Shaanxi province > > 无锡: Uwxi, City of China, Jiangsu province 玉溪: Uyxi, City of China, Yunnan province. > > 湖南:Uhnan, Hunan Province > > 鲁:Ul, the abbreviation of Shandong Province > > > > Thank you > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002706.html > > > > > > LIU Yulong > > > > > > ------------------ Original ------------------ > > From: "Doug Hellmann"; > > Date: Sun, Aug 4, 2019 08:48 AM > > To: "openstack-discuss"; > > Subject: [all][tc] U Cycle Naming Poll > > > > Every OpenStack development cycle and release has a code-name. As with everything we do, the process of choosing the name is open and based on input from communty members. The name critera are described in [1], and this time around we were looking for names starting with U associated with China. With some extra assistance from local community members (thank you to everyone who helped!), we have a list of candidate names that will go into the poll. Below is a subset of the names propsed, including those that meet the standard criteria and some of the suggestions that do not. Before we start the poll, the process calls for us to provide a period of 1 week so that any names removed from the proposals can be discussed and any last-minute objections can be raised. We will start the poll next week using this list, including any modifications based on that discussion. > > > > 乌镇镇 [GR]:Ujenn [PY]:Wuzhen https://en.wikipedia.org/wiki/Wuzhen > > 温州市 [GR]:Uanjou [PY]:Wenzhou https://en.wikipedia.org/wiki/Wenzhou > > 乌衣巷 [GR]:Ui [PY]:Wuyi https://en.wikipedia.org/wiki/Wuyi_Lane > > 温岭市 [GR]:Uanliing [PY]:Wenling https://en.wikipedia.org/wiki/Wenling > > 威海市 [GR]:Ueihae [PY]:Weihai https://en.wikipedia.org/wiki/Weihai > > 微山湖 [GR]:Ueishan [PY]:Weishan https://en.wikipedia.org/wiki/Nansi_Lake > > 乌苏里江 Ussri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūsūlǐ) > > 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. 
Pinyin would be Wūlánchábù) > > 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlánhàotè) > > 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > > 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūlātè) > > 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name. Pinyin would be Wūzhūmùqìn) > > Ula "Miocene Baogeda Ula" (the name is in Mongolian) > > Uma http://www.fallingrain.com/world/CH/20/Uma.html > > Unicorn > > Urban > > Unique > > Umpire > > Utopia > > Umbrella > > Ultimate > > > > [1] https://governance.openstack.org/tc/reference/release-naming.html > > From irenab.dev at gmail.com Mon Aug 5 20:38:15 2019 From: irenab.dev at gmail.com (Irena Berezovsky) Date: Mon, 5 Aug 2019 23:38:15 +0300 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: Good luck Michal! Thank you Daniel for leading the project, it was a pleasure to work with you. On Monday, August 5, 2019, Daniel Mellado wrote: > As I have taken on a new role in my company I won't be having the time > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > I started working on the project more than two cycles ago and it has > been a real pleasure for me. > > Helping a project grow from an idea and a set of diagrams to a > production-grade service was an awesome experience and I got help from > my awesome team and upstream contributors! > > I would like to take this opportunity to thank everyone who contributed > to the success of Kuryr – either by writing code, suggesting new use > cases, participating in our discussions, or helping out with Infra! > > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > knowledgeable about every piece of the code and is tightly connected to > the community. I will still be around to help if needed. > > Please join me congratulating Michal on his new role! > > Best! > > Daniel > > > [1] https://review.opendev.org/674624 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Aug 5 20:49:23 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 5 Aug 2019 20:49:23 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> Message-ID: <20190805204923.z7h4ei6w7n7v7azj@yuggoth.org> On 2019-08-05 15:41:30 -0400 (-0400), Jeremy Freudberg wrote: > Responding to myself with a different approach, as seen elsewhere > in the thread. [...] While I appreciate everyone's innovative ideas for coming up with additional names, the time for adding entries to the list has passed. We need a name for the U cycle, like, yesterday. Not having one is holding up a variety of governance and event planning tasks and generally making things more complicated for everyone involved. There were threads on this mailing list in February and April asking for help putting together a solution. People pitched in and the list Doug sent is what they came up with (minus some which we removed in advance for a variety of reasons). 
The remaining list still seems to have a number of viable options, and this is the phase where we're asking if anyone objects to any of what's there before we put together a community poll to rank them a few days from now. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From michael at the-davies.net Mon Aug 5 21:10:47 2019 From: michael at the-davies.net (Michael Davies) Date: Tue, 6 Aug 2019 06:40:47 +0930 Subject: stestr Python 2 Support In-Reply-To: <20190805182144.GA10102@zeong> References: <20190724131136.GA11582@sinanju.localdomain> <20190803001018.GA2352@thor.bakeyournoodle.com> <20190805182144.GA10102@zeong> Message-ID: Thanks Matt - you're a team player! On Tue, Aug 6, 2019 at 3:52 AM Matthew Treinish wrote: > On Sat, Aug 03, 2019 at 10:10:18AM +1000, Tony Breeds wrote: > > On Wed, Jul 24, 2019 at 09:11:36AM -0400, Matthew Treinish wrote: > > > Hi Everyone, > > > > > > I just wanted to send a quick update about the state of python 2 support in > > > stestr, since OpenStack is the largest user of the project. With the recent > > > release of stestr 2.4.0 we've officially deprecated the python 2.7 support > > > in stestr. It will emit a DeprecationWarning whenever it's run from the CLI > > > with python 2.7 now. The plan (which is not set in stone) is that we will be > > > pushing a 3.0.0 release that removes the python 2 support and compat code from > > > stestr sometime in early 2020 (definitely after the Python 2.7 EoL on Jan. 1st). > > > I don't believe this conflicts with the current Python version support > > > plans in OpenStack [1] but just wanted to make sure people were aware so that > > > there are no surprises when stestr stops working with Python 2 in 3.0.0. > > > > Thanks Matt. I know it's a little meta but if something really strange > > were to happen would you be open to doing 2.X.Y releases while we still > > have maintained branches that use it? > > Sure, I'm open to doing that if the need arises. The normal release process for > stestr doesn't involve branching. But, if there is a critical issue in 2.x.y > that requires a fix we can easily create a branch and push a bugfix release > when/if that happens. > > -Matt Treinish > -- Michael Davies michael at the-davies.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacob.anders.au at gmail.com Mon Aug 5 23:42:14 2019 From: jacob.anders.au at gmail.com (Jacob Anders) Date: Tue, 6 Aug 2019 09:42:14 +1000 Subject: [ironic] Moving to office hours as opposed to weekly meetings for the next month In-Reply-To: References: Message-ID: Hi Julia, Thank you for your email and apologies for the delayed response on my side. It is tricky indeed. I see two potential ways going forward: - going back to the weekly meeting convention and alternating between two time slots (similarly to what the Scientific SIG and Neutron folks do) - an additional "sync up" time for the APACs, as you suggested. It could be a smaller weekly meeting or just an agreed time window (or windows) when the APAC contributors can reach out to the key team members for direction etc. From my perspective the key bit is being able to reach out to someone who will be able to guide me on how best to go about the work packages I've taken up etc. What are your thoughts on these?
Best Regards, Jacob On Mon, Jul 29, 2019 at 9:56 PM Julia Kreger wrote: > Hi Jacob, > > Sorry for the delay. My hope was that APAC contributors would coalesce > around a time, but it really seems that has not happened, and I am > starting to think that the office hours experiment has not really > helped as there has not been a regular reminder each week. :( > > Happy to discuss more, but perhaps a establishing a dedicated APAC > sync-up meeting is what is required? > > Thoughts? > > -Julia > > On Wed, Jul 17, 2019 at 5:44 AM Jacob Anders > wrote: > > > > Hi Julia, > > > > Do we have more clarity regarding the second (APAC) session? I see the > polls have been open for some time, but haven't seen a mention of a > specific time. > > > > Thank you, > > Jacob > > > > On Wed, Jul 3, 2019 at 9:39 AM Julia Kreger > wrote: > >> > >> Greetings Everyone! > >> > >> This week, during the weekly meeting, we seemed to reach consensus that > >> we would try taking a break from meetings[0] and moving to orienting > >> around using the mailing list[1] and our etherpad "whiteboard" [2]. > >> With this, we're going to want to re-evaluate in about a month. > >> I suspect it would be a good time for us to have a "mid-cycle" style > >> set of topical calls. I've gone ahead and created a poll to try and > >> identify a couple days that might be ideal for contributors[3]. > >> > >> But in the mean time, we want to ensure that we have some times for > >> office hours. The suggestion was also made during this week's meeting > >> that we may want to make the office hours window a little larger to > >> enable more discussion. > >> > >> So when will we have office hours? > >> ---------------------------------- > >> > >> Ideally we'll start with two time windows. One to provide coverage > >> to US and Europe friendly time zones, and another for APAC contributors. > >> > >> * I think 2-4 PM UTC on Mondays would be ideal. This translates to > >> 7-9 AM US-Pacific or 10 AM to 12 PM US-Eastern. > >> * We need to determine a time window that would be ideal for APAC > >> contributors. I've created a poll to help facilitate discussion[4]. > >> > >> So what is Office Hours? > >> ------------------------ > >> > >> Office hours are a time window when we expect some contributors to be > >> on IRC and able to partake in higher bandwidth discussions. > >> These times are not absolute. They can change and evolve, > >> and that is the most important thing for us to keep in mind. > >> > >> -- > >> > >> If there are any questions, Please let me know! > >> Otherwise I'll send a summary email out on next Monday. > >> > >> -Julia > >> > >> [0]: > http://eavesdrop.openstack.org/meetings/ironic/2019/ironic.2019-07-01-15.00.log.html#l-123 > >> [1]: > http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007038.html > >> [2]: https://etherpad.openstack.org/p/IronicWhiteBoard > >> [3]: https://doodle.com/poll/652gzta6svsda343 > >> [4]: https://doodle.com/poll/2ta5vbskytpntmgv > >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Tue Aug 6 00:15:47 2019 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 6 Aug 2019 08:15:47 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <20190805204923.z7h4ei6w7n7v7azj@yuggoth.org> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <20190805204923.z7h4ei6w7n7v7azj@yuggoth.org> Message-ID: agree with Jeremy and Doug, let's stick with the list (it already has an amazing list of names of beautiful cities/places in China) On Tue, Aug 6, 2019 at 4:53 AM Jeremy Stanley wrote: > On 2019-08-05 15:41:30 -0400 (-0400), Jeremy Freudberg wrote: > > Responding to myself with a different approach, as seen elsewhere > > in the thread. > [...] > > While I appreciate everyone's innovative ideas for coming up with > additional names, the time for adding entries to the list has > passed. We need a name for the U cycle, like, yesterday. Not having > one is holding up a variety of governance and event planning tasks > and generally making things more complicated for everyone involved. > There were threads on this mailing list in February and April asking > for help putting together a solution. People pitched in and the list > Doug sent is what they came up with (minus some which we removed in > advance for a variety of reasons). The remaining list still seems to > have a number of viable options, and this is the phase where we're > asking if anyone objects to any of what's there before we put > together a community poll to rank them a few days from now. > -- > Jeremy Stanley > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Aug 6 02:52:36 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 5 Aug 2019 22:52:36 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> Message-ID: On 5/08/19 3:32 PM, Doug Hellmann wrote: > Unicorn > Urban > Unique > Umpire > Utopia > umbrella > ultimate These names are the ones that don't meet the criteria, which should mean that by default they're not included in the poll. The TC has the discretion to include one or more of them if we think they're exceptionally good. Candidly, I don't think any of them are. None of them, with the questionable exception of 'Urban', have any relation to Shanghai that I am aware of. And there's no shortage of decent local names on the list, even after applying questionable criteria on which sorts of words beginning with W in the Pinyin system are acceptable to use other transliterations on. So to be clear, I expect the TC to get a vote on whether any names not meeting the criteria are added to the poll, and I am personally inclined to vote -1. 
- ZB From rico.lin.guanyu at gmail.com Tue Aug 6 03:11:31 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 6 Aug 2019 11:11:31 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <3F0801EC-9EDA-4AC0-A12F-7B2FF30952B0@doughellmann.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <3F0801EC-9EDA-4AC0-A12F-7B2FF30952B0@doughellmann.com> Message-ID: On Mon, Aug 5, 2019 at 8:22 PM Doug Hellmann wrote: > > As for all below geographic options, most of them originally from different languages like Mongolian or Russian, so generally speaking, most people won't use Pingyi system for that name. And I don't think it helps to put it's Pinyin on top too. > > Are you saying we should not include any of these names either, or just that when we present the poll we should not include the Pinyin spelling? > Just to clarify here: I mean we should not include Pinyin for these options -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Aug 6 03:40:14 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 6 Aug 2019 11:40:14 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com> Message-ID: On Tue, Aug 6, 2019 at 10:56 AM Zane Bitter wrote: > > So to be clear, I expect the TC to get a vote on whether any names not > meeting the criteria are added to the poll, and I am personally inclined > to vote -1. +1. Since we still have some days before we officially start the poll, we can run a quick internal poll among the TC via IRC/CIVS to finalize the list. So maybe during office hours on Thursday? > > - ZB > -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Tue Aug 6 03:41:02 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 5 Aug 2019 22:41:02 -0500 Subject: [cinder] [3rd party ci] Deadline Has Passed for Python3 Migration In-Reply-To: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> References: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> Message-ID: <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> All, This e-mail has multiple purposes.  First, I have expanded the mail audience to go beyond just openstack-discuss to a mailing list I have created for all 3rd Party CI Maintainers associated with Cinder.  I apologize to those of you who are getting this as a duplicate e-mail. For all 3rd Party CI maintainers who have already migrated your systems to using Python3.7...Thank you!  We appreciate you keeping up-to-date with Cinder's requirements and maintaining your CI systems. If this is the first time you are hearing of the Python3.7 requirement, please continue reading. It has been decided by the OpenStack TC that support for Py2.7 would be deprecated [1].  The Train development cycle is the last cycle that will support Py2.7, and therefore all vendor drivers need to demonstrate support for Py3.7. It was discussed at the Train PTG that we would require all 3rd Party CIs to be running using Python3 by Train milestone 2 [2].  We have been communicating the importance of getting 3rd Party CI running with py3 in meetings and e-mail for quite some time now, but it still appears that nearly half of all vendors are not yet running with Python 3.
[3] If you are a vendor who has not yet moved to using Python 3, please take some time to review this document [4], as it has guidance on how to get your CI system updated.  It also includes some additional details as to why this requirement has been set and the associated background.  Also, please update the py3-ci-review etherpad with notes indicating that you are working on adding py3 support. I would also ask all vendors to review the etherpad I have created, as it indicates a number of other drivers that have been marked unsupported due to CI systems not running properly.  If you are not planning to continue to support a driver, adding such a note in the etherpad would be appreciated. Thanks! Jay [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008255.html [2] https://wiki.openstack.org/wiki/CinderTrainSummitandPTGSummary#3rd_Party_CI [3] https://etherpad.openstack.org/p/cinder-py3-ci-review [4] https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update From gmann at ghanshyammann.com Tue Aug 6 04:33:09 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Aug 2019 13:33:09 +0900 Subject: [kuryr] [ptl] [tc] Stepping down as Kuryr PTL In-Reply-To: References: Message-ID: <16c6533cc55.acfc25ea739772.5697560946632489083@ghanshyammann.com> ---- On Mon, 05 Aug 2019 23:49:20 +0900 Daniel Mellado wrote ---- > As I have taken on a new role in my company I won't be having the time > to dedicate to Kuryr in order to keep as the PTL for the current cycle. > > I started working on the project more than two cycles ago and it has > been a real pleasure for me. > > Helping a project grow from an idea and a set of diagrams to a > production-grade service was an awesome experience and I got help from > my awesome team and upstream contributors! > > I would like to take this opportunity to thank everyone who contributed > to the success of Kuryr – either by writing code, suggesting new use > cases, participating in our discussions, or helping out with Infra! > > Michal Dulko (irc: dulek) has been kind enough to accept replacing me as > the new Kuryr PTL [1]. I’m sure he'll make an excellent work as he's > knowledgeable about every piece of the code and is tightly connected to > the community. I will still be around to help if needed. > > Please join me congratulating Michal on his new role! Thanks Daniel for all your hard work and leadership in the Kuryr project. Congrats and best of luck, Michal, in your new role. Feel free to reach out to the TC anytime you need help. -gmann > > Best! > > Daniel > > > [1] https://review.opendev.org/674624 > > From gmann at ghanshyammann.com Tue Aug 6 05:13:32 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Aug 2019 14:13:32 +0900 Subject: [tc][uc][all] Starting goal selection for U series Message-ID: <16c6558c543.f38f0196740077.4057984990144645350@ghanshyammann.com> Hello everyone, We are in the R10 week of the Train cycle and not so far from the start of the U cycle. It's time to start the discussions about community-wide goal ideas for the U series. We are a little late starting this thread for the U series; it usually happens during the Summit Forum. But it is not mandatory to wait for a f2f meetup to kick off the goal discussions; we can always start the same via the ML or ad-hoc meetings. During the Shanghai Summit Forum, we will be having the f2f discussion for U (continuing the discussion from the ML threads) as well as for V cycle goal ideas.
Community-wide goals are important in terms of solving and improving a technical area across OpenStack as a whole. They have many more benefits from a user's as well as a developer's perspective. See [1] for more details about community-wide goals and the process. Also, you can refer to the backlog of community-wide goals[2] and the Train cycle goals[3]. If you are interested in proposing a goal, please write down the idea on this etherpad[4] - https://etherpad.openstack.org/p/PVG-u-series-goals Accordingly, we will start a separate ML discussion for each goal idea. [1] https://governance.openstack.org/tc/goals/index.html [2] https://etherpad.openstack.org/p/community-goals [3] https://etherpad.openstack.org/p/BER-t-series-goals [4] https://etherpad.openstack.org/p/PVG-u-series-goals -gmann From berndbausch at gmail.com Tue Aug 6 05:48:42 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 6 Aug 2019 14:48:42 +0900 Subject: [telemetry] Gnocchi: Aggregates Operation Syntax In-Reply-To: References: Message-ID: Yes, aggregate syntax documentation has room for improvement. However, Gnocchi's API documentation has a rather useful list of supported operations at https://gnocchi.xyz/rest.html#list-of-supported-operations. See also my recent issue https://github.com/gnocchixyz/gnocchi/issues/1044, which helped me understand how aggregation works in Gnocchi. Note that you can write an aggregate operation as a string using prefix notation, or as a JSON structure. On the command line, the string version is easier to use in my opinion. Regarding your use case, allow me to focus on CPU. Ceilometer's /cpu/ metric accumulates the nanoseconds an instance consumes. Try /max/ aggregation to look at the CPU usage of a single instance:     gnocchi measures show --aggregation max --resource-id SERVER_UUID cpu which is equivalent to     gnocchi aggregates '(metric cpu max)' id=SERVER_UUID then use /sum/ aggregation over all instances of a project:     gnocchi aggregates '(aggregate sum (metric cpu max))' project_id=PROJECT_UUID You can even divide the figures by one billion, which converts nanoseconds to seconds:     gnocchi aggregates '(/ (aggregate sum (metric cpu max)) 1000000000)' project_id=PROJECT_UUID If that works, it should not be too hard to do something equivalent for memory and storage. Bernd. On 8/5/2019 5:43 PM, Blom, Merlin, NMU-OI wrote: > > Hey, > > I would like to aggregate data from the gnocchi database by using the > gnocchi aggregates function of the CLI/API > > The documentation does not cover the operations that are available nor > the syntax that has to be used: > > https://gnocchi.xyz/gnocchiclient/shell.html?highlight=reaggregation#aggregates > > Searching for more information I found a GitHub Issue: > > https://github.com/gnocchixyz/gnocchi/issues/393 > > But I cannot use the syntax from that either. > > *My use case:* > > I want to aggregate the vcpus hours per month, vram hours per month, … > per server or project. > > -when an instance is stopped only storage is counted > > -the exact usage is used, e.g. 2 vcpus between the 1st and 7th day, 4 vcpus > between the 8th and the last day of the month; no mean calculations > > Do you have detailed documentation about the gnocchi Aggregates > Operation Syntax? > > Do you have complex examples for gnocchi aggregations?
Especially when > using the python bindings: > > /conn_gnocchi.metric.aggregation(metrics="memory", query=[XXXXXXXX], resource_type='instance', groupby='original_resource_id')/ > > Can you give me advice regarding my use case? Do's and don'ts… > > Thank you for your help in advance! > > Merlin Blom > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Tue Aug 6 05:59:55 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 1 Aug 2019 10:16:25 +0900 Subject: [telemetry][ceilometer][gnocchi] How to configure aggregate for cpu_util or calculate from metrics In-Reply-To: References: <14ff728c-f19e-e869-90b1-4ff37f7170af@suse.com> <20AC2324-24B6-40D1-A0A4-0382BCE430A7@cern.ch> <48533933-1443-6ad3-9cf1-940ac4d52d6f@dantalion.nl> Message-ID: Thanks much, Sumit. I did not detect your reply until now. It is still hard to keep up with the openstack-discuss mailing list. In the meantime, I have made a lot of progress and understand how Ceilometer creates its own archive policies and adds resources and their metrics to Gnocchi - based on gnocchi_resources.yaml, as you correctly remarked. Thanks to help from the Gnocchi team, I also know how to generate CPU utilization figures. See this issue on Github if you are interested: https://github.com/gnocchixyz/gnocchi/issues/1044. My ultimate goal is autoscaling based on CPU utilization. I have not solved that problem yet, but it's a different question. One question at a time! Thanks again, this immediate question is answered. Bernd. On 8/1/2019 8:20 PM, Sumit Jamgade wrote: > Hey Bernd, > > Can you try with just one publisher instead of 2 and also drop the > archive_policy query parameter and its value. > > Then ceilometer should publish metrics based on map defined > gnocchi_resources.yaml > > And while you are at it. Could you post a list of archive policies > already defined in gnocchi, I believe this list should match what > is listed in gnocchi_resources.yaml. > > Hope that helps > Sumit > > > > On 7/31/19 3:22 AM, Bernd Bausch wrote: >> The message at the end of this email is some three months old. I have >> the same problem. The question is: *How to use the new rate metrics in >> Gnocchi.* I am using a Stein Devstack for my tests. >> >> For example, I need the CPU rate, formerly named /cpu_util/.
I created >> a new archive policy that uses /rate:mean/ aggregation and has a 1 >> minute granularity: >> >> $ gnocchi archive-policy show ceilometer-medium-rate >> +---------------------+------------------------------------------------------------------+ >> | Field               | >> Value                                                            | >> +---------------------+------------------------------------------------------------------+ >> | aggregation_methods | rate:mean, >> mean                                                  | >> | back_window         | >> 0                                                                | >> | definition          | - points: 10080, granularity: 0:01:00, >> timespan: 7 days, 0:00:00 | >> | name                | >> ceilometer-medium-rate                                           | >> +---------------------+------------------------------------------------------------------+ >> >> I added the new policy to the publishers in /pipeline.yaml/: >> >> $ tail -n5 /etc/ceilometer/pipeline.yaml >> sinks: >>     - name: meter_sink >>       publishers: >>           - gnocchi://?archive_policy=medium&filter_project=gnocchi_swift >>           *- >> gnocchi://?archive_policy=ceilometer-medium-rate&filter_project=gnocchi_swift* >> >> After restarting all of Ceilometer, my hope was that the CPU rate >> would magically appear in the metric list. But no: All metrics are >> linked to archive policy /medium/, and looking at the details of an >> instance, I don't detect anything rate-related: >> >> $ gnocchi resource show ae3659d6-8998-44ae-a494-5248adbebe11 >> +-----------------------+---------------------------------------------------------------------+ >> | Field                 | >> Value                                                               | >> +-----------------------+---------------------------------------------------------------------+ >> ... >> | metrics               | compute.instance.booting.time: >> 76fac1f5-962e-4ff2-8790-1f497c99c17d | >> |                       | cpu: >> af930d9a-a218-4230-b729-fee7e3796944                           | >> |                       | disk.ephemeral.size: >> 0e838da3-f78f-46bf-aefb-aeddf5ff3a80           | >> |                       | disk.root.size: >> 5b971bbf-e0de-4e23-ba50-a4a9bf7dfe6e                | >> |                       | memory.resident: >> 09efd98d-c848-4379-ad89-f46ec526c183               | >> |                       | memory.swap.in: >> 1bb4bb3c-e40a-4810-997a-295b2fe2d5eb                | >> |                       | memory.swap.out: >> 4d012697-1d89-4794-af29-61c01c925bb4               | >> |                       | memory.usage: >> 93eab625-0def-4780-9310-eceff46aab7b                  | >> |                       | memory: >> ea8f2152-09bd-4aac-bea5-fa8d4e72bbb1                        | >> |                       | vcpus: >> e1c5acaf-1b10-4d34-98b5-3ad16de57a98                         | >> | original_resource_id  | >> ae3659d6-8998-44ae-a494-5248adbebe11                                | >> ... >> >> | type                  | >> instance                                                            | >> | user_id               | >> a9c935f52e5540fc9befae7f91b4b3ae                                    | >> +-----------------------+---------------------------------------------------------------------+ >> >> Obviously, I am missing something. Where is the missing link? What do >> I have to do to get CPU usage rates? Do I have to create metrics? >> Do//I have to ask Ceilometer to create metrics? How? 
>> Right now, no instructions seem to exist at all. If that is correct, I would be happy to write documentation once I understand how it works. >> >> Thanks a lot. >> >> Bernd >> >> On 5/10/2019 3:49 PM, info at dantalion.nl wrote: >>> Hello, >>> >>> I am working on Watcher and we are currently changing how metrics are retrieved from different datasources such as Monasca or Gnocchi. Because of this major overhaul I would like to validate that everything is working correctly. >>> >>> Almost all of the optimization strategies in Watcher require the cpu utilization of an instance as metric but with newer versions of Ceilometer this has become unavailable. >>> >>> On IRC I received the information that Gnocchi could be used to configure an aggregate and this aggregate would then report cpu utilization, however, I have been unable to find documentation on how to achieve this. >>> >>> I was also notified that cpu_util is something that could be computed from other metrics. When reading https://docs.openstack.org/ceilometer/rocky/admin/telemetry-measurements.html#openstack-compute the documentation seems to agree on this as it states that cpu_util is measured by using a 'rate of change' transformer. But I have not been able to find how this can be computed. >>> >>> I was hoping someone could spare the time to provide documentation or information on how this currently is best achieved. >>> >>> Kind Regards, >>> Corne Lukken (Dantali0n) From gmann at ghanshyammann.com Tue Aug 6 06:48:58 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 06 Aug 2019 15:48:58 +0900 Subject: [goals][IPv6-Only Deployments and Testing] Week R-11 Update Message-ID: <16c65b023b0.125a44579741777.2847331246825225821@ghanshyammann.com> Hello Everyone, Below is the progress on the IPv6 goal during the R11 week. As a first step, I am preparing the IPv6 jobs for the projects having zuulv3 jobs. The projects having zuulv2 jobs will be my second take. It seems there is a lot more work required for IPv6 deployment and testing than initially expected: 11 out of the initial 17 projects are failing on "IPv6-only as listen address". Summary: # of projects with IPv6 jobs proposed: 17 # of passing projects: 6 # of failing projects: 11 Storyboard: ========= - https://storyboard.openstack.org/#!/story/2005477 Current status: ============ 1. The base jobs 'devstack-tempest-ipv6' and 'tempest-ipv6-only' are merged. 2. 'tempest-ipv6-only' has been proposed to run on the gate side of 5 service projects [2]. 3. Neutron stadium project jobs have been prepared (at first, only for projects having zuulv3 jobs). 4. New projects' IPv6 job patches and status: - Congress: link: https://review.opendev.org/#/c/671908/ status: job is failing; I will ping Eric to help on debugging. - Monasca: links: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged)+projects:openstack/monasca status: jobs are failing. I fixed some of the IPv6 parsing on the monasca-api and monasca-notification side, but it seems Kafka addresses still have an issue with IPv6. - Murano: link: https://review.opendev.org/#/c/673291/ status: job is failing. I have not debugged it; I need the project's help to tell whether this is a bug or it is failing for some other reason. - cloudkitty: link: https://review.opendev.org/#/c/671909/ status: job is passing. - Qinling: link: https://review.opendev.org/#/c/673506/ status: job is failing; need to debug this.
- networking-odl: link: https://review.opendev.org/#/c/673501/ status: working with Lajos to define the job on top of their new zuulv3 job set. - networking-ovn: link: https://review.opendev.org/#/c/673488/ status: job is failing; Lucas and Brian are already checking it on the review. I will debug it with them. IPv6 missing support found: ===================== 1. https://review.opendev.org/#/c/673397/ 2. https://review.opendev.org/#/c/673449/ 3. https://review.opendev.org/#/c/673266/ How you can help: ============== - Each project needs to look for and review its IPv6 job patch. - Verify it works fine on IPv6 and that no IPv4 is used in the conf, etc. - Any other specific scenarios need to be added as part of the project's IPv6 verification. - Help debug and fix the bugs where an IPv6 job is failing. Everything related to this goal can be found under this topic: Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) How to define and run new IPv6 Job on project side: ======================================= - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing - I am adding this wiki info in the goal doc also[4]. Review suggestion: ============== - The main goal of these jobs is to check whether your service is able to listen on IPv6 and can communicate with any other services (OpenStack, the DB, RabbitMQ, etc.) over IPv6. So check your proposed job from that point of view. If anything is missing, comment on the patch. - One example: I missed configuring the novnc address for IPv6 - https://review.opendev.org/#/c/672493/ - The base script that is part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and set some devstack variables. But if your project needs more specific verification, it can be added to the project-side job as post-run playbooks, as described in the wiki page[5]. [1] https://review.opendev.org/#/c/671231/ [2] https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) [3] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2019-07-26.log.html#t2019-07-26T08:56:20 [4] https://review.opendev.org/#/c/671898/ [5] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing -gmann From dangtrinhnt at gmail.com Tue Aug 6 07:49:15 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 6 Aug 2019 16:49:15 +0900 Subject: [OpenStack Infra] Current documentation of OpenStack CI/CD architecture Message-ID: Hi, Are there any documents somewhere describing the current architecture of the CI/CD system that the OpenStack Infrastructure team is running? Best regards, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From merlin.blom at bertelsmann.de Tue Aug 6 08:21:01 2019 From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI) Date: Tue, 6 Aug 2019 08:21:01 +0000 Subject: [telemetry] Gnocchi: Aggregates Operation Syntax Message-ID: Thanks Bernd, now I understand the new aggregation syntax. :-) In my setup max is not a valid aggregation method, so I would use mean instead. The problem with my use case is that I don't want the actual CPU time used (in ns), but the reserved vCPUs per hour. Using the vcpus metric alone is not an option, because it does not reflect the status of the VM: when you shut down the VM, it is recorded anyway. So I decided to correlate the cpu metric with the vcpus metric. For that I have to write my own scripts; a rough sketch of the idea follows below. Thanks again for your supportive answer!
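In case it saves someone the same exercise, here is a rough shell sketch of that correlation (hypothetical UUID; it assumes both metrics are archived at a common granularity, e.g. 3600 s, and counts an hour as "reserved" only when the cpu counter advanced, i.e. the instance actually ran):

    # hourly series for both metrics, as "timestamp granularity value" rows
    gnocchi measures show --aggregation mean --granularity 3600 --resource-id SERVER_UUID vcpus -f value > vcpus.txt
    gnocchi measures show --aggregation mean --granularity 3600 --resource-id SERVER_UUID cpu -f value > cpu.txt

    # join on timestamp; add the hour's vcpus only if the cpu time advanced in that hour
    awk 'NR==FNR {cpu[$1]=$3; next}
         ($1 in cpu) && cpu[$1] > last {vcpuh += $3}
         ($1 in cpu) {last = cpu[$1]}
         END {printf "%.0f reserved vCPU-hours\n", vcpuh}' cpu.txt vcpus.txt

This is only a sketch: gaps in either series (e.g. while the instance is shut off and compute polling produces no cpu samples) and the very first sample need extra handling in a real script.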
Merlin From: Bernd Bausch Sent: Tuesday, August 6, 2019 07:49 To: openstack-discuss at lists.openstack.org Subject: Re: [telemetry] Gnocchi: Aggregates Operation Syntax Yes, aggregate syntax documentation has room for improvement. However, Gnocchi's API documentation has a rather useful list of supported operations at https://gnocchi.xyz/rest.html#list-of-supported-operations. See also my recent issue https://github.com/gnocchixyz/gnocchi/issues/1044, which helped me understand how aggregation works in Gnocchi. Note that you can write an aggregate operation as a string using prefix notation, or as a JSON structure. On the command line, the string version is easier to use in my opinion. Regarding your use case, allow me to focus on CPU. Ceilometer's cpu metric accumulates the nanoseconds an instance consumes. Try max aggregation to look at the CPU usage of a single instance: gnocchi measures show --aggregation max --resource-id SERVER_UUID cpu which is equivalent to gnocchi aggregates '(metric cpu max)' id=SERVER_UUID then use sum aggregation over all instances of a project: gnocchi aggregates '(aggregate sum (metric cpu max))' project_id=PROJECT_UUID You can even divide the figures by one billion, which converts nanoseconds to seconds: gnocchi aggregates '(/ (aggregate sum (metric cpu max)) 1000000000)' project_id=PROJECT_UUID If that works, it should not be too hard to do something equivalent for memory and storage. Bernd. On 8/5/2019 5:43 PM, Blom, Merlin, NMU-OI wrote: Hey, I would like to aggregate data from the gnocchi database by using the gnocchi aggregates function of the CLI/API. The documentation does not cover the operations that are available nor the syntax that has to be used: https://gnocchi.xyz/gnocchiclient/shell.html?highlight=reaggregation#aggregates Searching for more information I found a GitHub Issue: https://github.com/gnocchixyz/gnocchi/issues/393 But I cannot use the syntax from that either. My use case: I want to aggregate the vcpus hours per month, vram hours per month, … per server or project. - when an instance is stopped only storage is counted - the exact usage is used, e.g. 2 vcpus between the 1st and 7th day, 4 vcpus between the 8th and the last day of the month; no mean calculations Do you have detailed documentation about the gnocchi Aggregates Operation Syntax? Do you have complex examples for gnocchi aggregations? Especially when
Thank you for your help in advance!
Merlin Blom

From thierry at openstack.org Tue Aug 6 09:12:19 2019
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 6 Aug 2019 11:12:19 +0200
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <49B6C47E-05C1-4E63-A739-7B91ADCEF04C@doughellmann.com>
Message-ID: <20675865-2d74-014c-8f7e-3129343b3cb4@openstack.org>

Zane Bitter wrote:
> [...]
> So to be clear, I expect the TC to get a vote on whether any names not
> meeting the criteria are added to the poll, and I am personally inclined
> to vote -1.

That sounds fair. We can do a quick vote at the TC meeting this week (or earlier on the channel).

For me, only two names feel sufficiently compelling: the above-mentioned 'Urban' (Shanghai is definitely urban), and 'Unicorn' to celebrate how difficult it was to find a name this time around. That said, I'd only support the addition of 'Urban', because 'Unicorn' would likely win if added, passing on the opportunity to get a name that is more representative of China.

--
Thierry Carrez (ttx)

From anlin.kong at gmail.com Tue Aug 6 09:46:15 2019
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Tue, 6 Aug 2019 21:46:15 +1200
Subject: [telemetry] ceilometer: Octavia Loadbalancer
In-Reply-To: References: Message-ID:

The link you mentioned is related to LBaaS v2, not to Octavia. Currently, I don't think there is any pollster for Octavia in upstream Ceilometer.

Best regards,
Lingxian Kong
Catalyst Cloud

On Tue, Aug 6, 2019 at 9:02 PM Blom, Merlin, NMU-OI <merlin.blom at bertelsmann.de> wrote:

> Hey,
>
> Does anybody have experience with amphora/Octavia load balancers in
> Ceilometer/Gnocchi with OpenStack-Ansible on Stein?
>
> The metrics for load balancing are not pushed into Gnocchi:
> https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html#load-balancer-as-a-service-lbaas-v2
> Looking at the amphora instances directly showed that the instances in
> the service tenant don't show up in Gnocchi either.
>
> The Octavia integration in neutron is deactivated because it is
> deprecated.
>
> Is there a workaround?
> Maybe creating the measurements via a custom polling script?
>
> Thank you for your help in advance!
> Merlin Blom

From florian.engelmann at everyware.ch Tue Aug 6 10:47:10 2019
From: florian.engelmann at everyware.ch (Engelmann Florian)
Date: Tue, 6 Aug 2019 10:47:10 +0000
Subject: [nova] edit flavor
Message-ID:

Hi,

I would like to edit flavors which we created with 0 disk size. Flavors with a disk size (root volume) of 0 are not handled well by, e.g., Magnum. So I would like to add a root disk size to those flavors. The default procedure is to delete and recreate them, but that's not what I want. I would like to edit them, e.g. in the database. Is there any possible impact on running instances using those flavors? I guess resize will work?

All the best,
Florian
From radoslaw.piliszek at gmail.com Tue Aug 6 11:12:15 2019
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Tue, 6 Aug 2019 13:12:15 +0200
Subject: [keystone] [stein] user_enabled_emulation config problem
In-Reply-To: References: Message-ID:

Hello all,

I investigated the case. My issue arises from group_members_are_ids being ignored for user_enabled_emulation_use_group_config.

I reported a bug in keystone: https://bugs.launchpad.net/keystone/+bug/1839133 and will submit a patch. Hopefully it helps someone else as well.

Kind regards,
Radek

On Sat, 3 Aug 2019 at 20:56, Radosław Piliszek wrote:

> Hello all,
>
> I have an issue using user_enabled_emulation with my LDAP solution.
>
> I set:
> user_tree_dn = ou=Users,o=UCO
> user_objectclass = inetOrgPerson
> user_id_attribute = uid
> user_name_attribute = uid
> user_enabled_emulation = true
> user_enabled_emulation_dn = cn=Users,ou=Groups,o=UCO
> user_enabled_emulation_use_group_config = true
> group_tree_dn = ou=Groups,o=UCO
> group_objectclass = posixGroup
> group_id_attribute = cn
> group_name_attribute = cn
> group_member_attribute = memberUid
> group_members_are_ids = true
>
> Keystone properly lists members of the Users group, but they all remain
> disabled. Did I misinterpret something?
>
> Kind regards,
> Radek

From smooney at redhat.com Tue Aug 6 11:38:44 2019
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 06 Aug 2019 12:38:44 +0100
Subject: [nova] edit flavor
In-Reply-To: References: Message-ID:

On Tue, 2019-08-06 at 10:47 +0000, Engelmann Florian wrote:
> Hi,
>
> I would like to edit flavors which we created with 0 disk size. Flavors with a disk size (root volume) of 0 are not
> handled well by, e.g., Magnum. So I would like to add a root disk size to those flavors. The default procedure is to
> delete and recreate them, but that's not what I want. I would like to edit them, e.g. in the database. Is there any
> possible impact on running instances using those flavors? I guess resize will work?

Flavors with disk 0 are intended to be used with boot-from-volume guests. If Magnum does not support boot-from-volume guests, then it should not use those flavors.

If you edit a flavor in the DB, it will not update the embedded flavor in the instance record. It may result in hosts being oversubscribed, and it will not update the allocations in placement to reflect the new size.

A resize should fix that. But if you are using boot from volume, be aware that the instance will be scheduled based on the local disk space available on the compute nodes, so it will not work as expected in that case. Your best solution, assuming you are using local storage, is to create a new flavor and do a resize.
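For example, something like this - illustrative only, the flavor name and sizes are made up, and from memory the resize confirmation flag may differ between client versions:

    openstack flavor create --vcpus 4 --ram 8192 --disk 40 m1.large.40g
    openstack server resize --flavor m1.large.40g <server>
    openstack server resize --confirm <server>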
>
> All the best,
> Florian

From mark at stackhpc.com Tue Aug 6 12:18:45 2019
From: mark at stackhpc.com (Mark Goddard)
Date: Tue, 6 Aug 2019 13:18:45 +0100
Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached.
In-Reply-To: References: Message-ID:

On Thu, 18 Jul 2019 at 09:54, Eddie Yen wrote:

> Hi everyone, I met an issue when trying to evacuate a host.
> The platform is stable/rocky, deployed using kolla-ansible,
> and all storage backends are connected to Ceph.
>
> Before I tried to evacuate the host, the source host had about 24 VMs running.
> When I shut down the node and executed the evacuation, a few VMs failed to evacuate.
> The error code is 504.
> Strangely, those VMs all have their own volume attached.
>
> I then checked the nova-compute log; a detailed error is pasted at the link below:
> https://pastebin.com/uaE7YrP1
>
> Does anyone have any experience with this? I googled but found no useful
> information about this.
>
> Thanks!

Gateway timeout suggests the server timeout in haproxy is too low, and the server (cinder-api) has not responded to the request in time. The default timeout is 60s, and is configured via haproxy_server_timeout (and possibly haproxy_client_timeout). You could try increasing this in globals.yml. We do use a larger timeout for glance-api (haproxy_glance_api_client_timeout and haproxy_glance_api_server_timeout, both 6h). Perhaps we need something similar for cinder-api.
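For instance (an untested illustration; pick values that make sense for your deployment):

    # globals.yml
    haproxy_client_timeout: "10m"
    haproxy_server_timeout: "10m"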
From juliaashleykreger at gmail.com Tue Aug 6 12:45:20 2019
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Tue, 6 Aug 2019 08:45:20 -0400
Subject: [ironic] Moving to office hours as opposed to weekly meetings for the next month
In-Reply-To: References: Message-ID:

Indeed this is tricky. :( Is there a magic easy button out there?

Anyway! I think the weekly meeting serves as a good checkpoint. Almost an interrupt that forces people to stop and context switch to the meeting. I think if APAC contributors are able to agree on a mutual time that would be better or work for them, then I think we can consider alternating, or try to perform an additional APAC-focused sync time if we know a good time for it.

Contributor availability being the key, without knowing a time, it is a little difficult to determine a forward path. :( Want to just arbitrarily toss out a time and see how that looks?

-Julia

On Mon, Aug 5, 2019 at 7:42 PM Jacob Anders wrote:
>
> Hi Julia,
>
> Thank you for your email and apologies for the delayed response on my side.
>
> It is tricky indeed. I see two potential ways going forward:
>
> - going back to the weekly meeting convention and alternating between two time slots (similarly to what the Scientific SIG and Neutron folks do)
> - an additional "sync up" time for the APACs, as you suggested. It could be a smaller weekly meeting or just an agreed time window (or windows) when the APAC contributors can reach out to the key team members for direction etc. From my perspective the key bit is being able to reach out to someone who will be able to guide me on how best to go about the work packages I've taken up etc.
>
> What are your thoughts on these?
>
> Best Regards,
> Jacob
>
> On Mon, Jul 29, 2019 at 9:56 PM Julia Kreger wrote:
>>
>> Hi Jacob,
>>
>> Sorry for the delay. My hope was that APAC contributors would coalesce
>> around a time, but it really seems that has not happened, and I am
>> starting to think that the office hours experiment has not really
>> helped as there has not been a regular reminder each week. :(
>>
>> Happy to discuss more, but perhaps establishing a dedicated APAC
>> sync-up meeting is what is required?
>>
>> Thoughts?
>>
>> -Julia
>>
>> On Wed, Jul 17, 2019 at 5:44 AM Jacob Anders wrote:
>> >
>> > Hi Julia,
>> >
>> > Do we have more clarity regarding the second (APAC) session? I see the polls have been open for some time, but haven't seen a mention of a specific time.
>> >
>> > Thank you,
>> > Jacob
>> >
>> > On Wed, Jul 3, 2019 at 9:39 AM Julia Kreger wrote:
>> >>
>> >> Greetings Everyone!
>> >>
>> >> This week, during the weekly meeting, we seemed to reach consensus that
>> >> we would try taking a break from meetings[0] and moving to orienting
>> >> around using the mailing list[1] and our etherpad "whiteboard"[2].
>> >> With this, we're going to want to re-evaluate in about a month.
>> >> I suspect it would be a good time for us to have a "mid-cycle" style
>> >> set of topical calls. I've gone ahead and created a poll to try and
>> >> identify a couple of days that might be ideal for contributors[3].
>> >>
>> >> But in the meantime, we want to ensure that we have some times for
>> >> office hours. The suggestion was also made during this week's meeting
>> >> that we may want to make the office hours window a little larger to
>> >> enable more discussion.
>> >>
>> >> So when will we have office hours?
>> >> ----------------------------------
>> >>
>> >> Ideally we'll start with two time windows. One to provide coverage
>> >> for US- and Europe-friendly time zones, and another for APAC contributors.
>> >>
>> >> * I think 2-4 PM UTC on Mondays would be ideal. This translates to
>> >> 7-9 AM US-Pacific or 10 AM to 12 PM US-Eastern.
>> >> * We need to determine a time window that would be ideal for APAC
>> >> contributors. I've created a poll to help facilitate discussion[4].
>> >>
>> >> So what is Office Hours?
>> >> ------------------------
>> >>
>> >> Office hours are a time window when we expect some contributors to be
>> >> on IRC and able to partake in higher-bandwidth discussions.
>> >> These times are not absolute. They can change and evolve,
>> >> and that is the most important thing for us to keep in mind.
>> >>
>> >> --
>> >>
>> >> If there are any questions, please let me know!
>> >> Otherwise I'll send a summary email out next Monday.
>> >>
>> >> -Julia
>> >>
>> >> [0]: http://eavesdrop.openstack.org/meetings/ironic/2019/ironic.2019-07-01-15.00.log.html#l-123
>> >> [1]: http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007038.html
>> >> [2]: https://etherpad.openstack.org/p/IronicWhiteBoard
>> >> [3]: https://doodle.com/poll/652gzta6svsda343
>> >> [4]: https://doodle.com/poll/2ta5vbskytpntmgv

From mnaser at vexxhost.com Tue Aug 6 13:41:36 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Tue, 6 Aug 2019 09:41:36 -0400
Subject: [nova] edit flavor
In-Reply-To: References: Message-ID:

On Tue, Aug 6, 2019 at 7:44 AM Sean Mooney wrote:
>
> On Tue, 2019-08-06 at 10:47 +0000, Engelmann Florian wrote:
> > Hi,
> >
> > I would like to edit flavors which we created with 0 disk size. Flavors with a disk size (root volume) of 0 are not
> > handled well by, e.g., Magnum. So I would like to add a root disk size to those flavors. The default procedure is to
> > delete and recreate them, but that's not what I want. I would like to edit them, e.g. in the database. Is there any
> > possible impact on running instances using those flavors? I guess resize will work?
>
> Flavors with disk 0 are intended to be used with boot-from-volume guests. If Magnum does not support
> boot-from-volume guests, then it should not use those flavors.

FYI: https://review.opendev.org/#/c/621734/

> If you edit a flavor in the DB, it will not update the embedded flavor in the instance record.
> It may result in hosts being oversubscribed, and it will not update the allocations in placement to reflect the new size.
> A resize should fix that. But if you are using boot from volume, be aware that the instance will be scheduled based
> on the local disk space available on the compute nodes, so it will not work as expected in that case. Your best
> solution, assuming you are using local storage, is to create a new flavor and do a resize.
>
> > All the best,
> > Florian

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From mnaser at vexxhost.com Tue Aug 6 14:05:12 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Tue, 6 Aug 2019 10:05:12 -0400
Subject: [openstack-ansible] Shanghai Summit Planning
In-Reply-To: References: Message-ID:

Hi everyone,

I have not seen any updates to the etherpad, neither have I seen any names added in there. I have added a section for "who's not going": https://etherpad.openstack.org/p/PVG-OSA-PTG

I just want to make sure to have an idea of attendance.

Thanks,
Mohammed

On Thu, Aug 1, 2019 at 4:10 PM Mohammed Naser wrote:
>
> Hey everyone!
>
> Here's the link to the Etherpad for this year's Shanghai summit
> initial planning. You can put your name down if you're attending and also
> write down your topic of discussion ideas. Looking forward to seeing
> you there!
>
> https://etherpad.openstack.org/p/PVG-OSA-PTG
>
> Regards,
> Mohammed
>
> --
> Mohammed Naser — vexxhost
> -----------------------------------------------------
> D. 514-316-8872
> D. 800-910-1726 ext. 200
> E. mnaser at vexxhost.com
> W. http://vexxhost.com

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From mnaser at vexxhost.com Tue Aug 6 14:06:06 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Tue, 6 Aug 2019 10:06:06 -0400
Subject: [tc] Shanghai Summit Planning
In-Reply-To: References: Message-ID:

Bumping this email; I have not seen any traction on this. Even if you don't have any ideas, please just add your name whether you're attending or not!

Thanks,
Mohammed

On Thu, Aug 1, 2019 at 4:11 PM Mohammed Naser wrote:
>
> Hey everyone!
>
> Here's the link to the Etherpad for this year's Shanghai summit
> initial planning. You can put your name down if you're attending and also
> write down your topic of discussion ideas. Looking forward to seeing
> you there!
>
> https://etherpad.openstack.org/p/PVG-TC-PTG
>
> Regards,
> Mohammed
>
> --
> Mohammed Naser — vexxhost
> -----------------------------------------------------
> D. 514-316-8872
> D. 800-910-1726 ext. 200
> E. mnaser at vexxhost.com
> W. http://vexxhost.com

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From fungi at yuggoth.org Tue Aug 6 14:37:43 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 6 Aug 2019 14:37:43 +0000
Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture
In-Reply-To: References: Message-ID: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org>

On 2019-08-06 16:49:15 +0900 (+0900), Trinh Nguyen wrote:
> Is there any document somewhere describing the current
> architecture of the CI/CD system that OpenStack Infrastructure is
> running?

OpenStack uses the OpenDev deployment of the Zuul project gating system for CI/CD.
The OpenDev infrastructure sysadmins maintain some operational Zuul deployment documentation for their own purposes at https://docs.openstack.org/infra/system-config/zuul.html and also some information which was written separately during the v3 migration at https://docs.openstack.org/infra/system-config/zuulv3.html which is due to get rolled into the first. Zuul itself already has excellent documentation (written by many of the same people), and its main architecture is described in the Admin Guide, with the diagram at https://zuul-ci.org/docs/zuul/admin/components.html providing a nice component overview.

What specifically are you looking for?
--
Jeremy Stanley

From openstack at nemebean.com Tue Aug 6 15:10:40 2019
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 6 Aug 2019 10:10:40 -0500
Subject: Slow instance launch times due to RabbitMQ
In-Reply-To: References: <0723e410-6029-fcf7-2bd0-8c38e4586cdd@civo.com> <1c6078ab-bc1d-7a84-cc87-1b470235ccc0@civo.com>
Message-ID: <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com>

Another thing to check if you're having seemingly inexplicable messaging issues is that there isn't a notification queue filling up somewhere. If notifications are enabled somewhere but nothing is consuming them, the size of the queue will eventually grind rabbit to a halt.

I used to check queue sizes through the rabbit web UI, so I have to admit I'm not sure how to do it through the CLI.
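(From memory and untested: something like `rabbitmqctl list_queues name messages consumers` - plus `-p <vhost>` for each vhost - should show per-queue message counts, but double-check the column names against your RabbitMQ version.)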
On 7/31/19 10:48 AM, Gabriele Santomaggio wrote:
> Hi,
> Are you using SSL connections?
>
> Could it be this issue?
> https://bugs.launchpad.net/ubuntu/+source/oslo.messaging/+bug/1800957
>
> ------------------------------------------------------------------------
> *From:* Laurent Dumont
> *Sent:* Wednesday, July 31, 2019 4:20 PM
> *To:* Grant Morley
> *Cc:* openstack-operators at lists.openstack.org
> *Subject:* Re: Slow instance launch times due to RabbitMQ
>
> That is a bit strange, list_queues should return stuff. Couple of ideas:
>
> * Are the Rabbit connection failure logs on the compute pointing to a
> specific controller?
> * Are there any logs within Rabbit on the controller that would point
> to a transient issue?
> * cluster_status is a snapshot of the cluster at the time you ran the
> command. If the alarms have cleared, you won't see anything.
> * If you have the RabbitMQ management plugin activated, I would
> recommend a quick look to see the historical metrics and overall status.
>
> On Wed, Jul 31, 2019 at 9:35 AM Grant Morley wrote:
>
> Hi guys,
>
> We are using Ubuntu 16 and OpenStack-Ansible to do our setup.
>
> rabbitmqctl list_queues
> Listing queues
>
> (There don't appear to be any queues.)
>
> rabbitmqctl cluster_status
>
> Cluster status of node 'rabbit at management-1-rabbit-mq-container-b4d7791f'
> [{nodes,[{disc,['rabbit at management-1-rabbit-mq-container-b4d7791f',
>                 'rabbit at management-2-rabbit-mq-container-b455e77d',
>                 'rabbit at management-3-rabbit-mq-container-1d6ae377']}]},
>  {running_nodes,['rabbit at management-3-rabbit-mq-container-1d6ae377',
>                  'rabbit at management-2-rabbit-mq-container-b455e77d',
>                  'rabbit at management-1-rabbit-mq-container-b4d7791f']},
>  {cluster_name,<<"openstack">>},
>  {partitions,[]},
>  {alarms,[{'rabbit at management-3-rabbit-mq-container-1d6ae377',[]},
>           {'rabbit at management-2-rabbit-mq-container-b455e77d',[]},
>           {'rabbit at management-1-rabbit-mq-container-b4d7791f',[]}]}]
>
> Regards,
>
> On 31/07/2019 11:49, Laurent Dumont wrote:
>> Could you forward the output of the following commands on a
>> controller node?:
>>
>> rabbitmqctl cluster_status
>> rabbitmqctl list_queues
>>
>> You won't necessarily see a high load on a Rabbit cluster that is
>> in a bad state.
>>
>> On Wed, Jul 31, 2019 at 5:19 AM Grant Morley wrote:
>>
>> Hi all,
>>
>> We are randomly seeing slow instance launch / deletion times
>> and it appears to be because of RabbitMQ. We are seeing a lot
>> of these messages in the logs for Nova and Neutron:
>>
>> ERROR oslo.messaging._drivers.impl_rabbit [-]
>> [f4ab3ca0-b837-4962-95ef-dfd7d60686b6] AMQP server on
>> 10.6.2.212:5671 is unreachable: Too
>> many heartbeats missed. Trying again in 1 seconds. Client
>> port: 37098: ConnectionForced: Too many heartbeats missed
>>
>> The RabbitMQ cluster isn't under high load and I am not seeing
>> any packets drop over the network when I do some tracing.
>>
>> We are only running 15 compute nodes currently and have >1000
>> instances so it isn't a large deployment.
>>
>> Are there any good configuration tweaks for RabbitMQ running
>> on OpenStack Queens?
>>
>> Many Thanks,
>>
>> --
>> Grant Morley
>> Cloud Lead, Civo Ltd
>> www.civo.com | Signup for an account!

From openstack at nemebean.com Tue Aug 6 15:24:59 2019
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 6 Aug 2019 10:24:59 -0500
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com>
References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com>
Message-ID: <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com>

Just a reminder that there is also
http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html
which was intended to address this same issue.

I toyed around with it a bit for TripleO installs back then and it did seem to speed things up, but at the time there was a bug in our client plugin where it was triggering a prompt for input that was problematic with the server running in the background. I never really got back to it once that was fixed. :-/

On 7/26/19 6:53 PM, Clark Boylan wrote:
> Today I have been digging into devstack runtime costs to help Donny Davis understand why tempest jobs sometimes timeout on the FortNebula cloud. One thing I discovered was that the keystone user, group, project, role, and domain setup [0] can take many minutes [1][2] (in the examples here almost 5).
> > I've rewritten create_keystone_accounts to be a python tool [3] and get the runtime for that subset of setup from ~100s to ~9s [4]. I imagine that if we applied this to the other create_X_accounts functions we would see similar results. > > I think this is so much faster because we avoid repeated costs in openstack client including: python process startup, pkg_resource disk scanning to find entrypoints, and needing to convert names to IDs via the API every time osc is run. Given my change shows this can be so much quicker is there any interest in modifying devstack to be faster here? And if so what do we think an appropriate approach would be? > > [0] https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 > [1] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 > [2] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 > [3] https://review.opendev.org/#/c/673108/ > [4] http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 > > Note the jobs compared above all ran on rax-dfw. > > Clark > From mriedemos at gmail.com Tue Aug 6 15:31:29 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Aug 2019 10:31:29 -0500 Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached. In-Reply-To: References: Message-ID: On 8/6/2019 7:18 AM, Mark Goddard wrote: > We do use a larger timeout for glance-api > (haproxy_glance_api_client_timeout > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need > something similar for cinder-api. A 6 hour timeout for cinder API calls would be nuts IMO. The thing that was failing was a volume attachment delete/create from what I recall, which is the newer version (as of Ocata?) for the old initialize_connection/terminate_connection APIs. These are synchronous RPC calls from cinder-api to cinder-volume to do things on the storage backend and we have seen them take longer than 60 seconds in the gate CI runs with the lvm driver. I think the investigation normally turned up lvchange taking over 60 seconds on some concurrent operation locking out the RPC call which eventually results in the MessagingTimeout from oslo.messaging. That's unrelated to your gateway timeout from HAProxy but the point is yeah you likely want to bump up those timeouts since cinder-api has these synchronous calls to the cinder-volume service. I just don't think you need to go to 6 hours :). I think the keystoneauth1 default http response timeout is 10 minutes so maybe try that. -- Thanks, Matt From cboylan at sapwetik.org Tue Aug 6 15:49:17 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 06 Aug 2019 08:49:17 -0700 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> Message-ID: On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: > Just a reminder that there is also > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > which was intended to address this same issue. 
> I toyed around with it a bit for TripleO installs back then and it did
> seem to speed things up, but at the time there was a bug in our client
> plugin where it was triggering a prompt for input that was problematic
> with the server running in the background. I never really got back to it
> once that was fixed. :-/

I'm not tied to any particular implementation. Mostly I wanted to show that we can take this ~5 minute portion of devstack and turn it into a 15 second portion of devstack by improving our use of the service APIs (and possibly even further if we apply it to all of the api interaction). Any idea how difficult it would be to get your client as a service stuff running in devstack again?

I do not think we should make a one-off change like I've done in my POC. That will just end up being harder to understand and debug in the future since it will be different than all of the other API interaction. I like the idea of a manifest, or of feeding API update commands to a longer-lived process, as we can then avoid requesting new tokens as well as the pkg_resources startup time. Such a system could be used by all of devstack as well (avoiding the "this bit is special" problem).

Is there any interest from the QA team in committing to an approach and working to do a conversion? I don't want to commit any more time to this myself unless there is strong interest in getting changes merged (as I expect it will be a slow process weeding out places where we've made bad assumptions, particularly around plugins).

One of the things I found was that using names with osc results in name to id lookups as well. We can avoid these entirely if we remember name to id mappings instead (which my POC does). Any idea if your osc as a service tool does or can do that? Probably have to be more careful about scoping things in a tool like that, as it may be reused by people with name collisions across projects/users/groups/domains.

> On 7/26/19 6:53 PM, Clark Boylan wrote:
> > Today I have been digging into devstack runtime costs to help Donny Davis understand why tempest jobs sometimes timeout on the FortNebula cloud. One thing I discovered was that the keystone user, group, project, role, and domain setup [0] can take many minutes [1][2] (in the examples here almost 5).
> >
> > I've rewritten create_keystone_accounts to be a python tool [3] and get the runtime for that subset of setup from ~100s to ~9s [4]. I imagine that if we applied this to the other create_X_accounts functions we would see similar results.
> >
> > I think this is so much faster because we avoid repeated costs in openstack client including: python process startup, pkg_resource disk scanning to find entrypoints, and needing to convert names to IDs via the API every time osc is run. Given my change shows this can be so much quicker is there any interest in modifying devstack to be faster here? And if so what do we think an appropriate approach would be?
> >
> > [0] https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161
> > [1] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228
> > [2] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059
> > [3] https://review.opendev.org/#/c/673108/
> > [4] http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013
> >
> > Note the jobs compared above all ran on rax-dfw.
> > Clark

From mark at stackhpc.com Tue Aug 6 15:59:25 2019
From: mark at stackhpc.com (Mark Goddard)
Date: Tue, 6 Aug 2019 16:59:25 +0100
Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached.
In-Reply-To: References: Message-ID:

On Tue, 6 Aug 2019 at 16:33, Matt Riedemann wrote:
> On 8/6/2019 7:18 AM, Mark Goddard wrote:
> > We do use a larger timeout for glance-api
> > (haproxy_glance_api_client_timeout
> > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need
> > something similar for cinder-api.
>
> A 6 hour timeout for cinder API calls would be nuts IMO. The thing that
> was failing was a volume attachment delete/create from what I recall,
> which is the newer version (as of Ocata?) for the old
> initialize_connection/terminate_connection APIs. These are synchronous
> RPC calls from cinder-api to cinder-volume to do things on the storage
> backend and we have seen them take longer than 60 seconds in the gate CI
> runs with the lvm driver. I think the investigation normally turned up
> lvchange taking over 60 seconds on some concurrent operation locking out
> the RPC call which eventually results in the MessagingTimeout from
> oslo.messaging. That's unrelated to your gateway timeout from HAProxy
> but the point is yeah you likely want to bump up those timeouts since
> cinder-api has these synchronous calls to the cinder-volume service. I
> just don't think you need to go to 6 hours :). I think the keystoneauth1
> default http response timeout is 10 minutes so maybe try that.

Yeah, wasn't advocating for 6 hours - just showing which knobs are available :)

> --
>
> Thanks,
>
> Matt

From fungi at yuggoth.org Tue Aug 6 16:16:01 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 6 Aug 2019 16:16:01 +0000
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com>
Message-ID: <20190806161601.jjf5ar6hczy5533i@yuggoth.org>

On 2019-08-06 08:49:17 -0700 (-0700), Clark Boylan wrote:
[...]
> One of the things I found was that using names with osc results in
> name to id lookups as well. We can avoid these entirely if we
> remember name to id mappings instead (which my POC does). Any idea
> if your osc as a service tool does or can do that? Probably have
> to be more careful about scoping things in a tool like that, as it
> may be reused by people with name collisions across
> projects/users/groups/domains.
[...]

Out of curiosity, could OSC/SDK cache those relationships so they're only looked up once (or at least infrequently)? I guess there are cache invalidation concerns if an entity is deleted and another created out-of-band using the same name, but if it's all done through the same persistent daemon then that's less of a risk right?
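Something like this tiny in-process cache is the sort of thing I have in mind (purely illustrative, not actual OSC/SDK code):

    import time

    _cache = {}  # (resource_type, name) -> (id, timestamp)
    TTL = 300  # seconds; a crude guard against out-of-band changes

    def lookup_id(resource_type, name, resolver):
        key = (resource_type, name)
        hit = _cache.get(key)
        if hit and time.time() - hit[1] < TTL:
            return hit[0]
        resource_id = resolver(name)  # one API round trip on a miss
        _cache[key] = (resource_id, time.time())
        return resource_id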
--
Jeremy Stanley

From openstack at nemebean.com Tue Aug 6 16:34:28 2019
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 6 Aug 2019 11:34:28 -0500
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com>
Message-ID: <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com>

On 8/6/19 10:49 AM, Clark Boylan wrote:
> On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote:
>> Just a reminder that there is also
>> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html
>> which was intended to address this same issue.
>>
>> I toyed around with it a bit for TripleO installs back then and it did
>> seem to speed things up, but at the time there was a bug in our client
>> plugin where it was triggering a prompt for input that was problematic
>> with the server running in the background. I never really got back to it
>> once that was fixed. :-/
>
> I'm not tied to any particular implementation. Mostly I wanted to show that we can take this ~5 minute portion of devstack and turn it into a 15 second portion of devstack by improving our use of the service APIs (and possibly even further if we apply it to all of the api interaction). Any idea how difficult it would be to get your client as a service stuff running in devstack again?

I wish I could take credit, but this is actually Dan Berrange's work. :-)

> I do not think we should make a one-off change like I've done in my POC. That will just end up being harder to understand and debug in the future since it will be different than all of the other API interaction. I like the idea of a manifest, or of feeding API update commands to a longer-lived process, as we can then avoid requesting new tokens as well as the pkg_resources startup time. Such a system could be used by all of devstack as well (avoiding the "this bit is special" problem).
>
> Is there any interest from the QA team in committing to an approach and working to do a conversion? I don't want to commit any more time to this myself unless there is strong interest in getting changes merged (as I expect it will be a slow process weeding out places where we've made bad assumptions, particularly around plugins).
>
> One of the things I found was that using names with osc results in name to id lookups as well. We can avoid these entirely if we remember name to id mappings instead (which my POC does). Any idea if your osc as a service tool does or can do that? Probably have to be more careful about scoping things in a tool like that, as it may be reused by people with name collisions across projects/users/groups/domains.

I don't believe this would handle name to id mapping. It's a very thin wrapper around the regular client code that just makes it persistent, so we don't pay the startup costs every call. On the plus side, that means it basically works like the vanilla client; on the minus side, that means it may not provide as much improvement as a more targeted solution.

IIRC it's pretty easy to use, so I can try it out again and make sure it still works and still provides a performance benefit.

>
>> On 7/26/19 6:53 PM, Clark Boylan wrote:
>>> Today I have been digging into devstack runtime costs to help Donny Davis understand why tempest jobs sometimes timeout on the FortNebula cloud.
One thing I discovered was that the keystone user, group, project, role, and domain setup [0] can take many minutes [1][2] (in the examples here almost 5). >>> >>> I've rewritten create_keystone_accounts to be a python tool [3] and get the runtime for that subset of setup from ~100s to ~9s [4]. I imagine that if we applied this to the other create_X_accounts functions we would see similar results. >>> >>> I think this is so much faster because we avoid repeated costs in openstack client including: python process startup, pkg_resource disk scanning to find entrypoints, and needing to convert names to IDs via the API every time osc is run. Given my change shows this can be so much quicker is there any interest in modifying devstack to be faster here? And if so what do we think an appropriate approach would be? >>> >>> [0] https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 >>> [1] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 >>> [2] http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 >>> [3] https://review.opendev.org/#/c/673108/ >>> [4] http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 >>> >>> Note the jobs compared above all ran on rax-dfw. >>> >>> Clark >>> >> >> > From cdent+os at anticdent.org Tue Aug 6 16:38:48 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 6 Aug 2019 17:38:48 +0100 (BST) Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190806161601.jjf5ar6hczy5533i@yuggoth.org> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org> Message-ID: On Tue, 6 Aug 2019, Jeremy Stanley wrote: > On 2019-08-06 08:49:17 -0700 (-0700), Clark Boylan wrote: > [...] >> One of the things I found was that using names with osc results in >> name to id lookups as well. We can avoid these entirely if we >> remember name to id mappings instead (which my POC does). Any idea >> if your osc as a service tool does or can do that? Probably have >> to be more careful for scoping things in a tool like that as it >> may be reused by people with name collisions across >> projects/users/groups/domains. > [...] > > Out of curiosity, could OSC/SDK cache those relationships so they're > only looked up once (or at least infrequently)? I guess there are > cache invalidation concerns if an entity is deleted and another > created out-of-band using the same name, but if it's all done > through the same persistent daemon then that's less of a risk right? If we are in a situation where name to id and id to name translations are slow at the services' API layer, isn't that a really big bug? One where the fixing is beneficial to everyone, including devstack users? (Yes, I'm aware of TCP overhead and all that, but I reckon that's way down on the list of contributing factors here?) 
--
Chris Dent
٩◔̯◔۶
https://anticdent.org/
freenode: cdent

From cboylan at sapwetik.org Tue Aug 6 16:40:50 2019
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 06 Aug 2019 09:40:50 -0700
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: <20190806161601.jjf5ar6hczy5533i@yuggoth.org>
References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org>
Message-ID: <339c12a2-a9bc-4938-b9c0-4f48ef846ad8@www.fastmail.com>

On Tue, Aug 6, 2019, at 9:17 AM, Jeremy Stanley wrote:
> On 2019-08-06 08:49:17 -0700 (-0700), Clark Boylan wrote:
> [...]
> > One of the things I found was that using names with osc results in
> > name to id lookups as well. We can avoid these entirely if we
> > remember name to id mappings instead (which my POC does). Any idea
> > if your osc as a service tool does or can do that? Probably have
> > to be more careful about scoping things in a tool like that, as it
> > may be reused by people with name collisions across
> > projects/users/groups/domains.
> [...]
>
> Out of curiosity, could OSC/SDK cache those relationships so they're
> only looked up once (or at least infrequently)? I guess there are
> cache invalidation concerns if an entity is deleted and another
> created out-of-band using the same name, but if it's all done
> through the same persistent daemon then that's less of a risk right?

They could cache these things too. The concern is a valid one; however, a relatively short TTL may address it, as these resources tend to all be used near each other - for example, creating a router, network, and subnet in neutron, or a user, role, and group/domain in keystone.

That said, I think a bigger win would be caching tokens if we want to make changes to caching for osc (I think it can cache tokens, but we don't set it up properly in devstack?). Every invocation of osc first hits the pkg_resources cost, then hits the catalog and token lookup costs, then does name to id translations, then does the actual thing you requested. Addressing the first two upfront costs likely has a bigger impact than name to id translations.

Clark

From brentonpoke at outlook.com Tue Aug 6 18:44:24 2019
From: brentonpoke at outlook.com (Brenton Poke)
Date: Tue, 6 Aug 2019 18:44:24 +0000
Subject: [scientific]How are orgs using openstack for research?
Message-ID:

One of the answers I'm seeking is whether some orgs create shareable configurations for software stacks that might do things like collect data into sinks, similar to what Amazon is attempting to offer with the AWS research cloud. The limitation I see with AWS is that everything is built specifically around AWS systems, like using S3 to store everything. For example, if your output would be better represented as a graph, it would make more sense to use a Kafka-OrientDB sink that can be reused. Make it a Helm chart? I'm not sure. I was told that CERN uses OpenStack for research, but I have no idea to what extent, or whether they contribute anything back. Does anyone know how a research org is using the infrastructure right now?
From ildiko.vancsa at gmail.com Tue Aug 6 19:04:22 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Tue, 6 Aug 2019 21:04:22 +0200
Subject: [edge][all] Edge Hacking Days - August 9, 16
In-Reply-To: <7966B87C-E600-4681-83FB-FD947250012A@gmail.com>
References: <7966B87C-E600-4681-83FB-FD947250012A@gmail.com>
Message-ID: <2BB743BA-5BE8-4DC3-BA7C-D7ACDA3CBED1@gmail.com>

Hi,

Based on the Doodle poll results, __August 9 and August 16__ got the most votes.

You can find the dial-in details on this etherpad: https://etherpad.openstack.org/p/osf-edge-hacking-days

If you're interested in joining, please __add your name and the time period (with time zone) when you will be available__ on these dates. You can also add topics that you would be interested in working on.

Potential topics to work on:

* Building and testing edge reference architectures
* Keystone testing and bug fixing

Please let me know if you have any questions.

See you on Friday! :)

Thanks and Best Regards,
Ildikó

> On 2019. Jul 30., at 14:04, Ildiko Vancsa wrote:
>
> Hi,
>
> I'm reaching out with an attempt to organize hacking days to work on edge related tasks.
>
> The idea is to get together remotely on IRC/Zoom or any other platform that supports remote communication and work on items like building and testing our reference architectures, or work on some project-specific items like in Keystone or Ironic.
>
> Here are Doodle polls for the next three months:
>
> August: https://doodle.com/poll/ucfc9w7iewe6gdp4
> September: https://doodle.com/poll/3cyqxzr9vd82pwtr
> October: https://doodle.com/poll/6nzziuihs65hwt7b
>
> Please mark any day when you have some availability to dedicate to hacking, even if it's not a full day.
>
> Please let me know if you have any questions.
>
> As a reminder, you can find the edge computing group's resources and information about the latest activities here: https://wiki.openstack.org/wiki/Edge_Computing_Group
>
> Thanks and Best Regards,
> Ildikó
> (IRC: ildikov)

From mnaser at vexxhost.com Tue Aug 6 19:18:52 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Tue, 6 Aug 2019 15:18:52 -0400
Subject: [openstack-ansible] office hours update
Message-ID:

Hey everyone,

Here's the update of what happened in this week's OpenStack-Ansible office hours.

We talked about who is attending the Shanghai Summit this year, but not many of us are compared to past years. There was an issue with Manila still failing, but it's been fixed and there's good progress on that front. There's also an issue with adding a novnc test to os_nova because the tempest plugins used aren't from master, which brought up some discussion about perhaps starting to use the Zuul checked-out roles.

Thanks for tuning in!

Regards,
Mohammed

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W.
http://vexxhost.com

From fungi at yuggoth.org Tue Aug 6 19:19:09 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 6 Aug 2019 19:19:09 +0000
Subject: [OSSA-2019-003] Nova Server Resource Faults Leak External Exception Details (CVE-2019-14433)
Message-ID: <20190806191908.o52es6mbavyle2k4@yuggoth.org>

==========================================================================
OSSA-2019-003: Nova Server Resource Faults Leak External Exception Details
==========================================================================

:Date: August 06, 2019
:CVE: CVE-2019-14433

Affects
~~~~~~~
- Nova: <17.0.12,>=18.0.0<18.2.2,>=19.0.0<19.0.2

Description
~~~~~~~~~~~
Donny Davis with Intel reported a vulnerability in Nova Compute resource fault handling. If an API request from an authenticated user ends in a fault condition due to an external exception, details of the underlying environment may be leaked in the response and could include sensitive configuration or other data.

Patches
~~~~~~~
- https://review.openstack.org/674908 (Ocata)
- https://review.openstack.org/674877 (Pike)
- https://review.openstack.org/674859 (Queens)
- https://review.openstack.org/674848 (Rocky)
- https://review.openstack.org/674828 (Stein)
- https://review.openstack.org/674821 (Train)

Credits
~~~~~~~
- Donny Davis from Intel (CVE-2019-14433)

References
~~~~~~~~~~
- https://launchpad.net/bugs/1837877
- http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14433

Notes
~~~~~
- The stable/ocata and stable/pike branches are under extended maintenance and will receive no new point releases, but patches for them are provided as a courtesy.

--
Jeremy Stanley
OpenStack Vulnerability Management Team

From dtroyer at gmail.com Tue Aug 6 19:44:36 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Tue, 6 Aug 2019 14:44:36 -0500
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org>
Message-ID:

On Tue, Aug 6, 2019 at 11:42 AM Chris Dent wrote:
> If we are in a situation where name to id and id to name
> translations are slow at the services' API layer, isn't that a
> really big bug? One where the fixing is beneficial to everyone,
> including devstack users?

While the name->ID lookup is an additional API round trip, it does not cause an additional python startup scan, which is the major killer here. In fact, it is possible that there is more than one lookup, and that at least one will always be done because we do not know if that value is a name or an ID. The GET is done in any case because nearly every time (in non-create operations) we probably want the full object anyway.

I also played with starting OSC as a background process a while back; it actually does work pretty well and with a bit more error handling would have been good enough(tm)[0]. The major concern with it then was that it was not representative of how people actually use OSC and changed the testing value we get from doing that.

dt

[0] Basically run interactive mode in background, plumb up stdin/stdout to some descriptors and off to the races.
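For the curious, the shape of it is roughly this (a from-memory illustration, not the original code; reading responses back reliably needs real output framing):

    import subprocess

    # start osc once in interactive mode; it then reads commands on stdin
    osc = subprocess.Popen(
        ["openstack"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        universal_newlines=True)

    def send(command):
        # each call reuses the warm process, skipping interpreter startup
        # and the pkg_resources entry point scan
        osc.stdin.write(command + "\n")
        osc.stdin.flush()
        # real code must delimit the output (e.g. with a sentinel command)
        # and surface errors -- the "bit more error handling" part

    send("flavor list")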
--
Dean Troyer
dtroyer at gmail.com

From fungi at yuggoth.org Tue Aug 6 20:00:14 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 6 Aug 2019 20:00:14 +0000
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org>
Message-ID: <20190806200014.5rkpjvg3uyhm2rkx@yuggoth.org>

On 2019-08-06 14:44:36 -0500 (-0500), Dean Troyer wrote:
[...]
> The major concern with it then was that it was not representative of
> how people actually use OSC and changed the testing value we get
> from doing that.
[...]

In an ideal world, OSC would have explicit functional testing independent of the side effect of calling it when standing up DevStack.
--
Jeremy Stanley

From doug at doughellmann.com Tue Aug 6 20:00:15 2019
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 6 Aug 2019 16:00:15 -0400
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <20190806161601.jjf5ar6hczy5533i@yuggoth.org>
Message-ID:

> On Aug 6, 2019, at 3:44 PM, Dean Troyer wrote:
>
> On Tue, Aug 6, 2019 at 11:42 AM Chris Dent wrote:
>> If we are in a situation where name to id and id to name
>> translations are slow at the services' API layer, isn't that a
>> really big bug? One where the fixing is beneficial to everyone,
>> including devstack users?
>
> While the name->ID lookup is an additional API round trip, it does not
> cause an additional python startup scan, which is the major killer
> here. In fact, it is possible that there is more than one lookup, and
> that at least one will always be done because we do not know if that
> value is a name or an ID. The GET is done in any case because nearly
> every time (in non-create operations) we probably want the full object
> anyway.
>
> I also played with starting OSC as a background process a while back;
> it actually does work pretty well and with a bit more error handling
> would have been good enough(tm)[0]. The major concern with it then
> was that it was not representative of how people actually use OSC and
> changed the testing value we get from doing that.
>
> dt
>
> [0] Basically run interactive mode in background, plumb up
> stdin/stdout to some descriptors and off to the races.
>
> --
> Dean Troyer
> dtroyer at gmail.com

I made some notes about the plugin lookup issue a while back [1] and I looked at that again the most recent time we were in Denver [2], and came to the conclusion that the implementation was going to require more changes in osc-lib than I was going to have time to figure out on my own. Unfortunately, it's not a simple matter of choosing between looking at one internal cache or doing the pkg_resources scan, because of the plugin version management layer osc-lib added.

In any case, I think we've discussed many times the fact that the way to fix this is to not scan for plugins unless we have to do so. We just need someone to sit down and work on figuring out how to make that work.
Doug

[1] https://etherpad.openstack.org/p/mFsAgTZggf
[2] https://etherpad.openstack.org/p/train-ptg-osc

From stig.openstack at telfer.org Tue Aug 6 20:08:31 2019
From: stig.openstack at telfer.org (Stig Telfer)
Date: Tue, 6 Aug 2019 21:08:31 +0100
Subject: [scientific-sig] IRC meeting today - experiences with accounting and chargeback
Message-ID: <3E29D363-66BC-47D7-A137-9B86FDBD434E@telfer.org>

Hi all -

We have a Scientific SIG IRC meeting today at 2100 UTC (about an hour's time) in channel #openstack-meeting. Everyone is welcome.

Today's agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_6th_2019

We'd like to continue with last week's discussion, gathering notes on experiences with accounting and chargeback for scientific OpenStack deployments.

Cheers,
Stig

From stig.openstack at telfer.org Tue Aug 6 22:04:14 2019
From: stig.openstack at telfer.org (Stig Telfer)
Date: Tue, 6 Aug 2019 23:04:14 +0100
Subject: Re: [scientific]How are orgs using openstack for research?
In-Reply-To: References: Message-ID:

Hi Brenton -

This is the kind of discussion that goes on within the Scientific SIG (https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meetings), particularly your final question. It would be great to have you join the meetings and raise some of these topics for discussion.

The meetings alternate between EMEA and Americas timezones. Where are you based?

Cheers,
Stig (oneswig)

> On 6 Aug 2019, at 19:44, Brenton Poke wrote:
>
> One of the answers I'm seeking is whether some orgs create shareable configurations for software stacks that might do things like collect data into sinks, similar to what Amazon is attempting to offer with the AWS research cloud. The limitation I see with AWS is that everything is built specifically around AWS systems, like using S3 to store everything. For example, if your output would be better represented as a graph, it would make more sense to use a Kafka-OrientDB sink that can be reused. Make it a Helm chart? I'm not sure. I was told that CERN uses OpenStack for research, but I have no idea to what extent, or whether they contribute anything back. Does anyone know how a research org is using the infrastructure right now?

From corvus at inaugust.com Wed Aug 7 00:01:11 2019
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 06 Aug 2019 17:01:11 -0700
Subject: Zuul log location changing
Message-ID: <87y305onco.fsf@meyer.lemoncheese.net>

Hi,

We've been working for some time[1] to retire our current static log server in favor of storing Zuul job logs in Swift. We're just about ready to do that. This means that not only do we get to use another great OpenStack project, but we also can stop worrying about fscking our 14TB log partition whenever it gets hit with a cosmic ray.

The change will happen in two phases, only the first of which should be readily apparent.

Phase 1:

On Monday August 12, 2019 we will change the URL that Zuul reports back to Gerrit. Instead of being a direct link to the log server, it will be a link to the Zuul Build page for the job. This is part of Zuul's web interface which has been around for a while, but isn't well known since we haven't linked to it yet.

The build page shows a summary of information about the build, including output snippets for failed tasks. Next to the "Summary" tab, you'll find the "Logs" tab.
This contains an expandable index of all the log files uploaded for the build. If they are text files, they can be rendered in-app with line-number hyperlinks and severity filtering. There are also direct links to the files on the log server. Links to preview sites (e.g., for docs builds) will show up in the "Artifacts" section. We also plan to further enhance the build page with additional features.

Here are some links to sample build pages so you can see what it's like:

https://zuul.opendev.org/t/openstack/build/a6e13a8098fc4a1fbff43d8f2c27ad29
https://zuul.opendev.org/t/openstack/build/75d1e8d4ffaf477db00520d7bfd77246

This step is necessary because our static log server implements a number of features as WSGI middleware in Apache. We have re-implemented these on the Zuul build page, so there should be no loss in functionality (in fact, we think this is an improvement). Once in place, we can change the backend storage options without impacting the user interface.

Phase 2:

Shortly afterwards (depending on how phase 1 goes), we will configure jobs to upload logs to Swift instead of the static log server. At this point, there should be no user-visible change, since the main interface for interacting with logs is now the Zuul build page. However, you may notice that log URLs have changed from our static log server to one of six different Swift instances. The Swift instance used to store the logs for any given build is chosen at random from among our providers, and is yet another really cool multi-cloud feature we get by using OpenStack.

Thanks to our amazing providers and all of the folks who have helped with this effort over the years[1]. Please let us know if you have any questions or encounter any issues, either here, or in #openstack-infra on IRC.

-Jim

[1] Years. So. Many. Years.

From corvus at inaugust.com Wed Aug 7 00:32:38 2019
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 06 Aug 2019 17:32:38 -0700
Subject: [all][tc] U Cycle Naming Poll
In-Reply-To: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> (Doug Hellmann's message of "Sat, 3 Aug 2019 20:48:45 -0400")
References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com>
Message-ID: <87blx1olw9.fsf@meyer.lemoncheese.net>

Doug Hellmann writes:

> Every OpenStack development cycle and release has a code-name. As with
> everything we do, the process of choosing the name is open and based
> on input from community members. The name criteria are described in [1],
> and this time around we were looking for names starting with U
> associated with China. With some extra assistance from local community
> members (thank you to everyone who helped!), we have a list of
> candidate names that will go into the poll. Below is a subset of the
> names proposed, including those that meet the standard criteria and
> some of the suggestions that do not. Before we start the poll, the
> process calls for us to provide a period of 1 week so that any names
> removed from the proposals can be discussed and any last-minute
> objections can be raised. We will start the poll next week using this
> list, including any modifications based on that discussion.
Hi, I had previously added an entry to the suggestions wiki page, but I did not see it in this email: * University https://en.wikipedia.org/wiki/List_of_universities_and_colleges_in_Shanghai (Shanghai is famous for its universities) To pick one at random, the "University of Shanghai for Science and Technology" is a place in Shanghai; I think that meets the requirement for "physical or human geography". It's a point of pride that Shanghai has so many renowned universities, so I think it's a good choice and one well worth considering. -Jim From dangtrinhnt at gmail.com Wed Aug 7 00:48:25 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 7 Aug 2019 09:48:25 +0900 Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture In-Reply-To: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> References: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> Message-ID: Hi Jeremy, Thanks for pointing that out. They're pretty helpful. Sorry for not clarifying the purpose of my question in the first email. Right now my company is using Jenkins for CI/CD which is not scalable and for me it's hard to define job pipeline because of XML. I'm about to build a demonstration for my company using Zuul with Github as a replacement and trying to make sense of the OpenStack deployment of Zuul. I have been working with OpenStack projects for a couple of cycles in which Zuul has shown me its greatness and I think I can bring that power to the company. Bests, On Tue, Aug 6, 2019 at 11:42 PM Jeremy Stanley wrote: > On 2019-08-06 16:49:15 +0900 (+0900), Trinh Nguyen wrote: > > Is there any documents somewhere describing the current > > architecture of the CI/CD system that OpenStack Infrastructure is > > running? > > OpenStack uses the OpenDev deployment of the Zuul project gating > system for CI/CD. The OpenDev infrastructure sysadmins maintain some > operational Zuul deployment documentation for their own purposes at > https://docs.openstack.org/infra/system-config/zuul.html and also > some information which was written separately during v3 migration at > https://docs.openstack.org/infra/system-config/zuulv3.html which is > due to get rolled into the first. Zuul itself already has excellent > documentation (written by many of the same people) and its main > architecture can be found described in the Admin Guide, with the > diagram at https://zuul-ci.org/docs/zuul/admin/components.html > providing a nice component overview. > > What specifically are you looking for? > -- > Jeremy Stanley > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Wed Aug 7 01:06:16 2019 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 6 Aug 2019 20:06:16 -0500 Subject: Shanghai Summit Schedule Live Message-ID: <4AB2799E-0E12-4B98-8681-0939DF6D8218@openstack.org> Hi everyone, The agenda for the Open Infrastructure Summit (formerly the OpenStack Summit) is now live! If you need a reason to join the Summit in Shanghai, November 4-6, here’s what you can expect: Breakout sessions spanning 30+ open source projects from technical community leaders and organizations including ARM, WalmartLabs, China Mobile, China Railway, Shanghai Electric Power Company, China UnionPay, Haitong Securities Company, CERN, and more. Project updates and onboarding from OSF projects: Airship, Kata Containers, OpenStack, StarlingX, and Zuul. 
Join collaborative sessions at the Forum, where open infrastructure operators and upstream developers will gather to jointly chart the future of open source infrastructure, discussing topics ranging from upgrades to networking models and how to get started contributing. Get hands-on training around open source technologies directly from the developers and operators building the software. Now what? Register before prices increase on August 14 at 11:59pm PT (August 15 at 2:59pm China Standard Time). Recruiting new talent? Pitching a new product? Enhance the visibility of your organization by sponsoring the Summit! Questions? Reach out to summit at openstack.org Cheers, Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Aug 7 01:25:16 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Aug 2019 01:25:16 +0000 Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture In-Reply-To: References: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> Message-ID: <20190807012515.zkjfmrrnffpx3rql@yuggoth.org> On 2019-08-07 09:48:25 +0900 (+0900), Trinh Nguyen wrote: [...] > Right now my company is using Jenkins for CI/CD which is not > scalable and for me it's hard to define job pipeline because of > XML. Not to steer you away from Zuul, but back when we originally used Jenkins we noticed the same things. We conquered the XML problem by inventing jenkins-job-builder, which allows you to define Jenkins jobs via templated YAML and then use that to generate the XML it expects. The scalability issue was worked around by creating the jenkins-gearman plugin and having (an earlier incarnation of) Zuul distribute jobs across multiple Jenkins masters via the Gearman protocol. You'll notice that current Zuul versions retain some of this heritage by continuing to use YAML (much of which is for Ansible) and multiple executors (which are no longer Jenkins masters, just servers invoking Ansible) communicating with the scheduler via Gearman. For us it's been a natural evolution.
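For a taste of what that looks like, here is a minimal jenkins-job-builder definition (a sketch from memory, with an invented job name), which JJB expands into the XML Jenkins expects:

    - job:
        name: example-unit-tests
        description: Run the unit tests on each proposed change.
        builders:
          - shell: |
              tox -e py36

Being plain YAML, jobs like this can be templated, code-reviewed and regenerated, instead of being hand-edited through the Jenkins UI.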
> I'm about to build a demonstration for my company using Zuul with > Github as a replacement and trying to make sense of the OpenStack > deployment of Zuul. I have been working with OpenStack projects > for a couple of cycles in which Zuul has shown me its greatness > and I think I can bring that power to the company. [...] If you're looking for inspiration, check out some of the user stories at https://zuul-ci.org/users.html and visit the Zuul community in the #zuul channel on the Freenode IRC network or maybe subscribe to the mailing lists here: http://lists.zuul-ci.org/cgi-bin/mailman/listinfo Zuul has some remarkably thorough documentation, and helpful folks who are always happy to answer your questions. Good luck with your demo and let us know if you need any help! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Aug 7 02:06:08 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 06 Aug 2019 19:06:08 -0700 Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD architecture In-Reply-To: (Trinh Nguyen's message of "Wed, 7 Aug 2019 09:48:25 +0900") References: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org> Message-ID: <87o911n2zz.fsf@meyer.lemoncheese.net> Trinh Nguyen writes: > Hi Jeremy, > > Thanks for pointing that out. They're pretty helpful. > > Sorry for not clarifying the purpose of my question in the first email. > Right now my company is using Jenkins for CI/CD which is not scalable and > for me it's hard to define job pipeline because of XML. I'm about to build > a demonstration for my company using Zuul with Github as a replacement and > trying to make sense of the OpenStack deployment of Zuul. I have been > working with OpenStack projects for a couple of cycles in which Zuul has > shown me its greatness and I think I can bring that power to the company. > > Bests, In addition to the excellent information that Jeremy provided, since you're talking about setting up a proof of concept, you may find it simpler to start with the Zuul Quick-Start: https://zuul-ci.org/start That's a container-based tutorial that will set you up with a complete Zuul system running on a single host, along with a private Gerrit instance. Once you have that running, it's fairly straightforward to take that and update the configuration to use GitHub instead of Gerrit. -Jim From sundar.nadathur at intel.com Wed Aug 7 06:06:52 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 7 Aug 2019 06:06:52 +0000 Subject: [cyborg] Poll for new weekly IRC meeting time Message-ID: <1CC272501B5BC543A05DB90AA509DED5275E2395@fmsmsx122.amr.corp.intel.com> The current Cyborg weekly IRC meeting time [1] is a conflict for many. We are looking for a better time that works for more people, with the understanding that no time is perfect for all. Please fill out this poll: https://doodle.com/poll/6t279f9y6msztz7x Be sure to indicate which times do not work for you. You can propose a new timeslot beyond what I included in the poll. [1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Weekly_IRC_Cyborg_team_meeting Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Wed Aug 7 06:46:35 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 7 Aug 2019 14:46:35 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87blx1olw9.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Aug 7, 2019 at 9:41 AM James E. Blair wrote: > I had previously added an entry to the suggestions wiki page, but I did > not see it in this email: > > * University > https://en.wikipedia.org/wiki/List_of_universities_and_colleges_in_Shanghai > (Shanghai is famous for its universities) > > To pick one at random, the "University of Shanghai for Science and > Technology" is a place in Shanghai; I think that meets the requirement > for "physical or human geography". > > It's a point of pride that Shanghai has so many renowned universities, > so I think it's a good choice and one well worth considering.
> Just added it in https://wiki.openstack.org/wiki/Release_Naming/U_Proposals Will make sure TCs evaluate on this one when evaluating names that do not meet the criteria Thanks for the idea > -Jim > -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladimir.blando at gmail.com Wed Aug 7 07:34:42 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Wed, 7 Aug 2019 15:34:42 +0800 Subject: [kolla-ansible] Failed to complete TASK [keystone: Creating the keystone service and enpoint] Message-ID: OS: CentOS 7 on Baremetal nodes (3x controller, 1xcompute, 2 bonded 10G interfaces) Ansible: 2.8.3 kolla-ansible: 8.0.0 kolla_base_distro: "centos" kolla_install_type: "source" # I also tried "binary" openstack_release: "stein" I already tried redeploying (fresh OS reinstallation) 3 times already and kolla-ansible deploy always fails on this TASK ( http://paste.openstack.org/show/755599/) and won't continue and finish the deployment. And I think the issue is that the admin_url was not created ( http://paste.openstack.org/show/755600/) but why? Which task failed to not create the admin_url? Kolla-ansible only specified that 1 task failed. On keystone logs (http://paste.openstack.org/show/755601/) it says that the admin endpoint was created. The 3 keystone containers (keystone_fernet, keystone_ssh and keystone) are running without error on their logs though. - Vlad ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Wed Aug 7 08:23:55 2019 From: iurygregory at gmail.com (Iury Gregory) Date: Wed, 7 Aug 2019 10:23:55 +0200 Subject: Zuul log location changing In-Reply-To: <87y305onco.fsf@meyer.lemoncheese.net> References: <87y305onco.fsf@meyer.lemoncheese.net> Message-ID: Congratulations to everyone involved! I really liked the new build page, pretty cool! Em qua, 7 de ago de 2019 às 02:05, James E. Blair escreveu: > Hi, > > We've been working for some time[1] to retire our current static log > server in favor of storing Zuul job logs in Swift. We're just about > ready to do that. > > This means that not only do we get to use another great OpenStack > project, but we also can stop worrying about fscking our 14TB log > partition whenever it gets hit with a comsic ray. > > The change will happen in two phases, only the first of which should be > readily apparent. > > Phase 1: > > On Monday August 12, 2019 we will change the URL that Zuul reports back > to Gerrit. Instead of being a direct link to the log server, it will be > a link to the Zuul Build page for the job. This is part of Zuul's web > interface which has been around for a while, but isn't well known since > we haven't linked to it yet. > > The build page shows a summary of information about the build, including > output snippets for failed tasks. Next to the "Summary" tab, you'll > find the "Logs" tab. This contains an expandable index of all the log > files uploaded for the build. If they are text files, they can be > rendered in-app with line-number hyperlinks and severity filtering. > There are also direct links to the files on the log server. > > Links to preview sites (e.g., for docs builds) will show up in the > "Artifacts" section. > > We also plan to further enhance the build page with additional features. 
> > Here are some links to sample build pages so you can see what it's like: > > > https://zuul.opendev.org/t/openstack/build/a6e13a8098fc4a1fbff43d8f2c27ad29 > > https://zuul.opendev.org/t/openstack/build/75d1e8d4ffaf477db00520d7bfd77246 > > This step is necessary because our static log server implements a number > of features as WSGI middleware in Apache. We have re-implemented the > these on the Zuul build page, so there should be no loss in > functionality (in fact, we think this is an improvement). Once in > place, we can change the backend storage options without impacting the > user interface. > > Phase 2: > > Shortly afterwards (depending on how phase 1 goes), we will configure > jobs to upload logs to Swift instead of the static log server. At this > point, there should be no user-visible change, since the main interface > for interacting with logs is now the Zuul build page. However, you may > notice that log urls have changed from our static log server to one of > six different Swift instances. > > The Swift instance used to store the logs for any given build is chosen > at random from among our providers, and is yet another really cool > multi-cloud feature we get by using OpenStack. > > Thanks to our amazing providers and all of the folks who have helped > with this effort over the years[1]. > > Please let us know if you have any questions or encounter any issues, > either here, or in #openstack-infra on IRC. > > -Jim > > [1] Years. So. Many. Years. > > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladimir.blando at gmail.com Wed Aug 7 08:40:26 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Wed, 7 Aug 2019 16:40:26 +0800 Subject: [kolla-ansible] Failed to complete TASK [keystone: Creating the keystone service and enpoint] In-Reply-To: References: Message-ID: I was intrigued by the error so I tried redeploying but this time on a non-HA deploy (enable_haproxy=no) and it also error'd on that same TASK ( http://paste.openstack.org/show/755605/ ) - Vlad ᐧ On Wed, Aug 7, 2019 at 3:34 PM vladimir franciz blando < vladimir.blando at gmail.com> wrote: > OS: CentOS 7 on Baremetal nodes (3x controller, 1xcompute, 2 bonded 10G > interfaces) > Ansible: 2.8.3 > kolla-ansible: 8.0.0 > > kolla_base_distro: "centos" > kolla_install_type: "source" # I also tried "binary" > openstack_release: "stein" > > I already tried redeploying (fresh OS reinstallation) 3 times already and > kolla-ansible deploy always fails on this TASK ( > http://paste.openstack.org/show/755599/) and won't continue and finish > the deployment. And I think the issue is that the admin_url was not > created (http://paste.openstack.org/show/755600/) but why? Which task > failed to not create the admin_url? Kolla-ansible only specified that 1 > task failed. On keystone logs (http://paste.openstack.org/show/755601/) > it says that the admin endpoint was created. The 3 keystone containers > (keystone_fernet, keystone_ssh and keystone) are running without error on > their logs though. > > - Vlad > > > ᐧ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tim.Bell at cern.ch Wed Aug 7 08:43:49 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 7 Aug 2019 08:43:49 +0000 Subject: [scientific]How are orgs using openstack for research? In-Reply-To: References: Message-ID: <4788571F-B1F9-4044-AEAA-DA9B96D1656E@cern.ch> Brenton, In addition to Stig’s recommendation on the Scientific SIG, CERN does use OpenStack extensively, including for running Kafka/Hadoop/Spark (some details at https://indico.cern.ch/event/728770/contributions/3001750/attachments/1653323/2645467/HadoopatCERN_Hepix2018spring.pdf) with all relevant changes contributed back to the open source communities. Some other scientific use cases were covered at the recent CERN OpenStack day - Slides/Video at https://indico.cern.ch/event/776411/timetable/#20190527.detailed Summit talks on the CERN cloud are at https://www.openstack.org/videos/search?search=cern Tim On 7 Aug 2019, at 00:04, Stig Telfer > wrote: Hi Brenton - This is the kind of discussion that goes on within the Scientific SIG (https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meetings), particularly your final question. It would be great to have you join the meetings and raise some of these topics for discussion. The meetings alternate between EMEA and Americas timezones. Where are you based? Cheers, Stig (oneswig) On 6 Aug 2019, at 19:44, Brenton Poke > wrote: One of the answers I’m seeking is whether or not some orgs create shareable configurations for software stacks that might do things like collect data into sinks like that of what amazon is attempting to offer with aws research cloud. The limitation I see with aws is that everything is built specifically with aws systems, like using S3 to store everything. For example, if your output would be better represented as a graph, it would make more sense to use a kafka-orientDB sink that can be reused. Make it a helm chart? I’m not sure. I was told that CERN uses openstack for research, but I have no idea as to what extent or if they contribute anything back. Does anyone know how a research org is using the infrastructure right now? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Wed Aug 7 09:16:41 2019 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Wed, 7 Aug 2019 11:16:41 +0200 Subject: [kolla-ansible] Failed to complete TASK [keystone: Creating the keystone service and enpoint] In-Reply-To: References: Message-ID: Hi Vlad, I think the message: "Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Internal Server Error (HTTP 500)” is the key here Can you please raise a bug in launchpad (https://launchpad.net/kolla-ansible) and attach: kolla-ansible package version /etc/kolla/globals.yml full log from kolla-ansible -vvv deploy as a starter? 
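In the meantime, one quick sanity check (a rough sketch only -- substitute your own internal VIP address) is to ask each keystone endpoint for its version document directly:

$ curl -s http://<kolla_internal_vip>:5000/v3 | python -m json.tool
$ curl -s http://<kolla_internal_vip>:35357/v3 | python -m json.tool

If either of those returns an HTTP 500 instead of a JSON version document, the problem is in keystone (or the haproxy frontend) itself rather than in the bootstrap task.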
Best regards, Michal śr., 7 sie 2019 o 10:41 vladimir franciz blando napisał(a): > I was intrigued by the error so I tried redeploying but this time on a > non-HA deploy (enable_haproxy=no) and it also error'd on that same TASK ( > http://paste.openstack.org/show/755605/ ) > > - Vlad > > ᐧ > > On Wed, Aug 7, 2019 at 3:34 PM vladimir franciz blando < > vladimir.blando at gmail.com> wrote: > >> OS: CentOS 7 on Baremetal nodes (3x controller, 1xcompute, 2 bonded 10G >> interfaces) >> Ansible: 2.8.3 >> kolla-ansible: 8.0.0 >> >> kolla_base_distro: "centos" >> kolla_install_type: "source" # I also tried "binary" >> openstack_release: "stein" >> >> I already tried redeploying (fresh OS reinstallation) 3 times already and >> kolla-ansible deploy always fails on this TASK ( >> http://paste.openstack.org/show/755599/) and won't continue and finish >> the deployment. And I think the issue is that the admin_url was not >> created (http://paste.openstack.org/show/755600/) but why? Which task >> failed to not create the admin_url? Kolla-ansible only specified that 1 >> task failed. On keystone logs (http://paste.openstack.org/show/755601/) >> it says that the admin endpoint was created. The 3 keystone containers >> (keystone_fernet, keystone_ssh and keystone) are running without error on >> their logs though. >> >> - Vlad >> >> >> ᐧ >> > -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladimir.blando at gmail.com Wed Aug 7 09:24:40 2019 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Wed, 7 Aug 2019 17:24:40 +0800 Subject: [kolla-ansible] Failed to complete TASK [keystone: Creating the keystone service and enpoint] In-Reply-To: References: Message-ID: Sure. - Vlad ᐧ On Wed, Aug 7, 2019 at 5:17 PM Michał Nasiadka wrote: > Hi Vlad, > > I think the message: > "Could not find versioned identity endpoints when attempting to > authenticate. Please check that your auth_url is correct. Internal Server > Error (HTTP 500)” > is the key here > > Can you please raise a bug in launchpad ( > https://launchpad.net/kolla-ansible) and attach: > kolla-ansible package version > /etc/kolla/globals.yml > full log from kolla-ansible -vvv deploy > > as a starter? > > Best regards, > Michal > > śr., 7 sie 2019 o 10:41 vladimir franciz blando > napisał(a): > >> I was intrigued by the error so I tried redeploying but this time on a >> non-HA deploy (enable_haproxy=no) and it also error'd on that same TASK ( >> http://paste.openstack.org/show/755605/ ) >> >> - Vlad >> >> ᐧ >> >> On Wed, Aug 7, 2019 at 3:34 PM vladimir franciz blando < >> vladimir.blando at gmail.com> wrote: >> >>> OS: CentOS 7 on Baremetal nodes (3x controller, 1xcompute, 2 bonded 10G >>> interfaces) >>> Ansible: 2.8.3 >>> kolla-ansible: 8.0.0 >>> >>> kolla_base_distro: "centos" >>> kolla_install_type: "source" # I also tried "binary" >>> openstack_release: "stein" >>> >>> I already tried redeploying (fresh OS reinstallation) 3 times already >>> and kolla-ansible deploy always fails on this TASK ( >>> http://paste.openstack.org/show/755599/) and won't continue and finish >>> the deployment. And I think the issue is that the admin_url was not >>> created (http://paste.openstack.org/show/755600/) but why? Which task >>> failed to not create the admin_url? Kolla-ansible only specified that 1 >>> task failed. On keystone logs (http://paste.openstack.org/show/755601/) >>> it says that the admin endpoint was created. 
The 3 keystone containers >>> (keystone_fernet, keystone_ssh and keystone) are running without error on >>> their logs though. >>> >>> - Vlad >>> >> > > -- > Michał Nasiadka > mnasiadka at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Aug 7 11:01:14 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 07 Aug 2019 20:01:14 +0900 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190801085818.GD2077@fedora19.localdomain> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <20190801085818.GD2077@fedora19.localdomain> Message-ID: <16c6bbd778a.fdd737fb22757.8067378431087320764@ghanshyammann.com> ---- On Thu, 01 Aug 2019 17:58:18 +0900 Ian Wienand wrote ---- > On Fri, Jul 26, 2019 at 04:53:28PM -0700, Clark Boylan wrote: > > Given my change shows this can be so much quicker is there any > > interest in modifying devstack to be faster here? And if so what do > > we think an appropriate approach would be? > > My first concern was if anyone considered openstack-client setting > these things up as actually part of the testing. I'd say not, > comments in [1] suggest similar views. > > My second concern is that we do keep sufficient track of complexity v > speed; obviously doing things in a sequential manner via a script is > pretty simple to follow and as we start putting things into scripts we > make it harder to debug when a monoscript dies and you have to start > pulling apart where it was. With just a little json fiddling we can > currently pull good stats from logstash ([2]) so I think as we go it > would be good to make sure we account for the time using appropriate > wrappers, etc. I agree with this concern about maintainability and debugging with scripts. Nowadays very few people have good knowledge of the devstack code, and debugging failures on the job side is already hard for most developers. IMO maintainability and ease of debugging need to be the first priority. If we wanted to convert the OSC usage to something faster, the Tempest service clients come to my mind. They make very straightforward calls directly to the API, but a token is requested for each API call. That is something that would need a PoC, especially regarding the speed improvement.
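Just to illustrate the difference (a rough sketch, assuming OS_AUTH_URL points at the keystone v3 root and the usual OS_* variables are set): the token can be fetched once and each resource then created with a bare HTTP call, instead of paying one client start-up plus one fresh token per resource:

$ TOKEN=$(openstack token issue -f value -c id)
$ curl -s -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
      -d '{"project": {"name": "demo", "domain_id": "default"}}' \
      "$OS_AUTH_URL/projects"

That is essentially the pattern the Tempest service clients follow.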
> > Then the third concern is not to break anything for plugins -- > devstack has a very very loose API which basically relies on plugin > authors using a combination of good taste and copying other code to > decide what's internal or not. > > Which made me start thinking I wonder if we look at this closely, even > without replacing things we might make inroads? > > For example [3]; it seems like SERVICE_DOMAIN_NAME is never not > default, so the get_or_create_domain call is always just overhead (the > result is never used). > > Then it seems that in the gate, basically all of the "get_or_create" > calls will really just be "create" calls? Because we're always > starting fresh. So we could cut out about half of the calls there > pre-checking if we know we're under zuul (proof-of-concept [4]). > > Then we have blocks like: > > get_or_add_user_project_role $member_role $demo_user $demo_project > get_or_add_user_project_role $admin_role $admin_user $demo_project > get_or_add_user_project_role $another_role $demo_user $demo_project > get_or_add_user_project_role $member_role $demo_user $invis_project > > If we wrapped that in something like > > start_osc_session > ... > end_osc_session > > which sets a variable that means instead of calling directly, those > functions write their arguments to a tmp file. Then at the end call, > end_osc_session does > > $ osc "$(< tmpfile)" > > and uses the inbuilt batching? If that had half the calls by skipping > the "get_or" bit, and used common authentication from batching, would > that help? > > And then I don't know if all the projects and groups are required for > every devstack run? Maybe someone skilled in the art could do a bit > of an audit and we could cut more of that out too? Yeah, improving such unused or not-required calls via an audit is a good call. For example, in most places devstack needs just the resource id, name, or a few fields of a created resource, so a get call that returns the complete set of resource fields might not be needed, and for async calls we can make an exception to fetch the resource (e.g. 'addresses' on a server). -gmann > > So I guess my point is that maybe we could tweak what we have a bit to > make some immediate wins, before anyone has to rewrite too much? > > -i > > [1] https://review.opendev.org/673018 > [2] https://ethercalc.openstack.org/rzuhevxz7793 > [3] https://review.opendev.org/673941 > [4] https://review.opendev.org/673936 > > From nate.johnston at redhat.com Wed Aug 7 13:13:25 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 7 Aug 2019 08:13:25 -0500 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Big +1 from me! Nate > On Aug 4, 2019, at 1:52 PM, Miguel Lavalle wrote: > > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the Neutron core team. Rodolfo has been an active contributor to Neutron since the Mitaka cycle. He has been a driving force over these years in the implementation and evolution of Neutron's QoS feature, currently leading the sub-team dedicated to it. Recently he has been working on improving the interaction with Nova during the port binding process, has driven the adoption of Pyroute2 and has become very active in fixing all kinds of bugs. The quality and number of his code reviews during the Train cycle are comparable with the leading members of the core team: https://www.stackalytics.com/?release=train&module=neutron-group. In my opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Aug 7 13:33:48 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 7 Aug 2019 08:33:48 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> Message-ID: <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> On 8/6/19 11:34 AM, Ben Nemec wrote: > > > > On 8/6/19 10:49 AM, Clark Boylan wrote: >> On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: >>> Just a reminder that there is also >>> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html >>> >>> which was intended to address this same issue.
>>> >>> I toyed around with it a bit for TripleO installs back then and it did >>> seem to speed things up, but at the time there was a bug in our client >>> plugin where it was triggering a prompt for input that was problematic >>> with the server running in the background. I never really got back to it >>> once that was fixed. :-/ >> >> I'm not tied to any particular implementation. Mostly I wanted to show >> that we can take this ~5 minute portion of devstack and turn it into a >> 15 second portion of devstack by improving our use of the service APIs >> (and possibly even further if we apply it to all of the api >> interaction). Any idea how difficult it would be to get your client as >> a service stuff running in devstack again? > > I wish I could take credit, but this is actually Dan Berrange's work. :-) > >> >> I do not think we should make a one off change like I've done in my >> POC. That will just end up being harder to understand and debug in the >> future since it will be different than all of the other API >> interaction. I like the idea of a manifest or feeding a longer lived >> process api update commands as we can then avoid requesting new tokens >> as well as pkg_resource startup time. Such a system could be used by >> all of devstack as well (avoiding the "this bit is special" problem). >> >> Is there any interest from the QA team in committing to an approach >> and working to do a conversion? I don't want to commit any more time >> to this myself unless there is strong interest in getting changes >> merged (as I expect it will be a slow process weeding out places where >> we've made bad assumptions particularly around plugins). >> >> One of the things I found was that using names with osc results in >> name to id lookups as well. We can avoid these entirely if we remember >> name to id mappings instead (which my POC does). Any idea if your osc >> as a service tool does or can do that? Probably have to be more >> careful for scoping things in a tool like that as it may be reused by >> people with name collisions across projects/users/groups/domains. > > I don't believe this would handle name to id mapping. It's a very thin > wrapper around the regular client code that just makes it persistent so > we don't pay the startup costs every call. On the plus side that means > it basically works like the vanilla client, on the minus side that means > it may not provide as much improvement as a more targeted solution. > > IIRC it's pretty easy to use, so I can try it out again and make sure it > still works and still provides a performance benefit. It still works and it still helps. Using the osc service cut about 3 minutes off my 21 minute devstack run. Subjectively I would say that most of the time was being spent cloning and installing services and their deps. I guess the downside is that working around the OSC slowness in CI will reduce developer motivation to fix the problem, which affects all users too. Then again, this has been a problem for years and no one has fixed it, so apparently that isn't a big enough lever to get things moving anyway. :-/ > >> >>> >>> On 7/26/19 6:53 PM, Clark Boylan wrote: >>>> Today I have been digging into devstack runtime costs to help Donny >>>> Davis understand why tempest jobs sometimes timeout on the >>>> FortNebula cloud. One thing I discovered was that the keystone user, >>>> group, project, role, and domain setup [0] can take many minutes >>>> [1][2] (in the examples here almost 5). 
>>>> >>>> I've rewritten create_keystone_accounts to be a python tool [3] and >>>> get the runtime for that subset of setup from ~100s to ~9s [4].  I >>>> imagine that if we applied this to the other create_X_accounts >>>> functions we would see similar results. >>>> >>>> I think this is so much faster because we avoid repeated costs in >>>> openstack client including: python process startup, pkg_resource >>>> disk scanning to find entrypoints, and needing to convert names to >>>> IDs via the API every time osc is run. Given my change shows this >>>> can be so much quicker is there any interest in modifying devstack >>>> to be faster here? And if so what do we think an appropriate >>>> approach would be? >>>> >>>> [0] >>>> https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 >>>> >>>> [1] >>>> http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 >>>> >>>> [2] >>>> http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 >>>> >>>> [3] https://review.opendev.org/#/c/673108/ >>>> [4] >>>> http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 >>>> >>>> >>>> Note the jobs compared above all ran on rax-dfw. >>>> >>>> Clark >>>> >>> >>> >> > From mnaser at vexxhost.com Wed Aug 7 13:41:01 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 7 Aug 2019 09:41:01 -0400 Subject: Agenda for TC Meeting 8 August 2019 at 1400 UTC Message-ID: Hi everyone, Here’s the agenda for our monthly TC meeting. It will happen tomorrow (Thursday the 8th) at 1400 UTC in #openstack-tc and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee * Follow up on past action items ** fungi to add himself as TC liaison for Image Encryption popup team ** fungi to draft a resolution on proper retirement procedures * Active initiatives ** Python 3: mnaser to sync up with swift team on python3 migration and mugsie to sync with dhellmann or release-team to find the code for the proposal bot ** Forum follow-up: ttx to organise Milestone 2 forum meeting with tc-members (done) ** Make goal selection a two-step process (needs reviews at https://review.opendev.org/#/c/667932/) * Discussion ** Attendance for leadership meeting during Shanghai Summit on 3 November ** Reviving Performance WG / Large deployment team into a Large scale SIG (ttx) Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From fungi at yuggoth.org Wed Aug 7 14:22:11 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Aug 2019 14:22:11 +0000 Subject: Agenda for TC Meeting 8 August 2019 at 1400 UTC In-Reply-To: References: Message-ID: <20190807142211.busjg5ike6q7pgkd@yuggoth.org> On 2019-08-07 09:41:01 -0400 (-0400), Mohammed Naser wrote: [...] > ** fungi to add himself as TC liaison for Image Encryption popup team Done: https://governance.openstack.org/tc/reference/popup-teams.html#image-encryption > ** fungi to draft a resolution on proper retirement procedures [...] Latest revision has been under review since 2019-07-22 but is still a few votes shy of quorum. > find the code for the proposal bot [...] I didn't know anyone had lost it? 
https://opendev.org/openstack/project-config/src/branch/master/playbooks/proposal/propose_update.sh -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Wed Aug 7 14:30:00 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 07 Aug 2019 07:30:00 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Rico Lin's message of "Wed, 7 Aug 2019 14:46:35 +0800") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> Message-ID: <871rxxm4k7.fsf@meyer.lemoncheese.net> Rico Lin writes: > On Wed, Aug 7, 2019 at 9:41 AM James E. Blair wrote: > >> I had previously added an entry to the suggestions wiki page, but I did >> not see it in this email: >> >> * University >> > https://en.wikipedia.org/wiki/List_of_universities_and_colleges_in_Shanghai >> (Shanghai is famous for its universities) >> >> To pick one at random, the "University of Shanghai for Science and >> Technology" is a place in Shanghai; I think that meets the requirement >> for "physical or human geography". >> >> It's a point of pride that Shanghai has so many renowned universities, >> so I think it's a good choice and one well worth considering. >> > Just added it in https://wiki.openstack.org/wiki/Release_Naming/U_Proposals > Will make sure TCs evaluate on this one when evaluating names that do not > meet the criteria > Thanks for the idea Sorry if I wasn't clear, I had already added it to the wiki page more than a week ago -- you can still see my entry there at the bottom of the list of names that do meet the criteria. Here's the diff: https://wiki.openstack.org/w/index.php?title=Release_Naming%2FU_Proposals&type=revision&diff=171231&oldid=171132 Also, I do think this meets the criteria, since there is a place in Shanghai with "University" in the name. This is similar to "Pike" which is short for the "Massachusetts Turnpike", which was deemed to meet the criteria for the P naming poll. Of course, as the coordinator it's up to you to determine whether it meets the criteria, but I believe it does, and hope you agree. Thanks, Jim From smooney at redhat.com Wed Aug 7 14:37:42 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 07 Aug 2019 15:37:42 +0100 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> Message-ID: On Wed, 2019-08-07 at 08:33 -0500, Ben Nemec wrote: > > On 8/6/19 11:34 AM, Ben Nemec wrote: > > > > > > On 8/6/19 10:49 AM, Clark Boylan wrote: > > > On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: > > > > Just a reminder that there is also > > > > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > > > > > > > > which was intended to address this same issue. > > > > > > > > I toyed around with it a bit for TripleO installs back then and it did > > > > seem to speed things up, but at the time there was a bug in our client > > > > plugin where it was triggering a prompt for input that was problematic > > > > with the server running in the background. I never really got back to it > > > > once that was fixed. :-/ > > > > > > I'm not tied to any particular implementation. 
Mostly I wanted to show > > > that we can take this ~5 minute portion of devstack and turn it into a > > > 15 second portion of devstack by improving our use of the service APIs > > > (and possibly even further if we apply it to all of the api > > > interaction). Any idea how difficult it would be to get your client as > > > a service stuff running in devstack again? > > > > I wish I could take credit, but this is actually Dan Berrange's work. :-) > > > > > > I do not think we should make a one off change like I've done in my > > > POC. That will just end up being harder to understand and debug in the > > > future since it will be different than all of the other API > > > interaction. I like the idea of a manifest or feeding a longer lived > > > process api update commands as we can then avoid requesting new tokens > > > as well as pkg_resource startup time. Such a system could be used by > > > all of devstack as well (avoiding the "this bit is special" problem). > > > > > > Is there any interest from the QA team in committing to an approach > > > and working to do a conversion? I don't want to commit any more time > > > to this myself unless there is strong interest in getting changes > > > merged (as I expect it will be a slow process weeding out places where > > > we've made bad assumptions particularly around plugins). > > > > > > One of the things I found was that using names with osc results in > > > name to id lookups as well. We can avoid these entirely if we remember > > > name to id mappings instead (which my POC does). Any idea if your osc > > > as a service tool does or can do that? Probably have to be more > > > careful for scoping things in a tool like that as it may be reused by > > > people with name collisions across projects/users/groups/domains. > > > > I don't believe this would handle name to id mapping. It's a very thin > > wrapper around the regular client code that just makes it persistent so > > we don't pay the startup costs every call. On the plus side that means > > it basically works like the vanilla client, on the minus side that means > > it may not provide as much improvement as a more targeted solution. > > > > IIRC it's pretty easy to use, so I can try it out again and make sure it > > still works and still provides a performance benefit. > > It still works and it still helps. Using the osc service cut about 3 > minutes off my 21 minute devstack run. Subjectively I would say that > most of the time was being spent cloning and installing services and > their deps. > > I guess the downside is that working around the OSC slowness in CI will > reduce developer motivation to fix the problem, which affects all users > too. Then again, this has been a problem for years and no one has fixed > it, so apparently that isn't a big enough lever to get things moving > anyway. :-/ Using osc directly, I don't think the slowness is really perceptible from a human standpoint, but it adds up in a CI run. There are larger problems to kill with gate slowness than fixing osc will solve, but every little helps. I do agree however that the gate is not a big enough motivator for people to fix osc slowness: we can wait hours in some cases for jobs to start, so 3 minutes is not really a concern from a latency perspective. But if we saved 3 minutes on every run, that might in aggregate reduce the latency problems we have.
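For anyone curious where the per-invocation time actually goes, a quick diagnostic sketch (the second command needs Python >= 3.7 for -X importtime):

$ time openstack --version
$ python3 -X importtime -c "import openstackclient.shell" 2>&1 | tail -n 5

The first times a no-op command end to end; the second shows which imports dominate the start-up cost.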
> > > > > > > > > > > > > > On 7/26/19 6:53 PM, Clark Boylan wrote: > > > > > Today I have been digging into devstack runtime costs to help Donny > > > > > Davis understand why tempest jobs sometimes timeout on the > > > > > FortNebula cloud. One thing I discovered was that the keystone user, > > > > > group, project, role, and domain setup [0] can take many minutes > > > > > [1][2] (in the examples here almost 5). > > > > > > > > > > I've rewritten create_keystone_accounts to be a python tool [3] and > > > > > get the runtime for that subset of setup from ~100s to ~9s [4]. I > > > > > imagine that if we applied this to the other create_X_accounts > > > > > functions we would see similar results. > > > > > > > > > > I think this is so much faster because we avoid repeated costs in > > > > > openstack client including: python process startup, pkg_resource > > > > > disk scanning to find entrypoints, and needing to convert names to > > > > > IDs via the API every time osc is run. Given my change shows this > > > > > can be so much quicker is there any interest in modifying devstack > > > > > to be faster here? And if so what do we think an appropriate > > > > > approach would be? > > > > > > > > > > [0] > > > > > https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 > > > > > > > > > > > > > > > [1] > > > > > http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 > > > > > > > > > > > > > > > [2] > > > > > http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 > > > > > > > > > > > > > > > [3] https://review.opendev.org/#/c/673108/ > > > > > [4] > > > > > http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 > > > > > > > > > > > > > > > > > > > > Note the jobs compared above all ran on rax-dfw. > > > > > > > > > > Clark > > > > > > > > > > > > > > > From rico.lin.guanyu at gmail.com Wed Aug 7 14:58:53 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 7 Aug 2019 22:58:53 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <871rxxm4k7.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> Message-ID: On Wed, Aug 7, 2019 at 10:30 PM James E. Blair wrote: > > Sorry if I wasn't clear, I had already added it to the wiki page more > than a week ago -- you can still see my entry there at the bottom of the > list of names that do meet the criteria. Here's the diff: > > https://wiki.openstack.org/w/index.php?title=Release_Naming%2FU_Proposals&type=revision&diff=171231&oldid=171132 > > Also, I do think this meets the criteria, since there is a place in > Shanghai with "University" in the name. This is similar to "Pike" which > is short for the "Massachusetts Turnpike", which was deemed to meet the > criteria for the P naming poll. > As we discussed in IRC:#openstack-tc, change the reference from general Universities to specific University will make it meet the criteria "The name must refer to the physical or human geography" Added it back to the 'meet criteria' list and update it with reference to specific university "University of Shanghai for Science and Technology". feel free to correct me, if I misunderstand the criteria rule. 
:) > Of course, as the coordinator it's up to you to determine whether it > meets the criteria, but I believe it does, and hope you agree. > > Thanks, > > Jim -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Aug 7 15:10:20 2019 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 7 Aug 2019 17:10:20 +0200 Subject: Slow instance launch times due to RabbitMQ In-Reply-To: <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com> References: <0723e410-6029-fcf7-2bd0-8c38e4586cdd@civo.com> <1c6078ab-bc1d-7a84-cc87-1b470235ccc0@civo.com> <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com> Message-ID: On Tue, Aug 6, 2019 at 5:14 PM Ben Nemec wrote: > Another thing to check if you're having seemingly inexplicable messaging > issues is that there isn't a notification queue filling up somewhere. If > notifications are enabled somewhere but nothing is consuming them the > size of the queue will eventually grind rabbit to a halt. > > I used to check queue sizes through the rabbit web ui, so I have to > admit I'm not sure how to do it through the cli. You can use the following command to monitor your queues and observe their size and growth: ``` watch -c "rabbitmqctl list_queues name messages_unacknowledged" ``` Or also something like that: ``` rabbitmqctl list_queues messages consumers name message_bytes messages_unacknowledged messages_ready head_message_timestamp consumer_utilisation memory state | grep reply ```
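If a notification queue does turn out to be the culprit and nothing will ever consume it, draining it usually gets rabbit moving again. A rough sketch (the queue names here are the oslo.messaging defaults; check your list_queues output for the real ones): ``` rabbitmqctl purge_queue notifications.info rabbitmqctl purge_queue notifications.error ```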
> > > > rabbitmqctl list_queues > > Listing queues > > > > (Doesn't appear to be any queues ) > > > > rabbitmqctl cluster_status > > > > Cluster status of node > > 'rabbit at management-1-rabbit-mq-container-b4d7791f' > > [{nodes,[{disc,['rabbit at management-1-rabbit-mq-container-b4d7791f', > > 'rabbit at management-2-rabbit-mq-container-b455e77d', > > 'rabbit at management-3-rabbit-mq-container-1d6ae377 > ']}]}, > > {running_nodes,['rabbit at management-3-rabbit-mq-container-1d6ae377 > ', > > 'rabbit at management-2-rabbit-mq-container-b455e77d > ', > > 'rabbit at management-1-rabbit-mq-container-b4d7791f > ']}, > > {cluster_name,<<"openstack">>}, > > {partitions,[]}, > > {alarms,[{'rabbit at management-3-rabbit-mq-container-1d6ae377',[]}, > > {'rabbit at management-2-rabbit-mq-container-b455e77d',[]}, > > {'rabbit at management-1-rabbit-mq-container-b4d7791f > ',[]}]}] > > > > Regards, > > > > On 31/07/2019 11:49, Laurent Dumont wrote: > >> Could you forward the output of the following commands on a > >> controller node? : > >> > >> rabbitmqctl cluster_status > >> rabbitmqctl list_queues > >> > >> You won't necessarily see a high load on a Rabbit cluster that is > >> in a bad state. > >> > >> On Wed, Jul 31, 2019 at 5:19 AM Grant Morley >> > wrote: > >> > >> Hi all, > >> > >> We are randomly seeing slow instance launch / deletion times > >> and it appears to be because of RabbitMQ. We are seeing a lot > >> of these messages in the logs for Nova and Neutron: > >> > >> ERROR oslo.messaging._drivers.impl_rabbit [-] > >> [f4ab3ca0-b837-4962-95ef-dfd7d60686b6] AMQP server on > >> 10.6.2.212:5671 is unreachable: Too > >> many heartbeats missed. Trying again in 1 seconds. Client > >> port: 37098: ConnectionForced: Too many heartbeats missed > >> > >> The RabbitMQ cluster isn't under high load and I am not seeing > >> any packets drop over the network when I do some tracing. > >> > >> We are only running 15 compute nodes currently and have >1000 > >> instances so it isn't a large deployment. > >> > >> Are there any good configuration tweaks for RabbitMQ running > >> on OpenStack Queens? > >> > >> Many Thanks, > >> > >> -- > >> > >> Grant Morley > >> Cloud Lead, Civo Ltd > >> www.civo.com | Signup for an account! > >> > >> > > -- > > > > Grant Morley > > Cloud Lead, Civo Ltd > > www.civo.com | Signup for an account! > > > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Wed Aug 7 15:11:20 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 7 Aug 2019 10:11:20 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> Message-ID: <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> On 8/7/19 9:37 AM, Sean Mooney wrote: > On Wed, 2019-08-07 at 08:33 -0500, Ben Nemec wrote: >> >> On 8/6/19 11:34 AM, Ben Nemec wrote: >>> >>> >>> On 8/6/19 10:49 AM, Clark Boylan wrote: >>>> On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: >>>>> Just a reminder that there is also >>>>> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html >>>>> >>>>> which was intended to address this same issue. >>>>> >>>>> I toyed around with it a bit for TripleO installs back then and it did >>>>> seem to speed things up, but at the time there was a bug in our client >>>>> plugin where it was triggering a prompt for input that was problematic >>>>> with the server running in the background. I never really got back to it >>>>> once that was fixed. :-/ >>>> >>>> I'm not tied to any particular implementation. Mostly I wanted to show >>>> that we can take this ~5 minute portion of devstack and turn it into a >>>> 15 second portion of devstack by improving our use of the service APIs >>>> (and possibly even further if we apply it to all of the api >>>> interaction). Any idea how difficult it would be to get your client as >>>> a service stuff running in devstack again? >>> >>> I wish I could take credit, but this is actually Dan Berrange's work. :-) >>> >>>> >>>> I do not think we should make a one off change like I've done in my >>>> POC. That will just end up being harder to understand and debug in the >>>> future since it will be different than all of the other API >>>> interaction. I like the idea of a manifest or feeding a longer lived >>>> process api update commands as we can then avoid requesting new tokens >>>> as well as pkg_resource startup time. Such a system could be used by >>>> all of devstack as well (avoiding the "this bit is special" problem). >>>> >>>> Is there any interest from the QA team in committing to an approach >>>> and working to do a conversion? I don't want to commit any more time >>>> to this myself unless there is strong interest in getting changes >>>> merged (as I expect it will be a slow process weeding out places where >>>> we've made bad assumptions particularly around plugins). >>>> >>>> One of the things I found was that using names with osc results in >>>> name to id lookups as well. We can avoid these entirely if we remember >>>> name to id mappings instead (which my POC does). Any idea if your osc >>>> as a service tool does or can do that? Probably have to be more >>>> careful for scoping things in a tool like that as it may be reused by >>>> people with name collisions across projects/users/groups/domains. >>> >>> I don't believe this would handle name to id mapping. It's a very thin >>> wrapper around the regular client code that just makes it persistent so >>> we don't pay the startup costs every call. On the plus side that means >>> it basically works like the vanilla client, on the minus side that means >>> it may not provide as much improvement as a more targeted solution. 
>>> >>> IIRC it's pretty easy to use, so I can try it out again and make sure it >>> still works and still provides a performance benefit. >> >> It still works and it still helps. Using the osc service cut about 3 >> minutes off my 21 minute devstack run. Subjectively I would say that >> most of the time was being spent cloning and installing services and >> their deps. >> >> I guess the downside is that working around the OSC slowness in CI will >> reduce developer motivation to fix the problem, which affects all users >> too. Then again, this has been a problem for years and no one has fixed >> it, so apparently that isn't a big enough lever to get things moving >> anyway. :-/ > using osc diretly i dont think the slowness is really perceptable from a human > stand point but it adds up in a ci run. there are large problems to kill with gate > slowness then fixing osc will solve be every little helps. i do agree however > that the gage is not a big enough motivater for people to fix osc slowness as > we can wait hours in some cases for jobs to start so 3 minutes is not really a consern > form a latency perspective but if we saved 3 mins on every run that might > in aggreaget reduce the latency problems we have. I find the slowness very noticeable in interactive use. It adds something like 2 seconds to a basic call like image list that returns almost instantly in the OSC interactive shell where there is no startup overhead. From my performance days, any latency over 1 second was considered unacceptable for an interactive call. The interactive shell does help with that if I'm doing a bunch of calls in a row though. That said, you're right that 3 minutes multiplied by the number of jobs we run per day is significant. Picking 1000 as a round number (and I'm pretty sure we run a _lot_ more than that per day), a 3 minute decrease in runtime per job would save about 50 hours of CI time in total. Small things add up at scale. :-) >> >>> >>>> >>>>> >>>>> On 7/26/19 6:53 PM, Clark Boylan wrote: >>>>>> Today I have been digging into devstack runtime costs to help Donny >>>>>> Davis understand why tempest jobs sometimes timeout on the >>>>>> FortNebula cloud. One thing I discovered was that the keystone user, >>>>>> group, project, role, and domain setup [0] can take many minutes >>>>>> [1][2] (in the examples here almost 5). >>>>>> >>>>>> I've rewritten create_keystone_accounts to be a python tool [3] and >>>>>> get the runtime for that subset of setup from ~100s to ~9s [4]. I >>>>>> imagine that if we applied this to the other create_X_accounts >>>>>> functions we would see similar results. >>>>>> >>>>>> I think this is so much faster because we avoid repeated costs in >>>>>> openstack client including: python process startup, pkg_resource >>>>>> disk scanning to find entrypoints, and needing to convert names to >>>>>> IDs via the API every time osc is run. Given my change shows this >>>>>> can be so much quicker is there any interest in modifying devstack >>>>>> to be faster here? And if so what do we think an appropriate >>>>>> approach would be? 
>>>>>> >>>>>> [0] >>>>>> > https://opendev.org/openstack/devstack/src/commit/6aeaceb0c4ef078d028fb6605cac2a37444097d8/stack.sh#L1146-L1161 >>>>>> >>>>>> >>>>>> [1] >>>>>> http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_31_04_488228 >>>>>> >>>>>> >>>>>> [2] >>>>>> http://logs.openstack.org/05/672805/4/check/tempest-full/14f3211/job-output.txt.gz#_2019-07-26_12_35_53_445059 >>>>>> >>>>>> >>>>>> [3] https://review.opendev.org/#/c/673108/ >>>>>> [4] >>>>>> > http://logs.openstack.org/08/673108/6/check/devstack-xenial/a4107d0/job-output.txt.gz#_2019-07-26_23_18_37_211013 >>>>>> >>>>>> >>>>>> >>>>>> Note the jobs compared above all ran on rax-dfw. >>>>>> >>>>>> Clark >>>>>> >>>>> >>>>> >> >> > > From smooney at redhat.com Wed Aug 7 17:16:07 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 07 Aug 2019 18:16:07 +0100 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> Message-ID: On Wed, 2019-08-07 at 10:11 -0500, Ben Nemec wrote: > > On 8/7/19 9:37 AM, Sean Mooney wrote: > > On Wed, 2019-08-07 at 08:33 -0500, Ben Nemec wrote: > > > > > > On 8/6/19 11:34 AM, Ben Nemec wrote: > > > > > > > > > > > > On 8/6/19 10:49 AM, Clark Boylan wrote: > > > > > On Tue, Aug 6, 2019, at 8:26 AM, Ben Nemec wrote: > > > > > > Just a reminder that there is also > > > > > > http://lists.openstack.org/pipermail/openstack-dev/2016-April/092546.html > > > > > > > > > > > > which was intended to address this same issue. > > > > > > > > > > > > I toyed around with it a bit for TripleO installs back then and it did > > > > > > seem to speed things up, but at the time there was a bug in our client > > > > > > plugin where it was triggering a prompt for input that was problematic > > > > > > with the server running in the background. I never really got back to it > > > > > > once that was fixed. :-/ > > > > > > > > > > I'm not tied to any particular implementation. Mostly I wanted to show > > > > > that we can take this ~5 minute portion of devstack and turn it into a > > > > > 15 second portion of devstack by improving our use of the service APIs > > > > > (and possibly even further if we apply it to all of the api > > > > > interaction). Any idea how difficult it would be to get your client as > > > > > a service stuff running in devstack again? > > > > > > > > I wish I could take credit, but this is actually Dan Berrange's work. :-) > > > > > > > > > > > > > > I do not think we should make a one off change like I've done in my > > > > > POC. That will just end up being harder to understand and debug in the > > > > > future since it will be different than all of the other API > > > > > interaction. I like the idea of a manifest or feeding a longer lived > > > > > process api update commands as we can then avoid requesting new tokens > > > > > as well as pkg_resource startup time. Such a system could be used by > > > > > all of devstack as well (avoiding the "this bit is special" problem). > > > > > > > > > > Is there any interest from the QA team in committing to an approach > > > > > and working to do a conversion? 
I don't want to commit any more time
> > > > > to this myself unless there is strong interest in getting changes
> > > > > merged (as I expect it will be a slow process weeding out places where
> > > > > we've made bad assumptions particularly around plugins).
> > > > >
> > > > > One of the things I found was that using names with osc results in
> > > > > name to id lookups as well. We can avoid these entirely if we remember
> > > > > name to id mappings instead (which my POC does). Any idea if your osc
> > > > > as a service tool does or can do that? Probably have to be more
> > > > > careful for scoping things in a tool like that as it may be reused by
> > > > > people with name collisions across projects/users/groups/domains.
> > > >
> > > > I don't believe this would handle name to id mapping. It's a very thin
> > > > wrapper around the regular client code that just makes it persistent so
> > > > we don't pay the startup costs every call. On the plus side that means
> > > > it basically works like the vanilla client, on the minus side that means
> > > > it may not provide as much improvement as a more targeted solution.
> > > >
> > > > IIRC it's pretty easy to use, so I can try it out again and make sure it
> > > > still works and still provides a performance benefit.
> > >
> > > It still works and it still helps. Using the osc service cut about 3
> > > minutes off my 21 minute devstack run. Subjectively I would say that
> > > most of the time was being spent cloning and installing services and
> > > their deps.
> > >
> > > I guess the downside is that working around the OSC slowness in CI will
> > > reduce developer motivation to fix the problem, which affects all users
> > > too. Then again, this has been a problem for years and no one has fixed
> > > it, so apparently that isn't a big enough lever to get things moving
> > > anyway. :-/
> >
> > using osc diretly i dont think the slowness is really perceptable from a human
> > stand point but it adds up in a ci run. there are large problems to kill with gate
> > slowness then fixing osc will solve be every little helps. i do agree however
> > that the gage is not a big enough motivater for people to fix osc slowness as
> > we can wait hours in some cases for jobs to start so 3 minutes is not really a consern
> > form a latency perspective but if we saved 3 mins on every run that might
> > in aggreaget reduce the latency problems we have.
>
> I find the slowness very noticeable in interactive use. It adds
> something like 2 seconds to a basic call like image list that returns
> almost instantly in the OSC interactive shell where there is no startup
> overhead. From my performance days, any latency over 1 second was
> considered unacceptable for an interactive call. The interactive shell
> does help with that if I'm doing a bunch of calls in a row though.

Well, that was kind of my point: when we write scripts we invoke it over
and over again. If I need to use osc to do lots of commands for some
reason, I generally enter the interactive mode. The interactive mode
already masks the pain, so any time it has bothered me in the past I
have just ended up using it instead.

It's been a long time since I looked at this, but I think there were two
reasons it is slow on startup: one is the need to get a token for each
request, and the other was related to the way we scan for plugins. I
honestly don't know if either has improved, but the interactive shell
eliminates both as issues.
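For illustration, this is roughly what avoiding both costs looks like in
a script: authenticate once, keep the process alive, and cache
name-to-id mappings instead of resolving names on every call. A minimal
sketch with openstacksdk (the cloud entry and project names below are
placeholders, not anything from this thread):

    import openstack

    # Authenticate once: the token and service catalog are fetched a
    # single time for the life of the process ("devstack-admin" is a
    # placeholder clouds.yaml entry).
    conn = openstack.connect(cloud='devstack-admin')

    # Cache name -> id mappings so later calls skip the lookup round trips.
    project_ids = {p.name: p.id for p in conn.identity.projects()}

    # Every call below reuses the same process, session and token.
    for name in ('demo-a', 'demo-b', 'demo-c'):
        if name not in project_ids:
            project_ids[name] = conn.identity.create_project(name=name).id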
> > That said, you're right that 3 minutes multiplied by the number of jobs
> > we run per day is significant. Picking 1000 as a round number (and I'm
> > pretty sure we run a _lot_ more than that per day), a 3 minute decrease
> > in runtime per job would save about 50 hours of CI time in total. Small
> > things add up at scale. :-)

Yep, it definitely does.

From melwittt at gmail.com  Wed Aug 7 18:48:23 2019
From: melwittt at gmail.com (melanie witt)
Date: Wed, 7 Aug 2019 11:48:23 -0700
Subject: [nova] Hyper-V CI broken on stable branches
Message-ID: 

Dear Hyper-V CI maintainers,

We noticed upstream that the Hyper-V CI has been failing on the
stable/stein and stable/rocky (and perhaps older branches too) since
this tempest change merged to unskip test_stamp_pattern:

https://review.opendev.org/615434

Here are links to sample failed CI runs:

stable/stein: http://cloudbase-ci.com/nova/674828/1/tempest/subunit.html.gz

stable/rocky: http://cloudbase-ci.com/nova/674916/1/tempest/subunit.html.gz

It looks like all is well on the master branch (looks like
test_stamp_pattern does not run on master).

Just wanted to alert you about the failures and see if anyone is
available to fix it or add test_stamp_pattern to a skip list for Hyper-V CI.
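For reference, such a skip list is usually just a plain-text file of
test-name regexes handed to the runner, along these lines (a sketch; the
file name is arbitrary and this assumes tempest's blacklist-file
mechanism):

    # hyperv-skip-list.txt -- one regex per line
    tempest\.scenario\.test_stamp_pattern

    tempest run --blacklist-file hyperv-skip-list.txt ...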
Cheers,
-melanie

From donny at fortnebula.com  Wed Aug 7 19:45:56 2019
From: donny at fortnebula.com (Donny Davis)
Date: Wed, 7 Aug 2019 15:45:56 -0400
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: 
References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com>
 <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com>
 <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com>
 <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com>
 <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com>
Message-ID: 

Just for reference FortNebula does 73-80 jobs an hour, so that's
1700(ish) jobs a day at 3 minutes per job. That is roughly 5,100 cycle
minutes a day, or about three and a half days' worth of computing time.
If there can be a fix that saves minutes, it's surely worth it.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From melwittt at gmail.com  Wed Aug 7 19:50:56 2019
From: melwittt at gmail.com (melanie witt)
Date: Wed, 7 Aug 2019 12:50:56 -0700
Subject: [nova] Hyper-V CI broken on stable branches
In-Reply-To: 
References: 
Message-ID: <85954248-d3f4-d095-899e-d7725b98d615@gmail.com>

-hyper-v_ci at microsoft.com, +nova_hyperv_ci at cloudbasesolutions.com (used
the wrong third party CI page contact info earlier, sorry)

Dear Hyper-V CI maintainers,

We noticed upstream that the Hyper-V CI has been failing on the
stable/stein and stable/rocky (and perhaps older branches too) since
this tempest change merged to unskip test_stamp_pattern:

https://review.opendev.org/615434

Here are links to sample failed CI runs:

stable/stein: http://cloudbase-ci.com/nova/674828/1/tempest/subunit.html.gz

stable/rocky: http://cloudbase-ci.com/nova/674916/1/tempest/subunit.html.gz

It looks like all is well on the master branch (looks like
test_stamp_pattern does not run on master).
Just wanted to alert you about the failures and see if anyone is available to fix it or add test_stamp_pattern to a skip list for Hyper-V CI. Cheers, -melanie From donny at fortnebula.com Wed Aug 7 19:52:51 2019 From: donny at fortnebula.com (Donny Davis) Date: Wed, 7 Aug 2019 15:52:51 -0400 Subject: Slow instance launch times due to RabbitMQ In-Reply-To: References: <0723e410-6029-fcf7-2bd0-8c38e4586cdd@civo.com> <1c6078ab-bc1d-7a84-cc87-1b470235ccc0@civo.com> <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com> Message-ID: I am curious how your system is setup? Are you using nova with local storage? Are you using ceph? How long does it take to launch an instance when you are seeing this message? On Wed, Aug 7, 2019 at 11:12 AM Herve Beraud wrote: > > > Le mar. 6 août 2019 à 17:14, Ben Nemec a écrit : > >> Another thing to check if you're having seemingly inexplicable messaging >> issues is that there isn't a notification queue filling up somewhere. If >> notifications are enabled somewhere but nothing is consuming them the >> size of the queue will eventually grind rabbit to a halt. >> >> I used to check queue sizes through the rabbit web ui, so I have to >> admit I'm not sure how to do it through the cli. >> > > You can use the following command to monitor your queues and observe size > and growing: > > ``` > watch -c "rabbitmqctl list_queues name messages_unacknowledged" > ``` > > Or also something like that: > > ``` > rabbitmqctl list_queues messages consumers name message_bytes > messages_unacknowledged > messages_ready head_message_timestamp > consumer_utilisation memory state | grep reply > ``` > > >> >> On 7/31/19 10:48 AM, Gabriele Santomaggio wrote: >> > Hi, >> > Are you using ssl connections ? >> > >> > Can be this issue ? >> > https://bugs.launchpad.net/ubuntu/+source/oslo.messaging/+bug/1800957 >> > >> > >> > ------------------------------------------------------------------------ >> > *From:* Laurent Dumont >> > *Sent:* Wednesday, July 31, 2019 4:20 PM >> > *To:* Grant Morley >> > *Cc:* openstack-operators at lists.openstack.org >> > *Subject:* Re: Slow instance launch times due to RabbitMQ >> > That is a bit strange, list_queues should return stuff. Couple of ideas >> : >> > >> > * Are the Rabbit connection failure logs on the compute pointing to a >> > specific controller? >> > * Are there any logs within Rabbit on the controller that would point >> > to a transient issue? >> > * cluster_status is a snapshot of the cluster at the time you ran the >> > command. If the alarms have cleared, you won't see anything. >> > * If you have the RabbitMQ management plugin activated, I would >> > recommend a quick look to see the historical metrics and overall >> status. >> > >> > >> > On Wed, Jul 31, 2019 at 9:35 AM Grant Morley > > > wrote: >> > >> > Hi guys, >> > >> > We are using Ubuntu 16 and OpenStack ansible to do our setup. 
>> > >> > rabbitmqctl list_queues >> > Listing queues >> > >> > (Doesn't appear to be any queues ) >> > >> > rabbitmqctl cluster_status >> > >> > Cluster status of node >> > 'rabbit at management-1-rabbit-mq-container-b4d7791f' >> > [{nodes,[{disc,['rabbit at management-1-rabbit-mq-container-b4d7791f', >> > 'rabbit at management-2-rabbit-mq-container-b455e77d >> ', >> > 'rabbit at management-3-rabbit-mq-container-1d6ae377 >> ']}]}, >> > {running_nodes,['rabbit at management-3-rabbit-mq-container-1d6ae377 >> ', >> > 'rabbit at management-2-rabbit-mq-container-b455e77d >> ', >> > 'rabbit at management-1-rabbit-mq-container-b4d7791f >> ']}, >> > {cluster_name,<<"openstack">>}, >> > {partitions,[]}, >> > {alarms,[{'rabbit at management-3-rabbit-mq-container-1d6ae377',[]}, >> > {'rabbit at management-2-rabbit-mq-container-b455e77d',[]}, >> > {'rabbit at management-1-rabbit-mq-container-b4d7791f >> ',[]}]}] >> > >> > Regards, >> > >> > On 31/07/2019 11:49, Laurent Dumont wrote: >> >> Could you forward the output of the following commands on a >> >> controller node? : >> >> >> >> rabbitmqctl cluster_status >> >> rabbitmqctl list_queues >> >> >> >> You won't necessarily see a high load on a Rabbit cluster that is >> >> in a bad state. >> >> >> >> On Wed, Jul 31, 2019 at 5:19 AM Grant Morley > >> > wrote: >> >> >> >> Hi all, >> >> >> >> We are randomly seeing slow instance launch / deletion times >> >> and it appears to be because of RabbitMQ. We are seeing a lot >> >> of these messages in the logs for Nova and Neutron: >> >> >> >> ERROR oslo.messaging._drivers.impl_rabbit [-] >> >> [f4ab3ca0-b837-4962-95ef-dfd7d60686b6] AMQP server on >> >> 10.6.2.212:5671 is unreachable: Too >> >> many heartbeats missed. Trying again in 1 seconds. Client >> >> port: 37098: ConnectionForced: Too many heartbeats missed >> >> >> >> The RabbitMQ cluster isn't under high load and I am not seeing >> >> any packets drop over the network when I do some tracing. >> >> >> >> We are only running 15 compute nodes currently and have >1000 >> >> instances so it isn't a large deployment. >> >> >> >> Are there any good configuration tweaks for RabbitMQ running >> >> on OpenStack Queens? >> >> >> >> Many Thanks, >> >> >> >> -- >> >> >> >> Grant Morley >> >> Cloud Lead, Civo Ltd >> >> www.civo.com | Signup for an account! >> >> >> >> >> > -- >> > >> > Grant Morley >> > Cloud Lead, Civo Ltd >> > www.civo.com | Signup for an account! >> > >> > >> >> > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From jungleboyj at gmail.com  Tue Aug 6 03:38:14 2019
From: jungleboyj at gmail.com (Jay Bryant)
Date: Mon, 5 Aug 2019 22:38:14 -0500
Subject: [cinder] [3rd party ci] Deadline Has Passed for Python3 Migration
Message-ID: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com>

All,

This e-mail has multiple purposes. First, I have expanded the mail
audience to go beyond just openstack-discuss to a mailing list I have
created for all 3rd Party CI Maintainers associated with Cinder. I
apologize to those of you who are getting this as a duplicate e-mail.

For all 3rd Party CI maintainers who have already migrated your systems
to using Python3.7...Thank you! We appreciate you keeping up-to-date
with Cinder's requirements and maintaining your CI systems.

If this is the first time you are hearing of the Python3.7 requirement
please continue reading.

It has been decided by the OpenStack TC that support for Py2.7 would be
deprecated [1]. The Train development cycle is the last cycle that will
support Py2.7 and therefore all vendor drivers need to demonstrate
support for Py3.7.

It was discussed at the Train PTG that we would require all 3rd Party
CIs to be running using Python3 by the Train milestone 2: [2] We have
been communicating the importance of getting 3rd Party CI running with
py3 in meetings and e-mail for quite some time now, but it still appears
that nearly half of all vendors are not yet running with Python 3. [3]

If you are a vendor who has not yet moved to using Python 3 please take
some time to review this document [4] as it has guidance on how to get
your CI system updated. It also includes some additional details as to
why this requirement has been set and the associated background. Also,
please update the py3-ci-review etherpad with notes indicating that you
are working on adding py3 support.

I would also ask all vendors to review the etherpad I have created as it
indicates a number of other drivers that have been marked unsupported
due to CI systems not running properly. If you are not planning to
continue to support a driver adding such a note in the etherpad would be
appreciated.

Thanks!
Jay

[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008255.html
[2] https://wiki.openstack.org/wiki/CinderTrainSummitandPTGSummary#3rd_Party_CI
[3] https://etherpad.openstack.org/p/cinder-py3-ci-review
[4] https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update

From jlibosva at redhat.com  Tue Aug 6 15:42:29 2019
From: jlibosva at redhat.com (Jakub Libosvar)
Date: Tue, 6 Aug 2019 17:42:29 +0200
Subject: [neutron] OpenvSwitch firewall sctp getting dropped
In-Reply-To: <030101d54b74$c4be6800$4e3b3800$@viettel.com.vn>
References: <000001d54623$a91ae750$fb50b5f0$@viettel.com.vn>
 <030101d54b74$c4be6800$4e3b3800$@viettel.com.vn>
Message-ID: <23a26029-9324-47e1-4ffa-bf1602b95493@redhat.com>

On 05/08/2019 12:01, thuanlk at viettel.com.vn wrote:
> I have tried any version of OpenvSwitch but problem continue happened.
> Is Openvswitch firewall support sctp?

Yes, as long as you have sctp conntrack support in kernel. Can you paste
output of 'ovs-ofctl dump-flows br-int | grep +inv' on the node where
the VM using sctp is running? If the counters are not 0 it's likely that
you're missing the sctp conntrack kernel module.

Jakub

>
> Thanks and best regards !
> > --------------------------------------- > Lăng Khắc Thuận > OCS Cloud | OCS (VTTEK) > +(84)- 966463589 > > > -----Original Message----- > From: Lang Khac Thuan [mailto:thuanlk at viettel.com.vn] > Sent: Tuesday, July 30, 2019 11:22 AM > To: 'smooney at redhat.com' ; 'openstack-discuss at lists.openstack.org' > Subject: RE: [neutron] OpenvSwitch firewall sctp getting dropped > > I have tried config SCTP but nothing change! > > openstack security group rule create --ingress --remote-ip 0.0.0.0/0 --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp openstack security group rule create --egress --remote-ip 0.0.0.0/0 --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp > > Displaying 2 items > Direction Ether Type IP Protocol Port Range Remote IP Prefix Remote Security Group Actions > Egress IPv4 132 2000 - 10000 0.0.0.0/0 - > Ingress IPv4 132 2000 - 10000 0.0.0.0/0 - > > > Thanks and best regards ! > > --------------------------------------- > Lăng Khắc Thuận > OCS Cloud | OCS (VTTEK) > +(84)- 966463589 > > > -----Original Message----- > From: smooney at redhat.com [mailto:smooney at redhat.com] > Sent: Tuesday, July 30, 2019 1:27 AM > To: thuanlk at viettel.com.vn; openstack-discuss at lists.openstack.org > Subject: Re: [neutron] OpenvSwitch firewall sctp getting dropped > > On Mon, 2019-07-29 at 22:38 +0700, thuanlk at viettel.com.vn wrote: >> I have installed Openstack Queens on CentOs 7 with OvS and I recently >> used the native openvswitch firewall to implement SecusiryGroup. The >> native OvS firewall seems to work just fine with TCP/UDP traffic but >> it does not forward any SCTP traffic going to the VMs no matter how I >> change the security groups, But it run if i disable port security >> completely or use iptables_hybrid firewall driver. What do I have to >> do to allow SCTP packets to reach the VMs? > the security groups api is a whitelist model so all traffic is droped by default. > > if you want to allow sctp you would ihave to create an new security group rule with ip_protocol set to the protocol number for sctp. > > e.g. > openstack security group rule create --protocol sctp ... > > im not sure if neutron support --dst-port for sctp but you can still filter on --remote-ip or --remote-group and can specify the rule as an --ingress or --egress rule as normal. > > https://docs.openstack.org/python-openstackclient/stein/cli/command-objects/security-group-rule.html > > based on this commit https://github.com/openstack/neutron/commit/f711ad78c5c0af44318c6234957590c91592b984 > > it looks like neutron now validates the prot ranges for sctp impligying it support setting them so i gues its just a gap in the documentation. > > > >> > > From julen at larrucea.eu Wed Aug 7 13:20:04 2019 From: julen at larrucea.eu (Julen Larrucea.) Date: Wed, 7 Aug 2019 15:20:04 +0200 Subject: [training-labs] what is access domain In-Reply-To: References: Message-ID: Hi Oscar, Sorry for the delay. Here is the file you need: https://github.com/openstack/training-labs/blob/master/labs/osbash/config/credentials So: User: admin Project: admin Password: admin_user_secret Best regards Julen On Fri, Aug 2, 2019 at 11:45 PM Oscar Omar Posada Sanchez < oscar.posada.sanchez at gmail.com> wrote: > Hi Team, > I am starting to study openStack I am following this reference > https://github.com/openstack/training-labs but I can not find the access > domain in the first login already installed the laboratory. Could you tell > me, thanks. > > -- > Por su atención y tiempo, gracias. Que pase feliz día. 
> -------
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org  Wed Aug 7 23:34:25 2019
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 8 Aug 2019 01:34:25 +0200
Subject: Slow instance launch times due to RabbitMQ
In-Reply-To: <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com>
References: <0723e410-6029-fcf7-2bd0-8c38e4586cdd@civo.com>
 <1c6078ab-bc1d-7a84-cc87-1b470235ccc0@civo.com>
 <8b9d836f-e4ee-b268-8410-a828b4186b7b@nemebean.com>
Message-ID: 

On 8/6/19 5:10 PM, Ben Nemec wrote:
> Another thing to check if you're having seemingly inexplicable messaging
> issues is that there isn't a notification queue filling up somewhere. If
> notifications are enabled somewhere but nothing is consuming them the
> size of the queue will eventually grind rabbit to a halt.
>
> I used to check queue sizes through the rabbit web ui, so I have to
> admit I'm not sure how to do it through the cli.

On the CLI. Purging Rabbit notification queues:

rabbitmqctl purge_queue versioned_notifications.info
rabbitmqctl purge_queue notifications.info

Getting the total number of messages in Rabbit:

NUM_MESSAGE=$(curl -k -uuser:pass https://192.168.0.1:15671/api/overview 2>/dev/null | jq '.["queue_totals"]["messages"]')

The same way, you can get a json output of all queues using this URL:
https://192.168.0.1:15671/api/queues

and playing with jq, you can do many things like:

jq '.[] | select(.name == "versioned_notifications.info") | .messages'
jq '.[] | select(.name == "notifications.info") | .messages'
jq '.[] | select(.name == "versioned_notifications.error") | .messages'
jq '.[] | select(.name == "notifications.error") | .messages'

If you sum the output of all of the above 4 queues, you get the total
number of notification messages.

What I did is outputting to graphite like this:

echo "`hostname`.rabbitmq.notifications ${NUM_TOTAL_NOTIF} `date +%s`" \
    | nc -w 2 graphite-node-hostname 2003

for the amount of notifications plus the other types of messages. Doing
this every minute makes it possible to graph the number of messages in
Grafana, which gives me a nice overview of what's going on with
notifications and the rest.

I hope this will help someone,
Cheers,

Thomas Goirand (zigo)

From dangtrinhnt at gmail.com  Thu Aug 8 01:00:27 2019
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Thu, 8 Aug 2019 10:00:27 +0900
Subject: [OpenStack Infra] Current documentations of OpenStack CI/CD
 architecture
In-Reply-To: <87o911n2zz.fsf@meyer.lemoncheese.net>
References: <20190806143742.xwf5dtirb2swxu4p@yuggoth.org>
 <87o911n2zz.fsf@meyer.lemoncheese.net>
Message-ID: 

Thanks, Jeremy and James for the super helpful information. I will try.

On Wed, Aug 7, 2019 at 11:06 AM James E. Blair wrote:

> Trinh Nguyen writes:
>
> > Hi Jeremy,
> >
> > Thanks for pointing that out. They're pretty helpful.
> >
> > Sorry for not clarifying the purpose of my question in the first email.
> > Right now my company is using Jenkins for CI/CD which is not scalable and
> > for me it's hard to define job pipeline because of XML. I'm about to build
> > a demonstration for my company using Zuul with Github as a replacement and
> > trying to make sense of the OpenStack deployment of Zuul. I have been
> > working with OpenStack projects for a couple of cycles in which Zuul has
> > shown me its greatness and I think I can bring that power to the company.
> > > > Bests, > > In addition to the excellent information that Jeremy provided, since > you're talking about setting up a proof of concept, you may find it > simpler to start with the Zuul Quick-Start: > > https://zuul-ci.org/start > > That's a container-based tutorial that will set you up with a complete > Zuul system running on a single host, along with a private Gerrit > instance. Once you have that running, it's fairly straightforward to > take that and update the configuration to use GitHub instead of Gerrit. > > -Jim > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From khacthuan.hut at gmail.com Thu Aug 8 02:09:24 2019 From: khacthuan.hut at gmail.com (KhacThuan Bk) Date: Thu, 8 Aug 2019 09:09:24 +0700 Subject: [neutron] OpenvSwitch firewall sctp getting dropped In-Reply-To: <23a26029-9324-47e1-4ffa-bf1602b95493@redhat.com> References: <000001d54623$a91ae750$fb50b5f0$@viettel.com.vn> <030101d54b74$c4be6800$4e3b3800$@viettel.com.vn> <23a26029-9324-47e1-4ffa-bf1602b95493@redhat.com> Message-ID: I saw the counter is not 0. But no sctp conntrack module in my system. How can i find it? [root at compute02 ~]# ovs-ofctl dump-flows br-int | grep +inv cookie=0x46c226b6d9a3ff8f, duration=229312.185s, table=72, n_packets=13, n_bytes=1274, idle_age=65534, hard_age=65534, priority=50,ct_state=+inv+trk actions=resubmit(,93) cookie=0x46c226b6d9a3ff8f, duration=229312.186s, table=82, n_packets=2517, n_bytes=925218, idle_age=65534, hard_age=65534, priority=50,ct_state=+inv+trk actions=resubmit(,93) [root at compute02 ~]# [root at compute02 ~]# [root at compute02 ~]# lsmod | grep sctp [root at compute02 ~]# [root at compute02 ~]# [root at compute02 ~]# modprobe ip_conntrack_proto_sctp modprobe: FATAL: Module ip_conntrack_proto_sctp not found. [root at compute02 ~]# [root at compute02 ~]# cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core) [root at compute02 ~]# [root at compute02 ~]# uname -r 3.10.0-957.el7.x86_64 Vào Th 5, 8 thg 8, 2019 lúc 04:37 Jakub Libosvar đã viết: > On 05/08/2019 12:01, thuanlk at viettel.com.vn wrote: > > I have tried any version of OpenvSwitch but problem continue happened. > > Is Openvswitch firewall support sctp? > > Yes, as long as you have sctp conntrack support in kernel. Can you paste > output of 'ovs-ofctl dump-flows br-int | grep +inv' on the node where > the VM using sctp is running? If the counters are not 0 it's likely that > you're missing the sctp conntrack kernel module. > > Jakub > > > > > Thanks and best regards ! > > > > --------------------------------------- > > Lăng Khắc Thuận > > OCS Cloud | OCS (VTTEK) > > +(84)- 966463589 > > > > > > -----Original Message----- > > From: Lang Khac Thuan [mailto:thuanlk at viettel.com.vn] > > Sent: Tuesday, July 30, 2019 11:22 AM > > To: 'smooney at redhat.com' ; ' > openstack-discuss at lists.openstack.org' < > openstack-discuss at lists.openstack.org> > > Subject: RE: [neutron] OpenvSwitch firewall sctp getting dropped > > > > I have tried config SCTP but nothing change! 
> > > > openstack security group rule create --ingress --remote-ip 0.0.0.0/0 > --protocol 132 --dst-port 2000:10000 --description "SCTP" sctp openstack > security group rule create --egress --remote-ip 0.0.0.0/0 --protocol 132 > --dst-port 2000:10000 --description "SCTP" sctp > > > > Displaying 2 items > > Direction Ether Type IP Protocol Port Range Remote IP > Prefix Remote Security Group Actions > > Egress IPv4 132 2000 - 10000 0.0.0.0/0 - > > Ingress IPv4 132 2000 - 10000 0.0.0.0/0 - > > > > > > Thanks and best regards ! > > > > --------------------------------------- > > Lăng Khắc Thuận > > OCS Cloud | OCS (VTTEK) > > +(84)- 966463589 > > > > > > -----Original Message----- > > From: smooney at redhat.com [mailto:smooney at redhat.com] > > Sent: Tuesday, July 30, 2019 1:27 AM > > To: thuanlk at viettel.com.vn; openstack-discuss at lists.openstack.org > > Subject: Re: [neutron] OpenvSwitch firewall sctp getting dropped > > > > On Mon, 2019-07-29 at 22:38 +0700, thuanlk at viettel.com.vn wrote: > >> I have installed Openstack Queens on CentOs 7 with OvS and I recently > >> used the native openvswitch firewall to implement SecusiryGroup. The > >> native OvS firewall seems to work just fine with TCP/UDP traffic but > >> it does not forward any SCTP traffic going to the VMs no matter how I > >> change the security groups, But it run if i disable port security > >> completely or use iptables_hybrid firewall driver. What do I have to > >> do to allow SCTP packets to reach the VMs? > > the security groups api is a whitelist model so all traffic is droped by > default. > > > > if you want to allow sctp you would ihave to create an new security > group rule with ip_protocol set to the protocol number for sctp. > > > > e.g. > > openstack security group rule create --protocol sctp ... > > > > im not sure if neutron support --dst-port for sctp but you can still > filter on --remote-ip or --remote-group and can specify the rule as an > --ingress or --egress rule as normal. > > > > > https://docs.openstack.org/python-openstackclient/stein/cli/command-objects/security-group-rule.html > > > > based on this commit > https://github.com/openstack/neutron/commit/f711ad78c5c0af44318c6234957590c91592b984 > > > > it looks like neutron now validates the prot ranges for sctp impligying > it support setting them so i gues its just a gap in the documentation. > > > > > > > >> > > > > > > > -- *Lăng Khắc Thuận* *Phone*: 01649729889 *Email: khacthuan.hut at gmail.com * *Skype: khacthuan_bk* *Student at Applied Mathematics and Informatics* *Center for training of excellent students* *Hanoi University of Science and Technology. * -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Aug 8 04:11:59 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 8 Aug 2019 14:11:59 +1000 Subject: [ptl][release] Stepping down as Release Management PTL Message-ID: <20190808041159.GK2352@thor.bakeyournoodle.com> Hello all, I'm sorry to say that I have insufficient time and resources to dedicate to being the kind of PTL the community deserves. With that in mind I've asked if Sean has the time to see out the role for the remainder of the Train cycle and he's agreed. I have proposed https://review.opendev.org/675246 up update the governance repo. I'm not going any where but instead trying to take on less and do *that* better. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From nicolas.ghirlanda at everyware.ch  Thu Aug 8 08:44:56 2019
From: nicolas.ghirlanda at everyware.ch (Nicolas Ghirlanda)
Date: Thu, 8 Aug 2019 10:44:56 +0200
Subject: [nova/neutron/openvswitch] "No such device" after failed live
 migrations
Message-ID: <30fa7330-e758-10fe-d380-f19eb7fab264@everyware.ch>

Hello all,

after a misconfigured setup of a couple of VMs, the live migration of
those VMs failed. Now we have some "No such device" listings in
openvswitch for those VMs on some compute nodes.

(openvswitch-vswitchd)[root at ewos1-com1-prod /]# ovs-vsctl show
b5034213-9b15-45f5-8ce0-edcf32d16c57
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "qvo35487c68-23"
            tag: 5
            Interface "qvo35487c68-23"
        Port "qvo5b1aac7e-d4"
            Interface "qvo5b1aac7e-d4"
                error: "could not open network device qvo5b1aac7e-d4 (No such device)"
        Port "qvod9cdff27-9c"
            Interface "qvod9cdff27-9c"
                error: "could not open network device qvod9cdff27-9c (No such device)"
        Port "qvoed2da602-02"
            tag: 32
            Interface "qvoed2da602-02"
        Port "qvoa63378f6-d9"

for example qvo5b1aac7e-d4

(openvswitch-vswitchd)[root at ewos1-com1-prod /]# ovs-vsctl list Interface ca7e771b-88a0-4cf6-b9be-6be77816baff
_uuid               : ca7e771b-88a0-4cf6-b9be-6be77816baff
admin_state         : []
bfd                 : {}
bfd_status          : {}
cfm_fault           : []
cfm_fault_status    : []
cfm_flap_count      : []
cfm_health          : []
cfm_mpid            : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex              : []
error               : "could not open network device qvo5b1aac7e-d4 (No such device)"
external_ids        : {attached-mac="fa:16:3e:4b:f0:4b", iface-id="5b1aac7e-d45b-4a40-8f30-1275b07ffc0b", iface-status=active, vm-uuid="c3fbcc6b-2dfe-4aa1-82bd-522b161a37a9"}
ifindex             : []
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current        : []
link_resets         : []
link_speed          : []
link_state          : []
lldp                : {}
mac                 : []
mac_in_use          : []
mtu                 : []
mtu_request         : []
name                : "qvo5b1aac7e-d4"
ofport              : -1
ofport_request      : []
options             : {}
other_config        : {}
statistics          : {}
status              : {}
type                : ""

there are no tap devices or bridge devices on the compute node, but
still the entry in openvswitch

root at computenode5:~# brctl show | grep 5b1aac7e
root at computenode5:~#

Is that an issue? If yes, should we remove the ports manually or reboot
the compute nodes? Could that lead to networking issues?

kind regards

Nicolas

...

--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5230 bytes
Desc: not available
URL: 

From missile0407 at gmail.com  Thu Aug 8 10:38:15 2019
From: missile0407 at gmail.com (Eddie Yen)
Date: Thu, 8 Aug 2019 18:38:15 +0800
Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation
 if it has volume attached.
In-Reply-To: 
References: 
Message-ID: 

Hi Mark, thanks for suggestion.
I think this, too. Cinder-api may normal but HAproxy could be very busy since one controller down. I'll try to increase the value about cinder-api timeout. Mark Goddard 於 2019年8月7日 週三 上午12:06寫道: > > > On Tue, 6 Aug 2019 at 16:33, Matt Riedemann wrote: > >> On 8/6/2019 7:18 AM, Mark Goddard wrote: >> > We do use a larger timeout for glance-api >> > (haproxy_glance_api_client_timeout >> > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need >> > something similar for cinder-api. >> >> A 6 hour timeout for cinder API calls would be nuts IMO. The thing that >> was failing was a volume attachment delete/create from what I recall, >> which is the newer version (as of Ocata?) for the old >> initialize_connection/terminate_connection APIs. These are synchronous >> RPC calls from cinder-api to cinder-volume to do things on the storage >> backend and we have seen them take longer than 60 seconds in the gate CI >> runs with the lvm driver. I think the investigation normally turned up >> lvchange taking over 60 seconds on some concurrent operation locking out >> the RPC call which eventually results in the MessagingTimeout from >> oslo.messaging. That's unrelated to your gateway timeout from HAProxy >> but the point is yeah you likely want to bump up those timeouts since >> cinder-api has these synchronous calls to the cinder-volume service. I >> just don't think you need to go to 6 hours :). I think the keystoneauth1 >> default http response timeout is 10 minutes so maybe try that. >> >> > Yeah, wasn't advocating for 6 hours - just showing which knobs are > available :) > > >> -- >> >> Thanks, >> >> Matt >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Thu Aug 8 11:43:45 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Thu, 8 Aug 2019 20:43:45 +0900 Subject: [glance] glance-cache-management hardcodes URL with port Message-ID: <4b4d09d3-21ae-b2bf-e888-faff196d3aec@gmail.com> Stein re-introduces Glance cache management, but I have not been able to use the glance-cache-manage command. I always get errno 111, connection refused. It turns out that the command tries to access http://localhost:9292. It has options for non-default IP address and port, but unfortunately on my (devstack) cloud, the Glance endpoint is http://192.168.1.200/image. No port. Is there a way to tell glance-cache-manage to use this endpoint? Bernd. From sean.mcginnis at gmx.com Thu Aug 8 11:50:50 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 8 Aug 2019 06:50:50 -0500 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <20190808041159.GK2352@thor.bakeyournoodle.com> References: <20190808041159.GK2352@thor.bakeyournoodle.com> Message-ID: <20190808115050.GA28237@sm-workstation> On Thu, Aug 08, 2019 at 02:11:59PM +1000, Tony Breeds wrote: > Hello all, > I'm sorry to say that I have insufficient time and resources to > dedicate to being the kind of PTL the community deserves. With that in > mind I've asked if Sean has the time to see out the role for the > remainder of the Train cycle and he's agreed. > > I have proposed https://review.opendev.org/675246 up update the > governance repo. > > I'm not going any where but instead trying to take on less and do *that* > better. > > Yours Tony. Thanks for leading the team over the last several months Tony! 
From satish.txt at gmail.com Thu Aug 8 12:39:18 2019 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 8 Aug 2019 08:39:18 -0400 Subject: [tripleo] Proposing Kevin Carter (cloudnull) for tripleo-ansible core In-Reply-To: References: Message-ID: <5703B450-62BB-4648-A8AD-3252B63FBC12@gmail.com> +1 Without any single doubt. in past, I worked with him on openstack-ansible project and he is freaking awesome! Sent from my iPhone > On Jul 26, 2019, at 5:00 PM, Alex Schultz wrote: > > Hey folks, > > I'd like to propose Kevin as a core for the tripleo-ansible repo (e.g. tripleo-ansible-core). He has made excellent progress centralizing our ansible roles and improving the testing around them. > > Please reply with your approval/objections. If there are no objections, we'll add him to tripleo-ansible-core next Friday Aug 2, 2019. > > Thanks, > -Alex From ssbarnea at redhat.com Thu Aug 8 12:49:02 2019 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Thu, 8 Aug 2019 13:49:02 +0100 Subject: [tripleo] Proposing Kevin Carter (cloudnull) for tripleo-ansible core In-Reply-To: <5703B450-62BB-4648-A8AD-3252B63FBC12@gmail.com> References: <5703B450-62BB-4648-A8AD-3252B63FBC12@gmail.com> Message-ID: <337F9564-2253-42CE-AC32-96BD9FAD0C30@redhat.com> While I am not a core myself, I do support the proposal as I happened to watch and review many changes made by him around ansible roles. Cheers, Sorin > On 8 Aug 2019, at 13:39, Satish Patel wrote: > > +1 Without any single doubt. in past, I worked with him on openstack-ansible project and he is freaking awesome! > > Sent from my iPhone > >> On Jul 26, 2019, at 5:00 PM, Alex Schultz wrote: >> >> Hey folks, >> >> I'd like to propose Kevin as a core for the tripleo-ansible repo (e.g. tripleo-ansible-core). He has made excellent progress centralizing our ansible roles and improving the testing around them. >> >> Please reply with your approval/objections. If there are no objections, we'll add him to tripleo-ansible-core next Friday Aug 2, 2019. >> >> Thanks, >> -Alex > From doug at doughellmann.com Thu Aug 8 13:16:32 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 8 Aug 2019 09:16:32 -0400 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <20190808115050.GA28237@sm-workstation> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <20190808115050.GA28237@sm-workstation> Message-ID: <6713DC6D-0297-4705-89D7-B7CBF4F99D85@doughellmann.com> > On Aug 8, 2019, at 7:50 AM, Sean McGinnis wrote: > > On Thu, Aug 08, 2019 at 02:11:59PM +1000, Tony Breeds wrote: >> Hello all, >> I'm sorry to say that I have insufficient time and resources to >> dedicate to being the kind of PTL the community deserves. With that in >> mind I've asked if Sean has the time to see out the role for the >> remainder of the Train cycle and he's agreed. >> >> I have proposed https://review.opendev.org/675246 up update the >> governance repo. >> >> I'm not going any where but instead trying to take on less and do *that* >> better. >> >> Yours Tony. > > Thanks for leading the team over the last several months Tony! > > Thank you, Tony & Sean. I know how time-consuming the role can be, so I appreciate the work both of you are doing. 
Doug From thierry at openstack.org Thu Aug 8 13:31:37 2019 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 8 Aug 2019 15:31:37 +0200 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <20190808041159.GK2352@thor.bakeyournoodle.com> References: <20190808041159.GK2352@thor.bakeyournoodle.com> Message-ID: <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> Tony Breeds wrote: > Hello all, > I'm sorry to say that I have insufficient time and resources to > dedicate to being the kind of PTL the community deserves. With that in > mind I've asked if Sean has the time to see out the role for the > remainder of the Train cycle and he's agreed. Thanks Tony for all your work and for driving the train up to this station ! Taking the opportunity to recruit: if you are interested in learning how we do release management at scale, help openstack as a whole and are not afraid of train-themed dadjokes, join us in #openstack-release ! -- Thierry Carrez (ttx) From dtantsur at redhat.com Thu Aug 8 13:41:40 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 8 Aug 2019 15:41:40 +0200 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> Message-ID: <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> On 8/8/19 3:31 PM, Thierry Carrez wrote: > Tony Breeds wrote: >> Hello all, >>     I'm sorry to say that I have insufficient time and resources to >> dedicate to being the kind of PTL the community deserves.  With that in >> mind I've asked if Sean has the time to see out the role for the >> remainder of the Train cycle and he's agreed. > > Thanks Tony for all your work and for driving the train up to this station ! > > Taking the opportunity to recruit: if you are interested in learning how we do > release management at scale, help openstack as a whole and are not afraid of > train-themed dadjokes, join us in #openstack-release ! > After having quite some (years of) experience with release and stable affairs for ironic, I think I could help here. Dmitry From gmann at ghanshyammann.com Thu Aug 8 14:01:21 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 08 Aug 2019 23:01:21 +0900 Subject: [nova] API updates week 19-32 Message-ID: <16c7188b80b.d6b84f7161156.3889376610277260569@ghanshyammann.com> Hello Everyone, Please find the Nova API updates of this week. API Related BP : ============ COMPLETED: 1. Support adding description while locking an instance: - https://blueprints.launchpad.net/nova/+spec/add-locked-reason 2. Add host and hypervisor_hostname flag to create server - https://blueprints.launchpad.net/nova/+spec/add-host-and-hypervisor-hostname-flag-to-create-server Code Ready for Review: ------------------------------ 1. Specifying az when restore shelved server - Topic: https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged) - Weekly Progress: It is ready for re-review. Patch is updated. 2. Nova API cleanup - Topic: https://review.opendev.org/#/q/topic:bp/api-consistency-cleanup+(status:open+OR+status:merged) - Weekly Progress: This is on runway. stephenfin has +2 on nova patch. I will work on novalcient and osc patch tomorrow. 3. Show Server numa-topology - Topic: https://review.opendev.org/#/q/topic:bp/show-server-numa-topology+(status:open+OR+status:merged) - Weekly Progress: Under review. 4. 
Nova API policy improvement
- Topic: https://review.openstack.org/#/q/topic:bp/policy-default-refresh+(status:open+OR+status:merged)
- Weekly Progress: The first set of the os-service API policy series is ready for review - https://review.opendev.org/#/c/648480/7

Specs are merged and code in-progress:
------------------------------------------------
5. Detach and attach boot volumes:
- Topic: https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged)
- Weekly Progress: No progress. Patches are in merge conflict.

Spec Ready for Review:
-----------------------------
1. Support for changing deleted_on_termination after boot
- Spec: https://review.openstack.org/#/c/580336/
- Weekly Progress: No update this week. Pending on Lee Yarwood's proposal after the PTG discussion.

3. Support delete_on_termination in volume attach api
- Spec: https://review.openstack.org/#/c/612949/
- Weekly Progress: No updates this week. Matt recommends merging this with 580336, which is pending on Lee Yarwood's proposal.

Previously approved Specs that need to be re-proposed for Train:
---------------------------------------------------------------------------
1. Servers Ips non-unique network names:
- https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
- https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
- I remember I planned to re-propose this but could not find the time. If anyone would like to help, please re-propose it; otherwise I will start on it in the U cycle.

2. Volume multiattach enhancements:
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements
- https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
- This also needs a volunteer - http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007411.html

Others:
1. Add API ref guideline for body text
- 2 api-ref patches are left to fix.

Bugs:
====
No progress to report this week.

NOTE: There might be some bugs which are not tagged as 'api' or 'api-ref'; those are not in the above list. Tag such bugs so that we can keep an eye on them.

From mark at stackhpc.com Thu Aug 8 14:36:27 2019
From: mark at stackhpc.com (Mark Goddard)
Date: Thu, 8 Aug 2019 15:36:27 +0100
Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached.
In-Reply-To: References: Message-ID:

On Thu, 8 Aug 2019 at 11:39, Eddie Yen wrote:

> Hi Mark, thanks for the suggestion.
>
> I think this too. Cinder-api may be fine, but HAProxy could be very busy
> since one controller is down.
> I'll try increasing the cinder-api timeout value.
>

Will you be proposing this fix upstream?

>
> On Wed, 7 Aug 2019 at 00:06, Mark Goddard wrote:
>
>>
>>
>> On Tue, 6 Aug 2019 at 16:33, Matt Riedemann wrote:
>>
>>> On 8/6/2019 7:18 AM, Mark Goddard wrote:
>>> > We do use a larger timeout for glance-api
>>> > (haproxy_glance_api_client_timeout
>>> > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need
>>> > something similar for cinder-api.
>>>
>>> A 6 hour timeout for cinder API calls would be nuts IMO. The thing that
>>> was failing was a volume attachment delete/create from what I recall,
>>> which is the newer version (as of Ocata?) of the old
>>> initialize_connection/terminate_connection APIs. These are synchronous
>>> RPC calls from cinder-api to cinder-volume to do things on the storage
>>> backend and we have seen them take longer than 60 seconds in the gate CI
>>> runs with the lvm driver.
I think the investigation normally turned up >>> lvchange taking over 60 seconds on some concurrent operation locking out >>> the RPC call which eventually results in the MessagingTimeout from >>> oslo.messaging. That's unrelated to your gateway timeout from HAProxy >>> but the point is yeah you likely want to bump up those timeouts since >>> cinder-api has these synchronous calls to the cinder-volume service. I >>> just don't think you need to go to 6 hours :). I think the keystoneauth1 >>> default http response timeout is 10 minutes so maybe try that. >>> >>> >> Yeah, wasn't advocating for 6 hours - just showing which knobs are >> available :) >> >> >>> -- >>> >>> Thanks, >>> >>> Matt >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From thiagocmartinsc at gmail.com Thu Aug 8 19:24:27 2019 From: thiagocmartinsc at gmail.com (=?UTF-8?B?TWFydGlueCAtIOOCuOOCp+ODvOODoOOCug==?=) Date: Thu, 8 Aug 2019 15:24:27 -0400 Subject: [nova-lxd] no one uses nova-lxd? In-Reply-To: References: Message-ID: Hey Adrian, I was playing with Nova LXD with OpenStack Ansible, I have an example here: https://github.com/tmartinx/openstack_deploy/tree/master/group_vars It's too bad that Nova LXD is gone... :-( I use LXD a lot in my Ubuntu servers (not openstack based), so, next step would be to deploy bare-metal OpenStack clouds with it but, I canceled my plans. Cheers! Thiago On Fri, 26 Jul 2019 at 15:37, Adrian Andreias wrote: > Hey, > > We were planing to migrate some thousand containers from OpenVZ 6 to > Nova-LXD this fall and I know at least one company with the same plans. > > I read the message about current team retiring from the project. > > Unfortunately we don't have the manpower to invest heavily in the project > development. > We would however be able to allocate a few hours per month, at least for > bug fixing. > > So I'm curios if there are organizations using or planning to use Nova-LXD > in production and they have the know-how and time to contribute. > > It would be a pity if the the project dies. > > > Cheers! > - Adrian Andreias > https://fleio.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Thu Aug 8 20:15:58 2019 From: donny at fortnebula.com (Donny Davis) Date: Thu, 8 Aug 2019 16:15:58 -0400 Subject: [nova-lxd] no one uses nova-lxd? In-Reply-To: References: Message-ID: https://docs.openstack.org/nova/stein/configuration/config.html Looks like you can still use LXC if that fits your use case. It also looks like there are images that are current here https://us.images.linuxcontainers.org/ Not sure the state of the driver, but maybe give it a whirl and let us know how it goes. On Thu, Aug 8, 2019 at 3:28 PM Martinx - ジェームズ wrote: > Hey Adrian, > > I was playing with Nova LXD with OpenStack Ansible, I have an example > here: > > https://github.com/tmartinx/openstack_deploy/tree/master/group_vars > > It's too bad that Nova LXD is gone... :-( > > I use LXD a lot in my Ubuntu servers (not openstack based), so, next step > would be to deploy bare-metal OpenStack clouds with it but, I canceled my > plans. > > Cheers! > Thiago > > On Fri, 26 Jul 2019 at 15:37, Adrian Andreias wrote: > >> Hey, >> >> We were planing to migrate some thousand containers from OpenVZ 6 to >> Nova-LXD this fall and I know at least one company with the same plans. >> >> I read the message about current team retiring from the project. 
>> >> Unfortunately we don't have the manpower to invest heavily in the project >> development. >> We would however be able to allocate a few hours per month, at least for >> bug fixing. >> >> So I'm curios if there are organizations using or planning to use >> Nova-LXD in production and they have the know-how and time to contribute. >> >> It would be a pity if the the project dies. >> >> >> Cheers! >> - Adrian Andreias >> https://fleio.com >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Aug 8 21:22:50 2019 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 8 Aug 2019 17:22:50 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core Message-ID: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Hey all, I'd like to propose Eric Fried be made core on SDK. This is slightly different than a normal core proposal, so I'd like to say a few more words about it than normal- largely because I think it's a pattern we might want to explore in SDK land. Eric is obviously super smart and capable - he's PTL of Nova after all. He's one of the few people in OpenStack that has a handle on version discovery, having helped write the keystoneauth support for it. And he's core on os-service-types, which is another piece of client-side arcana. However, he's a busy human, what with being Nova core, and his interaction with SDK has been limited to the intersection of it needed for Nova integration. I don't expect that to change. Basically, as it stands now, Eric only votes on SDK patches that have impact on the use of SDK in Nova - but when he does they are thorough reviews and they indicate "this makes things better for Nova". So I'd like to start recognizing such a vote. As our overall numbers diminish, I think we need to be more efficient with the use of our human time - and along with that we need to find new ways to trust each other to act on behalf of the project. I'd like to give a stab at doing that here. Thoughts? Monty From openstack at nemebean.com Thu Aug 8 21:38:52 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 8 Aug 2019 16:38:52 -0500 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: <2d7fed2d-9057-87ee-1646-ee1701494cbe@nemebean.com> On 8/8/19 4:22 PM, Monty Taylor wrote: > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. > He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And he's > core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". So I'd > like to start recognizing such a vote. 
> As our overall numbers diminish, I think we need to be more efficient
> with the use of our human time - and along with that we need to find new
> ways to trust each other to act on behalf of the project. I'd like to
> give a stab at doing that here.
>
> Thoughts?

+1 from me. I've adopted essentially the same philosophy for Oslo.

> Monty
>

From flux.adam at gmail.com Thu Aug 8 22:05:39 2019
From: flux.adam at gmail.com (Adam Harwell)
Date: Fri, 9 Aug 2019 07:05:39 +0900
Subject: [nova-lxd] no one uses nova-lxd?
In-Reply-To: References: Message-ID:

Octavia was looking at doing a proof-of-concept container-based backend driver using nova-lxd, and some work had been slowly ongoing for the past couple of years. But it looks like we will also have to completely abandon that effort if the driver is dead. Shame. :(

--Adam

On Fri, Aug 9, 2019, 05:19 Donny Davis wrote:

> https://docs.openstack.org/nova/stein/configuration/config.html
>
> Looks like you can still use LXC if that fits your use case. It also looks
> like there are images that are current here
> https://us.images.linuxcontainers.org/
>
> Not sure the state of the driver, but maybe give it a whirl and let us
> know how it goes.

From smooney at redhat.com Thu Aug 8 22:18:16 2019
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 08 Aug 2019 23:18:16 +0100
Subject: [nova-lxd] no one uses nova-lxd?
In-Reply-To: References: Message-ID:

On Fri, 2019-08-09 at 07:05 +0900, Adam Harwell wrote:
> Octavia was looking at doing a proof-of-concept container-based backend
> driver using nova-lxd, and some work had been slowly ongoing for the past
> couple of years. But it looks like we will also have to completely abandon
> that effort if the driver is dead. Shame. :(
you could try the nova libvirt driver with virt_type=lxc, or Zun, instead (a minimal config sketch follows below)
>
> --Adam
>
> On Fri, Aug 9, 2019, 05:19 Donny Davis wrote:
>
> > https://docs.openstack.org/nova/stein/configuration/config.html
> >
> > Looks like you can still use LXC if that fits your use case. It also looks
> > like there are images that are current here
> > https://us.images.linuxcontainers.org/
> >
> > Not sure the state of the driver, but maybe give it a whirl and let us
> > know how it goes.
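a minimal nova.conf sketch of that virt_type=lxc setup (untested; the option names come from the in-tree libvirt driver, so double-check them against the stein config reference donny linked above):

    [DEFAULT]
    # use the in-tree libvirt driver rather than nova-lxd
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    # run guests as LXC containers instead of KVM/QEMU VMs
    virt_type = lxc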
> > > > > > > > > > On Thu, Aug 8, 2019 at 3:28 PM Martinx - ジェームズ > > wrote: > > > > > Hey Adrian, > > > > > > I was playing with Nova LXD with OpenStack Ansible, I have an example > > > here: > > > > > > https://github.com/tmartinx/openstack_deploy/tree/master/group_vars > > > > > > It's too bad that Nova LXD is gone... :-( > > > > > > I use LXD a lot in my Ubuntu servers (not openstack based), so, next > > > step would be to deploy bare-metal OpenStack clouds with it but, I canceled > > > my plans. > > > > > > Cheers! > > > Thiago > > > > > > On Fri, 26 Jul 2019 at 15:37, Adrian Andreias wrote: > > > > > > > Hey, > > > > > > > > We were planing to migrate some thousand containers from OpenVZ 6 to > > > > Nova-LXD this fall and I know at least one company with the same plans. > > > > > > > > I read the message about current team retiring from the project. > > > > > > > > Unfortunately we don't have the manpower to invest heavily in the > > > > project development. > > > > We would however be able to allocate a few hours per month, at least for > > > > bug fixing. > > > > > > > > So I'm curios if there are organizations using or planning to use > > > > Nova-LXD in production and they have the know-how and time to contribute. > > > > > > > > It would be a pity if the the project dies. > > > > > > > > > > > > Cheers! > > > > - Adrian Andreias > > > > https://fleio.com > > > > > > > > From mriedemos at gmail.com Thu Aug 8 23:31:24 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 8 Aug 2019 18:31:24 -0500 Subject: [nova] Intermittent gate failures in functional tests Message-ID: <7817edaa-57e7-4fe4-231c-e9214827c301@gmail.com> In case you're seeing a bunch of nova versioned notification functional tests failing this week, it's being tracked [1] and there is a skip patch approved [2] to hopefully resolve it while a long-term fix is worked. [1] http://status.openstack.org/elastic-recheck/#1839515 [2] https://review.opendev.org/#/c/675417 -- Thanks, Matt From colleen at gazlene.net Fri Aug 9 00:29:44 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 08 Aug 2019 17:29:44 -0700 Subject: [keystone] [stein] [ops] user_enabled_emulation config problem In-Reply-To: References: Message-ID: <15c29286-de12-4dfe-be36-393f425c57bf@www.fastmail.com> Hi Radosław, On Tue, Aug 6, 2019, at 04:13, Radosław Piliszek wrote: > Hello all, > > I investigated the case. > My issue arises from group_members_are_ids ignored for > user_enabled_emulation_use_group_config. > I reported a bug in keystone: > https://bugs.launchpad.net/keystone/+bug/1839133 > and will submit a patch. > Hopefully it helps someone else as well. > > Kind regards, > Radek Thanks for the bug report and the patch. I've added the [ops] tag to the subject line of this thread because I'm curious how many other people have tried to use the user_enabled_emulation feature and whether anyone else has run into this problem. I'm seeing similar behavior even when using the groupOfNames objectclass and not using group_members_are_ids, so I'm hesitant to add conditionals based on that configuration. Have you tried this on any other versions of keystone besides Stein? Colleen > > sob., 3 sie 2019 o 20:56 Radosław Piliszek > napisał(a): > > Hello all, > > > > I have an issue using user_enabled_emulation with my LDAP solution. 
> > I set:
> > user_tree_dn = ou=Users,o=UCO
> > user_objectclass = inetOrgPerson
> > user_id_attribute = uid
> > user_name_attribute = uid
> > user_enabled_emulation = true
> > user_enabled_emulation_dn = cn=Users,ou=Groups,o=UCO
> > user_enabled_emulation_use_group_config = true
> > group_tree_dn = ou=Groups,o=UCO
> > group_objectclass = posixGroup
> > group_id_attribute = cn
> > group_name_attribute = cn
> > group_member_attribute = memberUid
> > group_members_are_ids = true
> >
> > Keystone properly lists members of the Users group, but they all remain disabled.
> > Did I misinterpret something?
> >
> > Kind regards,
> > Radek

From jordan.ansell at catalyst.net.nz Fri Aug 9 03:31:52 2019
From: jordan.ansell at catalyst.net.nz (Jordan Ansell)
Date: Fri, 9 Aug 2019 15:31:52 +1200
Subject: [nova][entropy] what are your rate limits??
Message-ID:

Hello Openstack Discuss,

I am doing some investigation into instance entropy and was wondering what settings others are using with regard to rate-limiting the entropy supplied by the hypervisor.

Specifically, we're adding the "hw_rng:allowed=True" nova flavor property to pass libvirt the relevant config, but we need to decide on appropriate rate-limiting settings: strict enough to prevent instances from being greedy with entropy, but still leaving them a comfortable level. I've done some experimenting (100 bytes/s is possibly a minimum; it still allows a comfortable free-entropy level of ~1000 in instances). There is an example sketch of the flavor side further below.

I'm also curious to hear others' experiences when it comes to entropy in Openstack:

* What sources of entropy did you use in the hypervisor?
* What issues have you faced that were caused by insufficient entropy (instance or host)?

Note, this is for a public cloud scenario, should that impact any suggestions you have.

Regards,
Jordan

From artem.goncharov at gmail.com Fri Aug 9 06:04:46 2019
From: artem.goncharov at gmail.com (Artem Goncharov)
Date: Fri, 9 Aug 2019 08:04:46 +0200
Subject: [sdk] Proposing Eric Fried for openstacksdk-core
In-Reply-To: <2d7fed2d-9057-87ee-1646-ee1701494cbe@nemebean.com>
References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> <2d7fed2d-9057-87ee-1646-ee1701494cbe@nemebean.com>
Message-ID:

+1 from me. It would be great if this let us address all our discovery challenges.

Artem

---- typed from mobile, auto-correct typos assumed ----

On Thu, 8 Aug 2019, 23:42 Ben Nemec, wrote:

>
> On 8/8/19 4:22 PM, Monty Taylor wrote:
> > Hey all,
> >
> > I'd like to propose Eric Fried be made core on SDK.
> >
> > This is slightly different than a normal core proposal, so I'd like to
> > say a few more words about it than normal- largely because I think it's
> > a pattern we might want to explore in SDK land.
> >
> > Eric is obviously super smart and capable - he's PTL of Nova after all.
> > He's one of the few people in OpenStack that has a handle on version
> > discovery, having helped write the keystoneauth support for it. And he's
> > core on os-service-types, which is another piece of client-side arcana.
> >
> > However, he's a busy human, what with being Nova core, and his
> > interaction with SDK has been limited to the intersection of it needed
> > for Nova integration. I don't expect that to change.
> >
> > Basically, as it stands now, Eric only votes on SDK patches that have
> > impact on the use of SDK in Nova - but when he does they are thorough
> > reviews and they indicate "this makes things better for Nova". So I'd
> > like to start recognizing such a vote.
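(The flavor-side sketch for Jordan's entropy question above: this assumes the standard hw_rng extra specs and an illustrative flavor name, and I believe the rate period maps to libvirt's milliseconds, so double-check the units against the nova flavors documentation:)

    # "m1.small" is only a placeholder flavor name
    openstack flavor set m1.small \
      --property hw_rng:allowed=True \
      --property hw_rng:rate_bytes=100 \
      --property hw_rng:rate_period=1000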
> > As our overall numbers diminish, I think we need to be more efficient
> > with the use of our human time - and along with that we need to find new
> > ways to trust each other to act on behalf of the project. I'd like to
> > give a stab at doing that here.
> >
> > Thoughts?

> +1 from me. I've adopted essentially the same philosophy for Oslo.
>
> > Monty
> >

From radoslaw.piliszek at gmail.com Fri Aug 9 06:05:55 2019
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Fri, 9 Aug 2019 08:05:55 +0200
Subject: [keystone] [stein] [ops] user_enabled_emulation config problem
In-Reply-To: <15c29286-de12-4dfe-be36-393f425c57bf@www.fastmail.com>
References: <15c29286-de12-4dfe-be36-393f425c57bf@www.fastmail.com>
Message-ID:

Hi Colleen,

at least Rocky is affected too.

The issue is that posixGroup members are not a list of DNs (unlike groupOfNames, the default) but a list of plain IDs. The listing code already takes that into account (when group_members_are_ids is on), but the emulation code does not. It does not make sense for the two to behave differently when you have asked them to behave the same (by turning user_enabled_emulation_use_group_config on).

Kind regards,
Radek

On Fri, 9 Aug 2019 at 02:31, Colleen Murphy wrote:
> Hi Radosław,
>
> On Tue, Aug 6, 2019, at 04:13, Radosław Piliszek wrote:
> > Hello all,
> >
> > I investigated the case.
> > My issue arises from group_members_are_ids being ignored for
> > user_enabled_emulation_use_group_config.
> > I reported a bug in keystone:
> > https://bugs.launchpad.net/keystone/+bug/1839133
> > and will submit a patch.
> > Hopefully it helps someone else as well.
> >
> > Kind regards,
> > Radek
>
> Thanks for the bug report and the patch. I've added the [ops] tag to the
> subject line of this thread because I'm curious how many other people have
> tried to use the user_enabled_emulation feature and whether anyone else has
> run into this problem.
>
> I'm seeing similar behavior even when using the groupOfNames objectclass
> and not using group_members_are_ids, so I'm hesitant to add conditionals
> based on that configuration.
>
> Have you tried this on any other versions of keystone besides Stein?
>
> Colleen
>
> >
> > On Sat, 3 Aug 2019 at 20:56, Radosław Piliszek wrote:
> > > Hello all,
> > >
> > > I have an issue using user_enabled_emulation with my LDAP solution.
> > >
> > > I set:
> > > user_tree_dn = ou=Users,o=UCO
> > > user_objectclass = inetOrgPerson
> > > user_id_attribute = uid
> > > user_name_attribute = uid
> > > user_enabled_emulation = true
> > > user_enabled_emulation_dn = cn=Users,ou=Groups,o=UCO
> > > user_enabled_emulation_use_group_config = true
> > > group_tree_dn = ou=Groups,o=UCO
> > > group_objectclass = posixGroup
> > > group_id_attribute = cn
> > > group_name_attribute = cn
> > > group_member_attribute = memberUid
> > > group_members_are_ids = true
> > >
> > > Keystone properly lists members of the Users group, but they all remain disabled.
> > > Did I misinterpret something?
> > >
> > > Kind regards,
> > > Radek
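(To illustrate the difference Radek describes -- a minimal sketch reusing the DNs from his config; the entries themselves are hypothetical. A posixGroup carries plain IDs in memberUid:

    dn: cn=Users,ou=Groups,o=UCO
    objectClass: posixGroup
    cn: Users
    gidNumber: 1000
    memberUid: alice

while the default groupOfNames carries full DNs in member:

    dn: cn=Users,ou=Groups,o=UCO
    objectClass: groupOfNames
    cn: Users
    member: uid=alice,ou=Users,o=UCO

so code that expects DNs will never resolve the posixGroup members.)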
From dtantsur at redhat.com Fri Aug 9 06:47:21 2019
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 9 Aug 2019 08:47:21 +0200
Subject: [sdk] Proposing Eric Fried for openstacksdk-core
In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com>
References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com>
Message-ID:

On 8/8/19 11:22 PM, Monty Taylor wrote:
> Hey all,
>
> I'd like to propose Eric Fried be made core on SDK.
>
> This is slightly different than a normal core proposal, so I'd like to say a few
> more words about it than normal- largely because I think it's a pattern we might
> want to explore in SDK land.
>
> Eric is obviously super smart and capable - he's PTL of Nova after all. He's one
> of the few people in OpenStack that has a handle on version discovery, having
> helped write the keystoneauth support for it. And he's core on os-service-types,
> which is another piece of client-side arcana.
>
> However, he's a busy human, what with being Nova core, and his interaction with
> SDK has been limited to the intersection of it needed for Nova integration. I
> don't expect that to change.
>
> Basically, as it stands now, Eric only votes on SDK patches that have impact on
> the use of SDK in Nova - but when he does they are thorough reviews and they
> indicate "this makes things better for Nova". So I'd like to start recognizing
> such a vote.

+2, makes a lot of sense.

>
> As our overall numbers diminish, I think we need to be more efficient with the
> use of our human time - and along with that we need to find new ways to trust
> each other to act on behalf of the project. I'd like to give a stab at doing
> that here.
>
> Thoughts?
> Monty
>

From skaplons at redhat.com Fri Aug 9 06:50:02 2019
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 9 Aug 2019 08:50:02 +0200
Subject: [sdk] Proposing Eric Fried for openstacksdk-core
In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com>
References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com>
Message-ID: <73581B23-6A15-45C7-A4A3-D688539D5388@redhat.com>

+1 from me. I also feel that my role in the SDK team is similar, but from Neutron's point of view. And I think it's a good approach for OpenStack SDK.

> On 8 Aug 2019, at 23:22, Monty Taylor wrote:
>
> Hey all,
>
> I'd like to propose Eric Fried be made core on SDK.
>
> This is slightly different than a normal core proposal, so I'd like to say a few more words about it than normal- largely because I think it's a pattern we might want to explore in SDK land.
>
> Eric is obviously super smart and capable - he's PTL of Nova after all. He's one of the few people in OpenStack that has a handle on version discovery, having helped write the keystoneauth support for it. And he's core on os-service-types, which is another piece of client-side arcana.
>
> However, he's a busy human, what with being Nova core, and his interaction with SDK has been limited to the intersection of it needed for Nova integration. I don't expect that to change.
>
> Basically, as it stands now, Eric only votes on SDK patches that have impact on the use of SDK in Nova - but when he does they are thorough reviews and they indicate "this makes things better for Nova". So I'd like to start recognizing such a vote.
>
> As our overall numbers diminish, I think we need to be more efficient with the use of our human time - and along with that we need to find new ways to trust each other to act on behalf of the project. I'd like to give a stab at doing that here.
>
> Thoughts?
> Monty
>

—
Slawek Kaplonski
Senior software engineer
Red Hat

From tim.j.culhane at gmail.com Fri Aug 9 08:18:04 2019
From: tim.j.culhane at gmail.com (tim.j.culhane at gmail.com)
Date: Fri, 9 Aug 2019 09:18:04 +0100
Subject: inaccessibility of instances page in my openstack installation
Message-ID: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com>

Hi,

I'm a blind programmer and we use Openstack in our organisation to create and manage servers for development.

In the past, when I wanted to launch or delete an instance, I'd go to the Instances page. Just above the table listing my current instances there would be a series of buttons which allow you to launch or delete instances, or carry out more actions.

When using Firefox I can access these items via the keyboard by hitting enter on them. Until very recently this was also the case in Chrome (which is my preferred browser). However, in the last week or so I've noticed that I can no longer use the keyboard to interact with these controls; you need to click them with the mouse.

I'm not sure if this is an issue with Chrome or with Openstack. Have there been any recent changes to Openstack which might explain this?

Many thanks,

Tim

Instance ID = Filter Launch Instance Delete Instances More Actions

From jean-philippe at evrard.me Fri Aug 9 09:28:21 2019
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Fri, 09 Aug 2019 11:28:21 +0200
Subject: [sdk] Proposing Eric Fried for openstacksdk-core
In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com>
References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com>
Message-ID: <088df2047f999214a64207a4fd283c72f2ed8816.camel@evrard.me>

On Thu, 2019-08-08 at 17:22 -0400, Monty Taylor wrote:
> As our overall numbers diminish, I think we need to be more efficient
> with the use of our human time - and along with that we need to find
> new ways to trust each other to act on behalf of the project. I'd like
> to give a stab at doing that here.
>

I like this.

Regards,
JP

From doug at stackhpc.com Fri Aug 9 10:38:05 2019
From: doug at stackhpc.com (Doug Szumski)
Date: Fri, 9 Aug 2019 11:38:05 +0100
Subject: [monasca] Enable 'Review-Priority' voting in Gerrit
Message-ID: <6152dbcd-14d8-c359-7214-d09e032dfb62@stackhpc.com>

A number of projects have added a 'Review-Priority' vote alongside the usual 'Code-Review' and 'Workflow' radio buttons [1]. The idea is to make it easier to direct reviewer attention towards high-priority patches. As such, Gerrit dashboards can filter based on the 'Review-Priority' rating.

Please vote on whether to enable this feature in Monasca here:

https://review.opendev.org/#/c/675574

[1]: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001304.html

From donny at fortnebula.com Fri Aug 9 11:55:29 2019
From: donny at fortnebula.com (Donny Davis)
Date: Fri, 9 Aug 2019 07:55:29 -0400
Subject: inaccessibility of instances page in my openstack installation
In-Reply-To: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com>
References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com>
Message-ID:

I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official Build) (64-bit) and just tested this functionality, and it seems to be working as expected.

I went to the instances page, tabbed to launch instance and hit enter. This brings up the launch instance dialog for me.

What version of chrome are you using?

On Fri, Aug 9, 2019 at 4:23 AM wrote:

> Hi,
>
> I'm a blind programmer and we use Openstack in our organisation to create
> and manage servers for development.
> > In the past when I wanted to launch or delete an instance I'd go to the > Instances page. > > Just above the table listing the current instances I have, would be a > series of buttons which allowed you to launch, delete instances or carry > out more actions. > > > When using Firefox I can access these items via the keyboard by hitting > enter on them. > > Up to very recently this was also the case in Chrome (which is my preferred > browser). > > However, in the last week of so I've noticed that I can no longer use the > keyboard to interact with these controls, you need to click them with the > mouse. > > I'm not sure if this is an issue with Chrome or with Openstack. > Has there been any recent changes to Openstack which might explain this? > > Many thanks, > > Tim > > Instance ID = Filter Launch Instance Delete Instances More Actions > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Aug 9 11:57:41 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 9 Aug 2019 07:57:41 -0400 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> Message-ID: Hit enter too soon, Also are you trying to use the arrow buttons on your keyboard to navigate around? On Fri, Aug 9, 2019 at 7:55 AM Donny Davis wrote: > I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official > Build) (64-bit) and just tested this functionality and it seems to be > working as expected. > > I went to the instances page, tabbed to launch instance and hit enter. > This brings up the launch instance dialog for me. > > What version of chrome are you using? > > > > > On Fri, Aug 9, 2019 at 4:23 AM wrote: > >> Hi, >> >> I'm a blind programmer and we use Openstack in our organisation to >> create >> and manage servers for development. >> >> In the past when I wanted to launch or delete an instance I'd go to the >> Instances page. >> >> Just above the table listing the current instances I have, would be a >> series of buttons which allowed you to launch, delete instances or carry >> out more actions. >> >> >> When using Firefox I can access these items via the keyboard by hitting >> enter on them. >> >> Up to very recently this was also the case in Chrome (which is my >> preferred >> browser). >> >> However, in the last week of so I've noticed that I can no longer use the >> keyboard to interact with these controls, you need to click them with the >> mouse. >> >> I'm not sure if this is an issue with Chrome or with Openstack. >> Has there been any recent changes to Openstack which might explain this? >> >> Many thanks, >> >> Tim >> >> Instance ID = Filter Launch Instance Delete Instances More Actions >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.j.culhane at gmail.com Fri Aug 9 11:59:34 2019 From: tim.j.culhane at gmail.com (tim.j.culhane at gmail.com) Date: Fri, 9 Aug 2019 12:59:34 +0100 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> Message-ID: <002901d54ea9$e959be70$bc0d3b50$@gmail.com> Yes. I’m using the Jaws screen reader to access the site. Notice you’ll need to use a screen reader to access the site to reproduce the issue. Chrome is latest version (it automatically updates). 
Tim From: Donny Davis Sent: Friday 9 August 2019 12:58 To: tim.j.culhane at gmail.com Cc: OpenStack Discuss Subject: Re: inaccessibility of instances page in my openstack installation Hit enter too soon, Also are you trying to use the arrow buttons on your keyboard to navigate around? On Fri, Aug 9, 2019 at 7:55 AM Donny Davis > wrote: I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official Build) (64-bit) and just tested this functionality and it seems to be working as expected. I went to the instances page, tabbed to launch instance and hit enter. This brings up the launch instance dialog for me. What version of chrome are you using? On Fri, Aug 9, 2019 at 4:23 AM > wrote: Hi, I'm a blind programmer and we use Openstack in our organisation to create and manage servers for development. In the past when I wanted to launch or delete an instance I'd go to the Instances page. Just above the table listing the current instances I have, would be a series of buttons which allowed you to launch, delete instances or carry out more actions. When using Firefox I can access these items via the keyboard by hitting enter on them. Up to very recently this was also the case in Chrome (which is my preferred browser). However, in the last week of so I've noticed that I can no longer use the keyboard to interact with these controls, you need to click them with the mouse. I'm not sure if this is an issue with Chrome or with Openstack. Has there been any recent changes to Openstack which might explain this? Many thanks, Tim Instance ID = Filter Launch Instance Delete Instances More Actions -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Fri Aug 9 12:13:40 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 9 Aug 2019 08:13:40 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: On Thu, Aug 8, 2019 at 5:24 PM Monty Taylor wrote: > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. > He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And he's > core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". So I'd > like to start recognizing such a vote. > > As our overall numbers diminish, I think we need to be more efficient > with the use of our human time - and along with that we need to find new > ways to trust each other to act on behalf of the project. I'd like to > give a stab at doing that here. > This is something we need to do more of. I know some teams are doing it well, but thank you for saying it publicly and explicitly! :) // jim > > Thoughts? 
> Monty > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Aug 9 12:19:13 2019 From: donny at fortnebula.com (Donny Davis) Date: Fri, 9 Aug 2019 08:19:13 -0400 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: <002901d54ea9$e959be70$bc0d3b50$@gmail.com> References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> <002901d54ea9$e959be70$bc0d3b50$@gmail.com> Message-ID: I don't have windows to test jaws with, I am using linux which jaws does not seem to be packaged for. On Fri, Aug 9, 2019 at 7:59 AM wrote: > Yes. > > > > I’m using the Jaws screen reader to access the site. > > > > Notice you’ll need to use a screen reader to access the site to reproduce > the issue. > > > > Chrome is latest version (it automatically updates). > > > > Tim > > > > > > *From:* Donny Davis > *Sent:* Friday 9 August 2019 12:58 > *To:* tim.j.culhane at gmail.com > *Cc:* OpenStack Discuss > *Subject:* Re: inaccessibility of instances page in my openstack > installation > > > > Hit enter too soon, > > > > Also are you trying to use the arrow buttons on your keyboard to navigate > around? > > > > > > > > On Fri, Aug 9, 2019 at 7:55 AM Donny Davis wrote: > > I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official > Build) (64-bit) and just tested this functionality and it seems to be > working as expected. > > > > I went to the instances page, tabbed to launch instance and hit enter. > This brings up the launch instance dialog for me. > > > > What version of chrome are you using? > > > > > > > > > > On Fri, Aug 9, 2019 at 4:23 AM wrote: > > Hi, > > I'm a blind programmer and we use Openstack in our organisation to create > and manage servers for development. > > In the past when I wanted to launch or delete an instance I'd go to the > Instances page. > > Just above the table listing the current instances I have, would be a > series of buttons which allowed you to launch, delete instances or carry > out more actions. > > > When using Firefox I can access these items via the keyboard by hitting > enter on them. > > Up to very recently this was also the case in Chrome (which is my preferred > browser). > > However, in the last week of so I've noticed that I can no longer use the > keyboard to interact with these controls, you need to click them with the > mouse. > > I'm not sure if this is an issue with Chrome or with Openstack. > Has there been any recent changes to Openstack which might explain this? > > Many thanks, > > Tim > > Instance ID = Filter Launch Instance Delete Instances More Actions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tim.j.culhane at gmail.com Fri Aug 9 12:55:11 2019 From: tim.j.culhane at gmail.com (tim.j.culhane at gmail.com) Date: Fri, 9 Aug 2019 13:55:11 +0100 Subject: inaccessibility of instances page in my openstack installation In-Reply-To: References: <010c01d54e8a$f7fb1ba0$e7f152e0$@gmail.com> <002901d54ea9$e959be70$bc0d3b50$@gmail.com> Message-ID: <004001d54eb1$ae6bdca0$0b4395e0$@gmail.com> Yes, Jaws is only a Windows screen reader. Tim From: Donny Davis Sent: Friday 9 August 2019 13:19 To: tim.j.culhane at gmail.com Cc: OpenStack Discuss Subject: Re: inaccessibility of instances page in my openstack installation I don't have windows to test jaws with, I am using linux which jaws does not seem to be packaged for. On Fri, Aug 9, 2019 at 7:59 AM > wrote: Yes. I’m using the Jaws screen reader to access the site. 
Note that you'll need to use a screen reader on the site to reproduce the issue.

Chrome is the latest version (it automatically updates).

Tim

From: Donny Davis
Sent: Friday 9 August 2019 12:58
To: tim.j.culhane at gmail.com
Cc: OpenStack Discuss
Subject: Re: inaccessibility of instances page in my openstack installation

Hit enter too soon.

Also, are you trying to use the arrow buttons on your keyboard to navigate around?

On Fri, Aug 9, 2019 at 7:55 AM Donny Davis wrote:

I am using OpenStack Stein and Chrome Version 76.0.3809.100 (Official Build) (64-bit) and just tested this functionality, and it seems to be working as expected.

I went to the instances page, tabbed to launch instance and hit enter. This brings up the launch instance dialog for me.

What version of chrome are you using?

On Fri, Aug 9, 2019 at 4:23 AM wrote:

Hi,

I'm a blind programmer and we use Openstack in our organisation to create and manage servers for development.

In the past, when I wanted to launch or delete an instance, I'd go to the Instances page. Just above the table listing my current instances there would be a series of buttons which allow you to launch or delete instances, or carry out more actions.

When using Firefox I can access these items via the keyboard by hitting enter on them. Until very recently this was also the case in Chrome (which is my preferred browser). However, in the last week or so I've noticed that I can no longer use the keyboard to interact with these controls; you need to click them with the mouse.

I'm not sure if this is an issue with Chrome or with Openstack. Have there been any recent changes to Openstack which might explain this?

Many thanks,

Tim

Instance ID = Filter Launch Instance Delete Instances More Actions

From dtantsur at redhat.com Fri Aug 9 13:03:40 2019
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 9 Aug 2019 15:03:40 +0200
Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint
Message-ID:

Hi folks!

I'd like to propose adding Riccardo to our stable team. He's been consistently checking stable patches [1], and we're clearly understaffed when it comes to stable reviews. Thoughts?

Dmitry

[1] https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master

From juliaashleykreger at gmail.com Fri Aug 9 13:21:08 2019
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Fri, 9 Aug 2019 09:21:08 -0400
Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint
In-Reply-To: References: Message-ID:

+2 :)

On Fri, Aug 9, 2019 at 9:04 AM Dmitry Tantsur wrote:
>
> Hi folks!
>
> I'd like to propose adding Riccardo to our stable team. He's been consistently
> checking stable patches [1], and we're clearly understaffed when it comes to
> stable reviews. Thoughts?
>
> Dmitry
>
> [1]
> https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master
>

From ssbarnea at redhat.com Fri Aug 9 14:04:36 2019
From: ssbarnea at redhat.com (Sorin Sbarnea)
Date: Fri, 9 Aug 2019 15:04:36 +0100
Subject: [monasca] Enable 'Review-Priority' voting in Gerrit
In-Reply-To: <6152dbcd-14d8-c359-7214-d09e032dfb62@stackhpc.com>
References: <6152dbcd-14d8-c359-7214-d09e032dfb62@stackhpc.com>
Message-ID: <1B4560BA-E93B-4FCD-A766-53FC448D767C@redhat.com>

I like the idea and I have seen it used in other projects. My question is: why not implement it on all OpenStack projects?
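(For anyone who hasn't seen the feature in action: the dashboard filtering Doug mentions is a plain Gerrit label query. A rough sketch, assuming the label uses the cinder-style -1..+2 range and using monasca-api as an illustrative project, so double-check the values for your setup:

    status:open project:openstack/monasca-api label:Review-Priority=+2

Queries like this can also be baked into a gerrit-dash-creator dashboard.)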
> On 9 Aug 2019, at 11:38, Doug Szumski wrote:
>
> A number of projects have added a 'Review-Priority' vote alongside the usual 'Code-Review' and 'Workflow' radio buttons [1]. The idea is to make it easier to direct reviewer attention towards high-priority patches. As such, Gerrit dashboards can filter based on the 'Review-Priority' rating.
>
> Please vote on whether to enable this feature in Monasca here:
>
> https://review.opendev.org/#/c/675574
>
> [1]: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001304.html
>

From cdent+os at anticdent.org Fri Aug 9 14:09:28 2019
From: cdent+os at anticdent.org (Chris Dent)
Date: Fri, 9 Aug 2019 15:09:28 +0100 (BST)
Subject: [placement] update 19-31
Message-ID:

HTML: https://anticdent.org/placement-update-19-31.html

Pupdate 19-31. No bromides today.

# Most Important

Same as last week: The main things on the Placement radar are implementing Consumer Types and cleanups, performance analysis, and documentation related to nested resource providers.

We need to decide how much of a priority consumer types support is. I've taken the task of asking around with the various interested parties.

# What's Changed

* A more complex nested topology is now being used in the nested-perfload check job, and both that and the non-nested perfload run [apache benchmark](https://review.opendev.org/#/c/673540/) at the end. When you make changes you can have a look at the results of the `placement-perfload` and `placement-nested-perfload` gate jobs to see if there has been a performance impact. Keep in mind the numbers are only a guide. The performance characteristics of VMs from different CI providers vary _wildly_.

* A stack of several performance-related improvements has merged, with still more to come. I've written a separate [Placement Performance Analysis](https://anticdent.org/placement-performance-analysis.html) that summarizes some of the changes. Many of these may be useful for other services. Each iteration reveals another opportunity.

* In some environments placement will receive a URL of '' when '/' is expected. Auth handling for version control needs to [handle this](https://review.opendev.org/674543).

* osc-placement 1.6.0 is in the process of being [released](https://review.opendev.org/675311).

# Stories/Bugs

(Numbers in () are the change since the last pupdate.)

There are 22 (-1) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 3 (0) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 4 (-1) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 11 (0) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 4 (0) are [docs](https://storyboard.openstack.org/#!/worklist/637).

If you're interested in helping out with placement, those stories are good places to look.

* Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 17 (0).

* Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 5 (1).

# osc-placement

osc-placement is currently behind by 12 microversions.

* Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results.

* Adds a new '--amend' option which can update resource provider inventory without requiring the user to pass a full replacement for inventory. This has been broken up into three patches to help with review.
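  As a sketch of what the amend workflow could look like once it lands
  (based on the proposed patches, so the final syntax may differ; the
  provider UUID is a placeholder):

      openstack resource provider inventory set <provider_uuid> \
          --resource VCPU:allocation_ratio=16.0 --amend

  Today the same `inventory set` call replaces the provider's whole
  inventory, dropping any fields not passed in; with `--amend` only the
  named field would change.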
# Main Themes

## Consumer Types

Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting.

* A WIP, as microversion 1.37, has started.

## Cleanup

Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things.

As said above, there's lots of performance work in progress. We'll need to make a similar effort with regard to docs. One outcome of this work will be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is use more web servers and more database servers, but that's often very wasteful.

# Other Placement

Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open).

There are two [os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And zero [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open).

# Other Service Users

New discoveries are added to the end. Merged stuff is removed. Anything that has had no activity in 4 weeks has been removed.

* Nova: nova-manage: heal port allocations
* Cyborg: Placement report
* helm: add placement chart
* libvirt: report pmem namespaces resources by provider tree
* Nova: Remove PlacementAPIConnectFailure handling from AggregateAPI
* Nova: WIP: Add a placement audit command
* blazar: Fix placement operations in multi-region deployments
* Nova: libvirt: Start reporting PCPU inventory to placement
* A part of Nova: support move ops with qos ports
* Blazar: Create placement client for each request
* nova: Support filtering of hosts by forbidden aggregates
* blazar: Send global_request_id for tracing calls
* Nova: Update HostState.\*\_allocation_ratio earlier
* tempest: Add placement API methods for testing routed provider nets
* openstack-helm: Build placement in OSH-images
* Correct global_request_id sent to Placement
* Nova: cross cell resize
* Watcher: Remove resource used fields from ComputeNode
* Nova: Scheduler translate properties to traits

# End

Somewhere in this performance work is a lesson for life: Every time I think we've reached the bottom of the "easy stuff", I find yet another bit of easy stuff.

--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent

From jim at jimrollenhagen.com Fri Aug 9 14:25:05 2019
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 9 Aug 2019 10:25:05 -0400
Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint
In-Reply-To: References: Message-ID:

On Fri, Aug 9, 2019 at 9:22 AM Julia Kreger wrote:

> +2 :)
>

Same!

>
> On Fri, Aug 9, 2019 at 9:04 AM Dmitry Tantsur wrote:
> >
> > Hi folks!
> >
> > I'd like to propose adding Riccardo to our stable team. He's been consistently
> > checking stable patches [1], and we're clearly understaffed when it comes to
> > stable reviews. Thoughts?
> >
> > Dmitry
> >
> > [1]
> > https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master
> >
From elmiko at redhat.com Fri Aug 9 14:29:18 2019
From: elmiko at redhat.com (Michael McCune)
Date: Fri, 9 Aug 2019 10:29:18 -0400
Subject: [sdk] Proposing Eric Fried for openstacksdk-core
In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com>
References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com>
Message-ID:

On Thu, Aug 8, 2019 at 5:27 PM Monty Taylor wrote:
>
> Hey all,
>
> I'd like to propose Eric Fried be made core on SDK.
>
> This is slightly different than a normal core proposal, so I'd like to
> say a few more words about it than normal- largely because I think it's
> a pattern we might want to explore in SDK land.
>
> Eric is obviously super smart and capable - he's PTL of Nova after all.
> He's one of the few people in OpenStack that has a handle on version
> discovery, having helped write the keystoneauth support for it. And he's
> core on os-service-types, which is another piece of client-side arcana.
>
> However, he's a busy human, what with being Nova core, and his
> interaction with SDK has been limited to the intersection of it needed
> for Nova integration. I don't expect that to change.
>
> Basically, as it stands now, Eric only votes on SDK patches that have
> impact on the use of SDK in Nova - but when he does they are thorough
> reviews and they indicate "this makes things better for Nova". So I'd
> like to start recognizing such a vote.
>
> As our overall numbers diminish, I think we need to be more efficient
> with the use of our human time - and along with that we need to find new
> ways to trust each other to act on behalf of the project. I'd like to
> give a stab at doing that here.
>
> Thoughts?

sounds entirely reasonable to me, ++

> Monty
>

From thomas at goirand.fr Fri Aug 9 09:34:20 2019
From: thomas at goirand.fr (Thomas Goirand)
Date: Fri, 9 Aug 2019 11:34:20 +0200
Subject: [nova][entropy] what are your rate limits??
In-Reply-To: References: Message-ID: <5c0864c3-d00b-f944-d8e9-7e2e2b3df229@goirand.fr>

On 8/9/19 5:31 AM, Jordan Ansell wrote:
> * What sources of entropy did you use in the hypervisor?

When we need a real, trustable source of entropy, we use a ChaosKey.

https://altusmetrum.org/ChaosKey/

Otherwise, we just install haveged on the host. That's not ideal, but it costs nothing, and it's better than entropy starvation.

I hope this helps,
Cheers,

Thomas Goirand (zigo)

From missile0407 at gmail.com Fri Aug 9 15:24:43 2019
From: missile0407 at gmail.com (Eddie Yen)
Date: Fri, 9 Aug 2019 23:24:43 +0800
Subject: [kolla][nova][cinder] Got Gateway-Timeout error on VM evacuation if it has volume attached.
In-Reply-To: References: Message-ID:

Perhaps, but not so fast. I still need to do more investigation.

On Thu, 8 Aug 2019 at 22:36, Mark Goddard wrote:

>
>
> On Thu, 8 Aug 2019 at 11:39, Eddie Yen wrote:
>
>> Hi Mark, thanks for the suggestion.
>>
>> I think this too. Cinder-api may be fine, but HAProxy could be very busy
>> since one controller is down.
>> I'll try increasing the cinder-api timeout value.
>>
>
> Will you be proposing this fix upstream?
>
>>
>> On Wed, 7 Aug 2019 at 00:06, Mark Goddard wrote:
>>
>>>
>>>
>>> On Tue, 6 Aug 2019 at 16:33, Matt Riedemann wrote:
>>>
>>>> On 8/6/2019 7:18 AM, Mark Goddard wrote:
>>>> > We do use a larger timeout for glance-api
>>>> > (haproxy_glance_api_client_timeout
>>>> > and haproxy_glance_api_server_timeout, both 6h). Perhaps we need
>>>> > something similar for cinder-api.
>>>>
>>>> A 6 hour timeout for cinder API calls would be nuts IMO.
The thing that >>>> was failing was a volume attachment delete/create from what I recall, >>>> which is the newer version (as of Ocata?) for the old >>>> initialize_connection/terminate_connection APIs. These are synchronous >>>> RPC calls from cinder-api to cinder-volume to do things on the storage >>>> backend and we have seen them take longer than 60 seconds in the gate >>>> CI >>>> runs with the lvm driver. I think the investigation normally turned up >>>> lvchange taking over 60 seconds on some concurrent operation locking >>>> out >>>> the RPC call which eventually results in the MessagingTimeout from >>>> oslo.messaging. That's unrelated to your gateway timeout from HAProxy >>>> but the point is yeah you likely want to bump up those timeouts since >>>> cinder-api has these synchronous calls to the cinder-volume service. I >>>> just don't think you need to go to 6 hours :). I think the >>>> keystoneauth1 >>>> default http response timeout is 10 minutes so maybe try that. >>>> >>>> >>> Yeah, wasn't advocating for 6 hours - just showing which knobs are >>> available :) >>> >>> >>>> -- >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Aug 9 16:33:02 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 9 Aug 2019 11:33:02 -0500 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> Message-ID: <20190809163302.GA29942@sm-workstation> > > > > Taking the opportunity to recruit: if you are interested in learning how > > we do release management at scale, help openstack as a whole and are not > > afraid of train-themed dadjokes, join us in #openstack-release ! > > > > After having quite some (years of) experience with release and stable > affairs for ironic, I think I could help here. > > Dmitry > We'd love to have you Dmitry! All of the tasks to do over the course of a release cycle should now be captured here: https://releases.openstack.org/reference/process.html We have a weekly meeting that is currently on Thursday: http://eavesdrop.openstack.org/#Release_Team_Meeting Doug recorded a nice introductory walk through of how to review release requests and some common things to look for. I can't find the link to that at the moment, but will see if we can track that down. Sean From shrews at redhat.com Fri Aug 9 17:08:47 2019 From: shrews at redhat.com (David Shrewsbury) Date: Fri, 9 Aug 2019 13:08:47 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: On Thu, Aug 8, 2019 at 5:26 PM Monty Taylor wrote: > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. > He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And he's > core on os-service-types, which is another piece of client-side arcana. 
> > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". So I'd > like to start recognizing such a vote. > > As our overall numbers diminish, I think we need to be more efficient > with the use of our human time - and along with that we need to find new > ways to trust each other to act on behalf of the project. I'd like to > give a stab at doing that here. > > Thoughts? > Monty > > I whole-heartedly embrace this experiment. Having worked with shade since the beginning of time, and now sdk, I personally know how tremendously difficult it is to be (or even pretend to be) knowledgeable in all of the OpenStack services and code interacting with them. Let's bring in experts with a narrow focus to help with the pieces they know well. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Aug 9 17:23:03 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 9 Aug 2019 13:23:03 -0400 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <20190809163302.GA29942@sm-workstation> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> <20190809163302.GA29942@sm-workstation> Message-ID: > On Aug 9, 2019, at 12:33 PM, Sean McGinnis wrote: > >>> >>> Taking the opportunity to recruit: if you are interested in learning how >>> we do release management at scale, help openstack as a whole and are not >>> afraid of train-themed dadjokes, join us in #openstack-release ! >>> >> >> After having quite some (years of) experience with release and stable >> affairs for ironic, I think I could help here. >> >> Dmitry >> > > We'd love to have you Dmitry! > > All of the tasks to do over the course of a release cycle should now be > captured here: > > https://releases.openstack.org/reference/process.html > > We have a weekly meeting that is currently on Thursday: > > http://eavesdrop.openstack.org/#Release_Team_Meeting > > Doug recorded a nice introductory walk through of how to review release > requests and some common things to look for. I can't find the link to that at > the moment, but will see if we can track that down. > > Sean > I think that IRC transcript became https://releases.openstack.org/reference/reviewer_guide.html Doug From mordred at inaugust.com Fri Aug 9 17:28:14 2019 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 9 Aug 2019 13:28:14 -0400 Subject: [sdk] Proposing Eric Fried for openstacksdk-core In-Reply-To: References: <0a13cdfb-267f-2c6c-9baf-1f1681ec4617@inaugust.com> Message-ID: <76f53ed8-b6d3-5f85-80f7-18fa72920ac6@inaugust.com> On 8/9/19 1:08 PM, David Shrewsbury wrote: > > On Thu, Aug 8, 2019 at 5:26 PM Monty Taylor > wrote: > > Hey all, > > I'd like to propose Eric Fried be made core on SDK. > > This is slightly different than a normal core proposal, so I'd like to > say a few more words about it than normal- largely because I think it's > a pattern we might want to explore in SDK land. > > Eric is obviously super smart and capable - he's PTL of Nova after all. 
> He's one of the few people in OpenStack that has a handle on version > discovery, having helped write the keystoneauth support for it. And > he's > core on os-service-types, which is another piece of client-side arcana. > > However, he's a busy human, what with being Nova core, and his > interaction with SDK has been limited to the intersection of it needed > for Nova integration. I don't expect that to change. > > Basically, as it stands now, Eric only votes on SDK patches that have > impact on the use of SDK in Nova - but when he does they are thorough > reviews and they indicate "this makes things better for Nova". So I'd > like to start recognizing such a vote. > > As our overall numbers diminish, I think we need to be more efficient > with the use of our human time - and along with that we need to find > new > ways to trust each other to act on behalf of the project. I'd like to > give a stab at doing that here. > > Thoughts? > Monty > > > I whole-heartedly embrace this experiment. Having worked with shade since > the beginning of time, and now sdk, I personally know how tremendously > difficult > it is to be (or even pretend to be) knowledgeable in all of the > OpenStack services > and code interacting with them. Let's bring in experts with a narrow > focus to help > with the pieces they know well. Sweet. That seems like a good number of agreement and no dissent. efried - you have been en-core-ified. From johnsomor at gmail.com Fri Aug 9 17:39:35 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 9 Aug 2019 10:39:35 -0700 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> <20190809163302.GA29942@sm-workstation> Message-ID: Thank you Tony for all of your help and work on releases! The Octavia team appreciated it. Michael On Fri, Aug 9, 2019 at 10:24 AM Doug Hellmann wrote: > > > > > On Aug 9, 2019, at 12:33 PM, Sean McGinnis wrote: > > > >>> > >>> Taking the opportunity to recruit: if you are interested in learning how > >>> we do release management at scale, help openstack as a whole and are not > >>> afraid of train-themed dadjokes, join us in #openstack-release ! > >>> > >> > >> After having quite some (years of) experience with release and stable > >> affairs for ironic, I think I could help here. > >> > >> Dmitry > >> > > > > We'd love to have you Dmitry! > > > > All of the tasks to do over the course of a release cycle should now be > > captured here: > > > > https://releases.openstack.org/reference/process.html > > > > We have a weekly meeting that is currently on Thursday: > > > > http://eavesdrop.openstack.org/#Release_Team_Meeting > > > > Doug recorded a nice introductory walk through of how to review release > > requests and some common things to look for. I can't find the link to that at > > the moment, but will see if we can track that down. > > > > Sean > > > > I think that IRC transcript became https://releases.openstack.org/reference/reviewer_guide.html > > Doug > > From dirk at dmllr.de Fri Aug 9 18:14:47 2019 From: dirk at dmllr.de (=?UTF-8?B?RGlyayBNw7xsbGVy?=) Date: Fri, 9 Aug 2019 20:14:47 +0200 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient Message-ID: Hi, For a while the requirements team is trying to go through the process of removing the upper cap on jsonschema to allow the update to jsonschema 3.x. 
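(Concretely, "removing the upper cap" is a one-line change to the jsonschema pin in the openstack/requirements repository, along these lines -- a sketch only, since the exact pin and comment in global-requirements.txt at the time may differ:

   before:  jsonschema>=2.6.0,<3.0.0  # MIT
   after:   jsonschema>=2.6.0  # MIT
)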
The update is becoming more urgent as more and more other (non-OpenStack) projects are moving to require jsonschema >= 3, so we need to move forward as well to keep co-installability and to be able to consume updates of packages to versions that depend on jsonschema >= 3. The current blocker seems to be tripleo-common / os-collect-config depending on python-zaqarclient, which has had a broken gate since the merge of: http://specs.openstack.org/openstack/zaqar-specs/specs/stein/remove-pool-group-totally.html on the server side, which was done here: https://review.opendev.org/#/c/628723/ The python-zaqarclient functional tests were never adjusted accordingly and have now been failing for more than five months; as a consequence, many patches for zaqarclient, including the one uncapping jsonschema, are piling up. It looks like no real merge activity has happened since https://review.opendev.org/#/c/607553/ which is a bit more than 6 months ago. How should we move forward? Doing a release of zaqarclient that still exercises an API which got removed server side doesn't seem to be a terribly great idea, plus we still need to merge one of my patches (either the one that makes functional testing non-voting, or the brutal "let's drop all tests that fail" patch). On the other side, I don't know how feasible it is for TripleO to drop the dependency on os-collect-config, or for os-collect-config to drop the dependency on zaqar. Any suggestion on how to move forward? TIA, Dirk From mnaser at vexxhost.com Fri Aug 9 18:15:13 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 9 Aug 2019 14:15:13 -0400 Subject: [tc] meeting summary for aug. 8 2019 Message-ID: Hi everyone, The TC held its monthly meeting on the 8th of August 2019, and this email provides a summary of that meeting. Jeremy Stanley (fungi) was added as TC Liaison for the Image Encryption pop-up team and their proposed resolution on proper retirement procedures was approved and merged. Swift is now working in Python 3 and inside DevStack, so this puts us in a really good place to continue with Python 3 efforts in Swift. Graham Hayes (mugsie) is currently working on the code for the proposal bot so that when we cut a branch, it automatically pushes up a patch to add the ‘python3 jobs’ for that series. Thierry Carrez (ttx) organized the milestone 2 forum meeting with TC members. We have Jim Rollenhagen (jroll) and maybe Graham Hayes (mugsie) volunteering for the programming committee. The proposal for making goal selection a two-step process has been needing reviews for a while, so I encourage the community, as well as other TC members, to have a look at it. We also talked about who’s attending the Shanghai Summit (here’s the Etherpad with the list of who’s going: https://etherpad.openstack.org/p/PVG-TC-PTG) and who will be attending the leadership meeting. We think that everybody already on the TC going to PTG will be there, but we’ll only be able to know the exact number at the end of the election, so towards the end of September. We’re also thinking about starting a “Large Scale” SIG so people could collaborate in tackling more of the scaling issues together. Thierry Carrez (ttx) and I will be looking into that by mentioning the idea to LINE and YahooJapan (as some prospective at-scale operators) to see what they think and also make a list of organizations that could be interested.
Rico Lin (ricolin) will also update the SIG guidelines documents to make the whole process easier, and Jim Rollenhagen (jroll) will try to bring this up at Verizon Media. Finally, we talked about an issue regarding CI maintainers associated with Cinder not keeping their systems up to date, in particular not migrating them to Python 3.7. Half of those drivers will be deprecated since they still run on Python 2.7, which won’t be supported starting with the ‘U’ release. Jay Bryant tried contacting them all individually but most didn’t answer (a lot of contact info isn’t up to date). If you know someone who maintains a Cinder driver in-tree, please have them double-check on this. I hope that I covered most of what we discussed; for the full meeting logs, you can find them here: http://eavesdrop.openstack.org/meetings/tc/2019/tc.2019-08-08-14.00.log.html Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From openstack at nemebean.com Fri Aug 9 18:59:51 2019 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 9 Aug 2019 13:59:51 -0500 Subject: [tc] meeting summary for aug. 8 2019 In-Reply-To: References: Message-ID: <624207ba-b55e-768e-4caf-36995908ca2a@nemebean.com> On 8/9/19 1:15 PM, Mohammed Naser wrote: > Hi everyone, > > The TC held its monthly meeting on the 8th of August 2019 and this > email provides a summary of that meeting > > Jeremy Stanley (fungi) was added as TC Liaison for the Image > Encryption pop-up team and their proposed resolution on proper > retirement procedures was approved and merged. > > Swift is now working in Python 3 and inside DevStack so this puts us > in a really good place to continue with Python 3 efforts in Swift. \o/ It's been a long journey, nice work to everyone who made it happen! Also, this feels like something someone should successbot (I would do it myself, but it feels weird since I had nothing to do with it). From corey.bryant at canonical.com Fri Aug 9 19:12:56 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 9 Aug 2019 15:12:56 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-5) Message-ID: This is the goal-5 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. There are 5 weeks remaining for completion of Train community goals [2]. == How can you help? == If your project has failing tests please take a look and help fix. Python 3.7 unit tests will be self-testing in Zuul. If your project has patches with successful tests please help get them merged. Failing patches: https://review.openstack.org/#/q/topic:python3-train+status:open+(+label:Verified-1+OR+label:Verified-2+) Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-train+is:open Patch automation scripts needing review: https://review.opendev.org/#/c/666934 == Ongoing Work == Today I reached out to all PTLs who have projects with failing patches, to ask for their help with getting tests to pass. == Completed Work == All patches have been submitted to all applicable projects for this goal. Merged patches: https://review.openstack.org/#/q/topic:python3-train+is:merged == What's the Goal? == To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing.
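(As a concrete illustration, switching a repository over is usually a small .zuul.yaml change along the lines of the sketch below. The 'openstack-python3-train-jobs' template name is the real one from this goal; the rest of the project stanza is a placeholder.)

   # .zuul.yaml -- minimal sketch
   - project:
       templates:
         - check-requirements              # whatever the project already uses
         - openstack-python3-train-jobs    # runs py36 and py37 unit tests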
This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1]. == Reference Material == [1] Goal description: https://governance.openstack.org/tc/goals/train/python3-updates.html [2] Train release schedule: https://releases.openstack.org/train/schedule.html (see R-5 for "Train Community Goals Completed") Storyboard: https://storyboard.openstack.org/#!/story/2005924 Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7 Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/train.rst Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Aug 9 19:45:45 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Fri, 9 Aug 2019 19:45:45 +0000 Subject: [Edge-computing] [ironic][ops] Taking ironic nodes out of production In-Reply-To: References: <08cb8294-04c8-e4ba-78c0-dec00f87156a@redhat.com> <6A205BFA-881E-4D2D-9A7D-E35935F6631B@est.tech> <09e4bfaa95404bcfba37ee63f6bf1189@AUSX13MPS304.AMER.DELL.COM> Message-ID: <0a38187f191c4a739fc7ed106a4188c9@AUSX13MPS308.AMER.DELL.COM> Julia, For #3, what I was trying to cover is the case where Ironic is used to manage servers for multiple different platform clusters, like two different OpenStack clusters that share a single Ironic, or one OpenStack and one Kubernetes cluster with Ironic shared between them. This use case supports taking a node from one platform cluster, cleaning it up, and allocating it to another platform cluster. Thanks, Arkady -----Original Message----- From: Julia Kreger Sent: Tuesday, May 21, 2019 12:33 PM To: Kanevsky, Arkady Cc: Christopher Price; Bogdan Dobrelya; openstack-discuss; edge-computing at lists.openstack.org Subject: Re: [Edge-computing] [ironic][ops] Taking ironic nodes out of production [EXTERNAL EMAIL] On Tue, May 21, 2019 at 5:55 AM wrote: > > Let's dig deeper into requirements. > I see three distinct use cases: > 1. Put node into maintenance mode. Say to upgrade FW/BIOS or any other life-cycle event. It stays in the ironic cluster but it is no longer in use by the rest of openstack, like Nova. > 2. Put node into "fail" state. That is, remove it from usage and remove it from the Ironic cluster. What cleanup the operator would like to do (or can do) depends on the failure. Depending on the node type it may need to be "replaced", or troubleshot by a human, and could be returned to a non-failure state. I think largely the only way we as developers could support that is to allow for hook scripts to be called upon entering/exiting such a state. That being said, at least from what Beth was saying at the PTG, this seems to be one of the most important states. > 3. Put node into "available" for other usage. What cleanup the operator wants to do will need to be defined. This is a very similar step to the one used for Baremetal as a Service when a node is reassigned back into the available pool. Depending on the next usage of a node it may stay in the Ironic cluster or may be removed from it. Once removed it can be "retired" or used for any other purpose. Do you mean "unprovision" a node and move it through cleaning? I'm not sure I understand what you're trying to get across. There is a case where a node would have been moved to a "failed" state, and could be "unprovisioned".
If we reach the point where we are able to unprovision, it seems like we might be able to re-deploy, so maybe the option is to automatically move to state which is kind of like bucket for broken nodes? > > Thanks, > Arkady > > -----Original Message----- > From: Christopher Price > Sent: Tuesday, May 21, 2019 3:26 AM > To: Bogdan Dobrelya; openstack-discuss at lists.openstack.org; > edge-computing at lists.openstack.org > Subject: Re: [Edge-computing] [ironic][ops] Taking ironic nodes out of > production > > > [EXTERNAL EMAIL] > > I would add that something as simple as an operator policy could/should be able to remove hardware from an operational domain. It does not specifically need to be a fault or retirement, it may be as simple as repurposing to a different operational domain. From an OpenStack perspective this should not require any special handling from "retirement", it's just to know that there may be time constraints implied in a policy change that could potentially be ignored in a "retirement scenario". > > Further, at least in my imagination, one might be reallocating > hardware from one Ironic domain to another which may have implications > on how we best bring a new node online. (or not, I'm no expert) end dubious thought stream> > > / Chris > > On 2019-05-21, 09:16, "Bogdan Dobrelya" wrote: > > [CC'ed edge-computing at lists.openstack.org] > > On 20.05.2019 18:33, Arne Wiebalck wrote: > > Dear all, > > > > One of the discussions at the PTG in Denver raised the need for > > a mechanism to take ironic nodes out of production (a task for > > which the currently available 'maintenance' flag does not seem > > appropriate [1]). > > > > The use case there is an unhealthy physical node in state 'active', > > i.e. associated with an instance. The request is then to enable an > > admin to mark such a node as 'faulty' or 'in quarantine' with the > > aim of not returning the node to the pool of available nodes once > > the hosted instance is deleted. > > > > A very similar use case which came up independently is node > > retirement: it should be possible to mark nodes ('active' or not) > > as being 'up for retirement' to prepare the eventual removal from > > ironic. As in the example above, ('active') nodes marked this way > > should not become eligible for instance scheduling again, but > > automatic cleaning, for instance, should still be possible. > > > > In an effort to cover these use cases by a more general > > "quarantine/retirement" feature: > > > > - are there additional use cases which could profit from such a > > "take a node out of service" mechanism? > > There are security related examples described in the Edge Security > Challenges whitepaper [0] drafted by k8s IoT SIG [1], like in the > chapter 2 Trusting hardware, whereby "GPS coordinate changes can be used > to force a shutdown of an edge node". So a node may be taken out of > service as an indicator of a particular condition of edge hardware. > > [0] > https://docs.google.com/document/d/1iSIk8ERcheehk0aRG92dfOvW5NjkdedN8F7mSUTr-r0/edit#heading=h.xf8mdv7zexgq > [1] > https://github.com/kubernetes/community/tree/master/wg-iot-edge > > > > > - would these use cases put additional constraints on how the > > feature should look like (e.g.: "should not prevent cleaning") > > > > - are there other characteristics such a feature should have > > (e.g.: "finding these nodes should be supported by the cli") > > > > Let me know if you have any thoughts on this. 
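(For orientation, the mechanism Arne's mail calls insufficient -- the existing maintenance flag -- looks roughly like this from python-ironicclient; a sketch that assumes `ironic` is an already-authenticated v1 client object and `node_uuid` is a known node. The "quarantine"/"retirement" states discussed in this thread do not exist yet.)

   # Sketch: today's maintenance flag, which hides a node from scheduling
   # but does not express "failed" or "up for retirement" semantics.
   ironic.node.set_maintenance(node_uuid, True,
                               maint_reason='unhealthy, do not return to pool')
   # ...and clearing it once the node has been dealt with:
   ironic.node.set_maintenance(node_uuid, False)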
> > > > Cheers, > > Arne > > > > > > [1] https://etherpad.openstack.org/p/DEN-train-ironic-ptg, l. 360 > > > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing > > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing _______________________________________________ Edge-computing mailing list Edge-computing at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From Tim.Bell at cern.ch Fri Aug 9 20:21:54 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 9 Aug 2019 20:21:54 +0000 Subject: [tc] meeting summary for aug. 8 2019 In-Reply-To: References: Message-ID: > On 9 Aug 2019, at 20:15, Mohammed Naser wrote: > > Hi everyone, > > The TC held it’s monthly meeting on the 8th of August 2019 and this > email provides a summary of that meeting > > ... > > We’re also thinking about starting a “Large Scale” SIG so people could > collaborate in tackling more of the scaling issues together. Thierry > Carrez (ttx) and I will be looking into that by mentioning the idea to > LINE and YahooJapan (as some perspective at-scale operators)to see > what they think and also make a list of organizations that could be > interested. Rico Lin (ricolin) will also update the SIG guidelines > documents to make the whole process easier and Jim Rollenhagen (jroll) > will try and bring this up at Verizon Media. > How about a forum brainstorm in Shanghai ? Tim > > > Thanks for tuning in! > > Regards, > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com > From alifshit at redhat.com Fri Aug 9 21:11:02 2019 From: alifshit at redhat.com (Artom Lifshitz) Date: Fri, 9 Aug 2019 17:11:02 -0400 Subject: [nova] NUMA live migration is ready for review and testing Message-ID: tl;dr If you care about NUMA live migration, check out [1] and test in in your env(s), or review it. Over the months that I've worked on NUMA LM, I've been pinged by various folks that were interested in helping out. At this point I've addressed all the issues that were found at the end of the Stein cycle, and the series is ready for review and testing, with the aim of getting it merged in Train (for real this time). So if you care about NUMA-aware live migration and have some spare time and hardware (if you're in the former category I don't think I need to explain what kind of hardware - though I'll try to answer questions as best I can), I would greatly appreciate it if you deployed the patches and tested them. I've done that myself, of course, but, as at the end of Stein, I'm sure there are edge cases that I didn't think of (though I'm selfishly hoping that there aren't). I believe the series is also ready for review, though I haven't put it in the runway queue just yet because the last functional test patch is still a WIP, as I need to fiddle with it to assert more things. Thanks in advance, cheers! 
[1] https://review.opendev.org/#/c/672595/8 From sundar.nadathur at intel.com Fri Aug 9 21:25:33 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Fri, 9 Aug 2019 21:25:33 +0000 Subject: [cyborg] [release] Process to discontinue os-acc Message-ID: <1CC272501B5BC543A05DB90AA509DED5275FE6AE@fmsmsx122.amr.corp.intel.com> Hi, A project called os-acc [1] was created in Stein cycle based on an expectation that it will be used for Cyborg - Nova integration. It is not relevant anymore and we have no plans to support it in Train. It needs to be discontinued. What is the process for doing that? A part of that is presumably to update or delete the os-acc release yaml [2]. What else is needed? [1] https://opendev.org/openstack/os-acc/ [2] https://opendev.org/openstack/releases/src/branch/master/deliverables/train/os-acc.yaml Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Aug 9 21:58:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Aug 2019 21:58:10 +0000 Subject: [cyborg] [release] Process to discontinue os-acc In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5275FE6AE@fmsmsx122.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED5275FE6AE@fmsmsx122.amr.corp.intel.com> Message-ID: <20190809215809.ufqvysqsax4uthah@yuggoth.org> On 2019-08-09 21:25:33 +0000 (+0000), Nadathur, Sundar wrote: > A project called os-acc [1] was created in Stein cycle based on an > expectation that it will be used for Cyborg - Nova integration. It > is not relevant anymore and we have no plans to support it in > Train. > > It needs to be discontinued. What is the process for doing that? A > part of that is presumably to update or delete the os-acc release > yaml [2]. What else is needed? [...] For instructions on retiring a repository, see: https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From colleen at gazlene.net Fri Aug 9 23:44:12 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 09 Aug 2019 16:44:12 -0700 Subject: [keystone] Keystone Team Update - Week of 5 August 2019 Message-ID: <6dee223e-2752-4628-b70f-d7d81d19d235@www.fastmail.com> # Keystone Team Update - Week of 5 August 2019 ## News ### CI instability update To follow up from this topic from last week, we came up with a solution[1][2] that at least reduces the size of the unit test log files to an acceptable, non-browser-crashing size. Unfortunately that didn't seem to be the root cause of the frequent timeouts, so it's unclear (to me, at least) whether the issue stems from a problem in our unit tests or if we're just getting unlucky with noisy neighbors. It could be as simple as needing to raise the timeout to account for all the additional protection tests we've added in the past few months. [1] https://review.opendev.org/673932 [2] https://review.opendev.org/673933 ### Call for help We need help completing the system-scope and default roles policy updates[3][4] before the end of the cycle, as operators cannot safely enable [oslo_policy]/enforce_scope until all of them are completed. For the most part, the task involves updating the scope_types option in the policy and adding a ton of unit tests. The already completed work[5][6] can serve as an example for what's needed. 
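(For anyone picking this up, the shape of the change is roughly the following oslo.policy pattern -- a sketch with made-up rule names, not an actual keystone policy; keystone's real files under keystone/common/policies/ follow this structure:)

   # Sketch: declare scope_types and deprecate the legacy admin-only rule.
   from oslo_log import versionutils
   from oslo_policy import policy

   deprecated_get_widget = policy.DeprecatedRule(
       name='identity:get_widget',
       check_str='rule:admin_required')

   widget_policies = [
       policy.DocumentedRuleDefault(
           name='identity:get_widget',
           # allow system readers instead of only admins
           check_str='role:reader and system_scope:all',
           scope_types=['system'],
           description='Show widget details.',
           operations=[{'path': '/v3/widgets/{widget_id}',
                        'method': 'GET'}],
           deprecated_rule=deprecated_get_widget,
           deprecated_reason='Widget policies now support system scope '
                             'and default roles.',
           deprecated_since=versionutils.deprecated.TRAIN),
   ]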
[3] https://bugs.launchpad.net/keystone/+bugs?field.tag=default-roles [4] https://bugs.launchpad.net/keystone/+bugs?field.tag=system-scope [5] https://bugs.launchpad.net/keystone/+bugs?field.status%3Alist=FIXRELEASED&field.tag=default-roles [6] https://bugs.launchpad.net/keystone/+bugs?field.status%3Alist=FIXRELEASED&field.tag=system-scope ### PTG attendance and Forum Planning Based on our poll[7] it's looking like there are not enough keystone-minded people planning to attend the Shanghai PTG to warrant requesting a room, so I will likely tell Kendall that we don't need a room unless something changes very soon. Even if you won't be attending, please use that etherpad to add topics you would like to see discussed at the Forum. We can use those discussions as a jumping off point for our pre- and post-PTG virtual gatherings. [7] https://etherpad.openstack.org/p/keystone-shanghai-ptg ## Office Hours When there are topics to cover, the keystone team holds office hours on Tuesdays at 17:00 UTC. We will skip next week's office hours since we don't have a topic planned. Add topics you would like to see covered during office hours to the etherpad: https://etherpad.openstack.org/p/keystone-office-hours-topics ## Open Specs Train specs: https://bit.ly/2uZ2tRl Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 9 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 43 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ### Priority Reviews * Train Roadmap Stories - https://review.opendev.org/#/q/topic:bug/1818734 (system scope and default roles) - https://review.opendev.org/#/q/topic:implement-default-roles+is:open (system scope and default roles) - https://review.opendev.org/#/q/topic:bp/whitelist-extension-for-app-creds+is:open (application credential access rules) - https://review.opendev.org/672120 (caching guide) - https://review.opendev.org/#/q/project:openstack/oslo.limit+topic:rewrite+is:open (oslo.limit) * Needs Discussion - https://review.opendev.org/618144 (Reparent Projects) - https://review.opendev.org/674940 (Make policy deprecation reasons less verbose) - https://review.opendev.org/675303 (Allows LDAP extra attributes to be exposed to the end user) ## Bugs This week we opened 4 new bugs and closed 4. Bugs opened (4)  Bug #1839393 (keystone:Low) opened by Matthew Thode https://bugs.launchpad.net/keystone/+bug/1839393  Bug #1839133 (keystone:Undecided) opened by Radosław Piliszek https://bugs.launchpad.net/keystone/+bug/1839133  Bug #1839441 (keystone:Undecided) opened by Jose Castro Leon https://bugs.launchpad.net/keystone/+bug/1839441  Bug #1839577 (keystone:Undecided) opened by Adrian Turjak https://bugs.launchpad.net/keystone/+bug/1839577  Bugs fixed (4)  Bug #1773967 (keystone:High) fixed by Jose Castro Leon https://bugs.launchpad.net/keystone/+bug/1773967  Bug #1838592 (keystone:High) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1838592  Bug #1709344 (keystone:Low) fixed by Adrian Turjak https://bugs.launchpad.net/keystone/+bug/1709344  Bug #1837741 (oslo.policy:High) fixed by no one https://bugs.launchpad.net/oslo.policy/+bug/1837741 ## Milestone Outlook https://releases.openstack.org/train/schedule.html Feature proposal freeze is NEXT WEEK (August 12-August 16). 
Spec implementations that are not submitted or still in a WIP state by the end of the week will need to be postponed until next cycle unless we agree on an exception. Code implementing system scope and default roles policy work will be accepted until feature freeze week (September 9-September 13). If you are able, please help by picking up some of these tasks[7][8] or helping to review them (thanks Vishakha for jumping on the endpoint groups policies!). Final release of non-client libraries is the week of September 2, which allows us about three weeks to both implement and review library changes needed for this cycle. [7] https://bugs.launchpad.net/keystone/+bugs?field.tag=default-roles [8] https://bugs.launchpad.net/keystone/+bugs?field.tag=system-scope ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From gagehugo at gmail.com Fri Aug 9 23:56:13 2019 From: gagehugo at gmail.com (Gage Hugo) Date: Fri, 9 Aug 2019 18:56:13 -0500 Subject: [Security SIG] Weekly Newsletter Aug 08 2019 Message-ID: Of note, OSSA-2019-003 was released this week # Week of: 08 Aug 2019 - Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting - Weekly on Thursday at 1500 UTC in #openstack-meeting - Agenda: https://etherpad.openstack.org/p/security-agenda - https://security.openstack.org/ - https://wiki.openstack.org/wiki/Security-SIG # Meeting Notes - Summary: http://eavesdrop.openstack.org/meetings/security/2019/security.2019-08-08-15.00.html - Announced OSSA-2019-003 release on Tuesday August 06th 2019 - Image Encryption Spec - image encryption spec for nova unlikely to get a freeze exception - Will likely polish it up, target an early 'U' release # VMT Reports - A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/ - OSSA-2019-003 was released this week: https://security.openstack.org/ossa/OSSA-2019-003.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From flux.adam at gmail.com Sat Aug 10 12:09:41 2019 From: flux.adam at gmail.com (Adam Harwell) Date: Sat, 10 Aug 2019 21:09:41 +0900 Subject: [nova-lxd] no one uses nova-lxd? In-Reply-To: References: Message-ID: Yeah we're definitely looking into Zun, which is probably a better approach going forward, but it's a very different implementation and we were pretty close with the old one. Didn't mean to make it sound like we'd stop working on containerization, it's just a setback. :D On Fri, Aug 9, 2019, 07:18 Sean Mooney wrote: > On Fri, 2019-08-09 at 07:05 +0900, Adam Harwell wrote: > > Octavia was looking at doing a proof of concept container based backend > > driver using nova-lxd, and some work had been slowly ongoing for the past > > couple of years. But, it looks like we also will have to completely > abandon > > that effort if the driver is dead. Shame. :( > > you could try the nova libvirt driver with virt_type=lxc or zun instead > > > > --Adam > > > > On Fri, Aug 9, 2019, 05:19 Donny Davis wrote: > > > > > https://docs.openstack.org/nova/stein/configuration/config.html > > > > > > Looks like you can still use LXC if that fits your use case. It also > looks > > > like there are images that are current here > > > https://us.images.linuxcontainers.org/ > > > > > > Not sure the state of the driver, but maybe give it a whirl and let us > > > know how it goes.
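(For reference, the libvirt/LXC fallback Sean mentions above is driven by two real nova configuration options; the values below are just for illustration and say nothing about how well the LXC backend is maintained:)

   # /etc/nova/nova.conf -- minimal sketch
   [DEFAULT]
   compute_driver = libvirt.LibvirtDriver

   [libvirt]
   # run instances as LXC containers instead of KVM/QEMU virtual machines
   virt_type = lxc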
> > > > > > > > > > > > > > > On Thu, Aug 8, 2019 at 3:28 PM Martinx - ジェームズ < > thiagocmartinsc at gmail.com> > > > wrote: > > > > > > > Hey Adrian, > > > > > > > > I was playing with Nova LXD with OpenStack Ansible, I have an > example > > > > here: > > > > > > > > https://github.com/tmartinx/openstack_deploy/tree/master/group_vars > > > > > > > > It's too bad that Nova LXD is gone... :-( > > > > > > > > I use LXD a lot in my Ubuntu servers (not openstack based), so, next > > > > step would be to deploy bare-metal OpenStack clouds with it but, I > canceled > > > > my plans. > > > > > > > > Cheers! > > > > Thiago > > > > > > > > On Fri, 26 Jul 2019 at 15:37, Adrian Andreias > wrote: > > > > > > > > > Hey, > > > > > > > > > > We were planing to migrate some thousand containers from OpenVZ 6 > to > > > > > Nova-LXD this fall and I know at least one company with the same > plans. > > > > > > > > > > I read the message about current team retiring from the project. > > > > > > > > > > Unfortunately we don't have the manpower to invest heavily in the > > > > > project development. > > > > > We would however be able to allocate a few hours per month, at > least for > > > > > bug fixing. > > > > > > > > > > So I'm curios if there are organizations using or planning to use > > > > > Nova-LXD in production and they have the know-how and time to > contribute. > > > > > > > > > > It would be a pity if the the project dies. > > > > > > > > > > > > > > > Cheers! > > > > > - Adrian Andreias > > > > > https://fleio.com > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Sat Aug 10 17:45:06 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sun, 11 Aug 2019 01:45:06 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> Message-ID: After discussion in this ML and in irc [1], I will finalize the U release name candidate list [2] and will go forward to create a public poll at 2019-08-12. Here is finalized name list from etherpad: - 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet transcription of the name) - 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) - 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) - 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) - 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) - 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in Mongolian; this is a common Latin-alphabet transcription of the name) - Ula "Miocene Baogeda Ula" (the name is in Mongolian) - Uma http://www.fallingrain.com/world/CH/20/Uma.html So thanks to all who help with propose names, provide solutions or join discussions. And big thanks for Doug who put a significant amount of effort on this. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 [2] https://etherpad.openstack.org/p/u-name-poll-email On Wed, Aug 7, 2019 at 10:58 PM Rico Lin wrote: > > > On Wed, Aug 7, 2019 at 10:30 PM James E. 
Blair > wrote: > > > > Sorry if I wasn't clear, I had already added it to the wiki page more > > than a week ago -- you can still see my entry there at the bottom of the > > list of names that do meet the criteria. Here's the diff: > > > > > https://wiki.openstack.org/w/index.php?title=Release_Naming%2FU_Proposals&type=revision&diff=171231&oldid=171132 > > > > Also, I do think this meets the criteria, since there is a place in > > Shanghai with "University" in the name. This is similar to "Pike" which > > is short for the "Massachusetts Turnpike", which was deemed to meet the > > criteria for the P naming poll. > > > As we discussed in IRC:#openstack-tc, change the reference from general > Universities to specific University will make it meet the criteria "The > name must refer to the physical or human geography" > Added it back to the 'meet criteria' list and update it with reference to > specific university "University of Shanghai for Science and Technology". > feel free to correct me, if I misunderstand the criteria rule. :) > > > Of course, as the coordinator it's up to you to determine whether it > > meets the criteria, but I believe it does, and hope you agree. > > > > > Thanks, > > > > Jim > > > > -- > May The Force of OpenStack Be With You, > Rico Lin > irc: ricolin > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Sat Aug 10 20:30:14 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 10 Aug 2019 16:30:14 -0400 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient In-Reply-To: References: Message-ID: On Fri, Aug 9, 2019 at 2:18 PM Dirk Müller wrote: > > Hi, > > For a while the requirements team is trying to go through the process > of removing the upper cap > on jsonschema to allow the update to jsonschema 3.x. The update > for that is becoming more urgent as more and more other > (non-OpenStack) projects are going > with requiring jsonschema >= 3, so we need to move forward as well to > keep co-installability > and be able to consume updates of packages to versions that depend on > jsonschema >= 3. > > The current blocker seems to be tripleo-common / os-collect-config > depending on python-zaqarclient, > which has a broken gate since the merge of: > > http://specs.openstack.org/openstack/zaqar-specs/specs/stein/remove-pool-group-totally.html > > on the server side, which was done here: > > https://review.opendev.org/#/c/628723/ > > The python-zaqarclient functional tests have not been correspondingly > adjusted, and are failing > for more than 5 months meanwhile, in consequence many patches for > zaqarclient, including > the one uncapping jsonschema are piling up. It looks like no real > merge activity happened since > > https://review.opendev.org/#/c/607553/ > > which is a bit more than 6 months ago. How should we move forward? > doing a release of zaqarclient > using some implementation of an API that got removed server side > doesn't seem to be a terribly great > idea, plus that we still need to merge either one of my patches (one > that makes functional testing non-voting > or the brutal "lets drop all tests that fail" patch). On the other > side, I don't know how feasible it is for Triple-O > to drop the dependency on os-collect-config or os-collect-config to > drop the dependency on zaqar. > > Any suggestion on how to move forward? 
I'm going to reach out to the PTL who seems to be active as the last proposed change was 5 days ago by them. > TIA, > Dirk > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From miguel at mlavalle.com Sun Aug 11 16:51:02 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 11 Aug 2019 11:51:02 -0500 Subject: [openstack-dev] [neutron] Propose Rodolfo Alonso for Neutron core In-Reply-To: References: Message-ID: Dear Neutrinos, It has been a week since this nomination was sent out to the community and it has only received positive feedback. As a consequence, I have added Rodolfo to the Neutron core team. Congratulations and keep up the good work! Best regards Miguel On Sun, Aug 4, 2019 at 1:52 PM Miguel Lavalle wrote: > Dear Neutrinos, > > I want to nominate Rodolfo Alonso (irc:ralonsoh) as a member of the > Neutron core team. Rodolfo has been an active contributor to Neutron since > the Mitaka cycle. He has been a driving force over these years in the > implementation an evolution of Neutron's QoS feature, currently leading the > sub-team dedicated to it. Recently he has been working on improving the > interaction with Nova during the port binding process, driven the adoption > of Pyroute2 and has become very active in fixing all kinds of bugs. The > quality and number of his code reviews during the Train cycle are > comparable with the leading members of the core team: > https://www.stackalytics.com/?release=train&module=neutron-group. In my > opinion, Rodolfo will be a great addition to the core team. > > I will keep this nomination open for a week as customary. > > Best regards > > Miguel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Sun Aug 11 17:30:32 2019 From: corvus at inaugust.com (James E. Blair) Date: Sun, 11 Aug 2019 10:30:32 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Rico Lin's message of "Sun, 11 Aug 2019 01:45:06 +0800") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> Message-ID: <87pnlbehjb.fsf@meyer.lemoncheese.net> Rico Lin writes: > After discussion in this ML and in irc [1], I will finalize the U release > name candidate list [2] and will go forward to create a public poll > at 2019-08-12. > Here is finalized name list from etherpad: > > - 乌苏里江 Ussuri https://en.wikipedia.org/wiki/Ussuri_River (the name is > shared among Mongolian/Manchu/Russian; this is a common Latin-alphabet > transcription of the name) > - 乌兰察布市 Ulanqab https://en.wikipedia.org/wiki/Ulanqab (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name) > - 乌兰浩特市 Ulanhot https://en.wikipedia.org/wiki/Ulanhot (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name) > - 乌兰苏海组 Ulansu (Ulansu sea) (the name is in Mongolian) > - 乌拉特中旗 Urad https://en.wikipedia.org/wiki/Urad_Middle_Banner (the name > is in Mongolian; this is a common Latin-alphabet transcription of the name) > - 东/西乌珠穆沁旗 Ujimqin https://en.wikipedia.org/wiki/Ujimqin (the name is in > Mongolian; this is a common Latin-alphabet transcription of the name) > - Ula "Miocene Baogeda Ula" (the name is in Mongolian) > - Uma http://www.fallingrain.com/world/CH/20/Uma.html > > So thanks to all who help with propose names, provide solutions or join > discussions. 
And big thanks for Doug who put a significant amount of effort > on this. Hi, I object to the omission of University (which I thought, based on the previous email, had been determined to have meet the criteria). If I had known there would be a followup conversation, I would have participated. I still do believe that it meets all of the criteria. In particular, it meets this: * The name must refer to the physical or human geography of the region encompassing the location of the OpenStack summit for the corresponding release. It is short for "University of Shanghai for Science and Technology", which is a place in Shanghai. Here is their website: http://en.usst.edu.cn/ Moreover, it met the criteria *before* it was enlarged to include all of China. The subtext of this name is that Shanghai is famous for its Universities, and it has a lot of them. Wikipedia lists 36. The most famous of which is Fudan -- the first institution of higher education to be founded by a Chinese person. It is, in short, a name to honor the unique qualities of our host city. It deserves to be considered. -Jim From fungi at yuggoth.org Sun Aug 11 18:03:05 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 11 Aug 2019 18:03:05 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87pnlbehjb.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> Message-ID: <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: [...] > I still do believe that it meets all of the criteria. In particular, it > meets this: > > * The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack summit for the > corresponding release. > > It is short for "University of Shanghai for Science and Technology", > which is a place in Shanghai. Here is their website: > http://en.usst.edu.cn/ [...] This got discussed after last week's TC meeting during Thursday office hours, and I'm sorry I didn't think to give you a heads-up when the topic arose: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 One of the objections raised was that "University" in the name "University of Shanghai for Science and Technology" was a general class of place or feature and not a particular place or feature. But as you pointed out in IRC a while back (and which I should have remembered), there is precedent with the Pike cycle name: Pike (the Massachusetts Turnpike, also the Mass Pike...) https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names Another objection raised is that "OpenStack University" was the old name for what we now call the OpenStack Upstream Institute and that it could lead to name confusion if chosen. A search of the Web for that name last week turned up only two occurrences for me on the first page of results, both of which were lingering references in our wiki which I immediately corrected, so I don't think that argument holds. Then there was the suggestion that "University" might somehow be a trademark risk, though in my opinion that's why we have the OSF vet the preliminary winning results after the community ranks them (so that the TC doesn't need to concern itself with trademark issues). 
It was also pointed out that each time we have a poll with a mix of English and non-English names/words, an English name inevitably wins. Since this concern isn't backed up by the documented process[*] we're ostensibly following, I'm not really sure how to address it. Ultimately I was unable to convince my colleagues on the TC that "University" was a qualifying name, and so it was handled as a possible exception to the normal rules which, following a poll of most TC members, was decided would not be granted. [*] https://governance.openstack.org/tc/reference/release-naming.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Sun Aug 11 18:15:00 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 12 Aug 2019 02:15:00 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> Message-ID: Just to make sure everyone is aware of this information: here is some discussion that happened on irc in #openstack-tc while this mail was being sent. http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-11.log.html#t2019-08-11T16:37:03 On Mon, Aug 12, 2019 at 2:07 AM Jeremy Stanley wrote: > On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: > [...] > > I still do believe that it meets all of the criteria. In particular, it > > meets this: > > > > * The name must refer to the physical or human geography of the region > > encompassing the location of the OpenStack summit for the > > corresponding release. > > > > It is short for "University of Shanghai for Science and Technology", > > which is a place in Shanghai. Here is their website: > > http://en.usst.edu.cn/ > [...] > > This got discussed after last week's TC meeting during Thursday > office hours, and I'm sorry I didn't think to give you a heads-up > when the topic arose: > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 > > One of the objections raised was that "University" in the name > "University of Shanghai for Science and Technology" was a general > class of place or feature and not a particular place or feature. But > as you pointed out in IRC a while back (and which I should have > remembered), there is precedent with the Pike cycle name: > > Pike (the Massachusetts Turnpike, also the Mass Pike...) > > https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names > > Another objection raised is that "OpenStack University" was the old > name for what we now call the OpenStack Upstream Institute and that > it could lead to name confusion if chosen. A search of the Web for > that name last week turned up only two occurrences for me on the > first page of results, both of which were lingering references in > our wiki which I immediately corrected, so I don't think that > argument holds. > > Then there was the suggestion that "University" might somehow be a > trademark risk, though in my opinion that's why we have the OSF vet > the preliminary winning results after the community ranks them (so > that the TC doesn't need to concern itself with trademark issues). > > It was also pointed out that each time we have a poll with a mix of > English and non-English names/words, an English name inevitably > wins. Since this concern isn't backed up by the documented > process[*] we're ostensibly following, I'm not really sure how to > address it. > > Ultimately I was unable to convince my colleagues on the TC that > "University" was a qualifying name, and so it was handled as a > possible exception to the normal rules which, following a poll of > most TC members, was decided would not be granted. > > [*] https://governance.openstack.org/tc/reference/release-naming.html > -- > Jeremy Stanley > -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From yikunkero at gmail.com Mon Aug 12 03:03:01 2019 From: yikunkero at gmail.com (Yikun Jiang) Date: Mon, 12 Aug 2019 11:03:01 +0800 Subject: [cinder] [3rd party ci] Deadline Has Past for Python3 Migration In-Reply-To: <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> References: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> Message-ID: Hi, Jay Thanks for the reminder. I have cc'd this mail to futaotao, who is working on the python3 migration of the Huawei volume and FusionStorage drivers, and he will make sure that the Huawei volume and Huawei FusionStorage drivers have good python3 support in the T release. Regards, Yikun ---------------------------------------- Jiang Yikun(Kero) Mail: yikunkero at gmail.com On Tue, Aug 6, 2019 at 11:45 AM, Jay Bryant wrote: > All, > > This e-mail has multiple purposes. First, I have expanded the mail > audience to go beyond just openstack-discuss to a mailing list I have > created for all 3rd Party CI Maintainers associated with Cinder. I > apologize to those of you who are getting this as a duplicate e-mail. > > For all 3rd Party CI maintainers who have already migrated your systems > to using Python3.7...Thank you! We appreciate you keeping up-to-date > with Cinder's requirements and maintaining your CI systems. > > If this is the first time you are hearing of the Python3.7 requirement > please continue reading. > > It has been decided by the OpenStack TC that support for Py2.7 would be > deprecated [1]. The Train development cycle is the last cycle that will > support Py2.7 and therefore all vendor drivers need to demonstrate > support for Py3.7. > > It was discussed at the Train PTG that we would require all 3rd Party > CIs to be running using Python3 by the Train milestone 2: [2] We have > been communicating the importance of getting 3rd Party CI running with > py3 in meetings and e-mail for quite some time now, but it still appears
> > Jay > > > [1] > > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008255.html > > [2] > https://wiki.openstack.org/wiki/CinderTrainSummitandPTGSummary#3rd_Party_CI > > [3] https://etherpad.openstack.org/p/cinder-py3-ci-review > > [4] https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From madhuri.kumari at intel.com Mon Aug 12 09:41:59 2019 From: madhuri.kumari at intel.com (Kumari, Madhuri) Date: Mon, 12 Aug 2019 09:41:59 +0000 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: <0512CBBECA36994BAA14C7FEDE986CA614AA454D@BGSMSX102.gar.corp.intel.com> +1 if my vote counts ☺ Regards, Madhuri From: Jim Rollenhagen [mailto:jim at jimrollenhagen.com] Sent: Friday, August 9, 2019 7:55 PM To: Julia Kreger Cc: Dmitry Tantsur ; openstack-discuss Subject: Re: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint On Fri, Aug 9, 2019 at 9:22 AM Julia Kreger > wrote: +2 :) Same! On Fri, Aug 9, 2019 at 9:04 AM Dmitry Tantsur > wrote: > > Hi folks! > > I'd like to propose adding Riccardo to our stable team. He's been consistently > checking stable patches [1], and we're clearly understaffed when it comes to > stable reviews. Thoughts? > > Dmitry > > [1] > https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arxcruz at redhat.com Mon Aug 12 10:32:34 2019 From: arxcruz at redhat.com (Arx Cruz) Date: Mon, 12 Aug 2019 12:32:34 +0200 Subject: [tripleo][openstack-ansible] Integrating ansible-role-collect-logs in OSA In-Reply-To: References: <71d568ee47ce516a5a4cab1422290da2be1baff6.camel@evrard.me> Message-ID: Hello, I've started to split the logs collection tasks in small tasks [1] in order to allow other users to choose what exactly they want to collect. For example, if you don't need the openstack information, or if you don't care about networking, etc. Please take a look. I'll also add it on the OSA agenda for tomorrow's meeting. Kind regards, 1 - https://review.opendev.org/#/c/675858/ On Mon, Jul 22, 2019 at 8:44 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Sorry for the late answer... > > On Wed, 2019-07-10 at 12:12 -0600, Wesley Hayutin wrote: > > > > These are of course just passed in as extra-config. I think each > > project would want to define their own list of files and maintain it > > in their own project. WDYT? > > Looks good. We can either clean up the defaults, or OSA can just > override the defaults, and it would be good enough. I would say that > this can still be improved later, after OSA has started using the role > too. > > > It simple enough. But I am happy to see a different approach. > > Simple is good! > > > Any thoughts on additional work that I am not seeing? > > None :) > > > > > Thanks for responding! I know our team is very excited about the > > continued collaboration with other upstream projects, so thanks!! > > > > Likewise. Let's reduce tech debt/maintain more code together! > > Regards, > Jean-Philippe Evrard (evrardjp) > > > > > -- Arx Cruz Software Engineer Red Hat EMEA arxcruz at redhat.com @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From novy at ondrej.org Mon Aug 12 10:56:15 2019 From: novy at ondrej.org (Ondrej Novy) Date: Mon, 12 Aug 2019 12:56:15 +0200 Subject: [swauth][swift] Retiring swauth Message-ID: Hi, because swauth is not compatible with current Swift, doesn't support Python 3, I don't have time to maintain it and my employer is not interested in swauth, I'm going to retire swauth project. If nobody take over it, I will start removing swauth from opendev on 08/24. Thanks. -- Best regards Ondřej Nový -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Aug 12 11:55:07 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 12 Aug 2019 07:55:07 -0400 Subject: [ironic] Resuming having weekly meetings Message-ID: All, I meant to send this specific email last week, but got distracted and $life. I believe we need to go back to having weekly meetings. The couple times I floated this in the past two weeks, there hasn't seemed to be any objections, but I also did not perceive any real thoughts on the subject. While the concept and use of office hours has seemingly helped bring some more activity to our IRC channel, we don't have a check-point/sync-up mechanism without an explicit meeting. With that being said, I'm going to start the meeting today and if we have quorum, try to proceed with it today. -Julia From ralf.teckelmann at bertelsmann.de Mon Aug 12 12:59:20 2019 From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP) Date: Mon, 12 Aug 2019 12:59:20 +0000 Subject: [masakari] pacemaker-remote Setup Overview Message-ID: Hello, Utilizing openstack-ansible we successfully installed all the masakari services. Besides masakari-hostmonitor all are running fine. For the hostmonitor a pacemaker cluster is missing. Can anyone give me an overview how the pacemaker cluster setup would look like? Which (pacemaker) services is running where (compute nodes, something on any other node,...), etc? Best regards, Ralf Teckelmann -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Aug 12 13:43:14 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 12 Aug 2019 15:43:14 +0200 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> <20190809163302.GA29942@sm-workstation> Message-ID: <3cbf7440-9ef8-f491-bff8-00e315827c01@redhat.com> On 8/9/19 7:23 PM, Doug Hellmann wrote: > > >> On Aug 9, 2019, at 12:33 PM, Sean McGinnis wrote: >> >>>> >>>> Taking the opportunity to recruit: if you are interested in learning how >>>> we do release management at scale, help openstack as a whole and are not >>>> afraid of train-themed dadjokes, join us in #openstack-release ! >>>> >>> >>> After having quite some (years of) experience with release and stable >>> affairs for ironic, I think I could help here. >>> >>> Dmitry >>> >> >> We'd love to have you Dmitry! >> >> All of the tasks to do over the course of a release cycle should now be >> captured here: >> >> https://releases.openstack.org/reference/process.html >> >> We have a weekly meeting that is currently on Thursday: >> >> http://eavesdrop.openstack.org/#Release_Team_Meeting 9pm my time is a bit difficult :( >> >> Doug recorded a nice introductory walk through of how to review release >> requests and some common things to look for. 
I can't find the link to that at >> the moment, but will see if we can track that down. >> >> Sean >> > > I think that IRC transcript became https://releases.openstack.org/reference/reviewer_guide.html Cool, thanks! I'll have a PTO this week, I'll start jumping on reviews next week. Dmitry > > Doug > From corvus at inaugust.com Mon Aug 12 14:08:49 2019 From: corvus at inaugust.com (James E. Blair) Date: Mon, 12 Aug 2019 07:08:49 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> (Jeremy Stanley's message of "Sun, 11 Aug 2019 18:03:05 +0000") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> Message-ID: <87mugecw7i.fsf@meyer.lemoncheese.net> Jeremy Stanley writes: > On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: > [...] >> I still do believe that it meets all of the criteria. In particular, it >> meets this: >> >> * The name must refer to the physical or human geography of the region >> encompassing the location of the OpenStack summit for the >> corresponding release. >> >> It is short for "University of Shanghai for Science and Technology", >> which is a place in Shanghai. Here is their website: >> http://en.usst.edu.cn/ > [...] > > This got discussed after last week's TC meeting during Thursday > office hours, and I'm sorry I didn't think to give you a heads-up > when the topic arose: > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 > > One of the objections raised was that "University" in the name > "University of Shanghai for Science and Technology" was a general > class of place or feature and not a particular place or feature. But > as you pointed out in IRC a while back (and which I should have > remembered), there is precedent with the Pike cycle name: > > Pike (the Massachusetts Turnpike, also the Mass Pike...) > > https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names > > Another objection raised is that "OpenStack University" was the old > name for what we now call the OpenStack Upstream Institute and that > it could lead to name confusion if chosen. A search of the Web for > that name last week turned up only two occurrences for me on the > first page of results, both of which were lingering references in > our wiki which I immediately corrected, so I don't think that > argument holds. > > Then there was the suggestion that "University" might somehow be a > trademark risk, though in my opinion that's why we have the OSF vet > the preliminary winning results after the community ranks them (so > that the TC doesn't need to concern itself with trademark issues). > > It was also pointed out that each time we have a poll with a mix of > English and non-English names/words, an English name inevitably > wins. Since this concern isn't backed up by the documented > process[*] we're ostensibly following, I'm not really sure how to > address it. > > Ultimately I was unable to convince my colleagues on the TC that > "University" was a qualifying name, and so it was handled as a > possible exception to the normal rules which, following a poll of > most TC members, was decided would not be granted. > > [*] https://governance.openstack.org/tc/reference/release-naming.html Thanks for the clarification. 
The only point raised which should have any bearing on the process at this time is the first one, and I think that has been addressed. The process is designed to collect the widest range of names, and let the *community* decide. It is not the function of the TC to vet the names for suitability before the poll. The community itself is to do that, in the poll. And because vetting for trademark is a specialized and costly task, that happens *after* the poll, so that we don't waste time and money on it. It was exactly the kind of seemingly arbitrary process of producing the names for the poll which is on display here that prompted us to write down this more open process in the first place. It's unfortunate that the last three objections that you cite are clearly in contradiction to that. We pride ourselves on fairness and openness, but we seem to have lost the enthusiasm for that here. I would rather we not do this at all than to do it poorly, so I have proposed we simply stop naming releases. It's more trouble than it's worth. Here's my proposed TC resolution for that: https://review.opendev.org/675788 -Jim From opensrloo at gmail.com Mon Aug 12 14:59:52 2019 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 12 Aug 2019 10:59:52 -0400 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: +2. Good idea! :) --ruby On Fri, Aug 9, 2019 at 9:06 AM Dmitry Tantsur wrote: > Hi folks! > > I'd like to propose adding Riccardo to our stable team. He's been > consistently > checking stable patches [1], and we're clearly understaffed when it comes > to > stable reviews. Thoughts? > > Dmitry > > [1] > > https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Mon Aug 12 16:50:00 2019 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 12 Aug 2019 17:50:00 +0100 Subject: [glance] glance-cache-management hardcodes URL with port In-Reply-To: <4b4d09d3-21ae-b2bf-e888-faff196d3aec@gmail.com> References: <4b4d09d3-21ae-b2bf-e888-faff196d3aec@gmail.com> Message-ID: On Thu, Aug 8, 2019 at 12:49 PM Bernd Bausch wrote: > Stein re-introduces Glance cache management, but I have not been able to > use the glance-cache-manage command. I always get errno 111, connection > refused. > > It turns out that the command tries to access http://localhost:9292. It > has options for non-default IP address and port, but unfortunately on my > (devstack) cloud, the Glance endpoint is http://192.168.1.200/image. No > port. > > Is there a way to tell glance-cache-manage to use this endpoint? > > Bernd. > > Hi Bernd, You can always give it the port 80; the real problem likely is the prefix /image you have there. Are you running glance-api as a wsgi app under some http server, or is that a reverse proxy/load balancer you're directing glance-cache-manage towards? Remember that the management is not currently cluster wide, so you should always be targeting a single service process at a time. - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed...
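With the Stein-era client, the workaround Erno suggests would look roughly like this (a sketch, assuming glance-cache-manage still takes the -H/--host and -p/--port options; note there is no option for a URL path prefix, which is exactly the gap Bernd is hitting):

$ # target the web server fronting a single glance-api process
$ glance-cache-manage --host 192.168.1.200 --port 80 list-cached
$ # queue an image for caching on that same process
$ glance-cache-manage --host 192.168.1.200 --port 80 queue-image <IMAGE_ID>

This still fails when the API is served under a prefix such as /image, because the client has no way to express that prefix in the URL it builds.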
URL: From ekuvaja at redhat.com Mon Aug 12 17:02:32 2019 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 12 Aug 2019 18:02:32 +0100 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87mugecw7i.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> Message-ID: On Mon, Aug 12, 2019 at 3:12 PM James E. Blair wrote: > Jeremy Stanley writes: > > > On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: > > [...] > >> I still do believe that it meets all of the criteria. In particular, it > >> meets this: > >> > >> * The name must refer to the physical or human geography of the region > >> encompassing the location of the OpenStack summit for the > >> corresponding release. > >> > >> It is short for "University of Shanghai for Science and Technology", > >> which is a place in Shanghai. Here is their website: > >> http://en.usst.edu.cn/ > > [...] > > > > This got discussed after last week's TC meeting during Thursday > > office hours, and I'm sorry I didn't think to give you a heads-up > > when the topic arose: > > > > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 > > > > One of the objections raised was that "University" in the name > > "University of Shanghai for Science and Technology" was a general > > class of place or feature and not a particular place or feature. But > > as you pointed out in IRC a while back (and which I should have > > remembered), there is precedent with the Pike cycle name: > > > > Pike (the Massachusetts Turnpike, also the Mass Pike...) > > > > > https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names > > > > Another objection raised is that "OpenStack University" was the old > > name for what we now call the OpenStack Upstream Institute and that > > it could lead to name confusion if chosen. A search of the Web for > > that name last week turned up only two occurrences for me on the > > first page of results, both of which were lingering references in > > our wiki which I immediately corrected, so I don't think that > > argument holds. > > > > Then there was the suggestion that "University" might somehow be a > > trademark risk, though in my opinion that's why we have the OSF vet > > the preliminary winning results after the community ranks them (so > > that the TC doesn't need to concern itself with trademark issues). > > > > It was also pointed out that each time we have a poll with a mix of > > English and non-English names/words, an English name inevitably > > wins. Since this concern isn't backed up by the documented > > process[*] we're ostensibly following, I'm not really sure how to > > address it. > > > > Ultimately I was unable to convince my colleagues on the TC that > > "University" was a qualifying name, and so it was handled as a > > possible exception to the normal rules which, following a poll of > > most TC members, was decided would not be granted. > > > > [*] https://governance.openstack.org/tc/reference/release-naming.html > > Thanks for the clarification. The only point raised which should have > any bearing on the process at this time is is the first one, and I think > that has been addressed. > > The process is designed to collect the widest range of names, and let > the *community* decide. 
It is not the function of the TC to vet the > names for suitability before the poll. The community itself is to do > that, in the poll. And because vetting for trademark is a specialized > and costly task, that happens *after* the poll, so that we don't waste > time and money on it. > > It was exactly the kind of seemingly arbitrary process of producing the > names for the poll which is on display here that prompted us to write > down this more open process in the first place. It's unfortunate that > the last three objections that you cite are clearly in contradiction to > that. > > We pride ourselves on fairness and openness, but we seem to have lost > the enthusiasm for that here. I would rather we not do this at all than > to do it poorly, so I have proposed we simply stop naming releases. > It's more trouble than it's worth. > > Here's my proposed TC resolution for that: > > https://review.opendev.org/675788 > > -Jim > > I'm with Jim on this; I'd especially like to highlight a couple of points from the governance: """ #. The marketing community may identify any names of particular concern from a marketing standpoint and discuss such issues publicly on the Marketing mailing list. The marketing community may produce a list of problematic items (with citations to the mailing list discussion of the rationale) to the election official. This information will be communicated during the election, but the names will not be removed from the poll. #. After the close of nominations, the election official will finalize the list of proposed names and publicize it. In general, the official should strive to make objective determinations as to whether a name meets the `Release Name Criteria`_, but if subjective evaluation is required, should be generous in interpreting the rules. It is not necessary to reduce the list of proposed names to a small number. #. Once the list is finalized and publicized, a one-week period shall elapse before the start of the election so that any names removed from consideration because they did not meet the `Release Name Criteria`_ may be discussed. Names erroneously removed may be re-added during this period, and the Technical Committee may vote to add exceptional names (which do not meet the standard criteria). """ The marketing community's concerns will be communicated, "but the names will not be removed from the poll." Officials should be objective about whether a name meets the criteria, "but if subjective evaluation is required, should be generous in interpreting the rules. It is not necessary to reduce the list of proposed names to a small number." The "Technical Committee may vote to add exceptional names", not to remove qualifying names for personal preference. I think if we take the route taken here, we'd better just stop naming things. - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Aug 12 17:29:16 2019 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 12 Aug 2019 11:29:16 -0600 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient In-Reply-To: References: Message-ID: On Fri, Aug 9, 2019 at 12:20 PM Dirk Müller wrote: > Hi, > > For a while the requirements team has been trying to go through the process > of removing the upper cap > on jsonschema to allow the update to jsonschema 3.x.
The update > for that is becoming more urgent as more and more other > (non-OpenStack) projects are going > with requiring jsonschema >= 3, so we need to move forward as well to > keep co-installability > and be able to consume updates of packages to versions that depend on > jsonschema >= 3. > > The current blocker seems to be tripleo-common / os-collect-config > depending on python-zaqarclient, > which has a broken gate since the merge of: > > > http://specs.openstack.org/openstack/zaqar-specs/specs/stein/remove-pool-group-totally.html > > on the server side, which was done here: > > https://review.opendev.org/#/c/628723/ > > The python-zaqarclient functional tests have not been correspondingly > adjusted, and are failing > for more than 5 months meanwhile, in consequence many patches for > zaqarclient, including > the one uncapping jsonschema are piling up. It looks like no real > merge activity happened since > > https://review.opendev.org/#/c/607553/ > > which is a bit more than 6 months ago. How should we move forward? > doing a release of zaqarclient > using some implementation of an API that got removed server side > doesn't seem to be a terribly great > idea, plus that we still need to merge either one of my patches (one > that makes functional testing non-voting > or the brutal "lets drop all tests that fail" patch). On the other > side, I don't know how feasible it is for Triple-O > to drop the dependency on os-collect-config or os-collect-config to > drop the dependency on zaqar. > Do you have an example of what the issue with tripleo/os-collect-config is? It looks like os-collect-config has support for using zaqarclient as a notification mechanism for work but I don't think it's currently used. That being said, can we just fix whatever issue is? I don't see os-collect-config using pool_group anywhere > > Any suggestion on how to move forward? > > TIA, > Dirk > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From debayan.ray at gmail.com Mon Aug 12 13:55:18 2019 From: debayan.ray at gmail.com (Debayan Ray) Date: Mon, 12 Aug 2019 19:25:18 +0530 Subject: [ironic] [sushy] Stepping down from Sushy core Message-ID: Hey all, it's not easy to send this email. With a heavy heart, I announce that I'll be stepping down from the Sushy core team effective today. Almost six months back, I left HPE and joined Oracle. Although earlier I thought of keeping out some time to dedicate to being an effective Sushy core reviewer, gradually I found those "spare" time very much elusive and near impossible to find. Now after all these years of working together, I really can't completely disassociate myself altogether. So you can expect me reviewing stuff from time to time and I will continue to follow Ironic and its projects and other interesting ones in OpenStack. Thanks, everyone for everything. It has been great, to say the least, to work with you all these years. Cheers! Debayan (deray) -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Aug 12 19:12:57 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 12 Aug 2019 15:12:57 -0400 Subject: [ironic] [sushy] Stepping down from Sushy core In-Reply-To: References: Message-ID: Debayan, Thank you for your heartfelt email. It is always difficult when life gets in the way. Accordingly, I have removed you from the sushy-core group in gerrit. Thank you for your service as a core reviewer on sushy, and we'll see you around. 
:) -Julia On Mon, Aug 12, 2019 at 2:25 PM Debayan Ray wrote: > > Hey all, it's not easy to send this email. With a heavy heart, I announce that I'll be stepping down from the Sushy core team effective today. > > Almost six months back, I left HPE and joined Oracle. Although earlier I thought of keeping out some time to dedicate to being an effective Sushy core reviewer, gradually I found those "spare" time very much elusive and near impossible to find. > > Now after all these years of working together, I really can't completely disassociate myself altogether. So you can expect me reviewing stuff from time to time and I will continue to follow Ironic and its projects and other interesting ones in OpenStack. > > Thanks, everyone for everything. It has been great, to say the least, to work with you all these years. Cheers! > > Debayan (deray) From peter.matulis at canonical.com Mon Aug 12 20:34:01 2019 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 12 Aug 2019 16:34:01 -0400 Subject: [charms] OpenStack Charms 19.07 release is now available Message-ID: The OpenStack Charms team is delighted to announce the 19.07 charms release. This release brings several new features and improvements, including some for existing releases (Queens, Rocky, Stein) and many stable combinations of Ubuntu and OpenStack. Please see the Release Notes for full details: https://docs.openstack.org/charm-guide/latest/1907.html == Highlights == * Percona Cluster Cold Start The percona-cluster charm now contains logic and actions to assist with operational tasks surrounding a database shutdown scenario. * DVR SNAT The neutron-openvswitch charm now supports deployment of DVR based routers with combined SNAT functionality, removing the need to use the neutron-gateway charm in some types of deployment. * Octavia Image Lifecycle Management A new octavia-diskimage-retrofit charm provides a tool for retrofitting cloud images for use as Octavia Amphora. * Nova Live Migration: Streamline SSH Host Key Handling The nova-cloud-controller charm has improved the host key discovery and distribution algorithm. This will make the addition of a nova-compute unit faster and the nova-cloud-controller upgrade-charm hook will be significantly improved for large deployments. == OpenStack Charms team == The OpenStack Charms development team can be contacted on the #openstack-charms IRC channel on Freenode. The team will also be represented at the Open Infrastructure Summit and PTG events in Shanghai (November, 2019). == Thank you == Huge thanks to the below 46 charm contributors who worked together to squash 48 bugs, to enable an entirely new charmed version of OpenStack, and to move the line forward on several key features! 
Frode Nordahl Chris MacNaughton David Ames Liam Young Alex Kavanagh James Page Corey Bryant Ryan Beisner Tytus Kurek Edward Hope-Morley Sahid Orentino Ferdjaoui Dmitrii Shcherbakov Rodrigo Barbieri Peter Matulis Ghanshyam Mann Jorge Niedbalski Nicolas Pochet Andrea Ieri Andreas Jaeger Zachary Zehring Trent Lloyd Tiago Pasqualini David Coronel Hua Zhang Ian Wienand Dan Ackerson Ramon Grullon George Kraft Alvaro Uria Michael Skalka Nikolay Vinogradov melissaml Nobuto Murata Andrew McLeod Frank Kloeker Tim Burke Cory Johns Marian Gasparovic sunnyve Felipe Reyes Pete Vander Giessen Ryan Farrell Mark Maglana Levente Tamas Alexander Litvinov Marcelo Subtil Marcal -- OpenStack Charms Team From ed at leafe.com Mon Aug 12 21:44:33 2019 From: ed at leafe.com (Ed Leafe) Date: Mon, 12 Aug 2019 16:44:33 -0500 Subject: [uc] Less than 4 days left to nominate for the UC! Message-ID: A week has gone by since nominations opened, and we have yet to receive a single nomination! Now I’m sure everyone’s waiting until the last minute in order to make a dramatic moment, but don’t put it off for *too* long! If you missed the initial announcement [0], here’s the information you need: Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election). Self-nomination is common, no third party nomination is required. They do so by sending an email to the user-committee at lists.openstack.orgmailing-list, with the subject: “UC candidacy” by August 16, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. -- Ed Leafe [0] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html From zbitter at redhat.com Mon Aug 12 21:57:42 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 12 Aug 2019 17:57:42 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87mugecw7i.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> Message-ID: <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> On 12/08/19 10:08 AM, James E. Blair wrote: > Jeremy Stanley writes: > >> On 2019-08-11 10:30:32 -0700 (-0700), James E. Blair wrote: >> [...] >>> I still do believe that it meets all of the criteria. In particular, it >>> meets this: >>> >>> * The name must refer to the physical or human geography of the region >>> encompassing the location of the OpenStack summit for the >>> corresponding release. >>> >>> It is short for "University of Shanghai for Science and Technology", >>> which is a place in Shanghai. Here is their website: >>> http://en.usst.edu.cn/ >> [...] >> >> This got discussed after last week's TC meeting during Thursday >> office hours, and I'm sorry I didn't think to give you a heads-up >> when the topic arose: >> >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T14:59:01 >> >> One of the objections raised was that "University" in the name >> "University of Shanghai for Science and Technology" was a general >> class of place or feature and not a particular place or feature. 
But >> as you pointed out in IRC a while back (and which I should have >> remembered), there is precedent with the Pike cycle name: >> >> Pike (the Massachusetts Turnpike, also the Mass Pike...) >> >> https://wiki.openstack.org/wiki/Release_Naming/P_Proposals#Proposed_Names >> >> Another objection raised is that "OpenStack University" was the old >> name for what we now call the OpenStack Upstream Institute and that >> it could lead to name confusion if chosen. A search of the Web for >> that name last week turned up only two occurrences for me on the >> first page of results, both of which were lingering references in >> our wiki which I immediately corrected, so I don't think that >> argument holds. >> >> Then there was the suggestion that "University" might somehow be a >> trademark risk, though in my opinion that's why we have the OSF vet >> the preliminary winning results after the community ranks them (so >> that the TC doesn't need to concern itself with trademark issues). >> >> It was also pointed out that each time we have a poll with a mix of >> English and non-English names/words, an English name inevitably >> wins. Since this concern isn't backed up by the documented >> process[*] we're ostensibly following, I'm not really sure how to >> address it. >> >> Ultimately I was unable to convince my colleagues on the TC that >> "University" was a qualifying name, and so it was handled as a >> possible exception to the normal rules which, following a poll of >> most TC members, was decided would not be granted. >> >> [*] https://governance.openstack.org/tc/reference/release-naming.html > > Thanks for the clarification. The only point raised which should have > any bearing on the process at this time is is the first one, and I think > that has been addressed. To be clear, the thing that stopped us from automatically including it was that there was no consensus that it met the criteria, which exclude words that describe a general class of Geographic feature. I regret that you didn't get an opportunity to discuss this; I initially raised it in response to you and I both being pinged[1], but we probably should have tried to ping you again when discussions resumed during office hours the next day. FWIW I never thought that Pike should have been automatically included either, but nobody asked me at the time ;) Once it's treated as an exception put to a TC vote, it's up to TC members to decide if it "sounds really cool"[2] enough to make an exception for. I think we can all agree that this is an extremely subjective decision, and I'd expect that people took all of the factors mentioned in this thread (both for and against) into account in their vote. In the end, a majority of the TC decided not to add it to the list. I hope that helps clarify the process that led us here. cheers, Zane. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-07.log.html#t2019-08-07T15:21:10) [2] actual words From corvus at inaugust.com Mon Aug 12 23:18:07 2019 From: corvus at inaugust.com (James E. 
Blair) Date: Mon, 12 Aug 2019 16:18:07 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> (Zane Bitter's message of "Mon, 12 Aug 2019 17:57:42 -0400") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> Message-ID: <875zn27z2o.fsf@meyer.lemoncheese.net> Zane Bitter writes: > To be clear, the thing that stopped us from automatically including it > was that there was no consensus that it met the criteria, which > exclude words that describe a general class of Geographic feature. I > regret that you didn't get an opportunity to discuss this; I initially > raised it in response to you and I both being pinged[1], but we > probably should have tried to ping you again when discussions resumed > during office hours the next day. FWIW I never thought that Pike > should have been automatically included either, but nobody asked me at > the time ;) Thanks, I suppose it's better late than never to have this discussion. Happily, the process does not require that the TC come to a consensus on whether a name fits the criteria. In establishing the process, this was a deliberate decision to avoid the TC having exactly that kind of discussion because we all have better things to be doing. That is why this is the sole purview of the election official. We should remember that the purpose of this process is to collect as many names as possible, weeding out only the obvious non-conforming candidates, so that the whole community may decide on the name. As I understand it, the sequence of events that led us here was: A) Doug (as interim unofficial election official) removed the name for unspecified reasons. [1] B) I objected to the removal. This is in accordance with step 5 of the process: Once the list is finalized and publicized, a one-week period shall elapse before the start of the election so that any names removed from consideration because they did not meet the Release Name Criteria may be discussed. Names erroneously removed may be re-added during this period, and the Technical Committee may vote to add exceptional names (which do not meet the standard criteria). C) Rico (the election official at the time) agreed with my reasoning that it was erroneously removed and re-added the name. [2] D) The list was re-issued and the name was once again missing. Four reasons were cited, three of which have no place being considered prior to voting, and the fourth is a claim that it does not meet the criteria. Aside from no explanation being given for (A) (and assuming that the explanation, if offered, would have been that the name does not meet the criteria) the events A through C are fairly in accordance with the documented process. I believe the following: * It was incorrect for the name to have been removed in the first place (but that's fine, it's an appeal-able decision and I have appealed it). * It was correct for Rico to re-add the name. There are several reasons for this: * Points 1 and 2 of the Release Name Criteria are not at issue. * The name refers to the human geography of the area around the summit (it is a name of a place you can find on the map), and so satisfies point 3. 
* I believe that point 4, which it has been recently asserted the name > does not satisfy, was not intended to exclude names which describe > features. It was a point of clarification that should a feature > have a descriptive term, it should not be included, for the sake of > brevity. Point 4 begins with the length limitation, and therefore > should be considered as a discussion primarily of length. It > states: > > The name must be a single word with a maximum of 10 characters. > Words that describe the feature should not be included, so "Foo > City" or "Foo Peak" would both be eligible as "Foo". > > Note that the examples in the text are "Foo City" and "Foo Peak" for > "Foo". Obviously, that example would be for the "F" release where > "City" and "Peak" would not be candidates. Therefore, point 4 is > effectively silent on whether words like "City" and "Peak" would be > permitted for the "C" and "P" releases. > > * The name "Pike" was accepted as meeting the criteria. It is short > for "Massachusetts Turnpike". It serves the same function as a > descriptive name and serves as precedent. > > * I will absolutely agree that point 4 could provide more clarity on > this and therefore a subjective evaluation must be made. On this > point, we should refer to step 4 of the Release Naming Process: > > In general, the official should strive to make objective > determinations as to whether a name meets the Release Name > Criteria, but if subjective evaluation is required, should be > generous in interpreting the rules. It is not necessary to reduce > the list of proposed names to a small number. > > This indicates again that Rico was correct to accept the name, > because of the "generous interpretation" clause. The ambiguity in > point 4 combined with the precedent set by Pike is certainly > sufficient reason to be "generous". > > * While the election official is free to consult with whomever they > wish, including the rest of the TC, there is no formal role for the TC > in reducing the names before voting begins (in fact, the process > clearly indicates that is an anti-goal). So after Rico re-added the > name, it was not necessary to further review or reverse the decision. > > I appreciate that the TC proactively considered the name under the > "really cool" exception, even though I had not requested it (deeming it > to be unnecessary). Thank you for that. > > Given the above reasoning, I hope that I have made a compelling case > that the name meets the criteria (or at least, warrants "generous > interpretation") and would appreciate it if the name were added back to > the poll. > > Thanks, > > Jim > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T15:02:46 > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008334.html From rico.lin.guanyu at gmail.com Tue Aug 13 02:01:03 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 13 Aug 2019 10:01:03 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <875zn27z2o.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> Message-ID: IMO, it would be good to move the whole thing out of the TC's responsibility, and I do hope we can do this in an automatic way, so that people can just propose whatever cool name they like and see if it passes a CI job :) (a sketch of such a check is below).
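For what it's worth, the mechanical half of such a check fits in a couple of lines of shell (a hypothetical sketch only; it covers the single-word, at-most-10-characters, starts-with-U parts of the criteria, while the geographic requirement would still need a human reviewer):

$ name="University"
$ [[ "$name" =~ ^U[A-Za-z]{0,9}$ ]] && echo "$name passes the mechanical checks" || echo "$name fails"

The regex enforces a single word of at most 10 characters beginning with "U"; everything subjective stays with the election official.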
As long as the whole naming process is still under the TC's governance, and words like *the process should consider potential issues of trademark* are still in [1] (which I think we should either write down as a more formal rule or remove from that doc), I believe the TC still needs to confirm the final list. And that's why I'm the one asking TC members to give their final confirmation through the inner TC poll during office hours. Maybe the process will change through all these discussions and the patches you proposed on the governance repo (I kind of hope it will; at the least we should improve the docs to give clearer information for all), but as long as the inner TC poll result is not overturned, I will respect the result, and I hope that's a good enough reason to call that list final. This discussion is definitely worth continuing, but as I promised when postponing for 24 hours yesterday, it's time to bring the public poll up. [1] https://governance.openstack.org/tc/reference/release-naming.html On Tue, Aug 13, 2019 at 7:22 AM James E. Blair wrote: > Zane Bitter writes: > > To be clear, the thing that stopped us from automatically including it > > was that there was no consensus that it met the criteria, which > > exclude words that describe a general class of Geographic feature. I > > regret that you didn't get an opportunity to discuss this; I initially > > raised it in response to you and I both being pinged[1], but we > > probably should have tried to ping you again when discussions resumed > > during office hours the next day. FWIW I never thought that Pike > > should have been automatically included either, but nobody asked me at > > the time ;) > > Thanks, I suppose it's better late than never to have this discussion. > > Happily, the process does not require that the TC come to a consensus on > whether a name fits the criteria. In establishing the process, this was > a deliberate decision to avoid the TC having exactly that kind of > discussion because we all have better things to be doing. That is why > this is the sole purview of the election official. > > We should remember that the purpose of this process is to collect as > many names as possible, weeding out only the obvious non-conforming > candidates, so that the whole community may decide on the name. > > As I understand it, the sequence of events that led us here was: > > A) Doug (as interim unofficial election official) removed the name for > unspecified reasons. [1] > > B) I objected to the removal. This is in accordance with step 5 of the > process: > > Once the list is finalized and publicized, a one-week period shall > elapse before the start of the election so that any names removed > from consideration because they did not meet the Release Name > Criteria may be discussed. Names erroneously removed may be > re-added during this period, and the Technical Committee may vote > to add exceptional names (which do not meet the standard criteria). > > C) Rico (the election official at the time) agreed with my reasoning > that it was erroneously removed and re-added the name. [2] > > D) The list was re-issued and the name was once again missing. Four > reasons were cited, three of which have no place being considered > prior to voting, and the fourth is a claim that it does not meet the > criteria. > > Aside from no explanation being given for (A) (and assuming that the > explanation, if offered, would have been that the name does not meet the > criteria) the events A through C are fairly in accordance with the > documented process.
> > I believe the following: > > * It was incorrect for the name to have been removed in the first place > (but that's fine, it's an appeal-able decision and I have appealed > it). > > * It was correct for Rico to re-add the name. There are several reasons > for this: > > * Points 1 and 2 of the Release Name Criteria are not at issue. > > * The name refers to the human geography of the area around the summit > (it is a name of a place you can find on the map), and so satisfies > point 3. > > * I believe that point 4, which it has been recently asserted the name > does not satisfy, was not intended to exclude names which describe > features. It was a point of clarification that should a feature > have a descriptive term, it should not be included, for the sake of > brevity. Point 4 begins with the length limitation, and therefore > should be considered as a discussion primarily of length. It > states: > > The name must be a single word with a maximum of 10 characters. > Words that describe the feature should not be included, so "Foo > City" or "Foo Peak" would both be eligible as "Foo". > > Note that the examples in the text are "Foo City" and "Foo Peak" for > "Foo". Obviously, that example would be for the "F" release where > "City" and "Peak" would not be candidates. Therefore, point 4 is > effectively silent on whether words like "City" and "Peak" would be > permitted for the "C" and "P" releases. > > * The name "Pike" was accepted as meeting the criteria. It is short > for "Massachusetts Turnpike". It serves the same function as a > descriptive name and serves and precedent. > > * I will absolutely agree that point 4 could provide more clarity on > this and therefore a subjective evaluation must be made. On this > point, we should refer to step 4 of the Release Naming Process: > > In general, the official should strive to make objective > determinations as to whether a name meets the Release Name > Criteria, but if subjective evaluation is required, should be > generous in interpreting the rules. It is not necessary to reduce > the list of proposed names to a small number. > > This indicates again that Rico was correct to accept the name, > because of the "generous interpretation" clause. The ambiguity in > point 4 combined with the precedent set by Pike is certainly > sufficient reason to be "generous". > > * While the election official is free to consult with whomever they > wish, including the rest of the TC, there is no formal role for the TC > in reducing the names before voting begins (in fact, the process > clearly indicates that is an anti-goal). So after Rico re-added the > name, it was not necessary to further review or reverse the decision. > > I appreciate that the TC proactively considered the name under the > "really cool" exception, even though I had not requested it (deeming it > to be unnecessary). Thank you for that. > > Given the above reasoning, I hope that I have made a compelling case > that the name meets the criteria (or at least, warrants "generous > interpretation") and would appreciate it if the name were added back to > the poll. > > Thanks, > > Jim > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2019-08-08.log.html#t2019-08-08T15:02:46 > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008334.html > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rico.lin.guanyu at gmail.com Tue Aug 13 04:56:08 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 13 Aug 2019 12:56:08 +0800 Subject: [all][tc]Naming the U release of OpenStack -- Poll open Message-ID: Hi, all OpenStackers, It's time to vote for the naming of the U release!! The official naming poll for the U release has started!! First, big thanks to all the people who took their own time to propose names on [2] or helped to push and improve the naming process. Thank you. We'll use a public polling option over per-user private URLs for voting. This means everybody should proceed to use the following URL to cast their vote: *https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_19e5119b14f86294&akey=0cde542cb3de1b12 * We've selected a public poll to ensure that the whole community, not just Gerrit change owners, gets a vote. Also, the size of our community has grown such that we can overwhelm CIVS if using private URLs. A public poll can mean that users behind NAT, proxy servers or firewalls may receive a message saying that their vote has already been lodged; if this happens, please try another IP. Because this is a public poll, results will currently be only viewable by me until the poll closes. Once closed, I'll post the URL making the results viewable to everybody. This was done to avoid everybody seeing the results while the public poll is running. The poll will officially end on 2019-08-20 23:59:00+00:00 (UTC time)[1], and results will be posted shortly after. [1] https://governance.openstack.org/tc/reference/release-naming.html [2] https://wiki.openstack.org/wiki/Release_Naming/U_Proposals -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dirk at dmllr.de Tue Aug 13 09:29:49 2019 From: dirk at dmllr.de (Dirk Müller) Date: Tue, 13 Aug 2019 11:29:49 +0200 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient In-Reply-To: References: Message-ID: Hi Alex, On Mon, 12 Aug 2019 at 19:29, Alex Schultz wrote: > Do you have an example of what the issue with tripleo/os-collect-config is? It looks like os-collect-config has support for using zaqarclient as a notification mechanism for work but I don't think it's currently used. It depends on it (has it in its requirements.txt), so if it's not used, maybe we can remove it? > That being said, can we just fix whatever the issue is? I don't see os-collect-config using pool_group anywhere Sure, let's see if we can fix zaqarclient and release a new version. Hao Wang recently started responding to this (in a private conversation), so I'll give him and the team a few days to sort the issues out. Greetings, Dirk From mriedemos at gmail.com Tue Aug 13 12:25:52 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 13 Aug 2019 07:25:52 -0500 Subject: [nova] The race for 2.76 Message-ID: <339e43ff-1183-d1b7-0a25-9ddf4274116d@gmail.com> There are several compute API microversion changes that are conflicting and will be fighting for 2.76, but I think we're trying to prioritize this one [1] for the ironic power sync external event handling since (1) Surya is going to be on vacation soon, (2) there is an ironic change that depends on it which has had review [2] and (3) the nova change has had quite a bit of review already. As such I think others waiting to rebase from 2.75 to 2.76 should probably hold off until [1] is approved, which should happen today or tomorrow.
[1] https://review.opendev.org/#/c/645611/ [2] https://review.opendev.org/#/c/664842/ -- Thanks, Matt From mnaser at vexxhost.com Tue Aug 13 12:29:23 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 13 Aug 2019 08:29:23 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the weekly update for what happened in the Openstack TC. You can get more information by checking for changes in openstack/governance repository. # General changes - Changed the dates for the ‘U’ naming poll: https://review.opendev.org/#/c/674465/ - Rico Lin volunteered to be the naming poll coordinator instead of Tony Breeds: https://review.opendev.org/#/c/674494/ - Added a wiki link to the rpm-packaging project: https://review.opendev.org/#/c/673837/ - Updated policy regarding project retirements: https://review.opendev.org/#/c/670741/ Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From amotoki at gmail.com Tue Aug 13 13:30:05 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 13 Aug 2019 22:30:05 +0900 Subject: [neutron] bug deputy report (the week of Aug 5) Message-ID: Hi neutrinos, I was a bug deputy for last week (Aug 5 to Aug 11). We got a few new bug last week. Two bugs are in the undecided state. https://bugs.launchpad.net/neutron/+bug/1834045 Live-migration double binding doesn't work with OVN New, Undecided, No assignee (in neutron) NOTE: 'neutron' was added to the affected projects last week. This is related to the double binding feature on non-agent based drivers like networking-ovn. networking-ovn already has a workaround for this. Is any further work needed in neutron side? More input would be appreciated. https://bugs.launchpad.net/neutron/+bug/1839658 "subnet" register in the DB can have network_id=NULL New, Undecided, Assigned to ralonsoh NOTE: I could not reproduce it yet. It looks like a corner case and more code investigation might be needed. The following new bugs have been fixed. - https://bugs.launchpad.net/bugs/1839595 Thanks, Akihiro From aschultz at redhat.com Tue Aug 13 13:51:43 2019 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 13 Aug 2019 07:51:43 -0600 Subject: [zaqar][requirements][tc][triple-o][tripleo] jsonschema 3.x and python-zaqarclient In-Reply-To: References: Message-ID: On Tue, Aug 13, 2019 at 3:30 AM Dirk Müller wrote: > Hi Alex, > > Am Mo., 12. Aug. 2019 um 19:29 Uhr schrieb Alex Schultz < > aschultz at redhat.com>: > > > Do you have an example of what the issue with tripleo/os-collect-config > is? It looks like os-collect-config has support for using zaqarclient as a > notification mechanism for work but I don't think it's currently used. > > it depends on it (has it in its requirements.txt) so if its not used, > maybe we can remove it? > > Well code exists that calls zaqarclient, I just don't know if anyone has deployed the functionality. > > That being said, can we just fix whatever issue is? I don't see > os-collect-config using pool_group anywhere > > sure, lets see if we can fix zaqarclient and release a new version. > hao wang recently started responding to this (in a private > conversation), so I'll give him and the team a few days to sort the > issues out. > > Ok let us know if you have any specific issues with os-collect-config and we can take a look. Thanks > Greetings, > Dirk > -------------- next part -------------- An HTML attachment was scrubbed... 
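In case it helps whoever picks this up, a quick way to check both ends of the os-collect-config dependency (a sketch; the config file path is the one TripleO typically writes out, so treat that part as an assumption):

$ # does the code still declare the dependency?
$ git clone https://opendev.org/openstack/os-collect-config
$ grep -i zaqar os-collect-config/requirements.txt
$ # is a zaqar collector actually enabled on a deployed node?
$ grep -i -E 'collectors|zaqar' /etc/os-collect-config.conf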
URL: From smooney at redhat.com Tue Aug 13 14:44:07 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 13 Aug 2019 15:44:07 +0100 Subject: [neutron] bug deputy report (the week of Aug 5) In-Reply-To: References: Message-ID: On Tue, 2019-08-13 at 22:30 +0900, Akihiro Motoki wrote: > Hi neutrinos, > > I was a bug deputy for last week (Aug 5 to Aug 11). > We got a few new bug last week. > > Two bugs are in the undecided state. > > https://bugs.launchpad.net/neutron/+bug/1834045 > Live-migration double binding doesn't work with OVN > New, Undecided, No assignee (in neutron) > NOTE: 'neutron' was added to the affected projects last week. > This is related to the double binding feature on non-agent based > drivers like networking-ovn. > networking-ovn already has a workaround for this. > Is any further work needed in neutron side? > More input would be appreciated. From a nova and neutron perspective, my understanding is that all drivers are required to implement this feature. The nova implementation certainly requires that all neutron backends support it if neutron reports support for the portbinding-extended extension. Given that neutron does not support disabling this extension, that in turn implies that this has been a required extension for all ml2 drivers to support. It is my understanding, at least, that this was not intended to be optional. As an interim workaround, nova supports not treating the lack of a vif-plugged event as fatal for live migrations: https://docs.openstack.org/nova/latest/configuration/config.html#compute.live_migration_wait_for_vif_plug However, we do want to eventually remove that, and it should also be noted that we intend to use the multiple port binding in other code paths, which means some nova features will not be available with OVN until support is added. I have not updated the bug, but I personally tend to believe it should be marked as invalid against nova. The issue to me seems to be that neutron is reporting support for an extension that is not supported by networking-ovn, and networking-ovn is not implementing a mandatory extension that cannot be disabled. If you look at the nova spec https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html#proposed-change it states "There is no additional configuration for deployers. The use of multiple bindings will be enabled automatically. We decide whether to use the new or old API flow, if both compute nodes support this feature and based on the available Neutron API extensions. We cache extensions support in the usual way utilizing the existing neutron_extensions_cache. Note: The new neutron API extension will be implemented in the ml2 plugin layer, above the ml2 driver layer so if the extension is exposed it will be supported for all ml2 drivers. Monolithic plugins will have to implement the extension separately and will continue to use the old workflow until their maintainers support the new neutron extension."
This statement about implementing the extension in the ml2 plugin layer came organically from conversations with Miguel Lavalle when he took over the implementation of https://review.opendev.org/#/c/414251/ . We discussed this, and the existence of the extension being reported became the contract by which nova detects support for this feature, before codifying it in the nova spec. This contract was established specifically to ensure that out-of-tree drivers would still work by falling back to the old flow, with the expectation that they would all be updated eventually. > > https://bugs.launchpad.net/neutron/+bug/1839658 > "subnet" register in the DB can have network_id=NULL > New, Undecided, Assigned to ralonsoh > NOTE: I could not reproduce it yet. It looks like a corner case and > more code investigation might be needed. > > The following new bugs have been fixed. > - https://bugs.launchpad.net/bugs/1839595 > > Thanks, > Akihiro > From skaplons at redhat.com Tue Aug 13 14:46:17 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 13 Aug 2019 16:46:17 +0200 Subject: [neutron] QoS meeting cancelled today Message-ID: <9956FAC6-390D-4EE3-86A8-A29781EE1348@redhat.com> Hi, As there is no agenda for today and Rodolfo is not available, today's QoS meeting will be cancelled. Sorry for the very late announcement of that :) — Slawek Kaplonski Senior software engineer Red Hat From berndbausch at gmail.com Tue Aug 13 14:47:50 2019 From: berndbausch at gmail.com (Bernd Bausch) Date: Tue, 13 Aug 2019 23:47:50 +0900 Subject: [glance] glance-cache-management hardcodes URL with port In-Reply-To: References: <4b4d09d3-21ae-b2bf-e888-faff196d3aec@gmail.com> Message-ID: <712e178b-0862-2cad-a9f3-f1c92729295c@gmail.com> Hi Erno, yes, the /image is the problem. This is a Devstack (stable/Stein), which by default deploys Glance and all(?) other services as WSGI applications. I know that this is not recommended for Glance, but it's not illegal either, as far as I understand it. Not illegal, but it disables the glance-cache-manage client in its current form. Bernd On 8/13/2019 1:50 AM, Erno Kuvaja wrote: > On Thu, Aug 8, 2019 at 12:49 PM Bernd Bausch > wrote: > > Stein re-introduces Glance cache management, but I have not been > able to > use the glance-cache-manage command. I always get errno 111, > connection > refused. > > It turns out that the command tries to access > http://localhost:9292. It > has options for non-default IP address and port, but unfortunately > on my > (devstack) cloud, the Glance endpoint is > http://192.168.1.200/image. No > port. > > Is there a way to tell glance-cache-manage to use this endpoint? > > Bernd. > > Hi Bernd, > > You can always give it the port 80, the real problem likely is the > prefix /image you have there. Are you running glance-api as wsgi app > under some http-server or is that reverse-proxy/loadbalancer you're > directing the glance-cache-manage towards? Remember that the > management is not currently cluster wide so you should always be > targeting a single service process at a time. > > - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Aug 13 14:57:29 2019 From: corvus at inaugust.com (James E.
Blair) Date: Tue, 13 Aug 2019 07:57:29 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Rico Lin's message of "Tue, 13 Aug 2019 10:01:03 +0800") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> Message-ID: <8736i56rl2.fsf@meyer.lemoncheese.net> Rico Lin writes: > IMO, it's good to release the whole thing out of TC's responsibility, and > do hope we can do these in an automatic way, so like people can just raise > whatever cool name it's and see if that pass a CI job. :) I agree, and in fact, that's why I wrote this process originally, to do exactly that. If we were to simply follow the steps described in [1] (it is a 7 step process, each one clearly saying what should be done), I don't think we would have so much confusion. The only responsibilities that the TC has in that document is to set the dates, the region, appoint the coordinator, and vote on adding "really cool" names. That's it. The process also says that in the rare event that a subjective evaluation of whether a name meets the criteria needs to be made, the coordinator should be generous. That means that the coordinator should accept names, even if they are not certain they meet the criteria. > As long as the whole naming process is still under TC's governance and > words like *the process should consider potential issues of trademark* > still in [1] (which I think we should specific put down as a more formal > rule, or remove it out of that docs), I believe TCs still need to confirm > the final list. I disagree here. That quote is from the preamble. It is general introductory material, but is not part of the specific step-by-step process which should be followed. There *is* more specific detail about that, it is step 7: The Foundation will perform a trademark check on the winning name. If there is a trademark conflict, then the Foundation will proceed down the ranked list of Condorcet results until a name without a trademark conflict is found. This will be the selected name. Therefore, trademark considerations are explicitly out of the purview of the TC. Several folks, including you, have said that they wish the process were out of the TC's hands. The fact is that it already is, but unfortunately people seem to keep wanting to manipulate the list before it goes out for a vote. I believe that the current process as written is as straightforward and fair as we can make it and still have community involvement. This is not the first time we, as a community, have not been able to follow it. I think that's because not enough of us care. This election had, at least, three coordinators, it was run late, dates were missed, and something like 10 names were dropped from the poll before it went out, simply due to personal preference of various folks on the TC. Since we take particular pride in our community participation, the fact that we have not been able or willing to do this correctly reflects very poorly on us. I would rather that we not do this at all than do it badly, so I think this should be the last release with a name. 
I've proposed that change here: https://review.opendev.org/675788 -Jim [1] https://governance.openstack.org/tc/reference/release-naming.html

From skaplons at redhat.com Tue Aug 13 14:58:20 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 13 Aug 2019 16:58:20 +0200 Subject: [neutron] bug deputy report (the week of Aug 5) In-Reply-To: References: Message-ID: <9218FFFD-0D55-44C8-938B-8751C63643C7@redhat.com> Hi,

> On 13 Aug 2019, at 16:44, Sean Mooney wrote:
>
> On Tue, 2019-08-13 at 22:30 +0900, Akihiro Motoki wrote:
>> Hi neutrinos,
>>
>> I was a bug deputy for last week (Aug 5 to Aug 11).
>> We got a few new bugs last week.
>>
>> Two bugs are in the undecided state.
>>
>> https://bugs.launchpad.net/neutron/+bug/1834045
>> Live-migration double binding doesn't work with OVN
>> New, Undecided, No assignee (in neutron)
>> NOTE: 'neutron' was added to the affected projects last week.
>> This is related to the double binding feature on non-agent based
>> drivers like networking-ovn.
>> networking-ovn already has a workaround for this.
>> Is any further work needed on the neutron side?
>> More input would be appreciated.
> From a nova and neutron perspective, my understanding is that all drivers are
> required to implement this feature. The nova implementation certainly requires
> that all neutron backends support it if neutron reports support for the
> portbinding-extended extension. Given that neutron does not support disabling this
> extension, that in turn implies that this has been a required extension for all ML2 drivers
> to support. It is my understanding, at least, that this was not intended to be optional.
>
> As an interim workaround, nova supports disabling the treatment of a missing vif-plugged event as fatal for live migrations:
> https://docs.openstack.org/nova/latest/configuration/config.html#compute.live_migration_wait_for_vif_plug
> However, we do want to eventually remove that, and it should also be noted that we intend to use multiple port bindings
> in other code paths, which means some nova features will not be available with OVN until support is added.
>
> I have not updated the bug, but I personally tend to believe it should be marked as invalid against nova.
> The issue to me seems to be that neutron is reporting support for an extension that is not supported by networking-ovn,
> and that networking-ovn is not implementing a mandatory extension that cannot be disabled.

If I understand the bug report correctly, the problem is that during live migration nova creates an inactive binding on the destination host and waits for a "network-vif-plugged" event for the port on the destination host before it starts the migration. That is IMO wrong, as this event should be sent after the migration, when the vif is really plugged and configured on the destination host. It currently works in the ML2/OVS case because, during creation of the inactive binding, neutron does a port update which triggers the neutron-ovs-agent on the source host, and that triggers sending this notification. IMO there should be a different notification for this case, or nova should perhaps check in some other way whether the inactive binding is done on the destination host. But I wasn’t the one who was debugging it, so my understanding might be wrong here. Adding Maciek in CC as he was debugging this issue on the networking-ovn side and he reported this bug originally.
>
> If you look at the nova spec
> https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html#proposed-change
> it states:
>
> "There is no additional configuration for deployers. The use of multiple bindings will be enabled automatically. We
> decide whether to use the new or old API flow, if both compute nodes support this feature and based on the available
> Neutron API extensions. We cache extensions support in the usual way utilizing the existing neutron_extensions_cache.
>
> Note: The new neutron API extension will be implemented in the ml2 plugin layer, above the ml2 driver layer so if the
> extension is exposed it will be supported for all ml2 drivers. Monolithic plugins will have to implement the extension
> separately and will continue to use the old workflow until their maintainers support the new neutron extension."
>
> The statement above about implementing the extension in the ML2 plugin layer came organically from conversations with
> Miguel Lavalle when he took over the implementation of https://review.opendev.org/#/c/414251/ . We discussed this,
> and the reporting of the extension's existence became the contract by which nova detects support for this feature,
> before it was codified in the nova spec.
>
> This contract was established specifically to ensure that out-of-tree drivers would still work by falling back to the
> old flow, with the expectation that they would all be updated eventually.
>
>>
>> https://bugs.launchpad.net/neutron/+bug/1839658
>> "subnet" register in the DB can have network_id=NULL
>> New, Undecided, Assigned to ralonsoh
>> NOTE: I could not reproduce it yet. It looks like a corner case and
>> more code investigation might be needed.
>>
>> The following new bugs have been fixed.
>> - https://bugs.launchpad.net/bugs/1839595
>>
>> Thanks,
>> Akihiro

— Slawek Kaplonski Senior software engineer Red Hat

From fungi at yuggoth.org Tue Aug 13 15:38:28 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 13 Aug 2019 15:38:28 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <8736i56rl2.fsf@meyer.lemoncheese.net> References: <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: <20190813153828.ifzdhhxeviz5svs2@yuggoth.org> On 2019-08-13 07:57:29 -0700 (-0700), James E. Blair wrote: [...]
> Several folks, including you, have said that they wish the process
> were out of the TC's hands. The fact is that it already is, but
> unfortunately people seem to keep wanting to manipulate the list
> before it goes out for a vote.
[...]

You've also convinced me that I should not have requested removal of politically-sensitive choices from the list, since we could have done that as part of the community discussion instead of prior to it. I feel like the goal in narrowing the list of options was to speed up the public review period so we could get on with the vote quickly and have a name sooner, but in retrospect that raised as much or more discussion than leaving them in likely would have done.
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From smooney at redhat.com Tue Aug 13 16:14:41 2019 From: smooney at redhat.com (Sean Mooney) Date: Tue, 13 Aug 2019 17:14:41 +0100 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <8736i56rl2.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote:
> Since we take particular pride in our community participation, the fact
> that we have not been able or willing to do this correctly reflects very
> poorly on us. I would rather that we not do this at all than do it
> badly, so I think this should be the last release with a name. I've
> proposed that change here:
>
> https://review.opendev.org/675788

Not to take this out of context, but it is a rather long thread, so I have snipped the bit I wanted to comment on.

I think not naming releases would be problematic on two fronts. First, without a common community name, I think codenames or other convenient names are going to crop up: many have been referring to the U release as the "unicorn" release just to avoid the confusion between "U" and "you" when speaking about the release until we have an official name. If we had no official names, I think we would keep using those placeholders, at least on IRC or in person (granted, we would not use them for code or docs).

That is a minor thing, but the more disruptive issue I see is that nova's U release will be 21.0.0 and neutron's U release will be 16.0.0. Without a name to refer to the set of compatible projects for a given version, we would only have the letter, and from a marketing perspective, and even from a development perspective, I think that will be problematic.

We could just say "the V release", but I think it loses something in clarity.

From thierry at openstack.org Tue Aug 13 16:19:36 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Aug 2019 18:19:36 +0200 Subject: [tc] meeting summary for aug. 8 2019 In-Reply-To: References: Message-ID: <0b62399f-29f2-37dd-c133-7e7eec2a9cc7@openstack.org> Tim Bell wrote:
>> We’re also thinking about starting a “Large Scale” SIG so people could
>> collaborate in tackling more of the scaling issues together. Thierry
>> Carrez (ttx) and I will be looking into that by mentioning the idea to
>> LINE and YahooJapan (as some prospective at-scale operators) to see
>> what they think and also make a list of organizations that could be
>> interested. Rico Lin (ricolin) will also update the SIG guidelines
>> documents to make the whole process easier and Jim Rollenhagen (jroll)
>> will try and bring this up at Verizon Media.
>
> How about a forum brainstorm in Shanghai ?

Yes, my goal is to get a number of interested stakeholders to meet in Shanghai and see if there is enough alignment in goals to openly collaborate on those questions. We have had several groups tackling facets of the "large scale" problem in the past (performance WG, large deployments workgroup, LCOO...)
-- my thinking here would be to narrow it down to addressing scaling limitations in cluster sizes (think: RabbitMQ falling down after a given number of compute nodes). -- Thierry Carrez (ttx)

From corvus at inaugust.com Tue Aug 13 16:34:33 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Aug 2019 09:34:33 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> (Sean Mooney's message of "Tue, 13 Aug 2019 17:14:41 +0100") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> Message-ID: <87imr13tye.fsf@meyer.lemoncheese.net> Sean Mooney writes:

> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote:
>> Since we take particular pride in our community participation, the fact
>> that we have not been able or willing to do this correctly reflects very
>> poorly on us. I would rather that we not do this at all than do it
>> badly, so I think this should be the last release with a name. I've
>> proposed that change here:
>>
>> https://review.opendev.org/675788
>
> Not to take this out of context, but it is a rather long thread, so I have snipped
> the bit I wanted to comment on.
>
> I think not naming releases would be problematic on two fronts.
> First, without a common community name, I think codenames or other convenient names
> are going to crop up: many have been referring to the U release as the "unicorn"
> release just to avoid the confusion between "U" and "you" when speaking about the release
> until we have an official name. If we had no official names, I think we would keep using
> those placeholders, at least on IRC or in person (granted, we would not use them for code
> or docs).
>
> That is a minor thing, but the more disruptive issue I see is that nova's U release
> will be 21.0.0 and neutron's U release will be 16.0.0. Without a name to refer to the
> set of compatible projects for a given version, we would only have the
> letter, and from a marketing
> perspective, and even from a development perspective, I think that will be problematic.
>
> We could just say "the V release", but I think it loses something in clarity.

That's a good point. Maybe we could just number them? V would be "OpenStack Release 22". Or we could refer to them by date, as we used to, but without attempting to use dates as actual version numbers.
-Jim

From thierry at openstack.org Tue Aug 13 16:46:21 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Aug 2019 18:46:21 +0200 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> Message-ID: <63ecc3a8-bb9a-0155-6b0b-f84cfa3327ef@openstack.org> Sean Mooney wrote:
> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote:
>> Since we take particular pride in our community participation, the fact
>> that we have not been able or willing to do this correctly reflects very
>> poorly on us. I would rather that we not do this at all than do it
>> badly, so I think this should be the last release with a name. I've
>> proposed that change here:
>>
>> https://review.opendev.org/675788
>
> Not to take this out of context, but it is a rather long thread, so I have snipped
> the bit I wanted to comment on.
>
> I think not naming releases would be problematic on two fronts.
> First, without a common community name, I think codenames or other convenient names
> are going to crop up: many have been referring to the U release as the "unicorn"
> release just to avoid the confusion between "U" and "you" when speaking about the release
> until we have an official name. If we had no official names, I think we would keep using
> those placeholders, at least on IRC or in person (granted, we would not use them for code
> or docs).
>
> That is a minor thing, but the more disruptive issue I see is that nova's U release
> will be 21.0.0 and neutron's U release will be 16.0.0. Without a name to refer to the
> set of compatible projects for a given version, we would only have the letter, and from a marketing
> perspective, and even from a development perspective, I think that will be problematic.
>
> We could just say "the V release", but I think it loses something in clarity.

So... I agree the naming process is creating a lot of problems (the reason I decided a long time ago to stop handling it myself, the moment it stopped being a fun exercise). But I still think we need a way to refer to a given series, and we have lots of tooling that is based on the fact that it's alpha-ordered.

Ideally we'd have a way to name releases that removes the subjectivity and polling parts, which seems to be the painful part. Just have some objective way of ranking a limited number of options for trademark analysis, and be done with it.
-- Thierry Carrez (ttx)

From thierry at openstack.org Tue Aug 13 16:55:20 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Aug 2019 18:55:20 +0200 Subject: [ptl][release] Stepping down as Release Management PTL In-Reply-To: <3cbf7440-9ef8-f491-bff8-00e315827c01@redhat.com> References: <20190808041159.GK2352@thor.bakeyournoodle.com> <791369f4-0c95-2211-7c49-5470d393252a@openstack.org> <5b540bc6-d8be-00bc-00dc-a785e76369d0@redhat.com> <20190809163302.GA29942@sm-workstation> <3cbf7440-9ef8-f491-bff8-00e315827c01@redhat.com> Message-ID: <9d81d7c9-a131-96dc-a34b-432eada956bb@openstack.org> Dmitry Tantsur wrote:
>>> We have a weekly meeting that is currently on Thursday:
>>>
>>> http://eavesdrop.openstack.org/#Release_Team_Meeting
>
> 9pm my time is a bit difficult :(

NB: We are talking of moving it to 16:00 UTC on Thursdays. -- Thierry Carrez (ttx)

From aj at suse.com Tue Aug 13 17:09:11 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 13 Aug 2019 19:09:11 +0200 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <8736i56rl2.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: On 8/13/19 4:57 PM, James E. Blair wrote:
> [...]
> Since we take particular pride in our community participation, the fact
> that we have not been able or willing to do this correctly reflects very
> poorly on us. I would rather that we not do this at all than do it
> badly, so I think this should be the last release with a name. I've
> proposed that change here:
>
> https://review.opendev.org/675788

The names were fun initially - but sometimes a joke turns old. I agree, it's time to change the process. And giving up names is fine.

But then we need another way to sequence. The U release is the 21st release, so let's use that as the overall number (even if different projects have fewer than 20 releases).

Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From aj at suse.com Tue Aug 13 17:09:31 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 13 Aug 2019 19:09:31 +0200 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <63ecc3a8-bb9a-0155-6b0b-f84cfa3327ef@openstack.org> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> <63ecc3a8-bb9a-0155-6b0b-f84cfa3327ef@openstack.org> Message-ID: <5f5b6edb-4256-be00-1a40-81dbf9273b3f@suse.com> On 8/13/19 6:46 PM, Thierry Carrez wrote:
> Sean Mooney wrote:
>> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote:
>>> Since we take particular pride in our community participation, the fact
>>> that we have not been able or willing to do this correctly reflects very
>>> poorly on us.
>>> I would rather that we not do this at all than do it
>>> badly, so I think this should be the last release with a name. I've
>>> proposed that change here:
>>>
>>> https://review.opendev.org/675788
>>
>> Not to take this out of context, but it is a rather long thread, so I have snipped
>> the bit I wanted to comment on.
>>
>> I think not naming releases would be problematic on two fronts.
>> First, without a common community name, I think codenames or other convenient names
>> are going to crop up: many have been referring to the U release as the "unicorn"
>> release just to avoid the confusion between "U" and "you" when speaking about the release
>> until we have an official name. If we had no official names, I think we would keep using
>> those placeholders, at least on IRC or in person (granted, we would not use them for code
>> or docs).
>>
>> That is a minor thing, but the more disruptive issue I see is that nova's U release
>> will be 21.0.0 and neutron's U release will be 16.0.0. Without a name to refer to the
>> set of compatible projects for a given version, we would only have the letter, and from a marketing
>> perspective, and even from a development perspective, I think that will be problematic.
>>
>> We could just say "the V release", but I think it loses something in clarity.
>
> So... I agree the naming process is creating a lot of problems (the
> reason I decided a long time ago to stop handling it myself, the moment
> it stopped being a fun exercise). But I still think we need a way to
> refer to a given series, and we have lots of tooling that is based on
> the fact that it's alpha-ordered.

And we need to change this anyhow once we get to 26 releases (Z is the end of the alphabet). So we just have that discussion a few releases earlier now ;)

>
> Ideally we'd have a way to name releases that removes the subjectivity
> and polling parts, which seems to be the painful part. Just have some
> objective way of ranking a limited number of options for trademark
> analysis, and be done with it.

Or use numbers, years,...

Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From cdent+os at anticdent.org Tue Aug 13 17:25:42 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 13 Aug 2019 18:25:42 +0100 (BST) Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: On Tue, 13 Aug 2019, Andreas Jaeger wrote:

> But then we need another way to sequence. The U release is the 21st
> release, so let's use that as the overall number (even if different projects
> have fewer than 20 releases).

How about "U"?

No tooling changes required. And then we get a couple years of not having to have this discussion again.
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From rico.lin.guanyu at gmail.com Tue Aug 13 17:50:21 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 14 Aug 2019 01:50:21 +0800 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: (Put my whatever hat on) Here's my suggestion, we can either make a patch to clarify the process step by step (no exception) or simply move everything out of https://governance.openstack.org/tc That actually leads to the current discussion here, to just use versions or not. Personally, I'm interested in improving the document and not that much interested in making only versions. I do like to see if we can use whatever alphabet we like so this version can be *cool*, and the next version can be *awesome*. Isn't that sounds cool and awesome? :) And like the idea Chris Dent propose to just use *U* or *V*, etc. to save us from having to have this discussion again(I'm actually the one to propose *U* in the list this time:) ) And if we're going to use any new naming system, I strongly suggest we should remove the *Geographic Region* constraint if we plan to have a poll. It's always easy to find conflict between what local people think about the name and what the entire community thinks about it. (Put my official hat on) And for the problem of *University* part: back in the proposal period, I find a way to add *University* back to the meet criteria list so hope people get to discuss whether or not it can be in the poll. And (regardless for the ongoing discussion about whether or not TC got any role to govern this process) I did turn to TCs and ask for the advice for the final answer (isn't that is the responsibility for TCs to guide?), so I guess we can say I'm the one to remove it out of the final list. Therefore I'm taking the responsibility to say I'm the one to omit *University*. During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, Ueishan from the meet criteria list because they're not the most popular spelling system in China. And we omitted Urumqi from the meet criteria list because of the potential political issue. Those are before I was an official. And we should consider them all during discuss about *University* here. I guess we should define more about in which stage should the official propose the final list of all names that meet criteria should all automatically be part of the final list. On Wed, Aug 14, 2019 at 1:29 AM Chris Dent wrote: > On Tue, 13 Aug 2019, Andreas Jaeger wrote: > > > But then we need another way to sequence. The U release is the 21th > > release, so let's use that as overall number (even if different projects > > have less than 20 releases). > > How about "U"? > > No tooling changes required. And then we get a couple years of not > having to have this discussion again. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ed at leafe.com Tue Aug 13 18:05:33 2019 From: ed at leafe.com (Ed Leafe) Date: Tue, 13 Aug 2019 13:05:33 -0500 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <5f5b6edb-4256-be00-1a40-81dbf9273b3f@suse.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> <63ecc3a8-bb9a-0155-6b0b-f84cfa3327ef@openstack.org> <5f5b6edb-4256-be00-1a40-81dbf9273b3f@suse.com> Message-ID: <041B5C41-39BB-4056-B249-DDBB9A718521@leafe.com> On Aug 13, 2019, at 12:09 PM, Andreas Jaeger wrote:
>
>> Ideally we'd have a way to name releases that removes the subjectivity
>> and polling parts, which seems to be the painful part. Just have some
>> objective way of ranking a limited number of options for trademark
>> analysis, and be done with it.
>
> Or use numbers, years,...

The whole release cycle was based on Ubuntu patterns, since many early OpenStackers came from Ubuntu. OpenStack, though, just used the alphabetical names for releases, rather than also using the YYYY.MM pattern. The Ubuntu animal names are cute, but most people refer to a release by the year/month name, as it's simpler.

The problem with this naming cycle is the convergence of the next letter on one that doesn’t occur natively in the language of the place where the summit is held. That possibility was not considered when the naming requirements were adopted, and it is the root cause of all these naming discussions. It seems rather square-peg-round-hole to force it with this release by using English-like renderings of Chinese words to comply with a requirement that wasn’t fully thought out.

So since the requirements assume the English alphabet, which doesn’t fit well with Chinese, what about suspending the requirements for geographic relevance, and instead selecting English words beginning with “U” that have some relevance to Shanghai? I don’t have any ideas along these lines; just pointing out that blind adherence to a poor rule will usually produce poor results. -- Ed Leafe

From Tim.Bell at cern.ch Tue Aug 13 18:54:45 2019 From: Tim.Bell at cern.ch (Tim Bell) Date: Tue, 13 Aug 2019 18:54:45 +0000 Subject: [tc] meeting summary for aug. 8 2019 In-Reply-To: <0b62399f-29f2-37dd-c133-7e7eec2a9cc7@openstack.org> References: <0b62399f-29f2-37dd-c133-7e7eec2a9cc7@openstack.org> Message-ID: <0F54F6FE-4A27-494F-BB36-54BCF5043FA2@cern.ch> On 13 Aug 2019, at 18:19, Thierry Carrez wrote: Tim Bell wrote: We’re also thinking about starting a “Large Scale” SIG so people could collaborate in tackling more of the scaling issues together. Thierry Carrez (ttx) and I will be looking into that by mentioning the idea to LINE and YahooJapan (as some prospective at-scale operators) to see what they think and also make a list of organizations that could be interested. Rico Lin (ricolin) will also update the SIG guidelines documents to make the whole process easier and Jim Rollenhagen (jroll) will try and bring this up at Verizon Media. How about a forum brainstorm in Shanghai ? Yes, my goal is to get a number of interested stakeholders to meet in Shanghai and see if there is enough alignment in goals to openly collaborate on those questions.
We have had several groups tackling facets of the "large scale" problem in the past (performance WG, large deployments workgroup, LCOO...) -- my thinking here would be to narrow it down to addressing scaling limitations in cluster sizes (think: RabbitMQ falling down after a given number of compute nodes). We’ll have some CERN people in Shanghai and would be happy to participate. We’re following up on a number of scaling issues (such as https://techblog.web.cern.ch/techblog/post/nova-ironic-at-scale/ and Neutron) and would be happy to share best practise with other large deployments. Tim -- Thierry Carrez (ttx) -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Aug 13 19:01:26 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Aug 2019 12:01:26 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Rico Lin's message of "Wed, 14 Aug 2019 01:50:21 +0800") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> Message-ID: <87v9v028l5.fsf@meyer.lemoncheese.net> Rico Lin writes: > (Put my whatever hat on) > Here's my suggestion, we can either make a patch to clarify the process > step by step (no exception) or simply move everything out of > https://governance.openstack.org/tc > That actually leads to the current discussion here, to just use versions or > not. Personally, I'm interested in improving the document and not that much > interested in making only versions. I do like to see if we can use whatever > alphabet we like so this version can be *cool*, and the next version can be > *awesome*. Isn't that sounds cool and awesome? :) I'm happy to help improve it if that's what folks want. I already think it says what you and several other people want it to say. But I wrote it, and so the fact that people keep reading it and coming away with different understandings means I did a bad job. So I'll need help to figure out which parts I wasn't clear on. But I'm serious about the suggestion to scrap names altogether. Every time we have an issue with this, it's because people start making their own judgments when the job of the coordinator is basically just to send some emails. The process is 7 very clear steps. Many of them were definitely not followed this time. We can try to make it more clear, but we have done that before, and it still didn't prevent things from going wrong this time. As a community, we just don't care enough to get it right, and getting it wrong only produces bad feelings and wastes all our time. I'm looking forward to OpenStack Release 22. That sounds cool. That's a big number. Way bigger than like 1.x. > And like the idea > Chris Dent propose to just use *U* or *V*, etc. to save us from having to > have this discussion again(I'm actually the one to propose *U* in the list > this time:) ) That would solve a lot of problems, and create one new one in a few years. :) > And if we're going to use any new naming system, I strongly suggest we > should remove the *Geographic Region* constraint if we plan to have a poll. > It's always easy to find conflict between what local people think about the > name and what the entire community thinks about it. 
We will have a very long list if we do that. I'm not sure I agree with you about that problem though. In practice, deciding whether a river is within a state boundary is not that contentious. That's pretty much all that's ever been asked. > (Put my official hat on) > And for the problem of *University* part: > back in the proposal period, I find a way to add *University* back to the > meet criteria list so hope people get to discuss whether or not it can be > in the poll. And (regardless for the ongoing discussion about whether or > not TC got any role to govern this process) I did turn to TCs and ask for > the advice for the final answer (isn't that is the responsibility for TCs > to guide?), so I guess we can say I'm the one to remove it out of the final > list. Therefore I'm taking the responsibility to say I'm the one to omit > *University*. Thanks. I don't fault you personally for this, I think we got into this situation because no one wanted to do it and so a confusing set of people on the TC ended up performing various tasks ad-hoc. That you stepped up and took action and responsibility is commendable. You have my respect for that. I do think the conversation about University could have been more clear. Specific yes/no answers and reasons would have been nice. Instead of a single decision about whether it was included, I received 3 decisions with 4 rationales from several different people. Either of the following would have been perfectly fine outcomes: Me: Can has University, plz? Coordinator: Violates criterion 4 Me: But Pike Coordinator: Questionable, but process says be "generous" so, okay, it's in. or Coordinator: . Sorry, it's still out. However, reasons around trademark or the suitability of English words are not appropriate reasons to exclude a name. Nor is "the TC didn't like it". There is only one reason to exclude a name, and that is that it violates one of the 4 criteria. Of course it's fine to ask the TC, or anyone else for guidance. However, it's clear from the IRC log that many members of the TC did not appreciate what was being asked of them. It would be okay to ask them "Do you think this meets the criteria?" But instead, a long discussion about whether the names were *good choices* ensued. That's not one of the steps in the process. In fact, it's the exact thing that the process is supposed to avoid. No matter what the members of the TC thought about whether a name was a good idea, if it met the criteria it should be in. > During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, Ueishan > from the meet criteria list because they're not the most popular spelling > system in China. And we omitted Urumqi from the meet criteria list because > of the potential political issue. Those are before I was an official. And > we should consider them all during discuss about *University* here. I guess > we should define more about in which stage should the official propose the > final list of all names that meet criteria should all automatically be part > of the final list. None of those should have been removed. They, even more so than University, clearly meet the criteria, and were only removed due to personal preference. I want to be clear, there *is* a place for consideration of all of these things. That is step 3: The marketing community may identify any names of particular concern from a marketing standpoint and discuss such issues publicly on the Marketing mailing list. 
The marketing community may identify any names of particular concern from a marketing standpoint and discuss such issues publicly on the Marketing mailing list. The marketing community may produce a list of problematic items (with citations to the mailing list discussion of the rationale) to the election official. This information will be communicated during the election, but the names will not be removed from the poll.

That is where we would identify things like "this name uses an unusual romanization system" or "this name has political ramifications". We don't remove those names from the list, but we let the community know about the issues, so that when people vote, they have all the information.

We trust our community to make good (or hilariously bad) decisions.

That's what this all comes down to. The process as written is supposed to collect a lot of names, with a lot of information, and present them to our community and let us all decide together. That's what has been lost.

-Jim

From mnaser at vexxhost.com Tue Aug 13 19:19:36 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 13 Aug 2019 15:19:36 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87v9v028l5.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID:

> I do think the conversation about University could have been more clear.
> Specific yes/no answers and reasons would have been nice. Instead of a
> single decision about whether it was included, I received 3 decisions
> with 4 rationales from several different people. Either of the
> following would have been perfectly fine outcomes:

For transparency, I did not feel comfortable vetoing options, and expressed that I don't think it's my business to be picking what's in and what's out. To me, the steps that come *after* the poll were the ones that decided which ones we ended up picking; that's why we have voting that lets you pick more than one option, so we can fall back to second/third choices if need be. I chose to cast a vote in the TC poll with all options tied equally.

Having said that, I'm pretty disappointed in the state that we are currently in, and I'm starting to lean towards simplifying the process of name selection for releases. However, I think it's probably way too late to make this type of change.

I feel partly responsible because I spent a significant amount of time trying to work with our Chinese community members (and OSF staff in China) to make sure that we get the name choices right, but it seems that has added too much delay and process to the system. In retrospect, this was hard from the start and I think we should have seen this coming much earlier, because the issue of no romanization starting with "U" was brought up really early on, but we didn't really take action on it.

As much as I am disappointed in the outcome, I'd like us to turn it around and resolve this while everyone is invested in it right now, to avoid the same thing happening again. It's clearly not the first time this has happened, and this is a good time to rework that release naming document.
From mesutaygn at gmail.com Tue Aug 13 19:32:45 2019 From: mesutaygn at gmail.com (mesut aygün) Date: Tue, 13 Aug 2019 22:32:45 +0300 Subject: Heat Template Message-ID: Hi everyone,

I am writing a template for a cluster, but I can't inject the cloud-init data. How can I inject the password data for the VM?

    heat_template_version: 2014-10-16

      new_instance:
        type: OS::Nova::Server
        properties:
          key_name: { get_param: key_name }
          image: { get_param: image_id }
          flavor: bir
          name:
            str_replace:
              template: master-$NONCE-dmind
              params:
                $NONCE: { get_resource: name_nonce }
          user_data: |
            #!/bin/bash
            #cloud-config
            password: 724365
            echo "Running boot script" >> /home/ubuntu/test
            sudo sed -i "/^[^#]*PasswordAuthentication[[:space:]]no/c\PasswordAuthentication yes" /etc/ssh/sshd_config
            sudo useradd -d /home/mesut -m mesut
            sudo usermod --password 724365 ubuntu
            /etc/init.d/ssh restart

-------------- next part -------------- An HTML attachment was scrubbed... URL:
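(For context on the question above, a hedged sketch of one way this is commonly made to work, assuming an image with cloud-init: cloud-init decides how to handle user_data by its first line, so a #!/bin/bash script and a #cloud-config document cannot be mixed in one block, and OS::Nova::Server passes user_data through to cloud-init unmodified only when user_data_format is RAW. The property and cloud-config keys below are standard Heat/cloud-init ones; the password, user, and file names are taken from the question:)

      new_instance:
        type: OS::Nova::Server
        properties:
          # ... other properties as in the question ...
          user_data_format: RAW    # hand user_data to cloud-init as-is
          user_data: |
            #cloud-config
            ssh_pwauth: true       # allow password logins over SSH
            chpasswd:              # set the ubuntu user's password
              list: |
                ubuntu:724365
              expire: false
            users:
              - default
              - name: mesut        # additional user from the question
                lock_passwd: false
            runcmd:
              - echo "Running boot script" >> /home/ubuntu/test

From jeremyfreudberg at gmail.com Tue Aug 13 19:45:28 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Tue, 13 Aug 2019 15:45:28 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87v9v028l5.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: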
:) > > I'm happy to help improve it if that's what folks want. I already think > it says what you and several other people want it to say. But I wrote > it, and so the fact that people keep reading it and coming away with > different understandings means I did a bad job. So I'll need help to > figure out which parts I wasn't clear on. > > But I'm serious about the suggestion to scrap names altogether. Every > time we have an issue with this, it's because people start making their > own judgments when the job of the coordinator is basically just to send > some emails. > > The process is 7 very clear steps. Many of them were definitely not > followed this time. We can try to make it more clear, but we have done > that before, and it still didn't prevent things from going wrong this > time. > > As a community, we just don't care enough to get it right, and getting > it wrong only produces bad feelings and wastes all our time. I'm > looking forward to OpenStack Release 22. > > That sounds cool. That's a big number. Way bigger than like 1.x. > > > And like the idea > > Chris Dent propose to just use *U* or *V*, etc. to save us from having to > > have this discussion again(I'm actually the one to propose *U* in the list > > this time:) ) > > That would solve a lot of problems, and create one new one in a few > years. :) > > > And if we're going to use any new naming system, I strongly suggest we > > should remove the *Geographic Region* constraint if we plan to have a poll. > > It's always easy to find conflict between what local people think about the > > name and what the entire community thinks about it. > > We will have a very long list if we do that. > > I'm not sure I agree with you about that problem though. In practice, > deciding whether a river is within a state boundary is not that > contentious. That's pretty much all that's ever been asked. > > > (Put my official hat on) > > And for the problem of *University* part: > > back in the proposal period, I find a way to add *University* back to the > > meet criteria list so hope people get to discuss whether or not it can be > > in the poll. And (regardless for the ongoing discussion about whether or > > not TC got any role to govern this process) I did turn to TCs and ask for > > the advice for the final answer (isn't that is the responsibility for TCs > > to guide?), so I guess we can say I'm the one to remove it out of the final > > list. Therefore I'm taking the responsibility to say I'm the one to omit > > *University*. > > Thanks. I don't fault you personally for this, I think we got into this > situation because no one wanted to do it and so a confusing set of > people on the TC ended up performing various tasks ad-hoc. That you > stepped up and took action and responsibility is commendable. You have > my respect for that. > > I do think the conversation about University could have been more clear. > Specific yes/no answers and reasons would have been nice. Instead of a > single decision about whether it was included, I received 3 decisions > with 4 rationales from several different people. Either of the > following would have been perfectly fine outcomes: > > Me: Can has University, plz? > Coordinator: Violates criterion 4 > Me: But Pike > Coordinator: Questionable, but process says be "generous" > so, okay, it's in. > or > Coordinator: . Sorry, it's > still out. > > However, reasons around trademark or the suitability of English words > are not appropriate reasons to exclude a name. Nor is "the TC didn't > like it". 
There is only one reason to exclude a name, and that is that > it violates one of the 4 criteria. > > Of course it's fine to ask the TC, or anyone else for guidance. > However, it's clear from the IRC log that many members of the TC did not > appreciate what was being asked of them. It would be okay to ask them > "Do you think this meets the criteria?" But instead, a long discussion > about whether the names were *good choices* ensued. That's not one of > the steps in the process. In fact, it's the exact thing that the > process is supposed to avoid. No matter what the members of the TC > thought about whether a name was a good idea, if it met the criteria it > should be in. > > > During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, Ueishan > > from the meet criteria list because they're not the most popular spelling > > system in China. And we omitted Urumqi from the meet criteria list because > > of the potential political issue. Those are before I was an official. And > > we should consider them all during discuss about *University* here. I guess > > we should define more about in which stage should the official propose the > > final list of all names that meet criteria should all automatically be part > > of the final list. > > None of those should have been removed. They, even more so than > University, clearly meet the criteria, and were only removed due to > personal preference. > > I want to be clear, there *is* a place for consideration of all of these > things. That is step 3: > > The marketing community may identify any names of particular concern > from a marketing standpoint and discuss such issues publicly on the > Marketing mailing list. The marketing community may produce a list of > problematic items (with citations to the mailing list discussion of > the rationale) to the election official. This information will be > communicated during the election, but the names will not be removed > from the poll. > > That is where we would identify things like "this name uses an unusual > romanization system" or "this name has political ramifications". We > don't remove those names from the list, but we let the community know > about the issues, so that when people vote, they have all the > information. > > We trust our community to make good (or hilariously bad) decisions. > > That's what this all comes down to. The process as written is supposed > to collect a lot of names, with a lot of information, and present them > to our community and let us all decide together. That's what has been > lost. > > -Jim > From openstack at fried.cc Tue Aug 13 20:20:50 2019 From: openstack at fried.cc (Eric Fried) Date: Tue, 13 Aug 2019 15:20:50 -0500 Subject: [nova] Shanghai Project Update - Volunteer(s) needed Message-ID: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> Hello Nova. Traditionally summit project updates are done by the PTL(s) of the foregoing and/or upcoming cycles. In this case, the former (that would be me) will not be attending the summit, and we don't yet know who the latter will be. So I am asking for a volunteer or two who is a) attending the summit [1], and b) willing [2] to deliver a Nova project update presentation. I (and others, I am sure) will be happy to help with slide content and other prep. Please respond ASAP so we can reserve a slot. Thanks, efried [1] it need not be definite at this point - obviously corporate, political, travel, and personal contingencies may interfere [2] blah blah, opportunity for exposure, yatta yatta, good experience, etc. 
From corvus at inaugust.com Tue Aug 13 20:53:28 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 13 Aug 2019 13:53:28 -0700 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: (Jeremy Freudberg's message of "Tue, 13 Aug 2019 15:45:28 -0400") References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: <87o90szt13.fsf@meyer.lemoncheese.net> Jeremy Freudberg writes: > - I did not see in the document about how to determine the geographic > region (its size etc or who should determine it). This is an > opportunity for confusion sometimes leading to bitterness (and it was > in the case of U -- whole China versus near Shanghai). That's a good question. The TC decides that before the process starts, along with setting the dates. It appears in the table at the end of: https://governance.openstack.org/tc/reference/release-naming.html The process kicks off with a TC resolution commit like this: https://opendev.org/openstack/governance/commit/9219939fb153857ec5f53b986f867fcf4d29ab37 Hopefully at that point everyone is on the same page. So at least once the process starts, there shouldn't be any question about the geographic area. Of course, this time, there were 3 more commits after that one changing various things, including the area. Ideally, we'd set a region and not change it. But to me, expanding a region is at least better than reducing it. So I don't fault the TC for making that change (and making it in a deliberative way). Specifying the region in advance was in fact a late addition to the process and document. We didn't get that right the first time. The first entry in that table (which now says "Tokyo"; this seems revisionist to me) used to say "N/A" because we did not specify a region in advance, and it caused problems. If we keep the document (I hope we don't), I agree that we should add more text explaining that. -Jim > Just some thoughts. > > P.S.: Single letter (like "U", "V") doesn't work when we wrap the > alphabet (as has already been observed), but something like "U21", > "V22", ... "A27" seems to work fine. If we can shift it by 2 to get a B-52's tribute release, I can get on board with that. From Arkady.Kanevsky at dell.com Tue Aug 13 20:59:50 2019 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Tue, 13 Aug 2019 20:59:50 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: Suggest we stick to geographically named releases. AT least till we reach Z. Community and User are well accustomed to it now. It has marketing value now, so let's keep it. Thanks, Arkady -----Original Message----- From: Jeremy Freudberg Sent: Tuesday, August 13, 2019 2:45 PM To: James E. 
Blair Cc: OpenStack Discuss Subject: Re: [all][tc] U Cycle Naming Poll [EXTERNAL EMAIL] Even though I agree the process this time around was comedy of errors (or worse), I don't think switching to numeric releases is particularly wise... for example, more than once someone has reported an issue with Sahara and stated that they are using version of Sahara. Turns out that is actually the version of the OSA playbooks being used. Let's not add another number to get confused about into the mix. Anyway: - I think that now with you having pointed out everything that went wrong and having pointed us towards the simple steps that should be followed instead, we ought to give ourselves one more try to get the process correct for "V". We really should be able to get it right next time and there's something to be said for tradition. - I did not see in the document about how to determine the geographic region (its size etc or who should determine it). This is an opportunity for confusion sometimes leading to bitterness (and it was in the case of U -- whole China versus near Shanghai). Just some thoughts. P.S.: Single letter (like "U", "V") doesn't work when we wrap the alphabet (as has already been observed), but something like "U21", "V22", ... "A27" seems to work fine. On Tue, Aug 13, 2019 at 3:03 PM James E. Blair wrote: > > Rico Lin writes: > > > (Put my whatever hat on) > > Here's my suggestion, we can either make a patch to clarify the > > process step by step (no exception) or simply move everything out of > > https://governance.openstack.org/tc > > That actually leads to the current discussion here, to just use > > versions or not. Personally, I'm interested in improving the > > document and not that much interested in making only versions. I do > > like to see if we can use whatever alphabet we like so this version > > can be *cool*, and the next version can be *awesome*. Isn't that > > sounds cool and awesome? :) > > I'm happy to help improve it if that's what folks want. I already > think it says what you and several other people want it to say. But I > wrote it, and so the fact that people keep reading it and coming away > with different understandings means I did a bad job. So I'll need > help to figure out which parts I wasn't clear on. > > But I'm serious about the suggestion to scrap names altogether. Every > time we have an issue with this, it's because people start making > their own judgments when the job of the coordinator is basically just > to send some emails. > > The process is 7 very clear steps. Many of them were definitely not > followed this time. We can try to make it more clear, but we have > done that before, and it still didn't prevent things from going wrong > this time. > > As a community, we just don't care enough to get it right, and getting > it wrong only produces bad feelings and wastes all our time. I'm > looking forward to OpenStack Release 22. > > That sounds cool. That's a big number. Way bigger than like 1.x. > > > And like the idea > > Chris Dent propose to just use *U* or *V*, etc. to save us from > > having to have this discussion again(I'm actually the one to propose > > *U* in the list this time:) ) > > That would solve a lot of problems, and create one new one in a few > years. :) > > > And if we're going to use any new naming system, I strongly suggest > > we should remove the *Geographic Region* constraint if we plan to have a poll. 
> > It's always easy to find conflict between what local people think > > about the name and what the entire community thinks about it. > > We will have a very long list if we do that. > > I'm not sure I agree with you about that problem though. In practice, > deciding whether a river is within a state boundary is not that > contentious. That's pretty much all that's ever been asked. > > > (Put my official hat on) > > And for the problem of *University* part: > > back in the proposal period, I find a way to add *University* back > > to the meet criteria list so hope people get to discuss whether or > > not it can be in the poll. And (regardless for the ongoing > > discussion about whether or not TC got any role to govern this > > process) I did turn to TCs and ask for the advice for the final > > answer (isn't that is the responsibility for TCs to guide?), so I > > guess we can say I'm the one to remove it out of the final list. > > Therefore I'm taking the responsibility to say I'm the one to omit *University*. > > Thanks. I don't fault you personally for this, I think we got into > this situation because no one wanted to do it and so a confusing set > of people on the TC ended up performing various tasks ad-hoc. That > you stepped up and took action and responsibility is commendable. You > have my respect for that. > > I do think the conversation about University could have been more clear. > Specific yes/no answers and reasons would have been nice. Instead of > a single decision about whether it was included, I received 3 > decisions with 4 rationales from several different people. Either of > the following would have been perfectly fine outcomes: > > Me: Can has University, plz? > Coordinator: Violates criterion 4 > Me: But Pike > Coordinator: Questionable, but process says be "generous" > so, okay, it's in. > or > Coordinator: . Sorry, it's > still out. > > However, reasons around trademark or the suitability of English words > are not appropriate reasons to exclude a name. Nor is "the TC didn't > like it". There is only one reason to exclude a name, and that is > that it violates one of the 4 criteria. > > Of course it's fine to ask the TC, or anyone else for guidance. > However, it's clear from the IRC log that many members of the TC did > not appreciate what was being asked of them. It would be okay to ask > them "Do you think this meets the criteria?" But instead, a long > discussion about whether the names were *good choices* ensued. That's > not one of the steps in the process. In fact, it's the exact thing > that the process is supposed to avoid. No matter what the members of > the TC thought about whether a name was a good idea, if it met the > criteria it should be in. > > > During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, > > Ueishan from the meet criteria list because they're not the most > > popular spelling system in China. And we omitted Urumqi from the > > meet criteria list because of the potential political issue. Those > > are before I was an official. And we should consider them all during > > discuss about *University* here. I guess we should define more about > > in which stage should the official propose the final list of all > > names that meet criteria should all automatically be part of the final list. > > None of those should have been removed. They, even more so than > University, clearly meet the criteria, and were only removed due to > personal preference. 
> > I want to be clear, there *is* a place for consideration of all of > these things. That is step 3: > > The marketing community may identify any names of particular concern > from a marketing standpoint and discuss such issues publicly on the > Marketing mailing list. The marketing community may produce a list of > problematic items (with citations to the mailing list discussion of > the rationale) to the election official. This information will be > communicated during the election, but the names will not be removed > from the poll. > > That is where we would identify things like "this name uses an unusual > romanization system" or "this name has political ramifications". We > don't remove those names from the list, but we let the community know > about the issues, so that when people vote, they have all the > information. > > We trust our community to make good (or hilariously bad) decisions. > > That's what this all comes down to. The process as written is > supposed to collect a lot of names, with a lot of information, and > present them to our community and let us all decide together. That's > what has been lost. > > -Jim > From openstack at nemebean.com Tue Aug 13 21:17:21 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 13 Aug 2019 16:17:21 -0500 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <87v9v028l5.fsf@meyer.lemoncheese.net> Message-ID: <6fccbd18-e449-bf31-cf91-bae6356d74db@nemebean.com> On 8/13/19 2:45 PM, Jeremy Freudberg wrote: > Even though I agree the process this time around was comedy of errors > (or worse), I don't think switching to numeric releases is > particularly wise... for example, more than once someone has reported > an issue with Sahara and stated that they are using > version of Sahara. Turns out that is actually the > version of the OSA playbooks being used. Let's not add another number > to get confused about into the mix. We also use version numbers for our downstream OpenStack product, and I believe others do as well. It's kind of nice to know that if someone is talking about a numerical version they mean downstream, whereas a letter means upstream. Although I guess the other side of that is if upstream went to numbers downstream could just match those. It would be a little weird because we'd skip several major versions, but it could be done. > > Anyway: > - I think that now with you having pointed out everything that went > wrong and having pointed us towards the simple steps that should be > followed instead, we ought to give ourselves one more try to get the > process correct for "V". We really should be able to get it right next > time and there's something to be said for tradition. > - I did not see in the document about how to determine the geographic > region (its size etc or who should determine it). This is an > opportunity for confusion sometimes leading to bitterness (and it was > in the case of U -- whole China versus near Shanghai). > > Just some thoughts. > > P.S.: Single letter (like "U", "V") doesn't work when we wrap the > alphabet (as has already been observed), but something like "U21", > "V22", ... "A27" seems to work fine. 
It doesn't solve the problem of tooling that assumes names will sort alphabetically though. I suppose we could go to ZA (all hail the mighty Za-Lord!*), ZB, etc., but that seems pretty hacky. I think we're ultimately going to have to make some changes to the tooling no matter what we decide now. * any Dresden Files fans here? ;-) > > On Tue, Aug 13, 2019 at 3:03 PM James E. Blair wrote: >> >> Rico Lin writes: >> >>> (Put my whatever hat on) >>> Here's my suggestion, we can either make a patch to clarify the process >>> step by step (no exception) or simply move everything out of >>> https://governance.openstack.org/tc >>> That actually leads to the current discussion here, to just use versions or >>> not. Personally, I'm interested in improving the document and not that much >>> interested in making only versions. I do like to see if we can use whatever >>> alphabet we like so this version can be *cool*, and the next version can be >>> *awesome*. Isn't that sounds cool and awesome? :) >> >> I'm happy to help improve it if that's what folks want. I already think >> it says what you and several other people want it to say. But I wrote >> it, and so the fact that people keep reading it and coming away with >> different understandings means I did a bad job. So I'll need help to >> figure out which parts I wasn't clear on. >> >> But I'm serious about the suggestion to scrap names altogether. Every >> time we have an issue with this, it's because people start making their >> own judgments when the job of the coordinator is basically just to send >> some emails. >> >> The process is 7 very clear steps. Many of them were definitely not >> followed this time. We can try to make it more clear, but we have done >> that before, and it still didn't prevent things from going wrong this >> time. >> >> As a community, we just don't care enough to get it right, and getting >> it wrong only produces bad feelings and wastes all our time. I'm >> looking forward to OpenStack Release 22. >> >> That sounds cool. That's a big number. Way bigger than like 1.x. >> >>> And like the idea >>> Chris Dent propose to just use *U* or *V*, etc. to save us from having to >>> have this discussion again(I'm actually the one to propose *U* in the list >>> this time:) ) >> >> That would solve a lot of problems, and create one new one in a few >> years. :) >> >>> And if we're going to use any new naming system, I strongly suggest we >>> should remove the *Geographic Region* constraint if we plan to have a poll. >>> It's always easy to find conflict between what local people think about the >>> name and what the entire community thinks about it. >> >> We will have a very long list if we do that. >> >> I'm not sure I agree with you about that problem though. In practice, >> deciding whether a river is within a state boundary is not that >> contentious. That's pretty much all that's ever been asked. >> >>> (Put my official hat on) >>> And for the problem of *University* part: >>> back in the proposal period, I find a way to add *University* back to the >>> meet criteria list so hope people get to discuss whether or not it can be >>> in the poll. And (regardless for the ongoing discussion about whether or >>> not TC got any role to govern this process) I did turn to TCs and ask for >>> the advice for the final answer (isn't that is the responsibility for TCs >>> to guide?), so I guess we can say I'm the one to remove it out of the final >>> list. Therefore I'm taking the responsibility to say I'm the one to omit >>> *University*. 
>> >> Thanks. I don't fault you personally for this, I think we got into this >> situation because no one wanted to do it and so a confusing set of >> people on the TC ended up performing various tasks ad-hoc. That you >> stepped up and took action and responsibility is commendable. You have >> my respect for that. >> >> I do think the conversation about University could have been more clear. >> Specific yes/no answers and reasons would have been nice. Instead of a >> single decision about whether it was included, I received 3 decisions >> with 4 rationales from several different people. Either of the >> following would have been perfectly fine outcomes: >> >> Me: Can has University, plz? >> Coordinator: Violates criterion 4 >> Me: But Pike >> Coordinator: Questionable, but process says be "generous" >> so, okay, it's in. >> or >> Coordinator: . Sorry, it's >> still out. >> >> However, reasons around trademark or the suitability of English words >> are not appropriate reasons to exclude a name. Nor is "the TC didn't >> like it". There is only one reason to exclude a name, and that is that >> it violates one of the 4 criteria. >> >> Of course it's fine to ask the TC, or anyone else for guidance. >> However, it's clear from the IRC log that many members of the TC did not >> appreciate what was being asked of them. It would be okay to ask them >> "Do you think this meets the criteria?" But instead, a long discussion >> about whether the names were *good choices* ensued. That's not one of >> the steps in the process. In fact, it's the exact thing that the >> process is supposed to avoid. No matter what the members of the TC >> thought about whether a name was a good idea, if it met the criteria it >> should be in. >> >>> During the process, we omitted Ujenn, Uanjou, Ui, Uanliing, Ueihae, Ueishan >>> from the meet criteria list because they're not the most popular spelling >>> system in China. And we omitted Urumqi from the meet criteria list because >>> of the potential political issue. Those are before I was an official. And >>> we should consider them all during discuss about *University* here. I guess >>> we should define more about in which stage should the official propose the >>> final list of all names that meet criteria should all automatically be part >>> of the final list. >> >> None of those should have been removed. They, even more so than >> University, clearly meet the criteria, and were only removed due to >> personal preference. >> >> I want to be clear, there *is* a place for consideration of all of these >> things. That is step 3: >> >> The marketing community may identify any names of particular concern >> from a marketing standpoint and discuss such issues publicly on the >> Marketing mailing list. The marketing community may produce a list of >> problematic items (with citations to the mailing list discussion of >> the rationale) to the election official. This information will be >> communicated during the election, but the names will not be removed >> from the poll. >> >> That is where we would identify things like "this name uses an unusual >> romanization system" or "this name has political ramifications". We >> don't remove those names from the list, but we let the community know >> about the issues, so that when people vote, they have all the >> information. >> >> We trust our community to make good (or hilariously bad) decisions. >> >> That's what this all comes down to. 
The process as written is supposed >> to collect a lot of names, with a lot of information, and present them >> to our community and let us all decide together. That's what has been >> lost. >> >> -Jim >> >

From sfinucan at redhat.com Tue Aug 13 21:20:12 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 13 Aug 2019 22:20:12 +0100 Subject: [nova] Shanghai Project Update - Volunteer(s) needed In-Reply-To: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> References: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> Message-ID: <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com>

On Tue, 2019-08-13 at 15:20 -0500, Eric Fried wrote:
> Hello Nova.
>
> Traditionally summit project updates are done by the PTL(s) of the
> foregoing and/or upcoming cycles. In this case, the former (that would
> be me) will not be attending the summit, and we don't yet know who the
> latter will be.
>
> So I am asking for a volunteer or two who is a) attending the summit
> [1], and b) willing [2] to deliver a Nova project update presentation. I
> (and others, I am sure) will be happy to help with slide content and
> other prep.

Unless someone else wants to do it, I think I can probably do it.

Stephen

> Please respond ASAP so we can reserve a slot.
>
> Thanks,
> efried
>
> [1] it need not be definite at this point - obviously corporate,
> political, travel, and personal contingencies may interfere
> [2] blah blah, opportunity for exposure, yatta yatta, good experience, etc.

From sfinucan at redhat.com Tue Aug 13 21:31:42 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 13 Aug 2019 22:31:42 +0100 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> Message-ID:

On Tue, 2019-08-13 at 17:14 +0100, Sean Mooney wrote:
> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote:
> > Since we take particular pride in our community participation, the fact
> > that we have not been able or willing to do this correctly reflects very
> > poorly on us. I would rather that we not do this at all than do it
> > badly, so I think this should be the last release with a name. I've
> > proposed that change here:
> >
> > https://review.opendev.org/675788
>
> Not to take this out of context, but it is a rather long thread, so I have snipped
> the bit I wanted to comment on.
>
> I think not naming releases would be problematic on two fronts.
> First, without a common community name, I think codenames or other convenient names
> are going to crop up, as many have been referring to the U release as the unicorn
> release just to avoid the confusion between "U" and "you" when speaking about the
> release until we have an official name. If we had no official names I think we would
> keep using those placeholders, at least on IRC or in person (granted, we would not
> use them for code or docs).
>
> That is a minor thing, but the more disruptive issue I see is that nova's U release
> will be 21.0.0? and neutron's U release will be 16.0.0?
> Without a name to refer to the set of compatible projects for a given version, we
> would only have the letter, and from a marketing perspective, and even from a
> development perspective, I think that will be problematic.
>
> We could just have the V release, but I think it loses something in clarity.

+1. As Sean points out, and as has been pointed out elsewhere in the thread, we already have waaay too many version-related numbers floating around. If we were to opt for numbers instead of a U-based name for this release, that would mean for _nova alone_, I'd personally have to distinguish between OpenStack 22, nova 21.0 (I think) and OSP 17.0 (again, I think), and that's before I think about other projects and packages. Nooope.

I haven't heard anyone objecting to the use of release names but rather the process used to choose those names. Change that process, by either loosening the constraints used in choosing it or by moving it from a community-driven decision to something the Foundation/TC just decides on, but please don't drop the alphabetic names entirely.

Stephen

From piotr.baranowski at osec.pl Tue Aug 13 22:21:03 2019 From: piotr.baranowski at osec.pl (Piotr Baranowski) Date: Wed, 14 Aug 2019 00:21:03 +0200 (CEST) Subject: OpenStack 14 CentOS and Nvidia driver for vgpu? Message-ID: <433243135.116623.1565734863200.JavaMail.zimbra@osec.pl>

Hello list,

I'm struggling to deploy Rocky with vGPU using nvidia drivers. Has anyone experienced issues loading the nvidia modules? I'm talking about the hypervisor part of the setup.

There are two modules provided by nvidia. One loads correctly: the nvidia.ko one. The other, however, does not: nvidia-vgpu-vfio.ko. When I try to load it, it seems that the 7.6 kernel is no longer compatible with it:

modprobe nvidia-vgpu-vfio
modprobe: ERROR: could not insert 'nvidia_vgpu_vfio': Invalid argument

dmesg shows this:

nvidia_vgpu_vfio: disagrees about version of symbol vfio_pin_pages
nvidia_vgpu_vfio: Unknown symbol vfio_pin_pages (err -22)
nvidia_vgpu_vfio: disagrees about version of symbol vfio_unpin_pages
nvidia_vgpu_vfio: Unknown symbol vfio_unpin_pages (err -22)
nvidia_vgpu_vfio: disagrees about version of symbol vfio_register_notifier
nvidia_vgpu_vfio: Unknown symbol vfio_register_notifier (err -22)
nvidia_vgpu_vfio: disagrees about version of symbol vfio_unregister_notifier
nvidia_vgpu_vfio: Unknown symbol vfio_unregister_notifier (err -22)

modinfo nvidia-vgpu-vfio
filename: /lib/modules/3.10.0-957.27.2.el7.x86_64/weak-updates/nvidia-vgpu-vfio.ko
version: 430.27
supported: external
license: MIT
rhelversion: 7.6
srcversion: 0A179A61A02AD500D05FB1A
alias: pci:v000010DEd00000E00sv*sd*bc04sc80i00*
alias: pci:v000010DEd*sv*sd*bc03sc02i00*
alias: pci:v000010DEd*sv*sd*bc03sc00i00*
depends: nvidia,mdev,vfio
vermagic: 3.10.0-940.el7.x86_64 SMP mod_unload modversions

My guess is that somewhere along the rhel/centos 7.6 lifecycle the vfio module changed and broke the compatibility. Nvidia provides these modules built against the 7.6 BETA release and assumes weak-modules will make them work. Somehow it does not.

Anybody got any suggestions on how to handle this? I'm working on it with nvidia enterprise support but maybe one of you got there first?

best regards
--
Piotr Baranowski

-------------- next part -------------- An HTML attachment was scrubbed...
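A quick way to confirm the symbol mismatch suggested by the dmesg output above (note the vermagic: the module was built against 3.10.0-940.el7 but is being loaded on 3.10.0-957.27.2.el7) is to compare the CRC the module was built with against the one the running kernel exports. This is only a sketch, assuming the stock RHEL/CentOS 7 file layout shown in the modinfo output:

# CRC the nvidia module expects for one of the failing symbols
modprobe --dump-modversions /lib/modules/$(uname -r)/weak-updates/nvidia-vgpu-vfio.ko | grep -w vfio_pin_pages

# CRC the running kernel actually exports
zgrep -w vfio_pin_pages /boot/symvers-$(uname -r).gz

If the two CRCs differ, the vfio kABI changed within the 7.6 stream and weak-modules cannot legitimately carry the prebuilt module forward; the way out is either a driver build for the exact running kernel, or pinning the kernel to one whose vfio symbols still match the driver's build target.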
URL:

From zhangbailin at inspur.com Wed Aug 14 01:27:51 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Wed, 14 Aug 2019 01:27:51 +0000 Subject: reply: [lists.openstack.org][nova] The race for 2.76 Message-ID: <90d2716296164247bc053b09f6bbf318@inspur.com>

> There are several compute API microversion changes that are conflicting and will be fighting for 2.76, but I think we're trying to prioritize this one [1] for the ironic
> power sync external event handling since (1) Surya is going to be on vacation soon, (2) there is an ironic change that depends on it which has had review [2] and (3)
> the nova change has had quite a bit of review already.
> As such I think others waiting to rebase from 2.75 to 2.76 should probably hold off until [1] is approved which should happen today or tomorrow.
>
> [1] https://review.opendev.org/#/c/645611/
> [2] https://review.opendev.org/#/c/664842/
>
> --
>
> Thanks,
>
> Matt

Agree with Matt. It is recommended to speed up the review of the patches in the nova runways queue [1]. I found that it has accumulated a lot, and some of them are difficult to complete in one cycle.

[1] https://etherpad.openstack.org/p/nova-runways-train

From soulxu at gmail.com Wed Aug 14 01:36:04 2019 From: soulxu at gmail.com (Alex Xu) Date: Wed, 14 Aug 2019 09:36:04 +0800 Subject: [nova] Shanghai Project Update - Volunteer(s) needed In-Reply-To: <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> References: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> Message-ID:

I apply for the second one.

Stephen Finucane wrote on Wed, Aug 14, 2019 at 5:25 AM:

> On Tue, 2019-08-13 at 15:20 -0500, Eric Fried wrote:
> > Hello Nova.
> >
> > Traditionally summit project updates are done by the PTL(s) of the
> > foregoing and/or upcoming cycles. In this case, the former (that would
> > be me) will not be attending the summit, and we don't yet know who the
> > latter will be.
> >
> > So I am asking for a volunteer or two who is a) attending the summit
> > [1], and b) willing [2] to deliver a Nova project update presentation. I
> > (and others, I am sure) will be happy to help with slide content and
> > other prep.
>
> Unless someone else wants to do it, I think I can probably do it.
>
> Stephen
>
> > Please respond ASAP so we can reserve a slot.
> >
> > Thanks,
> > efried
> >
> > [1] it need not be definite at this point - obviously corporate,
> > political, travel, and personal contingencies may interfere
> > [2] blah blah, opportunity for exposure, yatta yatta, good experience, etc.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zhangbailin at inspur.com Wed Aug 14 01:48:52 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Wed, 14 Aug 2019 01:48:52 +0000 Subject: Re: [lists.openstack.org][nova] Shanghai Project Update - Volunteer(s) needed Message-ID: <6da5507a24944535802774e96f1aeee8@inspur.com>

> I apply for the second one.

I can also provide some help if there is a need.

Stephen Finucane wrote on Wed, Aug 14, 2019 at 5:25 AM:

On Tue, 2019-08-13 at 15:20 -0500, Eric Fried wrote:
> Hello Nova.
>
> Traditionally summit project updates are done by the PTL(s) of the
> foregoing and/or upcoming cycles. In this case, the former (that would
> be me) will not be attending the summit, and we don't yet know who the
> latter will be.
>
> So I am asking for a volunteer or two who is a) attending the summit
> [1], and b) willing [2] to deliver a Nova project update presentation. I
> (and others, I am sure) will be happy to help with slide content and
> other prep.

Unless someone else wants to do it, I think I can probably do it.

Stephen

> Please respond ASAP so we can reserve a slot.
>
> Thanks,
> efried
>
> [1] it need not be definite at this point - obviously corporate,
> political, travel, and personal contingencies may interfere
> [2] blah blah, opportunity for exposure, yatta yatta, good experience, etc.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zbitter at redhat.com Wed Aug 14 02:10:11 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 13 Aug 2019 22:10:11 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <87imr13tye.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> <8736i56rl2.fsf@meyer.lemoncheese.net> <4b539001bc3b4881efeba211d122069e17d35c60.camel@redhat.com> <87imr13tye.fsf@meyer.lemoncheese.net> Message-ID:

On 13/08/19 12:34 PM, James E. Blair wrote:
> Sean Mooney writes:
>
>> On Tue, 2019-08-13 at 07:57 -0700, James E. Blair wrote:
>>> Since we take particular pride in our community participation, the fact
>>> that we have not been able or willing to do this correctly reflects very
>>> poorly on us. I would rather that we not do this at all than do it
>>> badly, so I think this should be the last release with a name. I've
>>> proposed that change here:
>>>
>>> https://review.opendev.org/675788
>>
>> Not to take this out of context, but it is a rather long thread, so I have snipped
>> the bit I wanted to comment on.
>>
>> I think not naming releases would be problematic on two fronts.
>> First, without a common community name, I think codenames or other convenient names
>> are going to crop up, as many have been referring to the U release as the unicorn
>> release just to avoid the confusion between "U" and "you" when speaking about the
>> release until we have an official name. If we had no official names I think we would
>> keep using those placeholders, at least on IRC or in person (granted, we would not
>> use them for code or docs).
>>
>> That is a minor thing, but the more disruptive issue I see is that nova's U release
>> will be 21.0.0? and neutron's U release will be 16.0.0? Without a name to refer to the
>> set of compatible projects for a given version, we would only have the letter, and
>> from a marketing perspective and even from a development perspective I think that
>> will be problematic.
>>
>> We could just have the V release, but I think it loses something in clarity.
>
> That's a good point.
>
> Maybe we could just number them? V would be "OpenStack Release 22".
>
> Or we could refer to them by date, as we used to, but without attempting
> to use dates as actual version numbers.

I propose that once we wrap back to A, the next series should be named exclusively after words that generically describe a geographic feature (Park/Quay/Road/Street/Train/University &c.)
since those should be less fraught and seem to be everyone's favourites anyway :P From li.canwei2 at zte.com.cn Wed Aug 14 02:41:18 2019 From: li.canwei2 at zte.com.cn (li.canwei2 at zte.com.cn) Date: Wed, 14 Aug 2019 10:41:18 +0800 (CST) Subject: =?UTF-8?B?W1dhdGNoZXJdIHRlYW0gbWVldGluZyBhdCAwODowMCBVVEMgdG9kYXk=?= Message-ID: <201908141041182213224@zte.com.cn> Hi team, Watcher team will have a meeting at 08:00 UTC today in the #openstack-meeting-alt channel. The agenda is available on https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda feel free to add any additional items. Thanks! Canwei Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Wed Aug 14 04:12:22 2019 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 14 Aug 2019 00:12:22 -0400 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: <875zn27z2o.fsf@meyer.lemoncheese.net> References: <003286A6-2F8C-4827-A290-EB32277C05E2@doughellmann.com> <87blx1olw9.fsf@meyer.lemoncheese.net> <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> Message-ID: On 12/08/19 7:18 PM, James E. Blair wrote: > As I understand it, the sequence of events that led us here was: > > A) Doug (as interim unofficial election official) removed the name for > unspecified reasons. [1] > > B) I objected to the removal. This is in accordance with step 5 of the > process: > > Once the list is finalized and publicized, a one-week period shall > elapse before the start of the election so that any names removed > from consideration because they did not meet the Release Name > Criteria may be discussed. Names erroneously removed may be > re-added during this period, and the Technical Committee may vote > to add exceptional names (which do not meet the standard criteria). > > C) Rico (the election official at the time) agreed with my reasoning > that it was erroneously removed and re-added the name. [2] > > D) The list was re-issued and the name was once again missing. Four > reasons were cited, three of which have no place being considered > prior to voting, and the fourth is a claim that it does not meet the > criteria. I'd just like to point out that Rico was placed in a very difficult position here - after he generously volunteered to step up as the co-ordinator at a time when the deadline to begin the vote had already passed, doing so from a timezone where any discussion with you, the rest of the TC, or indeed most people in the community effectively had a 24 hour round trip time. So when you pointed out that Doug's reason for dropping it from the list was not in line with the guidelines, he agreed. It was only after that that I raised the issue of it not appearing to meet the criteria. There wasn't a loud chorus of TC members (or people in general) saying that it did, so he essentially agreed that it didn't and we treated it as a proposed exception. Perhaps I gave him bad advice, but he's entitled to take advice from anyone and it's easy to see why the opinions of his fellow TC members might be influential. I must confess that I neglected to re-read the portion of the guidelines that says that in the case of questionable proposals the co-ordinator should err on the side of inclusion. Perhaps if you had been alerted to the discussion in time to raise this point then the outcome might have been different. 
Nevertheless, given that each step in the consultation process consumed another 12 hours following a deadline that had already passed before the process began, I think Rico handled it as well as anyone could have. My understanding (which may be wrong because it all seems to have gone down within a day that I happened to be on vacation) of how we got into that state to begin with is that after Tony did a ton of work figuring out how to get a local name beginning with U, collected a bunch of names + feedback, and was basically ready to start the poll, the Foundation implied that they would veto all of the names on the grounds that their China expert didn't feel that using the GR transliteration would be appropriate because of reasons. Those reasons conflicted with the interpretation of the China expert that Tony consulted and with all available information published in English, and honestly I wish somebody had pushed back on them, but at a certain point there's probably nothing else you can do but expand the geographic region, delay the poll, and start again. Which the TC did. And of course this had the knock-on effect of requiring someone to decide whether certain incandescently-hot potato options should be omitted from the poll. They were of course, and I know you think that's the wrong call but I disagree. IIRC the current process was put in place after the Lemming debacle, on the principle that in future the community should be allowed to have our fun and vote for Lemming (or not), and if the Foundation marketing want to veto that after the fact then fine, but don't let them take away our fun before the fact. I agree with that so far as it goes. (Full disclosure: I would have voted for Lemming.) However, it's just not the case that having a culturally-insensitive choice win the poll, or just do well in the poll, or even appear in the poll, cannot damage the community so long as marketing later rejects it. Nor does a public airing of dirty laundry seem conducive to _reducing_ the problem. This seems to be an issue that was not contemplated when the process was set down. (As if to prove the point, this very thing happened the very first time that the new process was used!) And quite frankly, it's not the responsibility of random people on the internet (the poll is open to anyone) to research the cultural sensitivity of all of the options. This is exactly the kind of reason we have representative governance. I agree that it's a problem that the TC has a written policy of abdicating this responsibility, and we have (mercifully) not followed it. We should change the policy if we don't believe in it. You wrote elsewhere in this thread that all of the delays and handoffs were due to nobody caring. I think this is completely wrong. The delays were due to people caring *a lot* under difficult circumstances (beginning with the fact that the official transliteration of local place names does not contain any syllables starting with U). Taking the Summit to Shanghai is a massive exercise and a huge opportunity to listen to the developer community there and find ways to engage with them better in the future, and nobody wants to waste that opportunity by alienating people unnecessarily. cheers, Zane. 
From sundar.nadathur at intel.com Wed Aug 14 04:29:58 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 14 Aug 2019 04:29:58 +0000 Subject: [cyborg] Poll for new weekly IRC meeting time In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5275E2395@fmsmsx122.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED5275E2395@fmsmsx122.amr.corp.intel.com> Message-ID: <1CC272501B5BC543A05DB90AA509DED5276005CD@fmsmsx122.amr.corp.intel.com> Based on the poll, we have chosen the new time for the Cyborg IRC weekly meeting: Thursday at UTC 0300 (China @11 am Thu; US West Coast @8 pm Wed) This will take effect from next week. The Cyborg meeting page [1] has been updated. [1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting Regards, Sundar From: Nadathur, Sundar Sent: Tuesday, August 6, 2019 11:07 PM To: openstack-discuss at lists.openstack.org Subject: [cyborg] Poll for new weekly IRC meeting time The current Cyborg weekly IRC meeting time [1] is a conflict for many. We are looking for a better time that works for more people, with the understanding that no time is perfect for all. Please fill out this poll: https://doodle.com/poll/6t279f9y6msztz7x Be sure to indicate which times do not work for you. You can propose a new timeslot beyond what I included in the poll. [1] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Weekly_IRC_Cyborg_team_meeting Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Wed Aug 14 05:37:10 2019 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 14 Aug 2019 05:37:10 +0000 Subject: [os-acc] os-acc will be retired as a project Message-ID: <1CC272501B5BC543A05DB90AA509DED5276006C9@fmsmsx122.amr.corp.intel.com> A project called os-acc [1] was created in Stein cycle based on an expectation that it will be used for Cyborg - Nova integration. It is not relevant anymore and we have no plans to support it in Train. We are discontinuing it with immediate effect. It was never used and had no developer base to speak of. So, we do not see any issues or impact for anybody. [1] https://opendev.org/openstack/os-acc/ Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Aug 14 06:38:18 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 14 Aug 2019 15:38:18 +0900 Subject: [Telemetry][Shanghai Summit] Looking for Project Update session volunteers Message-ID: Hi team, I cannot attend the next summit in Shanghai so I'm looking for one-two volunteers who want to represent the Telemetry team at the summit. Please reply to this email by this Sunday and we will work through the presentation together. Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Wed Aug 14 07:53:12 2019 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Wed, 14 Aug 2019 07:53:12 +0000 Subject: [nova] Shanghai Project Update - Volunteer(s) needed In-Reply-To: <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> References: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> Message-ID: <1565769180.30413.0@smtp.office365.com> On Tue, Aug 13, 2019 at 11:20 PM, Stephen Finucane wrote: > On Tue, 2019-08-13 at 15:20 -0500, Eric Fried wrote: >> Hello Nova. 
>> Traditionally summit project updates are done by the PTL(s) of the
>> foregoing and/or upcoming cycles. In this case, the former (that would
>> be me) will not be attending the summit, and we don't yet know who the
>> latter will be.
>>
>> So I am asking for a volunteer or two who is a) attending the summit
>> [1], and b) willing [2] to deliver a Nova project update presentation. I
>> (and others, I am sure) will be happy to help with slide content and
>> other prep.
>
> Unless someone else wants to do it, I think I can probably do it.

You were faster than me in noticing the IRC ping:

22:31 < mriedem> efried: i volunteer alex_xu and gibi for the project update in shanghai

So you won! ;)

gibi

> Stephen
>
>> Please respond ASAP so we can reserve a slot.
>>
>> Thanks,
>> efried
>>
>> [1] it need not be definite at this point - obviously corporate,
>> political, travel, and personal contingencies may interfere
>> [2] blah blah, opportunity for exposure, yatta yatta, good experience, etc.

From ralf.teckelmann at bertelsmann.de Wed Aug 14 08:41:28 2019 From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP) Date: Wed, 14 Aug 2019 08:41:28 +0000 Subject: [nova][glance][cinder] How to do consistent snapshots with qemu-guest-agent Message-ID:

Hello,

Working my way through documentation and articles, I am totally lost on the matter. All I want to know is whether

- issuing "openstack snapshot create ...."
- clicking "Create Snapshot" in Horizon for an instance

will secure a consistent snapshot (of all volumes in question). With "consistent", I mean that all the data in memory is written to the disk before starting a snapshot.

I hope someone can clear up whether the setup described in the following is sufficient to achieve this goal or if I have to do something in addition. If you have any questions I am eager to answer as fast as possible.

Setup:

We have a Stein-based OpenStack deployment with cinder backed by ceph. Instances are created with cinder volumes. Boot volumes are based on an image having the properties:

- hw_qemu_guest_agent='yes'
- os_require_quiesce='yes'

The image is ubuntu 16.04 or 18.04 with the qemu-guest-agent package installed and the service running (no additional configuration besides the distro default):

qemu-guest-agent.service - LSB: QEMU Guest Agent startup script
Loaded: loaded (/etc/init.d/qemu-guest-agent; bad; vendor preset: enabled)
Active: active (running) since Wed 2019-08-14 07:42:21 UTC; 9min ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/qemu-guest-agent.service
└─2300 /usr/sbin/qemu-ga --daemonize -m virtio-serial -p /dev/virtio-ports/org.qemu.guest_agent.0

Aug 14 07:42:21 ulthwe systemd[1]: Starting LSB: QEMU Guest Agent startup script...
Aug 14 07:42:21 ulthwe systemd[1]: Started LSB: QEMU Guest Agent startup script.

I can see the socket on the compute node and send pings successfully:

~# ls /var/lib/libvirt/qemu/*.sock
/var/lib/libvirt/qemu/org.qemu.guest_agent.0.instance-0000248e.sock

root at pcevh2404:~# virsh qemu-agent-command instance-0000248e '{"execute":"guest-ping"}'
{"return":{}}

I can also send freeze and thaw successfully:

~# virsh qemu-agent-command instance-0000248e '{"execute":"guest-fsfreeze-freeze"}'
{"return":1}
~# virsh qemu-agent-command instance-0000248e '{"execute":"guest-fsfreeze-thaw"}'
{"return":1}

Sending a simple write (echo "bla" > blub.file) in the "frozen" state will be blocked until "thaw" as expected.

Best regards
Ralf T.
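P.S. For completeness, a quick way to verify on the compute node that the quiesce actually fires while a snapshot is running; the instance name is the one from the example above, the server name is a placeholder, and guest-fsfreeze-status is a standard qemu-guest-agent command:

# start a snapshot in one shell (server name/ID is a placeholder)
openstack server image create --name consistency-test myserver

# meanwhile, poll the freeze state from another shell
virsh qemu-agent-command instance-0000248e '{"execute":"guest-fsfreeze-status"}'

If quiesce is being requested, the status should briefly flip to {"return":"frozen"} while the snapshot is taken; if it stays "thawed" the whole time, the snapshot is presumably only crash-consistent.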
-------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Wed Aug 14 11:16:22 2019 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Wed, 14 Aug 2019 18:16:22 +0700 Subject: neutron-server won't start in OpenStack OVN Message-ID: Hi everyone, I try to install OpenStack with OVN enabled. But when trying to start neutron-server, the service is always inactive (exited) I try to check neutron-server logs and gets this message: 2019-08-14 05:40:32.649 4223 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver 2019-08-14 05:40:32.651 4224 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for MaintenanceWorker with retry 2019-08-14 05:40:32.654 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connecting... 2019-08-14 05:40:32.655 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connected 2019-08-14 05:40:32.657 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 2019-08-14 05:40:32.658 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.660 4225 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for AllServicesNeutronWorker with retry 2019-08-14 05:40:32.670 4220 INFO neutron.wsgi [-] (4220) wsgi starting up on http://0.0.0.0:9696 2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.693 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting... 2019-08-14 05:40:32.694 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected 2019-08-14 05:40:32.697 4224 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver 2019-08-14 05:40:32.844 4225 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.844 4224 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.844 4222 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.845 4223 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting 2019-08-14 05:40:32.845 4221 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting neutron-server full logs: https://paste.ubuntu.com/p/GHfS38KFCr/ ovsdb-server port is active: https://paste.ubuntu.com/p/MhgNs8SGdX/ neutron.conf and ml2_conf.ini: https://paste.ubuntu.com/p/4J7hTVf5qz/ Does anyone know why this error happens? Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Wed Aug 14 12:29:37 2019 From: ykarel at redhat.com (Yatin Karel) Date: Wed, 14 Aug 2019 17:59:37 +0530 Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode? In-Reply-To: References: Message-ID: On Wed, Mar 27, 2019 at 11:52 PM Alex Schultz wrote: > > On Tue, Sep 18, 2018 at 1:30 PM Alex Schultz wrote: > > > > On Tue, Sep 18, 2018 at 1:27 PM, Matt Riedemann wrote: > > > The release page says Ocata is planned to go into extended maintenance mode > > > on Aug 27 [1]. There really isn't much to this except it means we don't do > > > releases for Ocata anymore [2]. There is a caveat that project teams that do > > > not wish to maintain stable/ocata after this point can immediately end of > > > life the branch for their project [3]. 
We can still run CI using tags, e.g. > > > if keystone goes ocata-eol, devstack on stable/ocata can still continue to > > > install from stable/ocata for nova and the ocata-eol tag for keystone. > > > Having said that, if there is no undue burden on the project team keeping > > > the lights on for stable/ocata, I would recommend not tagging the > > > stable/ocata branch end of life at this point. > > > > > > So, questions that need answering are: > > > > > > 1. Should we cut a final release for projects with stable/ocata branches > > > before going into extended maintenance mode? I tend to think "yes" to flush > > > the queue of backports. In fact, [3] doesn't mention it, but the resolution > > > said we'd tag the branch [4] to indicate it has entered the EM phase. > > > > > > 2. Are there any projects that would want to skip EM and go directly to EOL > > > (yes this feels like a Monopoly question)? > > > > > > > I believe TripleO would like to EOL instead of EM for Ocata as > > indicated by the thead > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134671.html > > > > Bringing this backup to see what we need to do to get the stable/ocata > branches ended for the TripleO projects. I'm bringing this up > because we have https://review.openstack.org/#/c/647009/ which is for > the upcoming rename but CI is broken and we have no interest in > continue to keep the stable/ocata branches alive (or fix ci for them). > So we had a discussion yesterday in TripleO meeting regarding EOL of Ocata and Pike Branches for TripleO projects, and there was no clarity regarding the process of making the branches EOL(is just pushing a change to openstack/releases(deliverables/ocata/.yaml) creating ocata-eol tag enough or something else is also needed), can someone from Release team point us in the right direction. > Thanks, > -Alex > > > Thanks, > > -Alex > > > > > [1] https://releases.openstack.org/ > > > [2] > > > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > > > [3] > > > https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance > > > [4] > > > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > > > > > > -- > > > > > > Thanks, > > > > > > Matt > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Thanks and Regards Yatin Karel From merlin.blom at bertelsmann.de Wed Aug 14 12:43:28 2019 From: merlin.blom at bertelsmann.de (Blom, Merlin, NMU-OI) Date: Wed, 14 Aug 2019 12:43:28 +0000 Subject: [nova][stacktach] Message-ID: Hey, it anybody using stacktach with OpenStack Nova Stein? I can't stream messages to it via Nova: -> stacktach_worker_config.json {"deployments": [ { "name": "xxx.dev", "durable_queue": false, "rabbit_host": "10.x.x.x", "rabbit_port": 5672, "rabbit_userid": "nova", "rabbit_password": "xxx", "rabbit_virtual_host": "/nova", "exit_on_exception": true, "topics": { "nova": [ { "queue": "notification.info", "routing_key": "notification.info" }, { "queue": "monitor.error", "routing_key": "monitor.error" } . How do you configure it? Are there alternatives for reading RabbitMQ Messages for debug/billing purposes? Greetings, Merlin Blom -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From pierre at stackhpc.com Wed Aug 14 12:48:30 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 14 Aug 2019 14:48:30 +0200 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> Message-ID: Hello, I am reviving this thread as I have never received any email notifications from starred projects in Storyboard, despite enabling them multiple times. Although the change appears to be saved correctly, if I log out and log back in, my preferences are reset to defaults (not only for email settings, but also for page size). I also noticed that increasing "Page size" doesn't have any effect within the same session, I always see 10 results per page. Is there a known issue with persisting preferences in Storyboard? Thanks, Pierre On Sun, 19 May 2019 at 06:17, Akihiro Motoki wrote: > > Thanks for the information. > > I re-enabled email notification and then started to receive notifications. I am not sure why this solved the problem but it now works for me. > > > 2019年5月15日(水) 22:43 : >> >> On 2019-05-15 13:58, Akihiro Motoki wrote: >> > Hi, >> > >> > Is there a way to get email notification on stories/tasks of >> > subscribed projects in storyboard? >> >> Yes, go to your preferences >> (https://storyboard.openstack.org/#!/profile/preferences) >> by clicking on your name in the top right, then Preferences. >> >> Scroll to the bottom and check the "Enable notification emails" >> checkbox, then >> click "Save". There's a UI bug where sometimes the displayed preferences >> will >> look like the save button didn't work, but rest assured that it did >> unless you >> get an error message. >> >> Once you've done this the email associated with your OpenID will receive >> notification emails for things you're subscribed to (which includes >> changes on >> stories/tasks related to projects you're subscribed to). >> >> Thanks, >> >> Adam (SotK) >> From rfolco at redhat.com Wed Aug 14 13:06:07 2019 From: rfolco at redhat.com (Rafael Folco) Date: Wed, 14 Aug 2019 10:06:07 -0300 Subject: [tripleo] TripleO CI Summary: Sprint 34 Message-ID: Greetings, The TripleO CI team has just completed Sprint 34 / Unified Sprint 13 (Jul 18 thru Aug 07). The following is a summary of completed work during this sprint cycle: - Created RHEL8 jobs to build a periodic pipeline in the RDO Software Factory and provide early feedback for CentOS8 coverage. - Fixed RHEL8 container and image build jobs in the periodic pipeline. - Bootstrapped RHEL8 standalone job and made progress on RHEL8 OVB featureset 001 job. - Completed scenario007 and featureset039 job updates upstream. - Promotion status: red on all branches at half of the sprint due to rhel8 changes and infra related issues (transient failures). - Disabled Fedora jobs from periodic pipeline. - Merged code for automatic creation of featureset matrix on TripleO quickstart documentation [3]. The planned work for the next sprint [1] are: - Create scenario1-4 jobs for RHEL8 in the periodic pipeline. - Design and test multi-arch container support. - Resume the design work for a staging environment to test changes in the promoter server for the multi-arch builds. - Continue OVB featureset001 bootstrapping on RHEL8. - Disable Fedora jobs upstream. 
The Ruck and Rover for this sprint are Chandan Kumar (chkumar) and Ronelle Landy (rlandy). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes are being tracked in etherpad [2]. Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-14 [2] https://etherpad.openstack.org/p/ruckroversprint14 [3] https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Aug 14 14:47:34 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 14 Aug 2019 10:47:34 -0400 Subject: [ansible][openstack-ansible][tripleo][kolla-ansible] Ansible SIG Message-ID: Hi everyone, One of the things that came up recently was collaborating more with other deployment tools and this has brought up things like working together on our Ansible roles as more and more deployments tools start to use it. However, as we start to practice this, we realize that a project team ownership starts not making much sense (due to the fact that a project can only be under one team in governance). It starts being confusing when a role is built by OSA, and consumed by TripleO, so the PTL is from another team and it all starts getting weird and odd, so we discussed the creation of an Ansible SIG for those who are interested in maintaining code across our community together that would be consumed together. We already have some deliverables that can live underneath it which is pretty awesome too, so I'm emailing this to ask for interested parties to speak up if they're interested and to mention that we're more than happy to have other co-chairs that are intersted. I've submitted the initial patch here: https://review.opendev.org/676428 Thank you, Mohammed From duc.openstack at gmail.com Wed Aug 14 16:34:54 2019 From: duc.openstack at gmail.com (Duc Truong) Date: Wed, 14 Aug 2019 09:34:54 -0700 Subject: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization? In-Reply-To: References: Message-ID: I don't know how to solve this problem in aodh, but it is possible to use Prometheus to aggregate CPU utilization and trigger scaling. I wrote up how to do this with Senlin and Prometheus here: https://medium.com/@dkt26111/auto-scaling-openstack-instances-with-senlin-and-prometheus-46100a9a14e1?source=friends_link&sk=5c0a2aa9e541e8c350963e7ec72bcbb5 You can probably do something similar with Heat and Prometheus. On Sun, Aug 4, 2019 at 12:52 AM Bernd Bausch wrote: > > Prior to Stein, Ceilometer issued a metric named cpu_util, which I could use to trigger alarms and autoscaling when CPU utilization was too high. > > cpu_util doesn't exist anymore. Instead, we are asked to use Gnocchi's rate feature. However, when using rates, alarms on a group of resources require more parameters than just one metric: Both an aggregation and a reaggregation method are needed. > > For example, a group of instances that implement "myapp": > > gnocchi measures aggregation -m cpu --reaggregation mean --aggregation rate:mean --query server_group=myapp --resource-type instance > > Actually, this command uses a deprecated API (but from what I can see, Aodh still uses it). 
The new way is like this: > > gnocchi aggregates --resource-type instance '(aggregate rate:mean (metric cpu mean))' server_group=myapp > > If rate:mean is in the archive policy, it also works the other way around: > > gnocchi aggregates --resource-type instance '(aggregate mean (metric cpu rate:mean))' server_group=myapp > > Without reaggregation, I get quite unexpected numbers, including negative CPU rates. If you want to understand why, see this discussion with one of the Gnocchi maintainers [1]. > > My problem: Aodh allows me to set an aggregation method, but not a reaggregation method. How can I create alarms based on rates? The problem extends to Heat and autoscaling. > > Thanks much, > > Bernd. > > [1] https://github.com/gnocchixyz/gnocchi/issues/1044 From openstack at nemebean.com Wed Aug 14 17:07:01 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 14 Aug 2019 12:07:01 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> Message-ID: <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> I have a PoC patch up in devstack[0] to start using the openstack-server client. It passed the basic devstack test and looking through the logs you can see that openstack calls are now completing in fractions of a second as opposed to 2.5 to 3, so I think it's working as intended. That said, it needs quite a bit of refinement. For example, I think we should disable this on any OSC patches. I also suspect it will fall over for any projects that use an OSC plugin since the server is started before any plugins are installed. This could probably be worked around by restarting the service after a project is installed, but it's something that needs to be dealt with. Before I start taking a serious look at those things, do we want to pursue this? It does add some potential complexity to debugging if a client call fails or if the server crashes. I'm not sure I can quantify the risk there though since it's always Just Worked(tm) for me. -Ben 0: https://review.opendev.org/676016 From cboylan at sapwetik.org Wed Aug 14 17:20:37 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 14 Aug 2019 10:20:37 -0700 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> Message-ID: On Wed, Aug 14, 2019, at 10:07 AM, Ben Nemec wrote: > I have a PoC patch up in devstack[0] to start using the openstack-server > client. It passed the basic devstack test and looking through the logs > you can see that openstack calls are now completing in fractions of a > second as opposed to 2.5 to 3, so I think it's working as intended. > > That said, it needs quite a bit of refinement. For example, I think we > should disable this on any OSC patches. I also suspect it will fall over > for any projects that use an OSC plugin since the server is started > before any plugins are installed. 
> This could probably be worked around
> by restarting the service after a project is installed, but it's
> something that needs to be dealt with.
>
> Before I start taking a serious look at those things, do we want to
> pursue this? It does add some potential complexity to debugging if a
> client call fails or if the server crashes. I'm not sure I can quantify
> the risk there though since it's always Just Worked(tm) for me.

Considering that our number one identified e-r bug is job timeouts [1], I think anything that reduces job time by measurable amounts is worthwhile. Additionally, if we save 5 minutes per devstack run and then run devstack 10k times a day (not an up-to-date number, but it has been in that range in the past; someone can double-check this with grafana or logstash or the zuul dashboard), that is a massive savings when looked at on the whole. To me that makes it worthwhile.

>
> -Ben
>
> 0: https://review.opendev.org/676016

[1] http://status.openstack.org/elastic-recheck/index.html#1686542

From sean.mcginnis at gmx.com  Wed Aug 14 19:24:40 2019
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 14 Aug 2019 14:24:40 -0500
Subject: [openstack-dev] Are we ready to put stable/ocata into extended maintenance mode?
In-Reply-To: 
References: 
Message-ID: <20190814192440.GA3048@sm-workstation>

> >
> > Bringing this back up to see what we need to do to get the stable/ocata
> > branches ended for the TripleO projects. I'm bringing this up
> > because we have https://review.openstack.org/#/c/647009/ which is for
> > the upcoming rename but CI is broken and we have no interest in
> > continuing to keep the stable/ocata branches alive (or fixing ci for them).
> >
> So we had a discussion yesterday in the TripleO meeting regarding EOL of
> Ocata and Pike branches for TripleO projects, and there was no clarity
> regarding the process of making the branches EOL (is just pushing a
> change to openstack/releases (deliverables/ocata/.yaml)
> creating the ocata-eol tag enough, or is something else also needed). Can
> someone from the Release team point us in the right direction.
>
> Thanks,
> -Alex
>

It would appear we have additional information we should add somewhere like:

https://docs.openstack.org/project-team-guide/stable-branches.html
or
https://releases.openstack.org/#references

I believe it really is just a matter of requesting the new tag in the openstack/releases repo. There is a good example of this when Tony did it for TripleO's stable/newton branch:

https://review.opendev.org/#/c/583856/

I think I recall there were some additional steps Tony took at the time, but I think everything is now covered by the automated process. Tony, please correct me if I am wrong. Not sure if it applies, but you may want to see if there are any Zuul jobs that need to be cleaned up or anything of that sort.

We do say branches will be unmaintained in the Extended Maintenance phase for six months before going End of Life. Looking at Ocata, that happened April 5 of this year. Six months would put it at the beginning of October. But I think if the team knows they will not be accepting any more patches to these branches, then it is better to get it clearly marked as EOL so proper expectations are set.
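For reference, a minimal sketch of what such an EOL tag request might look like in the releases repo (a hypothetical deliverable file, modeled on the newton example above; double-check the exact format against current entries before pushing):

    # deliverables/ocata/tripleo-heat-templates.yaml (illustrative)
    releases:
      ...
      - version: ocata-eol
        projects:
          - repo: openstack/tripleo-heat-templates
            hash: <last commit on stable/ocata>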
Sean

From kendall at openstack.org  Wed Aug 14 19:39:09 2019
From: kendall at openstack.org (Kendall Waters)
Date: Wed, 14 Aug 2019 12:39:09 -0700
Subject: August 14 Early Bird Registration Deadline - Open Infrastructure Summit Shanghai
Message-ID: <0F6C0A39-7D80-46E5-B4F7-A2D3B31F0709@openstack.org>

Hi everyone,

Friendly reminder that today, August 14, at 11:59pm PT (August 15 at 2:59pm China Standard Time) is the deadline to purchase passes for the Open Infrastructure Summit at the early bird price. Register now before the prices increase!

There are 2 ways to register - in USD or in RMB (with e-fapiao). In case you missed it, the agenda went live last week and features sessions covering CI/CD, Edge Computing, 5G, hybrid cloud, and more.

Other Summit News
- If you require a visa to travel to China, please apply here
- Hiring talent to build your open infrastructure strategy? Have a new product to share? Join the Summit as a sponsor

If you have any questions, please email summit at openstack.org.

Cheers,
Kendall

Kendall Waters
OpenStack Marketing & Events
kendall at openstack.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sean.mcginnis at gmx.com  Wed Aug 14 19:45:26 2019
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 14 Aug 2019 14:45:26 -0500
Subject: [release] Release countdown for week R-8, August 19-23
Message-ID: <20190814194526.GA6075@sm-workstation>

Your long-awaited countdown email...

Development Focus
-----------------

It's probably a good time for teams to take stock of their library and client work that still needs to be completed. The non-client library freeze is coming up, followed closely by the client library freeze. Please plan accordingly to avoid any last-minute rushes to get key functionality in.

General Information
-------------------

Looking ahead to Train-3, please be aware of the feature freeze dates. Those vary depending on deliverable type:

* General libraries (except client libraries) need to have their last feature release before Non-client library freeze (September 05). Their stable branches are cut early.
* Client libraries (think python-*client libraries) need to have their last feature release before Client library freeze (September 12)
* Deliverables following a cycle-with-rc model (that would be most services) observe a Feature freeze on that same date, September 12. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval.
* After feature freeze, cycle-with-rc deliverables need to produce a first release candidate (and a stable branch) before RC1 deadline (September 26)
* Deliverables following cycle-with-intermediary model can release as necessary, but in all cases before Final RC deadline (October 10)

Finally, now is a good time to list contributors to your team who do not have a code contribution, and therefore won't automatically be considered an Active Technical Contributor and allowed to vote in elections. This is done by adding extra-atcs to:
https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml
before the Extra-ATC freeze on August 29.
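As an illustration, an extra-atcs entry takes roughly this shape (a sketch only; copy the exact field names from existing entries in projects.yaml):

    # reference/projects.yaml (illustrative fragment)
    Telemetry:
      extra-atcs:
        - name: Jane Doe
          email: jane@example.com
          expires-in: April 2020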
Upcoming Deadlines & Dates
--------------------------

Extra-ATC freeze: August 29 (R-7 week)
Non-client library freeze: September 05 (R-6 week)
Client library freeze: September 12 (R-5 week)
Train-3 milestone: September 12 (R-5 week)

--
Sean McGinnis (smcginnis)

From openstack at fried.cc  Wed Aug 14 19:52:52 2019
From: openstack at fried.cc (Eric Fried)
Date: Wed, 14 Aug 2019 14:52:52 -0500
Subject: [nova] Shanghai Project Update - Volunteer(s) needed
In-Reply-To: <1565769180.30413.0@smtp.office365.com>
References: <4b1b121f-a376-3976-8678-b81f3a0e03c7@fried.cc> <0ae7cd298d9354946225be455be0dab4525c141f.camel@redhat.com> <1565769180.30413.0@smtp.office365.com>
Message-ID: <04a77fa3-1c3f-efe2-e563-81bfb4554297@fried.cc>

Thanks for the quick responses. I've requested a slot.

efried
.

From aaronzhu1121 at gmail.com  Thu Aug 15 00:42:43 2019
From: aaronzhu1121 at gmail.com (Rong Zhu)
Date: Thu, 15 Aug 2019 08:42:43 +0800
Subject: [Telemetry][Shanghai Summit] Looking for Project Update session volunteers
In-Reply-To: 
References: 
Message-ID: 

Hi Trinh,

I will probably attend the summit, so we can discuss more later.

Trinh Nguyen wrote on Wed, 14 Aug 2019 at 14:41:
> Hi team,
>
> I cannot attend the next summit in Shanghai so I'm looking for one-two
> volunteers who want to represent the Telemetry team at the summit. Please
> reply to this email by this Sunday and we will work through the
> presentation together.
>
> Bests,
>
> --
> *Trinh Nguyen*
> *www.edlab.xyz *
>
--
Thanks,
Rong Zhu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From iwienand at redhat.com  Thu Aug 15 01:49:57 2019
From: iwienand at redhat.com (Ian Wienand)
Date: Thu, 15 Aug 2019 11:49:57 +1000
Subject: [qa][openstackclient] Debugging devstack slowness
In-Reply-To: <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com>
References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com>
Message-ID: <20190815014957.GB5923@fedora19.localdomain>

On Wed, Aug 14, 2019 at 12:07:01PM -0500, Ben Nemec wrote:
> I have a PoC patch up in devstack[0] to start using the openstack-server
> client. It passed the basic devstack test and looking through the logs
> you can see that openstack calls are now completing in fractions of a
> second as opposed to 2.5 to 3, so I think it's working as intended.

I see this as having a couple of advantages:

* no bespoke API interfacing code to maintain
* the wrapper is custom but pretty small
* plugins can benefit by using the same wrapper
* we can turn the wrapper off and fall back to the same calls directly with the client (also good for local interaction)
* in a similar theme, it's still pretty close to "what I'd type on the command line to do this" which is a bit of a devstack theme

So FWIW I'm positive on the direction, thanks!
-i

(some very experienced people have said "we know it's slow" and I guess we should take advice on if this is a temporary work-around, or an actual solution)

From rico.lin.guanyu at gmail.com  Thu Aug 15 02:49:54 2019
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Thu, 15 Aug 2019 10:49:54 +0800
Subject: [all][tc]Naming the U release of OpenStack -- Poll open
In-Reply-To: 
References: 
Message-ID: 

bump and
*https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_19e5119b14f86294&akey=0cde542cb3de1b12 *
:)

On Tue, Aug 13, 2019 at 12:56 PM Rico Lin wrote:
> Hi, all OpenStackers,
>
> It's time to vote for the naming of the U release!!
> (The official poll for naming the U release is now open!!)
>
> First, big thanks to all the people who took their own time to propose names
> on [2] or helped to push/improve the naming process. Thank you.
>
> We'll use a public polling option over per-user private URLs
> for voting. This means everybody should proceed to use the following URL to
> cast their vote:
>
> *https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_19e5119b14f86294&akey=0cde542cb3de1b12 *
>
> We've selected a public poll to ensure that the whole community, not just Gerrit
> change owners, gets a vote. Also, the size of our community has grown such that we
> can overwhelm CIVS if using private URLs. A public poll can mean that users
> behind NAT, proxy servers or firewalls may receive a message saying
> that your vote has already been lodged; if this happens, please try
> another IP.
> Because this is a public poll, results will currently be viewable only by me
> until the poll closes. Once closed, I'll post the URL making the results
> viewable to everybody. This was done to avoid everybody seeing the results while
> the public poll is running.
>
> The poll will officially end on 2019-08-20 23:59:00+00:00 (UTC time)[1],
> and results will be posted shortly after.
>
> [1] https://governance.openstack.org/tc/reference/release-naming.html
> [2] https://wiki.openstack.org/wiki/Release_Naming/U_Proposals
> --
> May The Force of OpenStack Be With You,
>
> *Rico Lin* irc: ricolin
>
--
May The Force of OpenStack Be With You,
*Rico Lin* irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chx769467092 at 163.com  Thu Aug 15 06:09:25 2019
From: chx769467092 at 163.com (=?GBK?B?tN6648/j?=)
Date: Thu, 15 Aug 2019 14:09:25 +0800 (CST)
Subject: [question][placement][rest api][403]
Message-ID: <4cd9f6c3.7db9.16c93e52b74.Coremail.chx769467092@163.com>

This is my problem with the placement REST API (token and endpoint were OK):

{"errors": [{"status": 403, "title": "Forbidden", "detail": "Access was denied to this resource.\n\n Policy does not allow placement:resource_providers:list to be performed. ", "request_id": "req-5b409f22-7741-4948-be6f-ea28c2896a3f" }]}

Regards,
Cuihx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 20190814153351.png
Type: image/png
Size: 16544 bytes
Desc: not available
URL: 
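A note for anyone hitting the same 403: placement:resource_providers:list is admin-only under the default policy, so a token that is otherwise valid but lacks the admin role is rejected exactly like this. Possible checks, as a sketch only (standard client commands are assumed; adapt names to your deployment):

    $ openstack role assignment list --user <user> --project <project>
    # then retry GET /resource_providers with an admin-scoped token

Alternatively, the default can be relaxed in placement's policy file, e.g. a hypothetical override along these lines:

    {"placement:resource_providers:list": "rule:admin_api or role:reader"}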
From gregory.orange at pawsey.org.au  Thu Aug 15 07:35:46 2019
From: gregory.orange at pawsey.org.au (Gregory Orange)
Date: Thu, 15 Aug 2019 15:35:46 +0800
Subject: creating instances, haproxy eats CPU, glance eats RAM
In-Reply-To: 
References: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au>
Message-ID: <97365b12-8e36-bbd7-8a0f-badb818ac706@pawsey.org.au>

Hello Ruslanas and thank you for the response. I didn't see it until now! I have given some responses inline...

On 1/8/19 3:57 pm, Ruslanas Gžibovskis wrote:
> when in newton release were introduced role separation, we divided memory hungry processes into 4 different VM's on 3 physical boxes:
> 1) Networker: all Neutron agent processes (network throughput)
> 2) Systemd: all services started by systemd (Neutron)
> 3) pcs: all services controlled by pcs (Galera + RabbitMQ)
> 4) horizon

We have separated each control plane service (Glance, Neutron, Cinder, etc) onto its own VM. We are considering containers instead of VMs in future.

> Gregory > do you have local storage for swift and cinder background?

Our Cinder and Glance use Ceph as backend. No Swift installed.

> also double check where _base image is located? is it in /var/lib/nova/instances/_base/* ? and flavor disks stored in /var/lib/nova/instances ? (can check on compute by: virsh domiflist instance-00000## )

domiflist shows the VM's interface - how does that help?

Greg.

From gregory.orange at pawsey.org.au  Thu Aug 15 07:55:16 2019
From: gregory.orange at pawsey.org.au (Gregory Orange)
Date: Thu, 15 Aug 2019 15:55:16 +0800
Subject: [glance] worker, thread, taskflow interplay
Message-ID: <20a573e6-82d8-a22f-000e-ed19508a9d54@pawsey.org.au>

We are trying to figure out how these settings interplay in:

[DEFAULT]/workers
[taskflow_executor]/max_workers
[oslo_messaging_zmq]/rpc_thread_pool_size

Just setting workers makes a bit of sense, and based on our testing:
=0 creates one process
=1 creates 1 plus 1 child
=n creates 1 plus n children

Are there green threads (i.e. coroutines, per https://eventlet.net/doc/basic_usage.html) within every process, regardless of the value of workers? Does max_workers affect that?

We have read some Glance doco, hunted about various bug reports[1] and other discussions online to get some insight, but I think we're not clear on it. Can anyone explain this a bit better to me? This is all in Rocky.

Thank you,
Greg.

[1] https://bugs.launchpad.net/glance/+bug/1748916

From ruslanas at lpic.lt  Thu Aug 15 08:00:43 2019
From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=)
Date: Thu, 15 Aug 2019 10:00:43 +0200
Subject: creating instances, haproxy eats CPU, glance eats RAM
In-Reply-To: <97365b12-8e36-bbd7-8a0f-badb818ac706@pawsey.org.au>
References: <2f195ac1-fed4-25a4-9069-7f5b313333a4@pawsey.org.au> <533369cf-0ab7-6c3f-4c4a-0f687bd9cb92@pawsey.org.au> <97365b12-8e36-bbd7-8a0f-badb818ac706@pawsey.org.au>
Message-ID: 

I am bad at containers, just starting to learn them, so I am not sure how they are limited.

So you are using local hard drives. I guess that is one possible source of the slowdown. I ask my developers to use heat to create more than 1 instance/resource.

Try checking CEPH speed. I think CEPH has the option to send a "created" callback after 1 copy is created/written to HDD, and then finish duplicating or tripling the data in the background, which makes CEPH data less safe but MUUUCH faster. I need to google for that, I do not remember the details.

Sorry, yes, my fault: not domiflist but domblklist:

virsh domblklist instance-00000##

Generally, I have the same issue as you have, but on an older version of OpenStack (Mitaka, Mirantis implementation). I have difficulties when an instance (instance1 on compute1) uses a CEPH-based volume and shares it over NFS to another instance (instance2 on compute2). I receive around 13KB/s; if I reshare it on the root drive, I get around 30KB/s, still too low.
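For reference, virsh domblklist makes the disk location obvious at a glance. Illustrative output only (the pool and volume names here are made up): a Ceph/RBD-backed disk shows a pool/volume source rather than a path under /var/lib/nova/instances:

    $ virsh domblklist instance-0000002a
     Target   Source
    ------------------------------------------------------------
     vda      vms/0aae6a7b-2c9e-4a5f-9cba-6a25ee25e7b2_disk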
On Thu, 15 Aug 2019 at 09:35, Gregory Orange wrote:
> Hello Ruslanas and thank you for the response. I didn't see it until now!
> I have given some responses inline...
>
> On 1/8/19 3:57 pm, Ruslanas Gžibovskis wrote:
> > when in newton release were introduced role separation, we divided memory hungry processes into 4 different VM's on 3 physical boxes:
> > 1) Networker: all Neutron agent processes (network throughput)
> > 2) Systemd: all services started by systemd (Neutron)
> > 3) pcs: all services controlled by pcs (Galera + RabbitMQ)
> > 4) horizon
>
> We have separated each control plane service (Glance, Neutron, Cinder, etc) onto its own VM. We are considering containers instead of VMs in future.
>
> > Gregory > do you have local storage for swift and cinder background?
>
> Our Cinder and Glance use Ceph as backend. No Swift installed.
>
> > also double check where _base image is located? is it in /var/lib/nova/instances/_base/* ? and flavor disks stored in /var/lib/nova/instances ? (can check on compute by: virsh domiflist instance-00000## )
>
> domiflist shows the VM's interface - how does that help?
>
> Greg.

--
Ruslanas Gžibovskis
+370 6030 7030
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From taoyunupt at 126.com  Thu Aug 15 08:51:52 2019
From: taoyunupt at 126.com (taoyunupt)
Date: Thu, 15 Aug 2019 16:51:52 +0800 (CST)
Subject: neutron-server won't start in OpenStack OVN
In-Reply-To: 
References: 
Message-ID: 

Hi,
I found a wrong configuration of "overlay_ip_version" in your ml2_conf.ini; you can check it. It should be configured to "6" or "4". But I am not sure it is the reason for your problem.

Thanks,
Yun

At 2019-08-14 19:16:22, "Zufar Dhiyaulhaq" wrote:

Hi everyone,

I try to install OpenStack with OVN enabled. But when trying to start neutron-server, the service is always inactive (exited)

I try to check neutron-server logs and gets this message:

2019-08-14 05:40:32.649 4223 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver
2019-08-14 05:40:32.651 4224 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for MaintenanceWorker with retry
2019-08-14 05:40:32.654 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connecting...
2019-08-14 05:40:32.655 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connected
2019-08-14 05:40:32.657 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting...
2019-08-14 05:40:32.658 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected
2019-08-14 05:40:32.660 4225 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for AllServicesNeutronWorker with retry
2019-08-14 05:40:32.670 4220 INFO neutron.wsgi [-] (4220) wsgi starting up on http://0.0.0.0:9696
2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting...
2019-08-14 05:40:32.692 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected
2019-08-14 05:40:32.693 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting...
2019-08-14 05:40:32.694 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected
2019-08-14 05:40:32.697 4224 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver
2019-08-14 05:40:32.844 4225 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting
2019-08-14 05:40:32.844 4224 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting
2019-08-14 05:40:32.844 4222 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting
2019-08-14 05:40:32.845 4223 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting
2019-08-14 05:40:32.845 4221 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting

neutron-server full logs: https://paste.ubuntu.com/p/GHfS38KFCr/
ovsdb-server port is active: https://paste.ubuntu.com/p/MhgNs8SGdX/
neutron.conf and ml2_conf.ini: https://paste.ubuntu.com/p/4J7hTVf5qz/

Does anyone know why this error happens?

Best Regards,
Zufar Dhiyaulhaq

From zufardhiyaulhaq at gmail.com  Thu Aug 15 09:09:01 2019
From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq)
Date: Thu, 15 Aug 2019 16:09:01 +0700
Subject: neutron-server won't start in OpenStack OVN
In-Reply-To: 
References: 
Message-ID: 

Hi,

Yes, I fixed it yesterday; sorry for not reporting it. The error was not in the configuration but in the crudini script.

Thanks

Best Regards,
Zufar Dhiyaulhaq

On Thu, Aug 15, 2019 at 3:52 PM taoyunupt wrote:
> Hi,
> I found a wrong configuration of "overlay_ip_version" in your ml2_conf.ini;
> you can check it. It should be configured to "6" or "4". But I am not sure it is
> the reason for your problem.
>
> Thanks,
> Yun
>
>
> At 2019-08-14 19:16:22, "Zufar Dhiyaulhaq"
> wrote:
>
> Hi everyone,
>
> I try to install OpenStack with OVN enabled. But when trying to start
> neutron-server, the service is always inactive (exited)
>
> I try to check neutron-server logs and gets this message:
>
> 2019-08-14 05:40:32.649 4223 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver
> 2019-08-14 05:40:32.651 4224 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbSbOvnIdl for MaintenanceWorker with retry
> 2019-08-14 05:40:32.654 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connecting...
> 2019-08-14 05:40:32.655 4225 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6641: connected
> 2019-08-14 05:40:32.657 4220 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connecting...
> 2019-08-14 05:40:32.694 4224 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:10.101.101.10:6642: connected > 2019-08-14 05:40:32.697 4224 INFO networking_ovn.ml2.qos_driver [-] Starting OVNQosDriver > 2019-08-14 05:40:32.844 4225 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > 2019-08-14 05:40:32.844 4224 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > 2019-08-14 05:40:32.844 4222 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > 2019-08-14 05:40:32.845 4223 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > 2019-08-14 05:40:32.845 4221 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting > > > neutron-server full logs: https://paste.ubuntu.com/p/GHfS38KFCr/ > ovsdb-server port is active: https://paste.ubuntu.com/p/MhgNs8SGdX/ > neutron.conf and ml2_conf.ini: https://paste.ubuntu.com/p/4J7hTVf5qz/ > > Does anyone know why this error happens? > > Best Regards, > Zufar Dhiyaulhaq > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Aug 15 09:44:41 2019 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 15 Aug 2019 10:44:41 +0100 Subject: [ansible][openstack-ansible][tripleo][kolla-ansible] Ansible SIG In-Reply-To: References: Message-ID: On Wed, 14 Aug 2019 at 15:51, Mohammed Naser wrote: > Hi everyone, > > One of the things that came up recently was collaborating more with > other deployment tools and this has brought up things like working > together on our Ansible roles as more and more deployments tools start > to use it. However, as we start to practice this, we realize that a > project team ownership starts not making much sense (due to the fact > that a project can only be under one team in governance). > > It starts being confusing when a role is built by OSA, and consumed by > TripleO, so the PTL is from another team and it all starts getting > weird and odd, so we discussed the creation of an Ansible SIG for > those who are interested in maintaining code across our community > together that would be consumed together. > > We already have some deliverables that can live underneath it which is > pretty awesome too, so I'm emailing this to ask for interested parties > to speak up if they're interested and to mention that we're more than > happy to have other co-chairs that are intersted. > Nice idea. We (kolla-ansible) are not involved in any role sharing right now, but I don't want to rule it out and will be interested to see where this goes. I added my name on the etherpad ( https://etherpad.openstack.org/p/ansible-sig). > I've submitted the initial patch here: https://review.opendev.org/676428 > > Thank you, > Mohammed > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Aug 15 11:48:46 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 15 Aug 2019 07:48:46 -0400 Subject: [ironic][ptl] Taking a break - i.e. disconnecting for a few weeks in September Message-ID: Greetings everyone, For my own mental health and with various things that have occurred in my life these past six months, I will be disconnecting for two weeks during the month of September, starting on the 6th. To put icing on the cake, I then have business related travel the two weeks following my return that will inhibit regular IRC access during peak contributor days/hours. 
In my absence, Dmitry Tantsur has agreed to take care of my PTL responsibilities. This will include the time through requirements freeze and quite possibly the creation of the stable/train branch if necessary.

That being said, fear not! I still intend to run for Ironic's PTL for the next cycle!

-Julia

From sfinucan at redhat.com  Thu Aug 15 12:21:42 2019
From: sfinucan at redhat.com (Stephen Finucane)
Date: Thu, 15 Aug 2019 13:21:42 +0100
Subject: More upgrade issues with PCPUs - input wanted
Message-ID: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com>

tl;dr: Is breaking booting of pinned instances on Stein compute nodes in a Train deployment an acceptable thing to do, and if not, how do we best handle the VCPU->PCPU migration in Train?

I've been working through the cpu-resources spec [1] and have run into a tricky issue I'd like some input on. In short, this spec means that pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start consuming a new resource type, PCPU, instead of VCPU. Many things need to change to make this happen but the key changes are:

1. The scheduler needs to start modifying requests for pinned instances to request PCPU resources instead of VCPU resources
2. The libvirt driver needs to start reporting PCPU resources
3. The libvirt driver needs to do a reshape, moving all existing allocations of VCPUs to PCPUs, if the instance holding that allocation is pinned

The first two of these steps present an issue for which we have a solution, but the solutions we've chosen are now resulting in this new issue.

* For (1), the translation of VCPU to PCPU in the scheduler means compute nodes must now report PCPU in order for a pinned instance to land on that host. Since controllers are upgraded before compute nodes and all compute nodes aren't necessarily upgraded in one go (particularly for edge or other large or multi-cell deployments), this can mean there will be a period of time where there are very few or no hosts available on which to schedule pinned instances.

* For (2), we're hampered by the fact that there is no clear way to determine if a host is used for pinned instances or not. Because of this, we can't determine if a host should be reporting PCPU or VCPU inventory.

The solution we have for the issues with (1) is to add a workaround option that would disable this translation, allowing operators time to upgrade all their compute nodes to report PCPU resources before anything starts using them. For (2), we've decided to temporarily (i.e. for one release or until configuration is updated) report both, in the expectation that everyone using pinned instances has followed the long-standing advice to separate hosts intended for pinned instances from those intended for unpinned instances using host aggregates (e.g. even if we started reporting PCPUs on a host, nothing would consume that due to 'pinned=False' aggregate metadata or similar).
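(For anyone unfamiliar with that advice, the aggregate-based separation is typically set up along these lines; illustrative commands only, and the property name is a convention rather than anything nova enforces:

    $ openstack aggregate create --property pinned=true pinned-hosts
    $ openstack aggregate add host pinned-hosts compute-0

with pinned flavors carrying a matching aggregate_instance_extra_specs:pinned extra spec.)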
These actually benefit each other, since if instances are still consuming VCPUs then the hosts need to continue reporting VCPUs. However, both interfere with our ability to do the reshape.

Normally, a reshape is a one-time thing. The way we'd planned to determine if a reshape was necessary was to check if PCPU inventory was registered against the host and, if not, whether there were any pinned instances on the host. If PCPU inventory was not available and there were pinned instances, we would update the allocations for these instances so that they would be consuming PCPUs instead of VCPUs and then update the inventory. This is problematic though, because our solution for the issue with (1) means pinned instances can continue to request VCPU resources, which in turn means we could end up with some pinned instances on a host consuming PCPU and others consuming VCPU. That obviously can't happen, so we need to change tack slightly. The two obvious solutions would be to either (a) remove the workaround option so the scheduler would immediately start requesting PCPUs and just advise operators to upgrade their hosts for pinned instances asap or (b) add a different option, defaulting to True, that would apply to both the scheduler and compute nodes and prevent not only the translation of flavors in the scheduler but also the reporting of PCPUs and reshaping of allocations until disabled.

I'm currently leaning towards (a) because it's a *lot* simpler, far more robust (IMO) and lets us finish this effort in a single cycle, but I imagine this could make upgrades very painful for operators if they can't fast-track their compute node upgrades. (b) is more complex and would have some constraints, chief among them being that the option would have to be disabled at some point post-release and would have to be disabled on the scheduler first (to prevent the mishmash of VCPU and PCPU resource allocations described above). It also means this becomes a three-cycle effort at minimum, since this new option will default to True in Train, before defaulting to False and being deprecated in U and finally being removed in V. As such, I'd like some input, particularly from operators using pinned instances in larger deployments. What are your thoughts, and are there any potential solutions that I'm missing here?

Cheers,
Stephen

[1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html

From mnaser at vexxhost.com  Thu Aug 15 12:41:58 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Thu, 15 Aug 2019 08:41:58 -0400
Subject: [ansible][openstack-ansible][tripleo][kolla-ansible] Ansible SIG
In-Reply-To: 
References: 
Message-ID: 

On Thu, Aug 15, 2019 at 5:44 AM Mark Goddard wrote:
>
> On Wed, 14 Aug 2019 at 15:51, Mohammed Naser wrote:
>>
>> Hi everyone,
>>
>> One of the things that came up recently was collaborating more with
>> other deployment tools and this has brought up things like working
>> together on our Ansible roles as more and more deployments tools start
>> to use it. However, as we start to practice this, we realize that a
>> project team ownership starts not making much sense (due to the fact
>> that a project can only be under one team in governance).
>>
>> It starts being confusing when a role is built by OSA, and consumed by
>> TripleO, so the PTL is from another team and it all starts getting
>> weird and odd, so we discussed the creation of an Ansible SIG for
>> those who are interested in maintaining code across our community
>> together that would be consumed together.
>>
>> We already have some deliverables that can live underneath it which is
>> pretty awesome too, so I'm emailing this to ask for interested parties
>> to speak up if they're interested and to mention that we're more than
>> happy to have other co-chairs that are intersted.
>
> Nice idea.
> We (kolla-ansible) are not involved in any role sharing right now, but I don't want to rule it out and will be interested to see where this goes. I added my name on the etherpad (https://etherpad.openstack.org/p/ansible-sig).

Great. The governance patch has merged. I've set up an IRC channel too, so for those that are interested, please join #openstack-ansible-sig. I'm going to start organizing efforts around a meeting and bringing in the repos that will live under the SIG.

>>
>> I've submitted the initial patch here: https://review.opendev.org/676428
>>
>> Thank you,
>> Mohammed
>>

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From a.settle at outlook.com  Thu Aug 15 13:12:15 2019
From: a.settle at outlook.com (Alexandra Settle)
Date: Thu, 15 Aug 2019 13:12:15 +0000
Subject: [all] [tc] General questions on PDF community goal
In-Reply-To: 
References: 
Message-ID: 

Thanks for responding to these questions, Doug. Appreciate you being so forward and working hard on this, Akihiro.

Due to vacation and personal circumstances, I have been more-or-less offline for the last 2 months. I've been speaking with Stephen today on some action items we need to get through, and on coordinating this goal further. At this point in time, Stephen is working on it by himself. He's been working on adding Python 3 support to rst2pdf, since he thinks that should provide a pure Python way to generate PDFs. However, he hasn't yet gone so far as to check the output against Python 2.

I'm going to send a separate email to see if we can get some volunteers to help work on this. I will also include the current status in that email.

Apologies again,

Alex

On 27/07/2019 07:26, Doug Hellmann wrote:
> Akihiro Motoki writes:
>
>> Hi,
>>
>> I have a couple of general questions on PDF community goal.
>>
>> What is the criteria of completing the PDF community goal?
>> Is it okay if we publish a PDF deliverable anyway?
>>
>> During working on it, I hit the following questions.
>> We already reached Train-2 milestone, so I think it is the time to clarify
>> the detail criteria of completing the community goal.
>>
>> - Where should the generated PDF file to be located and What is the
>> recommended PDF file name?
>> /.pdf? index.pdf? Any path and any
>> file is okay?
> The job will run the sphinx instructions to build PDFs, and then copy
> them from build/latex (or build/pdf, I can't remember) into the
> build/html directory so they are published as part of the project's
> series-specific documentation set.
>
> Project teams should not add anything to their HTML documentation build
> to move, rename, etc. the PDF files. That will all be done by the job
> changes Stephen has been developing.
>
> Project teams should ensure there is exactly 1 PDF file being built and
> that it has a meaningful name. That could be ${repository_name}.pdf as
> you suggest, but it could be something else, for now.
>
>> - Do we create a link to the PDF file from index.html (per-project top page)?
>> If needed, perhaps this would be a thing covered by openstackdocstheme.
>> Otherwise, how can normal consumers know PDF documents?
> Not yet. We should be able to do that automatically through the theme by
> looking at the latex build parameters. If we do it in the theme, rather
> than having projects add link content to their builds, we can ensure
> that all projects have the link in a consistent location in their docs
> pages with 1 patch.
> If you want to work on that, it would be good, but
> it isn't part of the goal.
>
>> - Which repositories need to provide PDF version of documents?
>> My understanding (amotoki) is that repositories with
>> 'publish-openstack-docs-pti' should publish PDF doc. right?
> Yes, all repositories that follow the documentation PTI.
>
>> - Do we need PDF version of release notes?
>> - Do we need PDF version of API reference?
> The goal is focused on publishing the content of the doc/source
> directory in each repository. There is no need to deal with release
> notes or the API reference. We may work on those later, but not for this
> goal.
>
>> I see no coordination efforts recently and am afraid that individual
>> projects cannot decide whether patches to their repositories are okay
>> to merge.
> The goal champions have been on vacation. Have a bit of patience,
> please. :-)
>
> In the mean time, if there are questions about specific patches, please
> raise those here on the mailing list.
>
> The most important thing to accomplish is to ensure that one PDF builds
> *at all* from the content in doc/source in each repo. The goal was
> purposefully scoped to this one task to allow teams to focus on getting
> successful PDF builds from their content, because we weren't sure what
> issues we might encounter. We can come back around later and improve the
> experience of consuming the PDFs but there is no point in making a bunch
> of decisions about how to do that until we know we have the files to
> publish.
>

From gmann at ghanshyammann.com  Thu Aug 15 13:18:52 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 15 Aug 2019 22:18:52 +0900
Subject: [nova]review guide for the policy default refresh spec
Message-ID: <16c956e577d.e430310375433.1480190884000484078@ghanshyammann.com>

Hello Everyone,

As many of you might know, in Train we are making Nova policy changes to adopt keystone's new defaults and scope types [1]. There are multiple changes required per policy, as mentioned in the spec. I am writing this review guide to explain the patch sequence and, at the end, what each policy will look like.

I have prepared the first set of patches. I would like to get feedback on those so that we can modify the other policies along the same lines. My plan is to start the other policy work after we merge the first set of policy changes.

Patch sequence: Example: os-services API policy:
-------------------------------------------------------------

1. Cover/Improve the test coverage for existing policies:
This will be the first patch. We do not have good test coverage of the policy; the current tests are not at all useful and do not perform the real checks. The idea is to add actual test coverage for each policy as the first patch. New tests try to access the API with all possible contexts and check for positive and negative cases.
- https://review.opendev.org/#/c/669181/

2. Introduce scope_types:
This will add the scope_type for the policy. It will be either 'system', 'project' or 'system and project'. In the same patch, along with the existing tests working as-is, new scope-type tests will be added which run with [oslo_policy] enforce_scope=True so that we can capture the real scope checks.
- https://review.opendev.org/#/c/645427/

3. Add new default roles:
This will add new defaults, which can be SYSTEM_ADMIN, SYSTEM_READER, PROJECT_MEMBER_OR_SYSTEM_ADMIN, PROJECT_READER_OR_SYSTEM_READER etc., depending on the policy. Test coverage of new defaults, as well as deprecated defaults, is covered in the same patch.
This patch will add granularity to the policy if needed. Without policy granularity, we cannot add new defaults per rule.
- https://review.opendev.org/#/c/648480/ (I need to add more tests for deprecated rules)

4. Pass actual Targets in policy:
This is to pass the actual targets in context.can(). The main goal is to remove the default targets, which are nothing but the context's user_id and project_id. It will be {} if no actual target data is needed in the check_str.
- https://review.opendev.org/#/c/676688/

Patch sequence: Example: Admin Action API policy:
1. https://review.opendev.org/#/c/657698/
2. https://review.opendev.org/#/c/657823/
3. https://review.opendev.org/#/c/676682/
4. https://review.opendev.org/#/c/663095/

There are other patches I have posted in between for common changes, fixes, framework, etc.
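As an illustration of steps 2 and 3, a converted rule ends up looking roughly like this (a sketch only, based on oslo.policy's DeprecatedRule support; the gerrit patches above are the authoritative version):

    from oslo_policy import policy

    deprecated_list_services = policy.DeprecatedRule(
        name='os_compute_api:os-services:list',
        check_str='rule:admin_api',
    )

    service_policies = [
        policy.DocumentedRuleDefault(
            name='os_compute_api:os-services:list',
            check_str='role:reader and system_scope:all',
            scope_types=['system'],
            description='List all running Compute services.',
            operations=[{'path': '/os-services', 'method': 'GET'}],
            # old default kept working, with a deprecation warning
            deprecated_rule=deprecated_list_services,
            deprecated_reason='New default roles and scope support.',
            deprecated_since='20.0.0',
        ),
    ]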
[1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/policy-default-refresh.html

-gmann

From smooney at redhat.com  Thu Aug 15 13:31:11 2019
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 15 Aug 2019 14:31:11 +0100
Subject: More upgrade issues with PCPUs - input wanted
In-Reply-To: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com>
References: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com>
Message-ID: 

On Thu, 2019-08-15 at 13:21 +0100, Stephen Finucane wrote:
> tl;dr: Is breaking booting of pinned instances on Stein compute nodes
> in a Train deployment an acceptable thing to do, and if not, how do we
> best handle the VCPU->PCPU migration in Train?
>
> I've been working through the cpu-resources spec [1] and have run into
> a tricky issue I'd like some input on. In short, this spec means that
> pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start
> consuming a new resource type, PCPU, instead of VCPU. Many things need
> to change to make this happen but the key changes are:
>
> 1. The scheduler needs to start modifying requests for pinned instances
>    to request PCPU resources instead of VCPU resources
> 2. The libvirt driver needs to start reporting PCPU resources
> 3. The libvirt driver needs to do a reshape, moving all existing
>    allocations of VCPUs to PCPUs, if the instance holding that
>    allocation is pinned
>
> The first two of these steps present an issue for which we have a
> solution, but the solutions we've chosen are now resulting in this new
> issue.
>
> * For (1), the translation of VCPU to PCPU in the scheduler means
>   compute nodes must now report PCPU in order for a pinned instance to
>   land on that host. Since controllers are upgraded before compute
>   nodes and all compute nodes aren't necessarily upgraded in one go
>   (particularly for edge or other large or multi-cell deployments),
>   this can mean there will be a period of time where there are very
>   few or no hosts available on which to schedule pinned instances.
>
> * For (2), we're hampered by the fact that there is no clear way to
>   determine if a host is used for pinned instances or not. Because of
>   this, we can't determine if a host should be reporting PCPU or VCPU
>   inventory.
>
> The solution we have for the issues with (1) is to add a workaround
> option that would disable this translation, allowing operators time to
> upgrade all their compute nodes to report PCPU resources before
> anything starts using them. For (2), we've decided to temporarily (i.e.
> for one release or until configuration is updated) report both, in the
> expectation that everyone using pinned instances has followed the
> long-standing advice to separate hosts intended for pinned instances
> from those intended for unpinned instances using host aggregates (e.g.
> even if we started reporting PCPUs on a host, nothing would consume
> that due to 'pinned=False' aggregate metadata or similar). These
> actually benefit each other, since if instances are still consuming
> VCPUs then the hosts need to continue reporting VCPUs. However, both
> interfere with our ability to do the reshape.
>
> Normally, a reshape is a one-time thing. The way we'd planned to
> determine if a reshape was necessary was to check if PCPU inventory was
> registered against the host and, if not, whether there were any pinned
> instances on the host. If PCPU inventory was not available and there
> were pinned instances, we would update the allocations for these
> instances so that they would be consuming PCPUs instead of VCPUs and
> then update the inventory. This is problematic though, because our
> solution for the issue with (1) means pinned instances can continue to
> request VCPU resources, which in turn means we could end up with some
> pinned instances on a host consuming PCPU and others consuming VCPU.
> That obviously can't happen, so we need to change tack slightly. The
> two obvious solutions would be to either (a) remove the workaround
> option so the scheduler would immediately start requesting PCPUs and
> just advise operators to upgrade their hosts for pinned instances asap
> or (b) add a different option, defaulting to True, that would apply to
> both the scheduler and compute nodes and prevent not only the
> translation of flavors in the scheduler but also the reporting of
> PCPUs and reshaping of allocations until disabled.
>
> I'm currently leaning towards (a) because it's a *lot* simpler, far
> more robust (IMO) and lets us finish this effort in a single cycle, but
> I imagine this could make upgrades very painful for operators if they
> can't fast-track their compute node upgrades. (b) is more complex and
> would have some constraints, chief among them being that the option
> would have to be disabled at some point post-release and would have to
> be disabled on the scheduler first (to prevent the mishmash of VCPU and
> PCPU resource allocations described above). It also means this becomes
> a three-cycle effort at minimum, since this new option will default to
> True in Train, before defaulting to False and being deprecated in U and
> finally being removed in V. As such, I'd like some input, particularly
> from operators using pinned instances in larger deployments. What are
> your thoughts, and are there any potential solutions that I'm missing
> here?

If we go with (b) I would move the config option out of the workarounds section to the [DEFAULT] section, call it pcpus_in_placement, and have it default to False in Train, i.e. we don't enable the feature in Train by default. In the installer tools we would update them to set the config value to True so new installs use this feature. In U we would change the default to True and deprecate it as you said, and finally remove it in V. We should add a nova status check too for the U upgrade so that operators can define the correct config values before the upgrade.

If we go with (a) then we would want to add that check for Train, I think. Operators would need to add the new config options to all hosts before they upgrade, for example something like the sketch below.
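A minimal sketch of the compute-node side (illustrative values only; cpu_dedicated_set and cpu_shared_set are the option names from the spec, while pcpus_in_placement is just the name suggested above for option (b)):

    [DEFAULT]
    # hypothetical option (b) toggle
    pcpus_in_placement = false

    [compute]
    # host CPUs reserved for pinned guests (reported as PCPU)
    cpu_dedicated_set = 2-15
    # host CPUs shared by unpinned guests (reported as VCPU)
    cpu_shared_set = 0-1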
This could be problematic in some cases, as the meaning of cpu_shared_set changes between Stein and Train: in Stein it is used for emulator threads only, while in Train it will be used for all floating VMs' vCPUs. (a) would also require you to upgrade all hosts in one go, more or less. For fast-forward upgrades this is required anyway, since we can't have the control plane managing agents that are older than n-1, but not all tools support FFU or recommend it.

>
> Cheers,
> Stephen
>
> [1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html
>

From a.settle at outlook.com  Thu Aug 15 13:37:38 2019
From: a.settle at outlook.com (Alexandra Settle)
Date: Thu, 15 Aug 2019 13:37:38 +0000
Subject: [all] [tc] PDF Community Goal Update
Message-ID: 

Hi all,

Apologies for the radio silence regarding the PDF Community Goal. Due to vacation and personal circumstances, I've been "offline" for the better part of the last 2 months.

Update
* Stephen Finucane has been working on adding Python 3 support to rst2pdf
* Common issues are being tracked within this etherpad [1]
* Overall status: https://review.opendev.org/#/q/topic:build-pdf-docs

Help needed
* We would appreciate anyone who is comfortable with Python volunteering to help test the rst2pdf output with Python 2, working within a larger project like Neutron to see how much it can do, as larger projects have specific styling requirements.
* NOTE: The original discussion included using the LaTeX builder instead of rst2pdf. However, the LaTeX builder is not playing ball as nicely as we'd like, so we're trying to figure out if rst2pdf would be easier. The LaTeX builder is still the primary plan, since we still don't know which we're going with and the overlap between the two is significant.

Questions?

Thank you,

Alex

[1] https://etherpad.openstack.org/p/pdf-goal-train-common-problems

From ildiko.vancsa at gmail.com  Thu Aug 15 13:40:56 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Thu, 15 Aug 2019 06:40:56 -0700
Subject: [keystone][edge] Edge Hacking Days - August 16
Message-ID: <017B8545-8153-42E0-99B1-CF3775DBD4CA@gmail.com>

Hi,

This is a friendly reminder that we are having the second edge hacking days in August this Friday (August 16). The dial-in information is the same; you can find the details here: https://etherpad.openstack.org/p/osf-edge-hacking-days

If you’re interested in joining, please __add your name and the time period (with time zone) when you will be available__ on these dates. You can also add topics that you would be interested in working on. We will keep on working on two items:

* Keystone to Keystone federation testing in DevStack
* Building the centralized edge reference architecture on Packet HW using TripleO

Please let me know if you have any questions.

See you on Friday! :)

Thanks,
Ildikó

From gmann at ghanshyammann.com  Thu Aug 15 13:45:46 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 15 Aug 2019 22:45:46 +0900
Subject: [nova] API updates week 19-33
Message-ID: <16c9586f90d.126bdddaf76391.2502147131477263201@ghanshyammann.com>

Hello Everyone,

Please find the Nova API updates of this week.

API Related BP :
============
COMPLETED:
1. Support adding description while locking an instance:
- https://blueprints.launchpad.net/nova/+spec/add-locked-reason
2. Add host and hypervisor_hostname flag to create server
- https://blueprints.launchpad.net/nova/+spec/add-host-and-hypervisor-hostname-flag-to-create-server
Code Ready for Review:
------------------------------
1. Specifying az when restore shelved server
- Topic: https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged)
- Weekly Progress: This has been rebased, as the 2.75 microversion is already merged. It will need another rebase, as 2.76 is reserved for 'Add 'power-update' external event'.

2. Nova API cleanup
- Topic: https://review.opendev.org/#/q/topic:bp/api-consistency-cleanup+(status:open+OR+status:merged)
- Weekly Progress: Nova patch is merged. python-novaclient patch is pending.

3. Show Server numa-topology
- Topic: https://review.opendev.org/#/q/topic:bp/show-server-numa-topology+(status:open+OR+status:merged)
- Weekly Progress: Alex is +1 on the nova change, but it currently uses microversion 2.76. This might need to be put on hold?

4. Nova API policy improvement
- Topic: https://review.openstack.org/#/q/topic:bp/policy-default-refresh+(status:open+OR+status:merged)
- Weekly Progress: The first set of os-services and Admin Action API policy series is ready to review. I have sent the review guide over the ML:
http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008504.html

5. Add 'power-update' external event:
- Topic: https://review.opendev.org/#/q/topic:bp/nova-support-instance-power-update+(status:open+OR+status:merged)
- Weekly Progress: This is reserved for 2.76 and we should merge it soon, as many other microversion changes are waiting to grab 2.76. I do not have updates on the current state; maybe matt or surya can add more if needed.

6. Add User-id field in migrations table
- Topic: https://review.opendev.org/#/q/topic:bp/add-user-id-field-to-the-migrations-table+(status:open+OR+status:merged)
- Weekly Progress: Changes are up for review, but with microversion 2.76. We can rebase the microversion number later, so that is not blocking review. I will review it next week.

7. Support delete_on_termination in volume attach api
- Spec: https://review.opendev.org/#/q/topic:bp/support-delete-on-termination-in-server-attach-volume+(status:open+OR+status:merged)
- Weekly Progress: Spec is merged and code is up for review. This is another one from Brin. Ready for review; rebasing onto an available microversion number can be done later.

Specs are merged and code in-progress:
------------------------------------------------
1. Detach and attach boot volumes:
- Topic: https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged)
- Weekly Progress: No progress. Patches are in merge conflict.

Spec Ready for Review:
-----------------------------
1. Support for changing deleted_on_termination after boot
- Spec: https://review.openstack.org/#/c/580336/
- Weekly Progress: This has been added to the backlog.

Previously approved Specs that need to be re-proposed for Train:
---------------------------------------------------------------------------
1. Servers Ips non-unique network names :
- https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
- https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
- I remember I planned to re-propose this but could not find the time. If anyone would like to help with this, please re-propose it; otherwise I will start it in the U cycle.
Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - This also needs a volunteer - http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007411.html

Others:
1. Add API ref guideline for body text - 2 api-refs are left to fix.

Bugs:
====
No progress to report this week.

NOTE- There might be some bugs which are not tagged as 'api' or 'api-ref'; those are not in the above list. Tag such bugs so that we can keep an eye on them.

-gmann

From ed at leafe.com Thu Aug 15 13:54:37 2019 From: ed at leafe.com (Ed Leafe) Date: Thu, 15 Aug 2019 08:54:37 -0500 Subject: [User-committee] [uc] Less than 4 days left to nominate for the UC! In-Reply-To: References: Message-ID:

On Aug 12, 2019, at 4:44 PM, Ed Leafe wrote:
> A week has gone by since nominations opened, and we have yet to receive a single nomination! The nomination period will close in less than a day. So far we have 1 candidate, but there are two positions up for election. So if you’ve been hesitating, don’t wait any longer! The info for how to nominate from my previous email is below:
> Now I’m sure everyone’s waiting until the last minute in order to make a dramatic moment, but don’t put it off for *too* long! If you missed the initial announcement [0], here’s the information you need:
>
> Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election).
>
> Self-nomination is common; no third party nomination is required. They do so by sending an email to the user-committee at lists.openstack.org mailing list, with the subject: “UC candidacy” by August 16, 05:59 UTC. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate.
>
>
> -- Ed Leafe
>
> [0] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html

-- Ed Leafe

From ed at leafe.com Thu Aug 15 13:57:20 2019 From: ed at leafe.com (Ed Leafe) Date: Thu, 15 Aug 2019 08:57:20 -0500 Subject: [User-committee] [uc] Less than *1 DAY* left to nominate for the UC! In-Reply-To: References: Message-ID: <6AF63EFF-2845-494D-8D01-EB9902F604E6@leafe.com>

(Re-sending with a more accurate subject line)

On Aug 12, 2019, at 4:44 PM, Ed Leafe wrote:
> A week has gone by since nominations opened, and we have yet to receive a single nomination! The nomination period will close in less than a day. So far we have 1 candidate, but there are two positions up for election. So if you’ve been hesitating, don’t wait any longer! The info for how to nominate from my previous email is below:
> Now I’m sure everyone’s waiting until the last minute in order to make a dramatic moment, but don’t put it off for *too* long! If you missed the initial announcement [0], here’s the information you need:
>
> Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the three sitting UC members elected in the previous election).
>
> Self-nomination is common; no third party nomination is required. They do so by sending an email to the user-committee at lists.openstack.org mailing list, with the subject: “UC candidacy” by August 16, 05:59 UTC. The email can include a description of the candidate platform.
The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. > > > -- Ed Leafe > > [0] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html -- Ed Leafe _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee From pierre at stackhpc.com Thu Aug 15 14:32:55 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 15 Aug 2019 16:32:55 +0200 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> Message-ID: Some time after I posted to the list, I started to receive notifications. If someone fixed it, thanks a lot. As for the 10 stories limit, it appears to be specific to a project view, as I get the configured page size for global "Projects" and "Stories" lists. On Wed, 14 Aug 2019 at 14:48, Pierre Riteau wrote: > > Hello, > > I am reviving this thread as I have never received any email > notifications from starred projects in Storyboard, despite enabling > them multiple times. Although the change appears to be saved > correctly, if I log out and log back in, my preferences are reset to > defaults (not only for email settings, but also for page size). I also > noticed that increasing "Page size" doesn't have any effect within the > same session, I always see 10 results per page. > > Is there a known issue with persisting preferences in Storyboard? > > Thanks, > Pierre > > On Sun, 19 May 2019 at 06:17, Akihiro Motoki wrote: > > > > Thanks for the information. > > > > I re-enabled email notification and then started to receive notifications. I am not sure why this solved the problem but it now works for me. > > > > > > 2019年5月15日(水) 22:43 : > >> > >> On 2019-05-15 13:58, Akihiro Motoki wrote: > >> > Hi, > >> > > >> > Is there a way to get email notification on stories/tasks of > >> > subscribed projects in storyboard? > >> > >> Yes, go to your preferences > >> (https://storyboard.openstack.org/#!/profile/preferences) > >> by clicking on your name in the top right, then Preferences. > >> > >> Scroll to the bottom and check the "Enable notification emails" > >> checkbox, then > >> click "Save". There's a UI bug where sometimes the displayed preferences > >> will > >> look like the save button didn't work, but rest assured that it did > >> unless you > >> get an error message. > >> > >> Once you've done this the email associated with your OpenID will receive > >> notification emails for things you're subscribed to (which includes > >> changes on > >> stories/tasks related to projects you're subscribed to). > >> > >> Thanks, > >> > >> Adam (SotK) > >> From pierre at stackhpc.com Thu Aug 15 14:35:53 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 15 Aug 2019 16:35:53 +0200 Subject: [blazar] IRC meeting today Message-ID: Hello, Today we have our biweekly Blazar IRC meeting at 16:00 UTC on #openstack-meeting-alt: https://wiki.openstack.org/wiki/Meetings/Blazar#Agenda_for_15_Aug_2019_.28Americas.29 We can update on the status of upstream contributions from our user community. Everyone is welcome to join and bring up any other topic. 
Cheers, Pierre From colleen at gazlene.net Thu Aug 15 14:38:31 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 15 Aug 2019 07:38:31 -0700 Subject: =?UTF-8?Q?[keystone]_Feature_proposal_freeze_exception_for_refreshable_g?= =?UTF-8?Q?roup_membership?= Message-ID: <104eeb9c-3626-41d8-96a9-ad34f05f94e2@www.fastmail.com> Work is in progress to implement refreshable group membership in keystone[1]. In order to allow for some breathing room for thorough discussion on the implementation details, we're proposing a 1-week extension to our scheduled feature proposal freeze deadline (this week)[2]. Please let me or the team know if you have any concerns about this. Colleen [1] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/expiring-group-memberships.html [2] https://releases.openstack.org/train/schedule.html From kristi at nikolla.me Thu Aug 15 15:12:56 2019 From: kristi at nikolla.me (Kristi Nikolla) Date: Thu, 15 Aug 2019 11:12:56 -0400 Subject: [keystone] Feature proposal freeze exception for refreshable group membership In-Reply-To: <104eeb9c-3626-41d8-96a9-ad34f05f94e2@www.fastmail.com> References: <104eeb9c-3626-41d8-96a9-ad34f05f94e2@www.fastmail.com> Message-ID: Thanks Colleen! :) On Thu, Aug 15, 2019 at 10:46 AM Colleen Murphy wrote: > Work is in progress to implement refreshable group membership in > keystone[1]. In order to allow for some breathing room for thorough > discussion on the implementation details, we're proposing a 1-week > extension to our scheduled feature proposal freeze deadline (this week)[2]. > Please let me or the team know if you have any concerns about this. > > Colleen > > [1] > http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/expiring-group-memberships.html > [2] https://releases.openstack.org/train/schedule.html > > -- Kristi -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Aug 15 15:34:54 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 15 Aug 2019 11:34:54 -0400 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: Is there any agreement from stable-maint-core? Asking because we're unable to modify ironic's stable maintenance group directly. -Julia On Mon, Aug 12, 2019 at 11:00 AM Ruby Loo wrote: > > +2. Good idea! :) > > --ruby > > On Fri, Aug 9, 2019 at 9:06 AM Dmitry Tantsur wrote: >> >> Hi folks! >> >> I'd like to propose adding Riccardo to our stable team. He's been consistently >> checking stable patches [1], and we're clearly understaffed when it comes to >> stable reviews. Thoughts? >> >> Dmitry >> >> [1] >> https://review.opendev.org/#/q/reviewer:%22Riccardo+Pittau+%253Celfosardo%2540gmail.com%253E%22+NOT+branch:master >> From corvus at inaugust.com Thu Aug 15 17:04:53 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 15 Aug 2019 10:04:53 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <87y305onco.fsf@meyer.lemoncheese.net> (James E. Blair's message of "Tue, 06 Aug 2019 17:01:11 -0700") References: <87y305onco.fsf@meyer.lemoncheese.net> Message-ID: <87wofepdfu.fsf@meyer.lemoncheese.net> Hi, We have made the switch to begin storing all of the build logs from Zuul in Swift. Each build's logs will be stored in one of 7 randomly chosen Swift regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those providers! You'll note that the links in Gerrit to the Zuul jobs now go to a page on the Zuul web app. 
A lot of the features previously available on the log server are now available there, plus some new ones. If you're looking for a link to a docs preview build, you'll find that on the build page under the "Artifacts" section now. If you're curious about where your logs ended up, you can see the Swift hostname under the "logs_url" row in the summary table. Please let us know if you have any questions or encounter any issues, either here, or in #openstack-infra on IRC. -Jim From openstack at nemebean.com Thu Aug 15 17:45:43 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 15 Aug 2019 12:45:43 -0500 Subject: [glance] worker, thread, taskflow interplay In-Reply-To: <20a573e6-82d8-a22f-000e-ed19508a9d54@pawsey.org.au> References: <20a573e6-82d8-a22f-000e-ed19508a9d54@pawsey.org.au> Message-ID: <47aab914-f9b5-e8ee-bf1e-37b7b761f9c6@nemebean.com> On 8/15/19 2:55 AM, Gregory Orange wrote: > We are trying to figure out how these two settings interplay in: > > [DEFAULT]/workers > [taskflow_executor]/max_workers This depends on the executor being used. There are both thread and process executors, so it will either affect the number of threads or processes started. > [oslo_messaging_zmq]/rpc_thread_pool_size This is a zeromq-specific config opt that has been removed from recent versions of oslo.messaging (along with the zeromq driver). More generally, the thread_pool options in oslo.messaging will affect how many threads are created to handle messages. I believe that would be per-process, not per-service (but someone correct me if I'm wrong). > > Just setting workers makes a bit of sense, and based on our testing: > =0 creates one process > =1 creates 1 plus 1 child > =n creates 1 plus n children > > Are there green threads (i.e. coroutines, per https://eventlet.net/doc/basic_usage.html) within every process, regardless of the value of workers? Does max_workers affect that? > > We have read some Glance doco, hunted about various bug reports[1] and other discussions online to get some insight, but I think we're not clear on it. Can anyone explain this a bit better to me? This is all in Rocky. > > Thank you, > Greg. > > > [1] https://bugs.launchpad.net/glance/+bug/1748916 > From openstack at fried.cc Thu Aug 15 19:11:15 2019 From: openstack at fried.cc (Eric Fried) Date: Thu, 15 Aug 2019 14:11:15 -0500 Subject: [all][infra] Zuul logs are in swift Message-ID: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> Hi infra. I wanted to blast out a handful of issues I've had since the cutover to swift. I'm using Chrome Version 76.0.3809.100 (Official Build) (64-bit) on bionic (18.04.3). - Hot tip: if you want the dynamic logs (with timestamp links and sev filters), use the twisties. Clicking through gets you to the raw files. The former was not obvious to me. - Some in-app logs aren't working. E.g. when I try to look at controller=>logs=>screen-n-cpu.txt.gz from [1], it redirects me to [2]. The hover [3] has a double slash in it, not sure if that's related, but when I try squashing to one slash, I get an error... sometimes [4]. - When the in-app logs do render, they don't wrap. There's a horizontal scroll bar, but it's at the bottom of an inner frame, so it's off the screen most of the time and therefore not useful. (I don't have horizontal mouse scroll capabilities; maybe I should look into that.) 
- The timestamp links in app anchor the line at the "top" - which (for me, anyway) is "underneath" the header menu (Status Projects Jobs Labels Nodes Builds Buildsets), so I have to scroll up to get the anchored line and a few of its successors. Thanks as always for all your hard work. efried [1] https://zuul.opendev.org/t/openstack/build/402a73a9238643c2b893d53b37a6ce27/logs [2] https://zuul.opendev.org/tenants [3] https://zuul.opendev.org/t/openstack/build/402a73a9238643c2b893d53b37a6ce27/log/controller//logs/screen-n-cpu.txt.gz [4] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2019-08-15.log.html#t2019-08-15T18:49:04 From mriedemos at gmail.com Thu Aug 15 19:22:00 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Aug 2019 14:22:00 -0500 Subject: [nova] The race for 2.76 In-Reply-To: <339e43ff-1183-d1b7-0a25-9ddf4274116d@gmail.com> References: <339e43ff-1183-d1b7-0a25-9ddf4274116d@gmail.com> Message-ID: <8a2dfaa7-5a79-c1dc-49d6-cbdcf4e543ce@gmail.com> On 8/13/2019 7:25 AM, Matt Riedemann wrote: > There are several compute API microversion changes that are conflicting > and will be fighting for 2.76, but I think we're trying to prioritize > this one [1] for the ironic power sync external event handling since (1) > Surya is going to be on vacation soon, (2) there is an ironic change > that depends on it which has had review [2] and (3) the nova change has > had quite a bit of review already. > > As such I think others waiting to rebase from 2.75 to 2.76 should > probably hold off until [1] is approved which should happen today or > tomorrow. > > [1] https://review.opendev.org/#/c/645611/ > [2] https://review.opendev.org/#/c/664842/ These are both approved now so let the rebasing begin! -- Thanks, Matt From mriedemos at gmail.com Thu Aug 15 19:24:05 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Aug 2019 14:24:05 -0500 Subject: [question][placement][rest api][403] In-Reply-To: <4cd9f6c3.7db9.16c93e52b74.Coremail.chx769467092@163.com> References: <4cd9f6c3.7db9.16c93e52b74.Coremail.chx769467092@163.com> Message-ID: <048dbc78-b205-5aea-fe90-73b0e6854946@gmail.com> On 8/15/2019 1:09 AM, 崔恒香 wrote: > This is my problem with placement rest api.(Token and endpoint were OK) > >         {"errors": [{"status": 403, >                          "title": "Forbidden", > "detail": "Access was denied to this resource.\n\n Policy does not allow > placement:resource_providers:list to be performed.  ", >                          "request_id": > "req-5b409f22-7741-4948-be6f-ea28c2896a3f" >                         }]} This doesn't give much information. Does the token have the admin role in it? Has the placement:resource_providers:list policy rule been changed from the default (rule:admin_api)? -- Thanks, Matt From mriedemos at gmail.com Thu Aug 15 19:28:42 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Aug 2019 14:28:42 -0500 Subject: [nova] NUMA live migration is ready for review and testing In-Reply-To: References: Message-ID: On 8/9/2019 4:11 PM, Artom Lifshitz wrote: > tl;dr If you care about NUMA live migration, check out [1] and test in > in your env(s), or review it. > As I've said in IRC a few times, this feature was mentioned (at the last summit/PTG in Denver) as being critical for the next StarlingX release so I'd really hope the StarlingX community can help review and test this. I know there was some help from WindRiver in Stein which uncovered some issues, so it would be good to have that same kind of attention here. 
Feature freeze for Train is less than a month away (Sept 12). > > So if you care about NUMA-aware live migration and have some spare > time and hardware (if you're in the former category I don't think I > need to explain what kind of hardware - though I'll try to answer > questions as best I can), I would greatly appreciate it if you > deployed the patches and tested them. I've done that myself, of > course, but, as at the end of Stein, I'm sure there are edge cases > that I didn't think of (though I'm selfishly hoping that there > aren't). Again the testing here with real hardware is key, and that's something I'd hope Intel/WindRiver/StarlingX folk can help with since I personally don't have a lab sitting around available for NUMA testing. Since we won't have third party CI for this feature, it's going to be important that at least someone is hitting this with a real environment, ideally with mixed Stein and Train compute services as well to make sure it behaves properly during rolling upgrades. -- Thanks, Matt From dtroyer at gmail.com Thu Aug 15 20:23:11 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 15 Aug 2019 15:23:11 -0500 Subject: [nova] NUMA live migration is ready for review and testing In-Reply-To: References: Message-ID: On Thu, Aug 15, 2019 at 2:31 PM Matt Riedemann wrote: > As I've said in IRC a few times, this feature was mentioned (at the last > summit/PTG in Denver) as being critical for the next StarlingX release > so I'd really hope the StarlingX community can help review and test > this. I know there was some help from WindRiver in Stein which uncovered > some issues, so it would be good to have that same kind of attention > here. Feature freeze for Train is less than a month away (Sept 12). StarlingX does have time built in for this testing, intending to be complete before the STX 2.0 release at the end of August. I've suggested that we need to test both Train and our Stein backport but I am not the one with the resources to allocate. > Again the testing here with real hardware is key, and that's something > I'd hope Intel/WindRiver/StarlingX folk can help with since I personally > don't have a lab sitting around available for NUMA testing. Since we > won't have third party CI for this feature, it's going to be important > that at least someone is hitting this with a real environment, ideally > with mixed Stein and Train compute services as well to make sure it > behaves properly during rolling upgrades. Oddly enough, in my $OTHER_DAY_JOB Intel's new Third Party CI is at the top of my list and we are getting dangerously close there in general, but this testing is unfortunately not first in line. 
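For anyone reproducing that mixed Stein/Train window locally, a minimal sketch of the usual RPC version pinning in nova.conf while old and new computes coexist (this assumes nova's standard [upgrade_levels] option and is only an illustration, not something prescribed in this thread; adjust to whatever your deployment tooling manages):

    [upgrade_levels]
    # Pin compute RPC to the older release while Stein and Train
    # computes coexist; set back to "auto" once every node has been
    # upgraded and restarted.
    compute = stein
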
dt -- Dean Troyer dtroyer at gmail.com From openstack at nemebean.com Thu Aug 15 20:57:36 2019 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 15 Aug 2019 15:57:36 -0500 Subject: [qa][openstackclient] Debugging devstack slowness In-Reply-To: <20190815014957.GB5923@fedora19.localdomain> References: <56e637a9-8ef6-4783-98b0-325797b664b9@www.fastmail.com> <7f0a75d6-e6f6-a58f-3efe-a4fbc62f38ec@nemebean.com> <65b74f83-63f4-6b7f-7e19-33b2fc44dfe8@nemebean.com> <90f8e894-e30d-4e31-ec1d-189d80314ced@nemebean.com> <4d92a609-876a-ac97-eb53-3bad97ae55c6@nemebean.com> <8f799e47-ccde-c92f-383e-15c9891c8e10@nemebean.com> <20190815014957.GB5923@fedora19.localdomain> Message-ID: <0754e889-1e5d-94de-1ebb-646e995eff0e@nemebean.com> On 8/14/19 8:49 PM, Ian Wienand wrote: > On Wed, Aug 14, 2019 at 12:07:01PM -0500, Ben Nemec wrote: >> I have a PoC patch up in devstack[0] to start using the openstack-server >> client. It passed the basic devstack test and looking through the logs you >> can see that openstack calls are now completing in fractions of a second as >> opposed to 2.5 to 3, so I think it's working as intended. > > I see this as having a couple of advantages > > * no bespoke API interfacing code to maintain > * the wrapper is custom but pretty small > * plugins can benefit by using the same wrapper > * we can turn the wrapper off and fall back to the same calls directly > with the client (also good for local interaction) > * in a similar theme, it's still pretty close to "what I'd type on the > command line to do this" which is a bit of a devstack theme > > So FWIW I'm positive on the direction, thanks! > > -i > > (some very experienced people have said "we know it's slow" and I > guess we should take advice on if this is a temporary work-around, or > an actual solution) > Okay, I've got https://review.opendev.org/#/c/676016/ passing devstack ci now and I think it's ready for initial review. I don't know if everything I'm doing will fly with the devstack folks, but the reasons why should be covered in the commit message. I'm open to suggestions on alternate ways to accomplish the same things. From corvus at inaugust.com Thu Aug 15 22:23:56 2019 From: corvus at inaugust.com (James E. Blair) Date: Thu, 15 Aug 2019 15:23:56 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> (Eric Fried's message of "Thu, 15 Aug 2019 14:11:15 -0500") References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> Message-ID: <875zmym5j7.fsf@meyer.lemoncheese.net> Eric Fried writes: > - Hot tip: if you want the dynamic logs (with timestamp links and sev > filters), use the twisties. Clicking through gets you to the raw files. > The former was not obvious to me. Good point; we should work on the UI for that. > - Some in-app logs aren't working. E.g. when I try to look at > controller=>logs=>screen-n-cpu.txt.gz from [1], it redirects me to [2]. > The hover [3] has a double slash in it, not sure if that's related, but > when I try squashing to one slash, I get an error... sometimes [4]. > - When the in-app logs do render, they don't wrap. There's a horizontal > scroll bar, but it's at the bottom of an inner frame, so it's off the > screen most of the time and therefore not useful. (I don't have > horizontal mouse scroll capabilities; maybe I should look into that.) 
> - The timestamp links in app anchor the line at the "top" - which (for
> me, anyway) is "underneath" the header menu (Status Projects Jobs Labels
> Nodes Builds Buildsets), so I have to scroll up to get the anchored line
> and a few of its successors.

Thanks!  Fixes for all of these (plus one more: making the log text easier to select for copy/paste) are in-flight.

-Jim

From smooney at redhat.com Fri Aug 16 01:20:03 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 16 Aug 2019 02:20:03 +0100 Subject: [nova] NUMA live migration is ready for review and testing In-Reply-To: References: Message-ID: <45af5fc47a54217f6756bc854d452e7164b30757.camel@redhat.com>

On Thu, 2019-08-15 at 15:23 -0500, Dean Troyer wrote:
> On Thu, Aug 15, 2019 at 2:31 PM Matt Riedemann wrote:
> > As I've said in IRC a few times, this feature was mentioned (at the last
> > summit/PTG in Denver) as being critical for the next StarlingX release
> > so I'd really hope the StarlingX community can help review and test
> > this. I know there was some help from WindRiver in Stein which uncovered
> > some issues, so it would be good to have that same kind of attention
> > here. Feature freeze for Train is less than a month away (Sept 12).
>
> StarlingX does have time built in for this testing, intending to be
> complete before the STX 2.0 release at the end of August.  I've
> suggested that we need to test both Train and our Stein backport but I
> am not the one with the resources to allocate.

I doubt you will be able to safely backport this to Stein, as it contains RPC/object changes which would normally break things on upgrade. E.g. if you backport this to Stein in STX 1.Y.Z, going from 1.0 to 1.Y.Z would require you to treat it like a major upgrade and upgrade all your controllers first, followed by the computes, to ensure you never generate a copy of the updated object before the nodes that receive it are updated. If you don't do that, then services will start exploding. We did a partial backport of NUMA-aware vSwitch internally and had to drop all the object changes and the scheduler change and only backport the virt driver changes, as we could not figure out a safe way to backport OVO changes that would not break deployments if you didn't sequence the update like a major version upgrade, which we can't assume for z releases (x.y.z). But glad to hear that in either case ye do plan to test it in some capacity. I have dual-NUMA hardware I own that I plan to test it on personally, but the more the better.

> > Again the testing here with real hardware is key, and that's something
> > I'd hope Intel/WindRiver/StarlingX folk can help with since I personally
> > don't have a lab sitting around available for NUMA testing. Since we
> > won't have third party CI for this feature, it's going to be important
> > that at least someone is hitting this with a real environment, ideally
> > with mixed Stein and Train compute services as well to make sure it
> > behaves properly during rolling upgrades.
>
> Oddly enough, in my $OTHER_DAY_JOB Intel's new Third Party CI is at
> the top of my list and we are getting dangerously close there in
> general, but this testing is unfortunately not first in line.

Speaking of that, I see Igor rebased https://review.opendev.org/#/c/652197/. I haven't really looked at that since May, and it looks like some file permissions have changed, so it's currently broken. I'm not sure if he/ye planned on taking that over or if he was just interested; either is fine.
My first-party CI solution has kind of stalled, since I just have not had time to work on it (given it's not part of my $OTHER_DAY_JOB), so I'm looking forward to the third-party CI you are working on. If I find time to work on it again I will, but it still didn't have full parity with what the Intel NFV CI was testing, as it was running with the single NUMA node guest we have in the gate; it would still be nice to have even basic first-party tests of pinning/hugepages at some point. Even though I wrote it, I don't like the fact that I was forced to use Fedora with the virt-preview repos enabled to get a new enough qemu/libvirt to do even partial testing without nested virt, so I would still guess the third-party CI will be more reliable, since it can actually use nested virt provided you replace the default Ubuntu kernel with something based on 4.19.

> dt

From zhangbailin at inspur.com Fri Aug 16 02:44:41 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Fri, 16 Aug 2019 02:44:41 +0000 Subject: [lists.openstack.org]Re: [nova] The race for 2.76 Message-ID:

> On 8/13/2019 7:25 AM, Matt Riedemann wrote:
> > There are several compute API microversion changes that are
> > conflicting and will be fighting for 2.76, but I think we're trying to
> > prioritize this one [1] for the ironic power sync external event
> > handling since (1) Surya is going to be on vacation soon, (2) there is
> > an ironic change that depends on it which has had review [2] and (3)
> > the nova change has had quite a bit of review already.
> >
> > As such I think others waiting to rebase from 2.75 to 2.76 should
> > probably hold off until [1] is approved which should happen today or
> > tomorrow.
> >
> > [1] https://review.opendev.org/#/c/645611/
> > [2] https://review.opendev.org/#/c/664842/
> These are both approved now so let the rebasing begin!

"Specifying az when restore shelved server" has been updated, and it is now in the nova runways; please review, thanks.

Links: https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged)

-- Thanks, Matt

From soulxu at gmail.com Fri Aug 16 04:09:01 2019 From: soulxu at gmail.com (Alex Xu) Date: Fri, 16 Aug 2019 12:09:01 +0800 Subject: More upgrade issues with PCPUs - input wanted In-Reply-To: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> References: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> Message-ID:

Stephen Finucane wrote on Thu, Aug 15, 2019 at 8:25 PM:

> tl;dr: Is breaking booting of pinned instances on Stein compute nodes in a Train deployment an acceptable thing to do, and if not, how do we best handle the VCPU->PCPU migration in Train?
>
> I've been working through the cpu-resources spec [1] and have run into a tricky issue I'd like some input on. In short, this spec means that pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start consuming a new resources type, PCPU, instead of VCPU. Many things need to change to make this happen but the key changes are:
>
> 1. The scheduler needs to start modifying requests for pinned instances to request PCPU resources instead of VCPU resources
> 2. The libvirt driver needs to start reporting PCPU resources
> 3. The libvirt driver needs to do a reshape, moving all existing allocations of VCPUs to PCPUs, if the instance holding that allocation is pinned
>
> The first two of these steps presents an issue for which we have a solution, but the solutions we've chosen are now resulting in this new issue.
> * For (1), the translation of VCPU to PCPU in the scheduler means compute nodes must now report PCPU in order for a pinned instance to land on that host. Since controllers are upgraded before compute nodes and all compute nodes aren't necessarily upgraded in one go (particularly for edge or other large or multi-cell deployments), this can mean there will be a period of time where there are very few or no hosts available on which to schedule pinned instances.
>
> * For (2), we're hampered by the fact that there is no clear way to determine if a host is used for pinned instances or not. Because of this, we can't determine if a host should be reporting PCPU or VCPU inventory.
>
> The solution we have for the issues with (1) is to add a workaround option that would disable this translation, allowing operators time to upgrade all their compute nodes to report PCPU resources before anything starts using them. For (2), we've decided to temporarily (i.e. for one release or until configuration is updated) report both, in the expectation that everyone using pinned instances has followed the long-standing advice to separate hosts intended for pinned instances from those intended for unpinned instances using host aggregates (e.g. even if we started reporting PCPUs on a host, nothing would consume that due to 'pinned=False' aggregate metadata or similar). These actually benefit each other, since if instances are still consuming VCPUs then the hosts need to continue reporting VCPUs. However, both interfere with our ability to do the reshape.
>
> Normally, a reshape is a one time thing. The way we'd planned to determine if a reshape was necessary was to check if PCPU inventory was registered against the host and, if not, whether there were any pinned instances on the host. If PCPU inventory was not available and there were pinned instances, we would update the allocations for these instances so that they would be consuming PCPUs instead of VCPUs and then update the inventory. This is problematic though, because our solution for the issue with (1) means pinned instances can continue to request VCPU resources, which in turn means we could end up with some pinned instances on a host consuming PCPU and other consuming VCPU. That obviously can't happen, so we need to change tacks slightly. The two obvious solutions would be to either (a) remove the workaround option so the scheduler would immediately start requesting PCPUs and just advise operators to upgrade their hosts for pinned instances asap or (b) add a different option, defaulting to True, that would apply to both the scheduler and compute nodes and prevent not only translation of flavors in the scheduler but also the reporting PCPUs and reshaping of allocations until disabled.

The steps I'm thinking of are:

1. Upgrade the control plane; disable requesting PCPU, so requests still go to VCPU.
2. Rolling-upgrade the compute nodes; they begin to report both PCPU and VCPU, but requests are still added to VCPU.
3. Enable the PCPU request; new requests now ask for PCPU. At this point some instances are using VCPU and some are using PCPU on the same node, and the sum of VCPU + PCPU will double the available CPU resources; the NUMATopology filter is responsible for stopping over-consumption of the total number of CPUs.
4. Rolling-update the compute nodes' configuration to use cpu_dedicated_set, which triggers the reshape of the existing VCPU consumption to PCPU consumption. New requests have been going to PCPU since step 3, so there are no more VCPU requests at this point; roll through the nodes to get rid of the existing VCPU consumption.
5. Done.

> I'm currently leaning towards (a) because it's a *lot* simpler, far more robust (IMO) and lets us finish this effort in a single cycle, but I imagine this could make upgrades very painful for operators if they can't fast track their compute node upgrades. (b) is more complex and would have some constraints, chief among them being that the option would have to be disabled at some point post-release and would have to be disabled on the scheduler first (to prevent the mismash or VCPU and PCPU resource allocations) above. It also means this becomes a three cycle effort at minimum, since this new option will default to True in Train, before defaulting to False and being deprecated in U and finally being removed in V. As such, I'd like some input, particularly from operators using pinned instances in larger deployments. What are your thoughts, and are there any potential solutions that I'm missing here?
>
> Cheers,
> Stephen
>
> [1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zufardhiyaulhaq at gmail.com Fri Aug 16 04:43:32 2019 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Fri, 16 Aug 2019 11:43:32 +0700 Subject: [neutron][OVN] Instance not getting metadata Message-ID:

Hi,

I have set up OpenStack with OVN enabled (manual install) and I can create instances, associate floating IPs and test pings. But my instances are not getting metadata from OpenStack. I checked the reference architecture, https://docs.openstack.org/networking-ovn/queens/admin/refarch/refarch.html: the compute nodes should have the ovn-metadata-agent installed, but I can't find any configuration documentation about the ovn-metadata-agent.

I have configured the configuration files like this:

    [DEFAULT]
    nova_metadata_ip = 10.100.100.10
    [ovs]
    ovsdb_connection = unix:/var/run/openvswitch/db.sock
    [ovn]
    ovn_sb_connection = tcp:10.101.101.10:6642

But the agent always fails. Full logs: http://paste.openstack.org/show/757805/

Anyone know what's happening and how to fix this error?

Best Regards,
Zufar Dhiyaulhaq
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tony at bakeyournoodle.com Fri Aug 16 05:19:34 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 16 Aug 2019 15:19:34 +1000 Subject: [ironic] [stable] Proposal to add Riccardo Pittau to ironic-stable-maint In-Reply-To: References: Message-ID: <20190816051934.GD15862@thor.bakeyournoodle.com>

On Thu, Aug 15, 2019 at 11:34:54AM -0400, Julia Kreger wrote:
> Is there any agreement from stable-maint-core? Asking because we're
> unable to modify ironic's stable maintenance group directly.

Sorry. +2 +W

Yours Tony.
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL:

From mark at stackhpc.com Fri Aug 16 08:32:41 2019 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 16 Aug 2019 09:32:41 +0100 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <87wofepdfu.fsf@meyer.lemoncheese.net> References: <87y305onco.fsf@meyer.lemoncheese.net> <87wofepdfu.fsf@meyer.lemoncheese.net> Message-ID:

On Thu, 15 Aug 2019 at 18:05, James E. Blair wrote:
>
> Hi,
>
> We have made the switch to begin storing all of the build logs from Zuul
> in Swift.
> > Each build's logs will be stored in one of 7 randomly chosen Swift > regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those > providers! > > You'll note that the links in Gerrit to the Zuul jobs now go to a page > on the Zuul web app. A lot of the features previously available on the > log server are now available there, plus some new ones. > > If you're looking for a link to a docs preview build, you'll find that > on the build page under the "Artifacts" section now. > > If you're curious about where your logs ended up, you can see the Swift > hostname under the "logs_url" row in the summary table. > > Please let us know if you have any questions or encounter any issues, > either here, or in #openstack-infra on IRC. One minor thing I noticed is that the emails to openstack-stable-maint list no longer reference the branch. It was previously visible in the URL, e.g. - openstack-tox-py27 https://logs.opendev.org/periodic-stable/opendev.org/openstack/networking-midonet/stable/pike/openstack-tox-py27/649bbb2/ : RETRY_LIMIT in 3m 08s However now it is not: openstack-tox-py27 https://zuul.opendev.org/t/openstack/build/464ae8b594cf4dc5b6da532c4ea179a7 : RETRY_LIMIT in 3m 31s I can see the branch if I click through to the linked Zuul build page. > > -Jim > From sfinucan at redhat.com Fri Aug 16 09:58:50 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 16 Aug 2019 10:58:50 +0100 Subject: More upgrade issues with PCPUs - input wanted In-Reply-To: References: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> Message-ID: On Fri, 2019-08-16 at 12:09 +0800, Alex Xu wrote: > Stephen Finucane 于2019年8月15日周四 下午8:25写道: > > tl;dr: Is breaking booting of pinned instances on Stein compute > > nodes > > > > in a Train deployment an acceptable thing to do, and if not, how do > > we > > > > best handle the VCPU->PCPU migration in Train? > > > > > > > > I've been working through the cpu-resources spec [1] and have run > > into > > > > a tricky issue I'd like some input on. In short, this spec means > > that > > > > pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start > > > > consuming a new resources type, PCPU, instead of VCPU. Many things > > need > > > > to change to make this happen but the key changes are: > > > > > > > > 1. The scheduler needs to start modifying requests for pinned > > instances > > > > to request PCPU resources instead of VCPU resources > > > > 2. The libvirt driver needs to start reporting PCPU resources > > > > 3. The libvirt driver needs to do a reshape, moving all existing > > > > allocations of VCPUs to PCPUs, if the instance holding that > > > > allocation is pinned > > > > > > > > The first two of these steps presents an issue for which we have a > > > > solution, but the solutions we've chosen are now resulting in this > > new > > > > issue. > > > > > > > > * For (1), the translation of VCPU to PCPU in the scheduler means > > > > compute nodes must now report PCPU in order for a pinned > > instance to > > > > land on that host. Since controllers are upgraded before compute > > > > nodes and all compute nodes aren't necessarily upgraded in one > > go > > > > (particularly for edge or other large or multi-cell > > deployments), > > > > this can mean there will be a period of time where there are > > very > > > > few or no hosts available on which to schedule pinned instances. > > > > > > > > * For (2), we're hampered by the fact that there is no clear way > > to > > > > determine if a host is used for pinned instances or not. 
Because > > of > > > > this, we can't determine if a host should be reporting PCPU or > > VCPU > > > > inventory. > > > > > > > > The solution we have for the issues with (1) is to add a workaround > > > > option that would disable this translation, allowing operators time > > to > > > > upgrade all their compute nodes to report PCPU resources before > > > > anything starts using them. For (2), we've decided to temporarily > > (i.e. > > > > for one release or until configuration is updated) report both, in > > the > > > > expectation that everyone using pinned instances has followed the > > long- > > > > standing advice to separate hosts intended for pinned instances > > from > > > > those intended for unpinned instances using host aggregates (e.g. > > even > > > > if we started reporting PCPUs on a host, nothing would consume that > > due > > > > to 'pinned=False' aggregate metadata or similar). These actually > > > > benefit each other, since if instances are still consuming VCPUs > > then > > > > the hosts need to continue reporting VCPUs. However, both interfere > > > > with our ability to do the reshape. > > > > > > > > Normally, a reshape is a one time thing. The way we'd planned to > > > > determine if a reshape was necessary was to check if PCPU inventory > > was > > > > registered against the host and, if not, whether there were any > > pinned > > > > instances on the host. If PCPU inventory was not available and > > there > > > > were pinned instances, we would update the allocations for these > > > > instances so that they would be consuming PCPUs instead of VCPUs > > and > > > > then update the inventory. This is problematic though, because our > > > > solution for the issue with (1) means pinned instances can continue > > to > > > > request VCPU resources, which in turn means we could end up with > > some > > > > pinned instances on a host consuming PCPU and other consuming VCPU. > > > > That obviously can't happen, so we need to change tacks slightly. > > The > > > > two obvious solutions would be to either (a) remove the workaround > > > > option so the scheduler would immediately start requesting PCPUs > > and > > > > just advise operators to upgrade their hosts for pinned instances > > asap > > > > or (b) add a different option, defaulting to True, that would apply > > to > > > > both the scheduler and compute nodes and prevent not only > > translation > > > > of flavors in the scheduler but also the reporting PCPUs and > > reshaping > > > > of allocations until disabled. > > > > > > The step I'm thinking is: > > 1. upgrade control plane, disable request PCPU, still request VCPU. > 2. rolling upgrade compute node, compute nodes begin to report both > PCPU and VCPU. But the request still add to VCPU. > 3. enabling the PCPU request, the new request is request PCPU. > In this point, some of instances are using VCPU, some of > instances are using PCPU on same node. And the amount VCPU + PCPU > will double the available cpu resources. The NUMATopology filter is > responsible for stop over-consuming the total number of cpu. > 4. rolling update compute node's configure to use cpu_dedicated_set, > that trigger the reshape existed VCPU consuming to PCPU consuming. > New request is going to PCPU at step3, no more VCPU request at > this point. Roll upgrade node to get rid of existed VCPU consuming. > 5. done This had been my initial plan. 
The issue is that by reporting both PCPU and VCPU in (2), our compute node's resource provider will now have PCPU inventory available (though it won't be used). This is problematic since "does this resource provider have PCPU inventory" is one of the questions I need to ask to determine if I should do a reshape. If I can't rely on this heuristic, I need to start querying for allocation information (so I can ask "does this resource provider have PCPU *allocations*") every time I start a compute node. I'm guessing this is expensive, since we don't do it by default. Stephen > > I'm currently leaning towards (a) because it's a *lot* simpler, far > > > > more robust (IMO) and lets us finish this effort in a single cycle, > > but > > > > I imagine this could make upgrades very painful for operators if > > they > > > > can't fast track their compute node upgrades. (b) is more complex > > and > > > > would have some constraints, chief among them being that the option > > > > would have to be disabled at some point post-release and would have > > to > > > > be disabled on the scheduler first (to prevent the mismash or VCPU > > and > > > > PCPU resource allocations) above. It also means this becomes a > > three > > > > cycle effort at minimum, since this new option will default to True > > in > > > > Train, before defaulting to False and being deprecated in U and > > finally > > > > being removed in V. As such, I'd like some input, particularly from > > > > operators using pinned instances in larger deployments. What are > > your > > > > thoughts, and are there any potential solutions that I'm missing > > here? > > > > > > > > Cheers, > > > > Stephen > > > > > > > > [1] > > https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Aug 16 10:44:04 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Aug 2019 10:44:04 +0000 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> Message-ID: <20190816104403.txlg6sbqdroz2ghm@yuggoth.org> On 2019-08-15 16:32:55 +0200 (+0200), Pierre Riteau wrote: > Some time after I posted to the list, I started to receive > notifications. If someone fixed it, thanks a lot. [...] I had flagged your report to look into once I wasn't bouncing between airplanes, but had not done so yet. I still intend to check the MTA logs for any earlier delivery failures to your address once I get home. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Aug 16 12:04:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Aug 2019 12:04:10 +0000 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> Message-ID: <20190816120409.t5cyb345cygytrgj@yuggoth.org> On 2019-08-14 14:48:30 +0200 (+0200), Pierre Riteau wrote: > I am reviving this thread as I have never received any email > notifications from starred projects in Storyboard, despite > enabling them multiple times. [...] It looks from the MTA logs like it began to send you notifications on 2019-08-14 at 14:56:23 UTC. I don't see any indication of any messages getting rejected. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From no-reply at openstack.org Fri Aug 16 12:52:54 2019 From: no-reply at openstack.org (no-reply at openstack.org) Date: Fri, 16 Aug 2019 12:52:54 -0000 Subject: kayobe 6.0.0.0rc1 (stein) Message-ID: Hello everyone, A new release candidate for kayobe for the end of the Stein cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/kayobe/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Stein release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/stein release branch at: https://opendev.org/openstack/kayobe/log/?h=stable/stein Release notes for kayobe can be found at: https://docs.openstack.org/releasenotes/kayobe/ From lucasagomes at gmail.com Fri Aug 16 14:20:58 2019 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Fri, 16 Aug 2019 15:20:58 +0100 Subject: [neutron][OVN] Instance not getting metadata In-Reply-To: References: Message-ID: Hi Zufar, The ovn-metadata-agent is trying to connection with the local OVSDB instance via UNIX socket. The same socket used by the "ovn-controller" process running on your compute nodes. For example: $ ps ax | grep ovn-controller 1640 ? S wrote: > > Hi, > > I have set up OpenStack with OVN enabled (manual install) and I can create Instance, associate floating IP and test the ping. > > But my Instance not getting metadata from OpenStack. I check the reference architecture, https://docs.openstack.org/networking-ovn/queens/admin/refarch/refarch.html the compute nodes should have installed ovn-metadata-agent but I don't find any configuration document about ovn-metadata-agent. > > I have configured the configuration files like this: > > [DEFAULT] > nova_metadata_ip = 10.100.100.10 > [ovs] > ovsdb_connection = unix:/var/run/openvswitch/db.sock > [ovn] > ovn_sb_connection = tcp:10.101.101.10:6642 > > But the agent always fails. Full logs: http://paste.openstack.org/show/757805/ > > Anyone know whats happen and how to fix this error? > > Best Regards, > Zufar Dhiyaulhaq From zufardhiyaulhaq at gmail.com Fri Aug 16 14:31:53 2019 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Fri, 16 Aug 2019 21:31:53 +0700 Subject: [neutron][OVN] Instance not getting metadata In-Reply-To: References: Message-ID: Hi Lucas, Its always same, my OVSDB local instance is listening in unix:/var/run/openvswitch/db.sock [root at zu-ovn-compute0 ~]# ps ax | grep ovn-controller 15398 ? 
S 2019-08-16 10:29:08.391 15647 ERROR neutron sys.exit(main()) 2019-08-16 10:29:08.391 15647 ERROR neutron File "/usr/lib/python2.7/site-packages/networking_ovn/cmd/eventlet/agents/metadata.py", line 17, in main 2019-08-16 10:29:08.391 15647 ERROR neutron metadata_agent.main() 2019-08-16 10:29:08.391 15647 ERROR neutron File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata_agent.py", line 38, in main 2019-08-16 10:29:08.391 15647 ERROR neutron agt.start() 2019-08-16 10:29:08.391 15647 ERROR neutron File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 139, in start 2019-08-16 10:29:08.391 15647 ERROR neutron self.ovs_idl = ovsdb.MetadataAgentOvsIdl().start() 2019-08-16 10:29:08.391 15647 ERROR neutron File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/ovsdb.py", line 59, in start 2019-08-16 10:29:08.391 15647 ERROR neutron 'Open_vSwitch') 2019-08-16 10:29:08.391 15647 ERROR neutron File "/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 120, in get_schema_helper 2019-08-16 10:29:08.391 15647 ERROR neutron raise Exception("Could not connect to %s" % connection) 2019-08-16 10:29:08.391 15647 ERROR neutron Exception: Could not connect to unix:/var/run/openvswitch/db.sock 2019-08-16 10:29:08.391 15647 ERROR neutron [root at zu-ovn-compute0 ~]# systemctl status openvswitch ● openvswitch.service - Open vSwitch Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled) Active: active (exited) since Jum 2019-08-16 10:27:01 EDT; 4min 22s ago Main PID: 15323 (code=exited, status=0/SUCCESS) Tasks: 0 CGroup: /system.slice/openvswitch.service Agu 16 10:27:01 zu-ovn-compute0 systemd[1]: Starting Open vSwitch... Agu 16 10:27:01 zu-ovn-compute0 systemd[1]: Started Open vSwitch. [root at zu-ovn-compute0 ~]# systemctl status ovn-controller ● ovn-controller.service - OVN controller daemon Loaded: loaded (/usr/lib/systemd/system/ovn-controller.service; enabled; vendor preset: disabled) Active: active (running) since Jum 2019-08-16 10:27:12 EDT; 4min 22s ago Main PID: 15398 (ovn-controller) Tasks: 1 CGroup: /system.slice/ovn-controller.service └─15398 ovn-controller unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --no-chdir --log-file=/var/log/openvswitch/ovn-controller... Agu 16 10:27:12 zu-ovn-compute0 systemd[1]: Starting OVN controller daemon... Agu 16 10:27:12 zu-ovn-compute0 ovn-ctl[15386]: Starting ovn-controller [ OK ] Agu 16 10:27:12 zu-ovn-compute0 systemd[1]: Started OVN controller daemon. Best Regards, Zufar Dhiyaulhaq On Fri, Aug 16, 2019 at 9:21 PM Lucas Alvares Gomes wrote: > Hi Zufar, > > The ovn-metadata-agent is trying to connection with the local OVSDB > instance via UNIX socket. The same socket used by the "ovn-controller" > process running on your compute nodes. For example: > > $ ps ax | grep ovn-controller > 1640 ? S unix:/usr/local/var/run/openvswitch/db.sock -vconsole:emer > -vsyslog:err -vfile:info --no-chdir > --log-file=/opt/stack/logs/ovn-controller.log > --pidfile=/usr/local/var/run/openvswitch/ovn-controller.pid --detach > > Can you see the "unix:/usr/local/var/run/openvswitch/db.sock" ? That's > the UNIX socket path that needs to be passed to the ovn-metadata-agent > via configuration option. 
> > You can find that configuration option at: > > $ grep ovsdb_connection /etc/neutron/networking_ovn_metadata_agent.ini > ovsdb_connection = unix:/usr/local/var/run/openvswitch/db.sock > > Hope that helps, > Lucas > > On Fri, Aug 16, 2019 at 5:53 AM Zufar Dhiyaulhaq > wrote: > > > > Hi, > > > > I have set up OpenStack with OVN enabled (manual install) and I can > create Instance, associate floating IP and test the ping. > > > > But my Instance not getting metadata from OpenStack. I check the > reference architecture, > https://docs.openstack.org/networking-ovn/queens/admin/refarch/refarch.html > the compute nodes should have installed ovn-metadata-agent but I don't find > any configuration document about ovn-metadata-agent. > > > > I have configured the configuration files like this: > > > > [DEFAULT] > > nova_metadata_ip = 10.100.100.10 > > [ovs] > > ovsdb_connection = unix:/var/run/openvswitch/db.sock > > [ovn] > > ovn_sb_connection = tcp:10.101.101.10:6642 > > > > But the agent always fails. Full logs: > http://paste.openstack.org/show/757805/ > > > > Anyone know whats happen and how to fix this error? > > > > Best Regards, > > Zufar Dhiyaulhaq > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Aug 16 14:36:20 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 16 Aug 2019 15:36:20 +0100 (BST) Subject: [placement] update 19-32 Message-ID: HTML: https://anticdent.org/placement-update-19-32.html Here's placement update 19-32. There will be no update 33; I'm going to take next week off. If there are Placement-related issues that need immediate attention please speak with any of Eric Fried (efried), Balazs Gibizer (gibi), or Tetsuro Nakamura (tetsuro). # Most Important Same as last week: The main things on the Placement radar are implementing Consumer Types and cleanups, performance analysis, and documentation related to nested resource providers. A thing we should place on the "important" list is bringing the osc placement plugin up to date. We also need to discuss what would we would like the plugin to be. Is it required that it have ways to perform all the functionality of the API, or is it about providing ways to do what humans need to do with the placement API? Is there a difference? We decided that consumer types is medium priority: The nova-side use of the functionality is not going to happen in Train, but it would be nice to have the placement-side ready when U opens. The primary person working on it, tssurya, is spread pretty thin so it might not happen unless someone else has the cycles to give it some attention. On the documentation front, we realized during some performance work [last](https://review.opendev.org/675606) [week](https://review.opendev.org/#/c/676204/4/placement/tests/functional/gabbits/same-subtree-deep.yaml at 29) that it easy to have an incorrect grasp of how `same_subtree` works when there are more than two groups involved. It is critical that we create good "how to use" documentation for this and other advanced placement features. Not only can it be easy to get wrong, it can be challenge to see that you've got it wrong (the failure mode is "more results, only some of which you actually wanted"). # What's Changed * Yet more [performance fixes](https://review.opendev.org/#/q/topic:optimize-_build_provider_summaries) are in the process of merging. Most of these are related to getting `_merge_candidates` and `_build_provider_summaries` to have less impact. 
The fixes are generally associated with avoiding duplicate work by generating dicts of reusable objects earlier in the request. This is possible because of the relatively new `RequestWideSearchContext`. In a request that returns many provider summaries `_build_provider_summaries` continues to have a significant impact because it has to create many objects but overall everything is much less heavyweight. More on performance in Themes, below. * The combination of all these performance fixes, and because of microversions, makes it reasonable for anyone running placement in a resource constrained environment (or simply wanting things to be faster) to consider running Train placement with _any_ release of OpenStack. Obviously you should test it first, but it is worth investigating. More information on how to achieve this can be found in the [upgrade to stein docs](https://docs.openstack.org/placement/latest/upgrade/to-stein.html) # Stories/Bugs (Numbers in () are the change since the last pupdate.) There are 23 (1) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 4 (1) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 4 (0) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 11 (0) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 4 (0) are [docs](https://storyboard.openstack.org/#!/worklist/637). If you're interested in helping out with placement, those stories are good places to look. * Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 18 (1). * Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 4 (-1). # osc-placement osc-placement is currently behind by 12 microversions. * Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results. * Adds a new '--amend' option which can update resource provider inventory without requiring the user to pass a full replacement for inventory. This has been broken up into three patches to help with review. # Main Themes ## Consumer Types Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting. * A WIP, as microversion 1.37, has started. As mentioned above, this is currently paused while other things take priority. If you have time that you could spend on this please respond here expressing that interest. ## Cleanup Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things. As said above, there's lots of performance work in progress. We'll need to make a similar effort with regard to docs. For example, all of the coders involved in the creation and review of the `same_subtree` functionality struggle to explain, clearly and simply, how it will work in a variety of situations. We need to enumerate the situations and the outcomes, in documentation. One outcome of this work will be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is use more web servers and more database servers, but that's often very wasteful. 
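To make the `same_subtree` discussion above concrete, here is a rough sketch of the kind of query involved (hypothetical group suffixes and a hypothetical custom resource class, not an excerpt from the docs; it needs a sufficiently new microversion): two request groups whose satisfying providers are required to live under a common subtree of one provider tree:

    GET /allocation_candidates?resources_COMPUTE=VCPU:1,MEMORY_MB:256&resources_ACCEL=CUSTOM_FPGA:1&same_subtree=_COMPUTE,_ACCEL&group_policy=none

`same_subtree` takes a comma-separated list of request group suffixes and, roughly, requires that the providers satisfying the named groups all be rooted under a common provider in the same tree. With more than two groups the number of qualifying provider combinations grows quickly, which is exactly how you end up with "more results, only some of which you actually wanted".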
On the performance front, there is one major area of impact which has not received much attention yet. When requesting allocation candidates (or resource providers) that will return many results, the cost of JSON serialization is just under one quarter of the processing time. This is to be expected when the response body is `2379k` big, and 154000 lines long (when pretty printed) for 7000 provider summaries and 2000 allocation requests. But there are ways to fix it. One is to ask more focused questions (so fewer results are expected). Another is to `limit=N` the results (but this can lead to issues with migrations). Another is to [use a different JSON serializer](https://review.opendev.org/674661). Should we do that? It makes a _big_ difference with large result sets (which will be common in big and sparse clouds).

# Other Placement

Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open).

There are two [os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And zero [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open).

# Other Service Users

New discoveries are added to the end. Merged stuff is removed. Anything that has had no activity in 4 weeks has been removed.

* Nova: nova-manage: heal port allocations
* Cyborg: Placement report
* helm: add placement chart
* libvirt: report pmem namespaces resources by provider tree
* Nova: Remove PlacementAPIConnectFailure handling from AggregateAPI
* Nova: WIP: Add a placement audit command
* blazar: Fix placement operations in multi-region deployments
* Nova: libvirt: Start reporting PCPU inventory to placement
* A part of Nova: support move ops with qos ports
* Blazar: Create placement client for each request
* nova: Support filtering of hosts by forbidden aggregates
* blazar: Send global_request_id for tracing calls
* tempest: Add placement API methods for testing routed provider nets
* openstack-helm: Build placement in OSH-images
* Correct global_request_id sent to Placement
* Nova: cross cell resize
* Nova: Scheduler translate properties to traits
* Nova: single pass instance info fetch in host manager
* Zun: [WIP] Claim container allocation in placement

# End

Have a good next week.

-- 
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent

From witold.bedyk at suse.com  Fri Aug 16 15:02:58 2019
From: witold.bedyk at suse.com (Witek Bedyk)
Date: Fri, 16 Aug 2019 17:02:58 +0200
Subject: [monasca][ptl] Out of office
Message-ID: <47032c26-b786-33b7-c491-ec3eea6eb976@suse.de>

Hello everyone,

I will be out of office from August 19th until 30th. In Monasca related matters please contact: Martin Chacon Piza 

Thanks
Witek

From witold.bedyk at suse.com  Fri Aug 16 15:11:29 2019
From: witold.bedyk at suse.com (Witek Bedyk)
Date: Fri, 16 Aug 2019 17:11:29 +0200
Subject: [aodh] [heat] Stein: How to create alarms based on rate metrics like CPU utilization?
In-Reply-To: 
References: 
Message-ID: <20c4381e-eb06-3be1-ea59-2a306d187a30@suse.de>

Hi all,

You can also collect the `cpu.utilization_perc` metric with Monasca and trigger Heat auto-scaling, as we demonstrated in the hands-on workshop at the last Summit in Denver. Here is the Heat template we've used [1]. You can find the workshop material here [2].
Cheers Witek [1] https://github.com/sjamgade/monasca-autoscaling/blob/master/final/autoscaling.yaml [2] https://github.com/sjamgade/monasca-autoscaling On 8/14/19 6:34 PM, Duc Truong wrote: > I don't know how to solve this problem in aodh, but it is possible to > use Prometheus to aggregate CPU utilization and trigger scaling. I > wrote up how to do this with Senlin and Prometheus here: > https://medium.com/@dkt26111/auto-scaling-openstack-instances-with-senlin-and-prometheus-46100a9a14e1?source=friends_link&sk=5c0a2aa9e541e8c350963e7ec72bcbb5 > > You can probably do something similar with Heat and Prometheus. > > On Sun, Aug 4, 2019 at 12:52 AM Bernd Bausch wrote: >> >> Prior to Stein, Ceilometer issued a metric named cpu_util, which I could use to trigger alarms and autoscaling when CPU utilization was too high. >> >> cpu_util doesn't exist anymore. Instead, we are asked to use Gnocchi's rate feature. However, when using rates, alarms on a group of resources require more parameters than just one metric: Both an aggregation and a reaggregation method are needed. >> >> For example, a group of instances that implement "myapp": >> >> gnocchi measures aggregation -m cpu --reaggregation mean --aggregation rate:mean --query server_group=myapp --resource-type instance >> >> Actually, this command uses a deprecated API (but from what I can see, Aodh still uses it). The new way is like this: >> >> gnocchi aggregates --resource-type instance '(aggregate rate:mean (metric cpu mean))' server_group=myapp >> >> If rate:mean is in the archive policy, it also works the other way around: >> >> gnocchi aggregates --resource-type instance '(aggregate mean (metric cpu rate:mean))' server_group=myapp >> >> Without reaggregation, I get quite unexpected numbers, including negative CPU rates. If you want to understand why, see this discussion with one of the Gnocchi maintainers [1]. >> >> My problem: Aodh allows me to set an aggregation method, but not a reaggregation method. How can I create alarms based on rates? The problem extends to Heat and autoscaling. >> >> Thanks much, >> >> Bernd. >> >> [1] https://github.com/gnocchixyz/gnocchi/issues/1044 From gouthampravi at gmail.com Fri Aug 16 15:52:10 2019 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 16 Aug 2019 08:52:10 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> Message-ID: On Thu, Aug 15, 2019 at 12:14 PM Eric Fried wrote: > Hi infra. I wanted to blast out a handful of issues I've had since the > cutover to swift. > > I'm using Chrome Version 76.0.3809.100 (Official Build) (64-bit) on > bionic (18.04.3). > > - Hot tip: if you want the dynamic logs (with timestamp links and sev > filters), use the twisties. Clicking through gets you to the raw files. > Unfamiliar with the term "twisties" and googling didn't help - what do you mean? > The former was not obvious to me. > - Some in-app logs aren't working. E.g. when I try to look at > controller=>logs=>screen-n-cpu.txt.gz from [1], it redirects me to [2]. > The hover [3] has a double slash in it, not sure if that's related, but > when I try squashing to one slash, I get an error... sometimes [4]. > - When the in-app logs do render, they don't wrap. There's a horizontal > scroll bar, but it's at the bottom of an inner frame, so it's off the > screen most of the time and therefore not useful. 
> (I don't have horizontal mouse scroll capabilities; maybe I should look into that.)
> - The timestamp links in app anchor the line at the "top" - which (for
> me, anyway) is "underneath" the header menu (Status Projects Jobs Labels
> Nodes Builds Buildsets), so I have to scroll up to get the anchored line
> and a few of its successors.
>
> Thanks as always for all your hard work.
>
> efried
>
> [1] https://zuul.opendev.org/t/openstack/build/402a73a9238643c2b893d53b37a6ce27/logs
> [2] https://zuul.opendev.org/tenants
> [3] https://zuul.opendev.org/t/openstack/build/402a73a9238643c2b893d53b37a6ce27/log/controller//logs/screen-n-cpu.txt.gz
> [4] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2019-08-15.log.html#t2019-08-15T18:49:04
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From laszlo.budai at gmail.com  Fri Aug 16 16:10:24 2019
From: laszlo.budai at gmail.com (Budai Laszlo)
Date: Fri, 16 Aug 2019 19:10:24 +0300
Subject: [metrics] [telemetry] [stein] cpu_util
Message-ID: <9058e09f-a5ce-4db9-5077-1217ece1695a@gmail.com>

Hello all,

the release announcement of Ceilometer Rocky deprecates the cpu_util and *.rate metrics:

"* cpu_util and *.rate meters are deprecated and will be removed in future release in favor of the Gnocchi rate calculation equivalent."

so we don't have them in Stein. Can you direct me to some document that describes how to achieve these with Gnocchi rate calculation?

Thank you,
Laszlo

From zufardhiyaulhaq at gmail.com  Fri Aug 16 17:17:58 2019
From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq)
Date: Sat, 17 Aug 2019 00:17:58 +0700
Subject: [neutron][OVN] Instance not getting metadata
In-Reply-To: 
References: 
Message-ID: 

Hi Lucas,

My networking-ovn-metadata-agent.service is running as user neutron:

[root at zu-ovn-compute1 ~]# cat /usr/lib/systemd/system/networking-ovn-metadata-agent.service
[Unit]
Description=OpenStack networking-ovn Metadata Agent
After=syslog.target network.target openvswitch.service
Requires=openvswitch.service
[Service]
Type=simple
User=neutron

But the socket is owned by the openvswitch user:

[root at zu-ovn-compute1 ~]# cd /var/run/openvswitch/
[root at zu-ovn-compute1 openvswitch]# ls -lh
total 12K
srwxr-x---. 1 openvswitch hugetlbfs 0 Agu 16 10:28 br-int.mgmt
srwxr-x---. 1 openvswitch hugetlbfs 0 Agu 16 10:28 br-int.snoop
srwxr-x---. 1 openvswitch hugetlbfs 0 Agu 16 10:28 br-provider.mgmt
srwxr-x---. 1 openvswitch hugetlbfs 0 Agu 16 10:28 br-provider.snoop
srwxr-x---. 1 openvswitch hugetlbfs 0 Agu 16 10:27 db.sock

I have tried to change the user in networking-ovn-metadata-agent.service from neutron to root and the service is running.

[root at zu-ovn-compute0 openvswitch]# cat /usr/lib/systemd/system/networking-ovn-metadata-agent.service
[Unit]
Description=OpenStack networking-ovn Metadata Agent
After=syslog.target network.target openvswitch.service
Requires=openvswitch.service
[Service]
Type=simple
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --config-dir /etc/neutron/conf.d/networking-ovn-metadata-agent --log-file /var/log/neutron/networking-ovn-metadata-agent.log
PrivateTmp=false
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

[root at zu-ovn-compute0 openvswitch]# systemctl status networking-ovn-metadata-agent.service
● networking-ovn-metadata-agent.service - OpenStack networking-ovn Metadata Agent
   Loaded: loaded (/usr/lib/systemd/system/networking-ovn-metadata-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Jum 2019-08-16 13:13:24 EDT; 4min 5s ago
 Main PID: 18692 (networking-ovn-)
    Tasks: 3
   CGroup: /system.slice/networking-ovn-metadata-agent.service
           ├─18692 /usr/bin/python2 /usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --conf...
           ├─18710 /usr/bin/python2 /usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --conf...
           └─18711 /usr/bin/python2 /usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --conf...

Agu 16 13:13:24 zu-ovn-compute0 systemd[1]: Started OpenStack networking-ovn Metadata Agent.
Agu 16 13:13:26 zu-ovn-compute0 sudo[18712]: root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/privsep-helper --config-file /etc/neutron/plugins/networking-o...
Hint: Some lines were ellipsized, use -l to show in full.

Is this a bug in CentOS upstream?

Best Regards,
Zufar Dhiyaulhaq

On Fri, Aug 16, 2019 at 9:31 PM Zufar Dhiyaulhaq wrote:
> Hi Lucas,
>
> Its always same, my OVSDB local instance is listening in
> unix:/var/run/openvswitch/db.sock
>
> [root at zu-ovn-compute0 ~]# ps ax | grep ovn-controller
> 15398 ?
S unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info > --no-chdir --log-file=/var/log/openvswitch/ovn-controller.log > --pidfile=/var/run/openvswitch/ovn-controller.pid --detach > 15807 pts/0 S+ 0:00 grep --color=auto ovn-controller > > [root at zu-ovn-compute0 ~]# grep ovsdb_connection > /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini > ovsdb_connection = unix:/var/run/openvswitch/db.sock > > [root at zu-ovn-compute0 ~]# tail -f > /var/log/neutron/networking-ovn-metadata-agent.log > 2019-08-16 10:29:08.391 15647 CRITICAL neutron [-] Unhandled error: > Exception: Could not connect to unix:/var/run/openvswitch/db.sock > 2019-08-16 10:29:08.391 15647 ERROR neutron Traceback (most recent call > last): > 2019-08-16 10:29:08.391 15647 ERROR neutron File > "/usr/bin/networking-ovn-metadata-agent", line 10, in > 2019-08-16 10:29:08.391 15647 ERROR neutron sys.exit(main()) > 2019-08-16 10:29:08.391 15647 ERROR neutron File > "/usr/lib/python2.7/site-packages/networking_ovn/cmd/eventlet/agents/metadata.py", > line 17, in main > 2019-08-16 10:29:08.391 15647 ERROR neutron metadata_agent.main() > 2019-08-16 10:29:08.391 15647 ERROR neutron File > "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata_agent.py", > line 38, in main > 2019-08-16 10:29:08.391 15647 ERROR neutron agt.start() > 2019-08-16 10:29:08.391 15647 ERROR neutron File > "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", > line 139, in start > 2019-08-16 10:29:08.391 15647 ERROR neutron self.ovs_idl = > ovsdb.MetadataAgentOvsIdl().start() > 2019-08-16 10:29:08.391 15647 ERROR neutron File > "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/ovsdb.py", > line 59, in start > 2019-08-16 10:29:08.391 15647 ERROR neutron 'Open_vSwitch') > 2019-08-16 10:29:08.391 15647 ERROR neutron File > "/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", > line 120, in get_schema_helper > 2019-08-16 10:29:08.391 15647 ERROR neutron raise Exception("Could not > connect to %s" % connection) > 2019-08-16 10:29:08.391 15647 ERROR neutron Exception: Could not connect > to unix:/var/run/openvswitch/db.sock > 2019-08-16 10:29:08.391 15647 ERROR neutron > > [root at zu-ovn-compute0 ~]# systemctl status openvswitch > ● openvswitch.service - Open vSwitch > Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; > vendor preset: disabled) > Active: active (exited) since Jum 2019-08-16 10:27:01 EDT; 4min 22s ago > Main PID: 15323 (code=exited, status=0/SUCCESS) > Tasks: 0 > CGroup: /system.slice/openvswitch.service > > Agu 16 10:27:01 zu-ovn-compute0 systemd[1]: Starting Open vSwitch... > Agu 16 10:27:01 zu-ovn-compute0 systemd[1]: Started Open vSwitch. > > [root at zu-ovn-compute0 ~]# systemctl status ovn-controller > ● ovn-controller.service - OVN controller daemon > Loaded: loaded (/usr/lib/systemd/system/ovn-controller.service; > enabled; vendor preset: disabled) > Active: active (running) since Jum 2019-08-16 10:27:12 EDT; 4min 22s ago > Main PID: 15398 (ovn-controller) > Tasks: 1 > CGroup: /system.slice/ovn-controller.service > └─15398 ovn-controller unix:/var/run/openvswitch/db.sock > -vconsole:emer -vsyslog:err -vfile:info --no-chdir > --log-file=/var/log/openvswitch/ovn-controller... > > Agu 16 10:27:12 zu-ovn-compute0 systemd[1]: Starting OVN controller > daemon... > Agu 16 10:27:12 zu-ovn-compute0 ovn-ctl[15386]: Starting ovn-controller [ > OK ] > Agu 16 10:27:12 zu-ovn-compute0 systemd[1]: Started OVN controller daemon. 
> > Best Regards, > Zufar Dhiyaulhaq > > > On Fri, Aug 16, 2019 at 9:21 PM Lucas Alvares Gomes > wrote: > >> Hi Zufar, >> >> The ovn-metadata-agent is trying to connection with the local OVSDB >> instance via UNIX socket. The same socket used by the "ovn-controller" >> process running on your compute nodes. For example: >> >> $ ps ax | grep ovn-controller >> 1640 ? S> unix:/usr/local/var/run/openvswitch/db.sock -vconsole:emer >> -vsyslog:err -vfile:info --no-chdir >> --log-file=/opt/stack/logs/ovn-controller.log >> --pidfile=/usr/local/var/run/openvswitch/ovn-controller.pid --detach >> >> Can you see the "unix:/usr/local/var/run/openvswitch/db.sock" ? That's >> the UNIX socket path that needs to be passed to the ovn-metadata-agent >> via configuration option. >> >> You can find that configuration option at: >> >> $ grep ovsdb_connection /etc/neutron/networking_ovn_metadata_agent.ini >> ovsdb_connection = unix:/usr/local/var/run/openvswitch/db.sock >> >> Hope that helps, >> Lucas >> >> On Fri, Aug 16, 2019 at 5:53 AM Zufar Dhiyaulhaq >> wrote: >> > >> > Hi, >> > >> > I have set up OpenStack with OVN enabled (manual install) and I can >> create Instance, associate floating IP and test the ping. >> > >> > But my Instance not getting metadata from OpenStack. I check the >> reference architecture, >> https://docs.openstack.org/networking-ovn/queens/admin/refarch/refarch.html >> the compute nodes should have installed ovn-metadata-agent but I don't find >> any configuration document about ovn-metadata-agent. >> > >> > I have configured the configuration files like this: >> > >> > [DEFAULT] >> > nova_metadata_ip = 10.100.100.10 >> > [ovs] >> > ovsdb_connection = unix:/var/run/openvswitch/db.sock >> > [ovn] >> > ovn_sb_connection = tcp:10.101.101.10:6642 >> > >> > But the agent always fails. Full logs: >> http://paste.openstack.org/show/757805/ >> > >> > Anyone know whats happen and how to fix this error? >> > >> > Best Regards, >> > Zufar Dhiyaulhaq >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensrloo at gmail.com Fri Aug 16 17:28:19 2019 From: opensrloo at gmail.com (Ruby Loo) Date: Fri, 16 Aug 2019 13:28:19 -0400 Subject: [ironic][ptl] Taking a break - i.e. disconnecting for a few weeks in September In-Reply-To: References: Message-ID: On Thu, Aug 15, 2019 at 7:52 AM Julia Kreger wrote: > Greetings everyone, > > For my own mental health and with various things that have occurred in > my life these past six months, I will be disconnecting for two weeks > during the month of September, starting on the 6th. To put icing on > the cake, I then have business related travel the two weeks following > my return that will inhibit regular IRC access during peak contributor > days/hours. Thanks for letting us know. A well-deserved break and we need to make sure you take more of them! :) > In my absence, Dmitry Tantsur has agreed to take care of my PTL > responsibilities. This will include the time through requirements > freeze and quite possibly include the creation of the stable/train > branch if necessary. > > That being said, fear not! I still intend to run for Ironic's PTL for > the next cycle! > Glad you mentioned this cuz I was beginning to worry 😓 > -Julia > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at fried.cc Fri Aug 16 17:47:06 2019 From: openstack at fried.cc (Eric Fried) Date: Fri, 16 Aug 2019 12:47:06 -0500 Subject: [all][infra] Zuul logs are in swift In-Reply-To: References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> Message-ID: <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc> > - Hot tip: if you want the dynamic logs (with timestamp links and sev > filters), use the twisties. Clicking through gets you to the raw files. > > > Unfamiliar with the term "twisties" and googling didn't help - what do > you mean? Sorry about that. On the left-hand side next to the folder names there's an icon that looks like '>'. If you click on the folder, it pops you into a regular directory browser with none of the whizbang anchoring/filtering features. But if instead you click the '>', it rotates downward ('v') and expands the directory tree in place. Keep navigating until you get to the file you want, and then click on the file (not 'raw') to open it within the "app". Thanks, efried . From corey.bryant at canonical.com Fri Aug 16 19:44:49 2019 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 16 Aug 2019 15:44:49 -0400 Subject: [goal][python3] Train unit tests weekly update (goal-4) Message-ID: This is the goal-4 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. There are 4 weeks remaining for completion of Train community goals [2]. == How can you help? == If your project has failing tests please take a look and help fix. Python 3.7 unit tests will be self-testing in Zuul. Failing patches: https://review.openstack.org/#/q/topic:python3-train +status:open+(+label:Verified-1+OR+label:Verified-2+) If your project has patches with successful tests please help get them merged. Open patches needing reviews: https://review.openstack.org/#/q/topic:python3 -train+is:open Patch automation scripts needing review: https://review.opendev.org/#/c/666934 == Ongoing Work == Thank you to all who have contributed their time and fixes to enable patches to land. We're down to 23 projects with failing tests. == Completed Work == All patches have been submitted to all applicable projects for this goal. Merged patches: https://review.openstack.org/#/q/topic:python3-train +is:merged == What's the Goal? == To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train. For complete details please see [1]. == Reference Material == [1] Goal description: https://governance.openstack.org/tc/goals/train/ python3-updates.html [2] Train release schedule: https://releases.openstack.org/train /schedule.html (see R-5 for "Train Community Goals Completed") Storyboard: https://storyboard.openstack.org/#!/story/2005924 Porting to Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7 Python Update Process: https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst Train runtimes: https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/ train.rst Thanks, -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From colleen at gazlene.net Fri Aug 16 19:47:20 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 16 Aug 2019 12:47:20 -0700 Subject: [keystone] Feature proposal freeze exception for resource options Message-ID: Work is underway to implement resource options for all resources[1] but the code has not been proposed yet. Since the implementation is close to ready to review, and because immutable resources[2] depends on it, we're proposing a 1-week extension to our feature proposal freeze deadline[3]. This will also push out the work for immutable resources, but the majority of this feature has been implemented[4] and will only require a rebase on top of the resource options work. If you have any concerns about this, please let me or the team know. Colleen [1] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/resource-options-for-all.html [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/immutable-resources.html [3] https://releases.openstack.org/train/schedule.html [4] https://review.opendev.org/#/q/topic:immutable-resources From corvus at inaugust.com Fri Aug 16 20:49:54 2019 From: corvus at inaugust.com (James E. Blair) Date: Fri, 16 Aug 2019 13:49:54 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc> (Eric Fried's message of "Fri, 16 Aug 2019 12:47:06 -0500") References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc> Message-ID: <87mug8j0nh.fsf@meyer.lemoncheese.net> All of the fixes to the issues that Eric identified have landed, so you should see them now (if not, hit reload). Plus a couple more -- log lines are now displayed with line numbers, and it is these numbers which are the clickable links to create links for sharing. Tristan also added a feature to support selecting a range. Click the first line number to start the range, then Shift-click it to select the end. For example: http://zuul.openstack.org/build/602ab1629aca4f4ebe21ff7024884f87/log/job-output.txt#456-464 -Jim From rlennie at verizonmedia.com Fri Aug 16 21:24:26 2019 From: rlennie at verizonmedia.com (Robert Lennie) Date: Fri, 16 Aug 2019 14:24:26 -0700 Subject: How are we managing quotas, flavors, projects and more across large org Openstack deployments? Message-ID: Hi, I am a complete neophyte to this forum so I may have missed any earlier discussions on this topic if there have been any? I am currently working on a proprietary legacy (Django based) "OpenStack Operations Manager" dashboard system that attempts to manage quotas, flavors, clusters and more across a very large distributed OpenStack deployment for multiple customers. There are a number of potential shortcomings with our current dashboard system and I was wondering how other teams in other organizations with similar large Openstack deployments are handling these types of system management issues? The issues addressed are likely common to other large environments. In particular what tools might exist upstream that allow system (particularly quota and flavor) management across large distributed Openstack environments? I welcome any feedback, comments, suggestions and any other information regarding the tools that currently exist or may be planned for this purpose? Regards, Robert Lennie Principal Software Engineer Verizon Media -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From colleen at gazlene.net Fri Aug 16 21:41:09 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 16 Aug 2019 14:41:09 -0700 Subject: [keystone] Feature proposal freeze status Message-ID: This week is the keystone team's feature proposal freeze week. Here's a status update for our planned features for this cycle[1]: * Add Fine Grained Restrictions to Application Credentials[2]: implementation is proposed and ready for review[3], most work is merged, some patches remaining for review, client work not started * Expiring Group Memberships Through Mapping Rules[4]: implementation not yet proposed, 1-week extension granted[5] * Explicit Domain IDs[6]: Complete[7] * Immutable Resources[8]: Implementation proposed[9], partially WIP; depends on resource options work * Resource options for all resource types[10]: implementation not yet proposed, 1-week extension granted[11] * Extend user API to support federated attributes[12]: proposed and ready for review[13] We're only holding spec work accountable to the feature proposal freeze deadline. Other roadmap features[14], including system scope and default roles work, is open for code proposals until feature freeze week. [1] http://specs.openstack.org/openstack/keystone-specs/ [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/capabilities-app-creds.html [3] https://review.opendev.org/#/q/topic:bp/whitelist-extension-for-app-creds [4] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/expiring-group-memberships.html [5] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008513.html [6] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/explicit-domains-ids.html [7] https://review.opendev.org/605235 [8] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/immutable-resources.html [9] https://review.opendev.org/#/q/topic:immutable-resources [10] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/resource-options-for-all.html [11] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008546.html [12] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/train/support-federated-attr.html [13] https://review.opendev.org/#/q/topic:bp/support-federated-attr [14] https://trello.com/b/ClKW9C8x/keystone-train-roadmap From openstack at fried.cc Fri Aug 16 22:43:08 2019 From: openstack at fried.cc (Eric Fried) Date: Fri, 16 Aug 2019 17:43:08 -0500 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <87mug8j0nh.fsf@meyer.lemoncheese.net> References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc> <87mug8j0nh.fsf@meyer.lemoncheese.net> Message-ID: <7eb6acd9-afcf-35fe-0925-000e9ffc23f8@fried.cc> Thank you very much for fixing these so quickly! The changes look great! efried On 8/16/19 3:49 PM, James E. Blair wrote: > All of the fixes to the issues that Eric identified have landed, so you > should see them now (if not, hit reload). > > Plus a couple more -- log lines are now displayed with line numbers, and > it is these numbers which are the clickable links to create links for > sharing. > > Tristan also added a feature to support selecting a range. Click the > first line number to start the range, then Shift-click it to select the > end. 
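To make the two orderings concrete, here is a rough pseudocode sketch (hypothetical helper names, not keystone's actual code):

    # Most keystone APIs: enforce policy before any lookup, so an
    # unauthorized caller always sees 403 and learns nothing about
    # whether the resource exists.
    def get_resource(context, resource_id):
        enforce_rbac(context, 'identity:get_resource')   # 403 on failure
        return db_lookup(resource_id)                    # 404 only after passing RBAC

    # GET trust details today: look up first, so a missing trust ID
    # yields 404 even to callers who would have failed enforcement,
    # divulging whether the record exists.
    def get_trust(context, trust_id):
        trust = db_lookup(trust_id)                      # 404 on a missing record
        enforce_rbac(context, 'identity:get_trust', target=trust)  # 403 afterwards
        return trust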
> > For example: > > http://zuul.openstack.org/build/602ab1629aca4f4ebe21ff7024884f87/log/job-output.txt#456-464 > > -Jim > From cboylan at sapwetik.org Fri Aug 16 23:17:58 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 16 Aug 2019 16:17:58 -0700 Subject: =?UTF-8?Q?Re:_How_are_we_managing_quotas, _flavors, _projects_and_more_acr?= =?UTF-8?Q?oss_large_org_Openstack_deployments=3F?= In-Reply-To: References: Message-ID: On Fri, Aug 16, 2019, at 2:24 PM, Robert Lennie wrote: > Hi, > > I am a complete neophyte to this forum so I may have missed any earlier > discussions on this topic if there have been any? > > I am currently working on a proprietary legacy (Django based) > "OpenStack Operations Manager" dashboard system that attempts to manage > quotas, flavors, clusters and more across a very large distributed > OpenStack deployment for multiple customers. > > There are a number of potential shortcomings with our current dashboard > system and I was wondering how other teams in other organizations with > similar large Openstack deployments are handling these types of system > management issues? The issues addressed are likely common to other > large environments. > > In particular what tools might exist upstream that allow system > (particularly quota and flavor) management across large distributed > Openstack environments? > > I welcome any feedback, comments, suggestions and any other information > regarding the tools that currently exist or may be planned for this > purpose? We are primarily cloud users not administrators but have been using the ansible cloud launcher role [0]. This has the ability to manage resources that are typically set by the administrator including quotas and flavors. For image management we have nodepool [1] building images with disk image builder daily and uploading those images to the clouds we use. [0] https://opendev.org/x/ansible-role-cloud-launcher/src/branch/master/tasks [1] https://zuul-ci.org/docs/nodepool/operation.html#nodepool-builder Hope this helps, Clark From colleen at gazlene.net Fri Aug 16 23:34:09 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 16 Aug 2019 16:34:09 -0700 Subject: [keystone] Keystone Team Update - Week of 12 August 2019 Message-ID: # Keystone Team Update - Week of 12 August 2019 ## News ### Feature Proposal Freeze This week is our scheduled feature proposal freeze[1], see status summary post[2]. [1] https://releases.openstack.org/train/schedule.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008549.html ### Trusts API While implementing system scope and default roles for the trusts API we discovered an inconsistency in the error handling for the GET trust details request: most of our APIs do RBAC enforcement first thing, and return a 403 if the resource is missing so as not to divulge whether there's a record in the database for the requested resource. The GET trust details request does the database lookup first and exposes a 404 to the user if the record is missing. We discussed in the bug report[3] whether this is desireable, intended, acceptable, or dangerous behavior, and so far have converged on not fixing the issue in the interest of not breaking the API contract. If you have feelings to the contrary, please speak up in the bug report. 
[3] https://bugs.launchpad.net/bugs/1840288 ## Action Items * knikolla to finish initial implementation proposal of renewable group membership next week * kmalloc to finish initial implementation proposal of resource options migration next week ## Office Hours When there are topics to cover, the keystone team holds office hours on Tuesdays at 17:00 UTC. The topic for next week's office hour will be: feature proposal review - we'll walk through code implementations (if available) and answer any questions, or discuss design details if code is not available yet The location for next week's office hour will be: https://meet.jit.si/keystone-office-hours Add topics you would like to see covered during office hours to the etherpad: https://etherpad.openstack.org/p/keystone-office-hours-topics ## Open Specs Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 15 changes this week, which included support for auth receipts in keystoneauth[4], the IPv6 community goal work[5], and some more changes to implement access rules in application credentials[6]. [4] https://review.opendev.org/675049 [5] https://review.opendev.org/671903 [6] https://review.opendev.org/#/q/status:merged+topic:bp/whitelist-extension-for-app-creds+-age:1week ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 47 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ### Priority Reviews * Train Roadmap Stories - System scope/default roles (https://trello.com/c/ERo50T7r , https://trello.com/c/RlYyb4DU) + https://review.opendev.org/#/q/status:open+topic:implement-default-roles+label:verified%253D%252B1 + https://review.opendev.org/#/q/status:open+topic:trust-policies + https://review.opendev.org/#/q/topic:bug/1805409 - Federated attributes for users (https://trello.com/c/dEmSumDQ) + https://review.opendev.org/#/q/status:open+topic:bp/support-federated-attr - Application credential access rules (https://trello.com/c/dJsWMI4W) + https://review.opendev.org/#/q/status:open+topic:bp/whitelist-extension-for-app-creds * Closes bugs - Honor group_members_are_ids for user_enabled_emulation https://review.opendev.org/674782 - Cleanup session on delete https://review.opendev.org/674139 - token: consistently decode binary types https://review.opendev.org/665617 * Oldest - OpenID Connect improved support https://review.opendev.org/373983 ## Bugs This week we opened 6 new bugs and closed 3. 
Bugs opened (6) Bug #1840288 (keystone:High) opened by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1840288 Bug #1840291 (keystone:Medium) opened by Rabi Mishra https://bugs.launchpad.net/keystone/+bug/1840291 Bug #1840090 (keystone:Undecided) opened by Adrian Turjak https://bugs.launchpad.net/keystone/+bug/1840090 Bug #1840403 (keystone:Undecided) opened by Ariya Jantaravises https://bugs.launchpad.net/keystone/+bug/1840403 Bug #1839748 (keystoneauth:High) opened by Adrian Turjak https://bugs.launchpad.net/keystoneauth/+bug/1839748 Bug #1840235 (keystoneauth:Undecided) opened by Rabi Mishra https://bugs.launchpad.net/keystoneauth/+bug/1840235 Bugs closed (1) Bug #1840288 (keystone:High) https://bugs.launchpad.net/keystone/+bug/1840288 Bugs fixed (2) Bug #1839577 (keystone:Medium) fixed by Adrian Turjak https://bugs.launchpad.net/keystone/+bug/1839577 Bug #1839748 (keystoneauth:High) fixed by Adrian Turjak https://bugs.launchpad.net/keystoneauth/+bug/1839748 ## Milestone Outlook https://releases.openstack.org/train/schedule.html This week is feature proposal freeze week for the keystone team, which as mentioned previously is being extended for some initiatives. Oslo feature freeze is in two weeks: anything we need to complete for oslo.policy needs to be merged before then. Oslo.limit is still pre-1.0 so feature freeze won't apply to it. The PTL nomination period is also in two weeks: while I intend to run again I'm also happy to answer questions about the role if anyone wants to also put their name in. Final release for non-client libraries (keystonemiddleware, keystoneauth) is in three weeks. Feature freeze and client library freeze is in four weeks. This is also the soft string freeze and the requirements freeze and the community goals deadline. ## Shout-outs Keystoneauth now supports multi-factor authentication and auth receipts[7]. Thanks to Adrian for tackling this ahead of the library freeze deadline! [7] https://docs.openstack.org/keystoneauth/latest/authentication-plugins.html#multi-factor-with-v3-identity-plugins ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From dleaberry at purestorage.com Fri Aug 16 23:45:02 2019 From: dleaberry at purestorage.com (Daniel Leaberry) Date: Fri, 16 Aug 2019 17:45:02 -0600 Subject: Kombu 4.6.4 is breaking devstack with python 3.7 Message-ID: <8CD16FA4-C9B9-46C0-BA81-1131A7028094@purestorage.com> I'm not sure if this is the correct place to report it but I'm working on the Cinder Thirdparty CI requirement to move testing to Python 3.7. https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update Unfortunately this commit moving Kombu to 4.6.4 a day ago has broken the devstack setup scripts. https://opendev.org/openstack/requirements/commit/b236f0af43259959cb2a0f82880cebbdd0da7f27 It breaks because (I believe) Eventlet is monkey patching and kombu 4.6.4 interacts badly. See these two bug reports for more details. 
https://github.com/eventlet/eventlet/issues/534 https://github.com/nameko/nameko/issues/655 Kombu 4.6.4 now results in this error when running /usr/local/bin/nova-manage --config-file /etc/nova/nova.conf api_db sync 2019-08-16 22:56:25.446 | File "/usr/local/lib/python3.7/dist-packages/eventlet/green/os.py", line 107, in open 2019-08-16 22:56:25.446 | fd = __original_open__(file, flags, mode, dir_fd=dir_fd) 2019-08-16 22:56:25.446 | TypeError: open: path should be string, bytes or os.PathLike, not _NormalAccessor You can see full logs here. http://openstack-logs.purestorage.com/PureISCSIDriver-tempest-dsvm-xenial-aio-multipath-chap/4816/logs/devstacklog.txt.gz I'm open to any recommendation of a workaround. Downgrading to kombu 4.6.3 apparently works fine but I'm not sure how to do that within an automated devstack run. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From amrith.kumar at gmail.com Sat Aug 17 02:32:31 2019 From: amrith.kumar at gmail.com (Amrith Kumar) Date: Fri, 16 Aug 2019 22:32:31 -0400 Subject: How are we managing quotas, flavors, projects and more across large org Openstack deployments? In-Reply-To: References: Message-ID: Could you describe what the system you are building would do. Thanks! On Fri, Aug 16, 2019, 17:30 Robert Lennie wrote: > Hi, > > I am a complete neophyte to this forum so I may have missed any earlier > discussions on this topic if there have been any? > > I am currently working on a proprietary legacy (Django based) "OpenStack > Operations Manager" dashboard system that attempts to manage quotas, > flavors, clusters and more across a very large distributed OpenStack > deployment for multiple customers. > > There are a number of potential shortcomings with our current dashboard > system and I was wondering how other teams in other organizations with > similar large Openstack deployments are handling these types of system > management issues? The issues addressed are likely common to other large > environments. > > In particular what tools might exist upstream that allow system > (particularly quota and flavor) management across large distributed > Openstack environments? > > I welcome any feedback, comments, suggestions and any other information > regarding the tools that currently exist or may be planned for this purpose? > > Regards, > > Robert Lennie > Principal Software Engineer > Verizon Media > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Sat Aug 17 03:05:28 2019 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Sat, 17 Aug 2019 15:05:28 +1200 Subject: How are we managing quotas, flavors, projects and more across large org Openstack deployments? In-Reply-To: Message-ID: <1267478d-b235-4ca9-9b25-b9b284acd652@email.android.com> An HTML attachment was scrubbed... URL: From lychzhz at gmail.com Sat Aug 17 06:34:11 2019 From: lychzhz at gmail.com (Douglas Zhang) Date: Sat, 17 Aug 2019 14:34:11 +0800 Subject: Some questions about contributing a golang project Message-ID: Hello everyone, My colleagues and me have been working on an openstack admin project(like horizon, but more efficient) using Golang, and now we are willing to contribute it to openstack community. While looking through the project creators' guide [1], we have met some questions that need to be answered: As this project is written by go, it is not possible to register it on PyPI, would this have any influence? 
Would openstack community accept golang projects as its related projects? [1] https://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases Thanks, Douglas -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Sat Aug 17 17:42:29 2019 From: openstack at fried.cc (Eric Fried) Date: Sat, 17 Aug 2019 12:42:29 -0500 Subject: Kombu 4.6.4 is breaking devstack with python 3.7 Message-ID: <167b0e7c-4ec1-57a7-b7f5-73779da5850f@fried.cc> Hi Daniel. > Unfortunately this commit moving Kombu to 4.6.4 a day ago has broken the devstack setup scripts. > https://opendev.org/openstack/requirements/commit/b236f0af43259959cb2a0f82880cebbdd0da7f27 > TypeError: open: path should be string, bytes or os.PathLike, not _NormalAccessor After I saw this go by, nova (at least) started failing all openstack-tox-py37 runs with the same error. I was able to recreate the problem locally. I was also able to resolve it by moving kombu back to 4.6.3. Clearly we need to blacklist kombu 4.6.4 for the moment. I've proposed that patch here [1]. Thank you very much for identifying the root of the problem. efried [1] https://review.opendev.org/#/c/677070/ From openstack at fried.cc Sat Aug 17 17:58:51 2019 From: openstack at fried.cc (Eric Fried) Date: Sat, 17 Aug 2019 12:58:51 -0500 Subject: Kombu 4.6.4 is breaking devstack with python 3.7 In-Reply-To: <167b0e7c-4ec1-57a7-b7f5-73779da5850f@fried.cc> References: <167b0e7c-4ec1-57a7-b7f5-73779da5850f@fried.cc> Message-ID: <0fb4bc8e-0246-c3a2-9f78-5f137c87e9c8@fried.cc> >> TypeError: open: path should be string, bytes or os.PathLike, not > _NormalAccessor> Clearly we need to blacklist kombu 4.6.4 for the moment. I've proposed > that patch here [1]. I opened a bug against nova [2]. I tried to do a logstash thing to see what else is affected, but I'm clearly not smart enough for that, and Mr. Logstash [3] is camping. So feel free to mark "affects" for your project etc. efried > [1] https://review.opendev.org/#/c/677070/ [2] https://bugs.launchpad.net/nova/+bug/1840551 [3] aka mriedem From fungi at yuggoth.org Sat Aug 17 18:38:05 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 17 Aug 2019 18:38:05 +0000 Subject: Some questions about contributing a golang project In-Reply-To: References: Message-ID: <20190817183805.husmoe7rmxoaq7wp@yuggoth.org> On 2019-08-17 14:34:11 +0800 (+0800), Douglas Zhang wrote: > My colleagues and me have been working on an openstack admin project(like > horizon, but more efficient) using Golang, and now we are willing to > contribute it to openstack community. Well, just to make sure you understand the culture, you don't really "contribute [software] to OpenStack" so much as you develop software within the OpenStack community. OpenStack is about the process of openly designing and creating infrastructure software, so anything written in private behind closed doors and then exposed to the light of day is going to suffer accordingly. Building a community around and maintaining/improving your project are going to be substantially more important if the project wasn't a public collaboration from its very inception. > While looking through the project creators' guide [1], we have met > some questions that need to be answered: > > As this project is written by go, it is not possible to register > it on PyPI, would this have any influence? Would openstack > community accept golang projects as its related projects? 
The OpenStack Technical Committee has previously entertained allowing projects in Go, and voted in favor of a resolution[*] to that effect. The gist of that decision was that deviating from the OpenStack language norms (primarily Python with some JavaScript for Web interfaces and Bash for shell automation) is allowable if it's done because it's reasonably challenging to meet the goals of the project otherwise. To restate, I don't know that OpenStack would accept a project written in Go if the reason it's being asked to do so is "well, we've already written it and we chose Go because we like the language." As I said earlier, OpenStack projects are normally designed and built collaboratively within the OpenStack community, and any time people have come to us with something they already wrote outside the community that they want to add, it's generally not gone that well for a number of reasons. If it's the case that Go was chosen because the thing you want to accomplish simply cannot be done (or at least can't be done with the necessary efficiency) in one or more of OpenStack's primary languages, we do maintain a Project Testing Interface definition[**] for Go. It's not been well-exercised and so may still reflect the state of the Go ecosystem as it was when drafted two years ago, in which case we welcome help improving it to better represent modern Go software development expectations. [*] https://governance.openstack.org/tc/resolutions/20170329-golang-use-case.html [**] https://governance.openstack.org/tc/reference/pti/golang.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Sat Aug 17 21:27:42 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 17 Aug 2019 21:27:42 +0000 Subject: [all][tc] U Cycle Naming Poll In-Reply-To: References: <871rxxm4k7.fsf@meyer.lemoncheese.net> <87pnlbehjb.fsf@meyer.lemoncheese.net> <20190811180304.tqqpgkm342bdhzzg@yuggoth.org> <87mugecw7i.fsf@meyer.lemoncheese.net> <0b62519e-ec09-be2a-11d1-684e2f82d003@redhat.com> <875zn27z2o.fsf@meyer.lemoncheese.net> Message-ID: <20190817212742.tqqq5vdxyiyaqqqn@yuggoth.org> On 2019-08-14 00:12:22 -0400 (-0400), Zane Bitter wrote: [...] > My understanding (which may be wrong because it all seems to have > gone down within a day that I happened to be on vacation) of how > we got into that state to begin with is that after Tony did a ton > of work figuring out how to get a local name beginning with U, > collected a bunch of names + feedback, and was basically ready to > start the poll, the Foundation implied that they would veto all of > the names on the grounds that their China expert didn't feel that > using the GR transliteration would be appropriate because of > reasons. [...] (Sorry about the late reply, but I'm finally catching up on the thread after a week of nonstop in-person meetings.) I think this particular detail spun out of control and got unnecessarily exaggerated with the retelling, yet people believed the exaggeration and allowed it to unduly influence decisions. The *only* references I'm aware of to this consultation are: 1. In the #openstack-tc IRC channel, mnaser said, "I think it might be better to hold off on that a little bit. I have talked with Horace a bit regarding this and it doesn't seem like it might be setting us up for success. 
We're likely ending up in a position where we don't have much choice of names (and the usage of GR seemed to have some not-so-ideal background because of it's popularity in Taiwan and not being used in China)." 2. In the governance change to establish the original poll details, Mohammed Naser commented, "I've taken the time to discuss this with Horace (the OSF's China community manager) regarding the choice of name that we're about to have. They've shared a few concerns with me about the choice that we're making here and I think we should re-consider it before we actually make a decision to go with it. First of all, it seems that the GR romanization isn't exactly popular in Mainland China and it's actually quite more popular in regions outside of it (in Taiwan) for example. Therefore, it wouldn't probably be a great image of our community to do that for our Chinese contributors." https://review.opendev.org/666974 I interpreted neither of those as a statement that OSF would "veto" GR names, merely as a suggestion in support of broadening our regional criteria or choosing to allow exceptions so that we don't rely exclusively on GR-transliterated options in the final poll. I also questioned this second-hand assertion, as a week earlier in the same review, Eric Kao commented, "Gwoyeu Romatzyh is not in use in Taiwan, avoiding another possible sensitivity." There was no response to further explain the apparent contradiction there, and in retrospect I'm sorry I didn't push for further clarification on the matter. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Sun Aug 18 16:09:50 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 19 Aug 2019 01:09:50 +0900 Subject: [goals][IPv6-Only Deployments and Testing] Week R-9 Update Message-ID: <16ca57df0ed.112d7fef0121509.7413543320906046197@ghanshyammann.com> Hello Everyone, Below is the progress on Ipv6 goal during R9 week. At the first step, I am preparing the ipv6 jobs for the projects having zuulv3 jobs. The projects having zuulv2 jobs will be my second take. Summary: * Number of Ipv6 jobs proposed Projects: 25 * Number of pass projects: 11 ** Number of project merged: 6 * Number of failing projects: 14 Storyboard: ========= - https://storyboard.openstack.org/#!/story/2005477 Current status: ============ 1. Cinder is error when configuring the conder's my_ip as IPv6. iscsi is not able to _connect_single_volume [1]. 2. Configuring the tempest test regex to run only smoke tests which can be extended to include future IPv6 tests also. Running all test is not actually required as such in IPv6 job but if any project wants to run all then also fine. Example: [1] 3. Fixing the Murano's MURANO_DEFAULT_DNS to set as IPv6 for IPv6 env[2]. 4. Solum job need Zun to configure the host_ip properly for IPv6. I will make the dependent patch. 5. For Monasca, kafka was not working for IPv6 but witek is upgrading the Kafka version in Monasca. I will rebase IPv6 job patch on top of that and check the result. 6. This week new projects ipv6 jobs patch and status: - Tacker: link: https://review.opendev.org/#/c/671908/ status: job is failing, I need to properly configure the job. - Senlin: links: https://review.opendev.org/#/c/676910/ status: jobs are failing. In same patch I have fixed the devstack plugin to deploy the Selin service on IPv6 which was hardcoded to HOST_IP(ipv4). 
But it seems Senlin endpoint is not created in keystone. Need to debug more for the root cause. - Solum: links: https://review.opendev.org/#/c/676912/ Status: job is failing. Fixed the devstack plugin for 'host' for IPv6 env. It also need fix on Zun side to configure the host_ip properly for IPv6. - Trove: link: https://review.opendev.org/#/c/677015/ status: job is passing and it is good to merge. - Watcher: link: https://review.opendev.org/#/c/677017/ status: job is passing and it is good to merge. In same patch, I have fixed the devstack plugin for 'host' for IPv6 env. - Sahara link: https://review.opendev.org/#/c/676903/ status: Job is failing to start the sahara service. I could not find the logs for sahara service(it shows empty log under apache). Need help from sahara team. IPv6 missing support found: ===================== 1. https://review.opendev.org/#/c/673397/ 2. https://review.opendev.org/#/c/673449/ How you can help: ============== - Each project needs to look for and review the ipv6 job patch. - Verify it works fine on ipv6 and no ipv4 used in conf etc - Any other specific scenario needs to be added as part of project IPv6 verification. - Help on debugging and fix the bug in IPv6 job is failing. Everything related to this goal can be found under this topic: Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) How to define and run new IPv6 Job on project side: ======================================= - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing Review suggestion: ============== - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with that point of view. If anything missing, comment on patch. - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var setting. But if your project needs more specific varification then it can be added in project side job as post-run playbooks as described in wiki page[3]. 
[1] https://zuul.opendev.org/t/openstack/build/5b7b823d6faa4f5393b4c46d36e15d80/log/controller/logs/screen-n-cpu.txt.gz#2733
[2] https://review.opendev.org/#/c/676857/
[3] https://review.opendev.org/#/c/676900/
[4] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing

-gmann

From mthode at mthode.org Sun Aug 18 16:16:11 2019
From: mthode at mthode.org (Matthew Thode)
Date: Sun, 18 Aug 2019 11:16:11 -0500
Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions
Message-ID: <20190818161611.6ira6oezdat4alke@mthode.org>

NOVA:
lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666
websockify===0.9.0 tempest test failing

KEYSTONE:
oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393

NEUTRON:
tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz
this could be caused by pytest===5.1.0 as well

KURYR:
kubernetes===10.0.1 openshift PINS this; only kuryr-tempest-plugin depends on it https://review.opendev.org/665352

MISC:
tornado===5.1.1 Salt is causing this, no ETA on a fix (same as the last year)
stestr===2.5.0 needs https://github.com/mtreinish/stestr/pull/265 merged
jsonschema===3.0.2 see https://review.opendev.org/649789

I'm trying to get this in place as we are getting closer to the requirements freeze (Sept 9th-13th). Any help clearing up these bugs would be appreciated.

--
Matthew Thode
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL:

From gmann at ghanshyammann.com Sun Aug 18 16:45:35 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 19 Aug 2019 01:45:35 +0900
Subject: Some questions about contributing a golang project
In-Reply-To: <20190817183805.husmoe7rmxoaq7wp@yuggoth.org>
References: <20190817183805.husmoe7rmxoaq7wp@yuggoth.org>
Message-ID: <16ca59eaea3.f5dcc754121683.2788516427810235145@ghanshyammann.com>

---- On Sun, 18 Aug 2019 03:38:05 +0900 Jeremy Stanley wrote ----
> On 2019-08-17 14:34:11 +0800 (+0800), Douglas Zhang wrote:
> > My colleagues and I have been working on an OpenStack admin project (like Horizon, but more efficient) using Golang, and now we are willing to contribute it to the OpenStack community.
>
> Well, just to make sure you understand the culture, you don't really "contribute [software] to OpenStack" so much as you develop software within the OpenStack community. OpenStack is about the process of openly designing and creating infrastructure software, so anything written in private behind closed doors and then exposed to the light of day is going to suffer accordingly. Building a community around and maintaining/improving your project are going to be substantially more important if the project wasn't a public collaboration from its very inception.

As explained by Jeremy, contributing a project to OpenStack also involves maintaining it per the common standards of OpenStack: for example, active contributors, a project leader (PTL), etc. If you have not checked the exact requirements for new projects, you can refer to this doc [1].

> > While looking through the project creators' guide [1], we have met some questions that need to be answered:
> >
> > As this project is written in Go, it is not possible to register it on PyPI; would this have any influence?
> > Would the OpenStack community accept Golang projects as related projects?
>
> The OpenStack Technical Committee has previously entertained allowing projects in Go, and voted in favor of a resolution[*] to that effect. The gist of that decision was that deviating from the OpenStack language norms (primarily Python with some JavaScript for Web interfaces and Bash for shell automation) is allowable if it's done because it's reasonably challenging to meet the goals of the project otherwise. To restate, I don't know that OpenStack would accept a project written in Go if the reason it's being asked to do so is "well, we've already written it and we chose Go because we like the language." As I said earlier, OpenStack projects are normally designed and built collaboratively within the OpenStack community, and any time people have come to us with something they already wrote outside the community that they want to add, it's generally not gone that well for a number of reasons.

I am not sure if the Swift requirement for the Go language went through the second step [2] and whether we have all the CI setup etc. The main challenge I see for any new language comes from the horizontal support teams like QA, Infra, the release team etc., especially in the current situation where most of those teams have fewer maintainers. But yes, we should first discuss the technical requirements of the new project and why the Go language is a must for it. Based on that, we can proceed with further support. I suggest you start an ML discussion about 'what the project is', 'its use case' and 'why Go'.

> If it's the case that Go was chosen because the thing you want to accomplish simply cannot be done (or at least can't be done with the necessary efficiency) in one or more of OpenStack's primary languages, we do maintain a Project Testing Interface definition[**] for Go. It's not been well-exercised and so may still reflect the state of the Go ecosystem as it was when drafted two years ago, in which case we welcome help improving it to better represent modern Go software development expectations.
>
> [*] https://governance.openstack.org/tc/resolutions/20170329-golang-use-case.html
> [**] https://governance.openstack.org/tc/reference/pti/golang.html

[1] https://governance.openstack.org/tc/reference/new-projects-requirements.html
[2] https://governance.openstack.org/tc/reference/new-language-requirements.html

-gmann

>
> --
> Jeremy Stanley
>

From fungi at yuggoth.org Sun Aug 18 21:10:11 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sun, 18 Aug 2019 21:10:11 +0000
Subject: [public-cloud-sig] Debian Cloud Sprint
Message-ID: <20190818211010.hrtg43fzmmdcqy3d@yuggoth.org>

Just a heads-up, there was a suggestion[*] on the debian-cloud mailing list that it would be nice if some representatives from public OpenStack service providers were able to attend/participate in the upcoming Debian Cloud Sprint[**], October 14-16 on the MIT campus in Cambridge, Massachusetts, USA. I too think it would be awesome for OpenStack to have a seat at the table alongside representatives of the usual closed-source clouds when it comes time to talk about (among other things) what Debian's official "cloud" image builds should be doing to better support our collective users. If you're interested in going, I recommend reaching out via the debian-cloud mailing list[***].
[*] https://lists.debian.org/debian-cloud/2019/08/msg00065.html
[**] https://wiki.debian.org/Sprints/2019/DebianCloud2019
[***] https://lists.debian.org/debian-cloud/
--
Jeremy Stanley
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From chx769467092 at 163.com Thu Aug 15 05:21:29 2019
From: chx769467092 at 163.com (崔恒香)
Date: Thu, 15 Aug 2019 13:21:29 +0800 (CST)
Subject: [question][placement][rest api][403]
Message-ID:

This is my problem with the placement REST API. (Token and endpoint were OK)

{"errors": [{"status": 403,
"title": "Forbidden",
"detail": "Access was denied to this resource.\n\n Policy does not allow placement:resource_providers:list to be performed. ",
"request_id": "req-5b409f22-7741-4948-be6f-ea28c2896a3f"
}]}

Regards,
Cuihx
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 20190814153351.png Type: image/png Size: 16544 bytes Desc: not available URL:

From soulxu at gmail.com Mon Aug 19 02:41:55 2019
From: soulxu at gmail.com (Alex Xu)
Date: Mon, 19 Aug 2019 10:41:55 +0800
Subject: More upgrade issues with PCPUs - input wanted
In-Reply-To: References: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com>
Message-ID:

Stephen Finucane wrote on Fri, Aug 16, 2019 at 5:58 PM:
> On Fri, 2019-08-16 at 12:09 +0800, Alex Xu wrote:
> > Stephen Finucane wrote on Thu, Aug 15, 2019 at 8:25 PM:
> > tl;dr: Is breaking booting of pinned instances on Stein compute nodes in a Train deployment an acceptable thing to do, and if not, how do we best handle the VCPU->PCPU migration in Train?
>
> I've been working through the cpu-resources spec [1] and have run into a tricky issue I'd like some input on. In short, this spec means that pinned instances (i.e. 'hw:cpu_policy=dedicated') will now start consuming a new resource type, PCPU, instead of VCPU. Many things need to change to make this happen but the key changes are:
>
> 1. The scheduler needs to start modifying requests for pinned instances to request PCPU resources instead of VCPU resources
> 2. The libvirt driver needs to start reporting PCPU resources
> 3. The libvirt driver needs to do a reshape, moving all existing allocations of VCPUs to PCPUs, if the instance holding that allocation is pinned
>
> The first two of these steps present an issue for which we have a solution, but the solutions we've chosen are now resulting in this new issue.
>
> * For (1), the translation of VCPU to PCPU in the scheduler means compute nodes must now report PCPU in order for a pinned instance to land on that host. Since controllers are upgraded before compute nodes and all compute nodes aren't necessarily upgraded in one go (particularly for edge or other large or multi-cell deployments), this can mean there will be a period of time where there are very few or no hosts available on which to schedule pinned instances.
>
> * For (2), we're hampered by the fact that there is no clear way to determine if a host is used for pinned instances or not. Because of this, we can't determine if a host should be reporting PCPU or VCPU inventory.
>
> The solution we have for the issues with (1) is to add a workaround option that would disable this translation, allowing operators time to upgrade all their compute nodes to report PCPU resources before anything starts using them.
> For (2), we've decided to temporarily (i.e. for one release or until configuration is updated) report both, in the expectation that everyone using pinned instances has followed the long-standing advice to separate hosts intended for pinned instances from those intended for unpinned instances using host aggregates (e.g. even if we started reporting PCPUs on a host, nothing would consume that due to 'pinned=False' aggregate metadata or similar). These actually benefit each other, since if instances are still consuming VCPUs then the hosts need to continue reporting VCPUs. However, both interfere with our ability to do the reshape.
>
> Normally, a reshape is a one-time thing. The way we'd planned to determine if a reshape was necessary was to check if PCPU inventory was registered against the host and, if not, whether there were any pinned instances on the host. If PCPU inventory was not available and there were pinned instances, we would update the allocations for these instances so that they would be consuming PCPUs instead of VCPUs and then update the inventory. This is problematic though, because our solution for the issue with (1) means pinned instances can continue to request VCPU resources, which in turn means we could end up with some pinned instances on a host consuming PCPU and others consuming VCPU. That obviously can't happen, so we need to change tack slightly. The two obvious solutions would be to either (a) remove the workaround option so the scheduler would immediately start requesting PCPUs and just advise operators to upgrade their hosts for pinned instances asap, or (b) add a different option, defaulting to True, that would apply to both the scheduler and compute nodes and prevent not only the translation of flavors in the scheduler but also the reporting of PCPUs and reshaping of allocations until disabled.

> > The steps I'm thinking of are:
> > 1. Upgrade the control plane; disable requesting PCPU, still request VCPU.
> > 2. Rolling-upgrade the compute nodes; compute nodes begin to report both PCPU and VCPU, but requests are still added to VCPU.
> > 3. Enable the PCPU request; new requests now request PCPU. At this point, some instances are using VCPU and some instances are using PCPU on the same node, and the total amount of VCPU + PCPU will be double the available CPU resources. The NUMATopology filter is responsible for stopping over-consumption of the total number of CPUs.
> > 4. Rolling-update the compute nodes' configuration to use cpu_dedicated_set, which triggers the reshape of existing VCPU consumption to PCPU consumption. New requests go to PCPU at step 3, so there are no more VCPU requests at this point; the rolling upgrade of the nodes gets rid of the existing VCPU consumption.
> > 5. Done.

> This had been my initial plan. The issue is that by reporting both PCPU and VCPU in (2), our compute node's resource provider will now have PCPU inventory available (though it won't be used). This is problematic since "does this resource provider have PCPU inventory" is one of the questions I need to ask to determine if I should do a reshape. If I can't rely on this heuristic, I need to start querying for allocation information (so I can ask "does this resource provider have PCPU *allocations*") every time I start a compute node. I'm guessing this is expensive, since we don't do it by default.

I'm not quite sure I understand the problem. How about the question you ask instead being: "Is the current amount of VCPU plus PCPU double the actual available CPU resources?"
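As a rough sketch of that check (every name below is illustrative, not actual nova internals):

    # Illustration only: detect the transitional double-reporting state.
    def needs_reshape(provider_inventory, actual_cpus):
        vcpu = provider_inventory.get('VCPU', 0)
        pcpu = provider_inventory.get('PCPU', 0)
        # While a host reports both classes during the transition,
        # the total is roughly twice the real CPU count.
        return vcpu + pcpu >= 2 * actual_cpus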
If the answer is yes, then do a reshape.

> Stephen
>
> I'm currently leaning towards (a) because it's a *lot* simpler, far more robust (IMO) and lets us finish this effort in a single cycle, but I imagine this could make upgrades very painful for operators if they can't fast-track their compute node upgrades. (b) is more complex and would have some constraints, chief among them being that the option would have to be disabled at some point post-release and would have to be disabled on the scheduler first (to prevent the mishmash of VCPU and PCPU resource allocations above). It also means this becomes a three-cycle effort at minimum, since this new option would default to True in Train, before defaulting to False and being deprecated in U, and finally being removed in V. As such, I'd like some input, particularly from operators using pinned instances in larger deployments. What are your thoughts, and are there any potential solutions that I'm missing here?
>
> Cheers,
> Stephen
>
> [1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From rico.lin.guanyu at gmail.com Mon Aug 19 05:20:28 2019
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Mon, 19 Aug 2019 13:20:28 +0800
Subject: [all][tc] Naming the U release of OpenStack -- Poll open
In-Reply-To: References: Message-ID:

The poll will officially end on 2019-08-20 23:59:00+00:00 (UTC time). So remember to vote :)

On Tue, Aug 13, 2019 at 12:56 PM Rico Lin wrote:
> Hi, all OpenStackers,
>
> It's time to vote for the naming of the U release!! (The official naming poll for the U release has started!!)
>
> First, big thanks to all the people who took their own time to propose names on [2] or helped to push/improve the naming process. Thank you.
>
> We'll use a public polling option over per-user private URLs for voting. This means everybody should proceed to use the following URL to cast their vote:
>
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_19e5119b14f86294&akey=0cde542cb3de1b12
>
> We've selected a public poll to ensure that the whole community, not just Gerrit change owners, gets a vote. Also, the size of our community has grown such that we can overwhelm CIVS if using private URLs. A public poll can mean that users behind NAT, proxy servers or firewalls may receive a message saying that their vote has already been lodged; if this happens, please try another IP.
> Because this is a public poll, results will currently be viewable only by me until the poll closes. Once closed, I'll post the URL making the results viewable to everybody. This was done to avoid everybody seeing the results while the public poll is running.
>
> The poll will officially end on 2019-08-20 23:59:00+00:00 (UTC time) [1], and results will be posted shortly after.
>
> [1] https://governance.openstack.org/tc/reference/release-naming.html
> [2] https://wiki.openstack.org/wiki/Release_Naming/U_Proposals
> --
> May The Force of OpenStack Be With You,
> Rico Lin, irc: ricolin

--
May The Force of OpenStack Be With You,
Rico Lin, irc: ricolin
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zhangbailin at inspur.com Mon Aug 19 06:28:25 2019
From: zhangbailin at inspur.com (Brin Zhang (张百林))
Date: Mon, 19 Aug 2019 06:28:25 +0000
Subject: Re: [question][placement][rest api][403]
In-Reply-To: <1ef1cd92038a62e943d50766b2eca760@sslemail.net>
References: <1ef1cd92038a62e943d50766b2eca760@sslemail.net>
Message-ID: <6c801d1fb7ca439fb02e5145e087e07f@inspur.com>

> > On 8/15/2019 1:09 AM, 崔恒香 wrote:
> > This is my problem with the placement REST API. (Token and endpoint were OK)
> >
> > {"errors": [{"status": 403, "title": "Forbidden", "detail": "Access was denied to this resource.\n\n Policy does not allow placement:resource_providers:list to be performed. ", "request_id": "req-5b409f22-7741-4948-be6f-ea28c2896a3f" }]}

> On 8/16/2019 3:24 PM, Matt Riedemann wrote:
> This doesn't give much information. Does the token have the admin role in it? Has the placement:resource_providers:list policy rule been changed from the default (rule:admin_api)?

As Matt said, you should check the policy limits for your requesting user; its default rule is "rule:admin_api" [1]. If your authenticated user is not admin, then the 403 error is normal.

[1] https://github.com/openstack/placement/blob/master/placement/handlers/resource_provider.py#L178
https://github.com/openstack/placement/blob/master/placement/policies/resource_provider.py#L51
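A quick way to confirm that it is the role, and not anything placement-specific, is to repeat the request with an admin-scoped token. The credential values below are placeholders for your own deployment:

    # Sketch: call placement with an explicitly admin-scoped session.
    # A 200 here, versus the 403 with your original token, confirms
    # that the default rule:admin_api policy is what rejects the request.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://controller:5000/v3',  # placeholder
                       username='admin', password='secret',   # placeholders
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    resp = sess.get('/resource_providers',
                    endpoint_filter={'service_type': 'placement'},
                    raise_exc=False)
    print(resp.status_code)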
From: 崔恒香 [mailto:chx769467092 at 163.com]
Sent: Thursday, Aug 15, 2019 13:21
To: openstack-discuss at lists.openstack.org
Subject: [question][placement][rest api][403]

This is my problem with the placement REST API. (Token and endpoint were OK)

{"errors": [{"status": 403,
"title": "Forbidden",
"detail": "Access was denied to this resource.\n\n Policy does not allow placement:resource_providers:list to be performed. ",
"request_id": "req-5b409f22-7741-4948-be6f-ea28c2896a3f"
}]}

Regards,
Cuihx
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sundar.nadathur at intel.com Mon Aug 19 07:28:47 2019
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Mon, 19 Aug 2019 07:28:47 +0000
Subject: [cyborg] [nova] [placement] Shelve/unshelve behavior
Message-ID: <1CC272501B5BC543A05DB90AA509DED527601F64@fmsmsx122.amr.corp.intel.com>

Hi,

I tested various instance operations using the Nova patch series for Cyborg integration [1]. Many of them worked as expected: pause/unpause, lock/unlock, rescue/unrescue, etc. That is, the application in the VM can successfully offload to the accelerator device before and after the sequence.

Suspend fails with the Libvirt error: "Domain has assigned non-USB devices." I think this is known behavior.

But the shelve/shelve-offloaded/unshelve sequence shows two discrepancies:

* After shelve, the instance is shut off in Libvirt but is shown as ACTIVE in 'openstack server list'.

* After unshelve, the PCI VF gets re-attached on VM startup and the application inside the VM can access the accelerator device. However, 'openstack resource provider usage show ' shows the RC usage as 0, i.e., there seems to be no claim in Placement for the resource in use.

After shelve, the instance transitions to 'shelve-offloaded' automatically after the configured time interval. The resource class usage is 0. This part is good. But, after the unshelve, one would think the usage would be bumped up automatically.

P.S.: Still investigating hard reboots and start/stop operations.
[1] https://review.opendev.org/#/q/status:open+project:openstack/nova+bp/nova-cyborg-interaction

Regards,
Sundar
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From a.settle at outlook.com Mon Aug 19 10:31:47 2019
From: a.settle at outlook.com (Alexandra Settle)
Date: Mon, 19 Aug 2019 10:31:47 +0000
Subject: [all] [tc] [ptls] [docs] PDF Community Goal Update
In-Reply-To: References: Message-ID:

To add to this, I've (finally - sorry) created a tracking etherpad [1] with summary information, getting-started info, and a project volunteer list per project. Please add your name against the project (if it isn't there already) if you are able to, or have been, testing for PDF support. We are still looking for volunteers from each project to kick-start the testing.

Thanks,
Alex

[1] https://etherpad.openstack.org/p/train-pdf-support-goal

On 15/08/2019 23:37, Alexandra Settle wrote:
Hi all,
Apologies for the radio silence regarding the PDF Community Goal. Due to vacation and personal circumstances, I've been "offline" for the better part of the last 2 months.

Update
* Stephen Finucane has been working on adding Python 3 support to rst2pdf
* Common issues are being tracked within this etherpad [1]
* Overall status: https://review.opendev.org/#/q/topic:build-pdf-docs

Help needed
* We would appreciate anyone who is comfortable with Python helping and volunteering to test the rst2pdf output with Python 2. Working within a larger project like Neutron would show how much it can do, as they have specific styling capabilities.
* NOTE: The original discussion included using the LaTeX builder instead of rst2pdf. However, the LaTeX builder is not playing ball as nicely as we'd like, so we're trying to figure out if this would be easier. The LaTeX builder is still the primary plan, since we still don't know what we're going with and the overlap between the two is significant.

Questions?

Thank you,
Alex

[1] https://etherpad.openstack.org/p/pdf-goal-train-common-problems
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From a.settle at outlook.com Mon Aug 19 13:17:35 2019
From: a.settle at outlook.com (Alexandra Settle)
Date: Mon, 19 Aug 2019 13:17:35 +0000
Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org
Message-ID:

Hi all,

Quick recap: The documentation team is set to disband as an official project and make the leap to becoming a SIG (Special Interest Group). This decision is not one we have made lightly, but with the changes of direction since Pike (project documentation belonging with the projects, etc.) the need for a centralised documentation team as an official project is no longer integral to producing the software.

The transition of the docs team to a SIG has already begun [1] and [2]. The openstackdocstheme, openstack-doc-tools, os-api-ref and whereto repositories have been placed within the remit of the Oslo team and approved.

The remaining individuals working on the docs team are sorting through what's left and where it is best placed. In [1], Doug has rightfully pointed out that whilst the documentation team as it stands today is no longer "integral", the docs.openstack.org web site is. An owner is required.

The suggestion is that the Release Management team is the "least worst" (thanks, tonyb) place for the website management to land.
As Tony points out, this requires learning new tools and processes, but the individuals working on the docs team currently have no intention to leave and are around to help manage this from the SIG.

Open to discussion and suggestions, but to summarise the proposal here: docs.openstack.org ownership is to officially transition to be the responsibility of the Release Management team, provided there are no strong objections.

Thanks,
Alex
IRC: asettle

[1] https://review.opendev.org/#/c/657142/
[2] https://review.opendev.org/#/c/657141/

From jon at csail.mit.edu Mon Aug 19 14:02:17 2019
From: jon at csail.mit.edu (Jonathan Proulx)
Date: Mon, 19 Aug 2019 10:02:17 -0400
Subject: [public-cloud-sig] Debian Cloud Sprint
In-Reply-To: <20190818211010.hrtg43fzmmdcqy3d@yuggoth.org>
References: <20190818211010.hrtg43fzmmdcqy3d@yuggoth.org>
Message-ID: <20190819140217.grgkfipooj2ivlfm@csail.mit.edu>

Hi All,

I'm hosting this shindig, but it's also my first time participating so I can't shine too much more light on how they go, but I do have a couple URLs to add:

https://wiki.debian.org/Sprints/2019/DebianCloud2019 is the planning page. Currently a bit skeletal, but it does have links to past sprints:

https://wiki.debian.org/Sprints/2018/DebianCloudOct2018
https://wiki.debian.org/Sprints/2017/DebianCloudOct2017
https://wiki.debian.org/Sprints/2016/DebianCloudNov2016

Hopefully that can give some sense of the scope and function of these things. Hope to see some of you in Oct.

-Jon

On Sun, Aug 18, 2019 at 09:10:11PM +0000, Jeremy Stanley wrote:
:Just a heads-up, there was a suggestion[*] on the debian-cloud
:mailing list that it would be nice if some representatives from
:public OpenStack service providers are able to attend/participate in
:the upcoming Debian Cloud Sprint[**], October 14-16 on MIT campus in
:Cambridge, Massachusetts, USA. I too think it would be awesome for
:OpenStack to have a seat at the table alongside representatives of
:the usual closed-source clouds when it comes time to talk about
:(among other things) what Debian's official "cloud" image builds
:should be doing to better support our collective users. If you're
:interested in going, I recommend reaching out via the debian-cloud
:mailing list[***].
:
:[*] https://lists.debian.org/debian-cloud/2019/08/msg00065.html
:[**] https://wiki.debian.org/Sprints/2019/DebianCloud2019
:[***] https://lists.debian.org/debian-cloud/
:--
:Jeremy Stanley
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL:

From pierre at stackhpc.com Mon Aug 19 14:36:26 2019
From: pierre at stackhpc.com (Pierre Riteau)
Date: Mon, 19 Aug 2019 16:36:26 +0200
Subject: [blazar] IRC meeting tomorrow
Message-ID:

Hello,

We have our weekly Blazar IRC meeting scheduled for tomorrow at 09:00 UTC on #openstack-meeting-alt. As the last weekly meeting was at the end of July, we have a busy agenda this week: https://wiki.openstack.org/wiki/Meetings/Blazar#Agenda_for_20_August_2019

Everyone is welcome to join and propose additional topics, time permitting.
Best wishes,
Pierre

From mtreinish at kortar.org Mon Aug 19 14:54:37 2019
From: mtreinish at kortar.org (Matthew Treinish)
Date: Mon, 19 Aug 2019 10:54:37 -0400
Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions
In-Reply-To: <20190818161611.6ira6oezdat4alke@mthode.org>
References: <20190818161611.6ira6oezdat4alke@mthode.org>
Message-ID: <20190819145437.GA29162@zeong>

On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote:
> NOVA:
> lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666
> websockify===0.9.0 tempest test failing
>
> KEYSTONE:
> oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393
>
> NEUTRON:
> tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz
> this could be caused by pytest===5.1.0 as well
>
> KURYR:
> kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it
> https://review.opendev.org/665352
>
> MISC:
> tornado===5.1.1 Salt is causing this, no ETA on a fix (same as the last year)
> stestr===2.5.0 needs https://github.com/mtreinish/stestr/pull/265 merged

This actually doesn't fix the underlying issue blocking it here. PR 265 is for fixing a compatibility issue with Python 3.4, which we don't officially support in stestr, but it was a simple fix. The blocker is actually not an stestr issue; it's a testtools bug: https://github.com/testing-cabal/testtools/issues/272

Where this comes into play here is that stestr 2.5.0 switched to using an internal test runner built off of stdlib unittest instead of testtools/subunit for Python 3. This was done to fix a huge number of compatibility issues people had reported when trying to run stdlib unittest suites using stestr on Python >= 3.5 (which were caused by unittest2 and testtools). The complication for OpenStack (more specifically Tempest) is that it's built off of testtools, not stdlib unittest. So when Tempest raises 'self.skipException' as part of its class-level skip checks, testtools raises 'unittest2.case.SkipTest' instead of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is and treats it as an unhandled exception, which is a test failure instead of the intended skip result. [1] This is actually a general bug and will come up whenever anyone tries to use stdlib unittest to run Tempest. We need to come up with a fix for this problem in testtools [2] or just work around it in Tempest.

[1] Skip decorators typically aren't affected by this because they set an attribute that gets checked before the test method is executed, instead of relying on an exception, which is why this is mostly only an issue for Tempest: it does a lot of run-time skips via exceptions.
[2] testtools is mostly unmaintained at this point; I was recently granted merge access but haven't had much free time to actively maintain it.
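A minimal reproduction of the mismatch looks roughly like this (the test class is made up, but the mechanism is the one described above):

    # With affected testtools/unittest2 versions on python 3, running
    # this under the stdlib unittest runner reports an ERROR rather
    # than a skip, because testtools' skipException resolves to
    # unittest2.case.SkipTest instead of unittest.case.SkipTest.
    import unittest
    import testtools

    class ExampleTest(testtools.TestCase):
        def test_runtime_skip(self):
            # tempest-style run-time skip raised as an exception
            raise self.skipException("backend not available")

    if __name__ == '__main__':
        unittest.main()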
-Matt Treinish

> jsonschema===3.0.2 see https://review.opendev.org/649789
>
> I'm trying to get this in place as we are getting closer to the requirements freeze (Sept 9th-13th). Any help clearing up these bugs would be appreciated.
>
> --
> Matthew Thode
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL:

From fungi at yuggoth.org Mon Aug 19 14:59:18 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 19 Aug 2019 14:59:18 +0000
Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org
In-Reply-To: References: Message-ID: <20190819145918.6jw3p6jcnzuskdch@yuggoth.org>

On 2019-08-19 13:17:35 +0000 (+0000), Alexandra Settle wrote:
[...]
> Doug has rightfully pointed out that whilst the documentation team as it stands today is no longer "integral", the docs.openstack.org web site is. An owner is required.
>
> The suggestion is that the Release Management team is the "least worst" (thanks, tonyb) place for the website management to land. As Tony points out, this requires learning new tools and processes but the individuals working on the docs team currently have no intention to leave, and are around to help manage this from the SIG.
>
> Open to discussion and suggestions, but to summarise the proposal here:
>
> docs.openstack.org ownership is to transition to be the responsibility of the Release Management team officially provided there are no strong objections.
[...]

The prose above is rather vague on what "the docs.openstack.org web site" entails. Inferring from Doug's comments on 657142, I think you and he are referring specifically to the content in the https://opendev.org/openstack/openstack-manuals/src/branch/master/www subtree. Is that pretty much it? The Apache virtual host configuration and filesystem hosting the site itself are managed by the Infra/OpenDev team, and there aren't any plans to change that as far as I'm aware.

--
Jeremy Stanley
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From dleaberry at purestorage.com Mon Aug 19 15:40:52 2019
From: dleaberry at purestorage.com (Daniel Leaberry)
Date: Mon, 19 Aug 2019 09:40:52 -0600
Subject: Kombu 4.6.4 is breaking devstack with python 3.7
In-Reply-To: <0fb4bc8e-0246-c3a2-9f78-5f137c87e9c8@fried.cc>
References: <167b0e7c-4ec1-57a7-b7f5-73779da5850f@fried.cc> <0fb4bc8e-0246-c3a2-9f78-5f137c87e9c8@fried.cc>
Message-ID:

Excellent Eric, thanks for the quick resolution!

On Sat, Aug 17, 2019 at 12:00 PM Eric Fried wrote:
> >> TypeError: open: path should be string, bytes or os.PathLike, not _NormalAccessor
>
> Clearly we need to blacklist kombu 4.6.4 for the moment. I've proposed that patch here [1].
>
> I opened a bug against nova [2]. I tried to do a logstash thing to see what else is affected, but I'm clearly not smart enough for that, and Mr. Logstash [3] is camping. So feel free to mark "affects" for your project etc.
>
> efried
>
> [1] https://review.opendev.org/#/c/677070/
> [2] https://bugs.launchpad.net/nova/+bug/1840551
> [3] aka mriedem
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sean.mcginnis at gmx.com Mon Aug 19 15:41:06 2019
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 19 Aug 2019 10:41:06 -0500
Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org
In-Reply-To: References: Message-ID: <20190819154106.GA25909@sm-workstation>

> Quick recap: The documentation team is set to disband as an official project, and make the leap to becoming a SIG (Special Interest Group).
>
> The remaining individuals working on the docs team are sorting through what's left and where it is best placed. In [1], Doug has rightfully pointed out that whilst the documentation team as it stands today is no longer "integral", the docs.openstack.org web site is. An owner is required.
>

Unless things have changed, SIGs can be owners of a resource published via docs.openstack.org (and I am assuming that means, by extension, docs.o.o itself). Is there a reason the Docs SIG would not still be able to own the site?

> The suggestion is that the Release Management team is the "least worst" (thanks, tonyb) place for the website management to land. As Tony points out, this requires learning new tools and processes but the individuals working on the docs team currently have no intention to leave, and are around to help manage this from the SIG.
>

I'm personally fine with the release team taking this on, but it does seem like an odd fit. I think it would make a lot more sense for the Docs SIG to own the docs site than the release team.

> Open to discussion and suggestions, but to summarise the proposal here:
>
> docs.openstack.org ownership is to transition to be the responsibility of the Release Management team officially provided there are no strong objections.
>
> Thanks,
>
> Alex
> IRC: asettle
>
> [1] https://review.opendev.org/#/c/657142/
> [2] https://review.opendev.org/#/c/657141/

From openstack at nemebean.com Mon Aug 19 16:04:37 2019
From: openstack at nemebean.com (Ben Nemec)
Date: Mon, 19 Aug 2019 11:04:37 -0500
Subject: [oslo] Kombu 4.6.4 is breaking devstack with python 3.7
In-Reply-To: <8CD16FA4-C9B9-46C0-BA81-1131A7028094@purestorage.com>
References: <8CD16FA4-C9B9-46C0-BA81-1131A7028094@purestorage.com>
Message-ID: <2465e3eb-f071-4f33-45cb-177f2ab52342@nemebean.com>

Looks like this was blacklisted in g-r: https://opendev.org/openstack/requirements/commit/a81aa355d78054c96568de284a34394e4056097f

I've also tagged this with oslo for visibility to our messaging folks.

On 8/16/19 6:45 PM, Daniel Leaberry wrote:
> I'm not sure if this is the correct place to report it, but I'm working on the Cinder third-party CI requirement to move testing to Python 3.7: https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update
>
> Unfortunately this commit moving Kombu to 4.6.4 a day ago has broken the devstack setup scripts: https://opendev.org/openstack/requirements/commit/b236f0af43259959cb2a0f82880cebbdd0da7f27
>
> It breaks because (I believe) Eventlet is monkey patching and kombu 4.6.4 interacts badly. See these two bug reports for more details:
>
> https://github.com/eventlet/eventlet/issues/534
> https://github.com/nameko/nameko/issues/655
>
> Kombu 4.6.4 now results in this error when running
>
> /usr/local/bin/nova-manage --config-file /etc/nova/nova.conf api_db sync
>
> 2019-08-16 22:56:25.446 | File "/usr/local/lib/python3.7/dist-packages/eventlet/green/os.py", line 107, in open
> 2019-08-16 22:56:25.446 | fd = __original_open__(file, flags, mode, dir_fd=dir_fd)
> 2019-08-16 22:56:25.446 | TypeError: open: path should be string, bytes or os.PathLike, not _NormalAccessor
> > Looks good. We can either clean up the defaults, or OSA can just > override the defaults, and it would be good enough. I would say that > this can still be improved later, after OSA has started using the role > too. > > > It simple enough. But I am happy to see a different approach. > > Simple is good! > > > Any thoughts on additional work that I am not seeing? > > None :) > > > > > Thanks for responding! I know our team is very excited about the > > continued collaboration with other upstream projects, so thanks!! > > > > Likewise. Let's reduce tech debt/maintain more code together! > > Regards, > Jean-Philippe Evrard (evrardjp) > > > > > -- Arx Cruz Software Engineer Red Hat EMEA arxcruz at redhat.com @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 12 Aug 2019 12:56:15 +0200 From: Ondrej Novy To: openstack-discuss at lists.openstack.org Subject: [swauth][swift] Retiring swauth Message-ID: Content-Type: text/plain; charset="utf-8" Hi, because swauth is not compatible with current Swift, doesn't support Python 3, I don't have time to maintain it and my employer is not interested in swauth, I'm going to retire swauth project. If nobody take over it, I will start removing swauth from opendev on 08/24. Thanks. -- Best regards Ondřej Nový -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 12 Aug 2019 07:55:07 -0400 From: Julia Kreger To: openstack-discuss Subject: [ironic] Resuming having weekly meetings Message-ID: Content-Type: text/plain; charset="UTF-8" All, I meant to send this specific email last week, but got distracted and $life. I believe we need to go back to having weekly meetings. The couple times I floated this in the past two weeks, there hasn't seemed to be any objections, but I also did not perceive any real thoughts on the subject. While the concept and use of office hours has seemingly helped bring some more activity to our IRC channel, we don't have a check-point/sync-up mechanism without an explicit meeting. With that being said, I'm going to start the meeting today and if we have quorum, try to proceed with it today. -Julia ------------------------------ Message: 4 Date: Mon, 12 Aug 2019 12:59:20 +0000 From: "Teckelmann, Ralf, NMU-OIP" To: "openstack-discuss at lists.openstack.org" Subject: [masakari] pacemaker-remote Setup Overview Message-ID: Content-Type: text/plain; charset="utf-8" Hello, Utilizing openstack-ansible we successfully installed all the masakari services. Besides masakari-hostmonitor all are running fine. For the hostmonitor a pacemaker cluster is missing. Can anyone give me an overview how the pacemaker cluster setup would look like? Which (pacemaker) services is running where (compute nodes, something on any other node,...), etc? Best regards, Ralf Teckelmann -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Subject: Digest Footer _______________________________________________ openstack-discuss mailing list openstack-discuss at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss ------------------------------ End of openstack-discuss Digest, Vol 10, Issue 59 ************************************************* Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- A non-text attachment was scrubbed... Name: detailed_steps_on_ubuntu_for_pacemaker_verification Type: application/octet-stream Size: 3262 bytes Desc: detailed_steps_on_ubuntu_for_pacemaker_verification URL: From mriedemos at gmail.com Mon Aug 19 16:23:52 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 19 Aug 2019 11:23:52 -0500 Subject: [cyborg] [nova] [placement] Shelve/unshelve behavior In-Reply-To: <1CC272501B5BC543A05DB90AA509DED527601F64@fmsmsx122.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED527601F64@fmsmsx122.amr.corp.intel.com> Message-ID: On 8/19/2019 2:28 AM, Nadathur, Sundar wrote: > Many of them worked as expected: pause/unpause, lock/unlock, > rescue/unrescue, etc. That is, the application in the VM can > successfully offload to the accelerator device before and after the > sequence. I just wanted to point out that lock/unlock has nothing to do with the guest and is control-plane only in the compute API. > > But, shelve/shelve-offloaded/unshelve sequence shows two discrepancies: > > * After shelve, the instance is shut off in Libvirt but is shown as > ACTIVE in ‘openstack server list’. After a successful shelve/shelve offload, the server status should be SHELVED or SHELVED_OFFLOADED, not ACTIVE. Did something fail during the shelve and the instance was left in ACTIVE state rather than ERROR state? > > * After unshelve, the PCI VF gets re-attached on VM startup and the > application inside the VM can access the accelerator device. However, > ‘openstack resource provider usage show ’ shows the RC usage as > 0, i.e., there seems to be no claim in Placement for the resource in use. What is the resource class? Something reported by cyborg on a nested resource provider under the compute node provider? Note that unshelve will go through the scheduler to pick a destination host (like the initial create) and call placement. If you're not persisting information about the resources to "claim" during scheduling on the RequestSpec, then that would need to be re-calculated and set on the RequestSpec prior to calling select_destinations during the unshelve flow in conductor. gibi's series to add move support for bandwidth-aware QoS ports is needing to do something similar. This patch is for resize/cold migration but you get the idea: https://review.opendev.org/#/c/655112/ > > After shelve, the instance transitions to ‘shelve-offloaded’ > automatically after the configured time interval. The resource class > usage is 0. This part is good. But, after the unshelve, one would think > the usage would be bumped up automatically. 
> -- Thanks, Matt From mnaser at vexxhost.com Mon Aug 19 16:26:48 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 19 Aug 2019 12:26:48 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Retired projects - Networking-generic-switch-tempest (from networking-generic-switch): https://review.opendev.org/#/c/674430/ # General changes - Michał Dulko as Kuryr PTL: https://review.opendev.org/#/c/674624/ - Added a mission to Swift taken from its wiki: https://review.opendev.org/#/c/675307/ - Sean McGinnis as release PTL: https://review.opendev.org/#/c/675246/ Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From Shilpa.Devharakar at nttdata.com Mon Aug 19 16:34:26 2019 From: Shilpa.Devharakar at nttdata.com (Devharakar, Shilpa) Date: Mon, 19 Aug 2019 16:34:26 +0000 Subject: [masakari] pacemaker-remote Setup Overview Message-ID: Hi Ralf Teckelmann, Sorry resending by correcting subject with appropriate subject. > Hello, > > Utilizing openstack-ansible we successfully installed all the masakari services. Besides masakari-hostmonitor all are running fine. > For the hostmonitor a pacemaker cluster is missing. > > Can anyone give me an overview how the pacemaker cluster setup would look like? > Which (pacemaker) services is running where (compute nodes, something on any other node,...), etc? > > Best regards, > > Ralf Teckelmann Sorry for the late answer... Masakari team is in process of 'adding devstack support to install host-monitor', please refer community patch [1]. You can refer 'devstack/plugin.sh', please note it has IPMI Hardware support dependency. Also PFA 'detailed_steps_on_ubuntu_for_pacemaker_verification.txt' for verification of same carried out on Ubuntu distribution (installed openstack using devstack with Masakari enabled). [1]: https://review.opendev.org/#/c/671200/ Add devstack support to install host-monitor Thanks with Regards, Shilpa Shivaji Devharakar| Software Development Supervisor | NTT DATA Services| w. 91-020- 67095703 | Shilpa.Devharakar at nttdata.com | Learn more at nttdata.com/americas -----Original Message----- From: openstack-discuss-request at lists.openstack.org Sent: Monday, August 12, 2019 6:30 PM To: openstack-discuss at lists.openstack.org Subject: openstack-discuss Digest, Vol 10, Issue 59 Send openstack-discuss mailing list submissions to openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss or, via email, send a message with subject or body 'help' to openstack-discuss-request at lists.openstack.org You can reach the person managing the list at openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: 1. Re: [tripleo][openstack-ansible] Integrating ansible-role-collect-logs in OSA (Arx Cruz) 2. [swauth][swift] Retiring swauth (Ondrej Novy) 3. [ironic] Resuming having weekly meetings (Julia Kreger) 4. 
[masakari] pacemaker-remote Setup Overview (Teckelmann, Ralf, NMU-OIP) ---------------------------------------------------------------------- Message: 1 Date: Mon, 12 Aug 2019 12:32:34 +0200 From: Arx Cruz To: Jean-Philippe Evrard Cc: openstack-discuss at lists.openstack.org Subject: Re: [tripleo][openstack-ansible] Integrating ansible-role-collect-logs in OSA Message-ID: Content-Type: text/plain; charset="utf-8" Hello, I've started to split the logs collection tasks in small tasks [1] in order to allow other users to choose what exactly they want to collect. For example, if you don't need the openstack information, or if you don't care about networking, etc. Please take a look. I'll also add it on the OSA agenda for tomorrow's meeting. Kind regards, 1 - https://review.opendev.org/#/c/675858/ On Mon, Jul 22, 2019 at 8:44 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Sorry for the late answer... > > On Wed, 2019-07-10 at 12:12 -0600, Wesley Hayutin wrote: > > > > These are of course just passed in as extra-config. I think each > > project would want to define their own list of files and maintain it > > in their own project. WDYT? > > Looks good. We can either clean up the defaults, or OSA can just > override the defaults, and it would be good enough. I would say that > this can still be improved later, after OSA has started using the role > too. > > > It simple enough. But I am happy to see a different approach. > > Simple is good! > > > Any thoughts on additional work that I am not seeing? > > None :) > > > > > Thanks for responding! I know our team is very excited about the > > continued collaboration with other upstream projects, so thanks!! > > > > Likewise. Let's reduce tech debt/maintain more code together! > > Regards, > Jean-Philippe Evrard (evrardjp) > > > > > -- Arx Cruz Software Engineer Red Hat EMEA arxcruz at redhat.com @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 12 Aug 2019 12:56:15 +0200 From: Ondrej Novy To: openstack-discuss at lists.openstack.org Subject: [swauth][swift] Retiring swauth Message-ID: Content-Type: text/plain; charset="utf-8" Hi, because swauth is not compatible with current Swift, doesn't support Python 3, I don't have time to maintain it and my employer is not interested in swauth, I'm going to retire swauth project. If nobody take over it, I will start removing swauth from opendev on 08/24. Thanks. -- Best regards Ondřej Nový -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 12 Aug 2019 07:55:07 -0400 From: Julia Kreger To: openstack-discuss Subject: [ironic] Resuming having weekly meetings Message-ID: Content-Type: text/plain; charset="UTF-8" All, I meant to send this specific email last week, but got distracted and $life. I believe we need to go back to having weekly meetings. The couple times I floated this in the past two weeks, there hasn't seemed to be any objections, but I also did not perceive any real thoughts on the subject. While the concept and use of office hours has seemingly helped bring some more activity to our IRC channel, we don't have a check-point/sync-up mechanism without an explicit meeting. With that being said, I'm going to start the meeting today and if we have quorum, try to proceed with it today. 
-Julia ------------------------------ Message: 4 Date: Mon, 12 Aug 2019 12:59:20 +0000 From: "Teckelmann, Ralf, NMU-OIP" To: "openstack-discuss at lists.openstack.org" Subject: [masakari] pacemaker-remote Setup Overview Message-ID: Content-Type: text/plain; charset="utf-8" Hello, Utilizing openstack-ansible we successfully installed all the masakari services. Besides masakari-hostmonitor all are running fine. For the hostmonitor a pacemaker cluster is missing. Can anyone give me an overview how the pacemaker cluster setup would look like? Which (pacemaker) services is running where (compute nodes, something on any other node,...), etc? Best regards, Ralf Teckelmann -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ openstack-discuss mailing list openstack-discuss at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss ------------------------------ End of openstack-discuss Digest, Vol 10, Issue 59 ************************************************* Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From aj at suse.com Mon Aug 19 16:36:01 2019 From: aj at suse.com (Andreas Jaeger) Date: Mon, 19 Aug 2019 18:36:01 +0200 Subject: [tc] weekly update In-Reply-To: References: Message-ID: <803b9cce-9887-95a5-b257-21f164f5569a@suse.com> On 19/08/2019 18.26, Mohammed Naser wrote: > Hi everyone, > > Here’s the update for what happened in the OpenStack TC this week. You > can get more information by checking for changes in > openstack/governance repository. > > # Retired projects > - Networking-generic-switch-tempest (from networking-generic-switch): > https://review.opendev.org/#/c/674430/ The process for retiring is documented here: https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project Could you follow those steps to get the system out of Zuul properly, please? Andreas > # General changes > - Michał Dulko as Kuryr PTL: https://review.opendev.org/#/c/674624/ > - Added a mission to Swift taken from its wiki: > https://review.opendev.org/#/c/675307/ > - Sean McGinnis as release PTL: https://review.opendev.org/#/c/675246/ > > Thanks for tuning in! > > Regards, > Mohammed > -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg GF: Nils Brauckmann, Felix Imendörffer, Enrica Angelone, HRB 247165 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From doug at doughellmann.com Mon Aug 19 16:48:40 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Aug 2019 12:48:40 -0400 Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org In-Reply-To: <20190819145918.6jw3p6jcnzuskdch@yuggoth.org> References: <20190819145918.6jw3p6jcnzuskdch@yuggoth.org> Message-ID: <740FFC7B-F7F0-427C-95F8-C6D6E8A0FA7E@doughellmann.com> > On Aug 19, 2019, at 10:59 AM, Jeremy Stanley wrote: > > On 2019-08-19 13:17:35 +0000 (+0000), Alexandra Settle wrote: > [...] 
>> Doug has rightfully pointed out that whilst the documentation team >> as it stands today is no longer "integral", the docs.openstack.org >> web site is. An owner is required. >> >> The suggestion is for the Release Management team is the "least >> worst" (thanks, tonyb) place for the website manaagement to land. >> As Tony points out, this requires learning new tools and processes >> but the individuals working on the docs team currently have no >> intention to leave, and are around to help manage this from the >> SIG. >> >> Open to discussion and suggestions, but to summarise the proposal >> here: >> >> docs.openstack.org ownership is to transition to be the >> responsibility of the Release Management team officially provided >> there are no strong objections. > [...] > > The prose above is rather vague on what "the docs.openstack.org web > site" entails. Inferring from Doug's comments on 657142, I think you > and he are referring specifically to the content in the > https://opendev.org/openstack/openstack-manuals/src/branch/master/www > subtree. Is that pretty much it? The Apache virtual host > configuration and filesystem hosting the site itself are managed by > the Infra/OpenDev team, and there aren't any plans to change that as > far as I'm aware. > -- > Jeremy Stanley Yes, that’s correct. Doug From doug at doughellmann.com Mon Aug 19 16:59:32 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Aug 2019 12:59:32 -0400 Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org In-Reply-To: <20190819154106.GA25909@sm-workstation> References: <20190819154106.GA25909@sm-workstation> Message-ID: <9DABCC6E-1E61-45A6-8370-4F086428B3B6@doughellmann.com> > On Aug 19, 2019, at 11:41 AM, Sean McGinnis wrote: > >> >> Quick recap: The documentation team is set to disband as an official >> project, and make the leap to becoming a SIG (Special Interest Group). >> >> >> The remaining individuals working on the docs team are sorting through >> what's left and where it is best placed. In [1], Doug has rightfully >> pointed out that whilst the documentation team as it stands today is no >> longer "integral", the docs.openstack.org web site is. An owner is >> required. >> > > Unless things have changed, SIGs can be owners of a resource published via > docs.openstack.org (and I am assuming that means be extension, docs.o.o > itself). Is there a reason the Docs SIG would not still be able to own the > site? > >> The suggestion is for the Release Management team is the "least worst" >> (thanks, tonyb) place for the website manaagement to land. As Tony >> points out, this requires learning new tools and processes but the >> individuals working on the docs team currently have no intention to >> leave, and are around to help manage this from the SIG. >> > > I'm personally fine with the release team taking this on, but it does seem like > an odd fit. I think it would make a lot more sense for the Docs SIG to own the > docs site than the release team. Thierry has always drawn (one of) the distinction(s) between teams and SIGs as being that teams own tasks that are “part of” the thing we produce that we call OpenStack, while SIGs are less formally part of that input to that production process. Updating the docs site every time we have a new release felt like it should be part of the formal process, and definitely not something we should leave to chance. The cadence made me think the release team could be a good home. It’s 95% automated now, for what that’s worth. 
I imagine someone could automate the step of adding the new series pages, although I’m not sure what the trigger would be. We could also look at other ways to build the site that don’t require any action each cycle, of course, including not updating it at all and only publishing docs out of master. That’s especially appealing if we don’t have anyone in the community willing and able to pick up the work. > >> Open to discussion and suggestions, but to summarise the proposal here: >> >> docs.openstack.org ownership is to transition to be the responsibility >> of the Release Management team officially provided there are no strong >> objections. >> >> Thanks, >> >> Alex >> IRC: asettle >> >> [1] https://review.opendev.org/#/c/657142/ >> [2] https://review.opendev.org/#/c/657141/ > From zsj950618 at gmail.com Mon Aug 19 12:46:05 2019 From: zsj950618 at gmail.com (Shengjing Zhu) Date: Mon, 19 Aug 2019 20:46:05 +0800 Subject: [openstack-dev] Dropping lazy translation support Message-ID: Sorry for replying to an old mail; please cc me when replying. Matt Riedemann writes: > This is a follow up to a dev ML email [1] where I noticed that some > implementations of the upgrade-checkers goal were failing because some > projects still use the oslo_i18n.enable_lazy() hook for lazy log message > translation (and maybe API responses?). > > The very old blueprints related to this can be found here [2][3][4]. > > If memory serves me correctly from my time working at IBM on this, this > was needed to: > > 1. Generate logs translated in other languages. > > 2. Return REST API responses if the "Accept-Language" header was used > and a suitable translation existed for that language. > > #1 is a dead horse since I think at least the Ocata summit when we > agreed to no longer translate logs since no one used them. > > #2 is probably something no one knows about. I can't find end-user > documentation about it anywhere. It's not tested and therefore I have no > idea if it actually works anymore. > > I would like to (1) deprecate the oslo_i18n.enable_lazy() function so > new projects don't use it and (2) start removing the enable_lazy() usage > from existing projects like keystone, glance and cinder. > > Are there any users, deployments or vendor distributions that still rely > on this feature? If so, please speak up now. I was pointed to this discussion when I tried to fix this feature in keystone, https://review.opendev.org/677117 For #2 translated API response, this feature probably hasn't been working for some time, but it's still a valid use case. Has the decision been settled? -- Regards, Shengjing Zhu From rktidwell85 at gmail.com Mon Aug 19 16:32:23 2019 From: rktidwell85 at gmail.com (Ryan Tidwell) Date: Mon, 19 Aug 2019 11:32:23 -0500 Subject: [neutron] Bug Deputy Report August 12 - August 19 Message-ID: Hello Neutron Team, Here is my bug deputy report for August 12 - August 19: Medium: - [OVS agent] Physical bridges can't be initialized if there is no connectivity to rabbitmq https://bugs.launchpad.net/neutron/+bug/1840443 The fix for this has been merged and a number of backports have merged as well. It appears to have been handled by the team. - excessive number of dvrs where vm got a fixed ip on floating network https://bugs.launchpad.net/bugs/1840579 https://review.opendev.org/#/c/677092/ has been proposed and is being worked on. This appears to be related to how we determine whether to instantiate a DVR local router on a compute node.
It looks like there is a case where we create a local router when it's not necessary, thereby consuming too many IPs on the floating IP network at scale. Undecided: - network/subnet resources cannot be read and written separated https://bugs.launchpad.net/neutron/+bug/1840638 https://review.opendev.org/#/c/677166/ has been proposed. This was reported just a few hours ago and still needs some triage to confirm the severity of the issue. RFE's: - [RFE] Add new config option to enable IGMP snooping in ovs https://bugs.launchpad.net/bugs/1840136 Regards, Ryan Tidwell -------------- next part -------------- An HTML attachment was scrubbed... URL: From Shilpa.Devharakar at nttdata.com Mon Aug 19 16:33:40 2019 From: Shilpa.Devharakar at nttdata.com (Devharakar, Shilpa) Date: Mon, 19 Aug 2019 16:33:40 +0000 Subject: [masakari] how to install masakari on centos 7 (Vu Tan) Message-ID: Hi Vu Tan, Sorry for the late reply. Here are the steps we verified on an Ubuntu distribution, with OpenStack installed via devstack. PFA 'detailed_steps_to_install_masakari.txt'. Do not refer to the '/masakari/etc/Masakari.conf' sample file; instead, generate it using 'tox -egenconfig' after 'git clone'. > Hi Patil, > May I know how is it going ? > >> On Tue, Jul 23, 2019 at 10:18 PM Vu Tan wrote: >> >> Hi Patil, >> Thank you for your reply, please instruct me if you successfully >> install it. Thanks a lot >> >> On Tue, Jul 23, 2019 at 8:12 PM Patil, Tushar > > >> wrote: >> >> Hi Vu Tan, >> >> I'm trying to install Masakari using source code to reproduce the issue. >> If I hit the same issue as yours, I will troubleshoot this issue and >> let you know the solution or will update you what steps I have >> followed to bring up Masakari services successfully. >> >> Regards, >> Tushar Patil >> >> ________________________________________ >> From: Vu Tan >> Sent: Monday, July 22, 2019 12:33 PM >> To: Gaëtan Trellu >> Cc: Patil, Tushar; openstack-discuss at lists.openstack.org >> Subject: Re: [masakari] how to install masakari on centos 7 >> >> Hi Patil, >> May I know when the proper document for masakari is released ? I have >> configured conf file in controller and compute node, it seems running >> but it is not running as it should be, a lots of error in logs, here >> is a sample log: >> >> 2.7/site-packages/oslo_config/cfg.py:3024 >> 2019-07-19 10:25:26.360 7745 DEBUG oslo_service.service [-] bindir >> = /usr/local/bin log_opt_values /usr/lib/ >> python2.7/site-packages/oslo_config/cfg.py:3024 >> 2019-07-19 18:46:21.291 7770 ERROR masakari File >> "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 65, >> in _is_daemo n >> 2019-07-19 18:46:21.291 7770 ERROR masakari is_daemon = os.getpgrp() >> != os.tcgetpgrp(sys.stdout.fileno()) >> 2019-07-19 18:46:21.291 7770 ERROR masakari OSError: [Errno 5] >> Input/output error >> 2019-07-19 18:46:21.291 7770 ERROR masakari >> 2019-07-19 18:46:21.300 7745 CRITICAL masakari [-] Unhandled error: >> OSError: [Errno 5] Input/output error >> 2019-07-19 18:46:21.300 7745 ERROR masakari Traceback (most recent >> call >> last): >> 2019-07-19 18:46:21.300 7745 ERROR masakari File >> "/usr/bin/masakari-api", line 10, in >> >> I dont know if it is missing package or wrong configuration >> >> >> On Thu, Jul 11, 2019 at 6:14 PM Gaëtan Trellu < >> gaetan.trellu at incloudus.com> wrote: >> You will have to enable the debit, debug = true and check the APi log. >> >> Did you try to use the openstack CLi ?
>> >> Gaetan >> >> On Jul 11, 2019 12:32 AM, Vu Tan > vungoctan252 at gmail.com>> wrote: >> I know it's just a warning, just take a look at this image: >> [image.png] >> it's just hang there forever, and in the log show what I have shown >> to you >> >> On Wed, Jul 10, 2019 at 8:07 PM Gaëtan Trellu < >> gaetan.trellu at incloudus.com> wrote: >> This is just a warning, not an error. >> >> On Jul 10, 2019 3:12 AM, Vu Tan > vungoctan252 at gmail.com>> wrote: >> Hi Gaetan, >> I follow you the guide you gave me, but the problem still persist, >> can you please take a look at my configuration to see what is wrong >> or what is missing in my config ? >> the error: >> 2019-07-10 14:08:46.876 17292 WARNING >> keystonemiddleware._common.config [-] The option "__file__" in conf >> is not known to auth_token >> 2019-07-10 14:08:46.876 17292 WARNING >> keystonemiddleware._common.config [-] The option "here" in conf is >> not known to auth_token >> 2019-07-10 14:08:46.882 17292 WARNING keystonemiddleware.auth_token >> [-] AuthToken middleware is set with keystone_authtoken.service_ >> >> the config: >> >> [DEFAULT] >> enabled_apis = masakari_api >> log_dir = /var/log/kolla/masakari >> state_path = /var/lib/masakari >> os_user_domain_name = default >> os_project_domain_name = default >> os_privileged_user_tenant = service >> os_privileged_user_auth_url = http://controller:5000/v3 >> os_privileged_user_name = nova os_privileged_user_password = P at ssword >> masakari_api_listen = controller masakari_api_listen_port = 15868 >> debug = False auth_strategy=keystone >> >> [wsgi] >> # The paste configuration file path >> api_paste_config = /etc/masakari/api-paste.ini >> >> [keystone_authtoken] >> www_authenticate_uri = http://controller:5000 auth_url = >> http://controller:5000 auth_type = password project_domain_id = >> default project_domain_name = default user_domain_name = default >> user_domain_id = default project_name = service username = masakari >> password = P at ssword region_name = RegionOne >> >> [oslo_middleware] >> enable_proxy_headers_parsing = True >> >> [database] >> connection = mysql+pymysql://masakari:P at ssword@controller/masakari >> >> >> >> On Tue, Jul 9, 2019 at 10:25 PM Vu Tan > vungoctan252 at gmail.com>> wrote: >> Thank Patil Tushar, I hope it will be available soon >> >> On Tue, Jul 9, 2019 at 8:18 AM Patil, Tushar >> > wrote: >> Hi Vu and Gaetan, >> >> Gaetan, thank you for helping out Vu in setting up masakari-monitors >> service. >> >> As a masakari team ,we have noticed there is a need to add proper >> documentation to help the community run Masakari services in their >> environment. We are working on adding proper documentation in this 'Train' >> cycle. >> >> Will send an email on this mailing list once the patches are uploaded >> on the gerrit so that you can give your feedback on the same. >> >> If you have any trouble in setting up Masakari, please let us know on >> this mailing list or join the bi-weekly IRC Masakari meeting on the >> #openstack-meeting IRC channel. The next meeting will be held on 16th >> July >> 2019 @0400 UTC. >> >> Regards, >> Tushar Patil >> >> ________________________________________ >> From: Vu Tan > >> Sent: Monday, July 8, 2019 11:21:16 PM >> To: Gaëtan Trellu >> Cc: openstack-discuss at lists.openstack.org> openstack-discuss at lists.openstack.org> >> Subject: Re: [masakari] how to install masakari on centos 7 >> >> Hi Gaetan, >> Thanks for pinpoint this out, silly me that did not notice the simple >> "error InterpreterNotFound: python3". 
Thanks a lot, I appreciate it >> >> On Mon, Jul 8, 2019 at 9:15 PM > gaetan.trellu at incloudus.com>> gaetan.trellu at incloudus.com>>> wrote: >> Vu Tan, >> >> About "auth_token" error, you need "os_privileged_user_*" options >> into your masakari.conf for the API. >> As mentioned previously please have a look here to have an example of >> configuration working (for me at least): >> >> - masakari.conf: >> >> https://review.opendev.org/#/c/615715/42/ansible/roles/masakari/templ >> ates/masakari.conf.j2 >> - masakari-monitor.conf: >> >> https://review.opendev.org/#/c/615715/42/ansible/roles/masakari/templ >> ates/masakari-monitors.conf.j2 >> >> About your tox issue make sure you have Python3 installed. >> >> Gaëtan >> >> On 2019-07-08 06:08, Vu Tan wrote: >> >> > Hi Gaetan, >> > I try to generate config file by using this command tox -egenconfig >> > on top level of masakari but the output is error, is this masakari >> > still in beta version ? >> > [root at compute1 masakari-monitors]# tox -egenconfig genconfig >> > create: /root/masakari-monitors/.tox/genconfig >> > ERROR: InterpreterNotFound: python3 >> > _____________________________________________________________ >> > summary >> > ______________________________________________________________ >> > ERROR: genconfig: InterpreterNotFound: python3 >> > >> > On Mon, Jul 8, 2019 at 3:24 PM Vu Tan > vungoctan252 at gmail.com>> vungoctan252 at gmail.com>>> wrote: >> > Hi, >> > Thanks a lot for your reply, I install pacemaker/corosync, >> > masakari-api, maskari-engine on controller node, and I run >> > masakari-api with this command: masakari-api, but I dont know >> > whether the process is running like that or is it just hang there, >> > here is what it shows when I run the command, I leave it there for >> > a while but it does not change anything : >> > [root at controller masakari]# masakari-api >> > 2019-07-08 15:21:09.946 30250 INFO masakari.api.openstack [-] >> > Loaded >> > extensions: ['extensions', 'notifications', 'os-hosts', 'segments', >> > 'versions'] >> > 2019-07-08 15:21:09.955 30250 WARNING >> > keystonemiddleware._common.config [-] The option "__file__" in conf >> > is not known to auth_token >> > 2019-07-08 15:21:09.955 30250 WARNING >> > keystonemiddleware._common.config [-] The option "here" in conf is >> > not known to auth_token >> > 2019-07-08 15:21:09.960 30250 WARNING keystonemiddleware.auth_token >> > [-] AuthToken middleware is set with >> > keystone_authtoken.service_token_roles_required set to False. This >> > is backwards compatible but deprecated behaviour. Please set this to True. >> > 2019-07-08 15:21:09.974 30250 INFO masakari.wsgi [-] masakari_api >> > listening on 127.0.0.1:15868< >> http://127.0.0.1:15868> >> > 2019-07-08 15:21:09.975 30250 INFO oslo_service.service [-] >> > Starting 4 workers >> > 2019-07-08 15:21:09.984 30274 INFO >> > masakari.masakari_api.wsgi.server [-] (30274) wsgi starting up on >> > http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.985 30275 INFO >> > masakari.masakari_api.wsgi.server [-] (30275) wsgi starting up on >> > http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.992 30277 INFO >> > masakari.masakari_api.wsgi.server [-] (30277) wsgi starting up on >> > http://127.0.0.1:15868 >> > 2019-07-08 15:21:09.994 30276 INFO >> > masakari.masakari_api.wsgi.server [-] (30276) wsgi starting up on >> > http://127.0.0.1:15868 >> > >> > On Sun, Jul 7, 2019 at 7:37 PM Gaëtan Trellu >> >> >> >om>>> >> wrote: >> > >> > Hi Vu Tan, >> > >> > Masakari documentation doesn't really exist... 
I had to figured >> > some stuff by myself to make it works into Kolla project. >> > >> > On controller nodes you need: >> > >> > - pacemaker >> > - corosync >> > - masakari-api (openstack/masakari repository) >> > - masakari- engine (openstack/masakari repository) >> > >> > On compute nodes you need: >> > >> > - pacemaker-remote (integrated to pacemaker cluster as a resource) >> > - masakari- hostmonitor (openstack/masakari-monitor repository) >> > - masakari-instancemonitor (openstack/masakari-monitor repository) >> > - masakari-processmonitor (openstack/masakari-monitor repository) >> > >> > For masakari-hostmonitor, the service needs to have access to >> > systemctl command (make sure you are not using sysvinit). >> > >> > For masakari-monitor, the masakari-monitor.conf is a bit different, >> > you will have to configure the [api] section properly. >> > >> > RabbitMQ needs to be configured (as transport_url) on masakari-api >> > and masakari-engine too. >> > >> > Please check this review[1], you will have masakari.conf and >> > masakari-monitor.conf configuration examples. >> > >> > [1] https://review.opendev.org/#/c/615715 >> > >> > Gaëtan >> > >> > On Jul 7, 2019 12:08 AM, Vu Tan > vungoctan252 at gmail.com>> vungoctan252 at gmail.com>>> wrote: >> > >> > VU TAN > VUNGOCTAN252 at GMAIL.COM>> >> > >> > 10:30 AM (35 minutes ago) >> > >> > to openstack-discuss >> > >> > Sorry, I resend this email because I realized that I lacked of >> > prefix on this email's subject >> > >> > Hi, >> > >> > I would like to use Masakari and I'm having trouble finding a step >> > by step or other documentation to get started with. Which part >> > should be installed on controller, which is should be on compute, >> > and what is the prerequisite to install masakari, I have installed >> > corosync and pacemaker on compute and controller nodes, , what else >> > do I need to do ? step I have done so far: >> > - installed corosync/pacemaker >> > - install masakari on compute node on this github repo: >> > https://github.com/openstack/masakari >> > - add masakari in to mariadb >> > here is my configuration file of masakari.conf, do you mind to take >> > a look at it, if I have misconfigured anything? >> > >> > [DEFAULT] >> > enabled_apis = masakari_api >> > >> > # Enable to specify listening IP other than default >> > masakari_api_listen = controller # Enable to specify port other >> > than default masakari_api_listen_port = 15868 debug = False >> > auth_strategy=keystone >> > >> > [wsgi] >> > # The paste configuration file path api_paste_config = >> > /etc/masakari/api-paste.ini >> > >> > [keystone_authtoken] >> > www_authenticate_uri = http://controller:5000 auth_url = >> > http://controller:5000 auth_type = password project_domain_id = >> > default user_domain_id = default project_name = service username = >> > masakari password = P at ssword >> > >> > [database] >> > connection = mysql+pymysql://masakari:P at ssword@controller/masakari >> Disclaimer: This email and any attachments are sent in strictest >> confidence for the sole use of the addressee and may contain legally >> privileged, confidential, and proprietary data. If you are not the >> intended recipient, please advise the sender by replying promptly to >> this email and then delete and destroy this email and any attachments >> without any further use, copying or forwarding. 
>> >> >> Disclaimer: This email and any attachments are sent in strictest >> confidence for the sole use of the addressee and may contain legally >> privileged, confidential, and proprietary data. If you are not the >> intended recipient, please advise the sender by replying promptly to >> this email and then delete and destroy this email and any attachments >> without any further use, copying or forwarding. >> > Thanks with Regards, Shilpa Shivaji Devharakar| Software Development Supervisor | NTT DATA Services| w. 91-020- 67095703 | Shilpa.Devharakar at nttdata.com | Learn more at nttdata.com/americas -----Original Message----- From: openstack-discuss-request at lists.openstack.org Sent: Sunday, August 4, 2019 12:42 AM To: openstack-discuss at lists.openstack.org Subject: openstack-discuss Digest, Vol 10, Issue 13 Send openstack-discuss mailing list submissions to openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss or, via email, send a message with subject or body 'help' to openstack-discuss-request at lists.openstack.org You can reach the person managing the list at openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: 1. Re: [nova][ops] Documenting nova tunables at scale (Joe Robinson) 2. Re: [masakari] how to install masakari on centos 7 (Vu Tan) ---------------------------------------------------------------------- Message: 1 Date: Sat, 3 Aug 2019 10:40:11 +1000 From: Joe Robinson To: Matt Riedemann Cc: openstack-discuss at lists.openstack.org Subject: Re: [nova][ops] Documenting nova tunables at scale Message-ID: Content-Type: text/plain; charset="utf-8" Hi Matt, My name is Joe - docs person from years back - this looks like a good initiative and I would be up for documenting these settings at scale. Next step I can see is gathering more Info about this pain point (already started :)) and then I can draft something together for feedback. On Sat, 3 Aug. 2019, 6:25 am Matt Riedemann, wrote: > I wanted to send this to get other people's feedback if they have > particular nova configurations once they hit a certain scale (hundreds > or thousands of nodes). Every once in awhile in IRC I'll be chatting > with someone about configuration changes they've made running at large > scale to avoid, for example, hammering the control plane. I don't know > how many times I've thought, "it would be nice if we had a doc > highlighting some of these things so a new operator could come along > and see, oh I've never tried changing that value before". > > I haven't started that doc, but I've started a bug report for people > to dump some of their settings. The most common ones could go into a > simple admin doc to start. > > I know there is more I've thought about in the past that I don't have > in here but this is just a starting point so I don't make the mistake > of not taking action on this again. > > https://bugs.launchpad.net/nova/+bug/1838819 > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 2 Date: Thu, 1 Aug 2019 11:43:13 +0700 From: Vu Tan To: "Patil, Tushar" Cc: Gaëtan Trellu , "openstack-discuss at lists.openstack.org" Subject: Re: [masakari] how to install masakari on centos 7 Message-ID: Content-Type: text/plain; charset="utf-8" Hi Patil, May I know how is it going ? [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ openstack-discuss mailing list openstack-discuss at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss ------------------------------ End of openstack-discuss Digest, Vol 10, Issue 13 ************************************************* Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: detailed_steps_to_install_masakari.txt URL:
From openstack at nemebean.com Mon Aug 19 17:17:08 2019 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 19 Aug 2019 12:17:08 -0500 Subject: [openstack-dev] [oslo][i18n] Dropping lazy translation support In-Reply-To: References: Message-ID: On 8/19/19 7:46 AM, Shengjing Zhu wrote: > Sorry for replying to an old mail; please cc me when replying. > > Matt Riedemann writes: > >> This is a follow up to a dev ML email [1] where I noticed that some >> implementations of the upgrade-checkers goal were failing because some >> projects still use the oslo_i18n.enable_lazy() hook for lazy log message >> translation (and maybe API responses?). >> >> The very old blueprints related to this can be found here [2][3][4]. >> >> If memory serves me correctly from my time working at IBM on this, this >> was needed to: >> >> 1. Generate logs translated in other languages. >> >> 2. Return REST API responses if the "Accept-Language" header was used >> and a suitable translation existed for that language. >> >> #1 is a dead horse since I think at least the Ocata summit when we >> agreed to no longer translate logs since no one used them. >> >> #2 is probably something no one knows about. I can't find end-user >> documentation about it anywhere. It's not tested and therefore I have no >> idea if it actually works anymore. >> >> I would like to (1) deprecate the oslo_i18n.enable_lazy() function so >> new projects don't use it and (2) start removing the enable_lazy() usage >> from existing projects like keystone, glance and cinder. >> >> Are there any users, deployments or vendor distributions that still rely >> on this feature? If so, please speak up now. > > I was pointed to this discussion when I tried to fix this feature in keystone, > https://review.opendev.org/677117 > > For #2 translated API response, this feature probably hasn't been > working for some time, but it's still a valid use case. > > Has the decision been settled? > Not to my knowledge. Lazy translation still exists, but I don't know that anyone is testing it. Are you saying that you are using this feature now, or are you interested in using it going forward? From zsj950618 at gmail.com Mon Aug 19 17:33:38 2019 From: zsj950618 at gmail.com (Shengjing Zhu) Date: Tue, 20 Aug 2019 01:33:38 +0800 Subject: [openstack-dev] [oslo][i18n] Dropping lazy translation support In-Reply-To: References: Message-ID: On Tue, Aug 20, 2019 at 1:17 AM Ben Nemec wrote: > [...] > > Not to my knowledge. Lazy translation still exists, but I don't know > that anyone is testing it. > > Are you saying that you are using this feature now, or are you > interested in using it going forward? I am developing an application using keystone, and I recently needed the keystone API to respond with translated messages. (This is broken in latest keystone, and I'm trying to fix it.)
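For what it's worth, the behaviour I am after is roughly the following (just a sketch of the oslo_i18n lazy path to illustrate the use case, not keystone's actual wiring; the message text is only an example):

import oslo_i18n

# Lazy mode has to be enabled before any translatable strings are created.
oslo_i18n.enable_lazy()
_ = oslo_i18n.TranslatorFactory(domain='keystone').primary

# With lazy mode on, _() returns a Message object instead of a plain str,
# so the locale can be chosen later, per request.
msg = _('The request you have made requires authentication.')

# At the API layer the server resolves the message using the locale taken
# from the incoming Accept-Language header:
body = oslo_i18n.translate(msg, desired_locale='zh_CN')

Without lazy mode, _() translates immediately with the server's default locale, which is why the Accept-Language header ends up having no effect.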
-- Regards, Shengjing Zhu From rlennie at verizonmedia.com Mon Aug 19 17:47:16 2019 From: rlennie at verizonmedia.com (Robert Lennie) Date: Mon, 19 Aug 2019 10:47:16 -0700 Subject: How are we managing quotas, flavors, projects and more across large org Openstack deployments? In-Reply-To: References: Message-ID: The Operations Dashboard System that we currently use does exist, but as I mentioned it has several shortcomings (chief among them that it is fragile and overly complex; frankly, it probably needs to be completely re-written and simplified). For that reason, one big consideration for us is what replacements might exist upstream. What are other organizations using to manage large deployments? On Fri, Aug 16, 2019 at 7:32 PM Amrith Kumar wrote: > Could you describe what the system you are building would do. > > Thanks! > > On Fri, Aug 16, 2019, 17:30 Robert Lennie > wrote: > >> Hi, >> >> I am a complete neophyte to this forum so I may have missed any earlier >> discussions on this topic if there have been any? >> >> I am currently working on a proprietary legacy (Django based) "OpenStack >> Operations Manager" dashboard system that attempts to manage quotas, >> flavors, clusters and more across a very large distributed OpenStack >> deployment for multiple customers. >> >> There are a number of potential shortcomings with our current dashboard >> system and I was wondering how other teams in other organizations with >> similar large Openstack deployments are handling these types of system >> management issues? The issues addressed are likely common to other large >> environments. >> >> In particular what tools might exist upstream that allow system >> (particularly quota and flavor) management across large distributed >> Openstack environments? >> >> I welcome any feedback, comments, suggestions and any other information >> regarding the tools that currently exist or may be planned for this purpose? >> >> Regards, >> >> Robert Lennie >> Principal Software Engineer >> Verizon Media >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Aug 19 17:49:41 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 19 Aug 2019 12:49:41 -0500 Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org In-Reply-To: <9DABCC6E-1E61-45A6-8370-4F086428B3B6@doughellmann.com> References: <20190819154106.GA25909@sm-workstation> <9DABCC6E-1E61-45A6-8370-4F086428B3B6@doughellmann.com> Message-ID: <20190819174941.GA4730@sm-workstation> > > > >> The suggestion is for the Release Management team is the "least worst" > >> (thanks, tonyb) place for the website manaagement to land. As Tony > >> points out, this requires learning new tools and processes but the > >> individuals working on the docs team currently have no intention to > >> leave, and are around to help manage this from the SIG. > >> > > > > I'm personally fine with the release team taking this on, but it does seem like > > an odd fit. I think it would make a lot more sense for the Docs SIG to own the > > docs site than the release team. > > Thierry has always drawn (one of) the distinction(s) between teams and SIGs as > being that teams own tasks that are “part of” the thing we produce that we call > OpenStack, while SIGs are less formally part of that input to that production process.
> > Updating the docs site every time we have a new release felt like it should > be part of the formal process, and definitely not something we should leave to > chance. The cadence made me think the release team could be a good home. > > It’s 95% automated now, for what that’s worth. I imagine someone could automate the step of > adding the new series pages, although I’m not sure what the trigger would be. > > We could also look at other ways to build the site that don’t require any action > each cycle, of course, including not updating it at all and only publishing docs > out of master. That’s especially appealing if we don’t have anyone in the > community willing and able to pick up the work. > This is a little different scope though, isn't it? Maybe I misunderstood the original proposal, but to me there seems to be a big difference between owning the task of configuring the site for the next release (which totally makes sense as a release team task) and owning the entire docs.openstack.org site. From fungi at yuggoth.org Mon Aug 19 17:56:53 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Aug 2019 17:56:53 +0000 Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org In-Reply-To: <20190819174941.GA4730@sm-workstation> References: <20190819154106.GA25909@sm-workstation> <9DABCC6E-1E61-45A6-8370-4F086428B3B6@doughellmann.com> <20190819174941.GA4730@sm-workstation> Message-ID: <20190819175652.dkbyerlmblqkvzdk@yuggoth.org> On 2019-08-19 12:49:41 -0500 (-0500), Sean McGinnis wrote: [...] > there seems to be a big difference between owning the task of > configuring the site for the next release (which totally makes > sense as a release team task) and owning the entire > docs.openstack.org site. That's why I also requested clarification in my earlier message on this thread. The vast majority of the content hosted under https://docs.openstack.org/ is maintained in a distributed fashion by the various teams writing documentation in their respective projects. The hosting (configuration apart from .htaccess files, storage, DNS, and so on) is handled by Infra/OpenDev folks. If it's *just* the stuff inside the "www" tree in the openstack-manuals repo then that's not a lot, but it's also possible what the release team actually needs to touch in there could be successfully scaled back even more (with the caveat that I haven't looked through it in detail). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Aug 19 18:03:58 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Aug 2019 18:03:58 +0000 Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org In-Reply-To: <20190819175652.dkbyerlmblqkvzdk@yuggoth.org> References: <20190819154106.GA25909@sm-workstation> <9DABCC6E-1E61-45A6-8370-4F086428B3B6@doughellmann.com> <20190819174941.GA4730@sm-workstation> <20190819175652.dkbyerlmblqkvzdk@yuggoth.org> Message-ID: <20190819180358.vmdjmkvzkscvxihr@yuggoth.org> On 2019-08-19 17:56:53 +0000 (+0000), Jeremy Stanley wrote: [...] > The vast majority of the content hosted under > https://docs.openstack.org/ is maintained in a distributed fashion > by the various teams writing documentation in their respective > projects. The hosting (configuration apart from .htaccess files, > storage, DNS, and so on) is handled by Infra/OpenDev folks. [...] 
Oh, and let's not forget the publication automation/jobs for the site, which are also presumably not part of the scope of what's being discussed as becoming the release team's direct responsibility. Those are already overseen by job configuration reviewers for the project-config and openstack-zuul-jobs repositories. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Mon Aug 19 18:16:13 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Aug 2019 14:16:13 -0400 Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org In-Reply-To: <20190819175652.dkbyerlmblqkvzdk@yuggoth.org> References: <20190819154106.GA25909@sm-workstation> <9DABCC6E-1E61-45A6-8370-4F086428B3B6@doughellmann.com> <20190819174941.GA4730@sm-workstation> <20190819175652.dkbyerlmblqkvzdk@yuggoth.org> Message-ID: > On Aug 19, 2019, at 1:56 PM, Jeremy Stanley wrote: > > On 2019-08-19 12:49:41 -0500 (-0500), Sean McGinnis wrote: > [...] >> there seems to be a big difference between owning the task of >> configuring the site for the next release (which totally makes >> sense as a release team task) and owning the entire >> docs.openstack.org site. > > That's why I also requested clarification in my earlier message on > this thread. The vast majority of the content hosted under > https://docs.openstack.org/ is maintained in a distributed fashion > by the various teams writing documentation in their respective > projects. The hosting (configuration apart from .htaccess files, > storage, DNS, and so on) is handled by Infra/OpenDev folks. If it's > *just* the stuff inside the "www" tree in the openstack-manuals repo > then that's not a lot, but it's also possible what the release team > actually needs to touch in there could be successfully scaled back > even more (with the caveat that I haven't looked through it in > detail). > -- > Jeremy Stanley The suggestion is for the release team to take over the site generator for docs.openstack.org (the stuff under “www” in the current openstack-manuals git repository) and for the SIG to own anything that looks remotely like “content”. There isn’t much of that left anyway, now that most of it is in the project repositories. Most of what is under www is series-specific templates and data files that tell the site generator how to insert links to parts of the project documentation in the right places (the “install guides” page links to /foo/$series/install/ for example). They’re very simple, very dumb, templates, driven with a little custom script that wraps jinja2, feeding the right data to the right templates based on series name. There is pretty good documentation for how to use it in the tools [1] and release [2] sections of the docs contributor guide. The current site-generator definitely could be simpler, especially if it only linked to the master docs and *those* linked to the older versions of themselves (so /nova/latest/ had a link that pointed to /nova/ocata/ somewhere). That would take some work, though. The simplest thing we could do is just make the release team committers on openstack-manuals, leave everything else as it is, and exercise trust between the two groups. If we absolutely want to separate the builds, then we could make a new repo with just the template-driven pages under “www”, but that’s going to involve changing/creating several publishing jobs. 
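To give a flavor of it, the generator amounts to something like the following (an illustrative sketch only, not the actual script; the directory layout and data keys here are invented):

import os
import jinja2

# Invented per-series data; the real data files tell the templates where
# the published per-project docs live (e.g. /nova/stein/install/).
SERIES_DATA = {
    'stein': {'projects': ['nova', 'neutron', 'glance']},
    'train': {'projects': ['nova', 'neutron', 'glance']},
}

env = jinja2.Environment(loader=jinja2.FileSystemLoader('www/templates'))

for series, data in SERIES_DATA.items():
    template = env.get_template('install-guides.html')
    out_dir = os.path.join('publish', series)
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, 'install-guides.html'), 'w') as f:
        f.write(template.render(series=series, **data))

That is more or less the whole trick, which is why the per-cycle work is mostly a matter of adding the new series data files and templates.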
Doug [1] https://docs.openstack.org/doc-contrib-guide/doc-tools.html [2] https://docs.openstack.org/doc-contrib-guide/release.html From zbitter at redhat.com Mon Aug 19 18:49:29 2019 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 19 Aug 2019 14:49:29 -0400 Subject: Heat Template In-Reply-To: References: Message-ID: <02d43ab2-e9e3-08c8-01c6-50a61f54db8f@redhat.com> On 13/08/19 3:32 PM, mesut aygün wrote: > Hi everyone; > I am writing the template for cluster but i cant  inject the cloud-init > data. > > How can I inject the password data for vm? > > heat_template_version: 2014-10-16 > > > new_instance: > type: OS::Nova::Server > properties: > key_name: {get_param: key_name } > image: {get_param: image_id } > flavor: bir > name: > str_replace: > template: master-$NONCE-dmind > params: > $NONCE: {get_resource: name_nonce } > user_data: | > #!/bin/bash > #cloud-config > password: 724365 > echo "Running boot script" >> /home/ubuntu/test > sudo sed -i "/^[^#]*PasswordAuthentication[[:space:]]no/c\PasswordAuthentication yes" /etc/ssh/sshd_config > sudo useradd -d /home/mesut -m mesut > sudo usermod --password 724365 ubuntu > /etc/init.d/ssh restart > I believe to pass plain cloud-init data to OS::Nova::Server you need to set the property: user_data_format: RAW From pa at pauloangelo.com Mon Aug 19 21:25:30 2019 From: pa at pauloangelo.com (Paulo Angelo) Date: Mon, 19 Aug 2019 18:25:30 -0300 Subject: [keystone][horizon] Integration with GuardianKey In-Reply-To: References: Message-ID: > > > > We are trying to integrate OpenStack (Horizon or Keystone) with > > > GuardianKey. However, we have doubts related to the best way to do > this > > > and the best point in the code for this integration. > > > > > > > > > GuardianKey is a solution to protect systems against authentication > > > attacks. It uses Machine Learning and analyses the user's behavior, > > > threat intelligence and psychometrics (or behavioral biometrics). The > > > protected system (in the concrete case, OpenStack admin interface) > must > > > send an event via REST for the GuardianKey on each login attempt. More > > > info at https://guardiankey.io . > > > > > > The best way to integrate would be on having a hook in the procedure > > > that process the user credentials submission in OpenStack (the script > > > that receives the POST), something such as: > > > > > > > > > if() { > > > boolean loginFailed = checkLogin(); > > > GuardianKeyEvent event = > createEventForGuardianKey(username,loginFailed); > > > boolean GuardianKeyValidation = checkGuardianKeyViaREST(event); > > > if(GuardianKeyValidation){ > > > // Allow access > > > } else { > > > // Deny access > > > } > > > } > > > > > > Where is the best place to create this integration? Horizon or > Keystone? > > > Is there a way to create a hook for this purpose? Should we create an > > > extension? > > Keystone would be the best place for this. Horizon is only one way a user > can log in to OpenStack, so hooking into Horizon would not cover your > attack vector. Keystone has a built-in auditing system specifically for > this, using CADF notifications to emit events when a user logs in: > > https://docs.openstack.org/keystone/latest/admin/event_notifications.html > > All you need to do is create a consumer for those notifications. > > Colleen > Thank you, Colleen, for your message. These days, I spent some time on it to understand better about the OpenStack events. 
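The listener I experimented with is roughly the following sketch (assuming keystone is configured with notification_format = cadf and the messaging notification driver; the transport URL is illustrative and send_to_guardiankey() is a placeholder for our REST call):

import oslo_messaging
from oslo_config import cfg

def send_to_guardiankey(user_id, outcome):
    # Placeholder for the REST call to GuardianKey.
    pass

class AuthEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        if event_type == 'identity.authenticate':
            # In the CADF payload, 'outcome' is 'success' or 'failure'
            # and 'initiator' carries the user id (but not the login name).
            send_to_guardiankey(payload.get('initiator', {}).get('id'),
                                payload.get('outcome'))

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://openstack:secret@controller:5672/')
targets = [oslo_messaging.Target(topic='notifications')]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [AuthEndpoint()], executor='threading')
listener.start()
listener.wait()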
As I could see, the authentication event does not have the user login name and other useful user information, such as e-mail. Is there a way to retrieve this information using the OpenStack resources? Or the best way is to access directly the database? Another question: We will need to deny the access if GuardianKey identifies a high attack risk. In this case, is there an easy way to drop a user session using the Keystone API or resources (and the information in the event)? Finally, you mentioned about the consumer. Is this a consumer for a RabbitMQ queue or an OpenStack API consumer? In the last case, is there example code for this? Thank you in advance. Regards, Paulo Angelo -------------- next part -------------- An HTML attachment was scrubbed... URL: From hid-nakamura at vf.jp.nec.com Tue Aug 20 05:56:46 2019 From: hid-nakamura at vf.jp.nec.com (Hidekazu Nakamura) Date: Tue, 20 Aug 2019 05:56:46 +0000 Subject: [cinder] [3rd party ci] Deadline Has Past for Python3 Migration In-Reply-To: <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> References: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> Message-ID: <8FC2060F93794D44942D588B8B5472871242631E@BPXM03GP.gisp.nec.co.jp> Hi Jay, Sorry late. NEC is working hard to move NEC Cinder CI to python3.7. I updated the py3-ci-review etherpad. Thanks, Hidekazu Nakamura > -----Original Message----- > From: Jay Bryant > Sent: Tuesday, August 6, 2019 12:41 PM > To: openstack-discuss at lists.openstack.org > Subject: [cinder] [3rd party ci] Deadline Has Past for Python3 Migration > > All, > > This e-mail has multiple purposes.  First, I have expanded the mail audience > to go beyond just openstack-discuss to a mailing list I have created for all 3rd > Party CI Maintainers associated with Cinder.  I apologize to those of you who > are getting this as a duplicate e-mail. > > For all 3rd Party CI maintainers who have already migrated your systems to > using Python3.7...Thank you!  We appreciate you keeping up-to-date with > Cinder's requirements and maintaining your CI systems. > > If this is the first time you are hearing of the Python3.7 requirement please > continue reading. > > It has been decided by the OpenStack TC that support for Py2.7 would be > deprecated [1].  The Train development cycle is the last cycle that will support > Py2.7 and therefore all vendor drivers need to demonstrate support for Py3.7. > > It was discussed at the Train PTG that we would require all 3rd Party CIs to be > running using Python3 by the Train milestone 2: [2]  We have been > communicating the importance of getting 3rd Party CI running with > py3 in meetings and e-mail for quite some time now, but it still appears that > nearly half of all vendors are not yet running with Python 3. [3] > > If you are a vendor who has not yet moved to using Python 3 please take some > time to review this document [4] as it has guidance on how to get your CI > system updated.  It also includes some additional details as to why this > requirement has been set and the associated background.  Also, please > update the py3-ci-review etherpad with notes indicating that you are working > on adding py3 support. > > I would also ask all vendors to review the etherpad I have created as it indicates > a number of other drivers that have been marked unsupported due to CI > systems not running properly.  If you are not planning to continue to support a > driver adding such a note in the etherpad would be appreciated. > > Thanks! 
> > Jay > > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/00825 > 5.html > > [2] > https://wiki.openstack.org/wiki/CinderTrainSummitandPTGSummary#3rd_P > arty_CI > > [3] https://etherpad.openstack.org/p/cinder-py3-ci-review > > [4] https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update > > From a.settle at outlook.com Tue Aug 20 08:32:16 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Tue, 20 Aug 2019 08:32:16 +0000 Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376 Message-ID: Hi all, For the Train cycle Artem Goncharov proposed moving the legacy client CLIs to OSC. This goal did not move forward due to a broad range of concerns and was eventually -1'd by Erno Kuvaja (Glance PTL) as he was unable to support it from a Glance point of view. Moving forward, I'd like to end the discussion in the review [1] by abandoning the patch and move this to the mailing list. The following looks like it needs to happen: 1. This PR should be abandoned. It is not going to be accepted as a commmunity goal in this format and the debate within the comments is circular. Let's step out of here, and start having conversation elsewhere. 2. To start those conversations, a pop-up team would be a suitable alternative to begin driving that work. Someone needs to step forward to lead the pop-up team. The TC recommends two individuals step forwards as indicated in the pop-up team Governance document [2]. 3. I recommend the pop-up team include Erno Kuvaja or a representative from the Glance team and including Matt Reidemann (who has been working to close the compute API gaps). 4. The leaders must then identify a clear objective, and a clear disband criteria. If in case the pop-up team decide that this could be better drafted as a community goal for U, then that is a suitable alternative that can be defined using the new goal selection criteria that is being defined here [3] But this should not be decided before clear objectives are defined. Thanks, Alex IRC: asettle [1] https://review.opendev.org/#/c/639376/ [2] https://governance.openstack.org/tc/reference/popup-teams.html [3] https://review.opendev.org/#/c/667932/ From lychzhz at gmail.com Tue Aug 20 10:14:36 2019 From: lychzhz at gmail.com (Douglas Zhang) Date: Tue, 20 Aug 2019 18:14:36 +0800 Subject: [all][tc] A web tool which helps administrators in managing openstack clusters Message-ID: Hello everyone, To help users interact with openstack, we’re currently developing a client-side web tool which enables administrators to manage their openstack cluster in a more efficient and convenient way. (Since we have not named it officially yet, I’m going to call it openstack-admin) *# Introduction* Some may ask, “Why do we need an extra web-based user interface since we have horizon?” Well, although horizon is a mature and powerful dashboard, it is far not efficient enough on big clusters (a simple list operation could take seconds to complete). What’s more, its flexibility of searching could not match our requirements. To overcome obstacles above, a more efficient tool is urgently required, that’s why we started to develop openstack-admin. *# Highlights* Comparing with the current user interface, openstack-admin has following advantages: - *Fast*: openstack-admin gets data straightly from SQL databases instead of calling standard openstack API, which accelerates the querying period to a large extent (especially when we’re dealing with a large amount of data). 
- *Flexible*: openstack-admin supports the fuzzy search for any important field(e.g. display_name/uuid/ip_address/project_name of an instance), which enables users to locate a particular object in no time. - *User-friendly*: the backend of openstack-admin gets necessary messages from the message queue used by nova, and send them to the frontend using websocket. This way, not only more realistic progress bars could be implemented, but more detailed information could be provided to users as well. *# Issues* To make this tool more efficient and provide better support for concurrency, we chose Golang to implement openstack-admin. As I’ve asked before (truly appreciate advises from Jeremy and Ghanshyam), a project written by an unofficial language may be accepted only if existing languages have been proven to not meet the technical requirements, so we’re considering re-implementing openstack-admin using python if we can’t come to an agreement on the language issue. So that’s all. How do you guys think of this project? Thanks, Douglas Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssbarnea at redhat.com Tue Aug 20 11:21:36 2019 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Tue, 20 Aug 2019 12:21:36 +0100 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <87mug8j0nh.fsf@meyer.lemoncheese.net> References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc> <87mug8j0nh.fsf@meyer.lemoncheese.net> Message-ID: <439B3C2E-E531-4CA5-8CAF-BE0ECCBE4227@redhat.com> I am really glad to see the improvements on the log browsing UX. Can we also add address few other issues? like: a) auto-wrap long lines, even your link is a perfect example of endless horizontal bar created by pip requirements line. log should never need horizontal scrolling in browser. b) coloring in logs? c) auto hiding/collapsing the timestamp column -- it is boilerplate in most cases taking 20% entire screen real estate. same info could be visible as tooltip d) shorter urls? do we really need "602ab1629aca4f4ebe21ff7024884f87" as build number of we can get around with 3-6 chars? -- i always need to horizontally scroll the URL bar to see the file I am on. PS. I am more than willing to invest some of my personal time on addressing these. > On 16 Aug 2019, at 21:49, James E. Blair wrote: > > All of the fixes to the issues that Eric identified have landed, so you > should see them now (if not, hit reload). > > Plus a couple more -- log lines are now displayed with line numbers, and > it is these numbers which are the clickable links to create links for > sharing. > > Tristan also added a feature to support selecting a range. Click the > first line number to start the range, then Shift-click it to select the > end. > > For example: > > http://zuul.openstack.org/build/602ab1629aca4f4ebe21ff7024884f87/log/job-output.txt#456-464 > > -Jim > From Shilpa.Devharakar at nttdata.com Tue Aug 20 11:53:25 2019 From: Shilpa.Devharakar at nttdata.com (Devharakar, Shilpa) Date: Tue, 20 Aug 2019 11:53:25 +0000 Subject: [masakari] pacemaker-remote Setup Overview Message-ID: Hi Ralf Teckelmann, Sorry for the attachment, instead please refer [1] Masakari team is in process of 'adding devstack support to install host-monitor', please refer community patch [2]. 
You can refer 'devstack/plugin.sh' [1]: http://paste.openstack.org/show/760301/ [2]: https://review.opendev.org/#/c/671200/ > Hi Ralf Teckelmann, > > Sorry resending by correcting subject with appropriate subject. Thanks with Regards, Shilpa Shivaji Devharakar| Software Development Supervisor | NTT DATA Services| w. 91-020- 67095703 | Shilpa.Devharakar at nttdata.com | Learn more at nttdata.com/americas -----Original Message----- From: openstack-discuss-request at lists.openstack.org Sent: Monday, August 19, 2019 10:19 PM To: openstack-discuss at lists.openstack.org Subject: openstack-discuss Digest, Vol 10, Issue 94 Send openstack-discuss mailing list submissions to openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss or, via email, send a message with subject or body 'help' to openstack-discuss-request at lists.openstack.org You can reach the person managing the list at openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: 1. [tc] weekly update (Mohammed Naser) 2. RE: [masakari] pacemaker-remote Setup Overview (Devharakar, Shilpa) 3. Re: [tc] weekly update (Andreas Jaeger) 4. Re: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org (Doug Hellmann) ---------------------------------------------------------------------- Message: 1 Date: Mon, 19 Aug 2019 12:26:48 -0400 From: Mohammed Naser To: OpenStack Discuss Subject: [tc] weekly update Message-ID: Content-Type: text/plain; charset="UTF-8" Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Retired projects - Networking-generic-switch-tempest (from networking-generic-switch): https://review.opendev.org/#/c/674430/ # General changes - Michał Dulko as Kuryr PTL: https://review.opendev.org/#/c/674624/ - Added a mission to Swift taken from its wiki: https://review.opendev.org/#/c/675307/ - Sean McGinnis as release PTL: https://review.opendev.org/#/c/675246/ Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com ------------------------------ Message: 2 Date: Mon, 19 Aug 2019 16:34:26 +0000 From: "Devharakar, Shilpa" To: "openstack-discuss at lists.openstack.org" Subject: RE: [masakari] pacemaker-remote Setup Overview Message-ID: Content-Type: text/plain; charset="utf-8" Hi Ralf Teckelmann, Sorry resending by correcting subject with appropriate subject. > Hello, > > Utilizing openstack-ansible we successfully installed all the masakari services. Besides masakari-hostmonitor all are running fine. > For the hostmonitor a pacemaker cluster is missing. > > Can anyone give me an overview how the pacemaker cluster setup would look like? > Which (pacemaker) services is running where (compute nodes, something on any other node,...), etc? > > Best regards, > > Ralf Teckelmann Sorry for the late answer... Masakari team is in process of 'adding devstack support to install host-monitor', please refer community patch [1]. You can refer 'devstack/plugin.sh', please note it has IPMI Hardware support dependency. 
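For orientation, enabling a devstack plugin like that one generally comes down to a local.conf stanza of this shape. The service names below are assumptions rather than something taken from the patch, so check the plugin's devstack/settings for the real ones.

[[local|localrc]]
enable_plugin masakari https://opendev.org/openstack/masakari
# hypothetical service toggles; verify against devstack/settings
enable_service masakari-api
enable_service masakari-engine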
Also PFA 'detailed_steps_on_ubuntu_for_pacemaker_verification.txt' for verification of same carried out on Ubuntu distribution (installed openstack using devstack with Masakari enabled). [1]: https://review.opendev.org/#/c/671200/ Add devstack support to install host-monitor Thanks with Regards, Shilpa Shivaji Devharakar| Software Development Supervisor | NTT DATA Services| w. 91-020- 67095703 | Shilpa.Devharakar at nttdata.com | Learn more at nttdata.com/americas -----Original Message----- From: openstack-discuss-request at lists.openstack.org Sent: Monday, August 12, 2019 6:30 PM To: openstack-discuss at lists.openstack.org Subject: openstack-discuss Digest, Vol 10, Issue 59 Send openstack-discuss mailing list submissions to openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss or, via email, send a message with subject or body 'help' to openstack-discuss-request at lists.openstack.org You can reach the person managing the list at openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: 1. Re: [tripleo][openstack-ansible] Integrating ansible-role-collect-logs in OSA (Arx Cruz) 2. [swauth][swift] Retiring swauth (Ondrej Novy) 3. [ironic] Resuming having weekly meetings (Julia Kreger) 4. [masakari] pacemaker-remote Setup Overview (Teckelmann, Ralf, NMU-OIP) ---------------------------------------------------------------------- Message: 1 Date: Mon, 12 Aug 2019 12:32:34 +0200 From: Arx Cruz To: Jean-Philippe Evrard Cc: openstack-discuss at lists.openstack.org Subject: Re: [tripleo][openstack-ansible] Integrating ansible-role-collect-logs in OSA Message-ID: Content-Type: text/plain; charset="utf-8" Hello, I've started to split the logs collection tasks in small tasks [1] in order to allow other users to choose what exactly they want to collect. For example, if you don't need the openstack information, or if you don't care about networking, etc. Please take a look. I'll also add it on the OSA agenda for tomorrow's meeting. Kind regards, 1 - https://review.opendev.org/#/c/675858/ On Mon, Jul 22, 2019 at 8:44 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Sorry for the late answer... > > On Wed, 2019-07-10 at 12:12 -0600, Wesley Hayutin wrote: > > > > These are of course just passed in as extra-config. I think each > > project would want to define their own list of files and maintain it > > in their own project. WDYT? > > Looks good. We can either clean up the defaults, or OSA can just > override the defaults, and it would be good enough. I would say that > this can still be improved later, after OSA has started using the role > too. > > > It simple enough. But I am happy to see a different approach. > > Simple is good! > > > Any thoughts on additional work that I am not seeing? > > None :) > > > > > Thanks for responding! I know our team is very excited about the > > continued collaboration with other upstream projects, so thanks!! > > > > Likewise. Let's reduce tech debt/maintain more code together! > > Regards, > Jean-Philippe Evrard (evrardjp) > > > > > -- Arx Cruz Software Engineer Red Hat EMEA arxcruz at redhat.com @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 2 Date: Mon, 12 Aug 2019 12:56:15 +0200 From: Ondrej Novy To: openstack-discuss at lists.openstack.org Subject: [swauth][swift] Retiring swauth Message-ID: Content-Type: text/plain; charset="utf-8" Hi, because swauth is not compatible with current Swift, doesn't support Python 3, I don't have time to maintain it and my employer is not interested in swauth, I'm going to retire swauth project. If nobody take over it, I will start removing swauth from opendev on 08/24. Thanks. -- Best regards Ondřej Nový -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 12 Aug 2019 07:55:07 -0400 From: Julia Kreger To: openstack-discuss Subject: [ironic] Resuming having weekly meetings Message-ID: Content-Type: text/plain; charset="UTF-8" All, I meant to send this specific email last week, but got distracted and $life. I believe we need to go back to having weekly meetings. The couple times I floated this in the past two weeks, there hasn't seemed to be any objections, but I also did not perceive any real thoughts on the subject. While the concept and use of office hours has seemingly helped bring some more activity to our IRC channel, we don't have a check-point/sync-up mechanism without an explicit meeting. With that being said, I'm going to start the meeting today and if we have quorum, try to proceed with it today. -Julia ------------------------------ Message: 4 Date: Mon, 12 Aug 2019 12:59:20 +0000 From: "Teckelmann, Ralf, NMU-OIP" To: "openstack-discuss at lists.openstack.org" Subject: [masakari] pacemaker-remote Setup Overview Message-ID: Content-Type: text/plain; charset="utf-8" Hello, Utilizing openstack-ansible we successfully installed all the masakari services. Besides masakari-hostmonitor all are running fine. For the hostmonitor a pacemaker cluster is missing. Can anyone give me an overview how the pacemaker cluster setup would look like? Which (pacemaker) services is running where (compute nodes, something on any other node,...), etc? Best regards, Ralf Teckelmann -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ openstack-discuss mailing list openstack-discuss at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss ------------------------------ End of openstack-discuss Digest, Vol 10, Issue 59 ************************************************* Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. ------------------------------ Message: 3 Date: Mon, 19 Aug 2019 18:36:01 +0200 From: Andreas Jaeger To: Mohammed Naser , OpenStack Discuss , juliaashleykreger at gmail.com Subject: Re: [tc] weekly update Message-ID: <803b9cce-9887-95a5-b257-21f164f5569a at suse.com> Content-Type: text/plain; charset=utf-8 On 19/08/2019 18.26, Mohammed Naser wrote: > Hi everyone, > > Here’s the update for what happened in the OpenStack TC this week. You > can get more information by checking for changes in > openstack/governance repository. 
> > # Retired projects > - Networking-generic-switch-tempest (from networking-generic-switch): > https://review.opendev.org/#/c/674430/ The process for retiring is documented here: https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project Could you follow those steps to get the system out of Zuul properly, please? Andreas > # General changes > - Michał Dulko as Kuryr PTL: https://review.opendev.org/#/c/674624/ > - Added a mission to Swift taken from its wiki: > https://review.opendev.org/#/c/675307/ > - Sean McGinnis as release PTL: https://review.opendev.org/#/c/675246/ > > Thanks for tuning in! > > Regards, > Mohammed > -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg GF: Nils Brauckmann, Felix Imendörffer, Enrica Angelone, HRB 247165 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 ------------------------------ Message: 4 Date: Mon, 19 Aug 2019 12:48:40 -0400 From: Doug Hellmann To: Jeremy Stanley Cc: openstack-discuss at lists.openstack.org Subject: Re: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org Message-ID: <740FFC7B-F7F0-427C-95F8-C6D6E8A0FA7E at doughellmann.com> Content-Type: text/plain;charset=utf-8 > On Aug 19, 2019, at 10:59 AM, Jeremy Stanley wrote: > > On 2019-08-19 13:17:35 +0000 (+0000), Alexandra Settle wrote: > [...] >> Doug has rightfully pointed out that whilst the documentation team >> as it stands today is no longer "integral", the docs.openstack.org >> web site is. An owner is required. >> >> The suggestion is for the Release Management team is the "least >> worst" (thanks, tonyb) place for the website manaagement to land. >> As Tony points out, this requires learning new tools and processes >> but the individuals working on the docs team currently have no >> intention to leave, and are around to help manage this from the >> SIG. >> >> Open to discussion and suggestions, but to summarise the proposal >> here: >> >> docs.openstack.org ownership is to transition to be the >> responsibility of the Release Management team officially provided >> there are no strong objections. > [...] > > The prose above is rather vague on what "the docs.openstack.org web > site" entails. Inferring from Doug's comments on 657142, I think you > and he are referring specifically to the content in the > https://opendev.org/openstack/openstack-manuals/src/branch/master/www > subtree. Is that pretty much it? The Apache virtual host > configuration and filesystem hosting the site itself are managed by > the Infra/OpenDev team, and there aren't any plans to change that as > far as I'm aware. > -- > Jeremy Stanley Yes, that’s correct. Doug ------------------------------ Subject: Digest Footer _______________________________________________ openstack-discuss mailing list openstack-discuss at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss ------------------------------ End of openstack-discuss Digest, Vol 10, Issue 94 ************************************************* Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. 
From mnaser at vexxhost.com Tue Aug 20 12:02:13 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 20 Aug 2019 08:02:13 -0400 Subject: [all][tc] A web tool which helps administrators in managing openstack clusters In-Reply-To: References: Message-ID: On Tue, Aug 20, 2019 at 6:21 AM Douglas Zhang wrote: > > Hello everyone, > > To help users interact with openstack, we’re currently developing a client-side web tool which enables administrators to manage their openstack cluster in a more efficient and convenient way. (Since we have not named it officially yet, I’m going to call it openstack-admin) > > # Introduction > > Some may ask, “Why do we need an extra web-based user interface since we have horizon?” Well, although horizon is a mature and powerful dashboard, it is far not efficient enough on big clusters (a simple list operation could take seconds to complete). What’s more, its flexibility of searching could not match our requirements. To overcome obstacles above, a more efficient tool is urgently required, that’s why we started to develop openstack-admin. That's great. Thanks for working on something like this. > # Highlights > > Comparing with the current user interface, openstack-admin has following advantages: > > Fast: openstack-admin gets data straightly from SQL databases instead of calling standard openstack API, which accelerates the querying period to a large extent (especially when we’re dealing with a large amount of data). While I agree with you that querying database is much faster, this introduces two issues that I imagine for users: - Dashboards generally having direct access via SQL is a scary thing from an operators perspective, also, it will make maintaining the project quite hard because I don't think any projects expose a *stable* database API. > Flexible: openstack-admin supports the fuzzy search for any important field(e.g. display_name/uuid/ip_address/project_name of an instance), which enables users to locate a particular object in no time. This is really useful to be honest, but we probably can work around it by using the filtering that APIs provide. > User-friendly: the backend of openstack-admin gets necessary messages from the message queue used by nova, and send them to the frontend using websocket. This way, not only more realistic progress bars could be implemented, but more detailed information could be provided to users as well. Neat. > # Issues > > To make this tool more efficient and provide better support for concurrency, we chose Golang to implement openstack-admin. As I’ve asked before (truly appreciate advises from Jeremy and Ghanshyam), a project written by an unofficial language may be accepted only if existing languages have been proven to not meet the technical requirements, so we’re considering re-implementing openstack-admin using python if we can’t come to an agreement on the language issue. > > So that’s all. How do you guys think of this project? I like the idea overall. However, I'm not for or against the whole software that gets deeply integrated/engrained within the systems. It's not something that we've ever done before, but I don't know why we wouldn't do it either. The only concern is if we want to make this really super integrated, most of the OpenStack projects are written in Python so having something that lives deep in the heart of the deployment (i.e. connected to databases and message queues) would probably best be using Python to make it easier to keep running together. 
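On the filtering point above, a minimal openstacksdk sketch looks like the following; the cloud name is an assumed clouds.yaml entry. The 'name' and 'status' arguments are sent as query parameters, so the matching happens server-side in Nova rather than in the client.

import openstack

# 'mycloud' is an assumed clouds.yaml entry with admin credentials
conn = openstack.connect(cloud='mycloud')

# filters are passed through to the compute API as query parameters;
# Nova treats 'name' as a regex match, so this is already a fuzzy search
for server in conn.compute.servers(all_projects=True,
                                   name='web', status='ACTIVE'):
    print(server.id, server.name)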
Having said that, do you think perhaps we can take some of the finding/improvements that you've done in this and apply them into Horizon, perhaps that might be the path to most success? I'm not attached to any of those ideas but trying to throw things out to spark some other ideas that anyone else has. > Thanks, > > Douglas Zhang -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Tue Aug 20 12:03:55 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 20 Aug 2019 08:03:55 -0400 Subject: How are we managing quotas, flavors, projects and more across large org Openstack deployments? In-Reply-To: References: Message-ID: On Mon, Aug 19, 2019 at 1:50 PM Robert Lennie wrote: > > The Operations Dashboard System that we currently use currently exists but as I mentioned it has several shortcomings (chief of them being that it is fragile, overly complex and frankly it probably needs to be completely re-written and simplified). > > For that reason one big consideration for us is what replacements might exist upstream? What are other organizations using to manage large deployments? Is there anything that it does that OpenStack Horizon doesn't do? > > On Fri, Aug 16, 2019 at 7:32 PM Amrith Kumar wrote: >> >> Could you describe what the system you are building would do. >> >> Thanks! >> >> On Fri, Aug 16, 2019, 17:30 Robert Lennie wrote: >>> >>> Hi, >>> >>> I am a complete neophyte to this forum so I may have missed any earlier discussions on this topic if there have been any? >>> >>> I am currently working on a proprietary legacy (Django based) "OpenStack Operations Manager" dashboard system that attempts to manage quotas, flavors, clusters and more across a very large distributed OpenStack deployment for multiple customers. >>> >>> There are a number of potential shortcomings with our current dashboard system and I was wondering how other teams in other organizations with similar large Openstack deployments are handling these types of system management issues? The issues addressed are likely common to other large environments. >>> >>> In particular what tools might exist upstream that allow system (particularly quota and flavor) management across large distributed Openstack environments? >>> >>> I welcome any feedback, comments, suggestions and any other information regarding the tools that currently exist or may be planned for this purpose? >>> >>> Regards, >>> >>> Robert Lennie >>> Principal Software Engineer >>> Verizon Media >>> -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From anlin.kong at gmail.com Tue Aug 20 12:43:09 2019 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 21 Aug 2019 00:43:09 +1200 Subject: [all][tc] A web tool which helps administrators in managing openstack clusters In-Reply-To: References: Message-ID: Hi Douglas, Sounds like a great project, thanks for proposing that in the community and trying to open source it. We have been using Horizon for a long time but not satisfied with the design and the slow progress of the innovation. I have a few questions/suggestions: 1. It'd be great and gain more attractions if you could provide a demo about how "openstack-admin" looks like 2. What OpenStack services has "openstack-admin" already integrated? Is it easy to integrate with others? 
- Best regards, Lingxian Kong Catalyst Cloud On Tue, Aug 20, 2019 at 10:22 PM Douglas Zhang wrote: > Hello everyone, > > To help users interact with openstack, we’re currently developing a > client-side web tool which enables administrators to manage their openstack > cluster in a more efficient and convenient way. (Since we have not named it > officially yet, I’m going to call it openstack-admin) > > *# Introduction* > > Some may ask, “Why do we need an extra web-based user interface since we > have horizon?” Well, although horizon is a mature and powerful dashboard, > it is far not efficient enough on big clusters (a simple list operation > could take seconds to complete). What’s more, its flexibility of searching > could not match our requirements. To overcome obstacles above, a more > efficient tool is urgently required, that’s why we started to develop > openstack-admin. > > *# Highlights* > > Comparing with the current user interface, openstack-admin has following > advantages: > > - > > *Fast*: openstack-admin gets data straightly from SQL databases > instead of calling standard openstack API, which accelerates the querying > period to a large extent (especially when we’re dealing with a large amount > of data). > - > > *Flexible*: openstack-admin supports the fuzzy search for any > important field(e.g. display_name/uuid/ip_address/project_name of an > instance), which enables users to locate a particular object in no time. > - > > *User-friendly*: the backend of openstack-admin gets necessary > messages from the message queue used by nova, and send them to the frontend > using websocket. This way, not only more realistic progress bars could be > implemented, but more detailed information could be provided to users as > well. > > *# Issues* > > To make this tool more efficient and provide better support for > concurrency, we chose Golang to implement openstack-admin. As I’ve asked > before (truly appreciate advises from Jeremy and Ghanshyam), a project > written by an unofficial language may be accepted only if existing > languages have been proven to not meet the technical requirements, so we’re > considering re-implementing openstack-admin using python if we can’t come > to an agreement on the language issue. > > So that’s all. How do you guys think of this project? > > Thanks, > > Douglas Zhang > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue Aug 20 13:24:44 2019 From: corvus at inaugust.com (James E. Blair) Date: Tue, 20 Aug 2019 06:24:44 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <439B3C2E-E531-4CA5-8CAF-BE0ECCBE4227@redhat.com> (Sorin Sbarnea's message of "Tue, 20 Aug 2019 12:21:36 +0100") References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc> <87mug8j0nh.fsf@meyer.lemoncheese.net> <439B3C2E-E531-4CA5-8CAF-BE0ECCBE4227@redhat.com> Message-ID: <87k1b8c6lf.fsf@meyer.lemoncheese.net> Sorin Sbarnea writes: > I am really glad to see the improvements on the log browsing UX. > > Can we also add address few other issues? like: > > a) auto-wrap long lines, even your link is a perfect example of > endless horizontal bar created by pip requirements line. log should > never need horizontal scrolling in browser. > b) coloring in logs? > c) auto hiding/collapsing the timestamp column -- it is boilerplate in > most cases taking 20% entire screen real estate. same info could be > visible as tooltip > d) shorter urls? 
do we really need "602ab1629aca4f4ebe21ff7024884f87" > as build number of we can get around with 3-6 chars? -- i always need > to horizontally scroll the URL bar to see the file I am on. > > PS. I am more than willing to invest some of my personal time on addressing these. If you would like to implement some of those, the source is in the web/src directory of the zuul repo. Local development against the production OpenDev data is easy, and this information may help you get started: https://zuul-ci.org/docs/zuul/developer/javascript.html B seems straightforward. D is probably best achieved not by shortening the URL, but by displaying the filename on the page and/or in the page title. A and C are matters of personal preference. You may want to discuss those with the upstream Zuul community on the zuul-discuss at lists.zuul-ci.org mailing list. -Jim From e0ne at e0ne.info Tue Aug 20 13:28:18 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 20 Aug 2019 16:28:18 +0300 Subject: [Horizon] [stable] Adding Radomir Dopieralski to horizon-stable-maint Message-ID: Hi team, I'd like to propose adding Radomir Dopieralski to the horizon-stable-maint team. He's doing good quality reviews for stable branches [1] on a regular basis and I think Radomir will be a good member of our small group. [1] https://review.opendev.org/#/q/reviewer:openstack%2540sheep.art.pl+NOT+branch:master Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Tue Aug 20 14:12:16 2019 From: aj at suse.com (Andreas Jaeger) Date: Tue, 20 Aug 2019 16:12:16 +0200 Subject: [all][infra] Changes to docs publishing (promote pipeline) Message-ID: <6787d191-00d4-1221-fb3d-d130df121fb9@suse.com> We have set up in Zuul a new way of publishing artifacts and are beginning to use it now for documentation jobs [1]: the documents built in the gate queue are published as-is, simply copied over to our webserver in the promote pipeline. The jobs for building still have the same names (a small implementation change); only the publish job has changed: it is now in the promote pipeline. There's no need for you to do anything. This work has been done for openstack-tox-docs (used by most OpenStack projects), api-ref and api-guide jobs, and for openstack-manuals and friends. To reiterate, the old way was: 1) Build docs in gate pipeline 2) Merge 3) Build docs in post pipeline (same git tree!) 4) Publish Now we do: 1) Build docs in gate pipeline 2) Merge 3) Take the logs from the gate pipeline and publish them You will notice a changed behaviour: the promote pipeline reports to gerrit - unlike the post one. So, you see that in the review and also get notified via email. Note that changes currently in the gate pipeline might have used the previous openstack-tox-docs implementation and will fail since the promote job needs the changed implementation. The next publish will work. If you have any questions, please ask in the #openstack-infra IRC channel. [1] https://review.opendev.org/677009 -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From openstack at fried.cc Tue Aug 20 14:34:33 2019 From: openstack at fried.cc (Eric Fried) Date: Tue, 20 Aug 2019 09:34:33 -0500 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <87k1b8c6lf.fsf@meyer.lemoncheese.net> References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc> <87mug8j0nh.fsf@meyer.lemoncheese.net> <439B3C2E-E531-4CA5-8CAF-BE0ECCBE4227@redhat.com> <87k1b8c6lf.fsf@meyer.lemoncheese.net> Message-ID: <1c3dfedb-6e26-e20b-09b9-484d28509f66@fried.cc> >> a) auto-wrap long lines, even your link is a perfect example of >> endless horizontal bar created by pip requirements line. log should >> never need horizontal scrolling in browser. > A and C are matters of personal preference. FWIW, I requested this as a "toggle wrap" button. My CSS/DHTML is rusty, but can't you just twiddle `overflow-wrap` on and off? efried . From mthode at mthode.org Tue Aug 20 14:44:01 2019 From: mthode at mthode.org (Matthew Thode) Date: Tue, 20 Aug 2019 09:44:01 -0500 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <20190819145437.GA29162@zeong> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> Message-ID: <20190820144401.4dculyxz3s2jqgyk@mthode.org> On 19-08-19 10:54:37, Matthew Treinish wrote: > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > NOVA: > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > websockify===0.9.0 tempest test failing > > > > KEYSTONE: > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > NEUTRON: > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > this could be caused by pytest===5.1.0 as well > > > > KURYR: > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > https://review.opendev.org/665352 > > > > MISC: > > tornado===5.1.1 salt is cauing this, no eta on fix (same as the last year) > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > fixing a compatibility issue with python 3.4, which we don't officially support > in stestr but was a simple fix. The blocker is actually not an stestr issue, > it's a testtools bug: > > https://github.com/testing-cabal/testtools/issues/272 > > Where this is coming into play here is that stestr 2.5.0 switched to using an > internal test runner built off of stdlib unittest instead of testtools/subunit > for python 3. This was done to fix a huge number of compatibility issues people > had reported when trying to run stdlib unittest suites using stestr on > python >= 3.5 (which were caused by unittest2 and testools). The complication > for openstack (more specificially tempest) is that it's built off of testtools > not stdlib unittest. So when tempest raises 'self.skipException' as part of > it's class level skip checks testtools raises 'unittest2.case.SkipTest' instead > of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is > and treats it as an unhandled exception which is a test failure, instead of the > intended skip result. 
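To make the skip-exception mismatch above concrete, a toy case (not tempest itself, and assuming the testtools/unittest2 versions current at the time) would be:

# Toy reproduction of the mismatch described above. With the
# testtools of this era, self.skipException resolves to
# unittest2.case.SkipTest; stdlib unittest only recognizes its own
# unittest.case.SkipTest, so it records an error instead of a skip.
import unittest
import testtools

class ToySkip(testtools.TestCase):
    def test_conditional_skip(self):
        raise self.skipException('backend not available')

if __name__ == '__main__':
    # run under stdlib unittest, as stestr 2.5.0 does on python 3
    unittest.main()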
[1] This is actually a general bug and will come up whenever > anyone tries to use stdlib unittest to run tempest. We need to come up with a > fix for this problem in testtools [2] or just workaround it in tempest. > > [1] skip decorators typically aren't effected by this because they set an > attribute that gets checked before the test method is executed instead of > relying on an exception, which is why this is mostly only an issue for tempest > because it does a lot of run time skips via exceptions. > > [2] testtools is mostly unmaintained at this point, I was recently granted > merge access but haven't had much free time to actively maintain it > > -Matt Treinish > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > I'm trying to get this in place as we are getting closer to the > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > would be appreciated. > > > > -- > > Matthew Thode > Thanks for the clarification, now to get the other projects to pay attention :| -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mihalis68 at gmail.com Tue Aug 20 14:45:00 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 20 Aug 2019 10:45:00 -0400 Subject: ops meetup team meeting 2019-8-20 Message-ID: Minutes for the openstack ops meetups team meeting today are linked below. Here are some notes: - Attendance at the upcoming NYC meetup (hosted at Bloomberg) is up to 40 and still growing - Hoping to arrange a social event on the evening of day 1, a sponsor for that is still being sought - easiest way to get updates is to follow https://twitter.com/osopsmeetup - one of the sessions will propose bringing more ceph content to these meetups or starting a new similar dedicated ceph operators meetup series Meeting ended Tue Aug 20 14:37:20 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:37 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-08-20-14.00.html 10:37 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-08-20-14.00.txt 10:37 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2019/ops_meetup_team.2019-08-20-14.00.log.html Cheers, Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Shilpa.Devharakar at nttdata.com Tue Aug 20 11:49:28 2019 From: Shilpa.Devharakar at nttdata.com (Devharakar, Shilpa) Date: Tue, 20 Aug 2019 11:49:28 +0000 Subject: [masakari] how to install masakari on centos 7 (Vu Tan) Message-ID: Hi Vu Tan, Sorry for the attachment, instead please refer [1] Here are the steps we verified for Masakari installation on Ubuntu Distribution with openstack installed via devstack. [1]: http://paste.openstack.org/show/760303/ > Hi Vu Tan, > > Sorry for late reply. > Here are the steps we verified on Ubuntu Distribution, installed openstack via devstack. > > PFA 'detailed_steps_to_install_masakari.txt' > Do not refer '/masakari/etc/Masakari.conf' sample file, instead generate it using 'tox -egenconfig' after 'git clone' Thanks with Regards, Shilpa Shivaji Devharakar| Software Development Supervisor | NTT DATA Services| w. 
91-020- 67095703 | Shilpa.Devharakar at nttdata.com | Learn more at nttdata.com/americas -----Original Message----- From: openstack-discuss-request at lists.openstack.org Sent: Monday, August 19, 2019 10:42 PM To: openstack-discuss at lists.openstack.org Subject: openstack-discuss Digest, Vol 10, Issue 95 Send openstack-discuss mailing list submissions to openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss or, via email, send a message with subject or body 'help' to openstack-discuss-request at lists.openstack.org You can reach the person managing the list at openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: 1. Re: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org (Doug Hellmann) 2. Re: [openstack-dev] Dropping lazy translation support (Shengjing Zhu) 3. [neutron] Bug Deputy Report August 12 - August 19 (Ryan Tidwell) 4. Re: [masakari] how to install masakari on centos 7 (Vu Tan) (Devharakar, Shilpa) ---------------------------------------------------------------------- Message: 1 Date: Mon, 19 Aug 2019 12:59:32 -0400 From: Doug Hellmann To: Sean McGinnis Cc: Alexandra Settle , OpenStack Discuss Subject: Re: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org Message-ID: <9DABCC6E-1E61-45A6-8370-4F086428B3B6 at doughellmann.com> Content-Type: text/plain;charset=utf-8 > On Aug 19, 2019, at 11:41 AM, Sean McGinnis wrote: > >> >> Quick recap: The documentation team is set to disband as an official >> project, and make the leap to becoming a SIG (Special Interest Group). >> >> >> The remaining individuals working on the docs team are sorting >> through what's left and where it is best placed. In [1], Doug has >> rightfully pointed out that whilst the documentation team as it >> stands today is no longer "integral", the docs.openstack.org web site >> is. An owner is required. >> > > Unless things have changed, SIGs can be owners of a resource published > via docs.openstack.org (and I am assuming that means be extension, > docs.o.o itself). Is there a reason the Docs SIG would not still be > able to own the site? > >> The suggestion is for the Release Management team is the "least worst" >> (thanks, tonyb) place for the website manaagement to land. As Tony >> points out, this requires learning new tools and processes but the >> individuals working on the docs team currently have no intention to >> leave, and are around to help manage this from the SIG. >> > > I'm personally fine with the release team taking this on, but it does > seem like an odd fit. I think it would make a lot more sense for the > Docs SIG to own the docs site than the release team. Thierry has always drawn (one of) the distinction(s) between teams and SIGs as being that teams own tasks that are “part of” the thing we produce that we call OpenStack, while SIGs are less formally part of that input to that production process. Updating the docs site every time we have a new release felt like it should be part of the formal process, and definitely not something we should leave to chance. The cadence made me think the release team could be a good home. It’s 95% automated now, for what that’s worth. I imagine someone could automate the step of adding the new series pages, although I’m not sure what the trigger would be. 
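That automation could be as small as the sketch below; the data file path and schema are invented for illustration and do not match openstack-manuals.

# Hypothetical: append a new series entry to the generator's data file.
import yaml

def add_series(data_path, name, status='development'):
    with open(data_path) as f:
        series = yaml.safe_load(f) or []
    if not any(s.get('name') == name for s in series):
        series.append({'name': name, 'status': status})
        with open(data_path, 'w') as f:
            yaml.safe_dump(series, f, default_flow_style=False)

add_series('www/series.yaml', 'ussuri')  # invented path and series name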
We could also look at other ways to build the site that don’t require any action each cycle, of course, including not updating it at all and only publishing docs out of master. That’s especially appealing if we don’t have anyone in the community willing and able to pick up the work. > >> Open to discussion and suggestions, but to summarise the proposal here: >> >> docs.openstack.org ownership is to transition to be the >> responsibility of the Release Management team officially provided >> there are no strong objections. >> >> Thanks, >> >> Alex >> IRC: asettle >> >> [1] https://review.opendev.org/#/c/657142/ >> [2] https://review.opendev.org/#/c/657141/ > ------------------------------ Message: 2 Date: Mon, 19 Aug 2019 20:46:05 +0800 From: Shengjing Zhu To: openstack-discuss at lists.openstack.org Cc: mriedemos at gmail.com Subject: Re: [openstack-dev] Dropping lazy translation support Message-ID: Content-Type: text/plain; charset="UTF-8" Sorry for replying the old mail, and please cc me when reply. Matt Riedemann writes: > This is a follow up to a dev ML email [1] where I noticed that some > implementations of the upgrade-checkers goal were failing because some > projects still use the oslo_i18n.enable_lazy() hook for lazy log > message translation (and maybe API responses?). > > The very old blueprints related to this can be found here [2][3][4]. > > If memory serves me correctly from my time working at IBM on this, > this was needed to: > > 1. Generate logs translated in other languages. > > 2. Return REST API responses if the "Accept-Language" header was used > and a suitable translation existed for that language. > > #1 is a dead horse since I think at least the Ocata summit when we > agreed to no longer translate logs since no one used them. > > #2 is probably something no one knows about. I can't find end-user > documentation about it anywhere. It's not tested and therefore I have > no idea if it actually works anymore. > > I would like to (1) deprecate the oslo_i18n.enable_lazy() function so > new projects don't use it and (2) start removing the enable_lazy() > usage from existing projects like keystone, glance and cinder. > > Are there any users, deployments or vendor distributions that still > rely on this feature? If so, please speak up now. I was pointed to this discussion when I tried to fix this feature in keystone, https://review.opendev.org/677117 For #2 translated API response, this feature probably hasn't been working for some time, but it's still a valid user case. Has the decision been settled? -- Regards, Shengjing Zhu ------------------------------ Message: 3 Date: Mon, 19 Aug 2019 11:32:23 -0500 From: Ryan Tidwell To: openstack-discuss at lists.openstack.org Subject: [neutron] Bug Deputy Report August 12 - August 19 Message-ID: Content-Type: text/plain; charset="utf-8" Hello Neutron Team, Here is my bug deputy report for August 12 - August 19: Medium: -[OVS agent] Physical bridges can't be initialized if there is no connectivity to rabbitmq https://bugs.launchpad.net/neutron/+bug/1840443 The fix for this has been merged and a number of backports have merged as well. It appears to have been handled by the team. - excessive number of dvrs where vm got a fixed ip on floating network https://bugs.launchpad.net/bugs/1840579 https://review.opendev.org/#/c/677092/ has been proposed and is being worked on. This appears to be related to how we determine whether to instantiate a DVR local router on a compute node. 
It looks like there is a case where we create a local router when it's not necessary, thereby consuming too many IP's on the floating IP network at scale. Undecided: - network/subnet resources cannot be read and written separated https://bugs.launchpad.net/neutron/+bug/1840638 https://review.opendev.org/#/c/677166/ has been proposed. This was just reported just a few hours ago and still needs some triage to confirm severity of the issue. RFE's: - [RFE] Add new config option to enable IGMP snooping in ovs https://bugs.launchpad.net/bugs/1840136 Regards, Ryan Tidwell -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 4 Date: Mon, 19 Aug 2019 16:33:40 +0000 From: "Devharakar, Shilpa" To: "openstack-discuss at lists.openstack.org" Subject: Re: [masakari] how to install masakari on centos 7 (Vu Tan) Message-ID: Content-Type: text/plain; charset="utf-8" Hi Vu Tan, Sorry for late reply. Here are the steps we verified on Ubuntu Distribution, installed openstack via devstack. PFA 'detailed_steps_to_install_masakari.txt' Do not refer '/masakari/etc/Masakari.conf' sample file, instead generate it using 'tox -egenconfig' after 'git clone' > Hi Patil, > May I know how is it going ? > >> On Tue, Jul 23, 2019 at 10:18 PM Vu Tan wrote: >> >> Hi Patil, >> Thank you for your reply, please instruct me if you successfully >> install it. Thanks a lot >> >> On Tue, Jul 23, 2019 at 8:12 PM Patil, Tushar > > >> wrote: >> >> Hi Vu Tan, >> >> I'm trying to install Masakari using source code to reproduce the issue. >> If I hit the same issue as yours, I will troubleshoot this issue and >> let you know the solution or will update you what steps I have >> followed to bring up Masakari services successfully. >> >> Regards, >> Tushar Patil >> >> ________________________________________ >> From: Vu Tan >> Sent: Monday, July 22, 2019 12:33 PM >> To: Gaëtan Trellu >> Cc: Patil, Tushar; openstack-discuss at lists.openstack.org >> Subject: Re: [masakari] how to install masakari on centos 7 >> >> Hi Patil, >> May I know when the proper document for masakari is released ? I have >> configured conf file in controller and compute node, it seems running >> but it is not running as it should be, a lots of error in logs, here >> is a sample log: >> >> 2.7/site-packages/oslo_config/cfg.py:3024 >> 2019-07-19 10:25:26.360 7745 DEBUG oslo_service.service [-] bindir >> = /usr/local/bin log_opt_values /usr/lib/ >> python2.7/site-packages/oslo_config/cfg.py:3024 >> 2019-07-19 18:46:21.291 7770 ERROR masakari File >> "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 65, >> in _is_daemo n >> 2019-07-19 18:46:21.291 7770 ERROR masakari is_daemon = os.getpgrp() >> != os.tcgetpgrp(sys.stdout.fileno()) >> 2019-07-19 18:46:21.291 7770 ERROR masakari OSError: [Errno 5] >> Input/output error >> 2019-07-19 18:46:21.291 7770 ERROR masakari >> 2019-07-19 18:46:21.300 7745 CRITICAL masakari [-] Unhandled error: >> OSError: [Errno 5] Input/output error >> 2019-07-19 18:46:21.300 7745 ERROR masakari Traceback (most recent >> call >> last): >> 2019-07-19 18:46:21.300 7745 ERROR masakari File >> "/usr/bin/masakari-api", line 10, in >> >> I dont know if it is missing package or wrong configuration >> >> >> On Thu, Jul 11, 2019 at 6:14 PM Gaëtan Trellu < >> gaetan.trellu at incloudus.com> wrote: >> You will have to enable the debit, debug = true and check the APi log. >> >> Did you try to use the openstack CLi ? 
>> >> Gaetan >> >> On Jul 11, 2019 12:32 AM, Vu Tan > vungoctan252 at gmail.com>> wrote: >> I know it's just a warning, just take a look at this image: >> [image.png] >> it's just hang there forever, and in the log show what I have shown >> to you >> >> On Wed, Jul 10, 2019 at 8:07 PM Gaëtan Trellu < >> gaetan.trellu at incloudus.com> wrote: >> This is just a warning, not an error. >> >> On Jul 10, 2019 3:12 AM, Vu Tan > vungoctan252 at gmail.com>> wrote: >> Hi Gaetan, >> I follow you the guide you gave me, but the problem still persist, >> can you please take a look at my configuration to see what is wrong >> or what is missing in my config ? >> the error: >> 2019-07-10 14:08:46.876 17292 WARNING >> keystonemiddleware._common.config [-] The option "__file__" in conf >> is not known to auth_token >> 2019-07-10 14:08:46.876 17292 WARNING >> keystonemiddleware._common.config [-] The option "here" in conf is >> not known to auth_token >> 2019-07-10 14:08:46.882 17292 WARNING keystonemiddleware.auth_token >> [-] AuthToken middleware is set with keystone_authtoken.service_ >> >> the config: >> >> [DEFAULT] >> enabled_apis = masakari_api >> log_dir = /var/log/kolla/masakari >> state_path = /var/lib/masakari >> os_user_domain_name = default >> os_project_domain_name = default >> os_privileged_user_tenant = service >> os_privileged_user_auth_url = http://controller:5000/v3 >> os_privileged_user_name = nova os_privileged_user_password = P at ssword >> masakari_api_listen = controller masakari_api_listen_port = 15868 >> debug = False auth_strategy=keystone >> >> [wsgi] >> # The paste configuration file path >> api_paste_config = /etc/masakari/api-paste.ini >> >> [keystone_authtoken] >> www_authenticate_uri = http://controller:5000 auth_url = >> http://controller:5000 auth_type = password project_domain_id = >> default project_domain_name = default user_domain_name = default >> user_domain_id = default project_name = service username = masakari >> password = P at ssword region_name = RegionOne >> >> [oslo_middleware] >> enable_proxy_headers_parsing = True >> >> [database] >> connection = mysql+pymysql://masakari:P at ssword@controller/masakari >> >> >> >> On Tue, Jul 9, 2019 at 10:25 PM Vu Tan > vungoctan252 at gmail.com>> wrote: >> Thank Patil Tushar, I hope it will be available soon >> >> On Tue, Jul 9, 2019 at 8:18 AM Patil, Tushar >> > wrote: >> Hi Vu and Gaetan, >> >> Gaetan, thank you for helping out Vu in setting up masakari-monitors >> service. >> >> As a masakari team ,we have noticed there is a need to add proper >> documentation to help the community run Masakari services in their >> environment. We are working on adding proper documentation in this 'Train' >> cycle. >> >> Will send an email on this mailing list once the patches are uploaded >> on the gerrit so that you can give your feedback on the same. >> >> If you have any trouble in setting up Masakari, please let us know on >> this mailing list or join the bi-weekly IRC Masakari meeting on the >> #openstack-meeting IRC channel. The next meeting will be held on 16th >> July >> 2019 @0400 UTC. >> >> Regards, >> Tushar Patil >> >> ________________________________________ >> From: Vu Tan > >> Sent: Monday, July 8, 2019 11:21:16 PM >> To: Gaëtan Trellu >> Cc: openstack-discuss at lists.openstack.org> openstack-discuss at lists.openstack.org> >> Subject: Re: [masakari] how to install masakari on centos 7 >> >> Hi Gaetan, >> Thanks for pinpoint this out, silly me that did not notice the simple >> "error InterpreterNotFound: python3". 
Thanks a lot, I appreciate it.
>>
>> On Mon, Jul 8, 2019 at 9:15 PM <gaetan.trellu at incloudus.com> wrote:
>> Vu Tan,
>>
>> About the "auth_token" error: you need the "os_privileged_user_*" options
>> in your masakari.conf for the API.
>> As mentioned previously, please have a look here for an example of a
>> configuration that works (for me at least):
>>
>> - masakari.conf:
>> https://review.opendev.org/#/c/615715/42/ansible/roles/masakari/templates/masakari.conf.j2
>> - masakari-monitor.conf:
>> https://review.opendev.org/#/c/615715/42/ansible/roles/masakari/templates/masakari-monitors.conf.j2
>>
>> About your tox issue: make sure you have Python 3 installed.
>>
>> Gaëtan
>>
>> On 2019-07-08 06:08, Vu Tan wrote:
>>
>> > Hi Gaetan,
>> > I tried to generate the config file by running tox -egenconfig at the
>> > top level of masakari, but it errors out. Is masakari still in beta?
>> > [root at compute1 masakari-monitors]# tox -egenconfig
>> > genconfig create: /root/masakari-monitors/.tox/genconfig
>> > ERROR: InterpreterNotFound: python3
>> > ______________________________ summary ______________________________
>> > ERROR: genconfig: InterpreterNotFound: python3
>> >
>> > On Mon, Jul 8, 2019 at 3:24 PM Vu Tan <vungoctan252 at gmail.com> wrote:
>> > Hi,
>> > Thanks a lot for your reply. I installed pacemaker/corosync,
>> > masakari-api, and masakari-engine on the controller node, and I run
>> > masakari-api with this command: masakari-api. But I don't know whether
>> > the process is supposed to run like that or whether it just hangs. Here
>> > is what it shows when I run the command; I left it there for a while but
>> > nothing changed:
>> > [root at controller masakari]# masakari-api
>> > 2019-07-08 15:21:09.946 30250 INFO masakari.api.openstack [-] Loaded extensions: ['extensions', 'notifications', 'os-hosts', 'segments', 'versions']
>> > 2019-07-08 15:21:09.955 30250 WARNING keystonemiddleware._common.config [-] The option "__file__" in conf is not known to auth_token
>> > 2019-07-08 15:21:09.955 30250 WARNING keystonemiddleware._common.config [-] The option "here" in conf is not known to auth_token
>> > 2019-07-08 15:21:09.960 30250 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to False. This is backwards compatible but deprecated behaviour. Please set this to True.
>> > 2019-07-08 15:21:09.974 30250 INFO masakari.wsgi [-] masakari_api listening on 127.0.0.1:15868
>> > 2019-07-08 15:21:09.975 30250 INFO oslo_service.service [-] Starting 4 workers
>> > 2019-07-08 15:21:09.984 30274 INFO masakari.masakari_api.wsgi.server [-] (30274) wsgi starting up on http://127.0.0.1:15868
>> > 2019-07-08 15:21:09.985 30275 INFO masakari.masakari_api.wsgi.server [-] (30275) wsgi starting up on http://127.0.0.1:15868
>> > 2019-07-08 15:21:09.992 30277 INFO masakari.masakari_api.wsgi.server [-] (30277) wsgi starting up on http://127.0.0.1:15868
>> > 2019-07-08 15:21:09.994 30276 INFO masakari.masakari_api.wsgi.server [-] (30276) wsgi starting up on http://127.0.0.1:15868
>> >
>> > On Sun, Jul 7, 2019 at 7:37 PM Gaëtan Trellu wrote:
>> >
>> > Hi Vu Tan,
>> >
>> > Masakari documentation doesn't really exist... I had to figure
I had to figured >> > some stuff by myself to make it works into Kolla project. >> > >> > On controller nodes you need: >> > >> > - pacemaker >> > - corosync >> > - masakari-api (openstack/masakari repository) >> > - masakari- engine (openstack/masakari repository) >> > >> > On compute nodes you need: >> > >> > - pacemaker-remote (integrated to pacemaker cluster as a resource) >> > - masakari- hostmonitor (openstack/masakari-monitor repository) >> > - masakari-instancemonitor (openstack/masakari-monitor repository) >> > - masakari-processmonitor (openstack/masakari-monitor repository) >> > >> > For masakari-hostmonitor, the service needs to have access to >> > systemctl command (make sure you are not using sysvinit). >> > >> > For masakari-monitor, the masakari-monitor.conf is a bit different, >> > you will have to configure the [api] section properly. >> > >> > RabbitMQ needs to be configured (as transport_url) on masakari-api >> > and masakari-engine too. >> > >> > Please check this review[1], you will have masakari.conf and >> > masakari-monitor.conf configuration examples. >> > >> > [1] https://review.opendev.org/#/c/615715 >> > >> > Gaëtan >> > >> > On Jul 7, 2019 12:08 AM, Vu Tan > vungoctan252 at gmail.com>> vungoctan252 at gmail.com>>> wrote: >> > >> > VU TAN > VUNGOCTAN252 at GMAIL.COM>> >> > >> > 10:30 AM (35 minutes ago) >> > >> > to openstack-discuss >> > >> > Sorry, I resend this email because I realized that I lacked of >> > prefix on this email's subject >> > >> > Hi, >> > >> > I would like to use Masakari and I'm having trouble finding a step >> > by step or other documentation to get started with. Which part >> > should be installed on controller, which is should be on compute, >> > and what is the prerequisite to install masakari, I have installed >> > corosync and pacemaker on compute and controller nodes, , what else >> > do I need to do ? step I have done so far: >> > - installed corosync/pacemaker >> > - install masakari on compute node on this github repo: >> > https://github.com/openstack/masakari >> > - add masakari in to mariadb >> > here is my configuration file of masakari.conf, do you mind to take >> > a look at it, if I have misconfigured anything? >> > >> > [DEFAULT] >> > enabled_apis = masakari_api >> > >> > # Enable to specify listening IP other than default >> > masakari_api_listen = controller # Enable to specify port other >> > than default masakari_api_listen_port = 15868 debug = False >> > auth_strategy=keystone >> > >> > [wsgi] >> > # The paste configuration file path api_paste_config = >> > /etc/masakari/api-paste.ini >> > >> > [keystone_authtoken] >> > www_authenticate_uri = http://controller:5000 auth_url = >> > http://controller:5000 auth_type = password project_domain_id = >> > default user_domain_id = default project_name = service username = >> > masakari password = P at ssword >> > >> > [database] >> > connection = mysql+pymysql://masakari:P at ssword@controller/masakari >> Disclaimer: This email and any attachments are sent in strictest >> confidence for the sole use of the addressee and may contain legally >> privileged, confidential, and proprietary data. If you are not the >> intended recipient, please advise the sender by replying promptly to >> this email and then delete and destroy this email and any attachments >> without any further use, copying or forwarding. 
Thanks with Regards,
Shilpa Shivaji Devharakar | Software Development Supervisor | NTT DATA Services | w. 91-020-67095703 | Shilpa.Devharakar at nttdata.com | Learn more at nttdata.com/americas

-----Original Message-----
From: openstack-discuss-request at lists.openstack.org
Sent: Sunday, August 4, 2019 12:42 AM
To: openstack-discuss at lists.openstack.org
Subject: openstack-discuss Digest, Vol 10, Issue 13

Send openstack-discuss mailing list submissions to
    openstack-discuss at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
or, via email, send a message with subject or body 'help' to
    openstack-discuss-request at lists.openstack.org

You can reach the person managing the list at
    openstack-discuss-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..."

Today's Topics:

   1. Re: [nova][ops] Documenting nova tunables at scale (Joe Robinson)
   2. Re: [masakari] how to install masakari on centos 7 (Vu Tan)

----------------------------------------------------------------------

Message: 1
Date: Sat, 3 Aug 2019 10:40:11 +1000
From: Joe Robinson
To: Matt Riedemann
Cc: openstack-discuss at lists.openstack.org
Subject: Re: [nova][ops] Documenting nova tunables at scale
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi Matt,

My name is Joe - docs person from years back - this looks like a good initiative, and I would be up for documenting these settings at scale. The next step I can see is gathering more info about this pain point (already started :)) and then drafting something for feedback.

On Sat, 3 Aug. 2019, 6:25 am Matt Riedemann, wrote:

> I wanted to send this to get other people's feedback if they have
> particular nova configurations once they hit a certain scale (hundreds
> or thousands of nodes). Every once in a while on IRC I'll be chatting
> with someone about configuration changes they've made running at large
> scale to avoid, for example, hammering the control plane. I don't know
> how many times I've thought, "it would be nice if we had a doc
> highlighting some of these things so a new operator could come along
> and see, oh, I've never tried changing that value before".
>
> I haven't started that doc, but I've started a bug report for people
> to dump some of their settings. The most common ones could go into a
> simple admin doc to start.
>
> I know there is more I've thought about in the past that I don't have
> in here, but this is just a starting point so I don't make the mistake
> of not taking action on this again.
>
> https://bugs.launchpad.net/nova/+bug/1838819
>
> --
>
> Thanks,
>
> Matt
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
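For a concrete flavor of what that bug is collecting, entries tend to look like the following - the option names are real nova/oslo.messaging options, but the values are purely illustrative, not recommendations:

    # nova.conf on a large deployment (illustrative values only)
    [DEFAULT]
    # allow slow RPC replies instead of timing out under load
    rpc_response_timeout = 120
    # report service state less often to reduce control-plane churn
    report_interval = 30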
------------------------------

Message: 2
Date: Thu, 1 Aug 2019 11:43:13 +0700
From: Vu Tan
To: "Patil, Tushar"
Cc: Gaëtan Trellu, "openstack-discuss at lists.openstack.org"
Subject: Re: [masakari] how to install masakari on centos 7
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi Patil,
May I know how it is going?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Subject: Digest Footer

_______________________________________________
openstack-discuss mailing list
openstack-discuss at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss

------------------------------

End of openstack-discuss Digest, Vol 10, Issue 13
*************************************************

Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: detailed_steps_to_install_masakari.txt
URL:

------------------------------

Subject: Digest Footer

_______________________________________________
openstack-discuss mailing list
openstack-discuss at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss

------------------------------

End of openstack-discuss Digest, Vol 10, Issue 95
*************************************************

From pawel.konczalski at everyware.ch Tue Aug 20 15:13:03 2019
From: pawel.konczalski at everyware.ch (Pawel Konczalski)
Date: Tue, 20 Aug 2019 17:13:03 +0200
Subject: Update / change user_id from server / VM
Message-ID: <831f42db-1f9e-f101-d055-83fa07d6d1a8@everyware.ch>

Hi all,

is it possible to update / change the user_id of an OpenStack server (VM) if the user no longer exists?

    # openstack server show 8cf8164a-6f55-4435-9f4e-621617d7951f -c user_id
    +---------+----------------------------------+
    | Field   | Value                            |
    +---------+----------------------------------+
    | user_id | ebfd5b2bf26a4f4381a290948cb3ce8b |
    +---------+----------------------------------+

    # openstack user show ebfd5b2bf26a4f4381a290948cb3ce8b
    No user with a name or ID of 'ebfd5b2bf26a4f4381a290948cb3ce8b' exists.

BR Pawel

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5227 bytes
Desc: not available
URL:
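As far as I know there is no supported API for changing a server's user_id, so the usual workaround is an unsupported direct database update - a sketch only, assuming the standard nova schema, and worth a DB backup first:

    # replace the dangling user_id on the instance record (unsupported!)
    # assumes local admin access to the nova database
    $ mysql nova -e "UPDATE instances SET user_id='<new-user-id>' \
        WHERE uuid='8cf8164a-6f55-4435-9f4e-621617d7951f';"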
From a.settle at outlook.com Tue Aug 20 15:18:15 2019
From: a.settle at outlook.com (Alexandra Settle)
Date: Tue, 20 Aug 2019 15:18:15 +0000
Subject: [User-committee] [uc] Less than *1 DAY* left to nominate for the UC!
In-Reply-To: <6AF63EFF-2845-494D-8D01-EB9902F604E6@leafe.com>
References: <6AF63EFF-2845-494D-8D01-EB9902F604E6@leafe.com>
Message-ID:

Hi UC,

> The nomination period will close in less than a day. So far we have 1
> candidate, but there are two positions up for election. So if you've
> been hesitating, don't wait any longer! The info for how to nominate
> from my previous email is below:

How did this go? I was unable to find whether any other candidate stood up.

Cheers,

Alex

--
Alexandra Settle
IRC: asettle

From ianyrchoi at gmail.com Tue Aug 20 16:55:21 2019
From: ianyrchoi at gmail.com (Ian Y. Choi)
Date: Wed, 21 Aug 2019 01:55:21 +0900
Subject: [ALL][UC] Train EC Election - nomination results, and what is going on
Message-ID:

Hello all,

As announced [1-2], the UC nomination period ran from August 5 to August 16, 2019, and as the election officials we would like to share the result, as well as the next steps. This was discussed by the User Committee yesterday [3]:

    - There was one UC candidacy [4]. The candidacy has been validated, and the election officials announce that "Mohamed Elsakhawy" will serve on the UC. Congratulations!
    - There will be no election, since there was only one candidate for the two positions in this election.
    - There were discussions during the UC meeting [3] yesterday, and the UC decided to hold a second, special election for the other seat within the timeframe spelled out in the charter.
      The same election officials will serve, and the officials and UC members are discussing a feasible time frame for the election.

As mentioned, the upcoming election is *special*, so please watch for more election information. The election officials will post an email soon with more details on the upcoming special UC election.

Thank you,

- Ed & Ian

[1] http://lists.openstack.org/pipermail/user-committee/2019-July/002862.html
[2] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html
[3] http://eavesdrop.openstack.org/meetings/uc/2019/uc.2019-08-19-15.04.log.html#l-70
[4] http://lists.openstack.org/pipermail/user-committee/2019-August/002866.html

From ianyrchoi at gmail.com Tue Aug 20 16:59:46 2019
From: ianyrchoi at gmail.com (Ian Y. Choi)
Date: Wed, 21 Aug 2019 01:59:46 +0900
Subject: [User-committee] [uc] Less than *1 DAY* left to nominate for the UC!
In-Reply-To:
References: <6AF63EFF-2845-494D-8D01-EB9902F604E6@leafe.com>
Message-ID: <6f155299-870b-88f0-6f84-12a30bc35c9d@gmail.com>

Hello Alex,

Thank you for asking about this - I have just shared:
http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008617.html

With many thanks,

/Ian

Alexandra Settle wrote on 8/21/2019 12:18 AM:
> Hi UC,
>
>> The nomination period will close in less than a day. So far we have 1
>> candidate, but there are two positions up for election. So if you've
>> been hesitating, don't wait any longer! The info for how to nominate
>> from my previous email is below:
> How did this go? I was unable to find whether any other candidate stood up.
>
> Cheers,
>
> Alex
>

From rico.lin.guanyu at gmail.com Tue Aug 20 17:00:39 2019
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Wed, 21 Aug 2019 01:00:39 +0800
Subject: [all][tc] Naming the U release of OpenStack -- Poll open
In-Reply-To:
References:
Message-ID:

Dear OpenStackers,

There are around 7 hours left before the poll ends, so if you have not voted yet: go for it now!! :)

On Mon, Aug 19, 2019 at 1:20 PM Rico Lin wrote:

> The poll will officially end on 2019-08-20 23:59:00+00:00 (UTC time). So
> remember to vote :)
>
> On Tue, Aug 13, 2019 at 12:56 PM Rico Lin wrote:
>
>> Hi, all OpenStackers,
>>
>> It's time to vote for the naming of the U release!!
>> Voting for the official naming of the U release has begun!!
>>
>> First, big thanks to all the people who took their own time to propose
>> names on [2] or helped to push/improve the naming process. Thank you.
>> We'll use a public polling option over per-user private URLs for voting.
>> This means everybody should proceed to use the following URL to cast
>> their vote:
>>
>> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_19e5119b14f86294&akey=0cde542cb3de1b12
>>
>> We've selected a public poll to ensure that the whole community, not just
>> Gerrit change owners, gets a vote. Also, the size of our community has
>> grown such that we can overwhelm CIVS if using private URLs. A public poll
>> can mean that users behind NAT, proxy servers or firewalls may receive a
>> message saying that their vote has already been lodged; if this happens,
>> please try another IP.
>> Because this is a public poll, results will currently be viewable only by
>> me until the poll closes. Once closed, I'll post the URL making the
>> results viewable to everybody. This was done to avoid everybody seeing the
>> results while the public poll is running.
>>
>> The poll will officially end on 2019-08-20 23:59:00+00:00 (UTC time) [1],
>> and results will be posted shortly after.
>>
>> [1] https://governance.openstack.org/tc/reference/release-naming.html
>> [2] https://wiki.openstack.org/wiki/Release_Naming/U_Proposals
>> --
>> May The Force of OpenStack Be With You,
>> Rico Lin
>> irc: ricolin
>
> --
> May The Force of OpenStack Be With You,
> Rico Lin
> irc: ricolin

--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From geguileo at redhat.com Tue Aug 20 17:49:34 2019
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 20 Aug 2019 19:49:34 +0200
Subject: [goals][IPv6-Only Deployments and Testing] Week R-9 Update
In-Reply-To: <16ca57df0ed.112d7fef0121509.7413543320906046197@ghanshyammann.com>
References: <16ca57df0ed.112d7fef0121509.7413543320906046197@ghanshyammann.com>
Message-ID: <20190820174934.saqk5cxoswgdfcc5@localhost>

On 19/08, Ghanshyam Mann wrote:
> Hello Everyone,
>
> Below is the progress on the IPv6 goal during week R-9. As a first step, I
> am preparing the ipv6 jobs for the projects having zuulv3 jobs. The projects
> having zuulv2 jobs will be my second take.
>
> Summary:
> * Number of projects with IPv6 jobs proposed: 25
> * Number of passing projects: 11
> ** Number of projects merged: 6
> * Number of failing projects: 14
>
> Storyboard:
> =========
> - https://storyboard.openstack.org/#!/story/2005477
>
> Current status:
> ============
> 1. Cinder errors out when configuring cinder's my_ip as IPv6; iscsi is not
> able to _connect_single_volume [1].

Hi,

Looking at the logs, this looks like a Cinder driver bug to me. I don't have a system I can use for testing right now, but I have proposed a WIP patch [1] to Cinder with a possible solution. It would be great if someone could test it, or if we could make a patch depend on it (i.e. this devstack patch [2]) for confirmation.

If it is, I'll create the bug report and write a proper commit message and unit tests.

Cheers,
Gorka.

[1]: https://review.opendev.org/677524
[2]: https://review.opendev.org/#/c/673266/
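For context, the typical symptom in this class of bug is an unbracketed IPv6 address concatenated into a <host>:<port> portal string; whether that is exactly what the WIP patch fixes is for the review to confirm. A sketch of the distinction, with made-up addresses:

    # IPv4 portal: host:port is unambiguous
    $ iscsiadm -m discovery -t sendtargets -p 192.168.0.10:3260
    # IPv6 portal: the address literal must be bracketed, otherwise its
    # colons are indistinguishable from the port separator
    $ iscsiadm -m discovery -t sendtargets -p [fd00::10]:3260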
For Monasca, kafka was not working for IPv6 but witek is upgrading the Kafka version in Monasca. I will rebase IPv6 job > patch on top of that and check the result. > 6. This week new projects ipv6 jobs patch and status: > - Tacker: > link: https://review.opendev.org/#/c/671908/ > status: job is failing, I need to properly configure the job. > - Senlin: > links: https://review.opendev.org/#/c/676910/ > status: jobs are failing. In same patch I have fixed the devstack plugin to deploy the Selin service on IPv6 which was hardcoded to HOST_IP(ipv4). > But it seems Senlin endpoint is not created in keystone. Need to debug more for the root cause. > - Solum: > links: https://review.opendev.org/#/c/676912/ > Status: job is failing. Fixed the devstack plugin for 'host' for IPv6 env. It also need fix on Zun side to configure the host_ip properly for IPv6. > - Trove: > link: https://review.opendev.org/#/c/677015/ > status: job is passing and it is good to merge. > - Watcher: > link: https://review.opendev.org/#/c/677017/ > status: job is passing and it is good to merge. In same patch, I have fixed the devstack plugin for 'host' for IPv6 env. > - Sahara > link: https://review.opendev.org/#/c/676903/ > status: Job is failing to start the sahara service. I could not find the logs for sahara service(it shows empty log under apache). Need help from sahara team. > > > IPv6 missing support found: > ===================== > 1. https://review.opendev.org/#/c/673397/ > 2. https://review.opendev.org/#/c/673449/ > > How you can help: > ============== > - Each project needs to look for and review the ipv6 job patch. > - Verify it works fine on ipv6 and no ipv4 used in conf etc > - Any other specific scenario needs to be added as part of project IPv6 verification. > - Help on debugging and fix the bug in IPv6 job is failing. > > Everything related to this goal can be found under this topic: > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged) > > How to define and run new IPv6 Job on project side: > ======================================= > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing > > Review suggestion: > ============== > - Main goal of these jobs will be whether your service is able to listen on IPv6 and can communicate to any > other services either OpenStack or DB or rabbitmq etc on IPv6 or not. So check your proposed job with > that point of view. If anything missing, comment on patch. > - One example was - I missed to configure novnc address to IPv6- https://review.opendev.org/#/c/672493/ > - base script as part of 'devstack-tempest-ipv6' will do basic checks for endpoints on IPv6 and some devstack var > setting. But if your project needs more specific varification then it can be added in project side job as post-run > playbooks as described in wiki page[3]. 
> [1] https://zuul.opendev.org/t/openstack/build/5b7b823d6faa4f5393b4c46d36e15d80/log/controller/logs/screen-n-cpu.txt.gz#2733
> [2] https://review.opendev.org/#/c/676857/
> [3] https://review.opendev.org/#/c/676900/
> [4] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing
>
> -gmann

From mriedemos at gmail.com Tue Aug 20 18:05:36 2019
From: mriedemos at gmail.com (Matt Riedemann)
Date: Tue, 20 Aug 2019 13:05:36 -0500
Subject: [all][tc] A web tool which helps administrators in managing openstack clusters
In-Reply-To:
References:
Message-ID: <4f86a9ec-6a52-988d-3776-b41a0feb12a2@gmail.com>

On 8/20/2019 7:02 AM, Mohammed Naser wrote:
>> Flexible: openstack-admin supports fuzzy search on any important field
>> (e.g. the display_name/uuid/ip_address/project_name of an instance),
>> which enables users to locate a particular object in no time.
> This is really useful, to be honest, but we can probably work around it
> by using the filtering that the APIs provide.

Isn't this what the searchlight integration into horizon was for?

--

Thanks,

Matt

From gouthampravi at gmail.com Tue Aug 20 18:28:21 2019
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Tue, 20 Aug 2019 11:28:21 -0700
Subject: [all][infra] Zuul logs are in swift
In-Reply-To: <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc>
References: <0e76382d-88ae-a750-c890-053eced496f5@fried.cc> <53c00e61-2f12-9613-7bbe-f2fa04dcd6fc@fried.cc>
Message-ID:

On Fri, Aug 16, 2019 at 10:50 AM Eric Fried wrote:

> > - Hot tip: if you want the dynamic logs (with timestamp links and sev
> > filters), use the twisties. Clicking through gets you to the raw files.
> >
> > Unfamiliar with the term "twisties" and googling didn't help - what do
> > you mean?
>
> Sorry about that.
>
> On the left-hand side next to the folder names there's an icon that
> looks like '>'.
>
> If you click on the folder, it pops you into a regular directory browser
> with none of the whizbang anchoring/filtering features.
>
> But if instead you click the '>', it rotates downward ('v') and expands
> the directory tree in place. Keep navigating until you get to the file
> you want, and then click on the file (not 'raw') to open it within the
> "app".

This is super cool :) Thank you for the explanation! It wasn't obvious to me
the first time around either, so I was just clicking the "log url" in the
build summary.

> Thanks,
> efried

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From colleen at gazlene.net Tue Aug 20 18:51:13 2019
From: colleen at gazlene.net (Colleen Murphy)
Date: Tue, 20 Aug 2019 11:51:13 -0700
Subject: [ptl][keystone] PTL Out of Office August 24 - September 4
Message-ID: <45f66b32-3b0c-43f3-9102-4272b18b1da5@www.fastmail.com>

As mentioned in today's keystone meeting, I will be on vacation between August 24 and September 4 with no laptop and little to no cell service. Morgan Fainberg (kmalloc) has graciously agreed to step up as acting PTL while I am gone (thank you!). We agreed in today's meeting that the team will plan to hold next week's meeting as usual and possibly skip the following one. Office hours will be cancelled unless someone decides they really want to cover something with the team.
The priority efforts for the next couple of weeks are:

* finish remaining system-scope/default roles migrations (deadline is feature freeze, Sept 9-13)
* reviews for spec implementations and system-scope/default roles migrations (deadline is feature freeze, Sept 9-13)
* reviews for keystonemiddleware and keystoneauth (final release Sept 2-6)
* reviews for python-keystoneclient (final release Sept 9-13)
* helping the requirements team with any requirements issues [1] before requirements freeze (Sept 9-13)
* completing community goals (follow the PDF generation news [2])

In the event there's feedback on one of my own outstanding patches that needs to be addressed in a timely manner, I grant any keystone core permission to go ahead and update it (that goes generally, even when I'm not completely AFK).

Colleen

[1] https://bugs.launchpad.net/keystone/+bug/1839393
[2] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008506.html

From rico.lin.guanyu at gmail.com Wed Aug 21 01:05:36 2019
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Wed, 21 Aug 2019 09:05:36 +0800
Subject: [all][tc] U release naming poll result
Message-ID:

Dear all OpenStackers,

First, we had 298 voters participate in the poll, so thank you for voting. Here is the poll result:

1. Ussuri [乌苏里江 https://en.wikipedia.org/wiki/Ussuri_River] (Condorcet winner: wins contests with all other choices)
2. Uma [http://www.fallingrain.com/world/CH/20/Uma.html]
3. Ula [Miocene Baogeda Ula]
4. Urad [乌拉特中旗 https://en.wikipedia.org/wiki/Urad_Middle_Banner]
5. Ulansu [乌兰苏海组 Ulansu sea]
6. Ulanhot [乌兰浩特市 https://en.wikipedia.org/wiki/Ulanhot]
7. Ulanqab [乌兰察布市 https://en.wikipedia.org/wiki/Ulanqab]
8. Ujimqin [东/西乌珠穆沁旗 https://en.wikipedia.org/wiki/Ujimqin]

Here is the result in detail:
https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_19e5119b14f86294&rkey=364d4feb45355b2c

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From soulxu at gmail.com Wed Aug 21 06:59:50 2019
From: soulxu at gmail.com (Alex Xu)
Date: Wed, 21 Aug 2019 14:59:50 +0800
Subject: [nova] The pros/cons for libvirt persistent assignment and DB persistent assignment.
Message-ID:

There has been a lot of discussion on how to do the claim for vpmem. There are a few points we are trying to satisfy:

* Avoid race problems (the current VGPU assignment has been found to have a race issue: https://launchpad.net/bugs/1836204).
* Avoid making the device assignment management virt-driver- and platform-specific.
* Keep it simple.

We have gone through two solutions so far. This email summarizes the pros and cons of each.

#1 No Nova DB persistence for the assignment info; depend on the hypervisor to persist it.

The idea is to add a VirtDriver.claim/unclaim_for_instance(instance_uuid, flavor_id) interface. The assignment info is populated from the hypervisor when nova-compute starts up, and is kept in the VirtDriver's memory. The instance_uuid is used to distinguish claims from different instances. The flavor_id is used for same-host resize, to distinguish the claims for source and target. This virt driver method is invoked inside the ResourceTracker to avoid the race problem. There is no nova DB persistence of the assignment info at all.

https://review.opendev.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virtual-persistent-memory

pros:
* Hides all the device details and virt driver details inside the virt driver.
* Fewer upgrade issues in the future, since it doesn't involve any nova DB model change.
* Expected to be a simple implementation, since everything is inside the virt driver.

cons:
* Two cases have been found where the domain XML is lost for the Libvirt virt driver, and we don't know other hypervisors' behavior yet.
* For a same-host resize, the source and target instances share a single domain XML. After the libvirt virt driver updates the domain XML to the target instance, the source instance's assignment information is lost if a nova-compute restart happens. That means the resized instance can't be reverted; the only choice for the user is to confirm the resize.
* For live migration, the target host's domain XML will be cleaned up by libvirt after a host restart. The assignment information is lost before nova-compute starts up and does a cleanup.
* Cannot support same-host cold migration, since we need a way to identify the source and target instances' assignments in memory. But a same-host cold migration means the same instance UUID and the same flavor ID, so there is nothing else that can be used to distinguish the assignments.
* There are workarounds added for the above points; the code becomes fragile.

#2 With nova DB persistence, but using a virt-driver-specific blob to store virt-driver-specific info.

The idea is to persist the assignment for the instance into the DB. The resource tracker gets available resources from the virt driver, and calculates on the fly based on the available resources and the assigned resources from the instance DB. The new field 'instance.resources' is designed to support virt-driver-specific metadata, thereby hiding the virt driver and platform details from the RT.

https://etherpad.openstack.org/p/vpmems-non-virt-driver-specific-new

pros:
* Persists the assignment in the instance object, avoiding the corner cases where we lose the assignment.
* The ResourceTracker is responsible for doing the claim job. This is more reliable, with no race problem, since the ResourceTracker has worked very well for a long time.
* The virt-driver-specific json-blob hides the virt driver/platform details from the ResourceTracker.
* The free resources are calculated on the fly, keeping the implementation simple. The RT really just provides a point at which to do the claim; it needn't involve the complexity of RT.update_available_resources.

cons:
* Unlike PCIManager, it does not have both instance-side and host-side persistent info. The on-the-fly calculation has to take care of orphaned instances (instances deleted from the DB but still existing on the host), so this is not actually an unresolvable issue. And it isn't too hard to upgrade to host-side persistent info in the future if we want.
* Data model change compared to the original proposal. Needs review to decide whether the data model is generic enough.

Currently, Sean, Eric and I prefer #2, since the flaws #1 has for same-host resize and live migration can't be avoided by design.

Looking for more feedback, which will be much appreciated!

Thanks
Alex

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.teckelmann at bertelsmann.de Wed Aug 21 07:50:52 2019
From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP)
Date: Wed, 21 Aug 2019 07:50:52 +0000
Subject: AW: [masakari] pacemaker-remote Setup Overview
In-Reply-To:
References:
Message-ID:

Hello Shilpa,

Thank you for the links provided. To summarize, they show how to set up a corosync + pacemaker cluster in action. However, I have to ask for more details on the broader picture.
From the scraps I found about masakari-hostmonitor and the limitations of pacemaker/corosync installations (limited to 16 nodes; there is a specification detailing this somewhere, etc.), I wonder if another solution is actually intended.

To give you a better insight into where I am at the moment: I found that pacemaker-remote (no need for corosync) should be favored on the nova compute nodes (someone said so somewhere, maybe Pushkar), because it scales better. This presumes an existing pacemaker + corosync cluster somewhere else, because pacemaker-remote can't live without one (as far as I know). That pacemaker + corosync cluster only needs to exist, though, without further operational relevance for masakari-hostmonitor.

With the lxc-container-based setup openstack-ansible produces in mind, the setup would then look like:

- 3 masakari containers (as os_masakari-install.yaml from OSA already delivers), extended with the pacemaker + corosync cluster
- n compute nodes with pacemaker-remote installed and configured (to join the preexisting pacemaker + corosync cluster residing on the masakari containers)
- all the containers and compute nodes talking to each other via the regular management network
- adding a new compute node means only a local configuration on exactly that node; there is no need to change the configuration on any other part of the system (because pacemaker-remote takes care of propagating the existence of the new resource)

Is this where masakari's pacemaker-based setup is headed, or did I get it completely wrong?

Best regards,

Ralf

-----Original Message-----
From: Devharakar, Shilpa
Sent: Tuesday, 20 August 2019 13:53
To: openstack-discuss at lists.openstack.org
Subject: RE: [masakari] pacemaker-remote Setup Overview

Hi Ralf Teckelmann,

Sorry for the attachment; please refer to [1] instead.

The Masakari team is in the process of adding devstack support to install the host-monitor; please refer to community patch [2]. You can refer to 'devstack/plugin.sh'.

[1]: http://paste.openstack.org/show/760301/
[2]: https://review.opendev.org/#/c/671200/
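A sketch of how the compute-node side of the layout Ralf describes could be wired up - the node name is hypothetical, and this assumes a pcs-managed pacemaker + corosync cluster is already running on the masakari containers:

    # on the compute node: run the remote agent instead of full corosync
    # (the cluster authkey must be distributed to /etc/pacemaker/authkey first)
    $ systemctl enable --now pacemaker_remote
    # from any full cluster member: integrate the node as a remote resource
    $ pcs resource create compute1 ocf:pacemaker:remote server=compute1 reconnect_interval=60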
91-020- 67095703 | Shilpa.Devharakar at nttdata.com | Learn more at nttdata.com/americas -----Original Message----- From: openstack-discuss-request at lists.openstack.org Sent: Monday, August 19, 2019 10:19 PM To: openstack-discuss at lists.openstack.org Subject: openstack-discuss Digest, Vol 10, Issue 94 Send openstack-discuss mailing list submissions to openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddiscuss&d=DwIGaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=WXex93lsaiQ-z7CeZkHv93lzt4fdCRIPXloSPQEU7CM&m=ms75peRY0DgZY2AG9eCbPbnITV-ICkTgiy2DwRx3UO8&s=zXtwRY6Jm3E4Jt2ihjpRJZ8VIwFdTMcuMg0A7eJ9XYA&e= or, via email, send a message with subject or body 'help' to openstack-discuss-request at lists.openstack.org You can reach the person managing the list at openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: 1. [tc] weekly update (Mohammed Naser) 2. RE: [masakari] pacemaker-remote Setup Overview (Devharakar, Shilpa) 3. Re: [tc] weekly update (Andreas Jaeger) 4. Re: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org (Doug Hellmann) ---------------------------------------------------------------------- Message: 1 Date: Mon, 19 Aug 2019 12:26:48 -0400 From: Mohammed Naser To: OpenStack Discuss Subject: [tc] weekly update Message-ID: Content-Type: text/plain; charset="UTF-8" Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Retired projects - Networking-generic-switch-tempest (from networking-generic-switch): https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_674430_&d=DwIGaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=WXex93lsaiQ-z7CeZkHv93lzt4fdCRIPXloSPQEU7CM&m=ms75peRY0DgZY2AG9eCbPbnITV-ICkTgiy2DwRx3UO8&s=u1TtWtLxQw9wq6IOi2bbDXtaLl4zt5J-5ckqAjh-5iE&e= # General changes - Michał Dulko as Kuryr PTL: https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_674624_&d=DwIGaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=WXex93lsaiQ-z7CeZkHv93lzt4fdCRIPXloSPQEU7CM&m=ms75peRY0DgZY2AG9eCbPbnITV-ICkTgiy2DwRx3UO8&s=KLNfBuQ1X6WkQzI9yyVScUlkVFw91KTNz8ySzC2SK4I&e= - Added a mission to Swift taken from its wiki: https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_675307_&d=DwIGaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=WXex93lsaiQ-z7CeZkHv93lzt4fdCRIPXloSPQEU7CM&m=ms75peRY0DgZY2AG9eCbPbnITV-ICkTgiy2DwRx3UO8&s=eZFduAqetS5dHRgx2V93U7dqTvoACzz45DKc2p1tlPI&e= - Sean McGinnis as release PTL: https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_675246_&d=DwIGaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=WXex93lsaiQ-z7CeZkHv93lzt4fdCRIPXloSPQEU7CM&m=ms75peRY0DgZY2AG9eCbPbnITV-ICkTgiy2DwRx3UO8&s=1Q2nQef0P-85E1K0QM5nGAtO3BYo8Y3zuMR_XC6L2f0&e= Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. 
http://vexxhost.com

------------------------------

Message: 2
Date: Mon, 19 Aug 2019 16:34:26 +0000
From: "Devharakar, Shilpa"
To: "openstack-discuss at lists.openstack.org"
Subject: RE: [masakari] pacemaker-remote Setup Overview
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi Ralf Teckelmann,

Sorry, resending with the subject corrected.

> Hello,
>
> Utilizing openstack-ansible, we successfully installed all the masakari
> services. Besides masakari-hostmonitor, all are running fine.
> For the hostmonitor, a pacemaker cluster is missing.
>
> Can anyone give me an overview of how the pacemaker cluster setup would
> look? Which (pacemaker) services run where (compute nodes, something on
> other nodes, ...), etc.?
>
> Best regards,
>
> Ralf Teckelmann

Sorry for the late answer...

The Masakari team is in the process of adding devstack support to install the host-monitor; please refer to community patch [1]. You can refer to 'devstack/plugin.sh'; please note it has an IPMI hardware support dependency.

Also, PFA 'detailed_steps_on_ubuntu_for_pacemaker_verification.txt' for the verification of the same, carried out on the Ubuntu distribution (OpenStack installed using devstack with Masakari enabled).

[1]: https://review.opendev.org/#/c/671200/ Add devstack support to install host-monitor

Thanks with Regards,
Shilpa Shivaji Devharakar | Software Development Supervisor | NTT DATA Services | w. 91-020-67095703 | Shilpa.Devharakar at nttdata.com | Learn more at nttdata.com/americas

-----Original Message-----
From: openstack-discuss-request at lists.openstack.org
Sent: Monday, August 12, 2019 6:30 PM
To: openstack-discuss at lists.openstack.org
Subject: openstack-discuss Digest, Vol 10, Issue 59

Today's Topics:

   1. Re: [tripleo][openstack-ansible] Integrating ansible-role-collect-logs in OSA (Arx Cruz)
   2. [swauth][swift] Retiring swauth (Ondrej Novy)
   3. [ironic] Resuming having weekly meetings (Julia Kreger)
   4.
[masakari] pacemaker-remote Setup Overview (Teckelmann, Ralf, NMU-OIP)

----------------------------------------------------------------------

Message: 1
Date: Mon, 12 Aug 2019 12:32:34 +0200
From: Arx Cruz
To: Jean-Philippe Evrard
Cc: openstack-discuss at lists.openstack.org
Subject: Re: [tripleo][openstack-ansible] Integrating ansible-role-collect-logs in OSA
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hello,

I've started to split the log collection tasks into smaller tasks [1] in order to allow other users to choose exactly what they want to collect. For example, if you don't need the openstack information, or if you don't care about networking, etc.

Please take a look. I'll also add it to the OSA agenda for tomorrow's meeting.

Kind regards,

1 - https://review.opendev.org/#/c/675858/

On Mon, Jul 22, 2019 at 8:44 AM Jean-Philippe Evrard <jean-philippe at evrard.me> wrote:

> Sorry for the late answer...
>
> On Wed, 2019-07-10 at 12:12 -0600, Wesley Hayutin wrote:
> >
> > These are of course just passed in as extra-config. I think each
> > project would want to define their own list of files and maintain it
> > in their own project. WDYT?
>
> Looks good. We can either clean up the defaults, or OSA can just
> override the defaults, and it would be good enough. I would say that
> this can still be improved later, after OSA has started using the role
> too.
>
> > It's simple enough. But I am happy to see a different approach.
>
> Simple is good!
>
> > Any thoughts on additional work that I am not seeing?
>
> None :)
>
> > Thanks for responding! I know our team is very excited about the
> > continued collaboration with other upstream projects, so thanks!!
>
> Likewise. Let's reduce tech debt/maintain more code together!
>
> Regards,
> Jean-Philippe Evrard (evrardjp)

--
Arx Cruz
Software Engineer
Red Hat EMEA
arxcruz at redhat.com
@RedHat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Message: 2
Date: Mon, 12 Aug 2019 12:56:15 +0200
From: Ondrej Novy
To: openstack-discuss at lists.openstack.org
Subject: [swauth][swift] Retiring swauth
Message-ID:
Content-Type: text/plain; charset="utf-8"

Hi,

because swauth is not compatible with current Swift, doesn't support Python 3, I don't have time to maintain it, and my employer is not interested in swauth, I'm going to retire the swauth project.

If nobody takes it over, I will start removing swauth from opendev on 08/24.

Thanks.

--
Best regards
Ondřej Nový

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

------------------------------

Message: 3
Date: Mon, 12 Aug 2019 07:55:07 -0400
From: Julia Kreger
To: openstack-discuss
Subject: [ironic] Resuming having weekly meetings
Message-ID:
Content-Type: text/plain; charset="UTF-8"

All,

I meant to send this specific email last week, but got distracted and $life. I believe we need to go back to having weekly meetings. The couple of times I floated this in the past two weeks, there did not seem to be any objections, but I also did not perceive any real thoughts on the subject.
While the concept and use of office hours has seemingly helped bring some
more activity to our IRC channel, we don't have a check-point/sync-up
mechanism without an explicit meeting.

With that being said, I'm going to start the meeting today and, if we have
quorum, try to proceed with it today.

-Julia

------------------------------

Message: 4
Date: Mon, 12 Aug 2019 12:59:20 +0000
From: "Teckelmann, Ralf, NMU-OIP"
To: "openstack-discuss at lists.openstack.org"
Subject: [masakari] pacemaker-remote Setup Overview

Hello,

Utilizing openstack-ansible we successfully installed all the masakari
services. Besides masakari-hostmonitor, all are running fine.
For the hostmonitor, a pacemaker cluster is missing.

Can anyone give me an overview of how the pacemaker cluster setup would
look? Which (pacemaker) services are running where (compute nodes,
something on any other node, ...), etc.?

Best regards,

Ralf Teckelmann

------------------------------

End of openstack-discuss Digest, Vol 10, Issue 59
*************************************************

------------------------------

Message: 3
Date: Mon, 19 Aug 2019 18:36:01 +0200
From: Andreas Jaeger
To: Mohammed Naser, OpenStack Discuss, juliaashleykreger at gmail.com
Subject: Re: [tc] weekly update
Message-ID: <803b9cce-9887-95a5-b257-21f164f5569a at suse.com>

On 19/08/2019 18.26, Mohammed Naser wrote:
> Hi everyone,
>
> Here’s the update for what happened in the OpenStack TC this week. You
> can get more information by checking for changes in the
> openstack/governance repository.
>
> # Retired projects
> - Networking-generic-switch-tempest (from networking-generic-switch):
> https://review.opendev.org/#/c/674430/

The process for retiring is documented here:
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

Could you follow those steps to get the system out of Zuul properly,
please?

Andreas

> # General changes
> - Michał Dulko as Kuryr PTL:
> https://review.opendev.org/#/c/674624/
> - Added a mission to Swift taken from its wiki:
> https://review.opendev.org/#/c/675307/
> - Sean McGinnis as release PTL:
> https://review.opendev.org/#/c/675246/
>
> Thanks for tuning in!
>
> Regards,
> Mohammed

--
Andreas Jaeger aj at suse.com Twitter: jaegerandi
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg
GF: Nils Brauckmann, Felix Imendörffer, Enrica Angelone, HRB 247165 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

------------------------------

Message: 4
Date: Mon, 19 Aug 2019 12:48:40 -0400
From: Doug Hellmann
To: Jeremy Stanley
Cc: openstack-discuss at lists.openstack.org
Subject: Re: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org
Message-ID: <740FFC7B-F7F0-427C-95F8-C6D6E8A0FA7E at doughellmann.com>

> On Aug 19, 2019, at 10:59 AM, Jeremy Stanley wrote:
>
> On 2019-08-19 13:17:35 +0000 (+0000), Alexandra Settle wrote:
> [...]
>> Doug has rightfully pointed out that whilst the documentation team as
>> it stands today is no longer "integral", the docs.openstack.org web
>> site is. An owner is required.
>>
>> The suggestion is that the Release Management team is the "least
>> worst" (thanks, tonyb) place for the website management to land.
>> As Tony points out, this requires learning new tools and processes,
>> but the individuals working on the docs team currently have no
>> intention to leave, and are around to help manage this from the SIG.
>>
>> Open to discussion and suggestions, but to summarise the proposal
>> here:
>>
>> docs.openstack.org ownership is to transition to be the
>> responsibility of the Release Management team officially, provided
>> there are no strong objections.
> [...]
>
> The prose above is rather vague on what "the docs.openstack.org web
> site" entails. Inferring from Doug's comments on 657142, I think you
> and he are referring specifically to the content in the
> https://opendev.org/openstack/openstack-manuals/src/branch/master/www
> subtree. Is that pretty much it? The Apache virtual host configuration
> and filesystem hosting the site itself are managed by the Infra/OpenDev
> team, and there aren't any plans to change that as far as I'm aware.
> --
> Jeremy Stanley

Yes, that’s correct.
Doug

------------------------------

End of openstack-discuss Digest, Vol 10, Issue 94
*************************************************

From gmann at ghanshyammann.com Wed Aug 21 08:55:35 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 21 Aug 2019 17:55:35 +0900
Subject: [goals][IPv6-Only Deployments and Testing] Week R-9 Update
In-Reply-To: <20190820174934.saqk5cxoswgdfcc5@localhost>
References: <16ca57df0ed.112d7fef0121509.7413543320906046197@ghanshyammann.com> <20190820174934.saqk5cxoswgdfcc5@localhost>
Message-ID: <16cb3637496.11a96d806202742.6095726236363290194@ghanshyammann.com>

---- On Wed, 21 Aug 2019 02:49:34 +0900 Gorka Eguileor wrote ----
> On 19/08, Ghanshyam Mann wrote:
> > Hello Everyone,
> >
> > Below is the progress on the IPv6 goal during the R-9 week. As the first step, I am preparing the IPv6 jobs
> > for the projects having zuulv3 jobs. The projects having zuulv2 jobs will be my second take.
> >
> > Summary:
> > * Number of projects with IPv6 jobs proposed: 25
> > * Number of passing projects: 11
> > ** Number of projects merged: 6
> > * Number of failing projects: 14
> >
> > Storyboard:
> > =========
> > - https://storyboard.openstack.org/#!/story/2005477
> >
> > Current status:
> > ============
> > 1. Cinder errors out when cinder's my_ip is configured as IPv6; iscsi is not able to _connect_single_volume [1].
>
> Hi,
>
> Looking at the logs this looks like a Cinder driver bug to me.
>
> I don't have a system I can use for testing right now, but I have
> proposed a WIP patch [1] to Cinder with a possible solution. It would be
> great if someone could test it or if we could make a patch depend on it
> (i.e. this devstack patch [2]) for confirmation.
>
> If it is, I'll create the bug report and write a proper commit message
> and unit tests.

I have rebased the devstack patch on your fix and it is working fine -
https://review.opendev.org/#/c/673266/

You can log the bug and merge your fix now. Thanks for the fix, Gorka, and
much appreciated your quick response.

-gmann

> Cheers,
> Gorka.
>
> [1]: https://review.opendev.org/677524
> [2]: https://review.opendev.org/#/c/673266/
>
> > 2. Configuring the tempest test regex to run only smoke tests, which can be extended to include future IPv6 tests also.
> >    Running all tests is not actually required as such in an IPv6 job, but if any project wants to run all of them, that is also fine. Example: [1]
> > 3. Fixing Murano's MURANO_DEFAULT_DNS to be set as IPv6 in an IPv6 env [2].
> > 4. The Solum job needs Zun to configure host_ip properly for IPv6. I will make the dependent patch.
> > 5. For Monasca, kafka was not working for IPv6, but witek is upgrading the Kafka version in Monasca. I will rebase the IPv6 job
> >    patch on top of that and check the result.
> > 6. This week's new project IPv6 job patches and status:
> >    - Tacker:
> >      link: https://review.opendev.org/#/c/671908/
> >      status: job is failing; I need to properly configure the job.
> >    - Senlin:
> >      links: https://review.opendev.org/#/c/676910/
> >      status: jobs are failing. In the same patch I have fixed the devstack plugin to deploy the Senlin service on IPv6, which was hardcoded to HOST_IP (ipv4).
> >      But it seems the Senlin endpoint is not created in keystone. Need to debug more for the root cause.
> >    - Solum:
> >      links: https://review.opendev.org/#/c/676912/
> >      Status: job is failing. Fixed the devstack plugin's 'host' for an IPv6 env. It also needs a fix on the Zun side to configure host_ip properly for IPv6.
> >    - Trove:
> >      link: https://review.opendev.org/#/c/677015/
> >      status: job is passing and it is good to merge.
> >    - Watcher:
> >      link: https://review.opendev.org/#/c/677017/
> >      status: job is passing and it is good to merge. In the same patch, I have fixed the devstack plugin's 'host' for an IPv6 env.
> >    - Sahara:
> >      link: https://review.opendev.org/#/c/676903/
> >      status: job is failing to start the sahara service. I could not find the logs for the sahara service (it shows an empty log under apache). Need help from the sahara team.
> >
> > IPv6 missing support found:
> > =====================
> > 1. https://review.opendev.org/#/c/673397/
> > 2. https://review.opendev.org/#/c/673449/
> >
> > How you can help:
> > ==============
> > - Each project needs to look for and review the IPv6 job patch.
> > - Verify it works fine on IPv6 and that no IPv4 is used in conf etc.
> > - Any other specific scenario needs to be added as part of the project's IPv6 verification.
> > - Help on debugging and fixing the bug if the IPv6 job is failing.
> >
> > Everything related to this goal can be found under this topic:
> > Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged)
> >
> > How to define and run a new IPv6 job on the project side:
> > =======================================
> > - I prepared a wiki page to describe this section - https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing
> >
> > Review suggestion:
> > ==============
> > - The main goal of these jobs is to check whether your service is able to listen on IPv6 and can communicate with any
> >   other services, either OpenStack or DB or rabbitmq etc., on IPv6 or not. So check your proposed job from
> >   that point of view. If anything is missing, comment on the patch.
> > - One example was - I missed configuring the novnc address for IPv6 - https://review.opendev.org/#/c/672493/
> > - The base script, as part of 'devstack-tempest-ipv6', will do basic checks for endpoints on IPv6 and some devstack var
> >   settings. But if your project needs more specific verification, then it can be added in the project-side job as post-run
> >   playbooks, as described in the wiki page [3].
> >
> > [1] https://zuul.opendev.org/t/openstack/build/5b7b823d6faa4f5393b4c46d36e15d80/log/controller/logs/screen-n-cpu.txt.gz#2733
> > [2] https://review.opendev.org/#/c/676857/
> > [3] https://review.opendev.org/#/c/676900/
> > [4] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing
> >
> > -gmann
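A minimal sketch, in Python, of the kind of endpoint check described in the
update above for the 'devstack-tempest-ipv6' base job. This is not the
actual script; the endpoint URLs are illustrative values, e.g. as collected
from the service catalog with "openstack endpoint list":

import ipaddress
from urllib.parse import urlsplit

# Assumed example input; in a real job these would come from the catalog.
endpoints = [
    "http://[2001:db8::10]/identity",
    "http://[2001:db8::10]:8774/v2.1",
]

for url in endpoints:
    host = urlsplit(url).hostname  # urlsplit strips the [] brackets
    try:
        version = ipaddress.ip_address(host).version
    except ValueError:
        raise AssertionError("%s does not use a literal IP address" % url)
    assert version == 6, "%s is not an IPv6 endpoint" % url
print("all endpoints listen on IPv6")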
From tobias.rydberg at citynetwork.eu Wed Aug 21 09:02:30 2019
From: tobias.rydberg at citynetwork.eu (Tobias Rydberg)
Date: Wed, 21 Aug 2019 11:02:30 +0200
Subject: [public-cloud-sig] Debian Cloud Sprint
In-Reply-To: <20190819140217.grgkfipooj2ivlfm@csail.mit.edu>
References: <20190818211010.hrtg43fzmmdcqy3d@yuggoth.org> <20190819140217.grgkfipooj2ivlfm@csail.mit.edu>
Message-ID: <2750db67-5a22-2a23-eb7d-d9a826b2322e@citynetwork.eu>

Good suggestion! Personally I can't attend, but I'll make sure that we
bring it up at our upcoming meeting.

Cheers,
Tobias

Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

On 2019-08-19 16:02, Jonathan Proulx wrote:
> Hi All,
>
> I'm hosting this shindig, but it's also my first time participating, so
> I can't shine too much more light on how they go, but I do have a couple
> of URLs to add:
>
> https://wiki.debian.org/Sprints/2019/DebianCloud2019
>
> is the planning page. Currently a bit skeletal, but it does have links
> to past sprints:
>
> https://wiki.debian.org/Sprints/2018/DebianCloudOct2018
> https://wiki.debian.org/Sprints/2017/DebianCloudOct2017
> https://wiki.debian.org/Sprints/2016/DebianCloudNov2016
>
> Hopefully those can give some sense of the scope and function of these
> things.
>
> Hope to see some of you in Oct.
>
> -Jon
>
> On Sun, Aug 18, 2019 at 09:10:11PM +0000, Jeremy Stanley wrote:
> :Just a heads-up, there was a suggestion[*] on the debian-cloud
> :mailing list that it would be nice if some representatives from
> :public OpenStack service providers are able to attend/participate in
> :the upcoming Debian Cloud Sprint[**], October 14-16 on MIT campus in
> :Cambridge, Massachusetts, USA. I too think it would be awesome for
> :OpenStack to have a seat at the table alongside representatives of
> :the usual closed-source clouds when it comes time to talk about
> :(among other things) what Debian's official "cloud" image builds
> :should be doing to better support our collective users. If you're
> :interested in going, I recommend reaching out via the debian-cloud
> :mailing list[***].
> :
> :[*] https://lists.debian.org/debian-cloud/2019/08/msg00065.html
> :[**] https://wiki.debian.org/Sprints/2019/DebianCloud2019
> :[***] https://lists.debian.org/debian-cloud/
> :--
> :Jeremy Stanley

From geguileo at redhat.com Wed Aug 21 09:49:35 2019
From: geguileo at redhat.com (Gorka Eguileor)
Date: Wed, 21 Aug 2019 11:49:35 +0200
Subject: [goals][IPv6-Only Deployments and Testing] Week R-9 Update
In-Reply-To: <16cb3637496.11a96d806202742.6095726236363290194@ghanshyammann.com>
References: <16ca57df0ed.112d7fef0121509.7413543320906046197@ghanshyammann.com> <20190820174934.saqk5cxoswgdfcc5@localhost> <16cb3637496.11a96d806202742.6095726236363290194@ghanshyammann.com>
Message-ID: <20190821094935.lseqmbkfmfiwwyp3@localhost>

On 21/08, Ghanshyam Mann wrote:
> ---- On Wed, 21 Aug 2019 02:49:34 +0900 Gorka Eguileor wrote ----
> > On 19/08, Ghanshyam Mann wrote:
> > > Hello Everyone,
> > >
> > > Below is the progress on the IPv6 goal during the R-9 week.
> > > As the first step, I am preparing the IPv6 jobs
> > > for the projects having zuulv3 jobs. The projects having zuulv2 jobs will be my second take.
> > > [...]
> > > Current status:
> > > 1. Cinder errors out when cinder's my_ip is configured as IPv6; iscsi is not able to _connect_single_volume [1].
> >
> > Hi,
> >
> > Looking at the logs this looks like a Cinder driver bug to me.
> >
> > I don't have a system I can use for testing right now, but I have
> > proposed a WIP patch [1] to Cinder with a possible solution.
> > [...]
>
> I have rebased the devstack patch on your fix and it is working fine -
> https://review.opendev.org/#/c/673266/
>
> You can log the bug and merge your fix now. Thanks for the fix, Gorka,
> and much appreciated your quick response.
>
> -gmann

Hi,

Thanks for checking it!

I have looked, and it turns out there was already an old bug report for
this issue [1], so I just updated the patch.

Cheers,
Gorka.

[1]: https://launchpad.net/bugs/1696866

> [...]
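For reference, a sketch of the general shape of the Cinder my_ip fix being
discussed above. Gorka's actual patch is review 677524; this hypothetical
helper only illustrates the underlying point that an IPv6 literal must be
wrapped in brackets before a port can be appended to form a portal address:

import ipaddress

def format_portal(my_ip, port=3260):
    """Return an iSCSI portal string valid for IPv4 and IPv6 literals."""
    try:
        if ipaddress.ip_address(my_ip).version == 6:
            return "[%s]:%d" % (my_ip, port)
    except ValueError:
        pass  # my_ip is a hostname, not an address literal
    return "%s:%d" % (my_ip, port)

assert format_portal("192.0.2.10") == "192.0.2.10:3260"
assert format_portal("2001:db8::10") == "[2001:db8::10]:3260"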
From gmann at ghanshyammann.com Wed Aug 21 10:21:41 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 21 Aug 2019 19:21:41 +0900
Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions
In-Reply-To: <20190819145437.GA29162@zeong>
References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong>
Message-ID: <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com>

---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ----
> On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote:
> > NOVA:
> > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666
> > websockify===0.9.0 tempest test failing
> >
> > KEYSTONE:
> > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393
> >
> > NEUTRON:
> > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz
> > this could be caused by pytest===5.1.0 as well
> >
> > KURYR:
> > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it
> > https://review.opendev.org/665352
> >
> > MISC:
> > tornado===5.1.1 salt is causing this, no ETA on a fix (same as last year)
> > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265
>
> This actually doesn't fix the underlying issue blocking it here. PR 265 is for
> fixing a compatibility issue with python 3.4, which we don't officially support
> in stestr but was a simple fix. The blocker is actually not an stestr issue,
> it's a testtools bug:
>
> https://github.com/testing-cabal/testtools/issues/272
>
> Where this comes into play here is that stestr 2.5.0 switched to using an
> internal test runner built off of stdlib unittest instead of testtools/subunit
> for python 3. This was done to fix a huge number of compatibility issues people
> had reported when trying to run stdlib unittest suites using stestr on
> python >= 3.5 (which were caused by unittest2 and testtools). The complication
> for openstack (more specifically tempest) is that it's built off of testtools,
> not stdlib unittest. So when tempest raises 'self.skipException' as part of
> its class-level skip checks, testtools raises 'unittest2.case.SkipTest' instead
> of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is
> and treats it as an unhandled exception, which is a test failure, instead of
> the intended skip result. [1] This is actually a general bug and will come up
> whenever anyone tries to use stdlib unittest to run tempest. We need to come up
> with a fix for this problem in testtools [2] or just work around it in tempest.
>
> [1] skip decorators typically aren't affected by this because they set an
> attribute that gets checked before the test method is executed, instead of
> relying on an exception, which is why this is mostly only an issue for tempest,
> because it does a lot of runtime skips via exceptions.
>
> [2] testtools is mostly unmaintained at this point. I was recently granted
> merge access but haven't had much free time to actively maintain it.

Thanks Matt for the details. As you know, for Tempest, where we need to
support py2.7 (including unittest2 use) for stable branches, we are going
to use the specific stestr version/branch(

> -Matt Treinish
>
> > jsonschema===3.0.2 see https://review.opendev.org/649789
> >
> > I'm trying to get this in place as we are getting closer to the
> > requirements freeze (sept 9th-13th). Any help clearing up these bugs
> > would be appreciated.
> >
> > --
> > Matthew Thode
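A sketch reproducing the skip mismatch Matthew Treinish describes above,
assuming a testtools release whose skipException resolves to unittest2's
SkipTest rather than stdlib unittest's (the class and test names here are
illustrative only):

import unittest

import testtools


class DemoTest(testtools.TestCase):
    @classmethod
    def setUpClass(cls):
        super(DemoTest, cls).setUpClass()
        # A tempest-style class-level runtime skip. With the affected
        # testtools this raises unittest2.case.SkipTest, which the stdlib
        # unittest machinery does not recognize as a skip.
        raise cls.skipException("skipping the whole class")

    def test_anything(self):
        pass


if __name__ == "__main__":
    result = unittest.TestResult()
    unittest.defaultTestLoader.loadTestsFromTestCase(DemoTest).run(result)
    # Expected: one skip. With the bug present: zero skips, one error.
    print("skipped=%d errors=%d" % (len(result.skipped), len(result.errors)))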
From thierry at openstack.org Wed Aug 21 10:24:35 2019
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 21 Aug 2019 12:24:35 +0200
Subject: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org
Message-ID: <8e1673fd-fb8c-098e-afd9-2cf0aea9f9e6@openstack.org>

Doug Hellmann wrote:
>>> On 2019-08-19 12:49:41 -0500 (-0500), Sean McGinnis wrote:
>>> [...]
>>> there seems to be a big difference between owning the task of
>>> configuring the site for the next release (which totally makes
>>> sense as a release team task) and owning the entire
>>> docs.openstack.org site.
>
> The suggestion is for the release team to take over the site generator
> for docs.openstack.org (the stuff under “www” in the current
> openstack-manuals git repository) and for the SIG to own anything
> that looks remotely like “content”. There isn’t much of that left anyway,
> now that most of it is in the project repositories.

Yes, my understanding was that the release team would only own the site
generation and associated cycle-tied mechanics. I felt like that was akin
to "releasing the docs" (whatever they end up containing), and at this
point docs are just a specific form of project team deliverable. We have
to care for those things getting done as part of the release cycle anyway.

It's not a great fit, but it's probably not the worst either. And there
aren't that many alternative solutions (the TC would arguably be a worse
fit).

--
Thierry Carrez (ttx)

From thierry at openstack.org Wed Aug 21 10:31:42 2019
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 21 Aug 2019 12:31:42 +0200
Subject: [all][tc] U release naming poll result
Message-ID: <4e3da673-7254-39ce-061b-620c80973d88@openstack.org>

Rico Lin wrote:
> Dear all OpenStackers,
>
> First, we got 298 voters to participate in the poll, so thank you for
> your vote.
>
> And here is the poll result:
> 1. Ussuri [乌苏里江 https://en.wikipedia.org/wiki/Ussuri_River]
> (Condorcet winner: wins contests with all other choices)
> 2. Uma [http://www.fallingrain.com/world/CH/20/Uma.html]
> 3. Ula [Miocene Baogeda Ula]
> 4. Urad [乌拉特中旗 https://en.wikipedia.org/wiki/Urad_Middle_Banner]
> 5. Ulansu [乌兰苏海组 Ulansu sea]
> 6. Ulanhot [乌兰浩特市 https://en.wikipedia.org/wiki/Ulanhot]
> 7. Ulanqab [乌兰察布市 https://en.wikipedia.org/wiki/Ulanqab]
> 8. Ujimqin [东/西乌珠穆沁旗 https://en.wikipedia.org/wiki/Ujimqin]

Note that following our release naming process [1], the OSF will now
conduct trademark searches on the most popular option before it can be
made official. If Ussuri ends up being conflicting or risky, we'll go
down the ranked list.

[1] https://governance.openstack.org/tc/reference/release-naming.html

--
Thierry Carrez (ttx)
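For readers unfamiliar with the "Condorcet winner" annotation in the poll
result above: it means Ussuri won its pairwise contest against every other
option. A toy illustration of the rule in Python, with made-up ballots
(each ranked best-first), not the actual poll data:

ballots = [
    ["Ussuri", "Uma", "Ula"],
    ["Ussuri", "Ula", "Uma"],
    ["Uma", "Ussuri", "Ula"],
]
candidates = {c for ballot in ballots for c in ballot}

def beats(x, y):
    """True if more voters rank x above y than y above x."""
    wins = sum(blt.index(x) < blt.index(y) for blt in ballots)
    return wins > len(ballots) - wins

winners = [c for c in candidates
           if all(beats(c, other) for other in candidates - {c})]
print(winners)  # -> ['Ussuri'] for these example ballots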
From jungleboyj at gmail.com Wed Aug 21 12:48:15 2019
From: jungleboyj at gmail.com (Jay Bryant)
Date: Wed, 21 Aug 2019 08:48:15 -0400
Subject: Re: [cinder] [3rd party ci] Deadline Has Passed for Python3 Migration
In-Reply-To: <8FC2060F93794D44942D588B8B5472871242631E@BPXM03GP.gisp.nec.co.jp>
References: <195417b6-b687-0cf3-6475-af04a2c40c95@gmail.com> <1104fc67-67d4-23dd-d406-373ccb5a3b01@gmail.com> <8FC2060F93794D44942D588B8B5472871242631E@BPXM03GP.gisp.nec.co.jp>

Hidekazu,

Thank you for the update. Please keep us updated on your progress.

Thanks!

Jay

On 8/20/2019 1:56 AM, Hidekazu Nakamura wrote:
> Hi Jay,
>
> Sorry for the late reply.
> NEC is working hard to move the NEC Cinder CI to python3.7.
> I updated the py3-ci-review etherpad.
>
> Thanks,
> Hidekazu Nakamura
>
>> -----Original Message-----
>> From: Jay Bryant
>> Sent: Tuesday, August 6, 2019 12:41 PM
>> To: openstack-discuss at lists.openstack.org
>> Subject: [cinder] [3rd party ci] Deadline Has Passed for Python3 Migration
>>
>> All,
>>
>> This e-mail has multiple purposes. First, I have expanded the mail audience
>> to go beyond just openstack-discuss to a mailing list I have created for all
>> 3rd Party CI Maintainers associated with Cinder. I apologize to those of you
>> who are getting this as a duplicate e-mail.
>>
>> For all 3rd Party CI maintainers who have already migrated your systems to
>> using Python 3.7... Thank you! We appreciate you keeping up to date with
>> Cinder's requirements and maintaining your CI systems.
>>
>> If this is the first time you are hearing of the Python 3.7 requirement,
>> please continue reading.
>>
>> It has been decided by the OpenStack TC that support for Py2.7 will be
>> deprecated [1]. The Train development cycle is the last cycle that will
>> support Py2.7, and therefore all vendor drivers need to demonstrate support
>> for Py3.7.
>>
>> It was discussed at the Train PTG that we would require all 3rd Party CIs to
>> be running using Python 3 by Train milestone 2. [2] We have been
>> communicating the importance of getting 3rd Party CI running with py3 in
>> meetings and e-mail for quite some time now, but it still appears that
>> nearly half of all vendors are not yet running with Python 3. [3]
>>
>> If you are a vendor who has not yet moved to using Python 3, please take
>> some time to review this document [4], as it has guidance on how to get your
>> CI system updated. It also includes some additional details as to why this
>> requirement has been set and the associated background. Also, please update
>> the py3-ci-review etherpad with notes indicating that you are working on
>> adding py3 support.
>>
>> I would also ask all vendors to review the etherpad I have created, as it
>> indicates a number of other drivers that have been marked unsupported due to
>> CI systems not running properly. If you are not planning to continue to
>> support a driver, adding such a note in the etherpad would be appreciated.
>>
>> Thanks!
>>
>> Jay
>>
>> [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008255.html
>> [2] https://wiki.openstack.org/wiki/CinderTrainSummitandPTGSummary#3rd_Party_CI
>> [3] https://etherpad.openstack.org/p/cinder-py3-ci-review
>> [4] https://wiki.openstack.org/wiki/Cinder/3rdParty-drivers-py3-update

From jungleboyj at gmail.com Wed Aug 21 12:52:01 2019
From: jungleboyj at gmail.com (Jay Bryant)
Date: Wed, 21 Aug 2019 08:52:01 -0400
Subject: [cinder] No Weekly Meeting Today ...

Team,

Just a reminder that we will not have our weekly meeting this week due to
our mid-cycle meeting going on.

If you are able to join the mid-cycle, please check the #openstack-cinder
channel for details.

Thanks!

Jay (irc: jungleboyj)

From a.settle at outlook.com Wed Aug 21 13:33:58 2019
From: a.settle at outlook.com (Alexandra Settle)
Date: Wed, 21 Aug 2019 13:33:58 +0000
Subject: Re: [all] [tc] [docs] [release] [ptls] Docs as SIG: Ownership of docs.openstack.org

> Yes, my understanding was that the release team would only own the site
> generation and associated cycle-tied mechanics. I felt like that was akin
> to "releasing the docs" (whatever they end up containing), and at this
> point docs are just a specific form of project team deliverable.

+1 that's how I see it too

> We have to care for those things getting done as part of the release
> cycle anyway.
>
> It's not a great fit, but it's probably not the worst either. And there
> aren't that many alternative solutions (the TC would arguably be a worse
> fit).

On another note, the transition of docs to a SIG means we need to
thoroughly define what a SIG is. JP and I discussed this offline, and
provided our thoughts on Rico's document comparing the official group
structures [1]. At this point in time, I see the definition as being too
vague for us to truly be able to adopt docs and other projects that may
inevitably want to do the same thing.

Discussion is welcomed on the patch.

[1] https://review.opendev.org/#/c/668093/

--
Alexandra Settle
IRC: asettle
From jeremyfreudberg at gmail.com Wed Aug 21 14:15:14 2019
From: jeremyfreudberg at gmail.com (Jeremy Freudberg)
Date: Wed, 21 Aug 2019 10:15:14 -0400
Subject: [sahara] Cancelling Sahara meeting August 22

Hi all,

There will be no Sahara meeting 2019-08-22, the reason being that Luigi is
not around and I myself will most likely not be around either.

Holler if you need anything.

Thanks,
Jeremy

From openstack at nemebean.com Wed Aug 21 14:25:56 2019
From: openstack at nemebean.com (Ben Nemec)
Date: Wed, 21 Aug 2019 09:25:56 -0500
Subject: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core

Hello Norsk,

It is my pleasure to propose Gabriele Santomaggio (gsantomaggio) as a new
member of the oslo.messaging core team. He has been contributing to the
project for about a cycle now and has gotten up to speed on our development
practices. Oh, and he wrote the book on RabbitMQ [0]. :-)

Obviously we think he'd make a good addition to the core team. If there are
no objections, I'll make that happen in a week.

Thanks.

-Ben

0: http://shop.oreilly.com/product/9781849516501.do

From amy at demarco.com Wed Aug 21 14:30:51 2019
From: amy at demarco.com (Amy Marrich)
Date: Wed, 21 Aug 2019 09:30:51 -0500
Subject: Re: [User-committee] [ALL][UC] Train EC Election - nomination results, and what is going on

Congrats Mohamed and welcome aboard!

Amy (spotz)

On Wed, Aug 21, 2019 at 8:57 AM Mohamed Elsakhawy wrote:

> Thanks Ian and Ed. Looking forward to working with the rest of the team.
>
> Mohamed
>
> On Tue, Aug 20, 2019 at 12:55 PM Ian Y. Choi wrote:
>
>> Hello all,
>>
>> As announced [1-2], the UC nomination period ran from August 5 to August
>> 16, 2019, and as the election officials we would like to share the
>> result, as well as the next steps. This was discussed by the User
>> Committee yesterday [3]:
>>
>> - There was one UC candidacy [4]. The candidacy has been validated, and
>> the election officials announce that "Mohamed Elsakhawy" will serve on
>> the UC. Congratulations!
>> - There will be no election, since there was only one candidate for the
>> two open positions in this election.
>> - There were discussions during the UC meeting [3] yesterday, and the UC
>> decided to have a second, special election for the other seat within the
>> timeframe spelled out in the charter.
>> The same election officials will serve, and the officials and UC members
>> are discussing a feasible time frame for the election.
>>
>> As mentioned, the upcoming election is *special*, so please watch for
>> more election information.
>> The election officials will post an email soon with more details on the
>> upcoming special UC election.
>>
>> Thank you,
>>
>> - Ed & Ian
>>
>> [1] http://lists.openstack.org/pipermail/user-committee/2019-July/002862.html
>> [2] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html
>> [3] http://eavesdrop.openstack.org/meetings/uc/2019/uc.2019-08-19-15.04.log.html#l-70
>> [4] http://lists.openstack.org/pipermail/user-committee/2019-August/002866.html

From kgiusti at gmail.com Wed Aug 21 14:40:31 2019
From: kgiusti at gmail.com (Ken Giusti)
Date: Wed, 21 Aug 2019 10:40:31 -0400
Subject: Re: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core

A big +1 for Gabriele - I've been working with him and it's been a pleasure.
And his considerable expertise in all things RabbitMQ is a big win for the
project!

-K

On Wed, Aug 21, 2019 at 10:32 AM Ben Nemec wrote:
> [...]

--
Ken Giusti (kgiusti at gmail.com)

From g.santomaggio at gmail.com Wed Aug 21 14:49:00 2019
From: g.santomaggio at gmail.com (Gabriele Santomaggio)
Date: Wed, 21 Aug 2019 16:49:00 +0200
Subject: Re: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core

Thank you!

-
Gabriele Santomaggio

Il giorno mer 21 ago 2019 alle ore 16:41 Ken Giusti ha scritto:
> [...]

From mnaser at vexxhost.com Wed Aug 21 15:32:00 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Wed, 21 Aug 2019 11:32:00 -0400
Subject: [ansible-sig] meetings!

Hi everyone,

I'd like to schedule a meeting every week so we can discuss the details of
what we can do together. You'll find the link to the meeting and the
available time slots below.

If you're interested in being involved, please go and select the day and
time that would work best for you; this way we can find a time that works
for everyone.

Link: https://doodle.com/poll/6f9qdddk6icw92iq

Looking forward to discussing with all of you!

Regards,
Mohammed

--
Mohammed Naser — vexxhost
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From moguimar at redhat.com Wed Aug 21 16:12:47 2019
From: moguimar at redhat.com (Moises Guimaraes de Medeiros)
Date: Wed, 21 Aug 2019 18:12:47 +0200
Subject: Re: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core

+1 o/

On Wed, Aug 21, 2019 at 4:49 PM Gabriele Santomaggio wrote:
> [...]

--
Moisés Guimarães
Software Engineer
Red Hat

From corvus at inaugust.com Wed Aug 21 16:15:46 2019
From: corvus at inaugust.com (James E. Blair)
Date: Wed, 21 Aug 2019 09:15:46 -0700
Subject: [tc] Release naming process
Message-ID: <87ftlu8pfx.fsf@meyer.lemoncheese.net>

Hi,

In the previous thread about the U release name, we discussed how the
actual process for selecting the name has diverged from the written
process. I think it's important that we follow our own processes, so we
should reconcile those. We should change our actions or change the
process.

Based on the previous discussion, I've proposed 6 changes to
openstack/governance for some initial feedback. We can, of course, tweak
these options, eliminate them, or add new ones.

Ultimately, we're aiming for the TC to formally vote on one or a small
number of changes similar to these.

I'd like for anyone interested in this to review these options and leave
feedback on the changes themselves, or here on the mailing list. Either
is fine, and all will be considered. Leaving "code-review" votes on the
changes themselves will help us gauge relative support for the different
options.

In a week, I'll collect the feedback and propose a next step.

https://review.opendev.org/675788 - Stop naming releases
https://review.opendev.org/677745 - Name releases after major cities
https://review.opendev.org/677746 - Name releases after the ICAO alphabet
https://review.opendev.org/677747 - Ask the Foundation to name releases
https://review.opendev.org/677748 - Name releases after random words
https://review.opendev.org/677749 - Clarify the existing release naming process

The last one is worth particular mention -- it keeps the current process
with some minor clarifications.

-Jim
From gr at ham.ie Wed Aug 21 16:46:38 2019
From: gr at ham.ie (Graham Hayes)
Date: Wed, 21 Aug 2019 17:46:38 +0100
Subject: Re: [tc] Release naming process
Message-ID: <4d526554-6700-60d8-4477-19aa31e9e3ae@ham.ie>

On 21/08/2019 17:15, James E. Blair wrote:
> Hi,
>
> In the previous thread about the U release name, we discussed how the
> actual process for selecting the name has diverged from the written
> process. I think it's important that we follow our own processes, so we
> should reconcile those. We should change our actions or change the
> process.
>
> [...]
>
> https://review.opendev.org/675788 - Stop naming releases

As I said in the review, I am -1 on this, due to the amount of tooling
that assumes names, and alphabetical names. Also, we need something to
combine Nova v20 and Designate v9 into a combined release (in this case
Train), and moving to a new number would make this interesting.

> https://review.opendev.org/677745 - Name releases after major cities

+1 from me on this. I like the idea, and it is less controversial.

> https://review.opendev.org/677746 - Name releases after the ICAO alphabet

A solid "backstop"[1] option, but not something we should actively
promote, in my opinion.

This also only works for V->Z, as I don't want to ever have an OpenStack
Alpha / Beta release when Nova will be on v27 and over a decade old.

> https://review.opendev.org/677747 - Ask the Foundation to name releases

I am not sure how I feel about this - I think this is a community
deliverable, and as such *we* should name it.

> https://review.opendev.org/677748 - Name releases after random words

-1 - I feel that we will get into the same issues we have with the
current process of bikeshedding over words[2], without adding any
benefits.

> https://review.opendev.org/677749 - Clarify the existing release naming process

Seems OK to update the docs with this if we stick with the status quo.
Not sure it would have avoided the current issues if it had been in
place, but let's see.

> The last one is worth particular mention -- it keeps the current process
> with some minor clarifications.

Zane put up 2 more:

https://review.opendev.org/#/c/677772/ - Clarify proscription of generic release names
https://review.opendev.org/#/c/677771/ - Align release naming process with practice

These document the current state of play, and are a +1 from me if we
stick with the current process.

Overall, these are all short(ish) term solutions - we need to do
something more drastic in ~3 years for the Z->A roll over (if we keep
doing named releases).

- Graham

1 - If you are in the Europe area, or follow our news, I am sorry
2 - I am aware a lot of the TC discussions are bikeshedding over words,
but let's try and limit it?

From sfinucan at redhat.com Wed Aug 21 17:07:29 2019
From: sfinucan at redhat.com (Stephen Finucane)
Date: Wed, 21 Aug 2019 18:07:29 +0100
Subject: Re: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core

No objections from me. +1

On Wed, 2019-08-21 at 09:25 -0500, Ben Nemec wrote:
> [...]
> > > Zane put up 2 more: > > https://review.opendev.org/#/c/677772/ - > Clarify proscription of generic release names > https://review.opendev.org/#/c/677771/ - Align release naming process > with practice > > These document the current state of play, and is a +1 from me if > we stick with the current process. I agree they bring them closer to describing practice (but still not 100% in my view). But I just want to take a step back and remind folks that we are having this conversations because the current process is unpleasant. It is resulting in contention and disagreement about things which have no bearing on our producing good software. I think we should take this opportunity to choose a new process which produces boring non-controversial names and ideally start planning to phase them out altogether soon. > Overall, these are all short(ish) term solutions - we need to do > something more drastic in ~ 3 years for the Z->A roll over (if we keep > doing named releases). Agreed. -Jim From smooney at redhat.com Wed Aug 21 18:28:00 2019 From: smooney at redhat.com (Sean Mooney) Date: Wed, 21 Aug 2019 19:28:00 +0100 Subject: [tc] Release naming process In-Reply-To: <87sgpu777s.fsf@meyer.lemoncheese.net> References: <87ftlu8pfx.fsf@meyer.lemoncheese.net> <4d526554-6700-60d8-4477-19aa31e9e3ae@ham.ie> <87sgpu777s.fsf@meyer.lemoncheese.net> Message-ID: <20693f22a8d9711b37104af55e312143ae4ab903.camel@redhat.com> On Wed, 2019-08-21 at 10:34 -0700, James E. Blair wrote: > Thanks for the excellent feedback! > > Graham Hayes writes: > > > On 21/08/2019 17:15, James E. Blair wrote: > > > https://review.opendev.org/675788 - Stop naming releases > > > > As I said in the review I am -1 on this, due to the amount of tooling > > that assumes names, and alphabetical names. also, we need something to > > combine Nova v20 and Designate v9 into a combined release (in this case > > Train), and moving to a new number would make this interesting > > What about using dates? We used to use dates as version numbers, which > we found was problematic. But what if we used dates for the coordinated > release but not as software version numbers? year.release i think would work e.g. 20.01 20.02 i donthink think we shoudl use yy.mm as that qould cause confution with things like ubuntu i think that was more or less what we did before mybe it was 2020.1 but i prefer names but am not opposed to this. > > > > https://review.opendev.org/677745 - Name releases after major cities > > > > +1 from me on this, I like the idea, and is less controversial > > Yeah, I really like this one too. i like this too although i would proably expand it to city,region or geograpical feature e.g. artic, bayou, canyon, delta/desert, are gograpical feature / regions that i think would be in a similar spirit to cities. > > > > https://review.opendev.org/677746 - Name releases after the ICAO alphabet > > > > A solid "backstop"[1] option, but not something we should actively > > promote in my opinion. > > We know what the TC doesn't want, now we need to find out what it does > want. ;) > :) it may be a bit too soon. lets decide the day before we ship... > > This also only works for V->Z as I don't to ever have a OpenStack Alpha > > / Beta release when Nova will be on v27 and over a decade old. > > Yes, I probably should clarify that many of these options get us to Z, > and that we should separately start thinking about what happens after > Z. Alexandra plans to start that discussion soon. 
some could apply beyond that but ya when we get to Z it might be a good time to revisit it again. > > > > https://review.opendev.org/677747 - Ask the Foundation to name releases > > > > I am not sure how I feel about this - I think this is a community > > deliverable, and as such *we* should name it. if we did this i would prefer there to still be a comunity vote element and have the TC prepare a list of viable names and comunity decide form that list rather then it be totally a tc decision > > > > > https://review.opendev.org/677748 - Name releases after random words > > > > -1 - I feel that we will get into the same issues we have for the > > current process of bikeshedding over words[2], without adding any > > benefits. > > To be clear about this one -- the proposal is for the TC to decide among > a slate of 10 candidates produced by random number generator. In > practice, this probably will mean picking from among 2 or 3 > boring-but-okay names and ignoring 7 bad-or-stupid names. I don't > expect the discussions to be contentious. Maybe produce the occasional > chuckle. > > I agree though, cities are better. yep although another variation on this would be for the TC to select a theme and have the community suggest words related to that theme and then ballot on that. e.g. use a human random number generort seed with a general directly like trying to heard cat, the resonce will be random and most will ignore the request but it would be an option. > > > > https://review.opendev.org/677749 - Clarify the existing release naming process > > > > Seems OK to update the docs with this if we stick with the status quo. > > Not sure it would have avoided the current issues if it was in place, > > but lets see. > > I'm personally not in favor of it and also don't think it would have. personaly i was watching but mostly outside of the process but i did not feel it was as big of an issue as it has been suggested how this release was handeled. i think most people acted in good intent. that said i also get that we want to improve going forward and it good we can have this discourse as a community on how to do that. > > > > The last one is worth particular mention -- it keeps the current process > > > with some minor clarifications. > > > > > > Zane put up 2 more: > > > > https://review.opendev.org/#/c/677772/ - > > Clarify proscription of generic release names > > https://review.opendev.org/#/c/677771/ - Align release naming process > > with practice > > > > These document the current state of play, and is a +1 from me if > > we stick with the current process. > > I agree they bring them closer to describing practice (but still not > 100% in my view). But I just want to take a step back and remind folks > that we are having this conversations because the current process is > unpleasant. It is resulting in contention and disagreement about things > which have no bearing on our producing good software. I think we should > take this opportunity to choose a new process which produces boring > non-controversial names and ideally start planning to phase them out > altogether soon. > > > Overall, these are all short(ish) term solutions - we need to do > > something more drastic in ~ 3 years for the Z->A roll over (if we keep > > doing named releases). > > Agreed. 
> > -Jim > From florian at citynetwork.eu Wed Aug 21 19:08:35 2019 From: florian at citynetwork.eu (Florian Haas) Date: Wed, 21 Aug 2019 21:08:35 +0200 Subject: [nova][glance][cinder] How to do consistent snapshots with qemu-guest-agent In-Reply-To: References: Message-ID: <69b7774c-608d-5552-e300-99e91da32799@citynetwork.eu> [apologies for the top-post] Hi Ralf, it looks like you've met all the necessary prerequisites. Basically, 1. The image you are booting from must have the hw_qemu_guest_agent=yes property set (this configures the Nova instance with a virtual serial device consumed by qemu-guest-agent). 2. The instance must run the qemu-guest-agent daemon. 3. The image you are booting from should have the os_require_quiesce=yes property set. This isn't strictly necessary, as libvirt should always try to send the freeze/thaw commands over the serial device if your instance is configured with hw_qemu_guest_agent — but if os_require_quiesce is set then the snapshot will actually fail if libvirt can't freeze, which is what you probably want. 4. The filesystem used within the guest must support fsfreeze. This includes btrfs, ext2/3/4, and xfs, and a few others. vfat on Linux does not support being frozen, though Windows guests with the Windows Qemu Guest Agent apparently do support freezing if VSS is enabled — I am no expert on Windows guests though. What happens under the covers is that qemu-guest-agent invokes the FIFREEZE ioctl on each mounted filesystem in the guest, as seen here: https://git.qemu.org/?p=qemu.git;a=blob;f=qga/commands-posix.c#l1327 (the comments immediately above that line explain under which circumstances the FIFREEZE ioctl may fail). The FIFREEZE ioctl maps to the kernel freeze_super() function, which flushes the filesystem superblock, syncs the filesystem, and then disallows any further I/O. Which, to answer your other question, should indeed persist all in-flight I/O to disk. Unfortunately, nothing in the code path (that I know of) issues any printk's on success, so dmesg won't tell you that the filesystem has been flushed/frozen successfully. You'd only see "VFS:Filesystem freeze failed" in your guest's kernel log on error. The same is true for FITHAW/thaw_super(), which thaws the superblock and makes the filesystem writable again. However, you can (at least on an Ubuntu guest) create a file named /etc/default/qemu-guest-agent, in which you can define DAEMON_ARGS like this: DAEMON_ARGS="--logfile /var/log/qemu-ga.log --verbose" Then, while you are creating a snapshot with "nova image-create" or "openstack server image create", /var/log/qemu-ga.log should be populated with log entries related to the fsfreeze events. The same should be true for creating a snapshot from Horizon. On Ubuntu bionic, you should also make sure that you are running qemu-guest-agent from bionic-security (or a recent daily build of an Ubuntu cloud image), because at least in the initial bionic release qemu-guest-agent was suffering from a packaging issue, described in https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1820291. For RBD-backed Nova/libvirt, things are a bit more complicated still, due to what appears to be somewhat inconsistent/unexpected behavior in Nova. See the discussion in: https://lists.ceph.io/hyperkitty/list/ceph-users at ceph.io/thread/3YQCRO4JP56EDJN5KX5DWW5N2CSBHRHZ/ Does this give you enough information so you can verify whether or not freeze/thaw is working as expected for you? 
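In case it helps with scripting that verification: here is a minimal Python sketch of the same checks you did with virsh, assuming the libvirt-python bindings are available on the compute node (the instance name is just the one from your example):

    import libvirt
    import libvirt_qemu

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000248e')

    # Equivalent of "virsh qemu-agent-command ... guest-ping"; raises
    # libvirtError if the agent is not reachable over the serial device.
    print(libvirt_qemu.qemuAgentCommand(dom, '{"execute":"guest-ping"}', 5, 0))

    # fsFreeze()/fsThaw() drive the same guest-fsfreeze-freeze/thaw agent
    # commands and return the number of filesystems frozen/thawed.
    frozen = dom.fsFreeze()
    try:
        print('%d filesystem(s) frozen' % frozen)
        # take the snapshot here
    finally:
        dom.fsThaw()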
Cheers, Florian On 14/08/2019 10:41, Teckelmann, Ralf, NMU-OIP wrote: > Hello, > > > Working my way through documentation and articles, I am totally lost on the > matter. > > All I want to know is: > > - if issuing "openstack snapshot create ...." > > - if clicking "Create Snapshot" in Horizon for an instance > > will secure a consistent snapshot (of all volumes in question). > With "consistent", I mean that all the data in memory are written to the > disk before starting a snapshot. > > I hope someone can clear up whether using the setup described in the > following is sufficient to achieve this goal, or whether I have to do > something in addition. > > > If you have any questions I am eager to answer as fast as possible. > > > Setup: > > > We have a Stein-based OpenStack deployment with cinder backed by ceph. > > Instances are created with cinder volumes. Boot volumes are based on an > image having the properties: > > - hw_qemu_guest_agent='yes' > - os_require_quiesce='yes' > > > The image is ubuntu 16.04 or 18.04 with the qemu-guest-agent package > installed and service running (no additional configuration besides > distro-default): > > > qemu-guest-agent.service - LSB: QEMU Guest Agent startup script >    Loaded: loaded (/etc/init.d/qemu-guest-agent; bad; vendor preset: > enabled) >    Active: active (running) since Wed 2019-08-14 07:42:21 UTC; 9min ago >      Docs: man:systemd-sysv-generator(8) >    CGroup: /system.slice/qemu-guest-agent.service >            └─2300 /usr/sbin/qemu-ga --daemonize -m virtio-serial -p > /dev/virtio-ports/org.qemu.guest_agent.0 > > Aug 14 07:42:21 ulthwe systemd[1]: Starting LSB: QEMU Guest Agent > startup script... > Aug 14 07:42:21 ulthwe systemd[1]: Started LSB: QEMU Guest Agent startup > script. > > I can see the socket on the compute node and send pings successfully: > > ~# ls /var/lib/libvirt/qemu/*.sock > /var/lib/libvirt/qemu/org.qemu.guest_agent.0.instance-0000248e.sock > root at pcevh2404:~# virsh qemu-agent-command instance-0000248e > '{"execute":"guest-ping"}' > {"return":{}} > > > I can also send freeze and thaw successfully: > > ~# virsh qemu-agent-command instance-0000248e > '{"execute":"guest-fsfreeze-freeze"}' > {"return":1} > > ~# virsh qemu-agent-command instance-0000248e > '{"execute":"guest-fsfreeze-thaw"}' > {"return":1} > > Sending a simple write (echo "bla" > blub.file) in the "frozen" state > will be blocked until "thaw", as expected. > > Best regards > > > Ralf T. From jimmy at openstack.org Wed Aug 21 19:10:11 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 21 Aug 2019 14:10:11 -0500 Subject: [TC] [UC] Shanghai Forum Selection Committee: Help Needed Message-ID: <5D5D9713.6020100@openstack.org> Hi everyone! The Forum in Shanghai is coming up. We require 2 volunteers from the TC and 2 from the UC for the Forum Selection Committee. For more information, please see: https://wiki.openstack.org/wiki/Forum Please reach out to myself or knelson at openstack.org if you're interested. Volunteers should respond on or before September 2, 2019. Note: volunteers are required to be currently serving on either the UC or the TC. 
Cheers, Jimmy From amy at demarco.com Wed Aug 21 19:17:17 2019 From: amy at demarco.com (Amy Marrich) Date: Wed, 21 Aug 2019 14:17:17 -0500 Subject: [TC] [UC] Shanghai Forum Selection Committee: Help Needed In-Reply-To: <5D5D9713.6020100@openstack.org> References: <5D5D9713.6020100@openstack.org> Message-ID: I can help from the UC side if needed just let me know Amy (spotz) On Wed, Aug 21, 2019 at 2:11 PM Jimmy McArthur wrote: > Hi everyone! > > The Forum in Shanghai is coming up. We require 2 volunteers from the TC > and 2 from the UC for the Forum Selection Committee. For more information, > please see:https://wiki.openstack.org/wiki/Forum > > Please reach out to myself orknelson at openstack.org if you're interested. > Volunteers should respond on or before September 2, 2019. > > Note: volunteers are required to be currently serving on either the UC or > the TC. > > Cheers, > Jimmy > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Aug 21 19:29:30 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 21 Aug 2019 12:29:30 -0700 Subject: [election] Coordination + Prep Message-ID: Hello Election Officials! (and anyone else curious about running elections :) ) Next week we are set to get this party started. Things that need to happen before then: 1. Election season email to the ML-- we have a template for this, but it and all the other emails this run will need to be tweaked (or we add new ones for this combined election type <-- I'm in favor of this approach in case we do this again in the future). 2. Patch to set up U nominations directories-- We don't have the next release name settled so I guess we just go for 'U' and we can update it later. 3. Patch to update officials list on site-- I will need to be removed because I won't be able to help with the actual election this round, but I am here to advise if you need anything. ...I could be missing things, but that's for sure what I know we need to accomplish before the start date and the actual nominations kickoff email. For reference, here is the general process that we run though[1]. That being said, things are a bit different this time since we are running the elections simultaneously. We should probably add a section for what happens when we have to run them in parallel (same vein as writing new email templates for the combined election). -Kendall (diablo_rojo) [1]https://opendev.org/openstack/election/src/branch/master/README.rst -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mtreinish at kortar.org Wed Aug 21 19:41:44 2019 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 21 Aug 2019 15:41:44 -0400 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> Message-ID: <20190821194144.GA1844@zeong> On Wed, Aug 21, 2019 at 07:21:41PM +0900, Ghanshyam Mann wrote: > ---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ---- > > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > > NOVA: > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > websockify===0.9.0 tempest test failing > > > > > > KEYSTONE: > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > NEUTRON: > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > this could be caused by pytest===5.1.0 as well > > > > > > KURYR: > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > https://review.opendev.org/665352 > > > > > > MISC: > > > tornado===5.1.1 salt is causing this, no eta on fix (same as last year) > > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > > fixing a compatibility issue with python 3.4, which we don't officially support > > in stestr but was a simple fix. The blocker is actually not an stestr issue, > > it's a testtools bug: > > > > https://github.com/testing-cabal/testtools/issues/272 > > > > Where this is coming into play here is that stestr 2.5.0 switched to using an > > internal test runner built off of stdlib unittest instead of testtools/subunit > > for python 3. This was done to fix a huge number of compatibility issues people > > had reported when trying to run stdlib unittest suites using stestr on > > python >= 3.5 (which were caused by unittest2 and testtools). The complication > > for openstack (more specifically tempest) is that it's built off of testtools, > > not stdlib unittest. So when tempest raises 'self.skipException' as part of > > its class-level skip checks, testtools raises 'unittest2.case.SkipTest' instead > > of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is > > and treats it as an unhandled exception, which is a test failure, instead of the > > intended skip result. [1] This is actually a general bug and will come up whenever > > anyone tries to use stdlib unittest to run tempest. We need to come up with a > > fix for this problem in testtools [2] or just work around it in tempest. > > > > [1] skip decorators typically aren't affected by this because they set an > > attribute that gets checked before the test method is executed instead of > > relying on an exception, which is why this is mostly only an issue for tempest > > because it does a lot of run time skips via exceptions. > > > > [2] testtools is mostly unmaintained at this point, I was recently granted > > merge access but haven't had much free time to actively maintain it > Thanks matt for details. 
As you know, for Tempest where we need to support py2.7 > (including unittest2 use) for stable branches, we are going to use the specific stestr > version/branch (which is a good option to me). I think your PR to remove the unittest2 use from testtools > makes sense to me [1]. A workaround in Tempest can be the last option for us. https://github.com/testing-cabal/testtools/pull/277 isn't a short term solution, unittest2 is still needed for python < 3.5 in testtools and testtools has not deprecated support for python 2.7 or 3.4 yet. I probably can rework that PR so that it's conditional and always uses stdlib unittest for python >= 3.5, but then testtools ends up maintaining two separate paths depending on python version. I'd like to continue thinking about that as a long term solution because I don't know when I'll have the time to keep pushing that PR forward. > > Until we fix it, and to avoid a gate break, can we cap stestr in g-r - stestr<2.5.0 ? I know that is > not the option you like. > > [1] https://github.com/mtreinish/testtools/commit/38fc9a9e302f68d471d7b097c7327b4ff7348790 > > -gmann > > > > > > -Matt Treinish > > > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > I'm trying to get this in place as we are getting closer to the > > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > > would be appreciated. > > > -- > > > Matthew Thode > > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Wed Aug 21 19:47:19 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 21 Aug 2019 15:47:19 -0400 Subject: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core In-Reply-To: References: Message-ID: > On Aug 21, 2019, at 10:25 AM, Ben Nemec wrote: > > Hello Norsk, > > It is my pleasure to propose Gabriele Santomaggio (gsantomaggio) as a new member of the oslo.messaging core team. He has been contributing to the project for about a cycle now and has gotten up to speed on our development practices. Oh, and he wrote the book on RabbitMQ[0]. :-) > > Obviously we think he'd make a good addition to the core team. If there are no objections, I'll make that happen in a week. > > Thanks. > > -Ben > > 0: http://shop.oreilly.com/product/9781849516501.do > +1 From openstack at fried.cc Wed Aug 21 19:51:48 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 21 Aug 2019 14:51:48 -0500 Subject: [nova] The pros/cons for libvirt persistent assignment and DB persistent assignment. In-Reply-To: References: Message-ID: <9ace50aa-c5a9-8d35-9be9-05d7d42cc846@fried.cc> Alex- Thanks for writing this up. > #1 Without Nova DB persistence for the assignment info, depending on > the hypervisor to persist it. I liked the "no persistence" option in theory, but it unfortunately turned out to be too brittle when it came to the corner cases. > #2 With Nova DB persistence, but using a virt driver specific blob to store > virt driver specific info. > >    The idea is to persist the assignment for the instance into the DB. The > resource tracker gets available resources from the virt driver. The resource > tracker will calculate on the fly based on available resources and > assigned resources from the instance DB. The new field 'instance.resources' > is designed for supporting virt driver specific metadata, thus hiding > the virt driver and platform detail from the RT. 
> https://etherpad.openstack.org/p/vpmems-non-virt-driver-specific-new I just took a closer look at this, and I really like it. Persisting local resource information with the Instance and MigrationContext objects ensures we don't lose it in weird corner cases, regardless of a specific hypervisor's "persistence model" (e.g. domain XML for libvirt). MigrationContext is already being used for this old_* new_* concept - but the existing fields are hypervisor-specific (numa and pci). Storing this information in a generic, opaque-outside-of-virt way means we're not constantly bolting hypervisor-specific fields onto what *should* be non-hypervisor-specific objects. As you've stated in the etherpad, this framework sets us up nicely to start transitioning existing PCI/NUMA-isms over to a Placement-driven model in the near future. Having the virt driver report provider tree (placement-specific) and "real" (hypervisor-specific) resource information at the same time makes all kinds of sense. So, quite aside from solving the stated race condition and enabling vpmem, all of this is excellent movement toward the "generic device (resource) management" we've been talking about for years. Let's make it so. efried . From fungi at yuggoth.org Wed Aug 21 20:39:08 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 21 Aug 2019 20:39:08 +0000 Subject: [election] Coordination + Prep In-Reply-To: References: Message-ID: <20190821203908.epzgfr5xn7al7bc7@yuggoth.org> On 2019-08-21 12:29:30 -0700 (-0700), Kendall Nelson wrote: [...] > 1. Election season email to the ML-- we have a template for this, > but it and all the other emails this run will need to be tweaked > (or we add new ones for this combined election type <-- I'm in > favor of this approach in case we do this again in the future). Yes, if we can get that done quickly. The nomination period begins in 6 days, so I advocate for trying to get at least the first one in shape to send some time tomorrow. > 2. Patch to set up U nominations directories-- We don't have the next > release name settled so I guess we just go for 'U' and we can update it > later. [...] This is up as https://review.opendev.org/677816 (to update the cycle name to "U") and https://review.opendev.org/677817 (to create the directories). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at fried.cc Wed Aug 21 22:15:47 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 21 Aug 2019 17:15:47 -0500 Subject: More upgrade issues with PCPUs - input wanted In-Reply-To: References: <2bea14b419a73a5fee0ea93f5b27d4c6438b35de.camel@redhat.com> Message-ID: <88926972-2050-eb56-6a0b-a111d9e8ce90@fried.cc> >> The step I'm thinking of is: >> >> 1. Upgrade the control plane; disable requesting PCPU, still request VCPU. >> 2. Rolling-upgrade the compute nodes; compute nodes begin to report >> both PCPU and VCPU, but requests are still added to VCPU. >> 3. Enable the PCPU request; new requests now request PCPU. >>        At this point, some instances are using VCPU and some >> instances are using PCPU on the same node, and the amount of VCPU + PCPU >> will be double the available CPU resources. The NUMATopology filter >> is responsible for stopping over-consumption of the total number of CPUs. >> 4. Rolling-update the compute nodes' configuration to use >> cpu_dedicated_set, which triggers the reshape of existing VCPU consumption >> to PCPU consumption. 
>>      New requests go to PCPU at step 3; no more VCPU requests >> at this point. Rolling-upgrade the nodes to get rid of the existing VCPU consumption. >> 5. done > > This had been my initial plan. The issue is that by reporting both > PCPU and VCPU in (2), our compute node's resource provider will now > have PCPU inventory available (though it won't be used). This is > problematic since "does this resource provider have PCPU inventory" > is one of the questions I need to ask to determine if I should do a > reshape. If I can't rely on this heuristic, I need to start querying > for allocation information (so I can ask "does this resource > provider have PCPU *allocations*") every time I start a compute > node. I'm guessing this is expensive, since we don't do it by default. We already do it as part of update_available_resource via _remove_deleted_instances_allocations (there we're only checking the compute node RP, but in the future we'll have to do it for the whole tree anyway). We restricted it to the reshape path in _update_to_placement because it's not free and it was possible to make the flow work in the general case without it. We can still avoid it in the general case by only doing it when startup is True. So if you can solve the problem (which I'm still wrapping my brain around) by looking at the allocations, let's do that. Because... > I'm not quite sure I understand the problem. How about the question you > should ask is "Is the current amount of VCPU and PCPU double the > actual available CPU resources?" If the answer is yes, then do a reshape. Alex's suggestion makes sense to me, but it's a bit of a hack, and the math might break down if you e.g. stop compute, twiddle your cpu_*_sets, and restart. efried . From openstack at fried.cc Wed Aug 21 22:44:04 2019 From: openstack at fried.cc (Eric Fried) Date: Wed, 21 Aug 2019 17:44:04 -0500 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <20190820144401.4dculyxz3s2jqgyk@mthode.org> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <20190820144401.4dculyxz3s2jqgyk@mthode.org> Message-ID: >>> NOVA: >>> lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 Sean Mooney agreed to take a look at this one. >>> websockify===0.9.0 tempest test failing This is now known as bug 1840788 [1]. I did some initial investigation and tried to fix it. Turned out my fix was already part of another change. But it also turns out that that fix is only part of the solution. See the bug report for (links to) details. At this point, it needs someone who understands what test_novnc [2] is actually trying to do. Because I don't. Anyone? Thanks, efried [1] https://bugs.launchpad.net/nova/+bug/1840788 [2] https://opendev.org/openstack/tempest/src/branch/master/tempest/api/compute/servers/test_novnc.py#L180 From smooney at redhat.com Wed Aug 21 23:00:37 2019 From: smooney at redhat.com (Sean Mooney) Date: Thu, 22 Aug 2019 00:00:37 +0100 Subject: [nova] The pros/cons for libvirt persistent assignment and DB persistent assignment. In-Reply-To: <9ace50aa-c5a9-8d35-9be9-05d7d42cc846@fried.cc> References: <9ace50aa-c5a9-8d35-9be9-05d7d42cc846@fried.cc> Message-ID: On Wed, 2019-08-21 at 14:51 -0500, Eric Fried wrote: > Alex- > > Thanks for writing this up. > > > #1 Without Nova DB persistence for the assignment info, depending on > > the hypervisor to persist it. 
> > I liked the "no persistence" option in theory, but it unfortunately > turned out to be too brittle when it came to the corner cases. > > > #2 With Nova DB persistence, but using a virt driver specific blob to store > > virt driver specific info. > > > > The idea is to persist the assignment for the instance into the DB. The > > resource tracker gets available resources from the virt driver. The resource > > tracker will calculate on the fly based on available resources and > > assigned resources from the instance DB. The new field 'instance.resources' > > is designed for supporting virt driver specific metadata, thus hiding > > the virt driver and platform detail from the RT. > > https://etherpad.openstack.org/p/vpmems-non-virt-driver-specific-new > > I just took a closer look at this, and I really like it. > > Persisting local resource information with the Instance and > MigrationContext objects ensures we don't lose it in weird corner cases, > regardless of a specific hypervisor's "persistence model" (e.g. domain > XML for libvirt). > > MigrationContext is already being used for this old_* new_* concept - > but the existing fields are hypervisor-specific (numa and pci). > > Storing this information in a generic, opaque-outside-of-virt way means > we're not constantly bolting hypervisor-specific fields onto what > *should* be non-hypervisor-specific objects. > > As you've stated in the etherpad, this framework sets us up nicely to > start transitioning existing PCI/NUMA-isms over to a Placement-driven > model in the near future. > > Having the virt driver report provider tree (placement-specific) and > "real" (hypervisor-specific) resource information at the same time makes > all kinds of sense. > > So, quite aside from solving the stated race condition and enabling > vpmem, all of this is excellent movement toward the "generic device > (resource) management" we've been talking about for years. > > Let's make it so. I agree with most of what Eric said above and the content of the etherpad. I left a couple of comments inline, but I also don't want to pollute it too much with comments, so I will summarise some additional thoughts here.

tl;dr I think this would allow us to converge tracking and assignment of vpmem, vGPUs, PCI devices, and pCPUs. Each of these resources requires nova to do assignment of specific host devices, and that can be done generically in some cases and delegated to the driver in others via this proposal. The simple case of vpmem usage is relatively self-contained, but the use of it for other resources will require thought and work to enable.

More detailed thoughts:
1.) Short term, I believe host-side tracking is not needed for vPMEM or vGPU.
2.) Medium term, having host-side tracking of resources might simplify vPMEM, vGPU, pCPU and PCI tracking.
3.) Long term, I think if we use placement correctly and have instance-level tracking, we might not need host-side tracking at all.
3.a) Instance-level tracking will allow us to reliably compute the host-side view in the virt driver from config and device discovery.
3.b) With nested resource providers and the new ability to do nested queries, we can move filtering mostly to placement.
4.) We use a host/instance NUMA topology blob for mempages (hugepages) today; if we model them in placement I don't think we will need host-side tracking for filtering (see note on weighing later).
4.a) If we have pCPUs and mempages as children of cache regions or NUMA nodes, we can do NUMA/cache affinity of those resources and PCI devices using same_subtree or whatever it ended up being called in placement.
4.b) Hugepages are currently not assigned by nova; we just do a tally count of how many are free on individual NUMA nodes of a given size and select a NUMA node, which I think can entirely be done via placement as of about 5-6 weeks ago. The assignment is done by the kernel, which is why we don't need to track individual hugepages at the host level.
5.) If we don't have host-side tracking we cannot do classic weighing of local resources, as we do not have the data.
6.) If we pass allocation candidates to the filters instead of hosts, we can replace our existing filters with placement-aware filters that can use the placement tree structure and traits to weight the possible allocation candidates, which will in turn weigh the hosts.
7.) pCPUs, unlike hugepages, are assigned by nova and would need to be tracked in memory at the host level. This host view could be computed by the virt driver if we track the assignment in the instance and migrations, but host-side tracking would be simpler to port the existing code to. pCPUs would need to have the assignment done within the driver from the free resources returned by the resource tracker.
8.) This might move some of the logic from nova/virt/hardware.py to the libvirt driver, where it probably should always have been.
8.a) The validation of flavor extra specs in nova/virt/hardware.py that is used in the API would not be moved to the driver.

regards sean. > > efried > . > From melwittt at gmail.com Wed Aug 21 23:24:40 2019 From: melwittt at gmail.com (melanie witt) Date: Wed, 21 Aug 2019 16:24:40 -0700 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <20190820144401.4dculyxz3s2jqgyk@mthode.org> Message-ID: <09113cee-5777-781a-9fed-285f279da63d@gmail.com> On 8/21/19 3:44 PM, Eric Fried wrote: >>>> NOVA: >>>> lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > Sean Mooney agreed to take a look at this one. > >>>> websockify===0.9.0 tempest test failing > > This is now known as bug 1840788 [1]. > > I did some initial investigation and tried to fix it. Turned out my fix > was already part of another change. But it also turns out that that fix > is only part of the solution. See the bug report for (links to) details. > > At this point, it needs someone who understands what test_novnc [2] is > actually trying to do. Because I don't. > > Anyone? 
From the test run on your patch [3], it looks like we're going to need a change in nova as well in nova/console/websocketproxy.py [4]: AttributeError: module 'websockify' has no attribute 'WebSocketServer' It looks like the WebSocketServer class got moved from websockify.websocket to websockify.websocketserver in v0.9.0: https://github.com/novnc/websockify/commit/8a697622495fd319582cd1c604e7eb2cc0ac0ef6#diff-308aaa63704b4177c97728bfa9cb0183 and thus is no longer accessible via the top level 'websockify' module as a result: https://github.com/novnc/websockify/blob/v0.9.0/websockify/__init__.py Since this is a change to upper-constraints to allow v0.9.0, we will need a way for it to work with both module layouts, yeah? Cheers, -melanie [3] https://review.opendev.org/677798 [4] https://zuul.opendev.org/t/openstack/build/e0a8a19021b64350a0a55cc08f374d02/log/controller/logs/screen-n-novnc-cell1.txt.gz#20-51 > Thanks, > efried > > [1] https://bugs.launchpad.net/nova/+bug/1840788 > [2] > https://opendev.org/openstack/tempest/src/branch/master/tempest/api/compute/servers/test_novnc.py#L180 > From cboylan at sapwetik.org Wed Aug 21 23:30:39 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 21 Aug 2019 16:30:39 -0700 Subject: Re: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <09113cee-5777-781a-9fed-285f279da63d@gmail.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <20190820144401.4dculyxz3s2jqgyk@mthode.org> <09113cee-5777-781a-9fed-285f279da63d@gmail.com> Message-ID: <6616fff3-b216-4997-b8e0-1ffaba0da3a9@www.fastmail.com> On Wed, Aug 21, 2019, at 4:25 PM, melanie witt wrote: Snip > > Since this is a change to upper-constraints to allow v0.9.0, we will > need a way for it to work with both module layouts, yeah? Yup, the constrained version is likely one of many valid versions people can use. The constrained version is just what we test with to prevent issues like these from breaking the gate. Unless you also change the lower bounds of the dep, people may still continue to use the old version too. Good idea to support the older code in that case. > > Cheers, > -melanie From gmann at ghanshyammann.com Wed Aug 21 23:47:21 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 22 Aug 2019 08:47:21 +0900 Subject: [qa] QA Office Hour New Time: 0 UTC Message-ID: <16cb693e451.b0adbabd228571.5687062737637934496@ghanshyammann.com> Hello Everyone, Please note the QA office hour's new time, which is 0 UTC (which will also include the Asia TZ members), on every Thursday. I have updated the wiki page [1] for that as well. I have cleaned up the PING LIST, which was old; feel free to add yourself to the PING list on the wiki if you want the IRC notification. 
[1] https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours -gmann From melwittt at gmail.com Thu Aug 22 00:18:07 2019 From: melwittt at gmail.com (melanie witt) Date: Wed, 21 Aug 2019 17:18:07 -0700 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <6616fff3-b216-4997-b8e0-1ffaba0da3a9@www.fastmail.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <20190820144401.4dculyxz3s2jqgyk@mthode.org> <09113cee-5777-781a-9fed-285f279da63d@gmail.com> <6616fff3-b216-4997-b8e0-1ffaba0da3a9@www.fastmail.com> Message-ID: <8b515ac5-e15c-43b6-ea8f-058891b0336d@gmail.com> On 8/21/19 4:30 PM, Clark Boylan wrote: > On Wed, Aug 21, 2019, at 4:25 PM, melanie witt wrote: > > Snip > >> >> Since this is a change to upper-constraints to allow v0.9.0, we will >> need a way for it to work with both module layouts, yeah? > > Yup, the constrained version is likely one of many valid versions people can use. The constrained version is just what we test with to prevent issues like these from breaking the gate. Unless you also change the lower bounds of the dep people may still continue to use the old version too. Good idea to support the older code in that case. Thanks Clark! I've uploaded a nova change at: https://review.opendev.org/677856 and added it as a Depends-On to: https://review.opendev.org/677798 to see if it solves the problem. Cheers, -melanie From mthode at mthode.org Thu Aug 22 00:18:36 2019 From: mthode at mthode.org (Matthew Thode) Date: Wed, 21 Aug 2019 19:18:36 -0500 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <6616fff3-b216-4997-b8e0-1ffaba0da3a9@www.fastmail.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <20190820144401.4dculyxz3s2jqgyk@mthode.org> <09113cee-5777-781a-9fed-285f279da63d@gmail.com> <6616fff3-b216-4997-b8e0-1ffaba0da3a9@www.fastmail.com> Message-ID: <20190822001836.qeidevqjxgjgqdb5@mthode.org> On 19-08-21 16:30:39, Clark Boylan wrote: > On Wed, Aug 21, 2019, at 4:25 PM, melanie witt wrote: > > Snip > > > > > Since this is a change to upper-constraints to allow v0.9.0, we will > > need a way for it to work with both module layouts, yeah? > > Yup, the constrained version is likely one of many valid versions people can use. The constrained version is just what we test with to prevent issues like these from breaking the gate. Unless you also change the lower bounds of the dep people may still continue to use the old version too. Good idea to support the older code in that case. Yep, if you do want to set a minimum you still can, but that can also affect co-installability. You'll need to have at least one sha that supports both though, otherwise there'd have to be a force push at some point... So might as well support older stuff too (unless it's bad code wise). -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mdulko at redhat.com Thu Aug 22 07:53:30 2019 From: mdulko at redhat.com (Michał Dulko) Date: Thu, 22 Aug 2019 09:53:30 +0200 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <20190818161611.6ira6oezdat4alke@mthode.org> References: <20190818161611.6ira6oezdat4alke@mthode.org> Message-ID: On Sun, 2019-08-18 at 11:16 -0500, Matthew Thode wrote: > NOVA: > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > websockify===0.9.0 tempest test failing > > KEYSTONE: > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > NEUTRON: > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > this could be caused by pytest===5.1.0 as well > > KURYR: > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > https://review.opendev.org/665352 Alright, I tested and fixed the commit above and I should be able to get it merged today. Thanks for following up on this. > MISC: > tornado===5.1.1 salt is causing this, no eta on fix (same as last year) > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > jsonschema===3.0.2 see https://review.opendev.org/649789 > > I'm trying to get this in place as we are getting closer to the > requirements freeze (sept 9th-13th). Any help clearing up these bugs > would be appreciated. > From a.settle at outlook.com Thu Aug 22 09:56:40 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 22 Aug 2019 09:56:40 +0000 Subject: [tc] Release naming process In-Reply-To: <87sgpu777s.fsf@meyer.lemoncheese.net> References: <87ftlu8pfx.fsf@meyer.lemoncheese.net> <4d526554-6700-60d8-4477-19aa31e9e3ae@ham.ie> <87sgpu777s.fsf@meyer.lemoncheese.net> Message-ID: Thanks for kick-starting the conversation, Jim :D On Wed, 2019-08-21 at 10:34 -0700, James E. Blair wrote: > Thanks for the excellent feedback! > > Graham Hayes writes: > > > On 21/08/2019 17:15, James E. Blair wrote: > > > https://review.opendev.org/675788 - Stop naming releases > > > > As I said in the review I am -1 on this, due to the amount of > > tooling > > that assumes names, and alphabetical names. also, we need something > > to > > combine Nova v20 and Designate v9 into a combined release (in this > > case > > Train), and moving to a new number would make this interesting > > What about using dates? We used to use dates as version numbers, > which > we found was problematic. But what if we used dates for the > coordinated > release but not as software version numbers? I think this is just me being a wordy person, but I'd like to avoid numbers. It becomes very generic, not identifiable, and it gets lost amongst everything. There's something pleasant about being able to talk about a release with a name (Juno, Kilo, etc) that actually means something. > > > > https://review.opendev.org/677745 - Name releases after major > > > cities > > > > +1 from me on this, I like the idea, and it is less controversial > > Yeah, I really like this one too. +2. I've included my thoughts in the patch. > > > > https://review.opendev.org/677746 - Name releases after the ICAO > > > alphabet > > > > A solid "backstop"[1] option, but not something we should actively > > promote in my opinion. 
> > We know what the TC doesn't want, now we need to find out what it > does > want. ;) > > > This also only works for V->Z as I don't want to ever have an OpenStack > > Alpha > > / Beta release when Nova will be on v27 and over a decade old. > > Yes, I probably should clarify that many of these options get us to > Z, > and that we should separately start thinking about what happens after > Z. Alexandra plans to start that discussion soon. Hoping to kick-start that conversation off the back of this one. Let me know if anyone disagrees, but I think tackling one issue at a time is best. At the very least, we clarify what's going on here - and then we can adapt and change. > > > https://review.opendev.org/677747 - Ask the Foundation to name > > > releases > > > > I am not sure how I feel about this - I think this is a community > > deliverable, and as such *we* should name it. Ditto. I included my thoughts on the review. But I think this is out of the Foundation's scope and I personally would not like to add it to their scope. This is a community driven project, and I believe the community should name it :D (regardless of our issues...) > > > > > https://review.opendev.org/677748 - Name releases after random > > > words > > > > -1 - I feel that we will get into the same issues we have for the > > current process of bikeshedding over words[2], without adding any > > benefits. > > To be clear about this one -- the proposal is for the TC to decide > among > a slate of 10 candidates produced by random number generator. In > practice, this probably will mean picking from among 2 or 3 > boring-but-okay names and ignoring 7 bad-or-stupid names. I don't > expect the discussions to be contentious. Maybe produce the > occasional > chuckle. > Maybe. I can see this working, tbh. But it feels a little less tied to the project and more at random. The city option (as you've expressed desire for too) just feels like it creates more meaning. > I agree though, cities are better. > > > > https://review.opendev.org/677749 - Clarify the existing release > > > naming process > > > > Seems OK to update the docs with this if we stick with the status > > quo. > > Not sure it would have avoided the current issues if it was in > > place, > > but lets see. > > I'm personally not in favor of it and also don't think it would have. > > > > The last one is worth particular mention -- it keeps the current > > > process > > > with some minor clarifications. > > > > > > Zane put up 2 more: > > > > https://review.opendev.org/#/c/677772/ - > > Clarify proscription of generic release names > > https://review.opendev.org/#/c/677771/ - Align release naming > > process > > with practice > > > > These document the current state of play, and are a +1 from me if > > we stick with the current process. > > I agree they bring them closer to describing practice (but still not > 100% in my view). But I just want to take a step back and remind > folks > that we are having this conversation because the current process is > unpleasant. It is resulting in contention and disagreement about > things > which have no bearing on our producing good software. I think we > should > take this opportunity to choose a new process which produces boring > non-controversial names and ideally start planning to phase them out > altogether soon. My vote is to finish Z before we make any changes. Finish what you started, even if it's hard. And from there we can say we've learnt from this... experience... and work on something that's better. 
Jumping off the alphabet bandwagon at U will look odd to the user base. > > > Overall, these are all short(ish) term solutions - we need to do > > something more drastic in ~ 3 years for the Z->A roll over (if we > > keep > > doing named releases). > > Agreed. > > -Jim > -- Alexandra Settle IRC: asettle From pierre at stackhpc.com Thu Aug 22 13:14:49 2019 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 22 Aug 2019 15:14:49 +0200 Subject: [storyboard] email notification on stories/tasks of subscribed projects In-Reply-To: <20190816120409.t5cyb345cygytrgj@yuggoth.org> References: <1db76780066130ccb661d2b1f632f163@sotk.co.uk> <20190816120409.t5cyb345cygytrgj@yuggoth.org> Message-ID: Thanks for checking. I would have sworn that I tried enabling them before that date, but I could be wrong. Anyway, now it works :-) On Fri, 16 Aug 2019 at 14:11, Jeremy Stanley wrote: > > On 2019-08-14 14:48:30 +0200 (+0200), Pierre Riteau wrote: > > I am reviving this thread as I have never received any email > > notifications from starred projects in Storyboard, despite > > enabling them multiple times. > [...] > > It looks from the MTA logs like it began to send you notifications > on 2019-08-14 at 14:56:23 UTC. I don't see any indication of any > messages getting rejected. > -- > Jeremy Stanley From mthode at mthode.org Thu Aug 22 13:17:34 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 22 Aug 2019 08:17:34 -0500 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: References: <20190818161611.6ira6oezdat4alke@mthode.org> Message-ID: <20190822131734.4f2t735qrkheccjo@mthode.org> On 19-08-22 09:53:30, Michał Dulko wrote: > On Sun, 2019-08-18 at 11:16 -0500, Matthew Thode wrote: > > NOVA: > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > websockify===0.9.0 tempest test failing > > > > KEYSTONE: > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > NEUTRON: > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > this could be caused by pytest===5.1.0 as well > > > > KURYR: > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > https://review.opendev.org/665352 > > Alright, I tested and fixed the commit above and I should be able to > get it merged today. Thanks for following up on this. > Yep, saw it merge. Now all it needs is a release so the version on PyPI / upper-constraints does not hold back the version of kubernetes. > > MISC: > > tornado===5.1.1 salt is causing this, no eta on fix (same as last year) > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > I'm trying to get this in place as we are getting closer to the > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > would be appreciated. > > -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Aug 22 13:42:28 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 22 Aug 2019 22:42:28 +0900 Subject: [nova]review guide for the policy default refresh spec In-Reply-To: <16c956e577d.e430310375433.1480190884000484078@ghanshyammann.com> References: <16c956e577d.e430310375433.1480190884000484078@ghanshyammann.com> Message-ID: <16cb9907985.fd914df7250506.3959464795148203935@ghanshyammann.com> ---- On Thu, 15 Aug 2019 22:18:52 +0900 Ghanshyam Mann wrote ---- > Hello Everyone, > > As many of you might know, in Train we are doing Nova policy changes to adopt > Keystone's new defaults and scope types [1]. There are multiple changes required per policy as > mentioned in the spec. I am writing this review guide for the patch sequence and, at the end, how > each policy will look. > > I have prepared the first set of patches. I would like to get feedback on those so that we > can modify the other policies along the same lines. My plan is to start the other policy work after > we merge the first set of policy changes. > > Patch sequence: Example: os-services API policy: > ------------------------------------------------------------- > 1. Cover/Improve the test coverage for existing policies: > This will be the first patch. We do not have good test coverage of the policy; current tests are > not at all useful and do not perform the real checks. The idea is to add actual test coverage for > each policy as the first patch. New tests try to access the API with all possible contexts and check > for positive and negative cases. > - https://review.opendev.org/#/c/669181/ > > 2. Introduce scope_types: > This will add the scope_type for the policy. It will be either 'system', 'project' or 'system and project'. > In the same patch, along with the existing tests working as they are, new tests for scope type will be added which will > run with [oslo_policy] enforce_scope=True so that we can capture the real scope checks. > - https://review.opendev.org/#/c/645427/ > > 3. Add new default roles: > This will add new defaults which can be SYSTEM_ADMIN, SYSTEM_READER, PROJECT_MEMBER_OR_SYSTEM_ADMIN, PROJECT_READER_OR_SYSTEM_READER etc., depending on the policy. > Test coverage of new defaults, as well as deprecated defaults, is covered in the same patch. This patch will > add the granularity in the policy if needed. Without policy granularity, we cannot add new defaults per rule. > - https://review.opendev.org/#/c/648480/ (I need to add more tests for deprecated rules) I have updated the above patch with the required tests to verify that the deprecated rules work fine. This is good to go now. -gmann > > 4. Pass actual Targets in policy: > This is to pass the actual targets in context.can(). The main goal is to remove the default targets, which are > nothing but the context's user_id and project_id. It will be {} if no actual target data is needed in check_str. > - https://review.opendev.org/#/c/676688/ > > Patch sequence: Example: Admin Action API policy: > 1. https://review.opendev.org/#/c/657698/ > 2. https://review.opendev.org/#/c/657823/ > 3. https://review.opendev.org/#/c/676682/ > 4. https://review.opendev.org/#/c/663095/ > > There are other patches I have posted in between for common changes or fixes or framework etc. 
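To make the patch sequence concrete, here is a minimal sketch (mine, not the exact content of the patches above -- the rule name, check strings and version string are illustrative only) of what a single rule looks like after steps 2 and 3, using oslo.policy's deprecation machinery:

    from oslo_policy import policy

    deprecated_service_policy = policy.DeprecatedRule(
        name='os_compute_api:os-services',
        check_str='rule:admin_api',
    )

    service_policies = [
        policy.DocumentedRuleDefault(
            name='os_compute_api:os-services:list',
            check_str='role:reader and system_scope:all',
            description='List all running Compute services in a region.',
            operations=[{'method': 'GET', 'path': '/os-services'}],
            # Step 2: declare which token scope this API expects.
            scope_types=['system'],
            # Step 3: keep the old default working, with a warning, while
            # deployments migrate to the new granular default.
            deprecated_rule=deprecated_service_policy,
            deprecated_reason='New granular, scope-aware default',
            deprecated_since='20.0.0',
        ),
    ]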
> > [1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/policy-default-refresh.html > > -gmann > From gmann at ghanshyammann.com Thu Aug 22 13:49:59 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 22 Aug 2019 22:49:59 +0900 Subject: [nova] API updates week 19-34 Message-ID: <16cb99759be.eb1f98ba250861.8129887550449689407@ghanshyammann.com> Hello Everyone, Please find the Nova API updates of this week. API Related BP : ============ COMPLETED: 1. Support adding description while locking an instance: - https://blueprints.launchpad.net/nova/+spec/add-locked-reason 2. Add host and hypervisor_hostname flag to create server - https://blueprints.launchpad.net/nova/+spec/add-host-and-hypervisor-hostname-flag-to-create-server 3. Nova API cleanup - https://blueprints.launchpad.net/nova/+spec/api-consistency-cleanup 4. Add 'power-update' external event: - https://blueprints.launchpad.net/nova/+spec/nova-support-instance-power-update Code Ready for Review: ------------------------------ 1. Specifying az when restore shelved server - Topic: https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged) - Weekly Progress: Under review and in a runway also. 2. Show Server numa-topology - Topic: https://review.opendev.org/#/q/topic:bp/show-server-numa-topology+(status:open+OR+status:merged) - Weekly Progress: Alex is +2 on the nova change, but this might conflict with the above one on the microversion number (depends on which one merges first). 3. Nova API policy improvement - Topic: https://review.openstack.org/#/q/topic:bp/policy-default-refresh+(status:open+OR+status:merged) - Weekly Progress: First set of os-services policy changes has Kenichi +2. Need more feedback and +A to start modifying the other policies. Review guide on the ML - http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008504.html 4. Add User-id field in migrations table - Topic: https://review.opendev.org/#/q/topic:bp/add-user-id-field-to-the-migrations-table+(status:open+OR+status:merged) - Weekly Progress: Changes are up for review but with microversion 2.77. We can rebase the microversion number later; not blocking for review. 5. Support delete_on_termination in volume attach api -Spec: https://review.opendev.org/#/q/topic:bp/support-delete-on-termination-in-server-attach-volume+(status:open+OR+status:merged) - Weekly Progress: Ready for review, and rebasing on an available microversion number can be done later. Specs are merged and code in-progress: ------------------------------ ------------------ 1. Detach and attach boot volumes: - Topic: https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged) - Weekly Progress: No progress. Patches are in merge conflict. Spec Ready for Review: ----------------------------- 1. Support for changing deleted_on_termination after boot -Spec: https://review.openstack.org/#/c/580336/ - Weekly Progress: This has been added to the backlog. Previously approved Spec needs to be re-proposed for Train: --------------------------------------------------------------------------- 1. Servers Ips non-unique network names : - https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names - https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged) - I remember I planned to re-propose this but could not find the time. If anyone would like to help on this, please re-propose; otherwise I will start this in the U cycle. 2. 
Volume multiattach enhancements: - https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements - https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged) - This also needs a volunteer - http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007411.html Others: 1. Add API ref guideline for body text - 2 api-ref are left to fix. Bugs: ==== No progress to report this week. NOTE- There might be some bugs which are not tagged as 'api' or 'api-ref'; those are not in the above list. Tag such bugs so that we can keep our eyes on them. -gmann From jim at jimrollenhagen.com Thu Aug 22 15:05:19 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 22 Aug 2019 11:05:19 -0400 Subject: [TC] [UC] Shanghai Forum Selection Committee: Help Needed In-Reply-To: <5D5D9713.6020100@openstack.org> References: <5D5D9713.6020100@openstack.org> Message-ID: On Wed, Aug 21, 2019 at 3:14 PM Jimmy McArthur wrote: > Hi everyone! > > The Forum in Shanghai is coming up. We require 2 volunteers from the TC > and 2 from the UC for the Forum Selection Committee. For more information, > please see: https://wiki.openstack.org/wiki/Forum I'm in from the TC side. // jim > > > Please reach out to myself or knelson at openstack.org if you're interested. > Volunteers should respond on or before September 2, 2019. > > Note: volunteers are required to be currently serving on either the UC or > the TC. > > Cheers, > Jimmy > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.settle at outlook.com Thu Aug 22 15:08:10 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 22 Aug 2019 15:08:10 +0000 Subject: [TC] [UC] Shanghai Forum Selection Committee: Help Needed In-Reply-To: <5D5D9713.6020100@openstack.org> References: <5D5D9713.6020100@openstack.org> Message-ID: On Wed, 2019-08-21 at 14:10 -0500, Jimmy McArthur wrote: > Hi everyone! > > The Forum in Shanghai is coming up. We require 2 volunteers from the > TC and 2 from the UC for the Forum Selection Committee. For more > information, please see: https://wiki.openstack.org/wiki/Forum > > Please reach out to myself or knelson at openstack.org if you're > interested. Volunteers should respond on or before September 2, 2019. Sure! > > Note: volunteers are required to be currently serving on either the > UC or the TC. > > Cheers, > Jimmy > > -- Alexandra Settle IRC: asettle From jeremyfreudberg at gmail.com Thu Aug 22 15:49:50 2019 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Thu, 22 Aug 2019 11:49:50 -0400 Subject: [tc] Naming releases after people Message-ID: Hi, There is currently ongoing discussion about a new process for selecting release names. I would like to draw attention to one proposal in particular: "Name releases after ATCs" [0]. This is currently the only proposal which explicitly prefers names 'telling the story of OpenStack' like the current process based on the location of the summit does. For this reason I personally favor this proposal above all others currently proposed. There are some areas of the proposal to smooth out (and I mention these areas on the review), but before I proceed, I would like to gather feedback from the community about the mere idea of naming releases after people. If such an idea strikes you one way or the other, please leave a comment on the review; I'd appreciate it greatly. 
Thanks, Jeremy [0] https://review.opendev.org/#/c/677827/ From elmiko at redhat.com Thu Aug 22 16:38:20 2019 From: elmiko at redhat.com (Michael McCune) Date: Thu, 22 Aug 2019 12:38:20 -0400 Subject: [all][api-sig] Update on SIG status and seeking guidance from the community Message-ID: hello all, I am writing on behalf of the API-SIG to update the community on our status. We have recently learned that Ed Leafe will be leaving the membership of the SIG as his new working circumstances are pulling him away from this work. This will leave us with only 2 active members, Dmitry Tantsur and myself. Firstly, I want to extend a heartfelt thanks to Ed. thank you =) I believe that Ed is the longest standing member of the SIG, having been active from the days when it was just a smol working group. His contributions have been numerous over the years and his guidance and wisdom will be missed. Second, this change in our membership has given rise to some questions about the future of the SIG and how we might best serve the OpenStack community. We currently hold an office hour each week and see relatively low traffic during this hour. Our activities have mainly been focused on writing and maintaining the guidelines and the support structure around those guidelines. As most of the new writing has ended (there are a few still in flight), we have been paying most of our attention to maintenance and supporting the community. As our ability to maintain an active membership has diminished, and many of our initial goals met, we have started to question whether we should still maintain the "SIG" status and also how we might best serve the community going forward. We will still need core members to review and approve changes, but perhaps we should adjust our workflow to be of greater service. I don't have any answers to this more existential question about the SIG, but we as a group felt it was a good idea to reach out to the wider community for advice and guidance. So, what do you all have to say? What would you like to see from the API-SIG moving forward? Should we re-evaluate the purpose of this group and how it relates to the OpenStack community? thank you all for the wonderful spirit of openness that we build with, and thanks for listening. and lastly, thanks again to Ed. You have acted as an example of the best aspects of open collaboration. peace o/ From openstack at fried.cc Thu Aug 22 17:45:59 2019 From: openstack at fried.cc (Eric Fried) Date: Thu, 22 Aug 2019 12:45:59 -0500 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <8b515ac5-e15c-43b6-ea8f-058891b0336d@gmail.com> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <20190820144401.4dculyxz3s2jqgyk@mthode.org> <09113cee-5777-781a-9fed-285f279da63d@gmail.com> <6616fff3-b216-4997-b8e0-1ffaba0da3a9@www.fastmail.com> <8b515ac5-e15c-43b6-ea8f-058891b0336d@gmail.com> Message-ID: <307a411d-b694-cd2e-bb36-8f017930f560@fried.cc> {NOVA: websockify 0.9.0 breaks test_novnc [1]} > I've uploaded a nova change at: https://review.opendev.org/677856 > > and added it as a Depends-On to: https://review.opendev.org/677798 > > to see if it solves the problem. It does; it's now merging. Once it and [2] (tempest) have merged, we should be able to proceed with the u-c bump. Thanks Mel! @prometheanfire, you could now swizzle [3] to un-DNM and Depends-On those two. (Is that how this is done, or do we need to let a bot-generated patch like [4] do it?) 
efried [1] https://bugs.launchpad.net/nova/+bug/1840788 [2] https://review.opendev.org/674364 [3] https://review.opendev.org/677479 [4] https://review.opendev.org/677619 From mthode at mthode.org Thu Aug 22 18:41:26 2019 From: mthode at mthode.org (Matthew Thode) Date: Thu, 22 Aug 2019 13:41:26 -0500 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <307a411d-b694-cd2e-bb36-8f017930f560@fried.cc> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <20190820144401.4dculyxz3s2jqgyk@mthode.org> <09113cee-5777-781a-9fed-285f279da63d@gmail.com> <6616fff3-b216-4997-b8e0-1ffaba0da3a9@www.fastmail.com> <8b515ac5-e15c-43b6-ea8f-058891b0336d@gmail.com> <307a411d-b694-cd2e-bb36-8f017930f560@fried.cc> Message-ID: <20190822184126.oyuxyxcaarvouqzy@mthode.org> On 19-08-22 12:45:59, Eric Fried wrote: > {NOVA: websockify 0.9.0 breaks test_novnc [1]} > > > I've uploaded a nova change at: https://review.opendev.org/677856 > > > > and added it as a Depends-On to: https://review.opendev.org/677798 > > > > to see if it solves the problem. > > It does; it's now merging. Once it and [2] (tempest) have merged, we > should be able to proceed with the u-c bump. > > Thanks Mel! > > @prometheanfire, you could now swizzle [3] to un-DNM and Depends-On > those two. (Is that how this is done, or do we need to let a > bot-generated patch like [4] do it?) > > efried > > [1] https://bugs.launchpad.net/nova/+bug/1840788 > [2] https://review.opendev.org/674364 > [3] https://review.opendev.org/677479 > [4] https://review.opendev.org/677619 > I've un DNM'd it, thanks for the work -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From miguel at mlavalle.com Thu Aug 22 19:54:04 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 22 Aug 2019 14:54:04 -0500 Subject: [openstack-dev] [neutron] Cancelling Neutron Drivers meeting on August 23rd Message-ID: Dear Neutrinos, 3 out of 5 members of the drivers team are off on vacation this week, so we won't have quorum for the meeting. As a consequence, I am cancelling it this week. We will resume next week. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Aug 22 20:16:38 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 22 Aug 2019 15:16:38 -0500 Subject: [nova][ops] What should the compute service delete behavior be wrt resource providers with allocations? In-Reply-To: References: <48dfdbec-d662-a184-3c63-ec8f284f1702@gmail.com> Message-ID: <70f06e96-0544-a9ff-96e2-4228728c5fb2@gmail.com> On 6/13/2019 1:45 PM, Matt Riedemann wrote: > 2. Implement option #1 above where we fail to delete the compute service > if any of the resource providers cannot be deleted. We'd have stuff in > the logs about completing migrations and trying again, and failing that > cleanup allocations for old evacuations. Rather than dump all of that > info into the logs, it would probably be better to just write up a > troubleshooting doc [2] for it and link to that from the logs, then the > doc can reference APIs and CLIs to use for the cleanup scenarios. It's been a couple of months but I finally got around to starting this [1]. There are several TODOs in there but I've updated the functional test to show we're no longer orphaning the resource provider. 
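Roughly, the intended behavior with [1] looks like this (an illustrative sketch; the exact error response wording is still being hashed out in the review):

    DELETE /os-services/{service_id}
    -> 409 Conflict if the compute node resource provider(s) still have
       allocations; complete or revert the pending migrations (or clean
       up leftover evacuation allocations), then retry the delete.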
There are also questions about what to do if we hit this in the compute manager during an ironic node re-balance (different issue but it touches the same delete_resource_provider code). I haven't started on a troubleshooting doc yet since I'm waiting on the novaclient change [2] to land which will be part of that (a CLI to find certain types of migration records on the source compute). [1] https://review.opendev.org/#/c/678100/ [2] https://review.opendev.org/#/c/675117/ -- Thanks, Matt From mriedemos at gmail.com Thu Aug 22 21:50:05 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 22 Aug 2019 16:50:05 -0500 Subject: [nova] The pros/cons for libvirt persistent assignment and DB persistent assignment. In-Reply-To: References: Message-ID: <86cd885d-8cfa-23db-421e-2f3b2989009e@gmail.com> On 8/21/2019 1:59 AM, Alex Xu wrote: > We get a lot of discussion on how to do the claim for the vpmem. There > are a few points we are trying to match: > > * Avoid race problem. (the current VGPU assignment has been found having > race issue https://launchpad.net/bugs/1836204) > * Avoid the device assignment management to be virt driver and > platform-specific. > * Keep it simple. > > Currently, we go through two solutions here. This email is going to > summary the pros/cons of these two solutions. > > #1 Without Nova DB persistent for the assignment info, depends on > hypervisor persistent it. > >    The idea is adding > VirtDriver.claim/unclaim_for_instance(instance_uuid, flavor_id) > interface. The assignment info is populated from hypervisor when > nova-compute startup. And keep in the memory of VirtDriver. The Is there any reason the device assignment in-memory mapping has to be in the virt driver and not, for example, the ResourceTracker itself? This becomes important below. > instance_uuid is used to distinguish the claim from the different > instance. The flavor_id is used for the same host resize, to distinguish > the claim for source and target. This virt driver method is being > invoked inside ResourceTracker to avoid the race problem. There is no > any nova DB persistent for the assignment info. > https://review.opendev.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virtual-persistent-memory > > pros: > * Hidden all the device detail and virt driver detail inside the virt > driver. > * Less upgrade issue in the future since it doesn't involve any nova DB > model change > * Expecting as simple implementation since everything inside virt driver. > cons: >    * Two cases are being found, the domain XML being lost for Libvirt > virt driver. And we don't know other hypervisor behavior yet. How do we "lose" the domain xml? I guess your next points are examples? >       * For the same host resize, the source and target instance are > sharing single one domain XML. After the libvirt virt driver updated the > domain XML to the target instance, the source instance's assignment > information will be lost when a nova-compute restart happened. That > means the resized instance can't be revert, the only choice for the user > is to confirm the resize. As discussed with Dan and me in IRC a week or two ago, we suggested you could do the same migration-based allocation switch for move operations as we do for cold migrate, resize and live migration since Queens, where the source node allocations are consumed by the migration record and the target node allocations are consumed by the instance. 
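In placement terms, mid-resize that ends up looking roughly like this (a sketch with made-up UUIDs and values):

    GET /allocations/{migration_uuid}
      {"allocations": {"<source node rp>": {"resources": {"VCPU": 2, "MEMORY_MB": 2048}}}}
    GET /allocations/{instance_uuid}
      {"allocations": {"<dest node rp>": {"resources": {"VCPU": 4, "MEMORY_MB": 4096}}}}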
The conductor swaps the source node allocations before calling the scheduler which will create the target node allocations with the instance. On confirm/revert we either drop the source node allocations (held by the migration) or swap them back (and drop the target node allocations held by the instance). In your device case, clearly conductor and placement aren't involved since we're not tracking those low-level details in placement. Placement just knows there is a certain amount of some resource class but not which consumers are actually assigned which devices on the hypervisor (like pci device management). But as far as keeping track of the assignments in memory, we could still do the same swap where the migration record is tracking the old flavor device assignments (in the virt driver or resource tracker) and the instance record is tracking the new flavor device assignments. That resolves the same-host resize case, correct? Doing it generically in the ResourceTracker is why I asked about doing that above in the RT rather than the driver. What that doesn't solve is restarts of the compute service while there is a pending resize, which is why we need to persist some information somewhere. We could use the domain xml if it contained the flavor id, but it doesn't - and for same-host resize we only have one domain xml so that's not really an option (as you've noted). > * For live migration, the target host's domain XML will be > cleanup by libvirt after a host restart. The assignment information is > lost before nova-compute startup and doing a cleanup. I'm not really following you here. This is not an expected situation, correct? Meaning the target compute service is restarted while there is an in-progress live migration? I imagine if that happens we have lots of problems and most (manual) recovery procedures are going to involve the operator trying to destroy the guest and its related resources from the target host and hard rebooting to recover the guest on the source host. > * Can not support the same host cold migration. Since we need a way > to identify the source and target instance's assignment in memory. But > the same host cold migration means the same instance UUID and same > flavor ID, there isn't another else can be used to distinguish the > assignment. The only in-tree virt driver that supports cold migrating on the same compute service host is the vmware driver, and that does not support things like VGPUs or VPMEMs, so I'm not sure why cold migration on the same host is a concern here - it's not supported and no one is working on adding that support. > * There are workarounds added for above points, the code becomes > fragile. To summarize, it sounds like the biggest problem is the lack of persistence during a same-host resize because we'd lost the in-memory device assignment tracking, even if we did the migration-based allocation swap magic as described above. Could we have a compromise where for all times *except* during some migration, we get the assigned devices from the hypervisor, but otherwise during a migration we store the old/new assignments in the MigrationContext? That would give us the persistence we need and would only be something that we temporarily care about during a migration.
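Roughly what I'm picturing on the migration context, with hypothetical field names since nothing is agreed here:

    MigrationContext:
      old_resources: [{"resource_class": "CUSTOM_PMEM_NAMESPACE_4GB", "identifier": "ns0"}]
      new_resources: [{"resource_class": "CUSTOM_PMEM_NAMESPACE_16GB", "identifier": "ns3"}]

Just enough to rebuild the in-memory assignments for both sides of the resize after a restart.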
The thing I'm not sure about is if we do that, does it make things more complicated in general for the non-migration cases, or if we do it should we just go the extra mile and always be tracking assigned devices in the database exactly like what we do for PCI devices today - meaning we wouldn't have a special edge case just for migrations with these types of resources. > > #2 With nova DB persistent, but using virt driver specific blob to store > virt driver specific info. > >    The idea is persistent the assignment for instance into DB. The > resource tracker gets available resources from virt driver. The resource > tracker will calculate on the fly based on available resources and > assigned resources from instance DB. The new field ·instance.resources· > is designed for supporting virt driver specific metadata, then hidden > the virt driver and platform detail from RT. > https://etherpad.openstack.org/p/vpmems-non-virt-driver-specific-new I left some comments in the etherpad about the proposed claims process but the "on the fly" part concerns me for performance, especially if we don't make that conditional based on the types of resources we're claiming. During a claim the ResourceTracker already has the list of tracked_instances and tracked_migrations it cares about, but it sounds like you're proposing that we would also now have to re-fetch all of that data from the database just to get the resources and migration context information for any instances tracked by that host to determine what their assignments are. That seems really heavy-weight to me and is my major concern with this approach, well, that and the fact it sounds like we're creating a new version of the PCIManager (though more generic, it could have a lot of the same split brain type issues we've had with tracking PCI device inventory and allocations over the years since it was introduced; by split brain I mean the hypervisor saying one thing but nova thinking another). > > pros: >    * Persistent assignment into instance object. Avoid the corner case > we lost the assignment. >    * The ResourceTracker is responsible for doing the claim job. This > is more reliable and no race problem, since ResourceTracker works very > well for a long time. Heh, I guess yeah. :) There are a lot of dragons in that code and we're still fixing bugs in it even though it should be mostly stable after all of these years. But resource tracking in general sucks regardless of where it happens (RT, placement or the virt driver) so we just have to be comfortable with knowing there are going to be dragons. >    * The virt driver specific json-blob hidden the virt driver/platform > detail from the ResourceTracker. Random json blobs are nasty in general especially if we need to convert data at runtime later for some upgrade purpose. What is proposed in the etherpad seems OK(ish) though given the only very random thing is the 'metadata' field, but I could see that all getting confusing to maintain later when we have different schema/semantic rules about what's in the metadata depending on the resource class and virt driver. But we'll likely have that problem anyway if we go with the non-persistent option #1 above. >    * The free resource is calculated on the fly, keeping the > implementation simple. Actually, the RT just provides a point to do the > claim, needn't involve the complex of RT.update_available_resources > cons: >    * Doesn't like PCIManager which has both instance side and host side > persistent info. 
On the fly calculation should take care of the orphaned > instance(the instance is deleted from DB, but still existing on the > host), so actually, it isn't unresolvable issue. And it isn't too hard > to upgrade to have host side persistent info in the future if we want. >    * Data model change for the original proposal. Need review to decide > the data model enough generic > > Currently, Sean, Eric and I prefer the #2 now since the #1 has flaws for > the same host resize and live migration can't be skipped by design. At this point I can't say I have a strong opinion. I think either approach is going to be complicated and buggy and hard to maintain, especially if we don't have CI for these more exotic scenarios (which we don't for VGPU or VPMEM even though you said someone is working on the latter). I've voiced my concerns here but I'm not going to "die on a hill" for this, so in the end I'll likely roll over for whatever those of you that really care about this want to do, and know that you're going to be maintainers of it. -- Thanks, Matt From fungi at yuggoth.org Thu Aug 22 23:08:13 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 22 Aug 2019 23:08:13 +0000 Subject: [all][elections] Combined PTL/TC Election Season Message-ID: <20190822230813.izuaaecwat2rls5v@yuggoth.org> Election details: https://governance.openstack.org/election/ The nomination period officially begins Aug 27, 2019 23:45 UTC. Please read the stipulations and timelines for candidates and electorate contained in this governance documentation. Due to circumstances of timing, PTL and TC elections for the coming cycle will run concurrently; deadlines for their nomination and voting activities are synchronized but will still use separate ballots. Please note, if only one candidate is nominated as PTL for a project team during the PTL nomination period, that candidate will win by acclaim, and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a project team's PTL position. There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only. If you have any questions which you feel affect others please reply to this email thread. If you have any questions that you which to discuss in private please email any of the election officials[1] so that we may address your concerns. Thank you, [1] https://governance.openstack.org/election/#election-officials -- Jeremy Stanley, on behalf of the technical election officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From feilong at catalyst.net.nz Fri Aug 23 03:32:01 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Fri, 23 Aug 2019 15:32:01 +1200 Subject: [openstack-dev][magnum] Nominating brtknr for Magnum core Message-ID: <4fd7baa5-b3e5-80dc-55dd-e2f5f98d1511@catalyst.net.nz> Hi team, I would like to propose adding Bharat Kunwar(brtknr) for the Magnum core team. He has been an awesome contributor since joining the Magnum team. And now he is currently the most active non-core reviewer on Magnum projects for the last 180 days[1].  Bharat has great technical expertise and contributed many high quality patches/reviews. I'm sure he would be an excellent addition to the team. If no one objects, I'll proceed and add him in a week from now. Thanks. 
[1] https://www.stackalytics.com/report/contribution/magnum-group/180 -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Head of R&D Catalyst Cloud - Cloud Native New Zealand Tel: +64-48032246 Email: flwang at catalyst.net.nz Level 6, Catalyst House, 150 Willis Street, Wellington --------------------------------------------------------------------------
From feilong at catalyst.net.nz Fri Aug 23 03:42:47 2019 From: feilong at catalyst.net.nz (Feilong Wang) Date: Fri, 23 Aug 2019 15:42:47 +1200 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster Message-ID: Hi all, At this moment, Magnum is still using Fedora Atomic 27 as the default image in devstack, but you can definitely use Fedora Atomic 29 and it works fine. However, you may run into a performance issue when booting Fedora Atomic 29 if your compute host doesn't have enough entropy. There are two steps you need in that case: 1. Add the property hw_rng_model='virtio' to the Fedora Atomic 29 image. 2. Add the property hw_rng:allowed='True' to the flavor; we also need hw_rng:rate_bytes=4096 and hw_rng:rate_period=1 to get a reasonable rate limit and avoid the VM draining the hypervisor's entropy. We are working on a patch for Magnum devstack to support FA29 out of the box. Meanwhile, we're starting to test Fedora CoreOS 30. Please pop into the #openstack-containers channel if you have any questions. Cheers. -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Head of R&D Catalyst Cloud - Cloud Native New Zealand Tel: +64-48032246 Email: flwang at catalyst.net.nz Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL:
From adriant at catalyst.net.nz Fri Aug 23 07:10:57 2019 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Fri, 23 Aug 2019 19:10:57 +1200 Subject: [all][tc][horizon] A web tool which helps administrators in managing openstack clusters In-Reply-To: References: Message-ID: Hello Douglas! As someone who has struggled against the performance issues in Horizon I can easily feel your pain, and an effort to make something better is good. Sadly, this doesn't sound like a safe direction, even if it is for admin-only purposes. The first major issue is that you connect to the databases of the services directly. That's a major issue, both for long-term compatibility and security. The APIs should always be the main point of contact and the ONLY contract that the services have to maintain. By connecting to the database directly you are now relying on a data structure that can, and likely will, change, and any security and sanity checking on filters and queries is now handled in your layer rather than the application itself. Not only that, but your dashboard also now needs passwords for all the databases, and by the sounds of it, all the message queues. I would highly encourage you to try and work with the community to fix the issues at the API layer rather than bypassing it. We can add better query and filtering to the APIs, and we can work on improving performance if we know the pain points, and this is likely where contributions would be welcome.
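For what it's worth, part of the fuzzy-search use case already exists at the API layer today - the compute API's name and ip filters are regex matches - so something like:

    openstack server list --name 'web.*' --ip '10\.0\..*' --status ACTIVE

gets you a fair chunk of the instance search described, and the places where the existing filters fall short are exactly where API contributions would help.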
I think we do need an alternative to Horizon, and the ideal solution in my mind is to make a new dashboard built on either React or Vue, with a thin proxy app (likely in Flask) that serves the initial javascript page, and proxies any API requests to the services themselves. The filter issues should be made better by implementing more complex filtering in the APIs themselves, and by making the dashboard layer better at exposing those dynamically. React or Vue would do a much better job of dynamically and quickly reloading and querying the services, and it would make the whole experience much nicer. One of the best parts of Horizon was that it was a 'dumb' dashboard built around your token. It can be deployed anywhere, by anyone, and only needs access to the cluster to work, no secrets to any database. I know this is a huge issue, and we do need to solve it, but I hope we can work on something better that doesn't bypass the APIs, because that isn't a safe solution. :( Cheers, Adrian Turjak On 20/08/19 10:14 PM, Douglas Zhang wrote: > > Hello everyone, > > To help users interact with openstack, we’re currently developing a > client-side web tool which enables administrators to manage their > openstack cluster in a more efficient and convenient way. (Since we > have not named it officially yet, I’m going to call it openstack-admin) > > *# Introduction* > > Some may ask, “Why do we need an extra web-based user interface since > we have horizon?” Well, although horizon is a mature and powerful > dashboard, it is far not efficient enough on big clusters (a simple > |list| operation could take seconds to complete). What’s more, its > flexibility of searching could not match our requirements. To overcome > obstacles above, a more efficient tool is urgently required, that’s > why we started to develop openstack-admin. > > *# Highlights* > > Comparing with the current user interface, openstack-admin has > following advantages: > > * > > *Fast*: openstack-admin gets data straightly from SQL databases > instead of calling standard openstack API, which accelerates the > querying period to a large extent (especially when we’re dealing > with a large amount of data). > > * > > *Flexible*: openstack-admin supports the fuzzy search for any > important field (e.g. display_name/uuid/ip_address/project_name of > an instance), which enables users to locate a particular object in > no time. > > * > > *User-friendly*: the backend of openstack-admin gets necessary > messages from the message queue used by nova, and send them to the > frontend using websocket. This way, not only more realistic > progress bars could be implemented, but more detailed information > could be provided to users as well. > > *# Issues* > > To make this tool more efficient and provide better support for > concurrency, we chose Golang to implement openstack-admin. As I’ve > asked before (truly appreciate advises from Jeremy and Ghanshyam), a > project written by an unofficial language may be accepted only if > existing languages have been proven to not meet the technical > requirements, so we’re considering re-implementing openstack-admin > using python if we can’t come to an agreement on the language issue. > > So that’s all. How do you guys think of this project? > > Thanks, > > Douglas Zhang > -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From thierry at openstack.org Fri Aug 23 08:16:30 2019 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 23 Aug 2019 10:16:30 +0200 Subject: [all][api-sig] Update on SIG status and seeking guidance from the community In-Reply-To: References: Message-ID: Michael McCune wrote: > [...] > I don't have any answers to this more existential question about the > SIG, but we as a group felt it was a good idea to reach out to the > wider community for advice and guidance. So, what do you all have to > say? What would you like to see from the API-SIG moving forward? > Should we re-evaluate the purpose of this group and how it relates to > the OpenStack community? > [...] Thanks for starting this thread, Michael. Just because something was needed at one point in our history does not mean we need to continue doing it forever, so reevaluating periodically is important. The API WG was originally formed to (1) provide guidelines for API design for project teams to follow, and (2) improve API user experience by converging the OpenStack APIs to be consistent. It was then converted to a SIG, but the original purpose remained. Would you say that those goals have been completed? Are the documented guidelines sufficiently complete to be usable? Is the resulting API user experience sufficiently consistent? If not, maybe this is an opportunity to recruit, by painting a desirable common goal, maybe leveraging the "community goals" process to achieve incremental, baby-steps improvements within the scope of a release cycle. Personally I think it's always good to have a group of API design experts that project teams can tap into when they have questions on good API design. I just have no idea how often such advice is actually asked for. How often do you get questions on API design from projects adding features to their API? Would you say that when a new project adds an API, it is well-designed, or it would have benefited from your advice? Cheers, -- Thierry Carrez (ttx)
From soulxu at gmail.com Fri Aug 23 08:43:43 2019 From: soulxu at gmail.com (Alex Xu) Date: Fri, 23 Aug 2019 16:43:43 +0800 Subject: [nova] The pros/cons for libvirt persistent assignment and DB persistent assignment. In-Reply-To: <86cd885d-8cfa-23db-421e-2f3b2989009e@gmail.com> References: <86cd885d-8cfa-23db-421e-2f3b2989009e@gmail.com> Message-ID: Matt Riedemann wrote on Fri, Aug 23, 2019 at 5:53 AM: > On 8/21/2019 1:59 AM, Alex Xu wrote: > > We get a lot of discussion on how to do the claim for the vpmem. There > > are a few points we are trying to match: > > > > * Avoid race problem. (the current VGPU assignment has been found having > > race issue https://launchpad.net/bugs/1836204) > > * Avoid the device assignment management to be virt driver and > > platform-specific. > > * Keep it simple. > > > > Currently, we go through two solutions here. This email is going to > > summary the pros/cons of these two solutions. > > > > #1 Without Nova DB persistent for the assignment info, depends on > > hypervisor persistent it. > > > > The idea is adding > > VirtDriver.claim/unclaim_for_instance(instance_uuid, flavor_id) > > interface. The assignment info is populated from hypervisor when > > nova-compute startup. And keep in the memory of VirtDriver. The > > Is there any reason the device assignment in-memory mapping has to be in > the virt driver and not, for example, the ResourceTracker itself? This > becomes important below. > We will answer this below. It is about whether using migration allocations makes sense or not.
> > instance_uuid is used to distinguish the claim from the different > > instance. The flavor_id is used for the same host resize, to distinguish > > the claim for source and target. This virt driver method is being > > invoked inside ResourceTracker to avoid the race problem. There is no > > any nova DB persistent for the assignment info. > > https://review.opendev.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/virtual-persistent-memory > > > > pros: > > * Hidden all the device detail and virt driver detail inside the virt > > driver. > > * Less upgrade issue in the future since it doesn't involve any nova DB > > model change > > * Expecting as simple implementation since everything inside virt driver. > > cons: > > * Two cases are being found, the domain XML being lost for Libvirt > > virt driver. And we don't know other hypervisor behavior yet. > > How do we "lose" the domain xml? I guess your next points are examples? > > > * For the same host resize, the source and target instance are > > sharing single one domain XML. After the libvirt virt driver updated the > > domain XML to the target instance, the source instance's assignment > > information will be lost when a nova-compute restart happened. That > > means the resized instance can't be revert, the only choice for the user > > is to confirm the resize. > > As discussed with Dan and me in IRC a week or two ago, we suggested you > could do the same migration-based allocation switch for move operations > as we do for cold migrate, resize and live migration since Queens, where > the source node allocations are consumed by the migration record and the > target node allocations are consumed by the instance. The conductor > swaps the source node allocations before calling the scheduler which > will create the target node allocations with the instance. On > confirm/revert we either drop the source node allocations (held by the > migration) or swap them back (and drop the target node allocations held > by the instance). > > In your device case, clearly conductor and placement aren't involved > since we're not tracking those low-level details in placement. Placement > just knows there is a certain amount of some resource class but not > which consumers are actually assigned which devices on the hypervisor > (like pci device management). But as far as keeping track of the > assignments in memory, we could still do the same swap where the > migration record is tracking the old flavor device assignments (in the > virt driver or resource tracker) and the instance record is tracking the > new flavor device assignments. That resolves the same-host resize case, > correct? Doing it generically in the ResourceTracker is why I asked > about doing that above in the RT rather than the driver. > > What that doesn't solve is restarts of the compute service while there > is a pending resize, which is why we need to persist some information > somewhere. We could use the domain xml if it contained the flavor id, > but it doesn't - and for same-host resize we only have one domain xml so > that's not really an option (as you've noted). > Actually, there are two problems here, let us talk about them separately: 1. Lost allocation info after a compute service restart during a same-host resize. This is about the point above: it is not about using migration allocations versus instance_uuid + flavor_id, and it can only be fixed by DB persistence, as you also said later about persisting it in the MigrationContext. I will explain that later. 2.
Supporting same-host cold migration. This is the point I mentioned below. For the same-host resize, instance_uuid + flavor_id works very well, but it can't support same-host cold migration. And yes, migration allocations can fix it. But also, as you said, do we need to support same-host cold migration? If the answer is no, then we needn't bother with it; instance_uuid + flavor_id is much simpler. If the answer is yes, right, we can put it into the RT, but it will be complex: maybe we need a data model like the one in the DB-way proposal to pass the virt driver/platform-specific info between the RT and the virt driver. Also, think about the case where we need to check whether there is any incomplete live migration: we need to do a cleanup of all free vpmems, since we lost the allocation info for the live migration. Then we need a virt driver interface to trigger that cleanup, and I'm pretty sure I don't want to call it driver.cleanup_vpmems(). We also need to change the existing driver.spawn method to pass the assigned resources into the virt driver. Also thinking about the case of an interrupted migration, I guess there is no way to switch the consumer back. I also remember Dan said it isn't good to not support same-host cold migration. > > * For live migration, the target host's domain XML will be > cleanup by libvirt after a host restart. The assignment information is > lost before nova-compute startup and doing a cleanup. > > I'm not really following you here. This is not an expected situation, > correct? Meaning the target compute service is restarted while there is > an in-progress live migration? I imagine if that happens we have lots of > problems and most (manual) recovery procedures are going to involve the > operator trying to destroy the guest and its related resources from the > target host and hard rebooting to recover the guest on the source host. > It is worse than that: the restart of nova-compute will just set the instance back to ACTIVE status https://github.com/openstack/nova/blob/62f6a0a1bc6c4b24621e1c2e927177f99501bef3/nova/compute/manager.py#L1058 leaving the target host without any cleanup. Also, in the LM rollback method we set the instance back to ACTIVE at the very beginning, so if the compute restarts before the actual cleanup, the target won't be cleaned up either. https://github.com/openstack/nova/blob/62f6a0a1bc6c4b24621e1c2e927177f99501bef3/nova/compute/manager.py#L7323 We shouldn't set the instance back to ACTIVE while there is a migration that hasn't been cleaned up. Those are existing bugs, and we should fix them. Whichever solution we choose, they won't be fixed automatically by the new solution.
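Paraphrasing the flow at that first link (a rough sketch, not the literal code):

    # in _init_instance() on nova-compute startup (paraphrased)
    if instance.task_state == task_states.MIGRATING:
        # live migration did not complete, but instance is on this host
        instance.task_state = None
        instance.vm_state = vm_states.ACTIVE
        instance.save()
    # nothing revisits the half-finished Migration record that points at
    # this host as the target, so the destination is never cleaned up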
> > To summarize, it sounds like the biggest problem is the lack of > persistence during a same-host resize because we'd lost the in-memory > device assignment tracking, even if we did the migration-based > allocation swap magic as described above. > Exactly. > > Could we have a compromise where for all times *except* during some > migration, we get the assigned devices from the hypervisor, but > otherwise during a migration we store the old/new assignments in the > MigrationContext? That would give us the persistence we need and would > only be something that we temporarily care about during a migration. The > thing I'm not sure about is if we do that, does it make things more > complicated in general for the non-migration cases, or if we do it > should we just go the extra mile and always be tracking assigned devices > in the database exactly like what we do for PCI devices today - meaning > we wouldn't have a special edge case just for migrations with these > types of resources. > Then the only difference from the DB persistence way is that we also store the allocation in "Instance.resources". If we take that one more step, we needn't change our virt driver interface or think about how to switch the consumer from migration back to instance, which is the complexity I mentioned above. > > > > > #2 With nova DB persistent, but using virt driver specific blob to store > > virt driver specific info. > > > > The idea is persistent the assignment for instance into DB. The > > resource tracker gets available resources from virt driver. The resource > > tracker will calculate on the fly based on available resources and > > assigned resources from instance DB. The new field ·instance.resources· > > is designed for supporting virt driver specific metadata, then hidden > > the virt driver and platform detail from RT. > > https://etherpad.openstack.org/p/vpmems-non-virt-driver-specific-new > > I left some comments in the etherpad about the proposed claims process > but the "on the fly" part concerns me for performance, especially if we > don't make that conditional based on the types of resources we're > claiming. During a claim the ResourceTracker already has the list of > tracked_instances and tracked_migrations it cares about, but it sounds > like you're proposing that we would also now have to re-fetch all of > that data from the database just to get the resources and migration > context information for any instances tracked by that host to determine > what their assignments are. That seems really heavy-weight to me and is > my major concern with this approach, well, that and the fact it sounds > like we're creating a new version of the PCIManager (though more > generic, it could have a lot of the same split brain type issues we've > had with tracking PCI device inventory and allocations over the years > since it was introduced; by split brain I mean the hypervisor saying one > thing but nova thinking another). > I think you are right: we can use RT.tracked_instances and RT.tracked_migrations, and then it isn't on the fly anymore. There are two existing bugs that should be fixed. 1. The orphaned instance isn't in RT.tracked_instances. Although there is resource accounting for orphaned instances https://github.com/openstack/nova/blob/62f6a0a1bc6c4b24621e1c2e927177f99501bef3/nova/compute/resource_tracker.py#L771, the virt driver interface https://github.com/openstack/nova/blob/62f6a0a1bc6c4b24621e1c2e927177f99501bef3/nova/compute/resource_tracker.py#L1447 isn't implemented by most virt drivers. 2.
The error-status migration isn't in RT.tracked_migrations. A resize may be interrupted in the middle, and then we set the migration to an error status. Although we have a _clean_incomplete_migration periodic task to clean up those error migrations, there is a window before the cleanup in which the RT doesn't count the resource consumption. Those are existing bugs and are easy to fix. That is why I used "on the fly" in the beginning, but I agree those bugs are easy to fix, and the code will be tidier. For the split-brain problem, to be honest, the domain XML way has shown us it can't fix that either: it loses the allocations for the same-host resize and live migration cases. > > > > > pros: > > * Persistent assignment into instance object. Avoid the corner case > > we lost the assignment. > > * The ResourceTracker is responsible for doing the claim job. This > > is more reliable and no race problem, since ResourceTracker works very > > well for a long time. > > Heh, I guess yeah. :) There are a lot of dragons in that code and we're > still fixing bugs in it even though it should be mostly stable after all > of these years. But resource tracking in general sucks regardless of > where it happens (RT, placement or the virt driver) so we just have to > be comfortable with knowing there are going to be dragons. > I already listed the bugs above; I think the problem is that we are missing some tracking and don't have a closed loop for the instance and migration status. I added my analysis at the bottom of the etherpad. https://etherpad.openstack.org/p/vpmems-non-virt-driver-specific-new > > * The virt driver specific json-blob hidden the virt driver/platform > > detail from the ResourceTracker. > > Random json blobs are nasty in general especially if we need to convert > data at runtime later for some upgrade purpose. What is proposed in the > etherpad seems OK(ish) though given the only very random thing is the > 'metadata' field, but I could see that all getting confusing to maintain > later when we have different schema/semantic rules about what's in the > metadata depending on the resource class and virt driver. But we'll > likely have that problem anyway if we go with the non-persistent option > #1 above. > It is a JSON blob dumped from a versioned object, so it should be OK? > > > * The free resource is calculated on the fly, keeping the > > implementation simple. Actually, the RT just provides a point to do the > > claim, needn't involve the complex of RT.update_available_resources > > cons: > > * Doesn't like PCIManager which has both instance side and host side > > persistent info. On the fly calculation should take care of the orphaned > > instance(the instance is deleted from DB, but still existing on the > > host), so actually, it isn't unresolvable issue. And it isn't too hard > > to upgrade to have host side persistent info in the future if we want. > > * Data model change for the original proposal. Need review to decide > > the data model enough generic > > > > Currently, Sean, Eric and I prefer the #2 now since the #1 has flaws for > > the same host resize and live migration can't be skipped by design. > > At this point I can't say I have a strong opinion. I think either > approach is going to be complicated and buggy and hard to maintain, > especially if we don't have CI for these more exotic scenarios (which we > don't for VGPU or VPMEM even though you said someone is working on the > latter).
I've voiced my concerns here but I'm not going to "die on a > hill" for this, so in the end I'll likely roll over for whatever those > of you that really care about this want to do, and know that you're > going to be maintainers of it. > If you worry about the VPMEM itself, Rui is working on CI; he said he needs two weeks before he has the work done. We can ask him to give an update here if you want. If you worry about the RT part, I think we can have functional tests to cover that? I won't say the DB way is complicated: most of the code in the RT is about getting the assigned resources from tracked_instances and tracked_migrations, then comparing them to the available resources. The bugginess comes from existing nova bugs; it isn't the fault of the proposal. I don't know what the maintenance concern points to; it would be great if we had a specific case to discuss. > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gaetan.trellu at incloudus.com Fri Aug 23 11:06:23 2019 From: gaetan.trellu at incloudus.com (Gaëtan Trellu) Date: Fri, 23 Aug 2019 07:06:23 -0400 Subject: [openstack-dev][magnum] Nominating brtknr for Magnum core In-Reply-To: <4fd7baa5-b3e5-80dc-55dd-e2f5f98d1511@catalyst.net.nz> Message-ID: An HTML attachment was scrubbed... URL:
From gmann at ghanshyammann.com Fri Aug 23 11:09:31 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 23 Aug 2019 20:09:31 +0900 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <20190821194144.GA1844@zeong> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190819145437.GA29162@zeong> <16cb3b24926.f85727e0206810.691322353108028475@ghanshyammann.com> <20190821194144.GA1844@zeong> Message-ID: <16cbe2acb1d.ce031552275757.8109746026654681476@ghanshyammann.com> ---- On Thu, 22 Aug 2019 04:41:44 +0900 Matthew Treinish wrote ---- > On Wed, Aug 21, 2019 at 07:21:41PM +0900, Ghanshyam Mann wrote: > > ---- On Mon, 19 Aug 2019 23:54:37 +0900 Matthew Treinish wrote ---- > > > On Sun, Aug 18, 2019 at 11:16:11AM -0500, Matthew Thode wrote: > > > > NOVA: > > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > > websockify===0.9.0 tempest test failing > > > > > > > > KEYSTONE: > > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > > > NEUTRON: > > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > > this could be caused by pytest===5.1.0 as well > > > > > > > > KURYR: > > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > > https://review.opendev.org/665352 > > > > > > > > MISC: > > > > tornado===5.1.1 salt is causing this, no eta on fix (same as the last year) > > > > stestr===2.5.0 needs merged https://github.com/mtreinish/stestr/pull/265 > > > > > > This actually doesn't fix the underlying issue blocking it here. PR 265 is for > > > fixing a compatibility issue with python 3.4, which we don't officially support > > > in stestr but was a simple fix. The blocker is actually not an stestr issue, > > > it's a testtools bug: > > > > > > https://github.com/testing-cabal/testtools/issues/272 > > > > > > Where this is coming into play here is that stestr 2.5.0 switched to using an > > > internal test runner built off of stdlib unittest instead of testtools/subunit > > > for python 3.
This was done to fix a huge number of compatibility issues people > > > had reported when trying to run stdlib unittest suites using stestr on > > > python >= 3.5 (which were caused by unittest2 and testools). The complication > > > for openstack (more specificially tempest) is that it's built off of testtools > > > not stdlib unittest. So when tempest raises 'self.skipException' as part of > > > it's class level skip checks testtools raises 'unittest2.case.SkipTest' instead > > > of 'unittest.case.SkipTest'. stdlib unittest does not understand what that is > > > and treats it as an unhandled exception which is a test failure, instead of the > > > intended skip result. [1] This is actually a general bug and will come up whenever > > > anyone tries to use stdlib unittest to run tempest. We need to come up with a > > > fix for this problem in testtools [2] or just workaround it in tempest. > > > > > > [1] skip decorators typically aren't effected by this because they set an > > > attribute that gets checked before the test method is executed instead of > > > relying on an exception, which is why this is mostly only an issue for tempest > > > because it does a lot of run time skips via exceptions. > > > > > > [2] testtools is mostly unmaintained at this point, I was recently granted > > > merge access but haven't had much free time to actively maintain it > > > > Thanks matt for details. As you know, for Tempest where we need to support py2.7 > > (including unitest2 use) for stable branches, we are going to use the specific stetsr > > version/branch( > is good option to me. I think your PR to remove the unittest2 use form testtools > > make sense to me [1]. A workaround in Tempest can be last option for us. > > https://github.com/testing-cabal/testtools/pull/277 isn't a short term > solution, unittest2 is still needed for python < 3.5 in testtools and > testtools has not deprecated support for python 2.7 or 3.4 yet. I probably > can rework that PR so that it's conditional and always uses stdlib unittest > for python >= 3.5 but then testtools ends up maintaining two separate paths > depending on python version. I'd like to continue thinking about that is as a > long term solution because I don't know when I'll have the time to keep pushing > that PR forward. Thanks for more details. I understand that might take time. I am in OpenInfra event and after that on vacation till 29th Aug. I will be able to check the workaround on testtools or tempest side after that only. I will check with Matthew about when is the plan to move the stestr to 2.5.0. -gmann > > > > > Till we fix it and to avoid gate break, can we cap stestr in g-r - stestr<2.5.0 ? I know that is > > not the options you like. > > > > [1] https://github.com/mtreinish/testtools/commit/38fc9a9e302f68d471d7b097c7327b4ff7348790 > > > > -gmann > > > > > > > > -Matt Treinish > > > > > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > > > I'm trying to get this in place as we are getting closer to the > > > > requirements freeze (sept 9th-13th). Any help clearing up these bugs > > > > would be appreciated. > > > > > > > > -- > > > > Matthew Thode > > > > > > > > > > > > > > From hagun.kim at samsung.com Fri Aug 23 11:29:41 2019 From: hagun.kim at samsung.com (=?UTF-8?B?6rmA7ZWY6rG0?=) Date: Fri, 23 Aug 2019 20:29:41 +0900 Subject: [openstack-helm] k8s endpoints disappear sometimes References: Message-ID: <20190823112941epcms1p5214a0fff5877e72cb35912ae1a555f26@epcms1p5> An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 13402 bytes Desc: not available URL: From a.settle at outlook.com Fri Aug 23 11:50:42 2019 From: a.settle at outlook.com (Alexandra Settle) Date: Fri, 23 Aug 2019 11:50:42 +0000 Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376 In-Reply-To: References: Message-ID: Hi everyone, I have abandoned https://review.opendev.org/#/c/639376/ due to the lack of response on the review itself, and this email thread. All discussion regarding the legacy CLI client and OSC to please be constructed through this thread. As part of step 2 of my outlined plan below, I'd be looking for someone with particular interest in the transition of legacy CLI and OSC to step forward and help start a pop-up group. Thank you, Alex On Tue, 2019-08-20 at 08:32 +0000, Alexandra Settle wrote: > Hi all, > > For the Train cycle Artem Goncharov proposed moving the legacy client > CLIs to OSC. This goal did not move forward due to a broad range of > concerns and was eventually -1'd by Erno Kuvaja (Glance PTL) as he > was > unable to support it from a Glance point of view. > > Moving forward, I'd like to end the discussion in the review [1] by > abandoning the patch and move this to the mailing list. The following > looks like it needs to happen: > > 1. This PR should be abandoned. It is not going to be accepted as a > commmunity goal in this format and the debate within the comments is > circular. Let's step out of here, and start having conversation > elsewhere. > > 2. To start those conversations, a pop-up team would be a suitable > alternative to begin driving that work. Someone needs to step forward > to lead the pop-up team. The TC recommends two individuals step > forwards as indicated in the pop-up team Governance document [2]. > > 3. I recommend the pop-up team include Erno Kuvaja or a > representative > from the Glance team and including Matt Reidemann (who has been > working > to close the compute API gaps). > > 4. The leaders must then identify a clear objective, and a clear > disband criteria. > > If in case the pop-up team decide that this could be better drafted > as > a community goal for U, then that is a suitable alternative that can > be > defined using the new goal selection criteria that is being defined > here [3] But this should not be decided before clear objectives are > defined. > > Thanks, > > Alex > IRC: asettle > > [1] https://review.opendev.org/#/c/639376/ > [2] https://governance.openstack.org/tc/reference/popup-teams.html > [3] https://review.opendev.org/#/c/667932/ -- Alexandra Settle IRC: asettle From jim at jimrollenhagen.com Fri Aug 23 12:08:44 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 23 Aug 2019 08:08:44 -0400 Subject: [all][tc][horizon] A web tool which helps administrators in managing openstack clusters In-Reply-To: References: Message-ID: On Fri, Aug 23, 2019 at 3:21 AM Adrian Turjak wrote: > Hello Douglas! > > As someone who has struggled against the performance issues in Horizon I > can easily feel your pain, and an effort to making something better is > good. Sadly, this doesn't sound like a safe direction, even if for admin > only purposes. > > The first major issue is that you connect to the databases of the services > directly. That's a major issue, both for long term compatibility, and > security. 
The APIs should always be the main point of contact and the ONLY > contract that the services have to maintain. By connecting to the database > directly you are now relying on a data structure that can, and likely will > change, and any security and sanity checking on filters and queries is now > handled on your layer rather than the application itself. Not only that, > but your dashboard also now needs passwords for all the databases, and by > the sounds of it all the message queues. > > I would highly encourage you to try and work with the community to fix the > issues at the API layers rather than bypassing them. We can add better > query and filtering to the APIs, and we can work on improving performance > if we know the pain points, and this is likely where contributions would be > welcome. > > I think we do need an alternative to Horizon, and the ideal solution in my > mind is to make a new dashboard built on either React or Vue, with a thin > proxy app (likely in Flask) that serves the initial javascript page, and > proxies any API requests to the services themselves. The filter issues > should made better by implementing more complex filtering in the APIs > themselves, and having the Dashboard layer better at exposing those > dynamically. React or Vue would do a much better job of dynamically and > quickly reloading and querying the services, and it would make the whole > experience much nicer. > Michael Krotscheck did a bunch of work to put CORS config in many API services. Why bother with a proxy when we can continue that? :) > > The one of the best parts of Horizon was that it was a 'dumb' dashboard > built around your token. It can be deployed anywhere, by anyone, and only > needs access to the cluster to work, no secrets to any database. > > I know this is a huge issue, and we do need to solve it, but I hope we can > work on something better that doesn't bypass the APIs, because that isn't a > safe solution. :( > > Cheers, > > Adrian Turjak > On 20/08/19 10:14 PM, Douglas Zhang wrote: > > Hello everyone, > > To help users interact with openstack, we’re currently developing a > client-side web tool which enables administrators to manage their openstack > cluster in a more efficient and convenient way. (Since we have not named it > officially yet, I’m going to call it openstack-admin) > > *# Introduction* > > Some may ask, “Why do we need an extra web-based user interface since we > have horizon?” Well, although horizon is a mature and powerful dashboard, > it is far not efficient enough on big clusters (a simple list operation > could take seconds to complete). What’s more, its flexibility of searching > could not match our requirements. To overcome obstacles above, a more > efficient tool is urgently required, that’s why we started to develop > openstack-admin. > > *# Highlights* > > Comparing with the current user interface, openstack-admin has following > advantages: > > - > > *Fast*: openstack-admin gets data straightly from SQL databases > instead of calling standard openstack API, which accelerates the querying > period to a large extent (especially when we’re dealing with a large amount > of data). > - > > *Flexible*: openstack-admin supports the fuzzy search for any > important field(e.g. display_name/uuid/ip_address/project_name of an > instance), which enables users to locate a particular object in no time. 
> - > > *User-friendly*: the backend of openstack-admin gets necessary > messages from the message queue used by nova, and send them to the frontend > using websocket. This way, not only more realistic progress bars could be > implemented, but more detailed information could be provided to users as > well. > > *# Issues* > > To make this tool more efficient and provide better support for > concurrency, we chose Golang to implement openstack-admin. As I’ve asked > before (truly appreciate advises from Jeremy and Ghanshyam), a project > written by an unofficial language may be accepted only if existing > languages have been proven to not meet the technical requirements, so we’re > considering re-implementing openstack-admin using python if we can’t come > to an agreement on the language issue. > > So that’s all. How do you guys think of this project? > > Thanks, > > Douglas Zhang > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elmiko at redhat.com Fri Aug 23 12:57:06 2019 From: elmiko at redhat.com (Michael McCune) Date: Fri, 23 Aug 2019 08:57:06 -0400 Subject: [all][api-sig] Update on SIG status and seeking guidance from the community In-Reply-To: References: Message-ID: On Fri, Aug 23, 2019 at 4:28 AM Thierry Carrez wrote: > > Michael McCune wrote: > > [...] > > I don't have any answers to this more existential question about the > > SIG, but we as a group felt it was a good idea to reach out to the > > wider community for advice and guidance. So, what do you all have to > > say? What would you like to see from the API-SIG moving forward? > > Should we re-evaluate the purpose of this group and how it relates to > > the OpenStack community? > > [...] > > Thanks for starting this thread, Michael. Just because something was > needed at one point in our history does not mean we need to continue > doing it forever, so reevaluating periodically is important. > > The API WG was originally formed to (1) provide guidelines for API > design for project teams to follow, and (2) improve API user experience > by converging the OpenStack APIs to be consistent. It was then converted > to a SIG, but the original purpose remained. > > Would you say that those goals have been completed? Are the documented > guidelines sufficiently complete to be usable? Is the resulting API user > experience sufficiently consistent? If not, maybe this is an opportunity > to recruit, by painting a desirable common goal, maybe leveraging the > "community goals" process to achieve incremental, baby-steps > improvements within the scope of a release cycle. > i think the questions you lead with are excellent, and definitely something that we as a SIG, and community, should consider. although i feel quite good about the current state of the guidelines i'm sure there is room for improvement, but i think the question about the resulting API user experience is quite poignant. these are topics we are definitely bringing up for Ed's last meeting next week ;) > Personally I think it's always good to have a group of API design > experts that project teams can tap into when they have questions on good > API design. I just have no idea how often such advice is actually asked > for. How often do you get questions on API design from projects adding > features to their API? Would you say that when a new project adds an > API, it is well-designed, or it would have benefited from your advice? 
these days, the rate of questions to the group has slowed but there are
still times when we get called up (maybe once a month or less). i agree
that having a group of experienced API engineers /does/ help the broader
community, i think the tough part to evaluate is the impact we have on a
day-to-day basis.

in the past i have experienced first hand the effects of the API SIG and
i have usually been very pleased and delighted that the group had such a
wide reach. within the last few years though i have not had as much
direct contact, it has mainly been through SIG meetings.

i really appreciate the questions you are posing, i don't think i have
good answers for them but i believe the SIG should put some time into
thinking about, and answering, them.

thanks again for the thoughtful comments =)

peace o/

> Cheers,
>
> --
> Thierry Carrez (ttx)
>

From openstack at fried.cc Fri Aug 23 13:55:06 2019
From: openstack at fried.cc (Eric Fried)
Date: Fri, 23 Aug 2019 08:55:06 -0500
Subject: [all][api-sig] Update on SIG status and seeking guidance from the community
In-Reply-To:
References:
Message-ID:

> feel quite good about the current state of the guidelines i'm sure
> there is room for improvement,

It's worth noting that there's quite a lot of unmerged guideline content
under the api-sig's purview [1]. We should either close or rehome these
patches before we consider disbanding the SIG. It's obviously hard to
merge stuff with a small (and now shrinking) core list [2], but at least
some of this content is valuable and should not be abandoned. I, for
one, would like to be able to refer to published documentation rather
than in-flight (or abandoned) change sets when wrestling with version
discovery logic in ksa/sdk.

efried

[1] https://review.opendev.org/#/q/project:openstack/api-sig+status:open
[2] https://review.opendev.org/#/admin/groups/468,members

From mriedemos at gmail.com Fri Aug 23 14:36:43 2019
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 23 Aug 2019 09:36:43 -0500
Subject: [nova] The pros/cons for libvirt persistent assignment and DB persistent assignment.
In-Reply-To:
References: <86cd885d-8cfa-23db-421e-2f3b2989009e@gmail.com>
Message-ID: <72b6ae74-70e0-777f-e974-33dc4719684b@gmail.com>

On 8/23/2019 3:43 AM, Alex Xu wrote:
>
> 2. Supporting same host cold migration
>
> This is the point I said below.
> For the same host resize, instance_uuid + flavor_id is working very
> well. But it can't support the same host cold migration. And yes,
> migration allocation can fix it. But also as you said, do we need to
> support the same host cold migration?

I see no reason to try and bend over backward to support same host cold
migration since, as I said, the only virt driver that supports that
today (and has been the only one for a long time - maybe forever?) is
the vmware driver which isn't supporting any of these more advanced
flows (VGPU, VPMEM, PCPU).

>
> If the answer is no, then we needn't bother with it. instance_uuid +
> flavor_id is much simpler. If the answer is yes, right, we can put it
> into the RT. But it will be complex; maybe we need a data model like the
> DB-persistence proposal to pass the virt driver/platform-specific info
> between the RT and the virt driver. Also, think about the case where we
> need to check if there is any incomplete live migration: we need to do a
> cleanup for all free vpmems, since we lost the allocation info for the
> live migration. Then we need a virt driver interface to trigger that
> cleanup, and I'm pretty sure I don't want to call it
> driver.cleanup_vpmems().
> We also need to change
> the existing driver.spawn method to pass the assigned resources into the
> virt driver. Also, thinking about the case of an interrupted migration, I
> guess there is no way to switch the
>
> I also remember Dan said it isn't good to not support same host cold
> migration.

Again, the libvirt driver, as far as I know, has never supported same
host cold migration, nor is anyone working on that, so I don't see where
the need to make that support happen now is coming from. I think it
should be ignored for the sake of these conversations.

>
> I think you are right, we can use RT.tracked_instances and
> RT.tracked_migrations. Then it isn't on the fly anymore.
> There are two existing bugs that should be fixed:
>
> 1. The orphaned instance isn't in RT.tracked_instances. Although there is
> resource consumption for orphaned instances
> https://github.com/openstack/nova/blob/62f6a0a1bc6c4b24621e1c2e927177f99501bef3/nova/compute/resource_tracker.py#L771,
> the virt driver interface
> https://github.com/openstack/nova/blob/62f6a0a1bc6c4b24621e1c2e927177f99501bef3/nova/compute/resource_tracker.py#L1447
> isn't implemented by most virt drivers.

For the latter, get_per_instance_usage, that's only implemented by the
xenapi driver which is on the path to being deprecated by the end of
Train at this point anyway:

https://review.opendev.org/#/c/662295/

so I wouldn't worry too much about that one.

In summary I'm not going to block attempts at proposal #2. As you said,
there are existing bugs which should be handled, though some likely
won't ever be completely fixed (automatic cleanup and recovery from live
migration failures - the live migration methods are huge and have a lot
of points of failure, so properly rolling back from all of those is
going to be a big undertaking in test and review time, and I don't see
either happening at this stage).

I think one of the motivations to keep VPMEM resource tracking isolated
to the hypervisor was just to get something quick and dirty working with
a minimal amount of impact to other parts of nova, like the data model,
ResourceTracker, etc. If proposal #2 also solves issues for VGPUs and
PCPUs then there is more justification for doing it.

Either way I'm not opposed to the #2 proposal so if that's what the
people that are working on this want, go ahead. I personally don't plan
on investing much review time in this series either way though, so
that's kind of why I'm apathetic about this.

--

Thanks,

Matt

From jungleboyj at gmail.com Fri Aug 23 14:36:40 2019
From: jungleboyj at gmail.com (Jay Bryant)
Date: Fri, 23 Aug 2019 10:36:40 -0400
Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376
In-Reply-To:
References:
Message-ID: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com>

> 3. I recommend the pop-up team include Erno Kuvaja or a representative
> from the Glance team and including Matt Riedemann (who has been working
> to close the compute API gaps).
>
> Alex

Myself or someone from the Cinder team should be involved as well as we
have concerns/challenges with removing python-cinderclient. We have been
trying to get OSC to parity with the cinderclient CLI but have not been
able to get all the way there.

Thanks!

Jay

> 4. The leaders must then identify a clear objective, and a clear
> disband criteria.
>
> If in case the pop-up team decide that this could be better drafted as
> a community goal for U, then that is a suitable alternative that can be
> defined using the new goal selection criteria that is being defined
> here [3] But this should not be decided before clear objectives are
> defined.
>
> Thanks,
>
> Alex
> IRC: asettle
>
> [1] https://review.opendev.org/#/c/639376/
> [2] https://governance.openstack.org/tc/reference/popup-teams.html
> [3] https://review.opendev.org/#/c/667932/

From mnaser at vexxhost.com Fri Aug 23 14:58:11 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Fri, 23 Aug 2019 10:58:11 -0400
Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster
In-Reply-To:
References:
Message-ID:

On Thu, Aug 22, 2019 at 11:46 PM Feilong Wang wrote:
>
> Hi all,
>
> At this moment, Magnum is still using Fedora Atomic 27 as the default
> image in devstack. But you can definitely use Fedora Atomic 29 and it
> works fine. But you may run into a performance issue when booting Fedora
> Atomic 29 if your compute host doesn't have enough entropy. There are two
> steps you need for that case:
>
> 1. Adding property hw_rng_model='virtio' to the Fedora Atomic 29 image
>
> 2. Adding property hw_rng:allowed='True' to the flavor, and we also need
> hw_rng:rate_bytes=4096 and hw_rng:rate_period=1 to get a reasonable rate
> limit to avoid the VM draining the hypervisor.
>
> We are working on a patch for Magnum devstack to support FA29 out of the
> box. Meanwhile, we're starting to test Fedora CoreOS 30. Please pop into
> the #openstack-containers channel if you have any questions. Cheers.

Neat! I think it's important for us to get off Fedora Atomic given
that RedHat seems to be ending it soon. Is the plan to move towards
Fedora CoreOS 30 or has there been consideration of using something
like an Ubuntu-base (and leveraging something like kubeadm+ansible to
drive the deployment?)-

>
>
> --
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> --------------------------------------------------------------------------
> Head of R&D
> Catalyst Cloud - Cloud Native New Zealand
> Tel: +64-48032246
> Email: flwang at catalyst.net.nz
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --------------------------------------------------------------------------

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From gr at ham.ie Fri Aug 23 14:59:25 2019
From: gr at ham.ie (Graham Hayes)
Date: Fri, 23 Aug 2019 15:59:25 +0100
Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376
In-Reply-To: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com>
References: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com>
Message-ID: <077fa4e4-a1d8-ac69-6917-48070cc44032@ham.ie>

On 23/08/2019 15:36, Jay Bryant wrote:
>
>> 3. I recommend the pop-up team include Erno Kuvaja or a representative
>> from the Glance team and including Matt Riedemann (who has been working
>> to close the compute API gaps).
>
> Alex
>
> Myself or someone from the Cinder team should be involved as well as we
> have concerns/challenges with removing python-cinderclient. We have been
> trying to get OSC to parity with the cinderclient CLI but have not been
> able to get all the way there.
>
> Thanks!
>
> Jay

From what I have heard, some of the problems with relying on OSC was
reliance on a central (OSC) team for implementation and reviews.
Would moving tools like cinder or glance to OSC plugins help solve some of these concerns? This would allow the teams to control the CLI destiny, and even allow teams to add features to team CLIs and OSC simultaneously, while (or if) we transition to OSC. >> 4. The leaders must then identify a clear objective, and a clear >> disband criteria. >> >> If in case the pop-up team decide that this could be better drafted as >> a community goal for U, then that is a suitable alternative that can be >> defined using the new goal selection criteria that is being defined >> here [3] But this should not be decided before clear objectives are >> defined. >> >> Thanks, >> >> Alex >> IRC: asettle >> >> [1] https://review.opendev.org/#/c/639376/ >> [2] https://governance.openstack.org/tc/reference/popup-teams.html >> [3] https://review.opendev.org/#/c/667932/ > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From fungi at yuggoth.org Fri Aug 23 15:20:59 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 23 Aug 2019 15:20:59 +0000 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster In-Reply-To: References: Message-ID: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org> On 2019-08-23 10:58:11 -0400 (-0400), Mohammed Naser wrote: [...] > Neat! I think it's important for us to get off Fedora Atomic given > that RedHat seems to be ending it soon. Is the plan to move towards > Fedora CoreOS 30 or has there been consideration of using something > like an Ubuntu-base (and leveraging something like kubeadm+ansible to > drive the deployment?)- This seems like it's particularly time-sensitive as Fedora 29 will be EOL and no longer receiving security fixes within just a few short months from now. If it's not fixed before Train is finalized, you're basically releasing an unusable service since it will pretty much immediately be depending on an insecure distribution nobody wants to put into production, right? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Aug 23 15:38:18 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 23 Aug 2019 08:38:18 -0700 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster In-Reply-To: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org> References: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org> Message-ID: On Fri, Aug 23, 2019, at 8:21 AM, Jeremy Stanley wrote: > On 2019-08-23 10:58:11 -0400 (-0400), Mohammed Naser wrote: > [...] > > Neat! I think it's important for us to get off Fedora Atomic given > > that RedHat seems to be ending it soon. Is the plan to move towards > > Fedora CoreOS 30 or has there been consideration of using something > > like an Ubuntu-base (and leveraging something like kubeadm+ansible to > > drive the deployment?)- > > This seems like it's particularly time-sensitive as Fedora 29 will > be EOL and no longer receiving security fixes within just a few > short months from now. If it's not fixed before Train is finalized, > you're basically releasing an unusable service since it will pretty > much immediately be depending on an insecure distribution nobody > wants to put into production, right? Note that 27 is still the default and has been EOL for a long time. 
Getting to 29 is a good improvement on top of that even if it EOLs sooner than we would like. > -- > Jeremy Stanley > > Attachments: > * signature.asc From mnaser at vexxhost.com Fri Aug 23 15:45:50 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 23 Aug 2019 11:45:50 -0400 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster In-Reply-To: References: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org> Message-ID: On Fri, Aug 23, 2019 at 11:44 AM Clark Boylan wrote: > > On Fri, Aug 23, 2019, at 8:21 AM, Jeremy Stanley wrote: > > On 2019-08-23 10:58:11 -0400 (-0400), Mohammed Naser wrote: > > [...] > > > Neat! I think it's important for us to get off Fedora Atomic given > > > that RedHat seems to be ending it soon. Is the plan to move towards > > > Fedora CoreOS 30 or has there been consideration of using something > > > like an Ubuntu-base (and leveraging something like kubeadm+ansible to > > > drive the deployment?)- > > > > This seems like it's particularly time-sensitive as Fedora 29 will > > be EOL and no longer receiving security fixes within just a few > > short months from now. If it's not fixed before Train is finalized, > > you're basically releasing an unusable service since it will pretty > > much immediately be depending on an insecure distribution nobody > > wants to put into production, right? > > Note that 27 is still the default and has been EOL for a long time. Getting to 29 is a good improvement on top of that even if it EOLs sooner than we would like. +1 to this > > -- > > Jeremy Stanley > > > > Attachments: > > * signature.asc > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From elmiko at redhat.com Fri Aug 23 16:28:01 2019 From: elmiko at redhat.com (Michael McCune) Date: Fri, 23 Aug 2019 12:28:01 -0400 Subject: [all][api-sig] Update on SIG status and seeking guidance from the community In-Reply-To: References: Message-ID: On Fri, Aug 23, 2019 at 9:58 AM Eric Fried wrote: > > > > feel quite good about the current state of the guidelines i'm sure > > there is room for improvement, > > It's worth noting that there's quite a lot of unmerged guideline content > under the api-sig's purview [1]. We should either close or rehome these > patches before we consider disbanding the SIG. It's obviously hard to > merge stuff with a small (and now shrinking) core list [2], but at least > some of this content is valuable and should not be abandoned. I, for > one, would like to be able to refer to published documentation rather > than in-flight (or abandoned) change sets when wrestling with version > discovery logic in ksa/sdk. > ++ we should make it a priority to re-visit these and do triage where necessary. thank you peace o/ > efried > > [1] https://review.opendev.org/#/q/project:openstack/api-sig+status:open > [2] https://review.opendev.org/#/admin/groups/468,members > From m2elsakha at gmail.com Wed Aug 21 13:56:26 2019 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Wed, 21 Aug 2019 09:56:26 -0400 Subject: [ALL][UC] Train EC Election - nomination results, and what is going on In-Reply-To: References: Message-ID: Thanks Ian and Ed. Looking forward to working with the rest of the team. Mohamed On Tue, Aug 20, 2019 at 12:55 PM Ian Y. 
Choi wrote:

> Hello all,
>
> As announced [1-2], the UC nomination period was from August 5 - August
> 16, 2019, and as the election officials, we would like to share the
> result, as well as the next steps. This was discussed by the User
> Committee yesterday [3]:
>
> - There was one UC candidacy [4]. The candidacy has been validated,
> and the election officials announce that "Mohamed Elsakhawy" will serve
> on the UC. Congratulations!
> - There will be no election since there was only one candidate out
> of two positions for this election.
> - There were discussions during the UC meeting [3] yesterday, and
> the UC decided to have a second, special election for another seat
> within the timeframe spelled out in the charter.
> The same election officials will serve, and the officials and UC
> members are discussing a feasible time frame for the election.
>
> As mentioned, the upcoming election is *special*, so please watch for
> more election information.
> The election officials will post an email soon with more details on
> the upcoming special UC election.
>
>
> Thank you,
>
> - Ed & Ian
>
> [1] http://lists.openstack.org/pipermail/user-committee/2019-July/002862.html
> [2] http://lists.openstack.org/pipermail/user-committee/2019-August/002864.html
> [3] http://eavesdrop.openstack.org/meetings/uc/2019/uc.2019-08-19-15.04.log.html#l-70
> [4] http://lists.openstack.org/pipermail/user-committee/2019-August/002866.html
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From faridaelzanaty at gmail.com Thu Aug 22 20:15:01 2019
From: faridaelzanaty at gmail.com (farida el-zanaty)
Date: Thu, 22 Aug 2019 16:15:01 -0400
Subject: [research][code-review][software-dev] Design Discussions During OpenStack Code Reviews
Message-ID:

Hi! I am Farida El-Zanaty from McGill University. Under the supervision
of Prof. Shane McIntosh, my research aims to study design discussions
that occur between developers during code reviews. Last year, we
published a study about the frequency and types of such discussions that
occur in OpenStack Nova and Neutron
(http://rebels.ece.mcgill.ca/papers/esem2018_elzanaty.pdf).

We are reaching out to OpenStack developers to better understand their
perspectives on design discussions during code reviews. Those who are
interested can start by participating in our 10-minute survey about
their experiences as both the code reviewer and author. Survey
participants will be entered into a raffle for a $50 Amazon gift card.

Survey: https://forms.gle/Hhn191f6cxF5hVgG8

Thanks for your time,
Farida El-Zanaty

From engrsalmankhan at gmail.com Wed Aug 21 18:49:29 2019
From: engrsalmankhan at gmail.com (Salman Khan)
Date: Wed, 21 Aug 2019 19:49:29 +0100
Subject: FWAAS V2 doesn't work with DVR
Message-ID:

Hi Guys,

I asked this question over the #openstack-neutron channel but didn't get
any answer, so I'm asking here in the hope that someone might read this
email and reply.

The problem is: I have enabled FWAAS_V2 with DVR and that doesn't seem
to work. I debugged things down to the router namespaces, and it looks
like the iptables rules are applied to an rfp- interface which doesn't
exist in that namespace. So the rules are completely wrong, as they are
applied to an interface that doesn't exist; I mean, there is an rfp-*
interface, but the one FWaaS is expecting is not the one that is
actually there. I tried applying the rules to the qr-* interfaces in the
namespace, but that didn't work either; packets are dropping on the
"invalid" state rule. That's probably because of the NAT rules from DVR.
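For reference, this is roughly how I have been inspecting the router
namespace (the router UUID below is a placeholder for the real one on my
network node):

$ sudo ip netns exec qrouter-<router-uuid> ip -o link
$ sudo ip netns exec qrouter-<router-uuid> iptables -t filter -nvL

The first command lists the interfaces that actually exist in the
namespace, and the second shows which interface names the FWaaS rules
are attached to, which is how I spotted the mismatch.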
Can someone please help me understand this behaviour? Is it really
supposed to work or not? And is there any bug fix pending, or any
ongoing work to support this?

Regards,
Salman

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Albert.Braden at synopsys.com Thu Aug 22 21:16:30 2019
From: Albert.Braden at synopsys.com (Albert Braden)
Date: Thu, 22 Aug 2019 21:16:30 +0000
Subject: Nova causes MySQL timeouts
Message-ID:

It looks like nova is keeping mysql connections open until they time
out. Is there a way to stop these error messages?

Aborted connection 10726 to db: 'nova' user: 'nova' host: 'asdf' (Got
timeout reading communication packets)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
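A likely cause and fix, for anyone hitting the same messages: the MySQL
server logs "Aborted connection ... (Got timeout reading communication
packets)" when a client holds a pooled connection idle past the server's
wait_timeout. oslo.db can recycle pooled connections before that
happens; something along these lines in nova.conf should quiet the
errors (280 is illustrative and just needs to sit below your
wait_timeout; on older oslo.db releases the option was called
idle_timeout instead):

[database]
connection_recycle_time = 280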
From corey.bryant at canonical.com Fri Aug 23 18:48:24 2019
From: corey.bryant at canonical.com (Corey Bryant)
Date: Fri, 23 Aug 2019 14:48:24 -0400
Subject: [goal][python3] Train unit tests weekly update (goal-3)
Message-ID:

This is the goal-3 weekly update for the "Update Python 3 test runtimes
for Train" goal [1]. There are 3 weeks remaining for completion of Train
community goals [2].

== How can you help? ==

If your project has failing tests please take a look and help fix.
Python 3.7 unit tests will be self-testing in Zuul.

Failing patches:
https://review.openstack.org/#/q/topic:python3-train+status:open+(+label:Verified-1+OR+label:Verified-2+)

If your project has patches with successful tests please help get them
merged.

Open patches needing reviews:
https://review.openstack.org/#/q/topic:python3-train+is:open

Patch automation scripts needing review:
https://review.opendev.org/#/c/666934

== Ongoing Work ==

Thank you to all who have contributed their time and fixes to enable
patches to land. We're down to 15 projects with failing tests.

== Completed Work ==

All patches have been submitted to all applicable projects for this goal.

Merged patches:
https://review.openstack.org/#/q/topic:python3-train+is:merged

== What's the Goal? ==

To ensure (in the Train cycle) that all official OpenStack repositories
with Python 3 unit tests are exclusively using the
'openstack-python3-train-jobs' Zuul template or one of its variants
(e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and
that tests are passing. This will ensure that all official projects are
running py36 and py37 unit tests in Train.

For complete details please see [1].

== Reference Material ==

[1] Goal description:
https://governance.openstack.org/tc/goals/train/python3-updates.html
[2] Train release schedule:
https://releases.openstack.org/train/schedule.html (see R-5 for "Train
Community Goals Completed")
Storyboard: https://storyboard.openstack.org/#!/story/2005924
Porting to Python 3.7:
https://docs.python.org/3/whatsnew/3.7.html#porting-to-python-3-7
Python Update Process:
https://opendev.org/openstack/governance/src/branch/master/resolutions/20181024-python-update-process.rst
Train runtimes:
https://opendev.org/openstack/governance/src/branch/master/reference/runtimes/train.rst

Thanks,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From feilong at catalyst.net.nz Fri Aug 23 18:53:00 2019
From: feilong at catalyst.net.nz (feilong at catalyst.net.nz)
Date: Sat, 24 Aug 2019 06:53:00 +1200
Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster
In-Reply-To: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org>
References: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org>
Message-ID: <61b312df248b69a2c2fdc25cbedbe229@catalyst.net.nz>

On 2019-08-24 03:20, Jeremy Stanley wrote:
> On 2019-08-23 10:58:11 -0400 (-0400), Mohammed Naser wrote:
> [...]
>> Neat! I think it's important for us to get off Fedora Atomic given
>> that RedHat seems to be ending it soon. Is the plan to move towards
>> Fedora CoreOS 30 or has there been consideration of using something
>> like an Ubuntu-base (and leveraging something like kubeadm+ansible to
>> drive the deployment?)-
>
> This seems like it's particularly time-sensitive as Fedora 29 will
> be EOL and no longer receiving security fixes within just a few
> short months from now. If it's not fixed before Train is finalized,
> you're basically releasing an unusable service since it will pretty
> much immediately be depending on an insecure distribution nobody
> wants to put into production, right?

I don't think so. Devstack is just a dev environment. If you're going to
use FA29 on production, then you can set the properties on image/flavor
for your prod.

From feilong at catalyst.net.nz Fri Aug 23 18:56:49 2019
From: feilong at catalyst.net.nz (feilong at catalyst.net.nz)
Date: Sat, 24 Aug 2019 06:56:49 +1200
Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster
In-Reply-To:
References:
Message-ID:

On 2019-08-24 02:58, Mohammed Naser wrote:
> On Thu, Aug 22, 2019 at 11:46 PM Feilong Wang wrote:
>>
>> Hi all,
>>
>> At this moment, Magnum is still using Fedora Atomic 27 as the default
>> image in devstack. But you can definitely use Fedora Atomic 29 and it
>> works fine. But you may run into a performance issue when booting
>> Fedora Atomic 29 if your compute host doesn't have enough entropy.
>> There are two steps you need for that case:
>>
>> 1. Adding property hw_rng_model='virtio' to the Fedora Atomic 29 image
>>
>> 2. Adding property hw_rng:allowed='True' to the flavor, and we also
>> need hw_rng:rate_bytes=4096 and hw_rng:rate_period=1 to get a
>> reasonable rate limit to avoid the VM draining the hypervisor.
>>
>> We are working on a patch for Magnum devstack to support FA29 out of
>> the box. Meanwhile, we're starting to test Fedora CoreOS 30. Please
>> pop into the #openstack-containers channel if you have any questions.
>> Cheers.
>
> Neat! I think it's important for us to get off Fedora Atomic given
> that RedHat seems to be ending it soon. Is the plan to move towards
> Fedora CoreOS 30 or has there been consideration of using something
> like an Ubuntu-base (and leveraging something like kubeadm+ansible to
> drive the deployment?)-

Personally, I would like us to stay on Fedora Atomic/CoreOS since Magnum
has already benefited from the container-based read-only operating
system. But we did have the discussion about using kubeadm+ansible;
however, as you can see, it's quite a big refactoring, and I'm not sure
we can get it done with our current limited resources.
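As an aside, for anyone wanting to apply the two entropy-related steps
quoted above, the client commands look roughly like this (the image and
flavor names are placeholders for your own):

$ openstack image set --property hw_rng_model=virtio fedora-atomic-29
$ openstack flavor set m1.k8s \
    --property hw_rng:allowed=True \
    --property hw_rng:rate_bytes=4096 \
    --property hw_rng:rate_period=1

Whether entropy starvation is really the problem can be checked inside a
slow-booting guest with:

$ cat /proc/sys/kernel/random/entropy_avail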
>> >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Head of R&D >> Catalyst Cloud - Cloud Native New Zealand >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- From fungi at yuggoth.org Fri Aug 23 19:40:01 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 23 Aug 2019 19:40:01 +0000 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster In-Reply-To: <61b312df248b69a2c2fdc25cbedbe229@catalyst.net.nz> References: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org> <61b312df248b69a2c2fdc25cbedbe229@catalyst.net.nz> Message-ID: <20190823194000.bqg6o63iz2hcjgla@yuggoth.org> On 2019-08-24 06:53:00 +1200 (+1200), feilong at catalyst.net.nz wrote: > On 2019-08-24 03:20, Jeremy Stanley wrote: [...] > > If it's not fixed before Train is finalized, you're basically > > releasing an unusable service since it will pretty much > > immediately be depending on an insecure distribution nobody > > wants to put into production, right? > > I don't think so. Devstack is just a dev environment. If you're > going to use FA29 on production, then you can set the properties > on image/flavor for your prod. I'm probably missing something, but when Fedora 29 reaches EOL there will be no security supported Atomic, right? Are you saying Magnum will work fine in production without Atomic, that it's only used when testing via DevStack? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From feilong at catalyst.net.nz Fri Aug 23 20:00:41 2019 From: feilong at catalyst.net.nz (feilong at catalyst.net.nz) Date: Sat, 24 Aug 2019 08:00:41 +1200 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster In-Reply-To: <20190823194000.bqg6o63iz2hcjgla@yuggoth.org> References: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org> <61b312df248b69a2c2fdc25cbedbe229@catalyst.net.nz> <20190823194000.bqg6o63iz2hcjgla@yuggoth.org> Message-ID: <42554e591114b06d04a649da7e134c1c@catalyst.net.nz> On 2019-08-24 07:40, Jeremy Stanley wrote: > On 2019-08-24 06:53:00 +1200 (+1200), feilong at catalyst.net.nz wrote: >> On 2019-08-24 03:20, Jeremy Stanley wrote: > [...] >> > If it's not fixed before Train is finalized, you're basically >> > releasing an unusable service since it will pretty much >> > immediately be depending on an insecure distribution nobody >> > wants to put into production, right? >> >> I don't think so. Devstack is just a dev environment. If you're >> going to use FA29 on production, then you can set the properties >> on image/flavor for your prod. > > I'm probably missing something, but when Fedora 29 reaches EOL there > will be no security supported Atomic, right? Are you saying Magnum > will work fine in production without Atomic, that it's only used > when testing via DevStack? Firstly, I don't know when Fedora Atomic 29 will EOL at this moment, given the Fedora CoreOS 30 is still in testing status. I'm saying at this moment, in production, user can use Fedora Atomic 29. 
From fungi at yuggoth.org Fri Aug 23 20:15:00 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 23 Aug 2019 20:15:00 +0000 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster In-Reply-To: <42554e591114b06d04a649da7e134c1c@catalyst.net.nz> References: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org> <61b312df248b69a2c2fdc25cbedbe229@catalyst.net.nz> <20190823194000.bqg6o63iz2hcjgla@yuggoth.org> <42554e591114b06d04a649da7e134c1c@catalyst.net.nz> Message-ID: <20190823201500.y2ug522hmipkp5yv@yuggoth.org> On 2019-08-24 08:00:41 +1200 (+1200), feilong at catalyst.net.nz wrote: [...] > Firstly, I don't know when Fedora Atomic 29 will EOL at this > moment, given the Fedora CoreOS 30 is still in testing status. > > I'm saying at this moment, in production, user can use Fedora > Atomic 29. I see. In the past, Fedora versions have reached EOL around 13 months from their initial release: https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Maintenance_Schedule Fedora 29 was released (slightly behind schedule) at the end of October 2018: https://fedoraproject.org/wiki/Releases/29/Schedule This means it's likely to be EOL at the end of November 2019, roughly 6 weeks after we plan to release Train: https://releases.openstack.org/train/schedule.html If users of the Train release of Magnum are expected to rely on a feature which will only be available in Fedora 29, then basically Magnum will only have a viability of 6 weeks after Train releases before those users are stuck running a release of Fedora which has no security support from Red Hat. The development cycle for Train is rapidly approaching the station, if you'll forgive the simile, with coordinated feature freeze only 2 weeks away now. It would be unfortunate for Magnum to release something nobody can safely use, and it looks to me like time is running out if we want to avoid that. I sincerely hope I'm missing some important detail in all of this, and am eager to find out what it is. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From feilong at catalyst.net.nz Fri Aug 23 20:34:02 2019 From: feilong at catalyst.net.nz (feilong at catalyst.net.nz) Date: Sat, 24 Aug 2019 08:34:02 +1200 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster In-Reply-To: <20190823201500.y2ug522hmipkp5yv@yuggoth.org> References: <20190823152059.ga7fmf2wiauq5dir@yuggoth.org> <61b312df248b69a2c2fdc25cbedbe229@catalyst.net.nz> <20190823194000.bqg6o63iz2hcjgla@yuggoth.org> <42554e591114b06d04a649da7e134c1c@catalyst.net.nz> <20190823201500.y2ug522hmipkp5yv@yuggoth.org> Message-ID: On 2019-08-24 08:15, Jeremy Stanley wrote: > On 2019-08-24 08:00:41 +1200 (+1200), feilong at catalyst.net.nz wrote: > [...] >> Firstly, I don't know when Fedora Atomic 29 will EOL at this >> moment, given the Fedora CoreOS 30 is still in testing status. >> >> I'm saying at this moment, in production, user can use Fedora >> Atomic 29. > > I see. 
In the past, Fedora versions have reached EOL around 13 > months from their initial release: > > https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Maintenance_Schedule > > Fedora 29 was released (slightly behind schedule) at the end of > October 2018: > > https://fedoraproject.org/wiki/Releases/29/Schedule > > This means it's likely to be EOL at the end of November 2019, > roughly 6 weeks after we plan to release Train: > > https://releases.openstack.org/train/schedule.html > > If users of the Train release of Magnum are expected to rely on a > feature which will only be available in Fedora 29, then basically > Magnum will only have a viability of 6 weeks after Train releases > before those users are stuck running a release of Fedora which has > no security support from Red Hat. The development cycle for Train is > rapidly approaching the station, if you'll forgive the simile, with > coordinated feature freeze only 2 weeks away now. It would be > unfortunate for Magnum to release something nobody can safely use, > and it looks to me like time is running out if we want to avoid > that. > > I sincerely hope I'm missing some important detail in all of this, > and am eager to find out what it is. I understand that and that's why we worked on the upgrade feature so that user can upgrade to a newer version operating system. Given the Fedora Atomic to Fedora CoreOS is a big jump, I don't know if the 13 months life is still the case. TBH, I assume it could be longer. Again, we're working hard on this but we don't have a perfect solution due to the limited resources. From mriedemos at gmail.com Fri Aug 23 20:35:05 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 23 Aug 2019 15:35:05 -0500 Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376 In-Reply-To: <077fa4e4-a1d8-ac69-6917-48070cc44032@ham.ie> References: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com> <077fa4e4-a1d8-ac69-6917-48070cc44032@ham.ie> Message-ID: <0f73cad6-88e7-6bfd-cf2d-72ab8e2f553b@gmail.com> On 8/23/2019 9:59 AM, Graham Hayes wrote: > From what I have heard, some of the problems with relying on OSC was > reliance on a central (OSC) team for implementation and reviews. Yes, that's true, it's basically Dean doing core reviews these days it seems. > > Would moving tools like cinder or glance to OSC plugins help solve > some of these concerns? This would allow the teams to control the CLI > destiny, and even allow teams to add features to team CLIs and OSC > simultaneously, while (or if) we transition to OSC. This came up at one of the Denver PTGs with Dean in the nova room. The upside is project teams could move things themselves faster, like they do with their own python-*client projects today. They are also the ones that know better how their API works. Nova did this with the osc-placement plugin but still ask Dean for UX guidance from time to time. The downside of project teams owning their parts of the CLI are losing a unified vision for how the commands should be structured which is really one of the main reasons for OSC in the first place, a unified, standardized and good user experience, rather than all of the different little things that each project team did in their CLI. So if we did let project teams own their parts of OSC (I know some already do - placement, ironic, designate, watcher, etc), we would have to trust them to follow whatever guidelines exist since Dean can't be herding all those cats. Honestly I don't really trust (real surprise here right folks?!) 
most developers (myself included), left alone, to do things the "OSC way" unless they already have some experience with working on OSC - or at least communicating with Dean for UX questions. Based on that, I'd prefer that at least "core" component CLIs remain within OSC and the purview of that core team. But the core team problem needs to be solved which would mean expanding the OSC core team. We definitely need more 'core' project cores (nova/cinder/glance/keystone/neutron) reviewing changes to OSC for their components and maybe that's the path to becoming a core on OSC, if the whole sub-domain core thing works there, I don't know (I'm not on the OSC core team). I would still want Dean overseeing things happening at the component level though, at least until there are more obvious OSC-wide cores, i.e. component core(s) +1/+2 a thing but Dean has final say. Anyway, that's my 2 cents. -- Thanks, Matt From johnsomor at gmail.com Fri Aug 23 22:11:26 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 23 Aug 2019 15:11:26 -0700 Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376 In-Reply-To: <0f73cad6-88e7-6bfd-cf2d-72ab8e2f553b@gmail.com> References: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com> <077fa4e4-a1d8-ac69-6917-48070cc44032@ham.ie> <0f73cad6-88e7-6bfd-cf2d-72ab8e2f553b@gmail.com> Message-ID: For Octavia, implementing an OSC plugin has worked out exceptionally well. I have also got good feedback from users that OSC is much better than the legacy clients in functionality and ease of use. That said, I have spent some quality time with Dean (thank you again) making sure we didn't stray from the OSC vision. I think like any other guideline or standard in OpenStack we just need to focus on documenting those guidelines and standards well (there are already pretty good documents for this, thank you again to the OSC team). Then it is up to us to pay attention to them and correct our mistakes when we make them. Personally I think all projects should be implemented as plugins. This makes it "obvious" to users when they need a service functionality they need to install the correct plugin. Currently we get "Why doesn't the client support Octavia?" questions when we were one of the early plugins. This also has the advantage of limiting the footprint of the client install for deployments that don't need all of the services. For example, I don't need object nor block storage. It would be nice to not use the disk and tab completion space for those. One of the nice things about the project team owning the plugin is you don't get rogue patches merging that don't align to the project team vision ('-' vs. "_" anyone?). Michael On Fri, Aug 23, 2019 at 1:38 PM Matt Riedemann wrote: > > On 8/23/2019 9:59 AM, Graham Hayes wrote: > > From what I have heard, some of the problems with relying on OSC was > > reliance on a central (OSC) team for implementation and reviews. > > Yes, that's true, it's basically Dean doing core reviews these days it > seems. > > > > > Would moving tools like cinder or glance to OSC plugins help solve > > some of these concerns? This would allow the teams to control the CLI > > destiny, and even allow teams to add features to team CLIs and OSC > > simultaneously, while (or if) we transition to OSC. > > This came up at one of the Denver PTGs with Dean in the nova room. The > upside is project teams could move things themselves faster, like they > do with their own python-*client projects today. 
They are also the ones > that know better how their API works. Nova did this with the > osc-placement plugin but still ask Dean for UX guidance from time to time. > > The downside of project teams owning their parts of the CLI are losing a > unified vision for how the commands should be structured which is really > one of the main reasons for OSC in the first place, a unified, > standardized and good user experience, rather than all of the different > little things that each project team did in their CLI. So if we did let > project teams own their parts of OSC (I know some already do - > placement, ironic, designate, watcher, etc), we would have to trust them > to follow whatever guidelines exist since Dean can't be herding all > those cats. > > Honestly I don't really trust (real surprise here right folks?!) most > developers (myself included), left alone, to do things the "OSC way" > unless they already have some experience with working on OSC - or at > least communicating with Dean for UX questions. Based on that, I'd > prefer that at least "core" component CLIs remain within OSC and the > purview of that core team. But the core team problem needs to be solved > which would mean expanding the OSC core team. We definitely need more > 'core' project cores (nova/cinder/glance/keystone/neutron) reviewing > changes to OSC for their components and maybe that's the path to > becoming a core on OSC, if the whole sub-domain core thing works there, > I don't know (I'm not on the OSC core team). I would still want Dean > overseeing things happening at the component level though, at least > until there are more obvious OSC-wide cores, i.e. component core(s) > +1/+2 a thing but Dean has final say. > > Anyway, that's my 2 cents. > > -- > > Thanks, > > Matt > From cboylan at sapwetik.org Fri Aug 23 22:37:49 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 23 Aug 2019 15:37:49 -0700 Subject: =?UTF-8?Q?Re:_[all]_[tc]_[osc]_[glance]_[train]_[ptls]_Legacy_client_CLI?= =?UTF-8?Q?_to_OSC_review_639376?= In-Reply-To: References: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com> <077fa4e4-a1d8-ac69-6917-48070cc44032@ham.ie> <0f73cad6-88e7-6bfd-cf2d-72ab8e2f553b@gmail.com> Message-ID: On Fri, Aug 23, 2019, at 3:12 PM, Michael Johnson wrote: > For Octavia, implementing an OSC plugin has worked out exceptionally > well. I have also got good feedback from users that OSC is much better > than the legacy clients in functionality and ease of use. > > That said, I have spent some quality time with Dean (thank you again) > making sure we didn't stray from the OSC vision. > > I think like any other guideline or standard in OpenStack we just need > to focus on documenting those guidelines and standards well (there are > already pretty good documents for this, thank you again to the OSC > team). Then it is up to us to pay attention to them and correct our > mistakes when we make them. > > Personally I think all projects should be implemented as plugins. This > makes it "obvious" to users when they need a service functionality > they need to install the correct plugin. Currently we get "Why doesn't > the client support Octavia?" questions when we were one of the early > plugins. This also has the advantage of limiting the footprint of the > client install for deployments that don't need all of the services. > For example, I don't need object nor block storage. It would be nice > to not use the disk and tab completion space for those. 
> One of the nice things about the project team owning the plugin is you
> don't get rogue patches merging that don't align to the project team
> vision ('-' vs. "_" anyone?).

Since you've brought up optimizations here, it is worth pointing out
that much of OSC's slowness is due to its plugin system. Basically, the
way Python entry points work is that pkg_resources scans the Python path
for all packages, sorts them all by name and version, and checks them
for entry point metadata to know what it can load as an entry point.
This is slow.
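A quick way to see what that scan costs on any given install is to time
loading a single entry point group (the group name below is the one OSC
plugins register under):

$ time python -c "import pkg_resources; \
    list(pkg_resources.iter_entry_points('openstack.cli.extension'))"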
Unfortunately that means that the more we get tied to the plugin system,
the harder it becomes to fix this problem. Personally I'd much rather
lose a few megabytes of disk to avoid major startup overhead.

That said, the user experience with OSC tends to be much better than
that of the python-*clients even when you factor in some slowness. In
fact I have to fight very strong urges to give up any time I have to
interact with an API that isn't yet supported by OSC.

> Michael

From openstack at fried.cc Fri Aug 23 22:47:09 2019
From: openstack at fried.cc (Eric Fried)
Date: Fri, 23 Aug 2019 17:47:09 -0500
Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376
In-Reply-To: <0f73cad6-88e7-6bfd-cf2d-72ab8e2f553b@gmail.com>
References: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com> <077fa4e4-a1d8-ac69-6917-48070cc44032@ham.ie> <0f73cad6-88e7-6bfd-cf2d-72ab8e2f553b@gmail.com>
Message-ID: <221ea659-9f9c-62a3-27b5-cd1aa4aa098c@fried.cc>

>> reliance on a central (OSC) team for implementation and reviews.

>> Would moving tools like cinder or glance to OSC plugins help solve

> upside is project teams could move things themselves faster, like they
> do with their own python-*client projects today. They are also the ones
> that know better how their API works.

> The downside of project teams owning their parts of the CLI are losing a
> unified vision

> we would have to trust them

> I would still want Dean
> overseeing things happening at the component level though, at least
> until there are more obvious OSC-wide cores, i.e. component core(s)
> +1/+2 a thing but Dean has final say.

This is becoming a familiar theme for projects/efforts with touchpoints
across OpenStack.

Docs deliverables have been moved into individual projects, and "the
docs team" is becoming a SIG [1]. But who is the Dean of docs, making
sure we retain a common structure, style, and voice across all of these
deliverables?

SDK is making forays into the "trust component SMEs" arena [2].

The API-SIG is undergoing a similar existential exercise [3], and it is
unclear where its deliverables will end up.

And this issue with OSC is not new.

All of these have their own nuances, but also quite a bit in common. Can
we brainstorm ways that might address more than one in a satisfactory
way? Here's one idea:

Deliverables live in projects under the governance of their component;
so for example nova-docs and nova-osc-plugin and nova-sdk-plugin would
be under nova governance, with core teams managed by nova. But patches
in those projects require an additional vote of a different flavor (a
"Rollcall-Vote"?) from the related SIG or core team in order to merge.

Thoughts?

efried

[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8571
[2] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8362
[3] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8673

From zigo at debian.org Fri Aug 23 22:47:48 2019
From: zigo at debian.org (Thomas Goirand)
Date: Sat, 24 Aug 2019 00:47:48 +0200
Subject: [tc] Release naming process
In-Reply-To: <87ftlu8pfx.fsf@meyer.lemoncheese.net>
References: <87ftlu8pfx.fsf@meyer.lemoncheese.net>
Message-ID: <09b82e2a-eb35-9bc7-ca5b-5a1ed776c193@debian.org>

On 8/21/19 6:15 PM, James E. Blair wrote:
> Hi,
>
> In the previous thread about the U release name, we discussed how the
> actual process for selecting the name has diverged from the written
> process. I think it's important that we follow our own processes, so we
> should reconcile those. We should change our actions or change the
> process.
>
> Based on the previous discussion, I've proposed 6 changes to
> openstack/governance for some initial feedback. We can, of course,
> tweak these options, eliminate them, or add new ones.
>
> Ultimately, we're aiming for the TC to formally vote on one or a small
> number of changes similar to these.
>
> I'd like for anyone interested in this to review these options and leave
> feedback on the changes themselves, or here on the mailing list. Either
> is fine, and all will be considered. Leaving "code-review" votes on the
> changes themselves will help us gauge relative support for the different
> options.
>
> In a week, I'll collect the feedback and propose a next step.
>
> https://review.opendev.org/675788 - Stop naming releases
> https://review.opendev.org/677745 - Name releases after major cities
> https://review.opendev.org/677746 - Name releases after the ICAO alphabet
> https://review.opendev.org/677747 - Ask the Foundation to name releases
> https://review.opendev.org/677748 - Name releases after random words
> https://review.opendev.org/677749 - Clarify the existing release naming process
>
> The last one is worth particular mention -- it keeps the current process
> with some minor clarifications.
>
> -Jim

Hi,

I'm quite happy to see that, so far, the "don't change anything in the
naming process" option seems popular.

Thomas

From openstack at nemebean.com Fri Aug 23 23:30:06 2019
From: openstack at nemebean.com (Ben Nemec)
Date: Fri, 23 Aug 2019 18:30:06 -0500
Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376
In-Reply-To: <221ea659-9f9c-62a3-27b5-cd1aa4aa098c@fried.cc>
References: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com> <077fa4e4-a1d8-ac69-6917-48070cc44032@ham.ie> <0f73cad6-88e7-6bfd-cf2d-72ab8e2f553b@gmail.com> <221ea659-9f9c-62a3-27b5-cd1aa4aa098c@fried.cc>
Message-ID: <17269234-1be2-f276-69be-177a1e03c182@nemebean.com>

On 8/23/19 5:47 PM, Eric Fried wrote:
>>> reliance on a central (OSC) team for implementation and reviews.
>
>>> Would moving tools like cinder or glance to OSC plugins help solve
>
>> upside is project teams could move things themselves faster, like they
>> do with their own python-*client projects today. They are also the ones
>> that know better how their API works.
>
>> The downside of project teams owning their parts of the CLI are losing a
>> unified vision
>
>> we would have to trust them
>
>> I would still want Dean
>> overseeing things happening at the component level though, at least
>> until there are more obvious OSC-wide cores, i.e.
component core(s) >> +1/+2 a thing but Dean has final say. > > This is becoming a familiar theme for projects/efforts with touchpoints > across OpenStack. > > Docs deliverables have been moved into individual projects, and "the > docs team" is becoming a SIG [1]. But who is the Dean of docs, making > sure we retain a common structure, style, and voice across all of these > deliverables? > > SDK is making forays into the "trust component SMEs" arena [2]. > > The API-SIG is undergoing a similar existential exercise [3], and it is > unclear where its deliverables will end up. > > And this issue with OSC is not new. > > All of these have their own nuances, but also quite a bit in common. Can > we brainstorm ways that might address more than one in a satisfactory > way? Here's one idea: > > Deliverables live in projects under the governance of their component; > so for example nova-docs and nova-osc-plugin and nova-sdk-plugin would > be under nova governance, with core teams managed by nova. But patches > in those projects require an additional vote of a different flavor (a > "Rollcall-Vote"?) from the related SIG or core team in order to merge. I don't dislike the idea, but I think the decentralization of these things is at least partially a recognition of the fact that the teams have shrunk to the point where they can't be responsible for reviewing every doc/sdk/cli change across all of openstack (if they ever could in the first place). It may be that the best we can do is have the remaining experts in those areas write up guidelines[0] for the project contributors to follow and then pull the experts in to review specific edge cases and such. The bad thing is that mistakes in stuff like the cli and sdk can be hard to fix once they've been released. I can't tell you how many bugs I ran into because scripts were using the old glance image visibility cli opt that got changed way back when. But that might just be something we have to live with unless I'm wrong about the review capacity of the teams in question. 0: Something like https://docs.openstack.org/python-openstackclient/stein/contributor/humaninterfaceguide.html but maybe even more than that since I think I've seen Dean mention best practices that aren't covered in that doc. > > Thoughts? > > efried > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8571 > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8362 > [3] > http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8673 > From colleen at gazlene.net Sat Aug 24 00:09:29 2019 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 23 Aug 2019 17:09:29 -0700 Subject: [keystone] Keystone Team Update - Week of 19 August 2019 Message-ID: <5da0af52-32b7-4602-b402-bd9d6e118f82@www.fastmail.com> # Keystone Team Update - Week of 19 August 2019 ## News ### Development focus Feature freeze as well as a number of other deadlines are fast approaching. 
The development focus for the next three weeks should be: * reviews for changes implementing specs: - app cred access rules[1] - resource options for all resources[2] - immutable resources[3] - renewable group membership[4] * finish implementing and reviewing remaining system-scope[5]/default roles[6] migrations (deadline is feature freeze, Sept 9-13) * reviews for keystonemiddleware and keystoneauth[7] (final release Sept 2-6) * reviews for python-keystoneclient[8] (final release Sept 9-13) * helping the requirements team with any requirements issues[9] before requirements freeze (Sept 9-13) * completing community goals (follow the PDF generation news[10]) Additionally, we're still facing issues with instability of our unit test jobs. Improving the efficiency of our unit tests in order to avoid frequent timeouts would go a long way to helping avoid the feature freeze gate crunch. [1] https://review.opendev.org/#/q/is:open+topic:bp/whitelist-extension-for-app-creds [2] https://review.opendev.org/678322 [3] https://review.opendev.org/#/q/is:open+topic:immutable-resources [4] https://review.opendev.org/677469 [5] https://bugs.launchpad.net/keystone/+bugs?field.tag=system-scope [6] https://bugs.launchpad.net/keystone/+bugs?field.tag=default-roles [7] https://review.opendev.org/#/q/is:open+(project:openstack/keystoneauth+OR+project:openstack/keystonemiddleware) [8] https://review.opendev.org/#/q/is:open+project:openstack/python-keystoneclient [9] https://bugs.launchpad.net/keystone/+bug/1839393 [10] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008506.html ### keystoneauth session retries The heat team reported an issue in keystoneauth[11] where the way in which heat currently uses keystoneauth sessions is unable to take advantage of the request retry logic that is currently exposed only in the adapter. The proposal to fix the issue[12] exposes one of the retry options (connect_retries) in the session object. We've been in discussions on the bug report, on the patch, and in IRC[13][14] about whether this is the best approach and whether it would be better to change the way heat uses keystoneauth (would require a massive rewrite) or to expose the request retry options on the auth plugin in order to localize it to keystone requests (but that's an awkward home for it too). We've also been in disagreement about whether this change constitutes a feature or a bugfix with regards to backportability. [11] https://bugs.launchpad.net/keystoneauth/+bug/1840235 [12] https://review.opendev.org/676648 [13] http://eavesdrop.openstack.org/irclogs/%23openstack-sdks/%23openstack-sdks.2019-08-16.log.html#t2019-08-16T15:04:39 [14] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2019-08-23.log.html#t2019-08-23T15:55:10 ## Action Items * Vishakha to look into oauthlib requirements issue[15] * Kristi to propose spec to backlog about merging federation and identity backends [16] https://bugs.launchpad.net/keystone/+bug/1839393 ## Office Hours When there are topics to cover, the keystone team holds office hours on Tuesdays at 17:00 UTC. There will be no office hours this week or next week. 
Add topics you would like to see covered during office hours to the etherpad: https://etherpad.openstack.org/p/keystone-office-hours-topics ## Open Specs Train specs: https://bit.ly/2uZ2tRl Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 13 changes this week, including changes to implement system scope for the endpoint groups API. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 68 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ### Priority Reviews * Train Roadmap Stories - System scope/default roles (https://trello.com/c/ERo50T7r , https://trello.com/c/RlYyb4DU) + https://review.opendev.org/#/q/status:open+topic:implement-default-roles+label:verified%253D%252B1 + https://review.opendev.org/#/q/status:open+topic:trust-policies + https://review.opendev.org/#/q/topic:bug/1805409 - Federated attributes for users (https://trello.com/c/dEmSumDQ) + https://review.opendev.org/#/q/status:open+topic:bp/support-federated-attr - Application credential access rules (https://trello.com/c/dJsWMI4W) + https://review.opendev.org/#/q/status:open+topic:bp/whitelist-extension-for-app-creds - Immutable resources (https://trello.com/c/clIb4qMq) + https://review.opendev.org/#/q/topic:immutable-resources - Resource options for all (https://trello.com/c/8ML6kvig) + https://review.opendev.org/678322 * Needs Discussion - Allow initializing session with connection retries https://review.opendev.org/676648 * Oldest - OpenID Connect improved support (spec) https://review.opendev.org/373983 * Closes bugs - Cleanup session on delete https://review.opendev.org/674139 ## Bugs This week we opened 1 new bugs and closed 2. Bugs opened (1) Bug #1840647 (keystone:Undecided) opened by Nikita Kalyanov https://bugs.launchpad.net/keystone/+bug/1840647 ) Bugs fixed (2) Bug #1840291 (keystone:Medium) fixed by Rabi Mishra https://bugs.launchpad.net/keystone/+bug/1840291 Bug #1818734 (keystone:Low) fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1818734 ## Milestone Outlook https://releases.openstack.org/train/schedule.html Final release for non-client libraries (keystoneauth, keystonemiddleware) is in two weeks. Feature freeze and final client release is in three weeks. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From openstack at fried.cc Sat Aug 24 00:18:10 2019 From: openstack at fried.cc (Eric Fried) Date: Fri, 23 Aug 2019 19:18:10 -0500 Subject: [nova] critical bug around reload/upgrades In-Reply-To: <99e420bd-3e99-4bcb-5bb4-eb32b8c3f1d4@gmail.com> References: <99e420bd-3e99-4bcb-5bb4-eb32b8c3f1d4@gmail.com> Message-ID: <794092c4-40cf-88c6-1227-276299f3ccf7@fried.cc> On 4/3/19 1:20 PM, Matt Riedemann wrote: > On 3/28/2019 7:42 PM, Mohammed Naser wrote: >> Looks like some progress has been made but we're pretty confident that >> this >> is more and more an Oslo.service bug: >> >> Matt & Dan have both left ideas around this with possible solutions on >> how to >> make a change like this back portable.. 
>> >> https://review.openstack.org/#/c/641907/ > > Another update on this, but I was trying to recreate the original > reported issue in the nova bug: > > https://bugs.launchpad.net/nova/+bug/1715374 > > And I didn't even get to the point of the libvirt driver waiting for the > network-vif-plugged event because privsep blows up much earlier during > server create after SIGHUP'ing the service. Details start at comment 34 > in that bug, but the tl;dr is the privsep-helper child processes are > gone after the SIGHUP so anything that relies on privsep (which is > anything using root in the libvirt driver and os-vif utils code now I > think) won't work until you restart the service. > > I don't yet know if this is a regression in Stein but I'm going to > create a stable/rocky devstack and try to find out. With that oslo.service patch [1] in place, I recreated Matt's result as described above. Then I hacked on oslo.privsep a bit [2] and was able to resolve the issue (create instances smoothly after SIGHUPping n-cpu.service). That fix is going to need UT, but also more thread- and socket- and security-savvy eyeballs to make sure it has legs. But hopefully we can finally put this one to bed. efried [1] https://review.opendev.org/#/c/641907/ [2] https://review.opendev.org/#/c/678323/ From openstack at fried.cc Sat Aug 24 00:31:35 2019 From: openstack at fried.cc (Eric Fried) Date: Fri, 23 Aug 2019 19:31:35 -0500 Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376 In-Reply-To: <17269234-1be2-f276-69be-177a1e03c182@nemebean.com> References: <5ace0174-500e-de68-5ea5-9daeedef7516@gmail.com> <077fa4e4-a1d8-ac69-6917-48070cc44032@ham.ie> <0f73cad6-88e7-6bfd-cf2d-72ab8e2f553b@gmail.com> <221ea659-9f9c-62a3-27b5-cd1aa4aa098c@fried.cc> <17269234-1be2-f276-69be-177a1e03c182@nemebean.com> Message-ID: > It may be that the best we can do is have the > remaining experts in those areas write up guidelines[0] for the project > contributors to follow and then pull the experts in to review specific > edge cases and such. If that's the best we can do, so be it. But recent events [1] imply that written guidelines aren't always read/followed /o\ > I don't dislike the idea, but I think the decentralization of these > things is at least partially a recognition of the fact that the teams > have shrunk to the point where they can't be responsible for reviewing > every doc/sdk/cli change across all of openstack (if they ever could in > the first place). Right right, the point is to allow them to *just* review the patches from the point of view of conforming to the overall guidelines without feeling responsible for deep technical dives. In other words, reduce the per-patch burden to allow more patches through the pipe. To give a concrete example, if [2] had been in nova-osc-plugin, an OSC core could have vetted it for option/argument naming without sweating the substance of the API interaction, unit tests, etc. So like, a quarter of the patch rather than all of it. efried [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008643.html [2] https://review.opendev.org/#/c/675117/ (pretend this was osc rather than novaclient) [3] https://review.opendev.org/#/c/675117/3/novaclient/v2/shell.py From kennelson11 at gmail.com Sat Aug 24 03:45:38 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 23 Aug 2019 20:45:38 -0700 Subject: [all] PTG Attendance Message-ID: Hello Everyone! This is the list of groups that are plannning on attending the PTG. 
Thank you PTLs/Chairs/Leads that responded on time :) I hesitate to say it's the 'final' list since I had a few groups respond with a 'Maybe', but it's probably pretty close. Please also note that the tentative activities at the PTG vary by group. Some groups are planning on only doing onboarding for example. Without further ado, here is the list! Airship Auto-Scaling SIG Barbican Blazar CInder Cyborg Edge Computing Group Fenix First Contact SIG Gitea Glance Heat Horizon I18n Ironic K8s SIG Karbor Kata Containers Keystone Loci Manila Meta SIG Monasca Neutron Nova Octavia OpenStack Charms OpenStack Infra / OpenDev OpenStack TC Openstack-helm OpenStack Operators (Ops Docs SIG and Meetup group) Oslo Public Cloud SIG Quality Assurance Release Management Scientific SIG Self-healing SIG Storlets StoryBoard Swift Tacker StarlingX If your team is missing from the list, please let me know ASAP as we are starting to put together a draft schedule. -Kendall Nelson(diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Aug 24 09:47:39 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 24 Aug 2019 18:47:39 +0900 Subject: [all][api-sig] Update on SIG status and seeking guidance from the community In-Reply-To: References: Message-ID: <16cc30634ec.125fb7873294246.2871343054827957126@ghanshyammann.com> ---- On Fri, 23 Aug 2019 22:55:06 +0900 Eric Fried wrote ---- > > > feel quite good about the current state of the guidelines i'm sure > > there is room for improvement, > > It's worth noting that there's quite a lot of unmerged guideline content > under the api-sig's purview [1]. We should either close or rehome these > patches before we consider disbanding the SIG. It's obviously hard to > merge stuff with a small (and now shrinking) core list [2], but at least > some of this content is valuable and should not be abandoned. I, for > one, would like to be able to refer to published documentation rather > than in-flight (or abandoned) change sets when wrestling with version > discovery logic in ksa/sdk. In addition, there are many TODO also there in current guidelines [1]. Few of them I remember were left as TODO when few guidelines were moved from nova to api-wg. [1] https://github.com/openstack/api-sig/search?q=TODO&unscoped_q=TODO -gmann > > efried > > [1] https://review.opendev.org/#/q/project:openstack/api-sig+status:open > [2] https://review.opendev.org/#/admin/groups/468,members > > From miguel at mlavalle.com Sat Aug 24 21:50:07 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sat, 24 Aug 2019 16:50:07 -0500 Subject: [all] PTG Attendance In-Reply-To: References: Message-ID: Hi, Neutron is planning to do an onboarding session. Do we need to do any reservation for that? Regards On Fri, Aug 23, 2019 at 10:46 PM Kendall Nelson wrote: > Hello Everyone! > > This is the list of groups that are plannning on attending the PTG. Thank > you PTLs/Chairs/Leads that responded on time :) > > I hesitate to say it's the 'final' list since I had a few groups respond > with a 'Maybe', but it's probably pretty close. > > Please also note that the tentative activities at the PTG vary by group. > Some groups are planning on only doing onboarding for example. > > Without further ado, here is the list! 
> > Airship > Auto-Scaling SIG > Barbican > Blazar > CInder > Cyborg > Edge Computing Group > Fenix > First Contact SIG > Gitea > Glance > Heat > Horizon > I18n > Ironic > K8s SIG > Karbor > Kata Containers > Keystone > Loci > Manila > Meta SIG > Monasca > Neutron > Nova > Octavia > OpenStack Charms > OpenStack Infra / OpenDev > OpenStack TC > Openstack-helm > OpenStack Operators (Ops Docs SIG and Meetup group) > Oslo > Public Cloud SIG > Quality Assurance > Release Management > Scientific SIG > Self-healing SIG > Storlets > StoryBoard > Swift > Tacker > StarlingX > > If your team is missing from the list, please let me know ASAP as we are > starting to put together a draft schedule. > > -Kendall Nelson(diablo_rojo) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Aug 24 21:55:21 2019 From: zigo at debian.org (Thomas Goirand) Date: Sat, 24 Aug 2019 23:55:21 +0200 Subject: [horizon] Horizon's openstack_auth is completely broken with Django 2.2, breaking all the dashboad stack in Debian Sid/Bullseye Message-ID: Hi, Django 2.2 was uploaded to Debian Sid/Bullseye a few days after Buster was released, removing at the same time any support for Python 2. This broke lots of packages, but I managed to upload more than 40 times to fix the situation. Though Horizon is still completely broken. There's a bunch of issues related to Django 2.2 for Horizon. I managed to patch Horizon for a few of them, but there's one bigger issue which I can't solve myself, because I'm not skilled enough in Django, and don't know enough about Horizon's internal. Let me explain. In Django 1.11, the login() function in openstack_auth/views.py was deprecated in the favor of a LoginView class. In Django 2.2, the login() function is now not supported anymore. This means that, if openstack_auth/views.py doesn't get rewritten completely, Horizon will continue to be completely broken in Debian Sid/Bullseye. Unfortunately, the Horizon team is understaffed, and the PTL told me that they don't plan anything before Train+1. As a consequence, Horizon is completely broken in Debian Sid/Bullseye, and will stay as it is if nobody steps up for the work. After a month and a half with this situation not being solved by anyone, I'm hereby calling for help. I could attempt to work on this, though I need help and pointers at some example implementation of this. It'd be nicer if someone more skilled than me in Django was working on this anyways. Cheers, Thomas Goirand (zigo) From kennelson11 at gmail.com Sun Aug 25 07:01:15 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Sun, 25 Aug 2019 14:01:15 +0700 Subject: [all] PTG Attendance In-Reply-To: References: Message-ID: Hey Miguel, When you requested time/space at the PTG, project onboarding was included in that. If Neutron didn't say they wanted onboarding as well I can update my spreadsheet, just let me know! -Kendall (diablo_rojo) On Sun, 25 Aug 2019, 4:50 am Miguel Lavalle, wrote: > Hi, > > Neutron is planning to do an onboarding session. Do we need to do any > reservation for that? > > Regards > > On Fri, Aug 23, 2019 at 10:46 PM Kendall Nelson > wrote: > >> Hello Everyone! >> >> This is the list of groups that are plannning on attending the PTG. Thank >> you PTLs/Chairs/Leads that responded on time :) >> >> I hesitate to say it's the 'final' list since I had a few groups respond >> with a 'Maybe', but it's probably pretty close. >> >> Please also note that the tentative activities at the PTG vary by group. 
>> Some groups are planning on only doing onboarding for example.
>>
>> Without further ado, here is the list!
>>
>> Airship
>> Auto-Scaling SIG
>> Barbican
>> Blazar
>> CInder
>> Cyborg
>> Edge Computing Group
>> Fenix
>> First Contact SIG
>> Gitea
>> Glance
>> Heat
>> Horizon
>> I18n
>> Ironic
>> K8s SIG
>> Karbor
>> Kata Containers
>> Keystone
>> Loci
>> Manila
>> Meta SIG
>> Monasca
>> Neutron
>> Nova
>> Octavia
>> OpenStack Charms
>> OpenStack Infra / OpenDev
>> OpenStack TC
>> Openstack-helm
>> OpenStack Operators (Ops Docs SIG and Meetup group)
>> Oslo
>> Public Cloud SIG
>> Quality Assurance
>> Release Management
>> Scientific SIG
>> Self-healing SIG
>> Storlets
>> StoryBoard
>> Swift
>> Tacker
>> StarlingX
>>
>> If your team is missing from the list, please let me know ASAP as we are
>> starting to put together a draft schedule.
>>
>> -Kendall Nelson(diablo_rojo)
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gmann at ghanshyammann.com  Sun Aug 25 07:48:42 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sun, 25 Aug 2019 16:48:42 +0900
Subject: [goals][IPv6-Only Deployments and Testing] Week R-8 Update
Message-ID: <16cc7bfaa0b.1009d80b3298918.444089880098404056@ghanshyammann.com>

Hello Everyone,

Below is the progress on the IPv6 goal during week R-8. I have finished with
the projects having zuulv3 jobs. The projects having zuulv2 jobs will be my
next task (I have not explored the difficulties/complexity yet).

Summary:
* Number of projects with IPv6 jobs proposed: 28
* Number of passing projects: 17
** Number of merged projects out of the passing ones: 11
* Number of failing projects: 11

Storyboard:
=========
- https://storyboard.openstack.org/#!/story/2005477

Current status:
============
1. The Cinder fix is merged [1], thanks Gorka. We need to merge the devstack
patch now.
2. Configured the tempest test regex to run only smoke tests, which can be
extended to include future IPv6 tests as well. Running all tests is not
actually required in an IPv6 job, but if any project wants to run all of
them, that is also fine. Example: [1]
3. Murano, zun, solum, cloudkitty are merged.
4. Murano's default DNS is also on the IPv6 env.
5. For Monasca, kafka was not working on IPv6, but witek is upgrading the
Kafka version in Monasca. I will rebase the IPv6 job patch on top of that
and check the result.
6. New project IPv6 job patches and status this week:
- Zun:
link: https://review.opendev.org/#/c/677496/
status: the job passes with a few devstack plugin fixes, and the patch is
merged.
- Searchlight:
links: https://review.opendev.org/#/c/678391/
status: there are no integration tests in searchlight as of now, so after
talking to Trinh (PTL), I am just adding a no-test job to verify whether
searchlight can be deployed on IPv6. Later, when we have tests, those can
be added to this job.

Missing IPv6 support found:
=====================
1. https://review.opendev.org/#/c/673397/
2. https://review.opendev.org/#/c/673449/
3. https://review.opendev.org/#/c/677524/

How you can help:
==============
- Each project needs to look for and review its IPv6 job patch.
- Verify that it works fine on IPv6 and that no IPv4 is used in conf etc.
(see the sketch below).
- Any other specific scenario needs to be added as part of the project's
IPv6 verification.
- Help with debugging and fixing the bugs where an IPv6 job is failing.
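For anyone reviewing, the endpoint verification the base job performs can
be approximated in a few lines of Python. This is only a rough sketch for
reviewers (the real check lives in the job's shell scripts, and the
endpoint URL here is a placeholder):

    import socket
    import urllib.parse

    def endpoint_is_ipv6(endpoint_url):
        # Resolve the endpoint host and confirm every address is IPv6.
        parsed = urllib.parse.urlparse(endpoint_url)
        port = parsed.port or (443 if parsed.scheme == 'https' else 80)
        infos = socket.getaddrinfo(parsed.hostname, port,
                                   proto=socket.IPPROTO_TCP)
        return all(info[0] == socket.AF_INET6 for info in infos)

    # e.g. the identity endpoint from the service catalog:
    print(endpoint_is_ipv6('http://[2001:db8::10]/identity'))

Anything that resolves to an IPv4 address here is worth a comment on the
patch.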
Everything related to this goal can be found under this topic:
Topic: https://review.opendev.org/#/q/topic:ipv6-only-deployment-and-testing+(status:open+OR+status:merged)

How to define and run a new IPv6 job on the project side:
=======================================
- I prepared a wiki page describing this:
https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing

Review suggestions:
==============
- The main goal of these jobs is to check whether your service is able to
listen on IPv6 and can communicate with any other services (OpenStack
services, the DB, rabbitmq etc.) over IPv6. So check your proposed job from
that point of view. If anything is missing, comment on the patch.
- One example: I missed configuring the novnc address for IPv6 -
https://review.opendev.org/#/c/672493/
- The base script that is part of 'devstack-tempest-ipv6' will do basic
checks for endpoints on IPv6 and set some devstack variables. But if your
project needs more specific verification, it can be added to the
project-side job as post-run playbooks, as described in the wiki page [3].

[1] https://review.opendev.org/#/c/677524/
[1] https://zuul.opendev.org/t/openstack/build/5b7b823d6faa4f5393b4c46d36e15d80/log/controller/logs/screen-n-cpu.txt.gz#2733
[2] https://review.opendev.org/#/c/676857/
[3] https://review.opendev.org/#/c/676900/
[4] https://wiki.openstack.org/wiki/Goal-IPv6-only-deployments-and-testing

-gmann

From ssbarnea at redhat.com  Sun Aug 25 08:44:02 2019
From: ssbarnea at redhat.com (Sorin Sbarnea)
Date: Sun, 25 Aug 2019 09:44:02 +0100
Subject: [ansible] 2.9 feature freeze is on 2019-08-29
Message-ID: <3A65E4DF-D2F9-46F0-A4FB-1627B6ACFD54@redhat.com>

I just wanted to note for those using Ansible that the feature freeze is on
2019-08-29, which means that if you have any changes you want to make, that
is the last date when you will be able to merge them. After this date there
will only be bugfixes, and the Ansible team policy is very strict about
what counts as a feature. For example, support for passing some arguments
counts as a new feature, not a bug fix.

https://docs.ansible.com/ansible/latest/roadmap/ROADMAP_2_9.html

I wanted to highlight this because recently I wanted to add support for
etc_hosts on docker_image (build) (it was only on docker_container / run)
and the change got merged into 2.8 but backporting the fix was denied as it
counted as a new feature.

If you are aware of other similar changes that affect openstack, let's try
to make them happen now, or we will be delayed 4 months for sure.

Thanks
Sorin Sbarnea

From lychzhz at gmail.com  Sun Aug 25 10:28:01 2019
From: lychzhz at gmail.com (Douglas Zhang)
Date: Sun, 25 Aug 2019 18:28:01 +0800
Subject: [all][tc][horizon] A web tool which helps administrators in managing openstack clusters
Message-ID:

Hello everyone,

Thanks for your attention to and advice on this project. We have read each
reply thoroughly, and the good news is that we are able to give answers to
some of the questions raised.

As Lingxian Kong said:

I have a few questions/suggestions:

1. It'd be great and gain more attractions if you could provide a demo
about how "openstack-admin" looks like

2. What OpenStack services has "openstack-admin" already integrated? Is it
easy to integrate with others?

1. We have deployed openstack-admin on a mini openstack cluster as a demo,
and any access is welcome.

- Username: openstack-admin
- Password: @Dem0

2. Since openstack-admin gets the information it needs by querying the sql
database, it's fairly easy to integrate with all openstack services.

- As the demo shows, openstack-admin has integrated *nova* (almost all GET
and part of POST), *cinder* (GET and snapshot-creation), *neutron* (subnets
and ports), *keystone* (projects) and *glance* (images); that's all we need
in our own working environment.
- As the demo shows, openstack-admin has integrated *nova*(almost all GET and part of POST), *cinder*(GET and snapshot-creation), *neutron*(subnets and ports), *keystone*(projects) and *glance*(images), that’s all we need in our own working environment. - If we need to integrate more services to openstack-admin(e.g. adding a create instance button or integrating with swift), it would not be a complex task, either. As Adrian Turjak said: The first major issue is that you connect to the databases of the services directly. That's a major issue, both for long term compatibility, and security. The APIs should always be the main point of contact and the ONLY contract that the services have to maintain. By connecting to the database directly you are now relying on a data structure that can, and likely will change, and any security and sanity checking on filters and queries is now handled on your layer rather than the application itself. Not only that, but your dashboard also now needs passwords for all the databases, and by the sounds of it all the message queues. And as Mohammed Naser said: While I agree with you that querying database is much faster, this introduces two issues that I imagine for users: - Dashboards generally having direct access via SQL is a scary thing from an operators perspective - also, it will make maintaining the project quite hard because I don't think any projects expose a *stable* database API. Well, we’re not surprised that our querying approach would be challenged since it does sound unsafe. However, we have made some efforts to solve problems which have been posed: - We use a ORM library to create all queries, which ensures that only those instructions we have specified in the backend(i.e. select, order by, where and other harmless querying instructions) could be executed, protecting our databases from dangerous attacks like SQL injection. All sanity or security checkings would be automatically completed by those library functions. - All instructions that *may* change the database(e.g. start, shutoff, migrate) would be executed by calling standard openstack API, only pure GET instructions were implemented by querying databases directly. We have wrapped each API call with a go func() { ... }() to avoid the extremely long calling period. The results of API calls would be sent back to frontend by websocket asynchronously. - Passwords of databases and message queues(and many other kinds of information) are stored in a config file which would be loaded by openstack-admin. Simply by modifying this file, we could be consistent with all changes about sql databases and MQs. I hope my explanation is clear enough, and we’re willing to solve other possible issues existing. Cheers, Douglas Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From morgan.fainberg at gmail.com Sun Aug 25 15:36:38 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Sun, 25 Aug 2019 08:36:38 -0700 Subject: [all][tc][horizon] A web tool which helps administrators in managing openstack clusters In-Reply-To: References: Message-ID: <1B71B948-AF95-41FD-98EC-65B7DC6D209C@gmail.com> Keep in mind Keystone's database has what is considered privileged information in it. Notably user passwords (bcrypt or scrypt hashed) and user credentials (encrypted) Even with hashing, it is never recommended to expose these values externally. An example I give is: do you consider password hashes in your shadow file secure enough to publish publically? 
(The answer should be an emphatic "no"). Keystone also contains in many deployments PII (personally identifying information), while this is not explicitly part of Keystone nor recommended to store in Keystone, there could be other legal ramifications to expose of this data enmasse especially if the data would have been protected via the API. I highly recommend, with a security hat on, not connecting and interacting with Keystone's database directly for this reason. It is possible, even with an ORM, someone will decide to develop a mechanism to pull user related information or there may be exposure that can leak arbitrary data from within the DB. I will also echo concerns that you will have a hard time keeping up across versions with the various database schema changes. For example between stein and train keystone will have added resource options that are intended to communicate immutability for some resources. These are loaded behind the scenes with a join and translated to something usable via code. The referencing keys are minimalist and may be a simple ID or a 4-letter code instead of the full option name. I am sure Keystone is not the only Service that has conventions for data in the Database that do not translate to something useful without being run through the api code. —Morgan > On Aug 25, 2019, at 03:28, Douglas Zhang wrote: > > Hello everyone, > > Thanks for your attention and advice for this project. We have read each reply thoroughly and the good news is that we’re able to give answers to some questions raised by them. > > As Lingxian Kong said: > > I have a few questions/suggestions: > > It'd be great and gain more attractions if you could provide a demo about how "openstack-admin" looks like > > What OpenStack services has "openstack-admin" already integrated? Is it easy to integrate with others? > > We have deployed openstack-admin on a mini openstack cluster as a demo, any access would be welcomed. > > Username: openstack-admin > > Password: @Dem0 > > Since openstack-admin gets information it needs by querying the sql database, it’s fairly easy to integrate with all openstack services. > As the demo shows, openstack-admin has integrated nova(almost all GET and part of POST), cinder(GET and snapshot-creation), neutron(subnets and ports), keystone(projects) and glance(images), that’s all we need in our own working environment. > > If we need to integrate more services to openstack-admin(e.g. adding a create instance button or integrating with swift), it would not be a complex task, either. > > As Adrian Turjak said: > The first major issue is that you connect to the databases of the services directly. That's a major issue, both for long term compatibility, and security. The APIs should always be the main point of contact and the ONLY contract that the services have to maintain. By connecting to the database directly you are now relying on a data structure that can, and likely will change, and any security and sanity checking on filters and queries is now handled on your layer rather than the application itself. Not only that, but your dashboard also now needs passwords for all the databases, and by the sounds of it all the message queues. 
> And as Mohammed Naser said: > > While I agree with you that querying database is much faster, this introduces two issues that I imagine for users: > > Dashboards generally having direct access via SQL is a scary thing from an operators perspective > > also, it will make maintaining the project quite hard because I don't think any projects expose a stable database API. > > Well, we’re not surprised that our querying approach would be challenged since it does sound unsafe. However, we have made some efforts to solve problems which have been posed: > > We use a ORM library to create all queries, which ensures that only those instructions we have specified in the backend(i.e. select, order by, where and other harmless querying instructions) could be executed, protecting our databases from dangerous attacks like SQL injection. All sanity or security checkings would be automatically completed by those library functions. > > All instructions that may change the database(e.g. start, shutoff, migrate) would be executed by calling standard openstack API, only pure GET instructions were implemented by querying databases directly. We have wrapped each API call with a go func() { ... }() to avoid the extremely long calling period. The results of API calls would be sent back to frontend by websocket asynchronously. > > Passwords of databases and message queues(and many other kinds of information) are stored in a config file which would be loaded by openstack-admin. Simply by modifying this file, we could be consistent with all changes about sql databases and MQs. > > I hope my explanation is clear enough, and we’re willing to solve other possible issues existing. > > Cheers, > > Douglas Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Sun Aug 25 15:56:38 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 25 Aug 2019 10:56:38 -0500 Subject: [all] PTG Attendance In-Reply-To: References: Message-ID: Hey Kendall, I don't remember if I included the on-boarding session in my PTG request. Please update the spreadsheet to include it. Thanks! On Sun, Aug 25, 2019 at 2:01 AM Kendall Nelson wrote: > Hey Miguel, > > When you requested time/space at the PTG, project onboarding was included > in that. If Neutron didn't say they wanted onboarding as well I can update > my spreadsheet, just let me know! > > -Kendall (diablo_rojo) > > On Sun, 25 Aug 2019, 4:50 am Miguel Lavalle, wrote: > >> Hi, >> >> Neutron is planning to do an onboarding session. Do we need to do any >> reservation for that? >> >> Regards >> >> On Fri, Aug 23, 2019 at 10:46 PM Kendall Nelson >> wrote: >> >>> Hello Everyone! >>> >>> This is the list of groups that are plannning on attending the PTG. >>> Thank you PTLs/Chairs/Leads that responded on time :) >>> >>> I hesitate to say it's the 'final' list since I had a few groups respond >>> with a 'Maybe', but it's probably pretty close. >>> >>> Please also note that the tentative activities at the PTG vary by group. >>> Some groups are planning on only doing onboarding for example. >>> >>> Without further ado, here is the list! 
>>> >>> Airship >>> Auto-Scaling SIG >>> Barbican >>> Blazar >>> CInder >>> Cyborg >>> Edge Computing Group >>> Fenix >>> First Contact SIG >>> Gitea >>> Glance >>> Heat >>> Horizon >>> I18n >>> Ironic >>> K8s SIG >>> Karbor >>> Kata Containers >>> Keystone >>> Loci >>> Manila >>> Meta SIG >>> Monasca >>> Neutron >>> Nova >>> Octavia >>> OpenStack Charms >>> OpenStack Infra / OpenDev >>> OpenStack TC >>> Openstack-helm >>> OpenStack Operators (Ops Docs SIG and Meetup group) >>> Oslo >>> Public Cloud SIG >>> Quality Assurance >>> Release Management >>> Scientific SIG >>> Self-healing SIG >>> Storlets >>> StoryBoard >>> Swift >>> Tacker >>> StarlingX >>> >>> If your team is missing from the list, please let me know ASAP as we are >>> starting to put together a draft schedule. >>> >>> -Kendall Nelson(diablo_rojo) >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Sun Aug 25 18:46:30 2019 From: mihalis68 at gmail.com (Chris Morgan) Date: Sun, 25 Aug 2019 14:46:30 -0400 Subject: [ops] technical agenda for upcoming ops meetup, NYC Message-ID: We are still looking for topics and topic feebdack for the ops meetup. The planning sheet is here https://etherpad.openstack.org/p/NYC19-OPS-MEETUP Please if you are coming +1 topics you like or add more. More detail on this such as links and descriptions of the sessions we've already planned is tweeted on the ops meetups notification twitter account : https://twitter.com/osopsmeetup Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Mon Aug 26 02:46:39 2019 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Mon, 26 Aug 2019 14:46:39 +1200 Subject: [horizon] Horizon's openstack_auth is completely broken with Django 2.2, breaking all the dashboad stack in Debian Sid/Bullseye In-Reply-To: References: Message-ID: Hey Thomas, I'm not really a Horizon core dev, but have assisted in Django version migrations in the past, plus have a little experience with the openstack_auth code (and need to do some openstack_auth changes soon anyway). I'll see how my workload is, and maybe try and work on some fixes for this, but at the very least would be happy to review or help with this work. I think the rush to upgrade to Django 2.2 has been put off until the next release since 1.11 is still an active LTS, but in truth it looks like it will stop being the LTS before the next cycle... which makes 2.2 kind of urgent. Let's see about getting this sorted! Cheers, Adrian On 25/08/19 9:55 AM, Thomas Goirand wrote: > Hi, > > Django 2.2 was uploaded to Debian Sid/Bullseye a few days after Buster > was released, removing at the same time any support for Python 2. This > broke lots of packages, but I managed to upload more than 40 times to > fix the situation. Though Horizon is still completely broken. > > There's a bunch of issues related to Django 2.2 for Horizon. I managed > to patch Horizon for a few of them, but there's one bigger issue which I > can't solve myself, because I'm not skilled enough in Django, and don't > know enough about Horizon's internal. Let me explain. > > In Django 1.11, the login() function in openstack_auth/views.py was > deprecated in the favor of a LoginView class. In Django 2.2, the login() > function is now not supported anymore. This means that, if > openstack_auth/views.py doesn't get rewritten completely, Horizon will > continue to be completely broken in Debian Sid/Bullseye. 
> > Unfortunately, the Horizon team is understaffed, and the PTL told me > that they don't plan anything before Train+1. > > As a consequence, Horizon is completely broken in Debian Sid/Bullseye, > and will stay as it is if nobody steps up for the work. > > After a month and a half with this situation not being solved by anyone, > I'm hereby calling for help. > > I could attempt to work on this, though I need help and pointers at some > example implementation of this. It'd be nicer if someone more skilled > than me in Django was working on this anyways. > > Cheers, > > Thomas Goirand (zigo) > From tony at bakeyournoodle.com Mon Aug 26 03:14:44 2019 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 26 Aug 2019 13:14:44 +1000 Subject: [tc] Release naming process In-Reply-To: <87ftlu8pfx.fsf@meyer.lemoncheese.net> References: <87ftlu8pfx.fsf@meyer.lemoncheese.net> Message-ID: <20190826031444.GA7170@thor.bakeyournoodle.com> On Wed, Aug 21, 2019 at 09:15:46AM -0700, James E. Blair wrote: > Hi, > > In the previous thread about the U release name, we discussed how the > actual process for selecting the name has diverged from the written > process. I think it's important that we follow our own processes, so we > should reconcile those. We should change our actions or change the > process. > > Based on the previous discussion, I've proposed 6 changes to > openstack/governance for some initial feedback. We can, of course, > tweak these options, eliminate them, or add new ones. > > Ultimately, we're aiming for the TC to formally vote on one or a small > number of changes similar to these. > > I'd like for anyone interested in this to review these options and leave > feedback on the changes themselves, or here on the mailing list. Either > is fine, and all will be considered. Leaving "code-review" votes on the > changes themselves will help us gauge relative support for the different > options. > > In a week, I'll collect the feedback and propose a next step. > > https://review.opendev.org/675788 - Stop naming releases This!, well A variation. I suggest 'Twenty' not '20' The process used to be fun. It isn't any more. This specific instance has created several rifts in our community, hurt feelings and more than a little stress. Apart from that as we have fewer developers it takes resources that are now better spent elsewhere. Yes it will make some of out tooling that assumes alphabetical order a little strange but not too bad, and we were going to have to fix that soon anyway. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dangtrinhnt at gmail.com Mon Aug 26 04:35:48 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 26 Aug 2019 13:35:48 +0900 Subject: [Telemetry][PTL]Lingxian Kong & Zhu Rong as co-PTLs for U release Message-ID: Hi team, As the PTL election period is coming up. I would like to discuss the PTL role for Telemetry in the U cycle. As I changed job recently, I don't think I have enough time budget for the 2 projects (Searchlight and Telemetry) so I talked to Lingxinan Kong (lxkong) and Zhu Rong (zhurong) and they agreed to share the responsibility as co-PTLs for Telemetry for next release. lxkong and zhurong has been very actively contributing to Telemetry in Train and being very helpful as core-reviewers. So I would like to endorse and nominate the two as co-PTLs for Telemetry for the U release. 
Though I will not be the PTL for Telemetry in the next cycle, I will still be around helping when needed. The next step for lxkong and zhurong is submitting a patchset nominating for the PTLs. Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jordan.ansell at catalyst.net.nz Mon Aug 26 06:18:33 2019 From: jordan.ansell at catalyst.net.nz (Jordan Ansell) Date: Mon, 26 Aug 2019 18:18:33 +1200 Subject: [nova][glance][entropy][database] update glance metadata for nova instance Message-ID: <668e2201-3dd3-17e0-ae6a-736f0b314996@catalyst.net.nz> Hi Openstack Discuss, I have an issue with nova not synchronizing changes between a glance image and it's local image meta information in nova. I have updated a glance image with the property "hw_rng_model=virtio", and that successfully passes that to new instances created using the updated image. However existing instances do not receive this new property. I have located the image metadata within the nova database, in the **instance_system_metadata** table, and can see it's not updated for the existing instances, and only adding the relevant rows for instances that are created when that property is present. The key being "image_hw_rng_model" and "virtio" being the value. Is there a way to tell nova to update the table for existing instances, and synchronizing the two databases? Or is this the kind of thing that would need to be done *shudder* manually...? If so, are there any experts out there who can point me to some documentation on doing this correctly before I go butcher a couple of dummy nova database? Regards, Jordan From ralf.teckelmann at bertelsmann.de Mon Aug 26 06:55:42 2019 From: ralf.teckelmann at bertelsmann.de (Teckelmann, Ralf, NMU-OIP) Date: Mon, 26 Aug 2019 06:55:42 +0000 Subject: AW: [nova][glance][cinder] How to do consistent snapshots with quemu-guest-agent In-Reply-To: <69b7774c-608d-5552-e300-99e91da32799@citynetwork.eu> References: <69b7774c-608d-5552-e300-99e91da32799@citynetwork.eu> Message-ID: Hi Florian, Thanks for moving our conversation to the mailing list. From the discussion on the ceph-mailing list I take (as of today): - Ephemeral boot, *without* RBD, with or without attached volumes: freeze/thaw if hw_qemu_guest_agent=yes, resulting in consistent snapshots. - Ephemeral boot *from* RBD, also with or without attached volumes: no freeze/thaw, resulting in potentially inconsistent snapshots even with hw_qemu_guest_agent=yes. - Boot-from-volume from RBD: freeze/thaw if hw_qemu_guest_agent=yes, resulting in consistent snapshots. One may note that " hw_qemu_guest_agent=yes" stands for - the metadata property set on an image and as well - assumes one does install the qemu-guest agent in that image or on the instances spawned from that image. Besides that, Florian explains further down why os_require_quiesce=yes is very nice to have set as well. I am fine with the result of your in depth analysis, Florian. Thanks a lot, Ralf T. -----Ursprüngliche Nachricht----- Von: Florian Haas Gesendet: Mittwoch, 21. August 2019 21:09 An: Teckelmann, Ralf, NMU-OIP Cc: openstack-discuss at lists.openstack.org Betreff: [nova][glance][cinder] How to do consistent snapshots with quemu-guest-agent [apologies for the top-post] Hi Ralf, it looks like you've met all the necessary prerequisites. Basically, 1. 
The image you are booting from must have the hw_qemu_guest_agent=yes property set (this configures the Nova instance with a virtual serial device consumed by nova-guest-agent). 2. The instance must run the qemu-guest-agent daemon. 3. The image you are booting from should have the os_require_quiesce=yes property set. This isn't strictly necessary, as libvirt should always try to send the freeze/thaw commands over the serial device if your instance is configured with hw_qemu_guest_agent — but if os_require_quiesce is set then the snapshot will actually fail if libvirt can't freeze, which is what you probably want. 4. The filesystem used within the guest must support fsfreeze. This includes btrfs, ext2/3/4, and xfs, and a few others. vfat on Linux does not support being frozen, though Windows guests with the Windows Qemu Guest Agent apparently do support freezing if VSS is enabled — I am no expert on Windows guests though. What happens under the covers is that qemu-guest-agent invokes the FIFREEZE ioctl on each mounted filesystem in the guest, as seen here: https://urldefense.proofpoint.com/v2/url?u=https-3A__git.qemu.org_-3Fp-3Dqemu.git-3Ba-3Dblob-3Bf-3Dqga_commands-2Dposix.c-23l1327&d=DwIFaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=WXex93lsaiQ-z7CeZkHv93lzt4fdCRIPXloSPQEU7CM&m=aXb_NFqLbO-31tpiN1sfnfeMAjINAL_ebQhZf6tKDjI&s=lItytJ_9XhF-gY8aAJmXNJ5VeoZB-grifLw6GZWEuuc&e= (the comments immediately above that line explain under which circumstances the FIFREEZE ioctl may fail). The FIFREEZE ioctl maps to the kernel freeze_super() function, which flushes the filesystem superblock, syncs the filesystem, and then disallows any further I/O. Which, to answer your other question, should indeed persist all in-flight I/O to disk. Unfortunately, nothing in the code path (that I know of) issues any printk's on success, so dmesg won't tell you that the filesystem has been flushed/frozen successfully. You'd only see "VFS:Filesystem freeze failed" in your guest's kernel log on error. The same is true for FITHAW/thaw_super(), which thaws the superblock and makes the filesystem writable again. However, you can (at least on an Ubuntu guest), create a file named /etc/default/qemu-guest-agent, in which you can define DAEMON_ARGS like this: DAEMON_ARGS="--logfile /var/log/qemu-ga.log --verbose" Then, while you are creating a snapshot with "nova image-create" or "openstack server image create", /var/log/qemu-ga.log should be populated with log entries related to the fsfreeze events. The same should be true for creating a snapshot from Horizon. On Ubuntu bionic, you should also make sure that you are running qemu-guest-agent from bionic-security (or a recent daily build of an Ubuntu cloud image), because at least in the initial bionic release qemu-guest-agent was suffering from a packaging issue, described in https://urldefense.proofpoint.com/v2/url?u=https-3A__bugs.launchpad.net_ubuntu_-2Bsource_qemu_-2Bbug_1820291&d=DwIFaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=WXex93lsaiQ-z7CeZkHv93lzt4fdCRIPXloSPQEU7CM&m=aXb_NFqLbO-31tpiN1sfnfeMAjINAL_ebQhZf6tKDjI&s=3oc3rufQlE2cF6qyT_4UgHW0ouD2muOXDdkDJDgfVvk&e=. For RBD-backed Nova/libvirt, things are a bit more complicated still, due to what appears like somewhat inconsistent/unexpected behavior in Nova. 
See the discussion in: https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.ceph.io_hyperkitty_list_ceph-2Dusers-40ceph.io_thread_3YQCRO4JP56EDJN5KX5DWW5N2CSBHRHZ_&d=DwIFaQ&c=vo2ie5TPcLdcgWuLVH4y8lsbGPqIayH3XbK3gK82Oco&r=WXex93lsaiQ-z7CeZkHv93lzt4fdCRIPXloSPQEU7CM&m=aXb_NFqLbO-31tpiN1sfnfeMAjINAL_ebQhZf6tKDjI&s=U0H46fjutEaPYP8DAR0Y05nbILShUGD7z6-mZxaNmo0&e= Does this give you enough information so you can verify whether or not freeze/thaw is working as expected for you? Cheers, Florian On 14/08/2019 10:41, Teckelmann, Ralf, NMU-OIP wrote: > Hello, > > > Working me through documentation and articles I am totally lost on the > matter. > > All I want to know is: > > - if issueing "openstack snapshot create ...." > > - if klicking "create Snaphost" in Horizon for an instance > > will secure a consistent snapshot (of all volumes in question). > With "consistent", I mean that all the data in memory are written to > the disc before starting a snapshot. > > I hope someone can clear up, if using the setup described in the > following is sufficient to achieve this goal or if I have to do > something in addition. > > > If you have any question I am eager to answer as fast as possible. > > > Setup: > > > We have a Stein-based OpenStack deployment with cinder backed by ceph. > > Instances are created with cinder volumes. Boot volumes are based on > an image having the properties: > > - hw_qemu_guest_agent='yes' > - os_require_quiesce='yes' > > > The image is ubuntu 16.04 or 18.04 with quemu-guest-agent package > installed and service running (no additional configuration besides > distro-default): > > > qemu-guest-agent.service - LSB: QEMU Guest Agent startup script >    Loaded: loaded (/etc/init.d/qemu-guest-agent; bad; vendor preset: > enabled) >    Active: active (running) since Wed 2019-08-14 07:42:21 UTC; 9min > ago >      Docs: man:systemd-sysv-generator(8) >    CGroup: /system.slice/qemu-guest-agent.service >            └─2300 /usr/sbin/qemu-ga --daemonize -m virtio-serial -p > /dev/virtio-ports/org.qemu.guest_agent.0 > > Aug 14 07:42:21 ulthwe systemd[1]: Starting LSB: QEMU Guest Agent > startup script... > Aug 14 07:42:21 ulthwe systemd[1]: Started LSB: QEMU Guest Agent > startup script. > > I can see the socket on the compute node and send pings successfully: > > ~# ls /var/lib/libvirt/qemu/*.sock > /var/lib/libvirt/qemu/org.qemu.guest_agent.0.instance-0000248e.sock > root at pcevh2404:~# virsh qemu-agent-command instance-0000248e > '{"execute":"guest-ping"}' > {"return":{}} > > > I can also send freeze and thaw successfully: > > ~# virsh qemu-agent-command instance-0000248e > '{"execute":"guest-fsfreeze-freeze"}' > {"return":1} > > ~# virsh qemu-agent-command instance-0000248e > '{"execute":"guest-fsfreeze-thaw"}' > {"return":1} > > Sending a simple write (echo "bla" > blub.file) in the "frozen" state > will be blocked until "thaw" as expected. > > Best regards > > > Ralf T. 
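For anyone who wants to script the freeze/thaw verification quoted above
rather than typing virsh commands by hand, here is a minimal sketch using
libvirt-python (assuming it is installed on the compute node, and using the
instance name from the example):

    import libvirt
    import libvirt_qemu

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000248e')

    def agent(cmd, timeout=5):
        # Sends a raw guest-agent command over the virtio serial channel,
        # exactly like `virsh qemu-agent-command`.
        return libvirt_qemu.qemuAgentCommand(dom, cmd, timeout, 0)

    print(agent('{"execute":"guest-ping"}'))             # {"return": {}}
    print(agent('{"execute":"guest-fsfreeze-freeze"}'))  # frozen fs count
    print(agent('{"execute":"guest-fsfreeze-status"}'))  # {"return": "frozen"}
    print(agent('{"execute":"guest-fsfreeze-thaw"}'))

This is only illustrative; the calls mirror the virsh qemu-agent-command
checks shown above.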
From zigo at debian.org  Mon Aug 26 06:58:16 2019
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 26 Aug 2019 08:58:16 +0200
Subject: [horizon] Horizon's openstack_auth is completely broken with Django 2.2, breaking all the dashboad stack in Debian Sid/Bullseye
In-Reply-To:
References:
Message-ID: <86b58107-5c99-9fe3-9f6e-baef239645bd@debian.org>

On 8/26/19 4:46 AM, Adrian Turjak wrote:
> Hey Thomas,
>
> I'm not really a Horizon core dev, but have assisted in Django version
> migrations in the past, plus have a little experience with the
> openstack_auth code (and need to do some openstack_auth changes soon
> anyway).
>
> I'll see how my workload is, and maybe try and work on some fixes for
> this, but at the very least would be happy to review or help with this work.
>
> I think the rush to upgrade to Django 2.2 has been put off until the
> next release since 1.11 is still an active LTS, but in truth it looks
> like it will stop being the LTS before the next cycle... which makes 2.2
> kind of urgent.
>
> Let's see about getting this sorted!
>
> Cheers,
> Adrian

Hi Adrian,

Thanks if you can commit a bit of time to this. FYI, even in Django 1.11,
the login() function was deprecated, and Django 2.2 has been in Debian
Experimental since last January. So this is really an overdue issue.

Feel free to ping me on IRC if you want to work on this with me (I'm zigo
on both OFTC and freenode).

Thomas Goirand (zigo)

From gmann at ghanshyammann.com  Mon Aug 26 07:06:14 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 26 Aug 2019 16:06:14 +0900
Subject: [Telemetry][PTL]Lingxian Kong & Zhu Rong as co-PTLs for U release
In-Reply-To:
References:
Message-ID: <16cccbf23ce.ff0741cb305865.5070211239650499177@ghanshyammann.com>

 ---- On Mon, 26 Aug 2019 13:35:48 +0900 Trinh Nguyen wrote ----
 > Hi team,
 >
 > As the PTL election period is coming up. I would like to discuss the PTL role for Telemetry in the U cycle. As I changed job recently, I don't think I have enough time budget for the 2 projects (Searchlight and Telemetry) so I talked to Lingxinan Kong (lxkong) and Zhu Rong (zhurong) and they agreed to share the responsibility as co-PTLs for Telemetry for next release.
 > lxkong and zhurong has been very actively contributing to Telemetry in Train and being very helpful as core-reviewers. So I would like to endorse and nominate the two as co-PTLs for Telemetry for the U release.
 > Though I will not be the PTL for Telemetry in the next cycle, I will still be around helping when needed.
 > The next step for lxkong and zhurong is submitting a patchset nominating for the PTLs.

Thanks Trinh for all your help and hard work to keep Telemetry alive. It is
not easy to be the PTL of two projects, and you are exceptional (also
lxkong :)).

Let me clarify the co-PTL situation a bit more. There is no official TC
process for co-PTLs; until now there has been only a single PTL for each
project [1].

But yes, any PTL can delegate or divide the PTL activities, or assign
another short-term PTL while on vacation etc.; or, during PTL transition
planning, a future PTL candidate can work closely with the current PTL.
Doing PTL activities in those scenarios can be considered being co-PTLs.
The PTL can announce co-PTLs or a proxy PTL to the community over the ML or
in the project docs/wiki etc.

> The next step for lxkong and zhurong is submitting a patchset nominating for the PTLs.
If there is more than one PTL candidate for the same project, then an
election will happen to choose the PTL.
As there is no official way on the TC side to nominate a co-PTL along with
the PTL, it can be done via the ML only. But if you mean two PTL
candidates, with the election choosing the final one, then both candidate
nominations sound perfect.

[1] https://opendev.org/openstack/governance/src/commit/92e1342e290600b2965e3014699d774afc3efe9b/reference/projects.yaml

-gmann

> Bests,
>
> --
> Trinh Nguyen
> www.edlab.xyz
>
>

From florian at citynetwork.eu  Mon Aug 26 07:35:09 2019
From: florian at citynetwork.eu (Florian Haas)
Date: Mon, 26 Aug 2019 09:35:09 +0200
Subject: AW: [nova][glance][cinder] How to do consistent snapshots with quemu-guest-agent
In-Reply-To:
References: <69b7774c-608d-5552-e300-99e91da32799@citynetwork.eu>
Message-ID: <9e9725ec-a0d8-1ae4-84b8-33a641946eef@citynetwork.eu>

On 26/08/2019 08:55, Teckelmann, Ralf, NMU-OIP wrote:
> Hi Florian,
>
> Thanks for moving our conversation to the mailing list.
>
> From the discussion on the ceph-mailing list I take (as of today):
>
> - Ephemeral boot, *without* RBD, with or without attached volumes:
> freeze/thaw if hw_qemu_guest_agent=yes, resulting in consistent snapshots.
>
> - Ephemeral boot *from* RBD, also with or without attached volumes: no
> freeze/thaw, resulting in potentially inconsistent snapshots even with
> hw_qemu_guest_agent=yes.
>
> - Boot-from-volume from RBD: freeze/thaw if hw_qemu_guest_agent=yes,
> resulting in consistent snapshots.
>
> One may note that " hw_qemu_guest_agent=yes" stands for
> - the metadata property set on an image and as well
> - assumes one does install the qemu-guest agent in that image or on the instances spawned from that image.
> Besides that, Florian explains further down why os_require_quiesce=yes is very nice to have set as well.

Right, also note that due to the fact that RBD-backed ephemeral images
currently bypass quiescing altogether, os_require_quiesce is actually
silently ignored for VMs that are RBD-backed AND do not boot from volume.
A Nova bug about this is open, and a discussion on how to best fix it is
currently ongoing:

https://bugs.launchpad.net/nova/+bug/1841160

Cheers,
Florian

From smooney at redhat.com  Mon Aug 26 08:24:24 2019
From: smooney at redhat.com (Sean Mooney)
Date: Mon, 26 Aug 2019 09:24:24 +0100
Subject: [nova][glance][entropy][database] update glance metadata for nova instance
In-Reply-To: <668e2201-3dd3-17e0-ae6a-736f0b314996@catalyst.net.nz>
References: <668e2201-3dd3-17e0-ae6a-736f0b314996@catalyst.net.nz>
Message-ID:

On Mon, 2019-08-26 at 18:18 +1200, Jordan Ansell wrote:
> Hi Openstack Discuss,
>
> I have an issue with nova not synchronizing changes between a glance
> image and it's local image meta information in nova.
>
> I have updated a glance image with the property "hw_rng_model=virtio",
> and that successfully passes that to new instances created using the
> updated image. However existing instances do not receive this new property.
>
> I have located the image metadata within the nova database, in the
> **instance_system_metadata** table, and can see it's not updated for the
> existing instances, and only adding the relevant rows for instances that
> are created when that property is present. The key being
> "image_hw_rng_model" and "virtio" being the value.
>
> Is there a way to tell nova to update the table for existing instances,
> and synchronizing the two databases? Or is this the kind of thing that
> would need to be done *shudder* manually...?

this is ideally not something you would do at all.
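If all you want is to see which instances already have the property
stored, a read-only query is harmless. A rough sketch, assuming SQLAlchemy
and placeholder DB credentials (do not write to these tables):

    from sqlalchemy import create_engine, text

    # placeholder DSN -- substitute your real nova DB credentials
    engine = create_engine('mysql+pymysql://nova:secret@controller/nova')
    with engine.connect() as conn:
        rows = conn.execute(text(
            "SELECT instance_uuid, `key`, value "
            "FROM instance_system_metadata "
            "WHERE `key` = 'image_hw_rng_model' AND deleted = 0"))
        for instance_uuid, key, value in rows:
            print(instance_uuid, key, value)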
nova creates a local copy of the image metadata the instance was booted
with, intentionally, so that it does not pick up changes you make to the
image metadata after you boot the instance. In some cases those changes
could invalidate the host the instance is on, so it is in general not
considered safe to just sync them. For the random number generator it
should be OK, but if you were to add a trait requirement or alter the NUMA
topology then it could invalidate the host as a candidate for that
instance. So if you want to do this then you need to update it manually,
as nova is working as intended by not syncing the data.

> If so, are there any
> experts out there who can point me to some documentation on doing this
> correctly before I go butcher a couple of dummy nova database?

There are no docs for doing this, as it is not a supported feature. You
are circumventing a safety feature we have in nova that prevents changes
to the flavor extra specs or image metadata from affecting running
instances after they are first booted.

>
> Regards,
> Jordan
>

From dangtrinhnt at gmail.com  Mon Aug 26 10:23:35 2019
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Mon, 26 Aug 2019 19:23:35 +0900
Subject: [Telemetry][PTL]Lingxian Kong & Zhu Rong as co-PTLs for U release
In-Reply-To: <16cccbf23ce.ff0741cb305865.5070211239650499177@ghanshyammann.com>
References: <16cccbf23ce.ff0741cb305865.5070211239650499177@ghanshyammann.com>
Message-ID:

Hi Ghanshyam,

Thanks for clarifying that. We will discuss and decide which one of us will
submit the nomination patchset.

Bests,

On Mon, Aug 26, 2019 at 4:06 PM Ghanshyam Mann wrote:

> ---- On Mon, 26 Aug 2019 13:35:48 +0900 Trinh Nguyen <
> dangtrinhnt at gmail.com> wrote ----
>  > Hi team,
>  >
>  > As the PTL election period is coming up. I would like to discuss the
> PTL role for Telemetry in the U cycle. As I changed job recently, I don't
> think I have enough time budget for the 2 projects (Searchlight and
> Telemetry) so I talked to Lingxinan Kong (lxkong) and Zhu Rong (zhurong)
> and they agreed to share the responsibility as co-PTLs for Telemetry for
> next release.
>  > lxkong and zhurong has been very actively contributing to Telemetry in
> Train and being very helpful as core-reviewers. So I would like to endorse
> and nominate the two as co-PTLs for Telemetry for the U release.
>  > Though I will not be the PTL for Telemetry in the next cycle, I will
> still be around helping when needed.
>  > The next step for lxkong and zhurong is submitting a patchset
> nominating for the PTLs.
>
> Thanks Trinh for your all help and hard work to keep Telemetry alive. It
> is not easy to be two project PTL and you are exceptional (also lxkong :)).
>
> Let me clarify co-PTLs things more. There is no official TC process for
> co-PTLs things. It is only single PTL for each project until now[1].
>
> But yes, any PTL can delegate or divide the PTL activities or assign
> another short term PTL while on vacation etc. or during PTL transition
> planning,
> future PTL candidate can work closely with current PTL. Doing PTL
> activities in those scenarios can be considered as co-PTLs. PTL can update
> about co-PTLs or proxy PTL to the community over ML or project docs/wiki
> etc.
>
>  > The next step for lxkong and zhurong is submitting a patchset
> nominating for the PTLs.
> If there are more than two PTL candidates for the same project then, the
> election will happen to choose the PTL. As there is no official way on TC
> to
> nominate the co-PTL along with PTL, it can be done via ML only. But if you
> mean to say 2 PTL candidates and election will choose the final PTL then,
> both candidate nominations sounds perfect.
>
> [1]
> https://opendev.org/openstack/governance/src/commit/92e1342e290600b2965e3014699d774afc3efe9b/reference/projects.yaml
>
> -gmann
>
>  > Bests,
>  >
>  > --
>  > Trinh Nguyen
>  > www.edlab.xyz
>  >
>  >

--
*Trinh Nguyen*
*www.edlab.xyz *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gr at ham.ie  Mon Aug 26 11:15:57 2019
From: gr at ham.ie (Graham Hayes)
Date: Mon, 26 Aug 2019 12:15:57 +0100
Subject: [all][tc] A web tool which helps administrators in managing openstack clusters
In-Reply-To: <4f86a9ec-6a52-988d-3776-b41a0feb12a2@gmail.com>
References: <4f86a9ec-6a52-988d-3776-b41a0feb12a2@gmail.com>
Message-ID: <108eb358-a81f-7bec-0d7c-76e56747ce78@ham.ie>

On 20/08/2019 19:05, Matt Riedemann wrote:
> On 8/20/2019 7:02 AM, Mohammed Naser wrote:
>>> Flexible: openstack-admin supports the fuzzy search for any important
>>> field(e.g. display_name/uuid/ip_address/project_name of an instance),
>>> which enables users to locate a particular object in no time.
>> This is really useful to be honest, but we probably can work around it
>> by using the filtering that APIs provide.
>>
>
> Isn't this what searchlight integration into horizon was for?
>

Yes - AFAIK, there was even an idea of integrating searchlight into the
OSC / nova CLI for list / search operations.

I really liked the idea of searchlight for fuzzy search; being able to
search for an IP and finding the floating IP, server, and DNS record
seemed like a great tool.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: OpenPGP digital signature
URL:

From ed at leafe.com  Mon Aug 26 11:43:47 2019
From: ed at leafe.com (Ed Leafe)
Date: Mon, 26 Aug 2019 06:43:47 -0500
Subject: [uc]The UC Special Election Nomination Period is now open!
Message-ID: <2A82D3E2-CE68-41D3-8D48-26496C38E722@leafe.com>

The recent election for the two open seats on the User Committee had only
one nomination, so one seat remains open. The UC decided to hold a second,
special election to fill that seat. The nomination period for this special
election is now open, and will remain open until August 30 at 05:59 UTC. If
there is more than one candidate, the voting period will begin September 1
at 11:59 UTC, and close on September 4 at 11:59 UTC.

Any individual member of the Foundation who is an Active User Contributor
(AUC) can propose their candidacy (except the 4 sitting UC members).
Self-nomination is common; no third party nomination is required. They do
so by sending an email to the user-committee at lists.openstack.org
mailing list, with the subject: “UC candidacy” by August 30, 05:59 UTC.
Please note that you must be a subscriber to the user-committee list for
your nomination to be posted. Subscribing is free, and can be done on this
page [0].

The email can include a description of the candidate platform. The
candidacy is then confirmed by one of the election officials, after
verification of the electorate status of the candidate.

[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee

-- Ed Leafe

-------------- next part --------------
A non-text attachment was scrubbed...
From dangtrinhnt at gmail.com Mon Aug 26 10:23:35 2019 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 26 Aug 2019 19:23:35 +0900 Subject: [Telemetry][PTL]Lingxian Kong & Zhu Rong as co-PTLs for U release In-Reply-To: <16cccbf23ce.ff0741cb305865.5070211239650499177@ghanshyammann.com> References: <16cccbf23ce.ff0741cb305865.5070211239650499177@ghanshyammann.com> Message-ID: Hi Ghanshyam, Thanks for clarifying that. We will discuss and decide which one of us will submit the nomination patchset. Bests, On Mon, Aug 26, 2019 at 4:06 PM Ghanshyam Mann wrote: > ---- On Mon, 26 Aug 2019 13:35:48 +0900 Trinh Nguyen < dangtrinhnt at gmail.com> wrote ---- > > Hi team, > > > > As the PTL election period is coming up, I would like to discuss the PTL role for Telemetry in the U cycle. As I changed jobs recently, I don't think I have enough time budget for the 2 projects (Searchlight and Telemetry), so I talked to Lingxian Kong (lxkong) and Zhu Rong (zhurong) and they agreed to share the responsibility as co-PTLs for Telemetry for the next release. > > lxkong and zhurong have been very actively contributing to Telemetry in Train and have been very helpful as core reviewers, so I would like to endorse and nominate the two as co-PTLs for Telemetry for the U release. > > Though I will not be the PTL for Telemetry in the next cycle, I will still be around to help when needed. > > The next step for lxkong and zhurong is submitting a patchset nominating themselves for the PTL position. > > Thanks Trinh for all your help and hard work to keep Telemetry alive. It is not easy to be the PTL of two projects and you are exceptional (also lxkong :)). > > Let me clarify the co-PTL question a bit more. There is no official TC process for co-PTLs; until now there has only been a single PTL for each project[1]. > > But yes, any PTL can delegate or divide the PTL activities, or assign another short-term PTL while on vacation etc., or during PTL transition planning a future PTL candidate can work closely with the current PTL. Doing PTL activities in those scenarios can be considered as being co-PTLs. The PTL can tell the community about co-PTLs or a proxy PTL over the ML or in project docs/wiki etc. > > > The next step for lxkong and zhurong is submitting a patchset nominating themselves for the PTL position. > If there are more than two PTL candidates for the same project then, the election will happen to choose the PTL. As there is no official way on the TC side to nominate a co-PTL along with the PTL, it can be done via the ML only. But if you mean two PTL candidates, with the election choosing the final PTL, then both candidate nominations sound perfect. > > [1] > https://opendev.org/openstack/governance/src/commit/92e1342e290600b2965e3014699d774afc3efe9b/reference/projects.yaml > > -gmann > > > Bests, > > > > -- > > Trinh Nguyen > > www.edlab.xyz > > > > > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL:

From gr at ham.ie Mon Aug 26 11:15:57 2019 From: gr at ham.ie (Graham Hayes) Date: Mon, 26 Aug 2019 12:15:57 +0100 Subject: [all][tc] A web tool which helps administrators in managing openstack clusters In-Reply-To: <4f86a9ec-6a52-988d-3776-b41a0feb12a2@gmail.com> References: <4f86a9ec-6a52-988d-3776-b41a0feb12a2@gmail.com> Message-ID: <108eb358-a81f-7bec-0d7c-76e56747ce78@ham.ie> On 20/08/2019 19:05, Matt Riedemann wrote: > On 8/20/2019 7:02 AM, Mohammed Naser wrote: >>> Flexible: openstack-admin supports the fuzzy search for any important >>> field (e.g. display_name/uuid/ip_address/project_name of an instance), >>> which enables users to locate a particular object in no time. >> This is really useful to be honest, but we probably can work around it >> by using the filtering that APIs provide. >> > > Isn't this what searchlight integration into horizon was for? > Yes - AFAIK, there was even an idea of integrating searchlight into the OSC / nova CLI for list / search operations. I really liked the idea of searchlight for fuzzy search; being able to search for an IP and find the floating IP, server, and DNS record seemed like a great tool. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL:

From ed at leafe.com Mon Aug 26 11:43:47 2019 From: ed at leafe.com (Ed Leafe) Date: Mon, 26 Aug 2019 06:43:47 -0500 Subject: [uc]The UC Special Election Nomination Period is now open! Message-ID: <2A82D3E2-CE68-41D3-8D48-26496C38E722@leafe.com> The recent election for the two open seats on the User Committee had only one nomination, so one seat remains open. The UC decided to hold a second, special election to fill that seat. The nomination period for this special election is now open, and will remain open until August 30 at 05:59 UTC. If there is more than one candidate, the voting period will begin September 1 at 11:59 UTC, and close on September 4 at 11:59 UTC. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the 4 sitting UC members). Self-nomination is common; no third-party nomination is required. They do so by sending an email to the user-committee at lists.openstack.org mailing list, with the subject: “UC candidacy” by August 30, 05:59 UTC. Please note that you must be a subscriber to the user-committee list for your nomination to be posted. Subscribing is free, and can be done on this page [0]. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. [0] http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL:

From e0ne at e0ne.info Mon Aug 26 12:08:21 2019 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 26 Aug 2019 15:08:21 +0300 Subject: [horizon] No meeting this week Message-ID: Hi team, As mentioned during the last meeting, I can't chair the meeting this week. Let's skip it, or you can hold it without me. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL:

From doug at doughellmann.com Mon Aug 26 12:57:54 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 26 Aug 2019 08:57:54 -0400 Subject: not running for the TC this term Message-ID: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Since nominations open this week, I wanted to go ahead and let you all know that I will not be seeking re-election to the Technical Committee this term. My role within Red Hat has been changing over the last year, and while I am still working on projects related to OpenStack it is no longer my sole focus. I will still be around, but it is better for me to make room on the TC for someone with more time to devote to it. It’s hard to believe it has been 6 years since I first joined the Technical Committee. So much has happened in our community in that time, and I want to thank all of you for the trust you have placed in me through it all. It has been an honor to serve and help build the community. Thank you, Doug

From mdulko at redhat.com Mon Aug 26 13:03:41 2019 From: mdulko at redhat.com (Michał Dulko) Date: Mon, 26 Aug 2019 15:03:41 +0200 Subject: [nova][keystone][neutron][kuryr][requirements] breaking tests with new library versions In-Reply-To: <20190822131734.4f2t735qrkheccjo@mthode.org> References: <20190818161611.6ira6oezdat4alke@mthode.org> <20190822131734.4f2t735qrkheccjo@mthode.org> Message-ID: <994a4c6ec18055582ddf529b5149a57c42bdedbf.camel@redhat.com> On Thu, 2019-08-22 at 08:17 -0500, Matthew Thode wrote: > On 19-08-22 09:53:30, Michał Dulko wrote: > > On Sun, 2019-08-18 at 11:16 -0500, Matthew Thode wrote: > > > NOVA: > > > lxml===4.4.1 nova tests fail https://bugs.launchpad.net/nova/+bug/1838666 > > > websockify===0.9.0 tempest test failing > > > > > > KEYSTONE: > > > oauthlib===3.1.0 keystone https://bugs.launchpad.net/keystone/+bug/1839393 > > > > > > NEUTRON: > > > tenacity===5.1.1 https://2c976b5e9e9a7bed9985-82d79a041e998664bd1d0bc4b6e78332.ssl.cf2.rackcdn.com/677052/5/check/cross-neutron-py27/a0a3c75/testr_results.html.gz > > > this could be caused by pytest===5.1.0 as well > > > > > > KURYR: > > > kubernetes===10.0.1 openshift PINS this, only kuryr-tempest-plugin deps on it > > > https://review.opendev.org/665352 > > > > Alright, I tested and fixed the commit above and I should be able to > > get it merged today. Thanks for following up on this. > > > > Yep, saw it merge. Now all it needs is a release so the version on PyPI > / upper-constraints does not hold back the version of kubernetes. Alright, let's see how my release commit [1] will work then. [1] https://review.opendev.org/#/c/678542/ > > > MISC: > > > tornado===5.1.1 salt is causing this, no ETA on a fix (same as last year) > > > stestr===2.5.0 needs https://github.com/mtreinish/stestr/pull/265 merged > > > jsonschema===3.0.2 see https://review.opendev.org/649789 > > > > > > I'm trying to get this in place as we are getting closer to the > > > requirements freeze (sept 9th-13th).
Any help clearing up these bugs > > > would be appreciated. > > >

From mnaser at vexxhost.com Mon Aug 26 13:10:26 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 26 Aug 2019 09:10:26 -0400 Subject: not running for the TC this term In-Reply-To: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> References: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Message-ID: On Mon, Aug 26, 2019 at 9:01 AM Doug Hellmann wrote: > > Since nominations open this week, I wanted to go ahead and let you all know that I will not be seeking re-election to the Technical Committee this term. > > My role within Red Hat has been changing over the last year, and while I am still working on projects related to OpenStack it is no longer my sole focus. I will still be around, but it is better for me to make room on the TC for someone with more time to devote to it. > > It’s hard to believe it has been 6 years since I first joined the Technical Committee. So much has happened in our community in that time, and I want to thank all of you for the trust you have placed in me through it all. It has been an honor to serve and help build the community. > > Thank you, > Doug > > Hi Doug, Thank you for serving the technical committee and helping shape OpenStack for the past few years. Also, a personal thanks for helping me throughout the process of becoming a chair. Thanks once again! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com

From fungi at yuggoth.org Mon Aug 26 13:19:38 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 26 Aug 2019 13:19:38 +0000 Subject: [tc][elections] Not running for reelection to TC this term Message-ID: <20190826131938.ouya5phflfzqoexn@yuggoth.org> I've been on the OpenStack Technical Committee continuously for several years, and would like to take this opportunity to thank everyone in the community for their support and for the honor of being chosen to represent you. I plan to continue participating in the community, including in TC-led activities, but am stepping back from reelection this round for a couple of reasons. First, I want to provide others with an opportunity to serve our community on the TC. I hope that by standing aside for now, others will be encouraged to run. A regular influx of fresh opinions helps us maintain the requisite level of diversity to engage in productive debate. Second, the scheduling circumstances for this election, with the TC and PTL activities combined, will be a bit more complicated for our election officials. I'd prefer to stay engaged in officiating so that we can ensure it goes as smoothly for everyone as possible. To do this without risking a conflict of interest, I need to not be running for office. It's quite possible I'll run again in 6 months, but for now I'm planning to help behind the scenes instead. Best of luck to all who decide to run for election to any of our leadership roles! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From elmiko at redhat.com Mon Aug 26 13:25:24 2019 From: elmiko at redhat.com (Michael McCune) Date: Mon, 26 Aug 2019 09:25:24 -0400 Subject: [all][api-sig] Update on SIG status and seeking guidance from the community In-Reply-To: <16cc30634ec.125fb7873294246.2871343054827957126@ghanshyammann.com> References: <16cc30634ec.125fb7873294246.2871343054827957126@ghanshyammann.com> Message-ID: On Sat, Aug 24, 2019 at 5:52 AM Ghanshyam Mann wrote: > > ---- On Fri, 23 Aug 2019 22:55:06 +0900 Eric Fried wrote ---- > > > > > feel quite good about the current state of the guidelines i'm sure > > > there is room for improvement, > > > > It's worth noting that there's quite a lot of unmerged guideline content > > under the api-sig's purview [1]. We should either close or rehome these > > patches before we consider disbanding the SIG. It's obviously hard to > > merge stuff with a small (and now shrinking) core list [2], but at least > > some of this content is valuable and should not be abandoned. I, for > > one, would like to be able to refer to published documentation rather > > than in-flight (or abandoned) change sets when wrestling with version > > discovery logic in ksa/sdk. > > In addition, there are many TODOs still there in the current guidelines [1]. A few of them, > I remember, were left as TODOs when some guidelines were moved from nova to the api-wg. > > [1] https://github.com/openstack/api-sig/search?q=TODO&unscoped_q=TODO > that's a good highlight, thank you for the link. perhaps a good next step will be to collect all the pointers to work that needs finishing and figure out if we can triage it and plan some sort of schedule for fixing it. i imagine it will be slow given the current staffing, but we could at least point people at things to look for if they want to help. peace o/ > -gmann > > > > > efried > > > > [1] https://review.opendev.org/#/q/project:openstack/api-sig+status:open > > [2] https://review.opendev.org/#/admin/groups/468,members > > > >

From camille.rodriguez at canonical.com Mon Aug 26 13:28:45 2019 From: camille.rodriguez at canonical.com (Camille Rodriguez) Date: Mon, 26 Aug 2019 09:28:45 -0400 Subject: [all][tc] A web tool which helps administrators in managing openstack clusters In-Reply-To: <108eb358-a81f-7bec-0d7c-76e56747ce78@ham.ie> References: <4f86a9ec-6a52-988d-3776-b41a0feb12a2@gmail.com> <108eb358-a81f-7bec-0d7c-76e56747ce78@ham.ie> Message-ID: I like the idea and would like to see a demo if you get one ready. I agree with the previous suggestions about using and improving the APIs instead of querying the databases directly. In my previous organization, we also struggled with the management of numerous big clusters, and we developed a home-made Django web portal to satisfy queries and automation that Horizon could not do. I would be curious to see which functions have been developed for 'openstack-admin' and how similar the needs are. Cheers, Camille Rodriguez On Mon, Aug 26, 2019 at 7:26 AM Graham Hayes wrote: > On 20/08/2019 19:05, Matt Riedemann wrote: > > On 8/20/2019 7:02 AM, Mohammed Naser wrote: > >>> Flexible: openstack-admin supports the fuzzy search for any important > >>> field (e.g. display_name/uuid/ip_address/project_name of an instance), > >>> which enables users to locate a particular object in no time. > >> This is really useful to be honest, but we probably can work around it > >> by using the filtering that APIs provide. > >> > > > > Isn't this what searchlight integration into horizon was for? > > > > Yes - AFAIK, there was even an idea of integrating searchlight into > the OSC / nova CLI for list / search operations. > > > I really liked the idea of searchlight for fuzzy search, being able > to search for an IP and finding the floating IP, server, and DNS Record > seemed like a great tool. > > -- Camille Rodriguez, Field Software Engineer Canonical -------------- next part -------------- An HTML attachment was scrubbed... URL:
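As a concrete illustration of the API-side filtering mentioned in the quoted thread, several of the fuzzy-search use cases can be approximated today with per-service client filters; the values here are only placeholders, and the flags assume a reasonably recent python-openstackclient:

    $ openstack server list --ip 10.0.0.5
    $ openstack server list --name web --status ACTIVE
    $ openstack port list --fixed-ip ip-address=10.0.0.5

What this cannot do is the single cross-resource query (server, floating IP, and DNS record at once) that searchlight was designed for.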
From rosmaita.fossdev at gmail.com Mon Aug 26 13:35:42 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 26 Aug 2019 09:35:42 -0400 Subject: [all][tc] A web tool which helps administrators in managing openstack clusters In-Reply-To: <108eb358-a81f-7bec-0d7c-76e56747ce78@ham.ie> References: <4f86a9ec-6a52-988d-3776-b41a0feb12a2@gmail.com> <108eb358-a81f-7bec-0d7c-76e56747ce78@ham.ie> Message-ID: On 8/26/19 7:15 AM, Graham Hayes wrote: > On 20/08/2019 19:05, Matt Riedemann wrote: >> On 8/20/2019 7:02 AM, Mohammed Naser wrote: >>>> Flexible: openstack-admin supports the fuzzy search for any important >>>> field (e.g. display_name/uuid/ip_address/project_name of an instance), >>>> which enables users to locate a particular object in no time. >>> This is really useful to be honest, but we probably can work around it >>> by using the filtering that APIs provide. >>> >> >> Isn't this what searchlight integration into horizon was for? >> > > Yes - AFAIK, there was even an idea of integrating searchlight into > the OSC / nova CLI for list / search operations. > > > I really liked the idea of searchlight for fuzzy search, being able > to search for an IP and finding the floating IP, server, and DNS Record > seemed like a great tool. > I third the suggestion to investigate using searchlight for this tool. Going directly to the individual project databases is not a good design decision (just my opinion, man, but of course I think I'm correct). Additionally, there are features handled at the API layer (e.g., Glance property protections) that are not visible in the database, so you'll have to figure out how to handle those, whereas they're already figured out in searchlight. Plus, you get all the cross-project resource searching that Graham mentioned. The searchlight project is small, but Trinh Nguyen has done a great job keeping it active and updated with respect to elasticsearch (ES 5.x support was added in Stein).

From gr at ham.ie Mon Aug 26 13:39:02 2019 From: gr at ham.ie (Graham Hayes) Date: Mon, 26 Aug 2019 14:39:02 +0100 Subject: not running for the TC this term In-Reply-To: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> References: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Message-ID: On 26/08/2019 13:57, Doug Hellmann wrote: > Since nominations open this week, I wanted to go ahead and let you all know that I will not be seeking re-election to the Technical Committee this term. > > My role within Red Hat has been changing over the last year, and while I am still working on projects related to OpenStack it is no longer my sole focus. I will still be around, but it is better for me to make room on the TC for someone with more time to devote to it. > > It’s hard to believe it has been 6 years since I first joined the Technical Committee. So much has happened in our community in that time, and I want to thank all of you for the trust you have placed in me through it all. It has been an honor to serve and help build the community.
It has been great to work with you on the TC, and watching you guide the community before that. Thanks for all the time and mental spoons you have dedicated to the community. - Graham > Thank you, > Doug > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL:

From rosmaita.fossdev at gmail.com Mon Aug 26 13:47:31 2019 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 26 Aug 2019 09:47:31 -0400 Subject: [nova][glance][entropy][database] update glance metadata for nova instance In-Reply-To: References: <668e2201-3dd3-17e0-ae6a-736f0b314996@catalyst.net.nz> Message-ID: <00606cba-2f08-df2c-4342-fc997ec87342@gmail.com> On 8/26/19 4:24 AM, Sean Mooney wrote: > On Mon, 2019-08-26 at 18:18 +1200, Jordan Ansell wrote: >> Hi Openstack Discuss, >> >> I have an issue with nova not synchronizing changes between a glance >> image and its local image meta information in nova. >> >> I have updated a glance image with the property "hw_rng_model=virtio", >> and that successfully passes that to new instances created using the >> updated image. However existing instances do not receive this new property. >> >> I have located the image metadata within the nova database, in the >> **instance_system_metadata** table, and can see it's not updated for the >> existing instances, with the relevant rows only added for instances that >> are created when that property is present. The key being >> "image_hw_rng_model" and "virtio" being the value. >> >> Is there a way to tell nova to update the table for existing instances, >> and synchronize the two databases? Or is this the kind of thing that >> would need to be done *shudder* manually...? > This is ideally not something you would do at all. > Nova creates a local copy of the image metadata the instance was booted with, > intentionally, so as not to pick up changes you make to the image metadata after you boot > the instance. In some cases those changes could invalidate the host the instance is on, so > in general it is not considered safe to just sync them. > > For the random number generator it should be OK, but if you were to add a trait requirement > or alter the NUMA topology then it could invalidate the host as a candidate for that instance. > So if you want to do this then you need to update it manually, as nova is working as > intended by not syncing the data. >> If so, are there any >> experts out there who can point me to some documentation on doing this >> correctly before I go butcher a couple of dummy nova databases? > There are no docs for doing this as it is not a supported feature. > You are circumventing a safety feature we have in nova that prevents changes to running instances > after they are first booted via changes to the flavor extra specs or image metadata. >> >> Regards, >> Jordan >> >> > I agree with everything Sean says here. I just want to remind you that if you use the nova image-create action on an instance, the image properties put on the new image are pulled from the nova database. So if you do decide to update the DB manually (not that I am recommending that!), don't forget that any already existing snapshot images will have the "wrong" value for the property. (You can update them via the Images API.)
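As a concrete follow-up to Brian's last point, correcting an existing snapshot image through the Images API is a one-liner with the standard client; the image ID below is a placeholder:

    $ openstack image set --property hw_rng_model=virtio <snapshot-image-id>

Note this only fixes the image record itself; the copy nova made at boot time stays untouched, which is exactly the distinction discussed above.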
From smooney at redhat.com Mon Aug 26 13:51:21 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 26 Aug 2019 14:51:21 +0100 Subject: [all][tc] A web tool which helps administrators in managing openstack clusters In-Reply-To: References: <4f86a9ec-6a52-988d-3776-b41a0feb12a2@gmail.com> <108eb358-a81f-7bec-0d7c-76e56747ce78@ham.ie> Message-ID: On Mon, 2019-08-26 at 09:28 -0400, Camille Rodriguez wrote: > I like the idea and would like to see a demo if you get one ready. I agree > with the previous suggestions about using and improving the APIs instead > of querying the databases directly. Take this with a grain of salt as I don't know how alive/dead it is, but if you are using Go, instead of using the API directly (which you can do with the Go SDK) you could also look at oaktree, which was meant to provide a gRPC endpoint to wrap the API. https://opendev.org/x/oaktree (no update in 2 years, so I'm thinking pretty dead.) From Go, gRPC might be an easier way to interact, but there is a Go client https://opendev.org/x/golang-client too. The databases are considered internal to the project, so while you might be able to read from them you certainly cannot write to them. Maintaining models that work over time would be non-trivial, but if you use the API this should be less of an issue. Go is not really supported as a first-class language for clients/projects, so the tools and libs for Go support are more or less non-existent. > > In my previous organization, we also struggled with the management of > numerous big clusters, and we developed a home-made Django web portal to > satisfy queries and automation that Horizon could not do. I would be > curious to see which functions have been developed for 'openstack-admin' > and how similar the needs are. > > Cheers, > Camille Rodriguez > > On Mon, Aug 26, 2019 at 7:26 AM Graham Hayes wrote: > > > On 20/08/2019 19:05, Matt Riedemann wrote: > > > On 8/20/2019 7:02 AM, Mohammed Naser wrote: > > > > > Flexible: openstack-admin supports the fuzzy search for any important > > > > > field (e.g. display_name/uuid/ip_address/project_name of an instance), > > > > > which enables users to locate a particular object in no time. > > > > > > > > This is really useful to be honest, but we probably can work around it > > > > by using the filtering that APIs provide. > > > > > > > > > > Isn't this what searchlight integration into horizon was for? > > > > > > > Yes - AFAIK, there was even an idea of integrating searchlight into > > the OSC / nova CLI for list / search operations. > > > > > > I really liked the idea of searchlight for fuzzy search, being able > > to search for an IP and finding the floating IP, server, and DNS Record > > seemed like a great tool. > > > >

From mnaser at vexxhost.com Mon Aug 26 14:49:47 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 26 Aug 2019 10:49:47 -0400 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <20190826131938.ouya5phflfzqoexn@yuggoth.org> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> Message-ID: On Mon, Aug 26, 2019 at 9:22 AM Jeremy Stanley wrote: > > I've been on the OpenStack Technical Committee continuously for > several years, and would like to take this opportunity to thank > everyone in the community for their support and for the honor of > being chosen to represent you. I plan to continue participating in > the community, including in TC-led activities, but am stepping back > from reelection this round for a couple of reasons.
> > First, I want to provide others with an opportunity to serve our > community on the TC. I hope that by standing aside for now, others > will be encouraged to run. A regular influx of fresh opinions helps > us maintain the requisite level of diversity to engage in productive > debate. > > Second, the scheduling circumstances for this election, with the TC > and PTL activities combined, will be a bit more complicated for our > election officials. I'd prefer to stay engaged in officiating so > that we can ensure it goes as smoothly for everyone as possible. To > do this without risking a conflict of interest, I need to not be > running for office. Thanks for serving Jeremy! We're happy to see you still sticking around and helping out with the elections process. :) > It's quite possible I'll run again in 6 months, but for now I'm > planning to help behind the scenes instead. Best of luck to all who > decide to run for election to any of our leadership roles! > -- > Jeremy Stanley -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com

From jon at csail.mit.edu Mon Aug 26 14:50:18 2019 From: jon at csail.mit.edu (Jonathan Proulx) Date: Mon, 26 Aug 2019 10:50:18 -0400 Subject: not running for the TC this term In-Reply-To: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> References: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Message-ID: <20190826145018.7qlclxt3xfkpotuu@csail.mit.edu> On Mon, Aug 26, 2019 at 08:57:54AM -0400, Doug Hellmann wrote: :Since nominations open this week, I wanted to go ahead and let you all know that I will not be seeking re-election to the Technical Committee this term. : :My role within Red Hat has been changing over the last year, and while I am still working on projects related to OpenStack it is no longer my sole focus. I will still be around, but it is better for me to make room on the TC for someone with more time to devote to it. : :It’s hard to believe it has been 6 years since I first joined the Technical Committee. So much has happened in our community in that time, and I want to thank all of you for the trust you have placed in me through it all. It has been an honor to serve and help build the community. It's hard to imagine the TC without you at this point, but we have a great community and I'm sure someone fabulous will step up. Thanks for the passion and thought you've put into OpenStack over the years. -Jon : :Thank you, :Doug : :

From jon at csail.mit.edu Mon Aug 26 14:58:21 2019 From: jon at csail.mit.edu (Jonathan Proulx) Date: Mon, 26 Aug 2019 10:58:21 -0400 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <20190826131938.ouya5phflfzqoexn@yuggoth.org> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> Message-ID: <20190826145821.nhhi2hbi4nf5qcr3@csail.mit.edu> On Mon, Aug 26, 2019 at 01:19:38PM +0000, Jeremy Stanley wrote: :Second, the scheduling circumstances for this election, with the TC :and PTL activities combined, will be a bit more complicated for our :election officials. I'd prefer to stay engaged in officiating so :that we can ensure it goes as smoothly for everyone as possible. To :do this without risking a conflict of interest, I need to not be :running for office. Doing what most needs doing in the most ethical way possible is exactly what makes you such a great person. Thanks, and keep being you.
-Jon :It's quite possible I'll run again in 6 months, but for now I'm :planning to help behind the scenes instead. Best of luck to all who :decide to run for election to any of our leadership roles! :-- :Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL:

From mnaser at vexxhost.com Mon Aug 26 15:07:15 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 26 Aug 2019 11:07:15 -0400 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s the update for what happened in the OpenStack TC this week. You can get more information by checking for changes in the openstack/governance repository. # New projects - ansible-role-uwsgi (under openstack-ansible-roles): https://review.opendev.org/#/c/676193/ # General changes - Updated list of extra-atcs for Train (I18n): https://review.opendev.org/#/c/674049/ - Update of goal selection retrospective (unowned goals in etherpad and owned goals in repository): https://review.opendev.org/#/c/667932/ Thanks for tuning in! Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com

From miguel at mlavalle.com Mon Aug 26 15:16:44 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 26 Aug 2019 10:16:44 -0500 Subject: Shanghai PTG remote participation Message-ID: Hi, Some members of the Neutron team won't make it to Shanghai. They are interested, though, in participating remotely. Is the Foundation making any arrangements to facilitate this? Or is each team on its own to address this requirement? Regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL:

From amy at demarco.com Mon Aug 26 15:23:01 2019 From: amy at demarco.com (Amy Marrich) Date: Mon, 26 Aug 2019 10:23:01 -0500 Subject: not running for the TC this term In-Reply-To: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> References: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Message-ID: Doug, Thank you for everything you've done as a member of the TC and in the community! Thanks, Amy (spotz) On Mon, Aug 26, 2019 at 8:00 AM Doug Hellmann wrote: > Since nominations open this week, I wanted to go ahead and let you all > know that I will not be seeking re-election to the Technical Committee this > term. > > My role within Red Hat has been changing over the last year, and while I > am still working on projects related to OpenStack it is no longer my sole > focus. I will still be around, but it is better for me to make room on the > TC for someone with more time to devote to it. > > It’s hard to believe it has been 6 years since I first joined the > Technical Committee. So much has happened in our community in that time, > and I want to thank all of you for the trust you have placed in me through > it all. It has been an honor to serve and help build the community. > > Thank you, > Doug > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From amy at demarco.com Mon Aug 26 15:25:43 2019 From: amy at demarco.com (Amy Marrich) Date: Mon, 26 Aug 2019 10:25:43 -0500 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <20190826131938.ouya5phflfzqoexn@yuggoth.org> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> Message-ID: Fungi, Thanks for everything you've done as a member of the TC.
Amy (spotz) On Mon, Aug 26, 2019 at 8:21 AM Jeremy Stanley wrote: > I've been on the OpenStack Technical Committee continuously for > several years, and would like to take this opportunity to thank > everyone in the community for their support and for the honor of > being chosen to represent you. I plan to continue participating in > the community, including in TC-led activities, but am stepping back > from reelection this round for a couple of reasons. > > First, I want to provide others with an opportunity to serve our > community on the TC. I hope that by standing aside for now, others > will be encouraged to run. A regular influx of fresh opinions helps > us maintain the requisite level of diversity to engage in productive > debate. > > Second, the scheduling circumstances for this election, with the TC > and PTL activities combined, will be a bit more complicated for our > election officials. I'd prefer to stay engaged in officiating so > that we can ensure it goes as smoothly for everyone as possible. To > do this without risking a conflict of interest, I need to not be > running for office. > > It's quite possible I'll run again in 6 months, but for now I'm > planning to help behind the scenes instead. Best of luck to all who > decide to run for election to any of our leadership roles! > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From kendall at openstack.org Mon Aug 26 15:26:47 2019 From: kendall at openstack.org (Kendall Waters) Date: Mon, 26 Aug 2019 10:26:47 -0500 Subject: Shanghai PTG remote participation In-Reply-To: References: Message-ID: <85A9D2C8-94A3-4141-8DCE-993E08F962A0@openstack.org> Hi Miguel, As of right now, each team is on their own to address this; however, I will bring this up to the Foundation events team to see if we can come up with any ideas on how to help. Best, Kendall Kendall Waters OpenStack Marketing & Events kendall at openstack.org > On Aug 26, 2019, at 10:16 AM, Miguel Lavalle wrote: > > Hi, > > Some members of the Neutron team won't make it to Shanghai. They are interested, though, in participating remotely. Is the Foundation making any arrangements to facilitate this? Or is each team on its own to address this requirement? > > Regards > > Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jean-philippe at evrard.me Mon Aug 26 15:44:41 2019 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 26 Aug 2019 17:44:41 +0200 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <20190826145821.nhhi2hbi4nf5qcr3@csail.mit.edu> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> <20190826145821.nhhi2hbi4nf5qcr3@csail.mit.edu> Message-ID: <7898726a1046b09e4b4339f1d0ffe14e79669d6b.camel@evrard.me> On Mon, 2019-08-26 at 10:58 -0400, Jonathan Proulx wrote: > Doing what most needs doing in the most ethical way possible is > exactly what makes you such a great person. Thanks, and keep being > you. > > -Jon I so much agree with that. Thanks fungi for all the work, and for continuing behind the scenes! JP

From juliaashleykreger at gmail.com Mon Aug 26 16:22:00 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 26 Aug 2019 12:22:00 -0400 Subject: [tc][elections] Not seeking re-election to TC Message-ID: Greetings everyone, I wanted to officially let everyone know that I will not be running for re-election to the TC. I have enjoyed serving on the TC for the past two years.
Due to some changes in my personal and professional lives, it will not be possible for me to serve during this next term. Thanks everyone! -Julia

From jungleboyj at gmail.com Mon Aug 26 16:27:40 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Mon, 26 Aug 2019 11:27:40 -0500 Subject: [all] Please Do Not Translate Etherpads ... Message-ID: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> All, Not sure how else to address this, but we have had Cinder's etherpads repeatedly converted into Japanese recently. Want to make a public plea here to please be careful with translation tools and ensure that you aren't changing the etherpads in the process.  Rebuilding the content to its previous state is tedious. Thank you! Jay (irc: jungleboyj)

From mriedemos at gmail.com Mon Aug 26 16:34:21 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 26 Aug 2019 11:34:21 -0500 Subject: [all] Please Do Not Translate Etherpads ... In-Reply-To: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> Message-ID: <5c87e270-87d6-b295-fe88-a6c6fb1e5077@gmail.com> On 8/26/2019 11:27 AM, Jay Bryant wrote: > All, > > Not sure how else to address this, but we have had Cinder's etherpads > repeatedly converted into Japanese recently. > > Want to make a public plea here to please be careful with translation > tools and ensure that you aren't changing the etherpads in the process. > Rebuilding the content to its previous state is tedious. > > Thank you! > > Jay > > (irc: jungleboyj) > > Oh hi: https://etherpad.openstack.org/p/nova-ptg-queens -- Thanks, Matt

From mriedemos at gmail.com Mon Aug 26 16:37:05 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 26 Aug 2019 11:37:05 -0500 Subject: [all] Please Do Not Translate Etherpads ... In-Reply-To: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> Message-ID: <4a6c7edf-d1f3-4895-9b45-9163583cfbc7@gmail.com> On 8/26/2019 11:27 AM, Jay Bryant wrote: > Want to make a public plea here to please be careful with translation > tools and ensure that you aren't changing the etherpads in the process. For those that do want to translate, I suggest simply copying the content to a new etherpad with the same name but whatever locale suffix, e.g. https://etherpad.openstack.org/p/nova-ptg-queens-zh-CN. -- Thanks, Matt

From fungi at yuggoth.org Mon Aug 26 16:44:03 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 26 Aug 2019 16:44:03 +0000 Subject: [all] Please Do Not Translate Etherpads ... In-Reply-To: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> Message-ID: <20190826164403.uwhsxokdnldu4a3p@yuggoth.org> On 2019-08-26 11:27:40 -0500 (-0500), Jay Bryant wrote: > Not sure how else to address this, but we have had Cinder's > etherpads repeatedly converted into Japanese recently. > > Want to make a public plea here to please be careful with > translation tools and ensure that you aren't changing the > etherpads in the process.  Rebuilding the content to its previous > state is tedious. Yes, while I doubt pleas in here will reach the right audience, it can't hurt to try and maybe word of this problem will eventually get around. Also, we could remind folks that the share button near the top-right corner allows you to get a read-only URL by clicking the "Read only" checkbox.
Etherpad won't let you modify the content on those, and so they should be a much safer option for folks who do want to use whatever translation tool is currently replacing content in our read-write pads. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From smooney at redhat.com Mon Aug 26 16:51:56 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 26 Aug 2019 17:51:56 +0100 Subject: [all] Please Do Not Translate Etherpads ... In-Reply-To: <5c87e270-87d6-b295-fe88-a6c6fb1e5077@gmail.com> References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> <5c87e270-87d6-b295-fe88-a6c6fb1e5077@gmail.com> Message-ID: On Mon, 2019-08-26 at 11:34 -0500, Matt Riedemann wrote: > On 8/26/2019 11:27 AM, Jay Bryant wrote: > > All, > > > > Not sure how else to address this, but we have had Cinder's etherpads > > repeatedly converted into Japanese recently. > > > > Want to make a public plea here to please be careful with translation > > tools and ensure that you aren't changing the etherpads in the process. > > Rebuilding the content to its previous state is tedious. > > > > Thank you! > > > > Jay > > > > (irc: jungleboyj) > > > > > > Oh hi: > > https://etherpad.openstack.org/p/nova-ptg-queens Ya, that happened a few days after the PTG I believe, but the untranslated form used to be in the history; however, I can't seem to get that to work anymore. Maybe we purged the history since then?

From ianyrchoi at gmail.com Mon Aug 26 16:52:02 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Tue, 27 Aug 2019 01:52:02 +0900 Subject: [User-committee] UC candidacy. In-Reply-To: References: Message-ID: <3b2eb984-ad82-18cd-c87a-12621aa396e2@gmail.com> Hello Natal, First of all, thanks a lot for your UC candidacy! UC election is applicable for Active User Contributors (AUCs) as defined by the OpenStack User Committee Charter [1], and I would like to share that both UC election officials could not verify that you are an AUC. If you can share more details which support verifying that you are an AUC who is eligible for the election, please do not hesitate to share them with the election officials before the end of the nomination period. Although the election officials could not confirm that you are an eligible candidate for the election, I am pretty sure that you can become an AUC later, and can run for the next UC election(s). With many thanks, /Ian [1] https://governance.openstack.org/uc/reference/charter.html#active-user-contributors-auc Natal Ngétal wrote on 8/26/2019 9:29 PM: > Hi, > > I saw a mail on openstack-discuss about the new user committee election. I'm > Natal; I principally contribute to the tripleo part. I have also contributed a > little to the oslo projects and gertty, for example. I'm really interested in being a > candidate for this post, which may seem weird because I only started to > contribute in October 2018. But I think a fresh pair of eyes can be interesting for the > project. I wish to get more involved in the community. For example, > I want to organize local meetups regularly, go to the OpenStack summit and PTG, > and start to give talks and write articles on the project. I also wish to meet > more of the customers and work with them to improve the project, and understand the > customers' problems. With this role that would be easier. For me, it would > also be really interesting to learn more about the project and the community. It would > also be a real source of motivation and would help me in my work. > > My gerrit profile: > > https://review.opendev.org/#/q/owner:hobbestigrou%2540erakis.eu > > > Thanks
That can > be also really source motivation and that can help me for my work. > > My gerrit profile: > > https://review.opendev.org/#/q/owner:hobbestigrou%2540erakis.eu > > > Thanks > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee From mnaser at vexxhost.com Mon Aug 26 16:54:57 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 26 Aug 2019 12:54:57 -0400 Subject: [tc][elections] Not seeking re-election to TC In-Reply-To: References: Message-ID: On Mon, Aug 26, 2019 at 12:25 PM Julia Kreger wrote: > > Greetings everyone, > > I wanted to officially let everyone know that I will not be running > for re-election to the TC. > > I have enjoyed serving on the TC for the past two years. Due to some > changes in my personal and professional lives, It will not be possible > for me to serve during this next term. Thanks for serving for the past few cycles Julia! :) > Thanks everyone! > > -Julia > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From smooney at redhat.com Mon Aug 26 16:55:25 2019 From: smooney at redhat.com (Sean Mooney) Date: Mon, 26 Aug 2019 17:55:25 +0100 Subject: [all[ Please Do Not Translate Etherpads ... In-Reply-To: References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> <5c87e270-87d6-b295-fe88-a6c6fb1e5077@gmail.com> Message-ID: On Mon, 2019-08-26 at 17:51 +0100, Sean Mooney wrote: > On Mon, 2019-08-26 at 11:34 -0500, Matt Riedemann wrote: > > On 8/26/2019 11:27 AM, Jay Bryant wrote: > > > All, > > > > > > Not sure how else to address this, but we have had Cinder's etherpads > > > repeatedly converted into Japanese recently. > > > > > > Want to make a public plea here to please be careful with translation > > > tools and ensure that you aren't changing the etherpads in the process. > > > Rebuilding the content to its previous state is tedious. > > > > > > Thank you! > > > > > > Jay > > > > > > (irc: jungleboyj) > > > > > > > > > > Oh hi: > > > > https://etherpad.openstack.org/p/nova-ptg-queens > > ya that happend a few days after teh ptg i believe > but the untranslated form used to be in the histroy however i cant seam to get that > to work anymore. maybe we purged the history since then? actully it does work https://etherpad.openstack.org/p/nova-ptg-queens/timeslider#31857 i just took etherpath a really long time to start retriving the history and rerender > > > > From amy at demarco.com Mon Aug 26 17:02:44 2019 From: amy at demarco.com (Amy Marrich) Date: Mon, 26 Aug 2019 12:02:44 -0500 Subject: [tc][elections] Not seeking re-election to TC In-Reply-To: References: Message-ID: Julia, Thanks for everything you've done while on the TC! Amy (spotz) On Mon, Aug 26, 2019 at 11:24 AM Julia Kreger wrote: > Greetings everyone, > > I wanted to officially let everyone know that I will not be running > for re-election to the TC. > > I have enjoyed serving on the TC for the past two years. Due to some > changes in my personal and professional lives, It will not be possible > for me to serve during this next term. > > Thanks everyone! > > -Julia > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From amotoki at gmail.com Mon Aug 26 17:46:03 2019 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 27 Aug 2019 02:46:03 +0900 Subject: [all] Please Do Not Translate Etherpads ... In-Reply-To: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> Message-ID: As far as I have heard, one of the common cases where this happens is using the translation pop-up in the Chrome browser. Many(?) users think translations are done locally, as they are for static pages, but when it is applied to etherpad pages, the contents of the etherpad pages will be replaced. I remember being surprised by this behavior when I first heard about it... I am not sure what the right solution is, but I can share this information with the local community. Thanks, Akihiro Motoki On Tue, Aug 27, 2019 at 1:30 AM Jay Bryant wrote: > > All, > > Not sure how else to address this, but we have had Cinder's etherpads > repeatedly converted into Japanese recently. > > Want to make a public plea here to please be careful with translation > tools and ensure that you aren't changing the etherpads in the process. > Rebuilding the content to its previous state is tedious. > > Thank you! > > Jay > > (irc: jungleboyj) > >

From mnaser at vexxhost.com Mon Aug 26 17:47:53 2019 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 26 Aug 2019 13:47:53 -0400 Subject: [ansible-sig] meetings! In-Reply-To: References: Message-ID: Hi everyone, Just sending a reminder about the openstack-ansible-sig group. If you're interested in being involved, please go and select the day and time that would work best for you. This way we can find a time that works for everyone to schedule a meeting every week. Link to select available time slots: https://doodle.com/poll/6f9qdddk6icw92iq Looking forward to discussing with all of you! Regards, Mohammed On Wed, Aug 21, 2019 at 11:32 AM Mohammed Naser wrote: > > Hi everyone, > > I'd like to schedule a meeting every week so we can discuss the > details of what we can do together.. You'll find the link to the > meeting and time slots available below. If you're interested in being > involved, please go and select the day and time that would work best > for you, this way we can find a time that works for everyone. > > Link: https://doodle.com/poll/6f9qdddk6icw92iq > > Looking forward to discussing with all of you! > > Regards, > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com

From sfinucan at redhat.com Mon Aug 26 20:46:14 2019 From: sfinucan at redhat.com (Stephen Finucane) Date: Mon, 26 Aug 2019 21:46:14 +0100 Subject: [horizon] Horizon's openstack_auth is completely broken with Django 2.2, breaking all the dashboad stack in Debian Sid/Bullseye In-Reply-To: References: Message-ID: <7fed32a9f420ed004cae69b8438b4d8113524dd9.camel@redhat.com> On Sat, 2019-08-24 at 23:55 +0200, Thomas Goirand wrote: > Hi, > > Django 2.2 was uploaded to Debian Sid/Bullseye a few days after Buster > was released, removing at the same time any support for Python 2. This > broke lots of packages, but I managed to upload more than 40 times to > fix the situation. Though Horizon is still completely broken.
> > There's a bunch of issues related to Django 2.2 for Horizon. I managed > to patch Horizon for a few of them, but there's one bigger issue which I > can't solve myself, because I'm not skilled enough in Django, and don't > know enough about Horizon's internals. Let me explain. > > In Django 1.11, the login() function in openstack_auth/views.py was > deprecated in favor of a LoginView class. In Django 2.2, the login() > function is not supported anymore. This means that, if > openstack_auth/views.py doesn't get rewritten completely, Horizon will > continue to be completely broken in Debian Sid/Bullseye. > > Unfortunately, the Horizon team is understaffed, and the PTL told me > that they don't plan anything before Train+1. > > As a consequence, Horizon is completely broken in Debian Sid/Bullseye, > and will stay as it is if nobody steps up for the work. > > After a month and a half with this situation not being solved by anyone, > I'm hereby calling for help. > > I could attempt to work on this, though I need help and pointers at some > example implementation of this. It'd be nicer if someone more skilled > than me in Django was working on this anyway. Fortunately, I do know Django, so I've gone and tackled this series here. https://review.opendev.org/#/q/topic:django22+status:open+project:openstack/horizon All the unit tests are passing now, so I assume that's all that's needed. I didn't test anything manually. Probably a longer conversation to be had here regarding the long-term viability of Horizon if issues like this are going to become more common (probably similar to what we're having in docs, oslo, etc.) but that's one for someone else to lead. Cheers, Stephen

From jordan.ansell at catalyst.net.nz Mon Aug 26 23:00:49 2019 From: jordan.ansell at catalyst.net.nz (Jordan Ansell) Date: Tue, 27 Aug 2019 11:00:49 +1200 Subject: [nova][glance][entropy][database] update glance metadata for nova instance In-Reply-To: <00606cba-2f08-df2c-4342-fc997ec87342@gmail.com> References: <668e2201-3dd3-17e0-ae6a-736f0b314996@catalyst.net.nz> <00606cba-2f08-df2c-4342-fc997ec87342@gmail.com> Message-ID: On 27/08/19 1:47 AM, Brian Rosmaita wrote: > On 8/26/19 4:24 AM, Sean Mooney wrote: >> On Mon, 2019-08-26 at 18:18 +1200, Jordan Ansell wrote: >>> Hi Openstack Discuss, >>> >>> I have an issue with nova not synchronizing changes between a glance >>> image and its local image meta information in nova. >>> >>> I have updated a glance image with the property "hw_rng_model=virtio", >>> and that successfully passes that to new instances created using the >>> updated image. However existing instances do not receive this new property. >>> >>> I have located the image metadata within the nova database, in the >>> **instance_system_metadata** table, and can see it's not updated for the >>> existing instances, with the relevant rows only added for instances that >>> are created when that property is present. The key being >>> "image_hw_rng_model" and "virtio" being the value. >>> >>> Is there a way to tell nova to update the table for existing instances, >>> and synchronize the two databases? Or is this the kind of thing that >>> would need to be done *shudder* manually...? >> This is ideally not something you would do at all. >> Nova creates a local copy of the image metadata the instance was booted with, >> intentionally, so as not to pick up changes you make to the image metadata after you boot >> the instance.
In some cases those changes could invalidate the host the instance is on, so >> in general it is not considered safe to just sync them. >> >> For the random number generator it should be OK, but if you were to add a trait requirement >> or alter the NUMA topology then it could invalidate the host as a candidate for that instance. >> So if you want to do this then you need to update it manually, as nova is working as >> intended by not syncing the data. >>> If so, are there any >>> experts out there who can point me to some documentation on doing this >>> correctly before I go butcher a couple of dummy nova databases? >> There are no docs for doing this as it is not a supported feature. >> You are circumventing a safety feature we have in nova that prevents changes to running instances >> after they are first booted via changes to the flavor extra specs or image metadata. >>> Regards, >>> Jordan >>> >>> >> > I agree with everything Sean says here. I just want to remind you that > if you use the nova image-create action on an instance, the image > properties put on the new image are pulled from the nova database. So > if you do decide to update the DB manually (not that I am recommending > that!), don't forget that any already existing snapshot images will have > the "wrong" value for the property. (You can update them via the Images > API.) > Thanks Sean and Brian..! I hadn't considered the snapshots.. that's a really good point! And thank you for the warnings, I can see why this isn't something that's synchronized automatically :S Regards, Jordan

From ianyrchoi at gmail.com Mon Aug 26 23:43:26 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Tue, 27 Aug 2019 08:43:26 +0900 Subject: [User-committee] UC candidacy. In-Reply-To: References: <3b2eb984-ad82-18cd-c87a-12621aa396e2@gmail.com> Message-ID: <40a7c19e-83c0-5f51-aa8c-eaed57e4896b@gmail.com> (Adding the community at lists.openstack.org mailing list and some Foundation members related to this) Hello Ilya, As announced in [1], https://www.meetup.com/pro/osf/ is now an official group portal. As a UC election official, I discussed this at the last UC IRC meeting and got confirmation that the AUC list is retrieved from Meetup Pro [2]. Two more comments:  - @Ilya: It seems that the Russian user group is not listed in Meetup Pro. If that is the case, please talk with Ashlee to successfully register your user group in Meetup Pro.  - @Ashley @Jimmy: Is it possible to make a redirection of the URL groups.openstack.org to www.meetup.com/pro/osf/ ? With many thanks, /Ian [1] http://lists.openstack.org/pipermail/community/2019-April/001956.html [2] http://eavesdrop.openstack.org/meetings/uc/2019/uc.2019-08-26-15.03.log.html#l-68 Ilya Alekseyev wrote on 8/27/2019 5:34 AM: > Hi Ian! > > Thank you for sharing the AUC requirements. > Could you please clarify how an Official OpenStack User Group is currently defined? > To the best of my knowledge the Groups Portal was retired. > > Thank you in advance. > > Kind regards, > Ilya Alekseyev. > Russian OpenStack Community > > > On Mon, 26 Aug 2019 at 19:52, Ian Y. Choi >: > > Hello Natal, > > First of all, thanks a lot for your UC candidacy! > > UC election is applicable for Active User Contributors (AUCs) as > defined > by the OpenStack User Committee Charter [1], > and I would like to share that both UC election officials could not > verify that you are an AUC.
> > If you can share more details which support verifying that you are an > AUC who is eligible for the election, please do not hesitate to share > them with the election officials before the end of the nomination period. > > Although the election officials could not confirm that you are an > eligible candidate for the election, I am pretty sure that you can > become an AUC later, and can run for the next UC election(s). > > > With many thanks, > > /Ian > > [1] > https://governance.openstack.org/uc/reference/charter.html#active-user-contributors-auc > > Natal Ngétal wrote on 8/26/2019 9:29 PM: > > Hi, > > > > I saw a mail on openstack-discuss about the new user committee > election. I'm > > Natal; I principally contribute to the tripleo part. I have also > contributed a > > little to the oslo projects and gertty, for example. I'm really > interested in being a > > candidate for this post, which may seem weird because I only > started to > > contribute in October 2018. But I think a fresh pair of eyes can be > interesting for the > > project. I wish to get more involved in the > community. For example, > > I want to organize local meetups regularly, go to the OpenStack > summit and PTG, > > and start to give talks and write articles on the project. I > also wish to meet > > more of the customers and work with them to improve the project, > and understand the > > customers' problems. With this role that would be easier. For > me, it would > > also be really interesting to learn more about the project and the > community. It would > > also be a real source of motivation and would help me in my work. > > > > My gerrit profile: > > > > https://review.opendev.org/#/q/owner:hobbestigrou%2540erakis.eu > > > > > > Thanks > > > > _______________________________________________ > > User-committee mailing list > > User-committee at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > > > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee >

From zhangbailin at inspur.com Tue Aug 27 00:16:21 2019 From: zhangbailin at inspur.com (Brin Zhang(张百林)) Date: Tue, 27 Aug 2019 00:16:21 +0000 Subject: Re: [all] Please Do Not Translate Etherpads ... In-Reply-To: References: Message-ID: <6381822448a048a98cb4f3f18fad0384@inspur.com> > For those that do want to translate, I suggest simply copying the content to a new etherpad with the > same name but whatever locale suffix, e.g. https://etherpad.openstack.org/p/nova-ptg-queens-zh-CN. The released version can be set to read-only. Once the version is released, the content on the etherpad should remain the same. If you need to translate and view it, you can create another etherpad yourself, copy the original content over once, and edit that. I know that there are still a lot of translation tools available: Google Translate, the Youdao dictionary, etc. On Mon, 2019-08-26 at 17:51 +0100, Sean Mooney wrote: > On Mon, 2019-08-26 at 11:34 -0500, Matt Riedemann wrote: > > On 8/26/2019 11:27 AM, Jay Bryant wrote: > > > All, > > > > > > Not sure how else to address this, but we have had Cinder's > > > etherpads repeatedly converted into Japanese recently. > > > > > > Want to make a public plea here to please be careful with > > > translation tools and ensure that you aren't changing the etherpads in the process.
> > > Rebuilding the content to its previous state is tedious.
> > >
> > > Thank you!
> > >
> > > Jay
> > >
> > > (irc: jungleboyj)
> > >
> > >
> >
> > Oh hi:
> >
> > https://etherpad.openstack.org/p/nova-ptg-queens
>
> yeah, that happened a few days after the PTG I believe, but the
> untranslated form used to be in the history; however, I can't seem to get
> that to work anymore. Maybe we purged the history since then?
> Actually, it does work:
> https://etherpad.openstack.org/p/nova-ptg-queens/timeslider#31857
> It just took etherpad a really long time to start retrieving the history
> and re-render.

Thanks for re-rendering it.

From ilyaalekseyev at acm.org Mon Aug 26 20:34:16 2019
From: ilyaalekseyev at acm.org (Ilya Alekseyev)
Date: Mon, 26 Aug 2019 23:34:16 +0300
Subject: [User-committee] UC candidacy.
In-Reply-To: <3b2eb984-ad82-18cd-c87a-12621aa396e2@gmail.com>
References: <3b2eb984-ad82-18cd-c87a-12621aa396e2@gmail.com>
Message-ID: 

Hi Ian!

Thank you for sharing the AUC requirements.
Could you please clarify how an Official OpenStack User Group is currently
defined?
To the best of my knowledge the Groups Portal was retired.

Thank you in advance.

Kind regards,
Ilya Alekseyev.
Russian OpenStack Community

On Mon, 26 Aug 2019 at 19:52, Ian Y. Choi wrote:

> Hello Natal,
>
> First of all, thanks a lot for your UC candidacy!
>
> UC election is applicable for Active User Contributors (AUCs) as defined
> by the OpenStack User Committee Charter [1],
> and I would like to share that both UC election officials could not
> verify that you are an AUC.
>
> If you can share more details which would help verify that you are an
> AUC who is eligible for the election, please do not hesitate to share
> them with the election officials before the end of the nomination period.
>
> Although the election officials could not confirm that you are an
> eligible candidate for the election, I am pretty sure that you can
> become an AUC later, and can run for the next UC election(s).
>
> With many thanks,
>
> /Ian
>
> [1]
> https://governance.openstack.org/uc/reference/charter.html#active-user-contributors-auc
>
> Natal Ngétal wrote on 8/26/2019 9:29 PM:
> > Hi,
> >
> > I saw a mail on openstack-discuss about the new user committee election. I'm
> > Natal; I principally contribute to the tripleo part. I have also contributed a
> > little to the oslo projects and gertty, for example. I'm really interested in
> > running for this post. That may seem weird, because I only started to
> > contribute in October 2018, but I think a fresh pair of eyes can be
> > interesting for the project. I wish to get more involved in the community: for
> > example, I want to organize local meetups regularly, go to the OpenStack
> > summit and ptg, and start to give talks and write articles on the project. I
> > also wish to meet more of the customers and work with them to improve the
> > project and understand their problems. With this role that would be easier.
> > For me, it would also be really interesting to learn more about the project
> > and the community, a real source of motivation, and it would help me in my
> > work.
> >
> > My gerrit profile:
> >
> > https://review.opendev.org/#/q/owner:hobbestigrou%2540erakis.eu
> >
> >
> > Thanks
> >
> > _______________________________________________
> > User-committee mailing list
> > User-committee at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
> >
>
> _______________________________________________
> User-committee mailing list
> User-committee at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhangbailin at inspur.com Tue Aug 27 00:49:23 2019
From: zhangbailin at inspur.com (Brin Zhang (Zhang Bailin))
Date: Tue, 27 Aug 2019 00:49:23 +0000
Subject: [nova] Specify availability_zone to unshelve
Message-ID: <306bc64c2a7c470da432dadc47fc6914@inspur.com>

Hi all,

"Specify availability_zone to unshelve" in the nova runway is active now;
the end date is 2019-08-29. It is ready for review, and I hope to complete
this patch before the deadline, so please review:

https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged)

Thanks everyone.

Brin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhangbailin at inspur.com Tue Aug 27 00:54:42 2019
From: zhangbailin at inspur.com (Brin Zhang (Zhang Bailin))
Date: Tue, 27 Aug 2019 00:54:42 +0000
Subject: Re: [nova] Specify availability_zone to unshelve
Message-ID: <50b126eedd554b08838553bf1ba31a2f@inspur.com>

> Hi all, "Specify availability_zone to unshelve" in the nova runway is active now;
> the end date is 2019-08-29. It is ready for review, and I hope to complete
> this patch before the deadline, so please review:
> https://review.opendev.org/#/q/topic:bp/support-specifying-az-when-restore-shelved-server+(status:open+OR+status:merged)
>
> Thanks everyone.

This patch has already been reviewed by Matt, gmann, and Alex many times.

Brin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eli at ChinaBuckets.com Tue Aug 27 01:08:34 2019
From: eli at ChinaBuckets.com (Eliza)
Date: Tue, 27 Aug 2019 09:08:34 +0800
Subject: [tc][elections] Not seeking re-election to TC
In-Reply-To: 
References: 
Message-ID: <9c65999d-c6ed-420b-b419-698c9b5bcdb8@ChinaBuckets.com>

Thanks Julia too.

On 2019/8/27 1:02, Amy Marrich wrote:
> Julia,
>
> Thanks for everything you've done while on the TC!
>
> Amy (spotz)
>
> On Mon, Aug 26, 2019 at 11:24 AM Julia Kreger wrote:
>
>     Greetings everyone,
>
>     I wanted to officially let everyone know that I will not be running
>     for re-election to the TC.
>
>     I have enjoyed serving on the TC for the past two years. Due to some
>     changes in my personal and professional lives, It will not be possible
>     for me to serve during this next term.
>
>     Thanks everyone!
>
>     -Julia
>

From amotoki at gmail.com Tue Aug 27 05:58:54 2019
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 27 Aug 2019 14:58:54 +0900
Subject: [all][tc] PDF Community Goal Update
Message-ID: 

Hi all,

I would like to share the status of the PDF community goal. We made
great progress these two weeks.

Highlights
----------

It is time for working on the PDF community goal in your project!

- openstack-tox-docs job supports PDF build now.
  AJaeger and I worked on the job support of PDF doc build and
  publishing and have completed it.
- We confirmed PDF build with the sphinx latex builder works for several major projects like nova, cinder and neutron. The approach with the latex builder should work for most projects. - First PDF doc is now published from the i18n project. How to get started ------------------ - "How to get started" section in the PDF goal etherpad [1] explains the minimum steps. You can find useful examples there too. - To build PDF docs locally, you need to install LaTex related packages. See "To test locally" in the etherpad [1]. - If you hit problems during PDF build, check the common problems etherpad [2]. We are collecting knowledges there. - If you have questions, feel free to ask #openstack-doc IRC channel. Also Please sign up your name to "Project volunteers" in [1]. Useful links ------------ [1] https://etherpad.openstack.org/p/train-pdf-support-goal [2] https://etherpad.openstack.org/p/pdf-goal-train-common-problems [3] Ongoing reviews: https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+OR+status:merged) Thanks, Akihiro Motoki (amotoki) From dharmendra.kushwaha at india.nec.com Tue Aug 27 07:10:04 2019 From: dharmendra.kushwaha at india.nec.com (Dharmendra Kushwaha) Date: Tue, 27 Aug 2019 07:10:04 +0000 Subject: [Tacker] No weekly meeting today. Message-ID: Hi Team, I am not available for meeting today. So let's skip it or you can make it without my presence. Thanks & Regards Dharmendra Kushwaha ________________________________ The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only. It shall not attach any liability on the originator or NECTI or its affiliates. Any views or opinions presented in this email are solely those of the author and may not necessarily reflect the opinions of NECTI or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately. From nmagnezi at redhat.com Tue Aug 27 07:20:40 2019 From: nmagnezi at redhat.com (Nir Magnezi) Date: Tue, 27 Aug 2019 10:20:40 +0300 Subject: [octavia] Stepping down from Octavia core team Message-ID: Hi all, As some of you already know, I'm no longer involved in Octavia development and therefore stepping down from the core team. It has been a pleasure to be a part of the Octavia (and neutron-lbaas) core team for nearly three years. I learned a lot. Octavia is an excellent project with a strong core team, and I'm sure it will continue to grow and gain more adoption. Thanks, Nir -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgoncalves at redhat.com Tue Aug 27 07:29:11 2019 From: cgoncalves at redhat.com (Carlos Goncalves) Date: Tue, 27 Aug 2019 09:29:11 +0200 Subject: [octavia] Stepping down from Octavia core team In-Reply-To: References: Message-ID: On Tue, Aug 27, 2019 at 9:22 AM Nir Magnezi wrote: > > Hi all, > > As some of you already know, I'm no longer involved in Octavia development and therefore stepping down from the core team. > > It has been a pleasure to be a part of the Octavia (and neutron-lbaas) core team for nearly three years. I learned a lot. Thank you very much for all your hard work! I wish you success in all your future endeavors. > Octavia is an excellent project with a strong core team, and I'm sure it will continue to grow and gain more adoption. 
> > Thanks, > Nir From dtantsur at redhat.com Tue Aug 27 08:06:17 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 27 Aug 2019 10:06:17 +0200 Subject: [all[ Please Do Not Translate Etherpads ... In-Reply-To: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> Message-ID: <10cf3588-c6d3-5fbd-614d-92b805a0443d@redhat.com> On 8/26/19 6:27 PM, Jay Bryant wrote: > All, > > Not sure how else to address this, but we have had Cinder's etherpads repeatedly > converted into Japanese recently. > > Want to make a public plea here to please be careful with translation tools and > ensure that you aren't changing the etherpads in the process. Rebuilding the > content to its previous state is tedious. How did you even do it? The time slider won't allow me to go to yesterday.. > > Thank you! > > Jay > > (irc: jungleboyj) > > From cdent+os at anticdent.org Tue Aug 27 08:15:56 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 27 Aug 2019 09:15:56 +0100 (BST) Subject: [all][api-sig] Update on SIG status and seeking guidance from the community In-Reply-To: References: <16cc30634ec.125fb7873294246.2871343054827957126@ghanshyammann.com> Message-ID: On Mon, 26 Aug 2019, Michael McCune wrote: > On Sat, Aug 24, 2019 at 5:52 AM Ghanshyam Mann wrote: >> In addition, there are many TODO also there in current guidelines [1]. Few of them >> I remember were left as TODO when few guidelines were moved from nova to api-wg. >> >> [1] https://github.com/openstack/api-sig/search?q=TODO&unscoped_q=TODO >> > > that's a good highlight, thank you for the link. > > perhaps a good next step will to be collect all the pointers to work > that needs finishing and figure out if we can triage it and plan some > sort of schedule for fixing it. i imagine it will be slow given the > current staffing, but we could at least point people at things to look > for if they want to help. I think it's important to keep in mind that if we've had those TODOs for so long, many of them for much longer than 3 years, it is quite likely they aren't to dos, but instead are to don't: things people neither care about nor need. At least not enough to do anything about. Keeping the API-SIG on life support to address TODOs that old seems like make-work to me. It's time to start trimming the optional stuff and focus only on the things for which there is real user or developer demand [1]. Not just in the API-SIG but across all of OpenStack. [1] "Vendors" left out on purpose. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From thierry at openstack.org Tue Aug 27 09:04:49 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 27 Aug 2019 11:04:49 +0200 Subject: [all][tc][horizon] A web tool which helps administrators in managing openstack clusters In-Reply-To: References: Message-ID: <1c568051-94d6-2d09-2d98-ef69c790e4aa@openstack.org> Douglas Zhang wrote: > [...] > As Adrian Turjak said: > > The first major issue is that you connect to the databases of the > services directly. That's a major issue, both for long term > compatibility, and security. The APIs should always be the main > point of contact and the ONLY contract that the services have to > maintain. By connecting to the database directly you are now relying > on a data structure that can, and likely will change, and any > security and sanity checking on filters and queries is now handled > on your layer rather than the application itself. 
Not only that, but > your dashboard also now needs passwords for all the databases, and > by the sounds of it all the message queues. > > And as Mohammed Naser said: > > While I agree with you that querying database is much faster, this > introduces two issues that I imagine for users: [...] > > * also, it will make maintaining the project quite hard because I > don't think any projects expose a /stable/ database API. > > Well, we’re not surprised that our querying approach would be challenged > since it does sound unsafe. However, we have made some efforts to solve > problems which have been posed: [...] > > I hope my explanation is clear enough, and we’re willing to solve other > possible issues existing. Thanks for the replies! As I highlighted above, you did not address the main issue raised by Adrian and Mohammed, which is that the database schema in OpenStack services is not stable. Our project teams only commit to one public API, and that is the REST one. Querying the database directly is definitely faster (both in original coding and query performance), but you incur enormous technical debt by taking this shortcut. *Someone* will have to care about keeping openstack-admin queries and projects database schema in sync forever after. That means projects either need to commit to a stable database API in addition to a stable REST API, *or* openstack-admin maintainers will have to keep up with any database schema change in any future version of any component they interact with. At this point in the history of OpenStack, IMHO we need to care more about long-term sustainability with a limited number of maintainers, than about speed. There are definitely optimizations that can be made to make the slowest queries faster, without incurring massive technical debt that will have to be repaid by maintainers forever after. It's definitely less funny and rewarding than writing a superfast new database-connected dashboard, but I'd argue that it is where development resources should be spent today... -- Thierry Carrez (ttx) From cdent+os at anticdent.org Tue Aug 27 09:54:34 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 27 Aug 2019 10:54:34 +0100 (BST) Subject: [placement][plt][elections] Non-nomination for U Message-ID: tl;dr: I'd prefer not to be Placement PTL for the U cycle. I've accomplished most of what I set out to do and I think we have other members of the team who would make excellent PTLs (notably tetsuro and gibi). Longer version: Getting placement standing on its own feet, with a mostly reasonable architecture, reasonable performance, and a fairly complete set of features has pretty much been my single focus since the start of 2016. From both a personal and professional standpoint I need to focus some energy elsewhere, lest the deep ruts in my brain become permanent. I think it is important to not be shy about the goal of that three and a half year process: Get placement out of nova and get nova out of placement. The architectural differences necessary to make placement work well, be maintainable, and easily deployable [1] we're difficult to do within nova (for many reasons), but once extracted took a matter of days. Nova, neutron, blazar, zun, watcher, and cyborg all use, or are in the process of starting to use, placement. Ironic is thinking about it. So: some success. 
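(For anyone who has not looked at it recently, the consumer side of that
success is pleasingly small. A scheduling query is a single GET against the
API; the resource amounts here are made up for illustration:

    GET /allocation_candidates?resources=VCPU:2,MEMORY_MB:4096,DISK_GB:20

The response lists the candidate resource providers together with the
allocations that would satisfy the request.)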
But it's not all gravy, there are still some things left to do: While the fundamental concept of placement (presenting and keeping track of inventory of resources) is pretty straightforward, using it most effectively can be complicated, especially if nested resource providers are involved. There's a lot of work to be done to document how to best model data and queries. As the number of resources in the cloud grows it becomes increasingly important to "ask better questions"; knowing how to do that shouldn't require 3 years of experience. The "consumer types" feature is in flight. It will allow placement to be used to keep track of resource usage for some types of quota, which makes good sense: If we're already tracking "user X is using N VCPU" in placement it's wasteful tracking it somewhere else as well. Placement sharding, so that one placement can track multiple domains, is a feature that we've discussed in the last couple of years, thinking perhaps it could be useful to edge setups. Until the recent performance improvements it wouldn't have worked well, but now it might. However, until real users present themselves with a need for this feature, there's not much point moving forward with it. We should continue to do benchmarking and profiling. That work has done wonders to reveal unexpected areas for improvement. Being willing and able to analyse and adjust the system and the code, continuously, is key to ensuring the project does not become moribund or calcified. Thanks to everyone who has helped get placement to a healthy place. [1] For some examples: We've taken a common benchmark from 17s to .5s. It's now possible to run placement without a config file or without a manual database sync. Every module has a maintainability index of "A". Placement doesn't use eventlet (and is not accidentally infected by it) so it can benefit from whatever threading or multi-processing model whatever chosen WSGI server provides. Placement is a simple web app over a (not so simple) persistence and query layer. The web app and the associated database server(s) can be scaled independently, up and out. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From thierry at openstack.org Tue Aug 27 10:13:08 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 27 Aug 2019 12:13:08 +0200 Subject: [tc] Gradually reduce TC to 11 members over 2020 Message-ID: Hi everyone, The size of the TC is a trade-off between getting enough community representation and keeping enough members engaged and active. The current size (13 members) was defined by in 2013, as we moved from 5 directly-elected seats + all PTLs (which would have been 14 people) to a model that could better cope with our explosive growth. Since then, 13 has worked well, to ensure that new blood could come in at every cycle. I would argue that today, there are far less need to get wide representation in the TC (we are pretty aligned), and less difficulty to enter the TC (there is more turnover). In 2019 OpenStack, 13 members is a rather large group. It is becoming difficult to find 13 people able to commit to a significant amount of time over the coming year. And it is difficult to keep all those 13 members active and engaged. IMHO it is time to reduce the TC to 11 members, which sounds like a more reasonable and manageable size. We should encourage people to stop for a while and come back, rather than burn too many people at the same time. 
We should encourage more people to watch from the sidelines, rather than have a group so large that everyone that can be in it is in it. This would not be a big-bang change, just something we would gradually put in place over the next year. My strawman plan would be as follows: - Sept. 2019 election: no change, elect 6 seats as planned - Feb. 2020 election: elect 6 seats instead of 7 - Sept. 2020 election: elect 5 seats instead of 6 - Then elect 6 seats at every start-of-year and 5 at every end-of-year That would result in TC membership sizes: - U cycle (Q4 2019/Q1 2020): 13 members - V cycle (Q2/Q3 2020): 12 members - W cycle (Q4 2020/Q1 2021): 11 members - after that: 11 members FWIW, I intend to not run for reelection in the Feb 2020 election, so nobody else has to sacrifice their seat to that reform for that election :) Thoughts? -- Thierry Carrez (ttx) From cdent+os at anticdent.org Tue Aug 27 10:28:33 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 27 Aug 2019 11:28:33 +0100 (BST) Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: Message-ID: On Tue, 27 Aug 2019, Thierry Carrez wrote: > Thoughts? Good idea, but I'd argue that 11 is still too many. 7 or 5 might be a better target. Getting there might be complicated but I think it would be useful by providing an opportunity to winnow the focus of the TC to what matters (whatever we might decide that is). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From jim at jimrollenhagen.com Tue Aug 27 11:08:01 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 27 Aug 2019 07:08:01 -0400 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: Message-ID: On Tue, Aug 27, 2019 at 6:30 AM Chris Dent wrote: > On Tue, 27 Aug 2019, Thierry Carrez wrote: > > > Thoughts? > > Good idea, but I'd argue that 11 is still too many. 7 or 5 might be > a better target. Getting there might be complicated but I think it > would be useful by providing an opportunity to winnow the focus of > the TC to what matters (whatever we might decide that is). > +1. I would vote for a reduction to 11, but I would like to see it smaller. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Tue Aug 27 11:36:46 2019 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Tue, 27 Aug 2019 13:36:46 +0200 Subject: [ops] [glance] Need to move images to a different backend Message-ID: I have a Rocky installation where glance is configured with multiple backends. This [*] is the relevant part of the glance configuration. I now need to dismiss the file backend and move the images-snapshots stored there to rbd. I had in mind to follow this procedure (already successfully tested with an older version of OpenStack): 1) download the image from the file backend (glance image-download --file 2) upload the file to rbd, relying on this function: https://github.com/openstack/glance_store/blob/stable/rocky/glance_store/_drivers/rbd.py#L445 3) glance location-add --url rbd://xyz 4) glance location-delete --url file://abc I have problems with 3: # glance --debug location-add --url rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap 6bcc4eab-ed35-42dc-88bd-1d45de73b628 ... ... 
DEBUG:keystoneauth.session:PATCH call to image for https://cloud-areapd.pd.infn.it:9292/v2/images/6bcc4eab-ed35-42dc-88bd-1d45de73b628 used request id req-6c0598cc-582c-4ce7-a14c-5d1bb6ec4f14 Request returned failure status 400. DEBUG:glanceclient.common.http:Request returned failure status 400. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line 687, in main OpenStackImagesShell().main(argv) File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line 591, in main args.func(client, args) File "/usr/lib/python2.7/site-packages/glanceclient/v2/shell.py", line 749, in do_location_add image = gc.images.add_location(args.id, args.url, metadata) File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", line 448, in add_location response = self._send_image_update_request(image_id, add_patch) File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", line 598, in inner return RequestIdProxy(wrapped(*args, **kwargs)) File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", line 432, in _send_image_update_request data=json.dumps(patch_body)) File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 340, in patch return self.request(url, 'PATCH', **kwargs) File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 377, in request return self._handle_response(resp) File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 126, in _handle_response raise exc.from_response(resp, resp.content) HTTPBadRequest: 400 Bad Request: Invalid location (HTTP 400) 400 Bad Request: Invalid location (HTTP 400) As far as I can see with the "openstack" client there is not something to add/delete a location. So I guess it is necessary to change the 'direct_url' and 'locations' properties. If I try to change the direct_url property: # openstack image set --property direct_url='rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap' 6bcc4eab-ed35-42dc-88bd-1d45de73b628 403 Forbidden: Attribute 'direct_url' is read-only. (HTTP 403) Any hints ? Thanks, Massimo [*] [default] ... enabled_backends = file:file,http:http,rbd:rbd show_image_direct_url = true show_multiple_locations = true [glance_store] default_backend = rbd [file] filesystem_store_datadir = /var/lib/glance/images/ [rbd] rbd_store_chunk_size = 8 rbd_store_ceph_conf = /etc/ceph/ceph.conf rbd_store_user = glance-prod rbd_store_pool = images-prod -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Aug 27 12:16:55 2019 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 27 Aug 2019 14:16:55 +0200 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: Message-ID: Jim Rollenhagen wrote: > On Tue, Aug 27, 2019 at 6:30 AM Chris Dent > wrote: > > On Tue, 27 Aug 2019, Thierry Carrez wrote: > > > Thoughts? > > Good idea, but I'd argue that 11 is still too many. 7 or 5 might be > a better target. Getting there might be complicated but I think it > would be useful by providing an opportunity to winnow the focus of > the TC to what matters (whatever we might decide that is). > > +1. I would vote for a reduction to 11, but I would like to see it smaller. Yes, I thought about that. But if we reduce too fast, we might have too many incumbents going for reelection over a smaller amount of slots... and I wanted to make sure that we preserved opportunities for new blood to join, even during that transition. 
Maybe 9 is a better sweet spot, though. Or we could just continue the
gradual reduction over the next years.

-- 
Thierry

From dtantsur at redhat.com Tue Aug 27 12:38:13 2019
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 27 Aug 2019 14:38:13 +0200
Subject: [tc] Gradually reduce TC to 11 members over 2020
In-Reply-To: 
References: 
Message-ID: <68bdbb86-6b74-f669-a3af-c4e0b511b548@redhat.com>

On 8/27/19 2:16 PM, Thierry Carrez wrote:
> Jim Rollenhagen wrote:
>> On Tue, Aug 27, 2019 at 6:30 AM Chris Dent wrote:
>>
>>     On Tue, 27 Aug 2019, Thierry Carrez wrote:
>>
>>      > Thoughts?
>>
>>     Good idea, but I'd argue that 11 is still too many. 7 or 5 might be
>>     a better target. Getting there might be complicated but I think it
>>     would be useful by providing an opportunity to winnow the focus of
>>     the TC to what matters (whatever we might decide that is).
>>
>> +1. I would vote for a reduction to 11, but I would like to see it smaller.
>
> Yes, I thought about that. But if we reduce too fast, we might have too many
> incumbents going for reelection over a smaller amount of slots... and I wanted
> to make sure that we preserved opportunities for new blood to join, even during
> that transition.
>
> Maybe 9 is a better sweet spot, though. Or we could just continue the gradual
> reduction over the next years.
>

9 holders of Rings of Power is what I would propose if I had a say in that :)

From massimo.sgaravatto at gmail.com Tue Aug 27 13:02:58 2019
From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto)
Date: Tue, 27 Aug 2019 15:02:58 +0200
Subject: [ops] [glance] Need to move images to a different backend
In-Reply-To: 
References: 
Message-ID: 

Hi Abhishek

Yes, before trying the:

glance location-add --url
rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap
6bcc4eab-ed35-42dc-88bd-1d45de73b628

command, I uploaded the image to ceph. And indeed I can see it:

[root at ceph-mon-01 ~]# rbd info images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628
rbd image '6bcc4eab-ed35-42dc-88bd-1d45de73b628':
size 10GiB in 1280 objects
order 23 (8MiB objects)
block_name_prefix: rbd_data.b7b56aa1e4f0bc
format: 2
features: layering
flags:
create_timestamp: Tue Aug 27 11:42:32 2019

[root at ceph-mon-01 ~]# rbd snap ls images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628
SNAPID NAME SIZE TIMESTAMP
455500 snap 10GiB Tue Aug 27 11:46:37 2019
[root at ceph-mon-01 ~]#

Cheers, Massimo
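P.S. In case it is useful to others: the manual upload in step 2 was just a
few lines of Python mimicking what the glance_store rbd driver does. Roughly
(pool, user and conffile as in my configuration below; the whole file is
read into memory, which is fine for a test, while the real driver writes in
chunks):

import rados
import rbd

IMAGE_ID = '6bcc4eab-ed35-42dc-88bd-1d45de73b628'  # the glance image UUID

with open('6bcc4eab-ed35-42dc-88bd-1d45de73b628.img', 'rb') as f:
    data = f.read()

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='glance-prod')
cluster.connect()
ioctx = cluster.open_ioctx('images-prod')

# create the image, write the data, then snapshot and protect it:
# the rbd://fsid/pool/image/snap location URL points at that snapshot
rbd.RBD().create(ioctx, IMAGE_ID, len(data), old_format=False)
with rbd.Image(ioctx, IMAGE_ID) as image:
    image.write(data, 0)
    image.create_snap('snap')
    image.protect_snap('snap')

ioctx.close()
cluster.shutdown()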
On Tue, Aug 27, 2019 at 2:41 PM Abhishek Kekane wrote:

> Hi Massimo,
>
> I need to reproduce this first, but pretty much sure this issue is not
> because you configured multiple backends. What you are doing is downloading
> existing image to your local storage and then uploading it to glance using
> add-location operation.
>
> So before adding this using add-location operation have you manually
> uploaded this image to ceph/rbd? if not then how you are building your
> location url which you are mentioning in the add-location command? If this
> location is not existing then it might be a problem.
>
> Thank you,
>
> Abhishek
>
> On Tue, 27 Aug 2019 at 5:09 PM, Massimo Sgaravatto <
> massimo.sgaravatto at gmail.com> wrote:
>
>> I have a Rocky installation where glance is configured with multiple
>> backends. This [*] is the relevant part of the glance configuration.
>>
>> I now need to dismiss the file backend and move the images-snapshots
>> stored there to rbd.
>>
>>
>> I had in mind to follow this procedure (already successfully tested with
>> an older version of OpenStack):
>>
>> 1) download the image from the file backend (glance image-download --file
>> 2) upload the file to rbd, relying on this function:
>> https://github.com/openstack/glance_store/blob/stable/rocky/glance_store/_drivers/rbd.py#L445
>> 3) glance location-add --url rbd://xyz
>> 4) glance location-delete --url file://abc
>>
>> I have problems with 3:
>>
>> # glance --debug location-add --url
>> rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap
>> 6bcc4eab-ed35-42dc-88bd-1d45de73b628
>> ...
>> ...
>> DEBUG:keystoneauth.session:PATCH call to image for
>> https://cloud-areapd.pd.infn.it:9292/v2/images/6bcc4eab-ed35-42dc-88bd-1d45de73b628
>> used request id req-6c0598cc-582c-4ce7-a14c-5d1bb6ec4f14
>> Request returned failure status 400.
>> DEBUG:glanceclient.common.http:Request returned failure status 400.
URL: From ed at leafe.com Tue Aug 27 13:12:15 2019 From: ed at leafe.com (Ed Leafe) Date: Tue, 27 Aug 2019 08:12:15 -0500 Subject: [placement][plt][elections] Non-nomination for U In-Reply-To: References: Message-ID: <400B0ED9-81CA-477C-A75C-33998411C0D4@leafe.com> On Aug 27, 2019, at 4:54 AM, Chris Dent wrote: > > tl;dr: I'd prefer not to be Placement PTL for the U cycle. I've > accomplished most of what I set out to do and I think we have > other members of the team who would make excellent PTLs (notably > tetsuro and gibi). Many thanks for your leadership in gestating the idea of Placement all the way through seeing it through its adolescence. -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From ildiko.vancsa at gmail.com Tue Aug 27 13:24:17 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 27 Aug 2019 20:24:17 +0700 Subject: [os-upstream-institute][all] Mentors for Shanghai - sign up Message-ID: Hi Upstream Institute Mentors, We are having the Upstream Institute training again prior to the Shanghai Summit. The training will take place in the Lenovo office in Shanghai in the afternoon of November 2nd and all day on November 3rd. As always we are running the training as community effort and looking for mentors to help teaching new contributors how to participate in and become part of the community. If you are interested in becoming a mentor please reply to this thread or sign up on the training wiki: https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions#Shanghai_Training.2C_2019 For further details about the training you can check out the training website: https://docs.openstack.org/upstream-training/index.html Please let me know if you have any questions. Thanks and Best Regards, Ildikó (IRC: ildikov) From raubvogel at gmail.com Tue Aug 27 13:36:55 2019 From: raubvogel at gmail.com (Mauricio Tavares) Date: Tue, 27 Aug 2019 09:36:55 -0400 Subject: Getting the power state Message-ID: How does openstack finds whether an ironic node is on or off? In the following example, [raub at irony ~]$ openstack baremetal node list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | 6ae69caa-970e-4749-9545-7f3854299bef | nod1 | None | power off | available | False | | 86f28a6f-22c1-4352-a841-cfb370ed62ca | nod2 | None | power off | available | False | | bf8317d2-6ef1-4aa3-83b3-25bafa993711 | nod3 | None | power off | available | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ [raub at irony ~]$ I know the 3 nodes are alive and running, but openstack does not seem to agree with me. From dtantsur at redhat.com Tue Aug 27 13:47:54 2019 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 27 Aug 2019 15:47:54 +0200 Subject: Getting the power state In-Reply-To: References: Message-ID: <425951e4-184f-9bef-a6e0-048a7c40bfa1@redhat.com> On 8/27/19 3:36 PM, Mauricio Tavares wrote: > How does openstack finds whether an ironic node is on or off? 
In > the following example, > > [raub at irony ~]$ openstack baremetal node list > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power > State | Provisioning State | Maintenance | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | 6ae69caa-970e-4749-9545-7f3854299bef | nod1 | None | power > off | available | False | > | 86f28a6f-22c1-4352-a841-cfb370ed62ca | nod2 | None | power > off | available | False | > | bf8317d2-6ef1-4aa3-83b3-25bafa993711 | nod3 | None | power > off | available | False | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > [raub at irony ~]$ > > I know the 3 nodes are alive and running, but openstack does not seem > to agree with me. > Ironic periodically polls the nodes to learn their real state. The explanation can be: 1. It takes some time, so maybe you'll see the update in a while. 2. The operator has disabled the periodic sync in the configuration. 3. IPMI/Redfish/whatever driver you have provides wrong information. 4. IPMI/Redfish/whatever is no longer accessible, and ironic will soon report power_state=None and maintenance mode. Dmitry From elmiko at redhat.com Tue Aug 27 13:55:42 2019 From: elmiko at redhat.com (Michael McCune) Date: Tue, 27 Aug 2019 09:55:42 -0400 Subject: [all][api-sig] Update on SIG status and seeking guidance from the community In-Reply-To: References: <16cc30634ec.125fb7873294246.2871343054827957126@ghanshyammann.com> Message-ID: On Tue, Aug 27, 2019 at 4:21 AM Chris Dent wrote: > > On Mon, 26 Aug 2019, Michael McCune wrote: > > On Sat, Aug 24, 2019 at 5:52 AM Ghanshyam Mann wrote: > >> In addition, there are many TODO also there in current guidelines [1]. Few of them > >> I remember were left as TODO when few guidelines were moved from nova to api-wg. > >> > >> [1] https://github.com/openstack/api-sig/search?q=TODO&unscoped_q=TODO > >> > > > > that's a good highlight, thank you for the link. > > > > perhaps a good next step will to be collect all the pointers to work > > that needs finishing and figure out if we can triage it and plan some > > sort of schedule for fixing it. i imagine it will be slow given the > > current staffing, but we could at least point people at things to look > > for if they want to help. > > I think it's important to keep in mind that if we've had those TODOs > for so long, many of them for much longer than 3 years, it is quite > likely they aren't to dos, but instead are to don't: things people > neither care about nor need. At least not enough to do anything > about. > agreed > Keeping the API-SIG on life support to address TODOs that old seems > like make-work to me. It's time to start trimming the optional stuff > and focus only on the things for which there is real user or > developer demand [1]. Not just in the API-SIG but across all of > OpenStack. > also agreed, this is why i think it would be nice to do a triage on the api-sig issues and just figure out what we will want to fix and what is likely just old kruft. peace o/ > [1] "Vendors" left out on purpose. 
> > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent From akekane at redhat.com Tue Aug 27 14:17:41 2019 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 27 Aug 2019 19:47:41 +0530 Subject: [ops] [glance] Need to move images to a different backend In-Reply-To: References: Message-ID: Hi Massimo, Cool, thank you for confirmation, I will try to reproduce this in my environment. It will take some time, will revert back to you once done. Meanwhile could you please try the same with single store (disabling the multiple stores)? Thank you, Abhishek On Tue, 27 Aug 2019 at 6:33 PM, Massimo Sgaravatto < massimo.sgaravatto at gmail.com> wrote: > Hi Abhishek > > Yes, before trying the: > > glance location-add --url > rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap > 6bcc4eab-ed35-42dc-88bd-1d45de73b628 > > command, I uploaded the image to ceph. And indeed I can see it: > > > > [root at ceph-mon-01 ~]# rbd info > images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628 > rbd image '6bcc4eab-ed35-42dc-88bd-1d45de73b628': > size 10GiB in 1280 objects > order 23 (8MiB objects) > block_name_prefix: rbd_data.b7b56aa1e4f0bc > format: 2 > features: layering > flags: > create_timestamp: Tue Aug 27 11:42:32 2019 > > [root at ceph-mon-01 ~]# rbd snap ls > images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628 > SNAPID NAME SIZE TIMESTAMP > 455500 snap 10GiB Tue Aug 27 11:46:37 2019 > [root at ceph-mon-01 ~]# > > > Cheers, Massimo > > On Tue, Aug 27, 2019 at 2:41 PM Abhishek Kekane > wrote: > >> Hi Massimo, >> >> I need to reproduce this first, but pretty much sure this issue is not >> because you configured multiple backends. What you are doing is downloading >> existing image to your local storage and then uploading it to glance using >> add-location operation. >> >> So before adding this using add-location operation have you manually >> uploaded this image to ceph/rbd? if not then how you are building your >> location url which you are mentioning in the add-location command? If this >> location is not existing then it might be a problem. >> >> Thank you, >> >> Abhishek >> >> On Tue, 27 Aug 2019 at 5:09 PM, Massimo Sgaravatto < >> massimo.sgaravatto at gmail.com> wrote: >> >>> I have a Rocky installation where glance is configured with multiple >>> backends. This [*] is the relevant part of the glance configuration. >>> >>> I now need to dismiss the file backend and move the images-snapshots >>> stored there to rbd. >>> >>> >>> I had in mind to follow this procedure (already successfully tested with >>> an older version of OpenStack): >>> >>> 1) download the image from the file backend (glance image-download >>> --file >>> 2) upload the file to rbd, relying on this function: >>> https://github.com/openstack/glance_store/blob/stable/rocky/glance_store/_drivers/rbd.py#L445 >>> 3) glance location-add --url rbd://xyz >>> 4) glance location-delete --url file://abc >>> >>> I have problems with 3: >>> >>> # glance --debug location-add --url >>> rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap >>> 6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>> ... >>> ... >>> DEBUG:keystoneauth.session:PATCH call to image for >>> https://cloud-areapd.pd.infn.it:9292/v2/images/6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>> used request id req-6c0598cc-582c-4ce7-a14c-5d1bb6ec4f14 >>> Request returned failure status 400. >>> DEBUG:glanceclient.common.http:Request returned failure status 400. 
>>> Traceback (most recent call last): >>> File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line >>> 687, in main >>> OpenStackImagesShell().main(argv) >>> File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line >>> 591, in main >>> args.func(client, args) >>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/shell.py", line >>> 749, in do_location_add >>> image = gc.images.add_location(args.id, args.url, metadata) >>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", >>> line 448, in add_location >>> response = self._send_image_update_request(image_id, add_patch) >>> File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", >>> line 598, in inner >>> return RequestIdProxy(wrapped(*args, **kwargs)) >>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", >>> line 432, in _send_image_update_request >>> data=json.dumps(patch_body)) >>> File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line >>> 340, in patch >>> return self.request(url, 'PATCH', **kwargs) >>> File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", >>> line 377, in request >>> return self._handle_response(resp) >>> File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", >>> line 126, in _handle_response >>> raise exc.from_response(resp, resp.content) >>> HTTPBadRequest: 400 Bad Request: Invalid location (HTTP 400) >>> 400 Bad Request: Invalid location (HTTP 400) >>> >>> >>> As far as I can see with the "openstack" client there is not something >>> to add/delete a location. >>> So I guess it is necessary to change the 'direct_url' and 'locations' >>> properties. >>> >>> If I try to change the direct_url property: >>> >>> # openstack image set --property >>> direct_url='rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap' >>> 6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>> 403 Forbidden: Attribute 'direct_url' is read-only. (HTTP 403) >>> >>> Any hints ? >>> Thanks, Massimo >>> >>> [*] >>> >>> [default] >>> ... >>> enabled_backends = file:file,http:http,rbd:rbd >>> show_image_direct_url = true >>> show_multiple_locations = true >>> >>> [glance_store] >>> default_backend = rbd >>> >>> [file] >>> filesystem_store_datadir = /var/lib/glance/images/ >>> >>> >>> [rbd] >>> rbd_store_chunk_size = 8 >>> rbd_store_ceph_conf = /etc/ceph/ceph.conf >>> rbd_store_user = glance-prod >>> rbd_store_pool = images-prod >>> >>> -- >> Thanks & Best Regards, >> >> Abhishek Kekane >> > -- Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Aug 27 14:48:48 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 27 Aug 2019 09:48:48 -0500 Subject: not running for the TC this term In-Reply-To: References: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Message-ID: Thank you for all your hard work and guidance, Doug. It's been invaluable as a newcomer to the TC. On Mon, Aug 26, 2019 at 10:26 AM Amy Marrich wrote: > Doug, > > Thank you for everything you've done as a member of the TC and in the > community! > > Thanks, > > Amy (spotz) > > On Mon, Aug 26, 2019 at 8:00 AM Doug Hellmann > wrote: > >> Since nominations open this week, I wanted to go ahead and let you all >> know that I will not be seeking re-election to the Technical Committee this >> term. 
>> >> My role within Red Hat has been changing over the last year, and while I >> am still working on projects related to OpenStack it is no longer my sole >> focus. I will still be around, but it is better for me to make room on the >> TC for someone with more time to devote to it. >> >> It’s hard to believe it has been 6 years since I first joined the >> Technical Committee. So much has happened in our community in that time, >> and I want to thank all of you for the trust you have placed in me through >> it all. It has been an honor to serve and help build the community. >> >> Thank you, >> Doug >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Aug 27 14:49:57 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 27 Aug 2019 09:49:57 -0500 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <7898726a1046b09e4b4339f1d0ffe14e79669d6b.camel@evrard.me> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> <20190826145821.nhhi2hbi4nf5qcr3@csail.mit.edu> <7898726a1046b09e4b4339f1d0ffe14e79669d6b.camel@evrard.me> Message-ID: On Mon, Aug 26, 2019 at 10:49 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > On Mon, 2019-08-26 at 10:58 -0400, Jonathan Proulx wrote: > > Doing what most needs doing in the most ethical way possible is > > exactly what makes you such a great person. Thanks, and keep being > > you. > > > > -Jon > > I so much agree with that. > > Thank fungi for all the work, and to continue behind the scenes! > ++ I couldn't agree more. Thanks, Jeremy! > > JP > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Aug 27 14:50:43 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 27 Aug 2019 09:50:43 -0500 Subject: [tc][elections] Not seeking re-election to TC In-Reply-To: <9c65999d-c6ed-420b-b419-698c9b5bcdb8@ChinaBuckets.com> References: <9c65999d-c6ed-420b-b419-698c9b5bcdb8@ChinaBuckets.com> Message-ID: Thanks for serving, Julia! On Mon, Aug 26, 2019 at 8:11 PM Eliza wrote: > Thanks Julia too. > > on 2019/8/27 1:02, Amy Marrich wrote: > > Julia, > > > > Thanks for everything you've done while on the TC! > > > > Amy (spotz) > > > > On Mon, Aug 26, 2019 at 11:24 AM Julia Kreger > > > > wrote: > > > > Greetings everyone, > > > > I wanted to officially let everyone know that I will not be running > > for re-election to the TC. > > > > I have enjoyed serving on the TC for the past two years. Due to some > > changes in my personal and professional lives, It will not be > possible > > for me to serve during this next term. > > > > Thanks everyone! > > > > -Julia > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue Aug 27 14:52:37 2019 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 27 Aug 2019 08:52:37 -0600 Subject: [tripleo] PTL non-candidacy Message-ID: Hey folks, The PTL election cycle is upon us. I will not be running again for the TripleO PTL. I think this last cycle has been fairly successful from a stability stand point. We've continued to improve the deployment process and address a decent amount of tech debt. I will still be around in the community, but I think it's time for others to step up. If anyone has any questions about the position, feel free to reach out to me. Thanks, -Alex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lbragstad at gmail.com Tue Aug 27 14:59:57 2019 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 27 Aug 2019 09:59:57 -0500 Subject: [tc] not seeking reelection Message-ID: Hi all, Now that the nomination period is open for TC candidates, I'd like to say that I won't be running for a second term on the TC. My time on the TC has enriched my understanding of open-source communities and I appreciate all the time people put into helping me get up-to-speed. I wish the best of luck to folks putting their hat in the ring this week! Thanks all, Lance -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Aug 27 15:25:32 2019 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 27 Aug 2019 11:25:32 -0400 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: Message-ID: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> On 27/08/19 6:13 AM, Thierry Carrez wrote: > Hi everyone, > > The size of the TC is a trade-off between getting enough community > representation and keeping enough members engaged and active. The > current size (13 members) was defined by in 2013, as we moved from 5 > directly-elected seats + all PTLs (which would have been 14 people) to a > model that could better cope with our explosive growth. Since then, 13 > has worked well, to ensure that new blood could come in at every cycle. > > I would argue that today, there are far less need to get wide > representation in the TC (we are pretty aligned), and less difficulty to > enter the TC (there is more turnover). In 2019 OpenStack, 13 members is > a rather large group. It is becoming difficult to find 13 people able to > commit to a significant amount of time over the coming year. And it is > difficult to keep all those 13 members active and engaged. > > IMHO it is time to reduce the TC to 11 members, which sounds like a more > reasonable and manageable size. We should encourage people to stop for a > while and come back, rather than burn too many people at the same time. > We should encourage more people to watch from the sidelines, rather than > have a group so large that everyone that can be in it is in it. > > This would not be a big-bang change, just something we would gradually > put in place over the next year. My strawman plan would be as follows: > > - Sept. 2019 election: no change, elect 6 seats as planned > - Feb. 2020 election: elect 6 seats instead of 7 > - Sept. 2020 election: elect 5 seats instead of 6 > - Then elect 6 seats at every start-of-year and 5 at every end-of-year > > That would result in TC membership sizes: > > - U cycle (Q4 2019/Q1 2020): 13 members > - V cycle (Q2/Q3 2020): 12 members > - W cycle (Q4 2020/Q1 2021): 11 members > - after that: 11 members TBH, I'd suggest we: * Aim for 9 * Reduce by 2 each time so we don't end up with an even number * Start now when 4 people have announced they're stepping down (though in a perfect world this would have been the cycle when 7 seats are up for election) > FWIW, I intend to not run for reelection in the Feb 2020 election, so > nobody else has to sacrifice their seat to that reform for that election :) That is still my intention also, so nobody would have to sacrifice their seat even if we reduced to 9. cheers, Zane. 
From amy at demarco.com Tue Aug 27 15:29:06 2019 From: amy at demarco.com (Amy Marrich) Date: Tue, 27 Aug 2019 10:29:06 -0500 Subject: [tc] not seeking reelection In-Reply-To: References: Message-ID: Lance, Thanks for everything you've done as part of the TC! Amy (spotz) On Tue, Aug 27, 2019 at 10:01 AM Lance Bragstad wrote: > Hi all, > > Now that the nomination period is open for TC candidates, I'd like to say > that I won't be running for a second term on the TC. > > My time on the TC has enriched my understanding of open-source communities > and I appreciate all the time people put into helping me get up-to-speed. I > wish the best of luck to folks putting their hat in the ring this week! > > Thanks all, > > Lance > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gr at ham.ie Tue Aug 27 15:31:10 2019 From: gr at ham.ie (Graham Hayes) Date: Tue, 27 Aug 2019 16:31:10 +0100 Subject: [designate][ptl][elections] Non Candidacy for U cycle Message-ID: Hi All, As I laid out in my Train nomination[1], Train was my last cycle as PTL for Designate. Change is good for projects, and Designate is well overdue for it.[2] The 4 years (with a 6 month break[3]), 4 or 5 different companies, and 2 API versions were a real experience - we did some great work as a team. As a project Designate has come a long way from the start, and team over the years really deserves congratulations. From something that was an outsider project in Folsem and Grizzly, to having a Trademark program now, we are in completely different space, and that is a testament to the wide group of people that have contributed (in terms of code, but also in running it, reporting bugs, helping us with docs, and translating it into many different languages) The project is used by some of the largest installs of OpenStack, and provides really useful service for cloud environments. We do have some work left to do, and as "cloud native" evolves, Designate will as well. I am not going to disappear - I still will be around to work on the project, explain why we did somethings that look weird[4], and help people set it up, but just not as a PTL. If anyone has any questions on what the role entails, or are interested but want some more info, please reach out, via email, mugsie on freenode, or on the #openstack-dns channel Thanks everyone, - Graham 1 - In 7 years, we have had 3 PTLs, and one of them was for a single cycle 2 - https://opendev.org/openstack/election/raw/branch/master/candidates/train/Designate/gr at ham.ie 3 - thanks timsim for giving me a break :) 4 - Most of the time, its my fault -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Tue Aug 27 15:49:15 2019 From: gr at ham.ie (Graham Hayes) Date: Tue, 27 Aug 2019 16:49:15 +0100 Subject: [tc] not seeking reelection In-Reply-To: References: Message-ID: <39a30f12-0b14-fcf6-f209-348679bee3f6@ham.ie> On 27/08/2019 15:59, Lance Bragstad wrote: > Hi all, > > Now that the nomination period is open for TC candidates, I'd like to > say that I won't be running for a second term on the TC. > > My time on the TC has enriched my understanding of open-source > communities and I appreciate all the time people put into helping me get > up-to-speed. I wish the best of luck to folks putting their hat in the > ring this week! 
> > Thanks all, > > Lance Thanks for all your work on the TC in your term, it was really appreciated. - Graham -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Tue Aug 27 15:51:31 2019 From: gr at ham.ie (Graham Hayes) Date: Tue, 27 Aug 2019 16:51:31 +0100 Subject: [tc][elections] Not seeking re-election to TC In-Reply-To: References: Message-ID: On 26/08/2019 17:22, Julia Kreger wrote: > Greetings everyone, > > I wanted to officially let everyone know that I will not be running > for re-election to the TC. > > I have enjoyed serving on the TC for the past two years. Due to some > changes in my personal and professional lives, It will not be possible > for me to serve during this next term. > > Thanks everyone! > > -Julia > Thanks for all the work, and helping us try and be more open as a community. - Graham -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Tue Aug 27 15:52:55 2019 From: gr at ham.ie (Graham Hayes) Date: Tue, 27 Aug 2019 16:52:55 +0100 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <20190826131938.ouya5phflfzqoexn@yuggoth.org> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> Message-ID: <65064d65-4bf4-ea54-ae6b-d538f0e39ed1@ham.ie> On 26/08/2019 14:19, Jeremy Stanley wrote: > I've been on the OpenStack Technical Committee continuously for > several years, and would like to take this opportunity to thank > everyone in the community for their support and for the honor of > being chosen to represent you. I plan to continue participating in > the community, including in TC-led activities, but am stepping back > from reelection this round for a couple of reasons. > > First, I want to provide others with an opportunity to serve our > community on the TC. I hope that by standing aside for now, others > will be encouraged to run. A regular influx of fresh opinions helps > us maintain the requisite level of diversity to engage in productive > debate. > > Second, the scheduling circumstances for this election, with the TC > and PTL activities combined, will be a bit more complicated for our > election officials. I'd prefer to stay engaged in officiating so > that we can ensure it goes as smoothly for everyone a possible. To > do this without risking a conflict of interest, I need to not be > running for office. > > It's quite possible I'll run again in 6 months, but for now I'm > planning to help behind the scenes instead. Best of luck to all who > decide to run for election to any of our leadership roles! > Thanks for serving as part of the institutional memory of the TC - it has been really useful to have you and the context of past decisions when we look at things today. - Graham -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From fungi at yuggoth.org Tue Aug 27 15:57:42 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Aug 2019 15:57:42 +0000 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> Message-ID: <20190827155742.tpleittnofdnmhe5@yuggoth.org> On 2019-08-27 11:25:32 -0400 (-0400), Zane Bitter wrote: [...] > * Start now when 4 people have announced they're stepping down [...] With my election official hat on, this is rather short notice. We've announced the number of seats up for election and nominations start at the (UTC) end of today. While I agree reducing the size of the TC sounds worthwhile, realistically I think we need to announce at the start of the new term that it's the plan for the following cycle's election. With my (soon to be former, at least for a cycle) TC member hat on, I suggest that we hold this election as planned but circulate the suggestion of electing fewer seats in coming cycles. Then once the new TC has been confirmed let them decide whether that should be the plan as soon into the new term as possible, and announce that decision so the community will be prepared for it in the next election. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Tue Aug 27 16:05:04 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 27 Aug 2019 11:05:04 -0500 Subject: [oslo] Proposing Gabriele Santomaggio as oslo.messaging core In-Reply-To: References: Message-ID: Okay, it's been about a week and I think we've heard from everyone we're going to, so I've added Gabriele to the oslo.messaging core team. Welcome! -Ben On 8/21/19 9:25 AM, Ben Nemec wrote: > Hello Norsk, > > It is my pleasure to propose Gabriele Santomaggio (gsantomaggio) as a > new member of the oslo.messaging core team. He has been contributing to > the project for about a cycle now and has gotten up to speed on our > development practices. Oh, and he wrote the book on RabbitMQ[0]. :-) > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. > > Thanks. > > -Ben > > 0: http://shop.oreilly.com/product/9781849516501.do > From nate.johnston at redhat.com Tue Aug 27 16:19:27 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Tue, 27 Aug 2019 12:19:27 -0400 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <87wofepdfu.fsf@meyer.lemoncheese.net> References: <87y305onco.fsf@meyer.lemoncheese.net> <87wofepdfu.fsf@meyer.lemoncheese.net> Message-ID: <20190827161927.h462vwwij6d2nraq@bishop> On Thu, Aug 15, 2019 at 10:04:53AM -0700, James E. Blair wrote: > Hi, > > We have made the switch to begin storing all of the build logs from Zuul > in Swift. > > Each build's logs will be stored in one of 7 randomly chosen Swift > regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those > providers! > > You'll note that the links in Gerrit to the Zuul jobs now go to a page > on the Zuul web app. A lot of the features previously available on the > log server are now available there, plus some new ones. > > If you're looking for a link to a docs preview build, you'll find that > on the build page under the "Artifacts" section now. 
> > If you're curious about where your logs ended up, you can see the Swift > hostname under the "logs_url" row in the summary table. > > Please let us know if you have any questions or encounter any issues, > either here, or in #openstack-infra on IRC. Where should I go to see the logs for periodic jobs? I assume these have been transferred over, since (for example) the neutron periodic jobs stopped logging their daily runs after 8/15, except for one time on 8/24. Thanks, Nate From cboylan at sapwetik.org Tue Aug 27 16:23:56 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 27 Aug 2019 09:23:56 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <20190827161927.h462vwwij6d2nraq@bishop> References: <87y305onco.fsf@meyer.lemoncheese.net> <87wofepdfu.fsf@meyer.lemoncheese.net> <20190827161927.h462vwwij6d2nraq@bishop> Message-ID: <4dd6b43b-82e9-4df6-afec-6b175a34c97f@www.fastmail.com> On Tue, Aug 27, 2019, at 9:19 AM, Nate Johnston wrote: > On Thu, Aug 15, 2019 at 10:04:53AM -0700, James E. Blair wrote: > > Hi, > > > > We have made the switch to begin storing all of the build logs from Zuul > > in Swift. > > > > Each build's logs will be stored in one of 7 randomly chosen Swift > > regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those > > providers! > > > > You'll note that the links in Gerrit to the Zuul jobs now go to a page > > on the Zuul web app. A lot of the features previously available on the > > log server are now available there, plus some new ones. > > > > If you're looking for a link to a docs preview build, you'll find that > > on the build page under the "Artifacts" section now. > > > > If you're curious about where your logs ended up, you can see the Swift > > hostname under the "logs_url" row in the summary table. > > > > Please let us know if you have any questions or encounter any issues, > > either here, or in #openstack-infra on IRC. > > Where should I go to see the logs for periodic jobs? I assume these have been > transferred over, since (for example) the neutron periodic jobs stopped logging > their daily runs after 8/15, except for one time on 8/24. You can use the builds tab in the dashboard to query for previous builds including those of periodic jobs. For example http://zuul.openstack.org/builds?pipeline=periodic-stable&project=openstack%2Fneutron will get you the periodic stable jobs for neutron. Clark From dtroyer at gmail.com Tue Aug 27 16:27:33 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 27 Aug 2019 11:27:33 -0500 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: <20190827155742.tpleittnofdnmhe5@yuggoth.org> References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> <20190827155742.tpleittnofdnmhe5@yuggoth.org> Message-ID: On Tue, Aug 27, 2019 at 11:02 AM Jeremy Stanley wrote: > With my (soon to be former, at least for a cycle) TC member hat on, > I suggest that we hold this election as planned but circulate the > suggestion of electing fewer seats in coming cycles. Then once the > new TC has been confirmed let them decide whether that should be the > plan as soon into the new term as possible, and announce that > decision so the community will be prepared for it in the next > election. I completely agree with this, it is usually around election time (often just after) that we think about these things but making a change for the impending election is too quick. I like Thierry's suggested schedule and I like the target of 9. 
I do not have a strong preference for getting to 9 in 2 vs 4 cycles (2 at a time vs 1 at a time). As has been mentioned elsewhere (IRC?) having an even number should not be an issue, looking back I can recall only one vote where it might have been a problem and in that case since it was clear we did not have consensus the plan was dropped anyway. FWIW, StarlingX had an even number of members on its Technical Steering Committee (TC equivalent) for two cycles and it has not been a problem for basically the same reason, when driving for even close consensus you generally do not get ties. dt -- Dean Troyer dtroyer at gmail.com From nate.johnston at redhat.com Tue Aug 27 16:30:41 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Tue, 27 Aug 2019 12:30:41 -0400 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <4dd6b43b-82e9-4df6-afec-6b175a34c97f@www.fastmail.com> References: <87y305onco.fsf@meyer.lemoncheese.net> <87wofepdfu.fsf@meyer.lemoncheese.net> <20190827161927.h462vwwij6d2nraq@bishop> <4dd6b43b-82e9-4df6-afec-6b175a34c97f@www.fastmail.com> Message-ID: <20190827163041.eskq6dllg2vlda5p@bishop> On Tue, Aug 27, 2019 at 09:23:56AM -0700, Clark Boylan wrote: > On Tue, Aug 27, 2019, at 9:19 AM, Nate Johnston wrote: > > On Thu, Aug 15, 2019 at 10:04:53AM -0700, James E. Blair wrote: > > > Hi, > > > > > > We have made the switch to begin storing all of the build logs from Zuul > > > in Swift. > > > > > > Each build's logs will be stored in one of 7 randomly chosen Swift > > > regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those > > > providers! > > > > > > You'll note that the links in Gerrit to the Zuul jobs now go to a page > > > on the Zuul web app. A lot of the features previously available on the > > > log server are now available there, plus some new ones. > > > > > > If you're looking for a link to a docs preview build, you'll find that > > > on the build page under the "Artifacts" section now. > > > > > > If you're curious about where your logs ended up, you can see the Swift > > > hostname under the "logs_url" row in the summary table. > > > > > > Please let us know if you have any questions or encounter any issues, > > > either here, or in #openstack-infra on IRC. > > > > Where should I go to see the logs for periodic jobs? I assume these have been > > transferred over, since (for example) the neutron periodic jobs stopped logging > > their daily runs after 8/15, except for one time on 8/24. > > You can use the builds tab in the dashboard to query for previous builds including those of periodic jobs. For example http://zuul.openstack.org/builds?pipeline=periodic-stable&project=openstack%2Fneutron will get you the periodic stable jobs for neutron. Perfect, thanks! Nate From doug at doughellmann.com Tue Aug 27 16:43:55 2019 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 27 Aug 2019 12:43:55 -0400 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> <20190827155742.tpleittnofdnmhe5@yuggoth.org> Message-ID: <04D87A36-67FA-4F01-A95A-814F6D95369A@doughellmann.com> > On Aug 27, 2019, at 12:27 PM, Dean Troyer wrote: > > On Tue, Aug 27, 2019 at 11:02 AM Jeremy Stanley wrote: >> With my (soon to be former, at least for a cycle) TC member hat on, >> I suggest that we hold this election as planned but circulate the >> suggestion of electing fewer seats in coming cycles. 
Then once the >> new TC has been confirmed let them decide whether that should be the >> plan as soon into the new term as possible, and announce that >> decision so the community will be prepared for it in the next >> election. > > I completely agree with this, it is usually around election time > (often just after) that we think about these things but making a > change for the impending election is too quick. I like Thierry's > suggested schedule and I like the target of 9. I do not have a strong > preference for getting to 9 in 2 vs 4 cycles (2 at a time vs 1 at a > time). As has been mentioned elsewhere (IRC?) having an even number > should not be an issue, looking back I can recall only one vote where > it might have been a problem and in that case since it was clear we > did not have consensus the plan was dropped anyway. FWIW, StarlingX > had an even number of members on its Technical Steering Committee (TC > equivalent) for two cycles and it has not been a problem for basically > the same reason, when driving for even close consensus you generally > do not get ties. Yes, I think it’s too close to the current election to change this now. Other than that, the only reason to take longer would be to ensure that we maintain a balance in the number of seats up for election each cycle. Doug From jim at jimrollenhagen.com Tue Aug 27 16:50:40 2019 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 27 Aug 2019 12:50:40 -0400 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> <20190827155742.tpleittnofdnmhe5@yuggoth.org> Message-ID: On Tue, Aug 27, 2019 at 12:34 PM Dean Troyer wrote: > On Tue, Aug 27, 2019 at 11:02 AM Jeremy Stanley wrote: > > With my (soon to be former, at least for a cycle) TC member hat on, > > I suggest that we hold this election as planned but circulate the > > suggestion of electing fewer seats in coming cycles. Then once the > > new TC has been confirmed let them decide whether that should be the > > plan as soon into the new term as possible, and announce that > > decision so the community will be prepared for it in the next > > election. > > I completely agree with this, it is usually around election time > (often just after) that we think about these things but making a > change for the impending election is too quick. I like Thierry's > suggested schedule and I like the target of 9. I do not have a strong > preference for getting to 9 in 2 vs 4 cycles (2 at a time vs 1 at a > time). As has been mentioned elsewhere (IRC?) having an even number > should not be an issue, looking back I can recall only one vote where > it might have been a problem and in that case since it was clear we > did not have consensus the plan was dropped anyway. FWIW, StarlingX > had an even number of members on its Technical Steering Committee (TC > equivalent) for two cycles and it has not been a problem for basically > the same reason, when driving for even close consensus you generally > do not get ties. > I agree even numbers are not a problem. I don't think (hope?) the existing TC would merge anything that went 7-6 anyway. // jim > > dt > > -- > Dean Troyer > dtroyer at gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Aug 27 16:52:59 2019 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 27 Aug 2019 11:52:59 -0500 Subject: [oslo] No meeting next week, Sep. 
2 Message-ID: <7535bb9f-58f7-81eb-1aad-d6f9cc9234e6@nemebean.com> Hi, Next Monday is a US holiday and as a result I won't be available to run the meeting. Since that affects a number of our other Oslo contributors, I propose that we just skip the meeting for that week. It's the first day after feature freeze anyway so there shouldn't be _too_ much to discuss. As always, if anything comes up in the meantime you can bring it up in #openstack-oslo or here on the list. No need to wait for the meeting. Thanks. -Ben From jungleboyj at gmail.com Tue Aug 27 18:21:37 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 27 Aug 2019 13:21:37 -0500 Subject: [cinder] Mid-Cycle Summary and Video Posted ... Message-ID: All, I have gotten the Summary from the Cinder Mid-Cycle that we held at Lenovo's RTP campus last week posted. [1] Please take a look at the summary and see if you have any action items.  Also note that I have tagged actions without an owner with the 'HELP NEEDED' tag.  If you can take any of the action items it would be greatly appreciated. Thanks to all who participated and made it another successful mid-cycle! Jay (irc:  jungleboyj) [1] https://wiki.openstack.org/wiki/CinderTrainMidCycleSummary From jungleboyj at gmail.com Tue Aug 27 19:03:45 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 27 Aug 2019 14:03:45 -0500 Subject: [all[ Please Do Not Translate Etherpads ... In-Reply-To: <10cf3588-c6d3-5fbd-614d-92b805a0443d@redhat.com> References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> <10cf3588-c6d3-5fbd-614d-92b805a0443d@redhat.com> Message-ID: On 8/27/2019 4:06 AM, Dmitry Tantsur wrote: > On 8/26/19 6:27 PM, Jay Bryant wrote: >> All, >> >> Not sure how else to address this, but we have had Cinder's etherpads >> repeatedly converted into Japanese recently. >> >> Want to make a public plea here to please be careful with translation >> tools and ensure that you aren't changing the etherpads in the >> process. Rebuilding the content to its previous state is tedious. > > How did you even do it? The time slider won't allow me to go to > yesterday.. > I just slid the time slider back to before the change happened. I did catch it shortly after it happened (the team actually did thankfully).  I have always been able to go back in time quite a ways though. >> >> Thank you! >> >> Jay >> >> (irc: jungleboyj) >> >> > > From nate.johnston at redhat.com Tue Aug 27 19:21:07 2019 From: nate.johnston at redhat.com (Nate Johnston) Date: Tue, 27 Aug 2019 15:21:07 -0400 Subject: [glance][neutron] Glance migrations broken for postgresql; impact to Neutron periodic jobs Message-ID: <20190827192107.dp2noxkb4nj2fwpd@bishop> Glance developers, I am investigating failures with the neutron-tempest-postgres-full periodic job in Zuul, and it looks like the failing step is actually a database migration for Glance that occurs during devstack setup. Pasted below is a representative example [1]. You can see the same issue causing failures for this neutron periodic job by looking at the Zuul failures [2]; the error is in logs/devstacklog.txt.gz. The last successful job was 2019-08-15T06:07:06; the next one on 2019-08-16 failed. But there were no changes merged in glance during that time period [3]. So if you could help debug the source of this issue then we can hopefully get this job - and probably whatever others depend on postgresql - back working. 
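The failing statement in the traceback below calls INSTR(), a MySQL built-in that PostgreSQL does not provide. A dialect-portable way to express the same substring test is to go through SQLAlchemy's column operators, which compile to the appropriate SQL per backend. The sketch that follows mirrors the table and column names from the failing query; it illustrates the portable pattern only and is not the actual Glance fix:

    # Sketch only: INSTR() exists in MySQL and SQLite but not PostgreSQL.
    # SQLAlchemy's contains() compiles to a LIKE expression on every
    # supported dialect, so the substring test stays portable.
    import sqlalchemy as sa

    metadata = sa.MetaData()
    image_locations = sa.Table(
        'image_locations', metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('meta_data', sa.Text),
    )

    def locations_mentioning_backend(engine):
        # Equivalent in intent to: WHERE INSTR(meta_data, '"backend":') > 0
        # but rendered as:         WHERE meta_data LIKE '%"backend":%'
        query = sa.select([image_locations.c.meta_data]).where(
            image_locations.c.meta_data.contains('"backend":'))
        with engine.connect() as conn:
            return conn.execute(query).fetchall()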
2019-08-27 06:35:44.963 | INFO [alembic.runtime.migration] Running upgrade rocky_expand02 -> train_expand01, empty expand for symmetry with train_contract01 2019-08-27 06:35:44.967 | INFO [alembic.runtime.migration] Context impl PostgresqlImpl. 2019-08-27 06:35:44.967 | INFO [alembic.runtime.migration] Will assume transactional DDL. 2019-08-27 06:35:44.970 | Upgraded database to: train_expand01, current revision(s): train_expand01 2019-08-27 06:35:44.971 | INFO [alembic.runtime.migration] Context impl PostgresqlImpl. 2019-08-27 06:35:44.971 | INFO [alembic.runtime.migration] Will assume transactional DDL. 2019-08-27 06:35:44.975 | INFO [alembic.runtime.migration] Context impl PostgresqlImpl. 2019-08-27 06:35:44.975 | INFO [alembic.runtime.migration] Will assume transactional DDL. 2019-08-27 06:35:44.995 | CRITI [glance] Unhandled error 2019-08-27 06:35:44.996 | Traceback (most recent call last): 2019-08-27 06:35:44.996 | File "/usr/local/bin/glance-manage", line 10, in 2019-08-27 06:35:44.996 | sys.exit(main()) 2019-08-27 06:35:44.996 | File "/opt/stack/new/glance/glance/cmd/manage.py", line 563, in main 2019-08-27 06:35:44.996 | return CONF.command.action_fn() 2019-08-27 06:35:44.996 | File "/opt/stack/new/glance/glance/cmd/manage.py", line 395, in sync 2019-08-27 06:35:44.996 | self.command_object.sync(CONF.command.version) 2019-08-27 06:35:44.996 | File "/opt/stack/new/glance/glance/cmd/manage.py", line 166, in sync 2019-08-27 06:35:44.996 | self.migrate(online_migration=False) 2019-08-27 06:35:44.996 | File "/opt/stack/new/glance/glance/cmd/manage.py", line 294, in migrate 2019-08-27 06:35:44.996 | if data_migrations.has_pending_migrations(db_api.get_engine()): 2019-08-27 06:35:44.996 | File "/opt/stack/new/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/__init__.py", line 61, in has_pending_migrations 2019-08-27 06:35:44.997 | return any([x.has_migrations(engine) for x in migrations]) 2019-08-27 06:35:44.997 | File "/opt/stack/new/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/train_migrate01_backend_to_store.py", line 28, in has_migrations 2019-08-27 06:35:44.997 | metadata_backend = con.execute(sql_query) 2019-08-27 06:35:44.997 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 982, in execute 2019-08-27 06:35:44.997 | return self._execute_text(object_, multiparams, params) 2019-08-27 06:35:44.997 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1155, in _execute_text 2019-08-27 06:35:44.997 | parameters, 2019-08-27 06:35:44.997 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context 2019-08-27 06:35:44.997 | e, statement, parameters, cursor, context 2019-08-27 06:35:44.997 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1464, in _handle_dbapi_exception 2019-08-27 06:35:44.997 | util.raise_from_cause(newraise, exc_info) 2019-08-27 06:35:44.997 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause 2019-08-27 06:35:44.997 | reraise(type(exception), exception, tb=exc_tb, cause=cause) 2019-08-27 06:35:44.997 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1244, in _execute_context 2019-08-27 06:35:44.998 | cursor, statement, parameters, context 2019-08-27 06:35:44.998 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 552, in do_execute 2019-08-27 06:35:44.998 | cursor.execute(statement, parameters) 
2019-08-27 06:35:44.998 | DBError: (psycopg2.errors.UndefinedFunction) function instr(text, unknown) does not exist 2019-08-27 06:35:44.998 | LINE 1: select meta_data from image_locations where INSTR(meta_data,... 2019-08-27 06:35:44.998 | ^ 2019-08-27 06:35:44.998 | HINT: No function matches the given name and argument types. You might need to add explicit type casts. Thanks, Nate [1] also http://paste.openstack.org/show/765665/ if that is an easier view [2] http://zuul.openstack.org/builds?project=openstack%2Fneutron&job_name=neutron-tempest-postgres-full&branch=master [3] https://review.opendev.org/#/q/project:openstack/glance+status:merged From mriedemos at gmail.com Tue Aug 27 20:31:20 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Aug 2019 15:31:20 -0500 Subject: [all[ Please Do Not Translate Etherpads ... In-Reply-To: References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> <10cf3588-c6d3-5fbd-614d-92b805a0443d@redhat.com> Message-ID: <9dcc9c08-831e-a9f9-bed8-36435e5903b6@gmail.com> On 8/27/2019 2:03 PM, Jay Bryant wrote: > I have always been able to go back in time quite a ways though. You should really take advantage of that ability Jay. -- Thanks, Matt From haleyb.dev at gmail.com Tue Aug 27 21:00:21 2019 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 27 Aug 2019 17:00:21 -0400 Subject: FWAAS V2 doesn't work with DVR In-Reply-To: References: Message-ID: Hi Salman, On 8/21/19 2:49 PM, Salman Khan wrote: > Hi Guys, > > I asked this question over #openstack-neutron channel but didn't get any > answer, so asking here in a hope that someone might read this email and > reply. > The problem is: I have enabled FWAAS_V2 with DVR and that doesn't seem > to work. I debugged things down to router namespaces and it looks like > iptables rules are applied to rfp- interface which doesn't > exist in that namespace. So rules are completely wrong as they are > applied to an interface that doesn't exist, I mean there is rfp-* > interface but the that fwaas expecting is not what it > should be. I tried applying the rules to qr-* interfaces in the > namespace but that didn't work as well, packets are dropping on > "invalid" state rule. That's probably because of nat rules from dvr. > Can someone please help me to understand this behaviour. Is it really > suppose to work or not. If there is any bug or fix pending or there is > any work ongoing to support this. Can you tell what version of neutron/neutron-fwaas you are using? Short of that I believe it should work, the only bug I found that seems related and was fixed recently (end of 2018) was https://bugs.launchpad.net/neutron/+bug/1762454 so maybe take a look at that and see if is the same thing. Otherwise maybe someone on the Fwaas team has seen it? -Brian From jungleboyj at gmail.com Tue Aug 27 22:35:04 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 27 Aug 2019 17:35:04 -0500 Subject: [all[ Please Do Not Translate Etherpads ... In-Reply-To: <9dcc9c08-831e-a9f9-bed8-36435e5903b6@gmail.com> References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> <10cf3588-c6d3-5fbd-614d-92b805a0443d@redhat.com> <9dcc9c08-831e-a9f9-bed8-36435e5903b6@gmail.com> Message-ID: On 8/27/2019 4:31 PM, Matt Riedemann wrote: > On 8/27/2019 2:03 PM, Jay Bryant wrote: >> I have always been able to go back in time quite a ways though. > > You should really take advantage of that ability Jay. > Good advice!  If only I could also go forward! 
From fungi at yuggoth.org Tue Aug 27 22:48:41 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Aug 2019 22:48:41 +0000 Subject: [all[ Please Do Not Translate Etherpads ... In-Reply-To: References: <0c10f69d-afac-f601-4d86-c9f64766c785@gmail.com> <10cf3588-c6d3-5fbd-614d-92b805a0443d@redhat.com> <9dcc9c08-831e-a9f9-bed8-36435e5903b6@gmail.com> Message-ID: <20190827224841.j75qqqfn5ljfrgt3@yuggoth.org> On 2019-08-27 17:35:04 -0500 (-0500), Jay Bryant wrote: > > On 8/27/2019 4:31 PM, Matt Riedemann wrote: > > On 8/27/2019 2:03 PM, Jay Bryant wrote: > > > I have always been able to go back in time quite a ways though. > > > > You should really take advantage of that ability Jay. > > Good advice!  If only I could also go forward! I have the ability to travel forward in time, however it only operates at the speed of normal time. You! Young child... pray tell, what year is it? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From johnsomor at gmail.com Tue Aug 27 23:06:02 2019 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 27 Aug 2019 16:06:02 -0700 Subject: [octavia] Stepping down from Octavia core team In-Reply-To: References: Message-ID: Nir, I am very sorry to see you move on. Thank you for all of your work over the years on Octavia, especially your support in adding CentOS amphora support. Good luck on your new adventures! Michael On Tue, Aug 27, 2019 at 12:31 AM Carlos Goncalves wrote: > > On Tue, Aug 27, 2019 at 9:22 AM Nir Magnezi wrote: > > > > Hi all, > > > > As some of you already know, I'm no longer involved in Octavia development and therefore stepping down from the core team. > > > > It has been a pleasure to be a part of the Octavia (and neutron-lbaas) core team for nearly three years. I learned a lot. > > Thank you very much for all your hard work! > I wish you success in all your future endeavors. > > > Octavia is an excellent project with a strong core team, and I'm sure it will continue to grow and gain more adoption. > > > > Thanks, > > Nir > From fungi at yuggoth.org Tue Aug 27 23:45:10 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 27 Aug 2019 23:45:10 +0000 Subject: [all][elections][ptl][tc] Combined PTL/TC Nominations Kickoff Message-ID: <20190827234510.3uik5tn6yct2qdm4@yuggoth.org> Nominations for OpenStack PTLs (Project Team Leads) and TC (Technical Committee) positions (6 positions) are now open and will remain open until Sep 03, 2019 23:45 UTC. All nominations must be submitted as a text file to the openstack/election repository as explained at https://governance.openstack.org/election/#how-to-submit-a-candidacy Please make sure to follow the candidacy file naming convention: candidates/u// (for example, "candidates/u/TC/stacker at example.org"). The name of the file should match an email address for your current OpenStack Foundation Individual Membership. Take this opportunity to ensure that your OSF member profile contains current information: https://www.openstack.org/profile/ Any OpenStack Foundation Individual Member can propose their candidacy for an available, directly-elected seat on the Technical Committee. In order to be an eligible candidate for PTL you must be an OpenStack Foundation Individual Member. PTL candidates must also have contributed to the corresponding team during the Stein to Train timeframe, Aug 10, 2018 00:00 UTC - Sep 03, 2019 00:00 UTC. 
Your Gerrit account must also have a verified email address matching the one used in your candidacy filename. Both PTL and TC elections will be held from Sep 10, 2019 23:45 UTC through to Sep 17, 2019 23:45 UTC. The electorate for the TC election are the OpenStack Foundation Individual Members who have a code contribution one of the official teams over the Stein to Train timeframe, Aug 10, 2018 00:00 UTC - Sep 03, 2019 00:00 UTC, as well as any Extra ATCs who are acknowledged by the TC. The electorate for a PTL election are the OpenStack Foundation Individual Members who have a code contribution over the Stein to Train timeframe, Aug 10, 2018 00:00 UTC - Sep 03, 2019 00:00 UTC, in a deliverable repository maintained by the team for which the PTL would lead, as well as the Extra ATCs who are acknowledged by the TC for that specific team. The list of project teams can be found at https://governance.openstack.org/tc/reference/projects/ and their individual team pages include lists of corresponding Extra ATCs. Please find below the timeline: nomination starts @ Aug 27, 2019 23:45 UTC nomination ends @ Sep 03, 2019 23:45 UTC campaigning starts @ Sep 03, 2019 23:45 UTC campaigning ends @ Sep 10, 2019 23:45 UTC elections start @ Sep 10, 2019 23:45 UTC elections end @ Sep 17, 2019 23:45 UTC Shortly after election officials approve candidates, they will be listed on the https://governance.openstack.org/election/ page. The electorate is requested to confirm their email addresses in Gerrit, prior to 2019-09-03 00:00:00+00:00 so that the emailed ballots are mailed to the correct email address. This email address should match one which was provided in your foundation member profile as well. Gerrit account information and OSF member profiles can be checked at https://review.openstack.org/#/settings/contact and https://www.openstack.org/profile/ accordingly. If you have any questions please be sure to either ask them on the mailing list or to the elections officials: https://governance.openstack.org/election/#election-officials -- Jeremy Stanley, on behalf of the technical election officials -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gaetan.trellu at incloudus.com Wed Aug 28 00:25:15 2019 From: gaetan.trellu at incloudus.com (=?ISO-8859-1?Q?Ga=EBtan_Trellu?=) Date: Tue, 27 Aug 2019 20:25:15 -0400 Subject: [designate][ptl][elections] Non Candidacy for U cycle In-Reply-To: Message-ID: <8429a4d1-1ebf-4b86-a02f-e0254457065b@email.android.com> An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Wed Aug 28 02:12:43 2019 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 28 Aug 2019 14:12:43 +1200 Subject: [horizon] Horizon's openstack_auth is completely broken with Django 2.2, breaking all the dashboad stack in Debian Sid/Bullseye In-Reply-To: <7fed32a9f420ed004cae69b8438b4d8113524dd9.camel@redhat.com> References: <7fed32a9f420ed004cae69b8438b4d8113524dd9.camel@redhat.com> Message-ID: Awesome work! I've started reviewing the patches and they mostly look good and easy to merge. Just gotta get the core devs looking through them and merging them. I think this is urgent since if we fail to add 2.2 support this cycle, then Horizon will be out of LTS support for Django before the next release cycle ends. 
On 27/08/19 8:46 AM, Stephen Finucane wrote: > On Sat, 2019-08-24 at 23:55 +0200, Thomas Goirand wrote: >> Hi, >> >> Django 2.2 was uploaded to Debian Sid/Bullseye a few days after Buster >> was released, removing at the same time any support for Python 2. This >> broke lots of packages, but I managed to upload more than 40 times to >> fix the situation. Though Horizon is still completely broken. >> >> There's a bunch of issues related to Django 2.2 for Horizon. I managed >> to patch Horizon for a few of them, but there's one bigger issue which I >> can't solve myself, because I'm not skilled enough in Django, and don't >> know enough about Horizon's internal. Let me explain. >> >> In Django 1.11, the login() function in openstack_auth/views.py was >> deprecated in the favor of a LoginView class. In Django 2.2, the login() >> function is now not supported anymore. This means that, if >> openstack_auth/views.py doesn't get rewritten completely, Horizon will >> continue to be completely broken in Debian Sid/Bullseye. >> >> Unfortunately, the Horizon team is understaffed, and the PTL told me >> that they don't plan anything before Train+1. >> >> As a consequence, Horizon is completely broken in Debian Sid/Bullseye, >> and will stay as it is if nobody steps up for the work. >> >> After a month and a half with this situation not being solved by anyone, >> I'm hereby calling for help. >> >> I could attempt to work on this, though I need help and pointers at some >> example implementation of this. It'd be nicer if someone more skilled >> than me in Django was working on this anyways. > Fortunately, I do know Django, so I've gone and tackled this series > here. > > https://review.opendev.org/#/q/topic:django22+status:open+project:openstack/horizon > > All the unit tests are passing now, so I assume that's all that's > needed. I didn't test anything manually. > > Probably a longer conversation to be had here regarding the long term > viability of Horizon if issues like this are going to become more > common (probably similar to what we're having in docs, oslo, etc.) but > that's one for someone else to lead. > > Cheers, > Stephen > > From li.canwei2 at zte.com.cn Wed Aug 28 02:15:19 2019 From: li.canwei2 at zte.com.cn (li.canwei2 at zte.com.cn) Date: Wed, 28 Aug 2019 10:15:19 +0800 (CST) Subject: =?UTF-8?B?W1dhdGNoZXJdIHRlYW0gbWVldGluZyBhdCAwODowMCBVVEMgdG9kYXk=?= Message-ID: <201908281015199909406@zte.com.cn> Hi, Watcher team will have a meeting at 08:00 UTC today in the #openstack-meeting-alt channel. The agenda is available on https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda feel free to add any additional items. Thanks! Canwei Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Wed Aug 28 02:16:35 2019 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 28 Aug 2019 14:16:35 +1200 Subject: [all][tc][horizon] A web tool which helps administrators in managing openstack clusters In-Reply-To: <1c568051-94d6-2d09-2d98-ef69c790e4aa@openstack.org> References: <1c568051-94d6-2d09-2d98-ef69c790e4aa@openstack.org> Message-ID: <5f4cb8d0-c271-e7d1-027e-7c8058b6762b@catalyst.net.nz> Even if we agreeded that direct DB access was OK (it isn't), it is worth adding that this tool will not be backwards compatible in any way. It will always be linked to the current release DB schema. It wouldn't handle mixed version clouds, or anything even a little out of sync without a lot of extra complexity. 
Horizon, as awful as it is at times, is great in that it is backwards compatible VERY far, and in fact we have historically run Horizon close to master, while lagging behind in other services at various versions. On 27/08/19 9:04 PM, Thierry Carrez wrote: > Douglas Zhang wrote: >> [...] >> As Adrian Turjak said: >> >>     The first major issue is that you connect to the databases of the >>     services directly. That's a major issue, both for long term >>     compatibility, and security. The APIs should always be the main >>     point of contact and the ONLY contract that the services have to >>     maintain. By connecting to the database directly you are now relying >>     on a data structure that can, and likely will change, and any >>     security and sanity checking on filters and queries is now handled >>     on your layer rather than the application itself. Not only that, but >>     your dashboard also now needs passwords for all the databases, and >>     by the sounds of it all the message queues. >> >> And as Mohammed Naser said: >> >>     While I agree with you that querying database is much faster, this >>     introduces two issues that I imagine for users: [...] >> >>     *   also, it will make maintaining the project quite hard because I >>         don't think any projects expose a /stable/ database API. >> >> Well, we’re not surprised that our querying approach would be >> challenged since it does sound unsafe. However, we have made some >> efforts to solve problems which have been posed: [...] >> >> I hope my explanation is clear enough, and we’re willing to solve >> other possible issues existing. > > Thanks for the replies! > > As I highlighted above, you did not address the main issue raised by > Adrian and Mohammed, which is that the database schema in OpenStack > services is not stable. Our project teams only commit to one public > API, and that is the REST one. > > Querying the database directly is definitely faster (both in original > coding and query performance), but you incur enormous technical debt > by taking this shortcut. *Someone* will have to care about keeping > openstack-admin queries and projects database schema in sync forever > after. That means projects either need to commit to a stable database > API in addition to a stable REST API, *or* openstack-admin maintainers > will have to keep up with any database schema change in any future > version of any component they interact with. > > At this point in the history of OpenStack, IMHO we need to care more > about long-term sustainability with a limited number of maintainers, > than about speed. There are definitely optimizations that can be made > to make the slowest queries faster, without incurring massive > technical debt that will have to be repaid by maintainers forever after. > > It's definitely less funny and rewarding than writing a superfast new > database-connected dashboard, but I'd argue that it is where > development resources should be spent today... > From adriant at catalyst.net.nz Wed Aug 28 02:20:49 2019 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 28 Aug 2019 14:20:49 +1200 Subject: [all][tc][horizon] A web tool which helps administrators in managing openstack clusters In-Reply-To: References: <483e8641-acb4-4596-8ee1-1bdca2790ed8@email.android.com> Message-ID: <1c506a8d-c680-c4c9-fb18-128fa5c4ee8c@catalyst.net.nz> Bah, sent it via my phone and didn't hit reply all. On 26/08/19 10:55 PM, Jim Rollenhagen wrote: > Fair points. 
In case you didn't realize, you only sent this to me, not > the list :) > > // jim > > > On Sat, Aug 24, 2019 at 1:45 AM Adrian Turjak > wrote: > > > > On 24 Aug. 2019 00:08, Jim Rollenhagen > wrote: > > On Fri, Aug 23, 2019 at 3:21 AM Adrian Turjak > > wrote: > > Hello Douglas! > > As someone who has struggled against the performance > issues in Horizon I can easily feel your pain, and an > effort to making something better is good. Sadly, this > doesn't sound like a safe direction, even if for admin > only purposes. > > The first major issue is that you connect to the databases > of the services directly. That's a major issue, both for > long term compatibility, and security. The APIs should > always be the main point of contact and the ONLY contract > that the services have to maintain. By connecting to the > database directly you are now relying on a data structure > that can, and likely will change, and any security and > sanity checking on filters and queries is now handled on > your layer rather than the application itself. Not only > that, but your dashboard also now needs passwords for all > the databases, and by the sounds of it all the message queues. > > I would highly encourage you to try and work with the > community to fix the issues at the API layers rather than > bypassing them. We can add better query and filtering to > the APIs, and we can work on improving performance if we > know the pain points, and this is likely where > contributions would be welcome. > > I think we do need an alternative to Horizon, and the > ideal solution in my mind is to make a new dashboard built > on either React or Vue, with a thin proxy app (likely in > Flask) that serves the initial javascript page, and > proxies any API requests to the services themselves. The > filter issues should made better by implementing more > complex filtering in the APIs themselves, and having the > Dashboard layer better at exposing those dynamically. > React or Vue would do a much better job of dynamically and > quickly reloading and querying the services, and it would > make the whole experience much nicer. > > Michael Krotscheck did a bunch of work to put CORS config in > many API services. Why bother with a proxy when we can > continue that? :)  > > > We deploy our dashboard publically, but our APIs are behind a > whitelist only firewall. I bet we're not the only one. So we need > the proxy like nature of the dashboard API calls. > > I personally like the idea of the browser able to talk to the APIs > directly, but I have a feeling there may be other reasons to avoid > that. Not to mention you get a clearer picture of which API calls > come from the dashboard. Plus, a proxy via the dashboard app > itself (assuming it is deployed near the cluster) won't exactly > add much extra to the response time. > > But if we as a community do end up going down this route, we can > discuss if we need the app to proxy or not. If not, then really we > are just building a pure JavaScript app, but if we do need to > proxy, then the proxy layer would still be very simple with the > core logic all in the front-end. > > > The one of the best parts of Horizon was that it was a > 'dumb' dashboard built around your token. It can be > deployed anywhere, by anyone, and only needs access to the > cluster to work, no secrets to any database. > > I know this is a huge issue, and we do need to solve it, > but I hope we can work on something better that doesn't > bypass the APIs, because that isn't a safe solution. 
:( > > Cheers, > > Adrian Turjak > > On 20/08/19 10:14 PM, Douglas Zhang wrote: > > Hello everyone, > > To help users interact with openstack, we’re currently > developing a client-side web tool which enables > administrators to manage their openstack cluster in a > more efficient and convenient way. (Since we have not > named it officially yet, I’m going to call it > openstack-admin) > > *# Introduction* > > Some may ask, “Why do we need an extra web-based user > interface since we have horizon?” Well, although > horizon is a mature and powerful dashboard, it is far > not efficient enough on big clusters (a simple |list| > operation could take seconds to complete). What’s > more, its flexibility of searching could not match our > requirements. To overcome obstacles above, a more > efficient tool is urgently required, that’s why we > started to develop openstack-admin. > > *# Highlights* > > Comparing with the current user interface, > openstack-admin has following advantages: > > * > > *Fast*: openstack-admin gets data straightly from > SQL databases instead of calling standard > openstack API, which accelerates the querying > period to a large extent (especially when we’re > dealing with a large amount of data). > > * > > *Flexible*: openstack-admin supports the fuzzy > search for any important field(e.g. > display_name/uuid/ip_address/project_name of an > instance), which enables users to locate a > particular object in no time. > > * > > *User-friendly*: the backend of openstack-admin > gets necessary messages from the message queue > used by nova, and send them to the frontend using > websocket. This way, not only more realistic > progress bars could be implemented, but more > detailed information could be provided to users as > well. > > *# Issues* > > To make this tool more efficient and provide better > support for concurrency, we chose Golang to implement > openstack-admin. As I’ve asked before (truly > appreciate advises from Jeremy and Ghanshyam), a > project written by an unofficial language may be > accepted only if existing languages have been proven > to not meet the technical requirements, so we’re > considering re-implementing openstack-admin using > python if we can’t come to an agreement on the > language issue. > > So that’s all. How do you guys think of this project? > > Thanks, > > Douglas Zhang > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Wed Aug 28 03:38:00 2019 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 28 Aug 2019 11:38:00 +0800 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: Message-ID: To give a survey on the numbers of the valid commit contributors We got 61 in Austin :) ... 1204 contributors in havana [1] ... 2629 in liberty 2991 in mitaka 3104 in newton (which is peek point) [2] 2581 in ocata 2452 in pike 1925 in queens 1665 in rocky 1612 in stein [3] I believe the numbers of contributors should reflect on the numbers of TCs. If we try to compare the ratio (like stein cycle vs past six released cycles), I think to reduce to 9 (from 13) is acceptable numbers (also should be more than 7). And for even number issue, I believe to have 1 cycle of even number TCs will not affect much. But if we can, odd number definitely helps to reduce conflict when we end up using a poll to resolved some issues (one TC can propose solutions but it takes all TCs can to resolve the conflict.). 
[1] https://www.stackalytics.com/?release=havana&project_type=all&metric=commits [2] https://www.stackalytics.com/?release=newton&project_type=all&metric=commits [3] https://www.stackalytics.com/?release=stein&project_type=all&metric=commits On Tue, Aug 27, 2019 at 6:17 PM Thierry Carrez wrote: > Hi everyone, > > The size of the TC is a trade-off between getting enough community > representation and keeping enough members engaged and active. The > current size (13 members) was defined by in 2013, as we moved from 5 > directly-elected seats + all PTLs (which would have been 14 people) to a > model that could better cope with our explosive growth. Since then, 13 > has worked well, to ensure that new blood could come in at every cycle. > > I would argue that today, there are far less need to get wide > representation in the TC (we are pretty aligned), and less difficulty to > enter the TC (there is more turnover). In 2019 OpenStack, 13 members is > a rather large group. It is becoming difficult to find 13 people able to > commit to a significant amount of time over the coming year. And it is > difficult to keep all those 13 members active and engaged. > > IMHO it is time to reduce the TC to 11 members, which sounds like a more > reasonable and manageable size. We should encourage people to stop for a > while and come back, rather than burn too many people at the same time. > We should encourage more people to watch from the sidelines, rather than > have a group so large that everyone that can be in it is in it. > > This would not be a big-bang change, just something we would gradually > put in place over the next year. My strawman plan would be as follows: > > - Sept. 2019 election: no change, elect 6 seats as planned > - Feb. 2020 election: elect 6 seats instead of 7 > - Sept. 2020 election: elect 5 seats instead of 6 > - Then elect 6 seats at every start-of-year and 5 at every end-of-year > > That would result in TC membership sizes: > > - U cycle (Q4 2019/Q1 2020): 13 members > - V cycle (Q2/Q3 2020): 12 members > - W cycle (Q4 2020/Q1 2021): 11 members > - after that: 11 members > > FWIW, I intend to not run for reelection in the Feb 2020 election, so > nobody else has to sacrifice their seat to that reform for that election :) > > Thoughts? > > -- > Thierry Carrez (ttx) > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From eli at ChinaBuckets.com Wed Aug 28 03:52:48 2019 From: eli at ChinaBuckets.com (Eliza) Date: Wed, 28 Aug 2019 11:52:48 +0800 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: Message-ID: on 2019/8/28 11:38, Rico Lin wrote: > I believe the numbers of contributors should reflect on the numbers of TCs. > If we try to compare the ratio (like stein cycle vs past six released > cycles), I think to reduce to 9 (from 13) is acceptable numbers (also > should be more than 7). > > And for even number issue, I believe to have 1 cycle of even number TCs > will not affect much. But if we can, odd number definitely helps to > reduce conflict when we end up using a poll to resolved some issues (one > TC can propose solutions but it takes all TCs can to resolve the conflict.). I think that's good suggestion. regards. 
From gmann at ghanshyammann.com Wed Aug 28 04:09:46 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 28 Aug 2019 13:09:46 +0900 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> <20190827155742.tpleittnofdnmhe5@yuggoth.org> Message-ID: <16cd66a4e21.fd02feb7374044.3564607002481261235@ghanshyammann.com>

--- On Wed, 28 Aug 2019 01:50:40 +0900 Jim Rollenhagen wrote ----

> On Tue, Aug 27, 2019 at 12:34 PM Dean Troyer wrote:
> > On Tue, Aug 27, 2019 at 11:02 AM Jeremy Stanley wrote:
> > > With my (soon to be former, at least for a cycle) TC member hat on,
> > > I suggest that we hold this election as planned but circulate the
> > > suggestion of electing fewer seats in coming cycles. Then once the
> > > new TC has been confirmed let them decide whether that should be the
> > > plan as soon into the new term as possible, and announce that
> > > decision so the community will be prepared for it in the next
> > > election.
> >
> > I completely agree with this, it is usually around election time
> > (often just after) that we think about these things but making a
> > change for the impending election is too quick. I like Thierry's
> > suggested schedule and I like the target of 9. I do not have a strong
> > preference for getting to 9 in 2 vs 4 cycles (2 at a time vs 1 at a
> > time). As has been mentioned elsewhere (IRC?) having an even number
> > should not be an issue, looking back I can recall only one vote where
> > it might have been a problem and in that case since it was clear we
> > did not have consensus the plan was dropped anyway. FWIW, StarlingX
> > had an even number of members on its Technical Steering Committee (TC
> > equivalent) for two cycles and it has not been a problem for basically
> > the same reason, when driving for even close consensus you generally
> > do not get ties.
> >
> > dt
> > --
> > Dean Troyer
> > dtroyer at gmail.com
>
> I agree even numbers are not a problem. I don't think (hope?) the
> existing TC would merge anything that went 7-6 anyway.
>
> // jim

I agree with the idea here to reduce the TC number. I think an even number makes more sense to keep the balance between both elections. We can reduce it gradually with a target of 11 or 10 for now and, depending on the future situation, decide on a lower number later.

-gmann

-------------- next part -------------- An HTML attachment was scrubbed... URL: From flux.adam at gmail.com Wed Aug 28 05:26:22 2019 From: flux.adam at gmail.com (Adam Harwell) Date: Tue, 27 Aug 2019 22:26:22 -0700 Subject: [octavia] Stepping down from Octavia core team In-Reply-To: References: Message-ID:

It was a pleasure working with you while I got the chance! We'll be sad to see you go, but hopefully we'll cross paths again. Good luck with your new role!

--Adam

On Tue, Aug 27, 2019, 16:10 Michael Johnson wrote:
> Nir,
>
> I am very sorry to see you move on. Thank you for all of your work
> over the years on Octavia, especially your support in adding CentOS
> amphora support.
>
> Good luck on your new adventures!
>
> Michael
>
> On Tue, Aug 27, 2019 at 12:31 AM Carlos Goncalves wrote:
> >
> > On Tue, Aug 27, 2019 at 9:22 AM Nir Magnezi wrote:
> > >
> > > Hi all,
> > >
> > > As some of you already know, I'm no longer involved in Octavia
> development and therefore stepping down from the core team.
> > >
> > > It has been a pleasure to be a part of the Octavia (and neutron-lbaas)
> core team for nearly three years. I learned a lot.
> >
> > Thank you very much for all your hard work!
> > I wish you success in all your future endeavors.
> > > > > Octavia is an excellent project with a strong core team, and I'm sure > it will continue to grow and gain more adoption. > > > > > > Thanks, > > > Nir > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Wed Aug 28 07:03:58 2019 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Wed, 28 Aug 2019 07:03:58 +0000 Subject: [placement][plt][elections] Non-nomination for U In-Reply-To: References: Message-ID: <1566975834.20126.0@smtp.office365.com> On Tue, Aug 27, 2019 at 11:54 AM, Chris Dent wrote: tl;dr: I'd prefer not to be Placement PTL for the U cycle. I've accomplished most of what I set out to do and I think we have other members of the team who would make excellent PTLs (notably tetsuro and gibi). Thank you for your hard work Chris! Cheers, gibi -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed Aug 28 07:15:54 2019 From: corvus at inaugust.com (James E. Blair) Date: Wed, 28 Aug 2019 00:15:54 -0700 Subject: [all][infra] Zuul logs are in swift In-Reply-To: <20190827163041.eskq6dllg2vlda5p@bishop> (Nate Johnston's message of "Tue, 27 Aug 2019 12:30:41 -0400") References: <87y305onco.fsf@meyer.lemoncheese.net> <87wofepdfu.fsf@meyer.lemoncheese.net> <20190827161927.h462vwwij6d2nraq@bishop> <4dd6b43b-82e9-4df6-afec-6b175a34c97f@www.fastmail.com> <20190827163041.eskq6dllg2vlda5p@bishop> Message-ID: <8736hl69qt.fsf@meyer.lemoncheese.net> Nate Johnston writes: > On Tue, Aug 27, 2019 at 09:23:56AM -0700, Clark Boylan wrote: >> On Tue, Aug 27, 2019, at 9:19 AM, Nate Johnston wrote: >> > On Thu, Aug 15, 2019 at 10:04:53AM -0700, James E. Blair wrote: >> > > Hi, >> > > >> > > We have made the switch to begin storing all of the build logs from Zuul >> > > in Swift. >> > > >> > > Each build's logs will be stored in one of 7 randomly chosen Swift >> > > regions in Fort Nebula, OVH, Rackspace, and Vexxhost. Thanks to those >> > > providers! >> > > >> > > You'll note that the links in Gerrit to the Zuul jobs now go to a page >> > > on the Zuul web app. A lot of the features previously available on the >> > > log server are now available there, plus some new ones. >> > > >> > > If you're looking for a link to a docs preview build, you'll find that >> > > on the build page under the "Artifacts" section now. >> > > >> > > If you're curious about where your logs ended up, you can see the Swift >> > > hostname under the "logs_url" row in the summary table. >> > > >> > > Please let us know if you have any questions or encounter any issues, >> > > either here, or in #openstack-infra on IRC. >> > >> > Where should I go to see the logs for periodic jobs? I assume these have been >> > transferred over, since (for example) the neutron periodic jobs stopped logging >> > their daily runs after 8/15, except for one time on 8/24. >> >> You can use the builds tab in the dashboard to query for previous >> builds including those of periodic jobs. For example >> http://zuul.openstack.org/builds?pipeline=periodic-stable&project=openstack%2Fneutron >> will get you the periodic stable jobs for neutron. 
We also just added the buildsets search page, where you can perform a similar search and see entire buildsets -- click the result link to go to a page that shows you all the builds in the buildset: http://zuul.openstack.org/buildsets?project=openstack%2Fneutron&pipeline=periodic

-Jim

From doka.ua at gmx.com Wed Aug 28 07:58:27 2019
From: doka.ua at gmx.com (Volodymyr Litovka)
Date: Wed, 28 Aug 2019 10:58:27 +0300
Subject: CEPH/RDMA + Openstack
Message-ID: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com>

Dear colleagues,

does anyone have experience using RDMA-enabled CEPH with Openstack? How
stable is it? Whether all Openstack components (Nova, Cinder) works with
such specific configuration of Ceph? Any issues which can affect overall
system reliability?

Thank you.

--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison

From eli at ChinaBuckets.com Wed Aug 28 08:14:10 2019
From: eli at ChinaBuckets.com (Eliza)
Date: Wed, 28 Aug 2019 16:14:10 +0800
Subject: Re: CEPH/RDMA + Openstack
In-Reply-To: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com>
References: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com>
Message-ID: <0cf7b80e-1f65-c589-69ed-6d66bc5835b4@ChinaBuckets.com>

Hi

on 2019/8/28 15:58, Volodymyr Litovka wrote:
> does anyone have experience using RDMA-enabled CEPH with Openstack? How
> stable is it? Whether all Openstack components (Nova, Cinder) works with
> such specific configuration of Ceph? Any issues which can affect overall
> system reliability?

I don't have experience with it, in fact. But IMO Ceph's data replication
and rebalancing don't rely on RDMA; would RDMA-enabled machines actually
improve the cluster's performance?

regards.

From doka.ua at gmx.com Wed Aug 28 08:32:23 2019
From: doka.ua at gmx.com (Volodymyr Litovka)
Date: Wed, 28 Aug 2019 11:32:23 +0300
Subject: Re: CEPH/RDMA + Openstack
In-Reply-To: <0cf7b80e-1f65-c589-69ed-6d66bc5835b4@ChinaBuckets.com>
References: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com>
 <0cf7b80e-1f65-c589-69ed-6d66bc5835b4@ChinaBuckets.com>
Message-ID: <64e0c78b-f0d7-e904-437c-e52aeaa66047@gmx.com>

Hi, starting with Ceph v14, RDMA is supported officially, so if you have
NICs that support RDMA, you can try :)

On 28.08.2019 11:14, Eliza wrote:
> Hi
>
> on 2019/8/28 15:58, Volodymyr Litovka wrote:
>> does anyone have experience using RDMA-enabled CEPH with Openstack? How
>> stable is it? Whether all Openstack components (Nova, Cinder) works with
>> such specific configuration of Ceph? Any issues which can affect overall
>> system reliability?
>
> I don't have experience with it, in fact. But IMO Ceph's data replication
> and rebalancing don't rely on RDMA; would RDMA-enabled machines actually
> improve the cluster's performance?
>
> regards.

--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison

From maxget7 at gmail.com Wed Aug 28 08:36:40 2019
From: maxget7 at gmail.com (Ali Abdelal)
Date: Wed, 28 Aug 2019 11:36:40 +0300
Subject: [mistral] Publish field in workflow tasks
Message-ID: 

Hello,

Currently, there are two "publish" fields: one in the task (the regular
"publish", whose scope is the branch, not global), and another under
"on-success", "on-error" or "on-complete". In the current behavior, the
regular "publish" is ignored if there is a "publish" under "on-success",
"on-error" or "on-complete" [1].

For example:

(a)

version: '2.0'

wf1:
  tasks:
    t1:
      publish:
        res_x1: 1
      on-success:
        publish:
          branch:
            res_x2: 2

(b)

version: '2.0'

wf2:
  tasks:
    t1:
      publish:
        res_x1: 1

"res_x1" won't be published in (a), but it will in (b).
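To make that concrete in plain Python (a minimal sketch of the semantics only; the dict names here are illustrative, not Mistral internals), here is the behavior today and the merge idea from the options discussed below:

    # Task-level publish and on-success publish from example (a):
    task_publish = {'res_x1': 1}
    on_success_publish = {'res_x2': 2}

    # Current behavior (bug 1791449): the on-success publish wins outright,
    # so example (a) publishes only res_x2 and drops res_x1.
    published_a = dict(on_success_publish)   # {'res_x2': 2}

    # Example (b) has no on-success publish, so the task publish applies.
    published_b = dict(task_publish)         # {'res_x1': 1}

    # Option 2a below would merge the two, with the on-success values
    # taking priority on duplicate keys.
    merged = {**task_publish, **on_success_publish}
    # {'res_x1': 1, 'res_x2': 2}
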
We can either:

1) Invalidate such syntax.

2) Merge the two publishes together and, if there are duplicate keys,
   there are two options:

   a) What is in the publish under "on-success", "on-error" or
      "on-complete" takes priority.

   b) Disallow duplicate keys.

What is your opinion? And please tell us if you have other suggestions.

[1] https://bugs.launchpad.net/mistral/+bug/1791449

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stig.openstack at telfer.org Wed Aug 28 08:42:08 2019
From: stig.openstack at telfer.org (Stig Telfer)
Date: Wed, 28 Aug 2019 09:42:08 +0100
Subject: Re: CEPH/RDMA + Openstack
In-Reply-To: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com>
References: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com>
Message-ID: 

Hi there -

> On 28 Aug 2019, at 08:58, Volodymyr Litovka wrote:
> does anyone have experience using RDMA-enabled CEPH with Openstack? How
> stable is it? Whether all Openstack components (Nova, Cinder) works with
> such specific configuration of Ceph? Any issues which can affect overall
> system reliability?

Last time I looked at this (with pre-release Nautilus, about 9 months ago), I had mixed results.

There are four generally-available RDMA fabrics (Infiniband, RoCE, iWARP and OPA) and I had a go at all of them apart from iWARP.

RoCE worked for me but IB and OPA were troublesome to get working. There’s some work contributed for iWARP support that introduces the RDMA connection manager (RDMACM), which I found also helped for IB and OPA.

There is potential here, but in performance terms I didn’t manage a thorough benchmark and didn’t see conclusive proof of advantage. Perhaps things have come on since I looked, but it wasn’t an obvious win at the time. I’d love to have another pop at it, but for lack of time…

Cheers,
Stig

From cdent+os at anticdent.org Wed Aug 28 08:47:22 2019
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 28 Aug 2019 09:47:22 +0100 (BST)
Subject: [requirements][placement] orjson instead of stdlib json
Message-ID: 

Requirements people:

For the past few months we've been doing profiling of placement, modeling large clouds where many results (7000 resource providers) will be returned in response to common requests.

One of the last remaining chunks of slowness is serializing the very large dicts representing those providers (and associated allocation requests) to JSON (greater than 100k lines when pretty printed). Using the oslo jsonutils (which is a light wrapper over the stdlib 'json' module), approximately 25% of time is consumed by dumps().

Using orjson [1] instead, time consumption drops to 1%, so this is something it would be good to make real in placement (and perhaps other projects that have large result sets). There's a WIP that demonstrates its use at [2].

I'm posting about it because JSON serialization seems like an area where the requirements team has greater interest than some other areas.

Also, orjson is only Python 3, so its use needs to be adapted for Python 2, as shown in the WIP [2].

Otherwise it's fine: active, Apache2 or MIT license.

Thoughts?
[1] https://pypi.org/project/orjson/
[2] https://review.opendev.org/#/c/674661/

--
Chris Dent                  ٩◔̯◔۶           https://anticdent.org/
freenode: cdent

From smooney at redhat.com Wed Aug 28 09:13:01 2019
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 28 Aug 2019 10:13:01 +0100
Subject: Re: [requirements][placement] orjson instead of stdlib json
In-Reply-To: 
References: 
Message-ID: 

On Wed, 2019-08-28 at 09:47 +0100, Chris Dent wrote:
> Requirements people:
>
> For the past few months we've been doing profiling of placement,
> modeling large clouds where many results (7000 resource providers)
> will be returned in response to common requests.
>
> One of the last remaining chunks of slowness is serializing the very
> large dicts representing those providers (and associated allocation
> requests) to JSON (greater than 100k lines when pretty printed).
> Using the oslo jsonutils (which is a light wrapper over the stdlib
> 'json' module) approximately 25% of time is consumed by dumps().
>
> Using orjson [1] instead, time consumption drops to 1%, so this is
> something it would be good to make real in placement (and perhaps
> other projects that have large result sets).

If the 25%-to-1% numbers are real, i.e. the CPU time does not just move
somewhere else instead, then I would be very interested to see this used
in oslo.versionedobjects and/or the SDK.

> There's a WIP that
> demonstrates its use at [2].
>
> I'm posting about it because JSON serialization seems like it might
> be an area that the requirements team has greater interest than some
> other areas.
>
> Also, orjson is only Python 3, so use needs to be adapted for Python
> 2, as shown in the WIP [2].

Or wait until October/November when we officially drop Python 2 support.
That said, from a Red Hat point of view, all our future OpenStack
releases from Stein on will be on RHEL 8, which is python36-only by
default (Python 2 is not installed on RHEL 8), so we will be shipping
OpenStack on Python 3 starting with Stein.

> Otherwise it's fine: active, Apache2 or MIT license.
>
> Thoughts?
>
> [1] https://pypi.org/project/orjson/
> [2] https://review.opendev.org/#/c/674661/

From akekane at redhat.com Wed Aug 28 09:24:53 2019
From: akekane at redhat.com (Abhishek Kekane)
Date: Wed, 28 Aug 2019 14:54:53 +0530
Subject: Re: [ops] [glance] Need to move images to a different backend
In-Reply-To: 
References: 
Message-ID: 

Hi Massimo,

Glance multiple store is still experimental and it does not have support
for location API in rocky and stein, we are making it compatible in Train
release.

There is still one bug open for location compatibility with multiple
stores and patch for it is not merged yet, I will also encourage you to
deploy latest master and apply this patch in your environment and test it.
https://review.opendev.org/#/c/617229/19

NOTE:
I have tested this on current master after applying patch to local
environment and the scenario you mentioned is working fine.

Thanks & Best Regards,

Abhishek Kekane

On Tue, Aug 27, 2019 at 7:47 PM Abhishek Kekane wrote:
> Hi Massimo,
> Cool, thank you for confirmation, I will try to reproduce this in my
> environment. It will take some time, will revert back to you once done.
> Meanwhile could you please try the same with single store (disabling the
> multiple stores)?
> > Thank you, > > Abhishek > > On Tue, 27 Aug 2019 at 6:33 PM, Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote: > >> Hi Abhishek >> >> Yes, before trying the: >> >> glance location-add --url >> rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap >> 6bcc4eab-ed35-42dc-88bd-1d45de73b628 >> >> command, I uploaded the image to ceph. And indeed I can see it: >> >> >> >> [root at ceph-mon-01 ~]# rbd info >> images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628 >> rbd image '6bcc4eab-ed35-42dc-88bd-1d45de73b628': >> size 10GiB in 1280 objects >> order 23 (8MiB objects) >> block_name_prefix: rbd_data.b7b56aa1e4f0bc >> format: 2 >> features: layering >> flags: >> create_timestamp: Tue Aug 27 11:42:32 2019 >> >> [root at ceph-mon-01 ~]# rbd snap ls >> images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628 >> SNAPID NAME SIZE TIMESTAMP >> 455500 snap 10GiB Tue Aug 27 11:46:37 2019 >> [root at ceph-mon-01 ~]# >> >> >> Cheers, Massimo >> >> On Tue, Aug 27, 2019 at 2:41 PM Abhishek Kekane >> wrote: >> >>> Hi Massimo, >>> >>> I need to reproduce this first, but pretty much sure this issue is not >>> because you configured multiple backends. What you are doing is downloading >>> existing image to your local storage and then uploading it to glance using >>> add-location operation. >>> >>> So before adding this using add-location operation have you manually >>> uploaded this image to ceph/rbd? if not then how you are building your >>> location url which you are mentioning in the add-location command? If this >>> location is not existing then it might be a problem. >>> >>> Thank you, >>> >>> Abhishek >>> >>> On Tue, 27 Aug 2019 at 5:09 PM, Massimo Sgaravatto < >>> massimo.sgaravatto at gmail.com> wrote: >>> >>>> I have a Rocky installation where glance is configured with multiple >>>> backends. This [*] is the relevant part of the glance configuration. >>>> >>>> I now need to dismiss the file backend and move the images-snapshots >>>> stored there to rbd. >>>> >>>> >>>> I had in mind to follow this procedure (already successfully tested >>>> with an older version of OpenStack): >>>> >>>> 1) download the image from the file backend (glance image-download >>>> --file >>>> 2) upload the file to rbd, relying on this function: >>>> https://github.com/openstack/glance_store/blob/stable/rocky/glance_store/_drivers/rbd.py#L445 >>>> 3) glance location-add --url rbd://xyz >>>> 4) glance location-delete --url file://abc >>>> >>>> I have problems with 3: >>>> >>>> # glance --debug location-add --url >>>> rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap >>>> 6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>>> ... >>>> ... >>>> DEBUG:keystoneauth.session:PATCH call to image for >>>> https://cloud-areapd.pd.infn.it:9292/v2/images/6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>>> used request id req-6c0598cc-582c-4ce7-a14c-5d1bb6ec4f14 >>>> Request returned failure status 400. >>>> DEBUG:glanceclient.common.http:Request returned failure status 400. 
>>>> Traceback (most recent call last): >>>> File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line >>>> 687, in main >>>> OpenStackImagesShell().main(argv) >>>> File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line >>>> 591, in main >>>> args.func(client, args) >>>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/shell.py", >>>> line 749, in do_location_add >>>> image = gc.images.add_location(args.id, args.url, metadata) >>>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", >>>> line 448, in add_location >>>> response = self._send_image_update_request(image_id, add_patch) >>>> File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", >>>> line 598, in inner >>>> return RequestIdProxy(wrapped(*args, **kwargs)) >>>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", >>>> line 432, in _send_image_update_request >>>> data=json.dumps(patch_body)) >>>> File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", >>>> line 340, in patch >>>> return self.request(url, 'PATCH', **kwargs) >>>> File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", >>>> line 377, in request >>>> return self._handle_response(resp) >>>> File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", >>>> line 126, in _handle_response >>>> raise exc.from_response(resp, resp.content) >>>> HTTPBadRequest: 400 Bad Request: Invalid location (HTTP 400) >>>> 400 Bad Request: Invalid location (HTTP 400) >>>> >>>> >>>> As far as I can see with the "openstack" client there is not something >>>> to add/delete a location. >>>> So I guess it is necessary to change the 'direct_url' and 'locations' >>>> properties. >>>> >>>> If I try to change the direct_url property: >>>> >>>> # openstack image set --property >>>> direct_url='rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap' >>>> 6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>>> 403 Forbidden: Attribute 'direct_url' is read-only. (HTTP 403) >>>> >>>> Any hints ? >>>> Thanks, Massimo >>>> >>>> [*] >>>> >>>> [default] >>>> ... >>>> enabled_backends = file:file,http:http,rbd:rbd >>>> show_image_direct_url = true >>>> show_multiple_locations = true >>>> >>>> [glance_store] >>>> default_backend = rbd >>>> >>>> [file] >>>> filesystem_store_datadir = /var/lib/glance/images/ >>>> >>>> >>>> [rbd] >>>> rbd_store_chunk_size = 8 >>>> rbd_store_ceph_conf = /etc/ceph/ceph.conf >>>> rbd_store_user = glance-prod >>>> rbd_store_pool = images-prod >>>> >>>> -- >>> Thanks & Best Regards, >>> >>> Abhishek Kekane >>> >> -- > Thanks & Best Regards, > > Abhishek Kekane > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doka.ua at gmx.com Wed Aug 28 09:27:08 2019 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Wed, 28 Aug 2019 12:27:08 +0300 Subject: CEPH/RDMA + Openstack In-Reply-To: References: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com> Message-ID: <2fe6e6a4-dfec-730e-06e0-3816a370ec2a@gmx.com> Hi Stig, the main question is - whether you tested it with Openstack and all components works? Because regarding Ceph itself - we're using Mellanox ConnectX-4LX cards and found that Ceph works fine with RoCE in LAG configuration. > There is potential here Yes, agree with you. That's why we're trying to be ready for future improvements :) I guess that since Ceph supports RDMA officially, Redhat (as owner of Ceph) will sell it and, thus, will improve support. Thank you. 
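P.S. For anyone else following along: as far as I can tell, the relevant knobs are Ceph's async messenger options, i.e. ms_type = async+rdma to switch the messenger and ms_async_rdma_device_name to select the RDMA-capable NIC, applied consistently on every MON/OSD/client host. Those option names are as we understand them for the Nautilus era, so double-check against the docs for the release you deploy.
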
On 28.08.2019 11:42, Stig Telfer wrote: > Hi there - > >> On 28 Aug 2019, at 08:58, Volodymyr Litovka wrote: >> does anyone have experience using RDMA-enabled CEPH with Openstack? How >> stable is it? Whether all Openstack components (Nova, Cinder) works with >> such specific configuration of Ceph? Any issues which can affect overall >> system reliability? > Last time I looked at this (with pre-release Nautilus, about 9 months ago), I had mixed results. > > There are four generally-available RDMA fabrics (Infiniband, RoCE, iWARP and OPA) and I had a go at all of them apart from iWARP. > > RoCE worked for me but IB and OPA were troublesome to get working. There’s some work contributed for iWARP support that introduces the RDMA connection manager (RDMACM), which I found also helped for IB and OPA. > > There is potential here but in performance terms, I didn’t manage a thorough benchmarking and didn’t see conclusive proof of advantage. Perhaps things have come on since I looked, but it wasn’t an obvious win at the time. I’d love to have another pop at it, but for lack of time… > > Cheers, > Stig > > -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison From massimo.sgaravatto at gmail.com Wed Aug 28 09:40:32 2019 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Wed, 28 Aug 2019 11:40:32 +0200 Subject: [ops] [glance] Need to move images to a different backend In-Reply-To: References: Message-ID: Hi Abhishek First of all thanks to your help This is a production environment and therefore I have problems updating and deploying the current master I think I will revert to the "old way", since its behavior is fine with me (I don't need to be able to select the backend when creating an image) Thanks again Cheers, Massimo On Wed, Aug 28, 2019 at 11:25 AM Abhishek Kekane wrote: > Hi Massimo, > > Glance multiple store is still experimental and it does not have support > for location API in rocky and stein, we are making it compatible in Train > release. > > There is still one bug open for location compatibility with multiple > stores and patch for it is not merged yet, I will also encourage you to > deploy latest master and apply this patch in your environment and test it. > https://review.opendev.org/#/c/617229/19 > > NOTE: > I have tested this on current master after applying patch to local > environment and the scenario you mentioned is working fine. > > Thanks & Best Regards, > > Abhishek Kekane > > > On Tue, Aug 27, 2019 at 7:47 PM Abhishek Kekane > wrote: > >> Hi Massimo, >> Cool, thank you for confirmation, I will try to reproduce this in my >> environment. It will take some time, will revert back to you once done. >> Meanwhile could you please try the same with single store (disabling the >> multiple stores)? >> >> Thank you, >> >> Abhishek >> >> On Tue, 27 Aug 2019 at 6:33 PM, Massimo Sgaravatto < >> massimo.sgaravatto at gmail.com> wrote: >> >>> Hi Abhishek >>> >>> Yes, before trying the: >>> >>> glance location-add --url >>> rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap >>> 6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>> >>> command, I uploaded the image to ceph. 
And indeed I can see it: >>> >>> >>> >>> [root at ceph-mon-01 ~]# rbd info >>> images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>> rbd image '6bcc4eab-ed35-42dc-88bd-1d45de73b628': >>> size 10GiB in 1280 objects >>> order 23 (8MiB objects) >>> block_name_prefix: rbd_data.b7b56aa1e4f0bc >>> format: 2 >>> features: layering >>> flags: >>> create_timestamp: Tue Aug 27 11:42:32 2019 >>> >>> [root at ceph-mon-01 ~]# rbd snap ls >>> images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>> SNAPID NAME SIZE TIMESTAMP >>> 455500 snap 10GiB Tue Aug 27 11:46:37 2019 >>> [root at ceph-mon-01 ~]# >>> >>> >>> Cheers, Massimo >>> >>> On Tue, Aug 27, 2019 at 2:41 PM Abhishek Kekane >>> wrote: >>> >>>> Hi Massimo, >>>> >>>> I need to reproduce this first, but pretty much sure this issue is not >>>> because you configured multiple backends. What you are doing is downloading >>>> existing image to your local storage and then uploading it to glance using >>>> add-location operation. >>>> >>>> So before adding this using add-location operation have you manually >>>> uploaded this image to ceph/rbd? if not then how you are building your >>>> location url which you are mentioning in the add-location command? If this >>>> location is not existing then it might be a problem. >>>> >>>> Thank you, >>>> >>>> Abhishek >>>> >>>> On Tue, 27 Aug 2019 at 5:09 PM, Massimo Sgaravatto < >>>> massimo.sgaravatto at gmail.com> wrote: >>>> >>>>> I have a Rocky installation where glance is configured with multiple >>>>> backends. This [*] is the relevant part of the glance configuration. >>>>> >>>>> I now need to dismiss the file backend and move the images-snapshots >>>>> stored there to rbd. >>>>> >>>>> >>>>> I had in mind to follow this procedure (already successfully tested >>>>> with an older version of OpenStack): >>>>> >>>>> 1) download the image from the file backend (glance image-download >>>>> --file >>>>> 2) upload the file to rbd, relying on this function: >>>>> https://github.com/openstack/glance_store/blob/stable/rocky/glance_store/_drivers/rbd.py#L445 >>>>> 3) glance location-add --url rbd://xyz >>>>> 4) glance location-delete --url file://abc >>>>> >>>>> I have problems with 3: >>>>> >>>>> # glance --debug location-add --url >>>>> rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap >>>>> 6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>>>> ... >>>>> ... >>>>> DEBUG:keystoneauth.session:PATCH call to image for >>>>> https://cloud-areapd.pd.infn.it:9292/v2/images/6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>>>> used request id req-6c0598cc-582c-4ce7-a14c-5d1bb6ec4f14 >>>>> Request returned failure status 400. >>>>> DEBUG:glanceclient.common.http:Request returned failure status 400. 
>>>>> Traceback (most recent call last): >>>>> File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line >>>>> 687, in main >>>>> OpenStackImagesShell().main(argv) >>>>> File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line >>>>> 591, in main >>>>> args.func(client, args) >>>>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/shell.py", >>>>> line 749, in do_location_add >>>>> image = gc.images.add_location(args.id, args.url, metadata) >>>>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", >>>>> line 448, in add_location >>>>> response = self._send_image_update_request(image_id, add_patch) >>>>> File >>>>> "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", line 598, >>>>> in inner >>>>> return RequestIdProxy(wrapped(*args, **kwargs)) >>>>> File "/usr/lib/python2.7/site-packages/glanceclient/v2/images.py", >>>>> line 432, in _send_image_update_request >>>>> data=json.dumps(patch_body)) >>>>> File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", >>>>> line 340, in patch >>>>> return self.request(url, 'PATCH', **kwargs) >>>>> File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", >>>>> line 377, in request >>>>> return self._handle_response(resp) >>>>> File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", >>>>> line 126, in _handle_response >>>>> raise exc.from_response(resp, resp.content) >>>>> HTTPBadRequest: 400 Bad Request: Invalid location (HTTP 400) >>>>> 400 Bad Request: Invalid location (HTTP 400) >>>>> >>>>> >>>>> As far as I can see with the "openstack" client there is not something >>>>> to add/delete a location. >>>>> So I guess it is necessary to change the 'direct_url' and 'locations' >>>>> properties. >>>>> >>>>> If I try to change the direct_url property: >>>>> >>>>> # openstack image set --property >>>>> direct_url='rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-prod/6bcc4eab-ed35-42dc-88bd-1d45de73b628/snap' >>>>> 6bcc4eab-ed35-42dc-88bd-1d45de73b628 >>>>> 403 Forbidden: Attribute 'direct_url' is read-only. (HTTP 403) >>>>> >>>>> Any hints ? >>>>> Thanks, Massimo >>>>> >>>>> [*] >>>>> >>>>> [default] >>>>> ... >>>>> enabled_backends = file:file,http:http,rbd:rbd >>>>> show_image_direct_url = true >>>>> show_multiple_locations = true >>>>> >>>>> [glance_store] >>>>> default_backend = rbd >>>>> >>>>> [file] >>>>> filesystem_store_datadir = /var/lib/glance/images/ >>>>> >>>>> >>>>> [rbd] >>>>> rbd_store_chunk_size = 8 >>>>> rbd_store_ceph_conf = /etc/ceph/ceph.conf >>>>> rbd_store_user = glance-prod >>>>> rbd_store_pool = images-prod >>>>> >>>>> -- >>>> Thanks & Best Regards, >>>> >>>> Abhishek Kekane >>>> >>> -- >> Thanks & Best Regards, >> >> Abhishek Kekane >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Aug 28 09:46:19 2019 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 28 Aug 2019 15:16:19 +0530 Subject: [glance][neutron] Glance migrations broken for postgresql; impact to Neutron periodic jobs In-Reply-To: <20190827192107.dp2noxkb4nj2fwpd@bishop> References: <20190827192107.dp2noxkb4nj2fwpd@bishop> Message-ID: Hi Nate, This issue is already reported and I have submitted a patch [1] to resolve the same. Kindly confirm this patch resolves your issue. 
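For anyone who wants to understand the failure before trying the patch: INSTR() is a MySQL function that PostgreSQL does not implement, which is why the data-migration check blows up only on the postgres job. The portable spelling is POSITION(substring IN string), which both engines understand. A minimal sketch of the difference (illustrative query strings only; the actual change in [1] may be structured differently):

    # Works on MySQL only: INSTR() is not part of PostgreSQL.
    mysql_only = ("SELECT meta_data FROM image_locations "
                  "WHERE INSTR(meta_data, 'store') > 0")

    # Understood by both MySQL and PostgreSQL.
    portable = ("SELECT meta_data FROM image_locations "
                "WHERE POSITION('store' IN meta_data) > 0")
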
[1] https://review.opendev.org/#/c/677694/ Thanks & Best Regards, Abhishek Kekane On Wed, Aug 28, 2019 at 12:54 AM Nate Johnston wrote: > Glance developers, > > I am investigating failures with the neutron-tempest-postgres-full > periodic job > in Zuul, and it looks like the failing step is actually a database > migration for > Glance that occurs during devstack setup. Pasted below is a representative > example [1]. You can see the same issue causing failures for this neutron > periodic job by looking at the Zuul failures [2]; the error is in > logs/devstacklog.txt.gz. > > The last successful job was 2019-08-15T06:07:06; the next one on 2019-08-16 > failed. But there were no changes merged in glance during that time > period [3]. > So if you could help debug the source of this issue then we can hopefully > get > this job - and probably whatever others depend on postgresql - back > working. > > 2019-08-27 06:35:44.963 | INFO [alembic.runtime.migration] Running > upgrade rocky_expand02 -> train_expand01, empty expand for symmetry with > train_contract01 > 2019-08-27 06:35:44.967 | INFO [alembic.runtime.migration] Context impl > PostgresqlImpl. > 2019-08-27 06:35:44.967 | INFO [alembic.runtime.migration] Will assume > transactional DDL. > 2019-08-27 06:35:44.970 | Upgraded database to: train_expand01, current > revision(s): train_expand01 > 2019-08-27 06:35:44.971 | INFO [alembic.runtime.migration] Context impl > PostgresqlImpl. > 2019-08-27 06:35:44.971 | INFO [alembic.runtime.migration] Will assume > transactional DDL. > 2019-08-27 06:35:44.975 | INFO [alembic.runtime.migration] Context impl > PostgresqlImpl. > 2019-08-27 06:35:44.975 | INFO [alembic.runtime.migration] Will assume > transactional DDL. > 2019-08-27 06:35:44.995 | CRITI [glance] Unhandled error > 2019-08-27 06:35:44.996 | Traceback (most recent call last): > 2019-08-27 06:35:44.996 | File "/usr/local/bin/glance-manage", line 10, > in > 2019-08-27 06:35:44.996 | sys.exit(main()) > 2019-08-27 06:35:44.996 | File > "/opt/stack/new/glance/glance/cmd/manage.py", line 563, in main > 2019-08-27 06:35:44.996 | return CONF.command.action_fn() > 2019-08-27 06:35:44.996 | File > "/opt/stack/new/glance/glance/cmd/manage.py", line 395, in sync > 2019-08-27 06:35:44.996 | > self.command_object.sync(CONF.command.version) > 2019-08-27 06:35:44.996 | File > "/opt/stack/new/glance/glance/cmd/manage.py", line 166, in sync > 2019-08-27 06:35:44.996 | self.migrate(online_migration=False) > 2019-08-27 06:35:44.996 | File > "/opt/stack/new/glance/glance/cmd/manage.py", line 294, in migrate > 2019-08-27 06:35:44.996 | if > data_migrations.has_pending_migrations(db_api.get_engine()): > 2019-08-27 06:35:44.996 | File > "/opt/stack/new/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/__init__.py", > line 61, in has_pending_migrations > 2019-08-27 06:35:44.997 | return any([x.has_migrations(engine) for x > in migrations]) > 2019-08-27 06:35:44.997 | File > "/opt/stack/new/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/train_migrate01_backend_to_store.py", > line 28, in has_migrations > 2019-08-27 06:35:44.997 | metadata_backend = con.execute(sql_query) > 2019-08-27 06:35:44.997 | File > "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line > 982, in execute > 2019-08-27 06:35:44.997 | return self._execute_text(object_, > multiparams, params) > 2019-08-27 06:35:44.997 | File > "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line > 1155, in _execute_text > 2019-08-27 06:35:44.997 | 
parameters, > 2019-08-27 06:35:44.997 | File > "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line > 1248, in _execute_context > 2019-08-27 06:35:44.997 | e, statement, parameters, cursor, context > 2019-08-27 06:35:44.997 | File > "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line > 1464, in _handle_dbapi_exception > 2019-08-27 06:35:44.997 | util.raise_from_cause(newraise, exc_info) > 2019-08-27 06:35:44.997 | File > "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line > 398, in raise_from_cause > 2019-08-27 06:35:44.997 | reraise(type(exception), exception, > tb=exc_tb, cause=cause) > 2019-08-27 06:35:44.997 | File > "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line > 1244, in _execute_context > 2019-08-27 06:35:44.998 | cursor, statement, parameters, context > 2019-08-27 06:35:44.998 | File > "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line > 552, in do_execute > 2019-08-27 06:35:44.998 | cursor.execute(statement, parameters) > 2019-08-27 06:35:44.998 | DBError: (psycopg2.errors.UndefinedFunction) > function instr(text, unknown) does not exist > 2019-08-27 06:35:44.998 | LINE 1: select meta_data from image_locations > where INSTR(meta_data,... > 2019-08-27 06:35:44.998 | > ^ > 2019-08-27 06:35:44.998 | HINT: No function matches the given name and > argument types. You might need to add explicit type casts. > > Thanks, > > Nate > > [1] also http://paste.openstack.org/show/765665/ if that is an easier view > [2] > http://zuul.openstack.org/builds?project=openstack%2Fneutron&job_name=neutron-tempest-postgres-full&branch=master > [3] https://review.opendev.org/#/q/project:openstack/glance+status:merged > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Wed Aug 28 09:56:04 2019 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 28 Aug 2019 10:56:04 +0100 Subject: [scientific-sig] SIG IRC meeting today 1100 UTC Message-ID: <81E18A0C-28D2-4E77-8147-36C6C1CFCA98@telfer.org> Hi All - We have an IRC meeting today at 1100 UTC (about an hour’s time) in channel #openstack-meeting. Everyone is welcome. Today’s agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_August_28th_2019 Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Wed Aug 28 10:13:24 2019 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 28 Aug 2019 12:13:24 +0200 Subject: [openstack-dev] [oslo][i18n] Dropping lazy translation support In-Reply-To: References: Message-ID: <13628917.sMWNFXyucf@whitebase.usersys.redhat.com> On Monday, 19 August 2019 19:17:08 CEST Ben Nemec wrote: > On 8/19/19 7:46 AM, Shengjing Zhu wrote: > > Sorry for replying the old mail, and please cc me when reply. > > > > Matt Riedemann writes: > >> This is a follow up to a dev ML email [1] where I noticed that some > >> implementations of the upgrade-checkers goal were failing because some > >> projects still use the oslo_i18n.enable_lazy() hook for lazy log message > >> translation (and maybe API responses?). > >> > >> The very old blueprints related to this can be found here [2][3][4]. > >> > >> If memory serves me correctly from my time working at IBM on this, this > >> was needed to: > >> > >> 1. Generate logs translated in other languages. > >> > >> 2. Return REST API responses if the "Accept-Language" header was used > >> and a suitable translation existed for that language. 
> >> > >> #1 is a dead horse since I think at least the Ocata summit when we > >> agreed to no longer translate logs since no one used them. > >> > >> #2 is probably something no one knows about. I can't find end-user > >> documentation about it anywhere. It's not tested and therefore I have no > >> idea if it actually works anymore. > >> > >> I would like to (1) deprecate the oslo_i18n.enable_lazy() function so > >> new projects don't use it and (2) start removing the enable_lazy() usage > >> from existing projects like keystone, glance and cinder. > >> > >> Are there any users, deployments or vendor distributions that still rely > >> on this feature? If so, please speak up now. > > > > I was pointed to this discussion when I tried to fix this feature in > > keystone, https://review.opendev.org/677117 > > > > For #2 translated API response, this feature probably hasn't been > > working for some time, but it's still a valid user case. > > > > Has the decision been settled? > > Not to my knowledge. Lazy translation still exists, but I don't know > that anyone is testing it. > > Are you saying that you are using this feature now, or are you > interested in using it going forward? A related note: we nuked enable_lazy in sahara because it broke Python 3 support in mysterious ways: https://review.opendev.org/#/c/626643/ -- Luigi From mthode at mthode.org Wed Aug 28 11:30:36 2019 From: mthode at mthode.org (Matthew Thode) Date: Wed, 28 Aug 2019 06:30:36 -0500 Subject: [requirements][placement] orjson instead of stdlib json In-Reply-To: References: Message-ID: <20190828113036.hdjob2toidx4iitt@mthode.org> On 19-08-28 10:13:01, Sean Mooney wrote: > On Wed, 2019-08-28 at 09:47 +0100, Chris Dent wrote: > > Requirements people: > > > > For the past few months we've been doing profiling of placement, > > modeling large clouds where many results (7000 resource providers) > > will be returned in response to common requests. > > > > One of the last remaining chunks of slowness is serializing the very > > large dicts representing those providers (and associated allocation > > requests) to JSON (greater than 100k lines when pretty printed). > > Using the oslo jsonutils (which is a light wrapper over the stdlib > > 'json' module) approximately 25% of time is consumed by dumps(). > > > > Using orjson [1] instead, time consumption drops to 1%, so this is > > something it would be good to make real in placement (and perhaps > > other projects that have large result sets). > if the 25% to 1% numbers are real. e.g. the cpu time does not move to > somewhere else instead then i would be very interested to see this > used in oslo.versioned object and/or the sdk. > > There's a WIP that > > demonstrates its use at [2]. > > > > I'm posting about it because JSON serialization seems like it might > > be an area that the requirements team has greater interest than some > > other areas. > > > > Also, orjson is only Python 3, so use needs to be adapted for Python > > 2, as shown in the WIP [2]. > or wait untill octber/novemebr when we officly drop python 2 support > that said from a redhat point of view all our future openstack release form > stien will be on rhel8 which is python36 only by default(python2 is not install on rhel8). > so we will be shipping openstack on python 3 stating with stien. > > > > Otherwise it's fine: active, Apache2 or MIT license. > > > > Thoughts? 
> > > > [1] https://pypi.org/project/orjson/ > > [2] https://review.opendev.org/#/c/674661/ > > > > > > > > One thing we seek to avoid is duplicating requirements we already track. Though we have allowed C based python libs for speed. Is it possible to try ujson as that's approved and in global-reqs already (and supported by anyjson, also in global-reqs). It hasn't been updated since 2016 and simple PRs/issues are unresolved, though monasca-common and x/kiloeyes use it. Anyjson itself seems to be in a similiar situation. If we'd move to another c based json-lib I'd like to remove ujson (and possibly look at anyjson too, though it has slightly more usage). There may be other json-libs in global reqs that meet your needs though. I'd recommend checking it out first. https://github.com/openstack/requirements/blob/master/global-requirements.txt UJSON +--------------------------+----------------------------------------------------------+------+-------------------+ | Repository | Filename | Line | Text | +--------------------------+----------------------------------------------------------+------+-------------------+ | openstack/monasca-common | requirements.txt | 11 | ujson>=1.35 # BSD | | x/kiloeyes | requirements.txt | 19 | ujson>=1.33 | +--------------------------+----------------------------------------------------------+------+-------------------+ ANYJSON +-----------------------------+---------------------------------------------------------------------+------+-----------------------+ | Repository | Filename | Line | Text | +-----------------------------+---------------------------------------------------------------------+------+-----------------------+ | openstack/faafo | requirements.txt | 5 | anyjson>=0.3.3 | | openstack/fuel-qa | fuelweb_test/requirements.txt | 5 | anyjson>=0.3.3 # BSD | | openstack/fuel-web | nailgun/requirements.txt | 8 | anyjson>=0.3.3 | | openstack/murano-agent | requirements.txt | 5 | anyjson>=0.3.3 # BSD | | openstack/os-apply-config | requirements.txt | 6 | anyjson>=0.3.3 # BSD | | openstack/os-collect-config | requirements.txt | 6 | anyjson>=0.3.3 # BSD | | openstack/os-net-config | requirements.txt | 5 | anyjson>=0.3.3 # BSD | | openstack/tacker | requirements.txt | 9 | anyjson>=0.3.3 # BSD | | starlingx/config | sysinv/sysinv/sysinv/requirements.txt | 4 | anyjson>=0.3.3 | | starlingx/ha | service-mgmt-client/sm-client/requirements.txt | 2 | anyjson>=0.3.3 | | starlingx/metal | inventory/inventory/requirements.txt | 8 | anyjson>=0.3.3 | | x/apmec | requirements.txt | 9 | anyjson>=0.3.3 # BSD | | x/daisycloud-core | code/daisy/requirements.txt | 9 | anyjson>=0.3.3 | | x/novajoin | test-requirements.txt | 8 | anyjson>=0.3.3 # BSD | | x/omni | creds_manager/test-requirements.txt | 8 | anyjson>=0.3.3 # BSD | +-----------------------------+---------------------------------------------------------------------+------+-----------------------+ -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Wed Aug 28 11:38:44 2019 From: mthode at mthode.org (Matthew Thode) Date: Wed, 28 Aug 2019 06:38:44 -0500 Subject: [requirements][placement] orjson instead of stdlib json In-Reply-To: <20190828113036.hdjob2toidx4iitt@mthode.org> References: <20190828113036.hdjob2toidx4iitt@mthode.org> Message-ID: <20190828113844.23xyjbagtcduzpha@mthode.org> On 19-08-28 06:30:36, Matthew Thode wrote: > On 19-08-28 10:13:01, Sean Mooney wrote: > > On Wed, 2019-08-28 at 09:47 +0100, Chris Dent wrote: > > > Requirements people: > > > > > > For the past few months we've been doing profiling of placement, > > > modeling large clouds where many results (7000 resource providers) > > > will be returned in response to common requests. > > > > > > One of the last remaining chunks of slowness is serializing the very > > > large dicts representing those providers (and associated allocation > > > requests) to JSON (greater than 100k lines when pretty printed). > > > Using the oslo jsonutils (which is a light wrapper over the stdlib > > > 'json' module) approximately 25% of time is consumed by dumps(). > > > > > > Using orjson [1] instead, time consumption drops to 1%, so this is > > > something it would be good to make real in placement (and perhaps > > > other projects that have large result sets). > > if the 25% to 1% numbers are real. e.g. the cpu time does not move to > > somewhere else instead then i would be very interested to see this > > used in oslo.versioned object and/or the sdk. > > > There's a WIP that > > > demonstrates its use at [2]. > > > > > > I'm posting about it because JSON serialization seems like it might > > > be an area that the requirements team has greater interest than some > > > other areas. > > > > > > Also, orjson is only Python 3, so use needs to be adapted for Python > > > 2, as shown in the WIP [2]. > > or wait untill octber/novemebr when we officly drop python 2 support > > that said from a redhat point of view all our future openstack release form > > stien will be on rhel8 which is python36 only by default(python2 is not install on rhel8). > > so we will be shipping openstack on python 3 stating with stien. > > > > > > Otherwise it's fine: active, Apache2 or MIT license. > > > > > > Thoughts? > > > > > > [1] https://pypi.org/project/orjson/ > > > [2] https://review.opendev.org/#/c/674661/ > > > > > > > > > > > > > > > One thing we seek to avoid is duplicating requirements we already track. > Though we have allowed C based python libs for speed. > > Is it possible to try ujson as that's approved and in global-reqs > already (and supported by anyjson, also in global-reqs). It hasn't been > updated since 2016 and simple PRs/issues are unresolved, though > monasca-common and x/kiloeyes use it. Anyjson itself seems to be in a > similiar situation. > > If we'd move to another c based json-lib I'd like to remove ujson (and > possibly look at anyjson too, though it has slightly more usage). There > may be other json-libs in global reqs that meet your needs though. I'd > recommend checking it out first. 
> > https://github.com/openstack/requirements/blob/master/global-requirements.txt > > UJSON > +--------------------------+----------------------------------------------------------+------+-------------------+ > | Repository | Filename | Line | Text | > +--------------------------+----------------------------------------------------------+------+-------------------+ > | openstack/monasca-common | requirements.txt | 11 | ujson>=1.35 # BSD | > | x/kiloeyes | requirements.txt | 19 | ujson>=1.33 | > +--------------------------+----------------------------------------------------------+------+-------------------+ > > ANYJSON > +-----------------------------+---------------------------------------------------------------------+------+-----------------------+ > | Repository | Filename | Line | Text | > +-----------------------------+---------------------------------------------------------------------+------+-----------------------+ > | openstack/faafo | requirements.txt | 5 | anyjson>=0.3.3 | > | openstack/fuel-qa | fuelweb_test/requirements.txt | 5 | anyjson>=0.3.3 # BSD | > | openstack/fuel-web | nailgun/requirements.txt | 8 | anyjson>=0.3.3 | > | openstack/murano-agent | requirements.txt | 5 | anyjson>=0.3.3 # BSD | > | openstack/os-apply-config | requirements.txt | 6 | anyjson>=0.3.3 # BSD | > | openstack/os-collect-config | requirements.txt | 6 | anyjson>=0.3.3 # BSD | > | openstack/os-net-config | requirements.txt | 5 | anyjson>=0.3.3 # BSD | > | openstack/tacker | requirements.txt | 9 | anyjson>=0.3.3 # BSD | > | starlingx/config | sysinv/sysinv/sysinv/requirements.txt | 4 | anyjson>=0.3.3 | > | starlingx/ha | service-mgmt-client/sm-client/requirements.txt | 2 | anyjson>=0.3.3 | > | starlingx/metal | inventory/inventory/requirements.txt | 8 | anyjson>=0.3.3 | > | x/apmec | requirements.txt | 9 | anyjson>=0.3.3 # BSD | > | x/daisycloud-core | code/daisy/requirements.txt | 9 | anyjson>=0.3.3 | > | x/novajoin | test-requirements.txt | 8 | anyjson>=0.3.3 # BSD | > | x/omni | creds_manager/test-requirements.txt | 8 | anyjson>=0.3.3 # BSD | > +-----------------------------+---------------------------------------------------------------------+------+-----------------------+ > Looking through global-reqs, some more it looks like simplejson can be c based and is in global-reqs as well. Looking at the benchmarks orjson itself provides it's hard to see how it can be 100x faster than even the native json. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ext-markku.tavasti at elisa.fi Wed Aug 28 12:24:46 2019 From: ext-markku.tavasti at elisa.fi (Tavasti Markku EXT) Date: Wed, 28 Aug 2019 12:24:46 +0000 Subject: How to prevent adding admin-role? Message-ID: Hi! I am trying to create 'domain admin' role which has permissions to create projects and users, and manage user roles in projects within own domain. I have pretty ok working set of policies done, but there is one critical security hole: domain admin can add 'admin' role to user, and after it user has superuser privileges. Is there any possibility to limit domain admin rights to give only _member_ roles? I am working in Queens-based Redhat OSP13. Tavasti, Openstack admin For Internal Use Only -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cdent+os at anticdent.org Wed Aug 28 12:30:59 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 28 Aug 2019 13:30:59 +0100 (BST) Subject: [requirements][placement] orjson instead of stdlib json In-Reply-To: <20190828113844.23xyjbagtcduzpha@mthode.org> References: <20190828113036.hdjob2toidx4iitt@mthode.org> <20190828113844.23xyjbagtcduzpha@mthode.org> Message-ID: On Wed, 28 Aug 2019, Matthew Thode wrote: > Looking through global-reqs, some more it looks like simplejson can be c > based and is in global-reqs as well. > > Looking at the benchmarks orjson itself provides it's hard to see how it > can be 100x faster than even the native json. I've tried the other options and they don't provide as much improvement as orjson, nor are they as healthy with regard to project activity and JSON correctness. I suspect the reason my tests are seeing such a huge improvement (different from orjson's benchmark) is because this is one single very large (2583330 bytes when JSON) python structure being dumped just once. The benchmarks on the pypi page that fit most are those with canada.json (which happens to the be one where orjson seems to have the biggest advantage). It's not the end of the world if we don't switch, but thought I would raise the topic to see if it was worth pursuing. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent From juliaashleykreger at gmail.com Wed Aug 28 12:30:58 2019 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 28 Aug 2019 08:30:58 -0400 Subject: [tc] Gradually reduce TC to 11 members over 2020 In-Reply-To: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com> Message-ID: On Tue, Aug 27, 2019 at 11:26 AM Zane Bitter wrote: > > TBH, I'd suggest we: > > * Aim for 9 I also think 9 is likely a better number. > * Reduce by 2 each time so we don't end up with an even number My biggest concern with this proposal is having the possibility of an even number of people sitting on the TC at any one given time. > * Start now when 4 people have announced they're stepping down (though > in a perfect world this would have been the cycle when 7 seats are up > for election) I feel like this is something that would have to be discussed before the board so they are aware in advance of starting to reduce the size of the TC. If the board says "nope", then I suspect we need to respect that. > > > FWIW, I intend to not run for reelection in the Feb 2020 election, so > > nobody else has to sacrifice their seat to that reform for that election :) > > That is still my intention also, so nobody would have to sacrifice their > seat even if we reduced to 9. > > cheers, > Zane. > From ed at leafe.com Wed Aug 28 13:23:14 2019 From: ed at leafe.com (Ed Leafe) Date: Wed, 28 Aug 2019 08:23:14 -0500 Subject: [requirements][placement] orjson instead of stdlib json In-Reply-To: References: <20190828113036.hdjob2toidx4iitt@mthode.org> <20190828113844.23xyjbagtcduzpha@mthode.org> Message-ID: On Aug 28, 2019, at 7:30 AM, Chris Dent wrote: > > I suspect the reason my tests are seeing such a huge improvement > (different from orjson's benchmark) is because this is one single > very large (2583330 bytes when JSON) python structure being > dumped just once. While optimizing the performance of JSON-ifying this huge thing is always a good idea, it might be better to consider why such a huge object is necessary in the first place. "Doctor, it hurts when I do this" "Well, then stop doing that!" 
--
Ed Leafe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From mriedemos at gmail.com Wed Aug 28 13:56:06 2019
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 28 Aug 2019 08:56:06 -0500
Subject: [nova] ComputeDriver.spawn getting a power_on kwarg
Message-ID: <268f9d01-2d1f-94a7-59c0-9968dd962976@gmail.com>

This is just an FYI to out-of-tree virt driver maintainers that the
ComputeDriver.spawn interface is getting a new power_on kwarg with this
change:

https://review.opendev.org/#/c/642590/

--
Thanks,
Matt

From tpb at dyncloud.net Wed Aug 28 14:32:02 2019
From: tpb at dyncloud.net (Tom Barron)
Date: Wed, 28 Aug 2019 10:32:02 -0400
Subject: No manila meeting 29 August 2019
Message-ID: <20190828143202.gro7tuxdnxeiahad@barron.net>

We have quite a few folks unavailable this Thursday and will cancel this
week's Manila community meeting. The next meeting will be the following
week, 5 September, at 1500 UTC as usual.

From mriedemos at gmail.com Wed Aug 28 15:07:23 2019
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 28 Aug 2019 10:07:23 -0500
Subject: Re: [requirements][placement] orjson instead of stdlib json
In-Reply-To: 
References: <20190828113036.hdjob2toidx4iitt@mthode.org>
 <20190828113844.23xyjbagtcduzpha@mthode.org>
Message-ID: <49ea43ac-e3d4-0526-9798-4c2e56fd796c@gmail.com>

On 8/28/2019 8:23 AM, Ed Leafe wrote:
> it might be better to consider why such a huge object is necessary in
> the first place.

Hardware defined software.

--
Thanks,
Matt

From jean-philippe at evrard.me Wed Aug 28 15:19:09 2019
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Wed, 28 Aug 2019 17:19:09 +0200
Subject: Re: [tc] Gradually reduce TC to 11 members over 2020
In-Reply-To: <68bdbb86-6b74-f669-a3af-c4e0b511b548@redhat.com>
References: <68bdbb86-6b74-f669-a3af-c4e0b511b548@redhat.com>
Message-ID: 

On Tue, 2019-08-27 at 14:38 +0200, Dmitry Tantsur wrote:
> 9 holders of Rings of Power is what I would propose if I had a say in
> that :)

My lore is a little rusty, wouldn't that only apply to humans? What about
inclusiveness?

Regards,
JP

PS: Sorry to those who become the Nazgûl, I guess?

From jean-philippe at evrard.me Wed Aug 28 15:22:07 2019
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Wed, 28 Aug 2019 17:22:07 +0200
Subject: Re: [tc] Gradually reduce TC to 11 members over 2020
In-Reply-To: 
References: 
Message-ID: <35fd02b0f4762a05df4a23dd4e43eb0758295ed5.camel@evrard.me>

On Wed, 2019-08-28 at 11:38 +0800, Rico Lin wrote:
> To give a survey on the numbers of the valid commit contributors
>
> We got
>
> 61 in Austin :)
> ...
> 1204 contributors in havana [1]
> ...
> 2629 in liberty
> 2991 in mitaka
> 3104 in newton (which is the peak point) [2]
> 2581 in ocata
> 2452 in pike
> 1925 in queens
> 1665 in rocky
> 1612 in stein [3]

Is that weighted by the length of a cycle? Ocata was shorter IIRC.

Regards,
JP

From jean-philippe at evrard.me Wed Aug 28 15:27:16 2019
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Wed, 28 Aug 2019 17:27:16 +0200
Subject: Re: [tc] Gradually reduce TC to 11 members over 2020
In-Reply-To: 
References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com>
 <20190827155742.tpleittnofdnmhe5@yuggoth.org>
Message-ID: 

On Tue, 2019-08-27 at 12:50 -0400, Jim Rollenhagen wrote:
> I agree even numbers are not a problem. I don't think (hope?)
> the existing TC would merge anything that went 7-6 anyway.

Agreed with that.
And because I didn't write my opinion on the topic:
- I agree on the reduction. I don't know what the sweet spot is; 9 might
  be it.
- If all the candidates and the election officials are OK with reduced
  seats this time, we could start doing it now.

It seems the last point isn't obvious, so in the meantime could we put
the reduction plan up as a governance change?

Regards,
JP

From stig.openstack at telfer.org Wed Aug 28 15:38:43 2019
From: stig.openstack at telfer.org (Stig Telfer)
Date: Wed, 28 Aug 2019 16:38:43 +0100
Subject: Re: CEPH/RDMA + Openstack
In-Reply-To: <2fe6e6a4-dfec-730e-06e0-3816a370ec2a@gmx.com>
References: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com>
 <2fe6e6a4-dfec-730e-06e0-3816a370ec2a@gmx.com>
Message-ID: 

Hi Volodymyr -

When I was testing and benchmarking, it was outside of an OpenStack context. I’d be interested to know whether you saw a performance uplift. Are you using RDMA for both front-side and replication traffic?

Best wishes,
Stig

> On 28 Aug 2019, at 10:27, Volodymyr Litovka wrote:
>
> Hi Stig,
>
> the main question is - whether you tested it with Openstack and all
> components works? Because regarding Ceph itself - we're using Mellanox
> ConnectX-4LX cards and found that Ceph works fine with RoCE in LAG
> configuration.
>
>> There is potential here
>
> Yes, agree with you. That's why we're trying to be ready for future
> improvements :) I guess that since Ceph supports RDMA officially, Redhat
> (as owner of Ceph) will sell it and, thus, will improve support.
>
> Thank you.
>
> On 28.08.2019 11:42, Stig Telfer wrote:
>> Hi there -
>>
>>> On 28 Aug 2019, at 08:58, Volodymyr Litovka wrote:
>>> does anyone have experience using RDMA-enabled CEPH with Openstack? How
>>> stable is it? Whether all Openstack components (Nova, Cinder) works with
>>> such specific configuration of Ceph? Any issues which can affect overall
>>> system reliability?
>> Last time I looked at this (with pre-release Nautilus, about 9 months ago), I had mixed results.
>>
>> There are four generally-available RDMA fabrics (Infiniband, RoCE, iWARP and OPA) and I had a go at all of them apart from iWARP.
>>
>> RoCE worked for me but IB and OPA were troublesome to get working. There’s some work contributed for iWARP support that introduces the RDMA connection manager (RDMACM), which I found also helped for IB and OPA.
>>
>> There is potential here but in performance terms, I didn’t manage a thorough benchmarking and didn’t see conclusive proof of advantage. Perhaps things have come on since I looked, but it wasn’t an obvious win at the time. I’d love to have another pop at it, but for lack of time…
>>
>> Cheers,
>> Stig
>
> --
> Volodymyr Litovka
> "Vision without Execution is Hallucination." -- Thomas Edison

From mnaser at vexxhost.com Wed Aug 28 15:51:36 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Wed, 28 Aug 2019 11:51:36 -0400
Subject: [openstack-ansible] weekly update
Message-ID: 

Hi everyone,

Here’s the update on what happened in this week’s OpenStack Ansible Office
Hours.

- The uwsgi role is mostly done, but there’s an issue with the
  ansible-lint checks, so we’re thinking of moving it to the integrated
  repository and towards using collections in the long term.
- We’re still working on the os_murano role.
- We finally fixed the Rocky POST_FAILURES (thanks infra!)
- Rabbit and Galera are still stuck on the bind-to-mgmt work.
- We will also need new code in order to build images in test jobs for
  Octavia, since right now they’re only usable on master.

Thanks!
Regards, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From openstack at nemebean.com Wed Aug 28 15:54:53 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 28 Aug 2019 10:54:53 -0500 Subject: [keystone]How to prevent adding admin-role? In-Reply-To: References: Message-ID: <6da1710f-0e7d-130e-3243-775de73369f0@nemebean.com> Tagging with keystone for visibility. On 8/28/19 7:24 AM, Tavasti Markku EXT wrote: > Hi! > > I am trying to create ‘domain admin’ role which has permissions to > create projects and users, and manage user roles in projects within own > domain. I have pretty ok working set of policies done, but there is one > critical security hole: domain admin can add ‘admin’ role to user, and > after it user has superuser privileges. Is there any possibility to > limit domain admin rights to give only _/member/_ roles? I suspect the answer may be no, unfortunately. This is one of the longstanding limitations with roles - admin means admin of everything. There's work underway to improve that, but I think the policy system in Queens just wasn't designed for this sort of use case. That said, I'm not positive this is exactly the same scenario that people generally have trouble with, so hopefully a keystone person can chime in with a more definitive answer. > > I am working in Queens-based Redhat OSP13. > > Tavasti, Openstack admin > > > > For Internal Use Only > From openstack at nemebean.com Wed Aug 28 16:07:14 2019 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 28 Aug 2019 11:07:14 -0500 Subject: [openstack-dev] [oslo][i18n] Dropping lazy translation support In-Reply-To: <13628917.sMWNFXyucf@whitebase.usersys.redhat.com> References: <13628917.sMWNFXyucf@whitebase.usersys.redhat.com> Message-ID: <9d313980-4b33-10ee-25b0-4f24d0d2bf92@nemebean.com> On 8/28/19 5:13 AM, Luigi Toscano wrote: > On Monday, 19 August 2019 19:17:08 CEST Ben Nemec wrote: >> On 8/19/19 7:46 AM, Shengjing Zhu wrote: >>> Sorry for replying the old mail, and please cc me when reply. >>> >>> Matt Riedemann writes: >>>> This is a follow up to a dev ML email [1] where I noticed that some >>>> implementations of the upgrade-checkers goal were failing because some >>>> projects still use the oslo_i18n.enable_lazy() hook for lazy log message >>>> translation (and maybe API responses?). >>>> >>>> The very old blueprints related to this can be found here [2][3][4]. >>>> >>>> If memory serves me correctly from my time working at IBM on this, this >>>> was needed to: >>>> >>>> 1. Generate logs translated in other languages. >>>> >>>> 2. Return REST API responses if the "Accept-Language" header was used >>>> and a suitable translation existed for that language. >>>> >>>> #1 is a dead horse since I think at least the Ocata summit when we >>>> agreed to no longer translate logs since no one used them. >>>> >>>> #2 is probably something no one knows about. I can't find end-user >>>> documentation about it anywhere. It's not tested and therefore I have no >>>> idea if it actually works anymore. >>>> >>>> I would like to (1) deprecate the oslo_i18n.enable_lazy() function so >>>> new projects don't use it and (2) start removing the enable_lazy() usage >>>> from existing projects like keystone, glance and cinder. >>>> >>>> Are there any users, deployments or vendor distributions that still rely >>>> on this feature? If so, please speak up now. 
>>>
>>> I was pointed to this discussion when I tried to fix this feature in
>>> keystone, https://review.opendev.org/677117
>>>
>>> For #2, translated API responses, this feature probably hasn't been
>>> working for some time, but it's still a valid use case.
>>>
>>> Has the decision been settled?
>>
>> Not to my knowledge. Lazy translation still exists, but I don't know
>> that anyone is testing it.
>>
>> Are you saying that you are using this feature now, or are you
>> interested in using it going forward?
>
> A related note: we nuked enable_lazy in sahara because it broke Python 3
> support in mysterious ways:
>
> https://review.opendev.org/#/c/626643/
>

Hmm, we did a bad thing in the Message API. :-( We shadowed the built-in
translate function, so if translate gets called on a Message object it
goes to our code instead of the stdlib code.

I proposed https://review.opendev.org/#/c/679092/ which would fix the bug
you saw. It's not perfect, but at least it mitigates the problem since
translate is part of the public API and we can't just rename it.

From fungi at yuggoth.org  Wed Aug 28 16:22:11 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 28 Aug 2019 16:22:11 +0000
Subject: [tc] Gradually reduce TC to 11 members over 2020
In-Reply-To: References: <18d408c7-b396-0d3f-b5d8-ae537b25e5f7@redhat.com>
Message-ID: <20190828162211.n363gzjxj722iayb@yuggoth.org>

On 2019-08-28 08:30:58 -0400 (-0400), Julia Kreger wrote:
[...]
> I feel like this is something that would have to be discussed
> before the board so they are aware in advance of starting to
> reduce the size of the TC. If the board says "nope", then I
> suspect we need to respect that.
[...]

I think it would be nice of us to give them a heads-up, but the OSF
bylaws don't mandate any particular size for the OpenStack TC:

https://www.openstack.org/legal/technical-committee-member-policy/

Instead it only requires a supermajority of the sitting TC members to
agree to change where it's specified in our charter:

https://governance.openstack.org/tc/reference/charter.html#tc-members

So the OSF board of directors doesn't actually have any direct control
over the decision as far as I can tell.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From fungi at yuggoth.org  Wed Aug 28 16:46:11 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 28 Aug 2019 16:46:11 +0000
Subject: [requirements][placement] orjson instead of stdlib json
In-Reply-To: References:
Message-ID: <20190828164610.ctn6vot4kmc3cbk7@yuggoth.org>

On 2019-08-28 09:47:22 +0100 (+0100), Chris Dent wrote:
[...]
> Otherwise it's fine: active, Apache2 or MIT license.
>
> Thoughts?
>
> [1] https://pypi.org/project/orjson/
[...]

One thing worth noting is that they don't publish an sdist to PyPI, only
wheels: https://pypi.org/project/orjson/#files

This means that when I try to install it into a venv created with my
local build of Python v3.8.0b3 it fails to find any installable orjson
because there's no fallback on PyPI beyond the Python releases and
platforms for which they've explicitly produced wheels.

There's https://github.com/ijl/orjson/issues/18 open for over two months
requesting wheels for Python 3.8, but it's going to be a treadmill if
they're not also publishing an sdist.
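[Editor's illustration, not part of the original exchange: one quick way
to confirm the wheel-only situation is to ask pip to refuse binary
packages, so that it may only install from an sdist:

    $ pip install --no-binary :all: orjson

With no sdist published on PyPI there is nothing to fall back to, and the
install fails to find an installable candidate; the exact error message
varies by pip version.]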
It's further a potential license concern, since they're not publishing
the source code alongside the wheels (so if for example the upstream Git
repository goes away...). The official Python Packaging Guide notes that
sdist publication is strongly recommended, and that publishing wheels in
addition to that is optional:

https://packaging.python.org/guides/distributing-packages-using-setuptools/#source-distributions

If this is a library you care about using, it may make sense to attempt
to make these points to its maintainer(s).
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From mtreinish at kortar.org  Wed Aug 28 17:19:17 2019
From: mtreinish at kortar.org (Matthew Treinish)
Date: Wed, 28 Aug 2019 13:19:17 -0400
Subject: [requirements][placement] orjson instead of stdlib json
In-Reply-To: <20190828164610.ctn6vot4kmc3cbk7@yuggoth.org>
References: <20190828164610.ctn6vot4kmc3cbk7@yuggoth.org>
Message-ID: <20190828171917.GA3256@zeong>

On Wed, Aug 28, 2019 at 04:46:11PM +0000, Jeremy Stanley wrote:
> On 2019-08-28 09:47:22 +0100 (+0100), Chris Dent wrote:
> [...]
> > Otherwise it's fine: active, Apache2 or MIT license.
> >
> > Thoughts?
> >
> > [1] https://pypi.org/project/orjson/
> [...]
>
> One thing worth noting is that they don't publish an sdist to PyPI,
> only wheels: https://pypi.org/project/orjson/#files
>
> This means that when I try to install it into a venv created with my
> local build of Python v3.8.0b3 it fails to find any installable
> orjson because there's no fallback on PyPI beyond the Python
> releases and platforms for which they've explicitly produced wheels.
>
> There's https://github.com/ijl/orjson/issues/18 open for over two
> months requesting wheels for Python 3.8, but it's going to be a
> treadmill if they're not also publishing an sdist. It's further a
> potential license concern, since they're not publishing the source
> code alongside the wheels (so if for example the upstream Git
> repository goes away...). The official Python Packaging Guide notes
> that sdist publication is strongly recommended, and that publishing
> wheels in addition to that is optional:
>
> https://packaging.python.org/guides/distributing-packages-using-setuptools/#source-distributions
>
> If this is a library you care about using, it may make sense to
> attempt to make these points to its maintainer(s).

I expect that it's because orjson is actually a Python API for Rust
code. It looks like they chose to use pyo3-pack instead of
setuptools-rust (which is normally what I use). pyo3-pack only recently
added support for building sdists, but it's only been included in a beta
release so far (it was a longstanding issue with pyo3-pack:
https://github.com/PyO3/pyo3-pack/issues/2).

Also, even assuming orjson starts publishing sdists you'll still need
nightly Rust installed to compile it since pyo3 only works with the
nightly builds of Rust at this point. Which, while not difficult to do,
is not something people typically have installed.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:

From fungi at yuggoth.org  Wed Aug 28 17:35:16 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 28 Aug 2019 17:35:16 +0000
Subject: [requirements][placement] orjson instead of stdlib json
In-Reply-To: <20190828171917.GA3256@zeong>
References: <20190828164610.ctn6vot4kmc3cbk7@yuggoth.org> <20190828171917.GA3256@zeong>
Message-ID: <20190828173515.wqqg2t4iq7qrkb3g@yuggoth.org>

On 2019-08-28 13:19:17 -0400 (-0400), Matthew Treinish wrote:
[...]
> Also, even assuming orjson starts publishing sdists you'll still
> need nightly Rust installed to compile it since pyo3 only works
> with the nightly builds of Rust at this point. Which, while not
> difficult to do, is not something people typically have installed.

This points out yet another portability issue. After watching the Debian
community struggle to backport security-supported versions of Firefox to
their stable releases, which involved needing to backport a full rust
toolchain along with it, I'm unconvinced that's the sort of scenario we
should be inflicting on our users either.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From miguel at mlavalle.com  Wed Aug 28 17:49:22 2019
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Wed, 28 Aug 2019 12:49:22 -0500
Subject: [neutron][graphql] Is the graphql branch still being used?
Message-ID:

Dear Stackers,

Some time ago we created a feature branch in the Neutron repo for a PoC
with graphql:
https://opendev.org/openstack/neutron/src/branch/feature/graphql

Is this PoC still going on? Is the branch still needed? The branch has
been inactive since January and we are finding that patches against this
branch don't pass zuul tests: https://review.opendev.org/#/c/678144/. If
the branch is still being used, the team working on it should fix it so
tests pass. If it is not used, let's remove it. If I don't hear back from
anyone with an interest in maintaining this branch by September 6th, I
will go ahead and remove it.

Thanks and regards

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openstack at nemebean.com  Wed Aug 28 18:24:31 2019
From: openstack at nemebean.com (Ben Nemec)
Date: Wed, 28 Aug 2019 13:24:31 -0500
Subject: [oslo] Reminder: Feature Freeze is this Friday
Message-ID: <07dd0eab-cd17-611d-4bad-6abe7859a567@nemebean.com>

This is your friendly neighborhood PTL reminding you that if you need a
feature to merge in Oslo this cycle, it should be in by this Friday. We
feature freeze earlier than other projects so they have a chance to make
use of any features we add before their feature freeze. After Friday, any
new features in Oslo will require an exception from me.

I don't see much in the way of feature reviews open against Oslo right
now (at least none that are mergeable), but if I missed anything let me
know. Thanks.

-Ben

From rfolco at redhat.com  Wed Aug 28 18:34:00 2019
From: rfolco at redhat.com (Rafael Folco)
Date: Wed, 28 Aug 2019 15:34:00 -0300
Subject: [tripleo] TripleO CI Summary: Sprint 35
Message-ID:

Greetings,

The TripleO CI team has just completed Sprint 35 / Unified Sprint 14 (Aug
08 thru Aug 28). The following is a summary of completed work during this
sprint cycle:

- Completed testing on the RHEL8 OVB featureset 001 job and added it to
the periodic pipeline.
- Added RHEL8 scenario{1-4} jobs to the periodic pipeline as non-voting.
- Completed the design for the staging environment as a zuul job for
testing changes in the promoter server, and also to prepare for the
multi-arch builds.
- Created a staging promoter job and started implementing functional
tests on molecule for the staging environment setup.
- Completed a PoC for multi-arch container builds with a tagging and
manifests approach. The change [3] on the build-containers role that
introduces arch-aware tagging is still under review.
- Promotion status: green on all branches for most of the sprint cycle.
- Removed Fedora and Pike jobs everywhere.
- Bumped ansible to version 2.8 across TripleO CI jobs.

The planned work for the next sprint [1] is:

- Address issues for scenario{1-4} jobs for RHEL8 in the periodic
pipeline and get the jobs to green status.
- Merge multi-arch container tagging support and change the promoter code
to push manifests with annotated metadata for architecture, in addition
to the arch-tagged containers strategy.
- Implement the staging promoter job for testing changes in the promotion
workflow.
- Create functional test cases for the staging promoter setup using
molecule.
- Design a staging promotion workflow to emulate promotions using mock
containers and test servers.

The Ruck and Rover for this sprint are Gabriele Cerami (panda) and Sagi
Shnaidman (sshnaidm). Please direct questions or queries to them
regarding CI status or issues in #tripleo, ideally to whoever has the
'|ruck' suffix on their nick. Ruck/rover notes are being tracked in
etherpad [2].

Thanks,
rfolco

[1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-15
[2] https://etherpad.openstack.org/p/ruckroversprint15
[3] https://review.opendev.org/#/c/663977/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From elmiko at redhat.com  Wed Aug 28 18:58:07 2019
From: elmiko at redhat.com (Michael McCune)
Date: Wed, 28 Aug 2019 14:58:07 -0400
Subject: [neutron][graphql] Is the graphql branch still being used?
In-Reply-To:
References:
Message-ID:

On Wed, Aug 28, 2019 at 1:56 PM Miguel Lavalle wrote:

> Dear Stackers,
>
> Some time ago we created a feature branch in the Neutron repo for a PoC
> with graphql:
> https://opendev.org/openstack/neutron/src/branch/feature/graphql
>
> Is this PoC still going on? Is the branch still needed? The branch has
> been inactive since January and we are finding that patches against this
> branch don't pass zuul tests: https://review.opendev.org/#/c/678144/. If
> the branch is still being used, the team working on it should fix it so
> tests pass. If it is not used, let's remove it. If I don't hear back from
> anyone with an interest in maintaining this branch by September 6th, I
> will go ahead and remove it.
>

i just wanted to chime in from the api-sig. we had originally been
helping to organize this experiment with graphql, but i have not heard
any updates about the project in quite a while (> 9 months). the main
contact i had for this project was +Gilles Dubreuil, i have cc'd him on
this email.

peace o/

Thanks and regards
>
> Miguel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From whayutin at redhat.com  Wed Aug 28 20:05:17 2019
From: whayutin at redhat.com (Wesley Hayutin)
Date: Wed, 28 Aug 2019 14:05:17 -0600
Subject: [ptl][tripleo][election] PTL Candidacy
Message-ID:

Greetings,

I would like to nominate myself for the TripleO PTL role for the U cycle.
I have been working closely with several previous PTLs like Alex, Emilien
and Juan Osorio, and I am hoping to contribute back to the community as a
PTL while following their excellent example.

I would like to see TripleO continue focusing on simplification and
improving the user experience. Specifically, continue the path set in
previous TripleO cycles.

- Push forward with tripleo-ansible and the transformation squad.
- Work with the newly formed Ansible SIG group.
- Help continue to simplify the stack of components where possible.
- Help with new features like the undercloud minion node.
- Continue to support the wider adoption of the TripleO Single Node
deployment as a development and test tool.
- Continue the great work being done to clean up logging and error
handling to improve operators' ability to troubleshoot their
installations.

Thank you for the opportunity!

Thanks,
Wes Hayutin
irc: weshay
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mriedemos at gmail.com  Wed Aug 28 21:16:08 2019
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 28 Aug 2019 16:16:08 -0500
Subject: [all] [tc] [osc] [glance] [train] [ptls] Legacy client CLI to OSC review 639376
In-Reply-To:
References:
Message-ID: <0c962dfd-0b78-e658-7813-c0e33e807a8d@gmail.com>

On 8/23/2019 6:50 AM, Alexandra Settle wrote:
> I have abandoned https://review.opendev.org/#/c/639376/ due to the lack
> of response on the review itself, and this email thread.
>
> All discussion regarding the legacy CLI client and OSC to please be
> constructed through this thread.
>
> As part of step 2 of my outlined plan below, I'd be looking for someone
> with particular interest in the transition of legacy CLI and OSC to
> step forward and help start a pop-up group.

Just FYI, I've submitted a project idea for working on closing compute
API feature gaps in this university student/mentor program:

http://okrieg.github.io/EC500/index-fall-2019.html

I think the next steps are that the professors compile the submissions
and then they get ranked/voted on by which students want to work on
which projects, so no idea if anyone will want to work on this (nor do I
have a public posting of the project idea I submitted since it was just
a google form), but if it's selected then we'll have some new
contributors to help push the buttons.

That doesn't mean I'm signing up for legacy CLI -> OSC pop up SIG WG
ballyhoo, just that I did a thing.

--
Thanks,
Matt

From feilong at catalyst.net.nz  Wed Aug 28 22:48:47 2019
From: feilong at catalyst.net.nz (feilong)
Date: Thu, 29 Aug 2019 10:48:47 +1200
Subject: [election][ptl][magnum] PTL Candidacy
Message-ID: <8769d025-4bdc-dc92-b3bd-91e791da5d46@catalyst.net.nz>

Greetings,

Over the last release, I have had the privilege and honor to serve as the
PTL of Magnum. I would like to take this opportunity to express my
appreciation for all your support and contribution to Magnum. And I look
forward to working with you all over the next release. Therefore, I would
like to announce my candidacy to serve as the Magnum project team leader
for the U cycle, if you will have me.

In case you don't know me, I've been working on and contributing to
Magnum since the Queens release as a core developer. And I have done a
lot of work for Magnum to help it get production-ready, e.g. the Calico
network driver, node anti-affinity, k8s health status check, Keystone
auth, coreDNS autoscale, rolling upgrade, auto scaling, auto healing,
private clusters, getting Magnum certified by CNCF, etc.
That said, I really understand the ups and downs which Magnum went
through, and I know the blueprints, direction and pains of this project,
with interest in both the private cloud and public cloud sides, which
makes me a good candidate for the PTL position.

For the U release (including the rest of the Train cycle), things I'd
like to do:

1. Fedora CoreOS

The most mature driver for Kubernetes in Magnum is Fedora Atomic;
however, the Fedora community is moving to Fedora CoreOS, so we should
start to think about the migration. We have started the work but haven't
finished it due to the limited resources among the team. This needs to be
done in Train or early in the U cycle.

2. Node groups

Node groups provide users with the ability to specify groups of nodes
with different properties for different purposes. Within the scope of a
group, users are able to define labels, the image used, flavor, etc.,
depending on the purpose. We have merged part of this huge feature, but
we do need to finish the rest of the work.

3. Security hardening

Security is always the most important area we would like to improve for
Magnum; there are still parts we need to improve to make Magnum more
secure.

4. Containerize the Kubernetes master

This could be one of the most interesting features for Magnum and it's
definitely hard work. Running Kubernetes master nodes in a seed/root
Kubernetes could bring us a lot of benefits.

5. Rolling upgrade for the node operating system

With big support from CERN, we have done the k8s version rolling upgrade,
but we still need the rolling upgrade work for the node operating system.
I have already got a patch for that being reviewed now.

It has been a fantastic experience working with this amazing team, and I
know that without the dedication and hard work of everyone who has
contributed to Magnum we couldn't have gotten this good work done. So the
PTL of this cool team is mostly a facilitator, coordinator, and mentor. I
would be pleased to serve as PTL for Magnum for the U cycle and I'd
appreciate your vote.

Thanks for your consideration!

--
Cheers & Best regards,
Feilong Wang (王飞龙)
------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
------------------------------------------------------

From dangtrinhnt at gmail.com  Thu Aug 29 03:45:29 2019
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Thu, 29 Aug 2019 12:45:29 +0900
Subject: [ptl][searchlight][election] Trinh Nguyen's candidacy for Searchlight PTL in U cycle
Message-ID:

Hello.

I would like to announce my candidacy for Searchlight PTL for U. I've
been the PTL for Searchlight for the last two cycles (Stein and Train),
which I believe has given me enough understanding and confidence to help
maintain it for at least one more release.

Even though there was not much activity in Searchlight during the Train
cycle, as I surveyed people and companies around my circle, there are
still needs for a centralized resource indexing service as well as
multi-cloud support, as we laid out [1] in the use cases and vision
documents [2]. Moreover, with a simple architecture and a well-defined
contributing process, I strongly think Searchlight would be a good
starting point for future OpenStack contributors. Therefore, Searchlight
has good reasons to live on.
My goals as PTL for Searchlight in the U cycle, as I envision them, are:

* Welcome and help new contributors
* Help make Searchlight work with ElasticSearch 6.x, 7.x
* Finish ongoing blueprints and fix bugs

Thank you for your consideration.

Trinh Nguyen (dangtrinhnt)

[1] https://docs.openstack.org/searchlight/latest/user/usecases.html
[2] https://docs.openstack.org/searchlight/latest/contributor/searchlight-vision.html

--
Trinh Nguyen
www.edlab.xyz
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sxmatch1986 at gmail.com  Thu Aug 29 06:14:29 2019
From: sxmatch1986 at gmail.com (hao wang)
Date: Thu, 29 Aug 2019 14:14:29 +0800
Subject: [ptl][zaqar][election] WangHao's candidacy for Zaqar PTL in U cycle
Message-ID:

Hi all,

I would like to announce my candidacy for Zaqar PTL for U.

I've been PTL for Zaqar for two cycles (Rocky and Stein), and I still
want to serve this project in the new cycle, so here is my
self-nomination.

In the U release, we will continue our work on the following:

1) Keep Zaqar healthy and stable
2) Continue to support more backends for the topic resource
3) Consider removing the old API code to make the code base clearer
4) Support SMS subscriptions in Zaqar

Thanks for your consideration!

From renat.akhmerov at gmail.com  Thu Aug 29 07:05:59 2019
From: renat.akhmerov at gmail.com (Renat Akhmerov)
Date: Thu, 29 Aug 2019 14:05:59 +0700
Subject: [Mistral] PTL candidacy
In-Reply-To:
References:
Message-ID:

Hi,

I'd like to announce my PTL candidacy for Mistral in the U cycle.

In Train, we again made a big step towards having better performance,
which turned out to be very important for real production use cases. The
Mistral "join" mechanism is now much more CPU and RAM efficient (60-70%
faster for the same workflows). Mistral now also provides a mechanism to
plug in an alternative scheduler implementation. The new implementation
of the scheduler, which is designed to be more scalable, is now available
and being tested.

For the U cycle I'd like to focus on making Mistral more usable:

 * Consistent and well-structured documentation
 * Additional tools that simplify creating custom actions and functions
   for Mistral
 * Easier debugging of workflow errors
 * Easier and smoother installation process

Some of the mentioned activities are already going on. For example, we
added the workflow execution report that, when requested, shows the
entire execution tree (including sub-workflows). That makes it possible
to see error paths (and hence root causes) without many copy-paste
actions.

It's not a secret that in the last few cycles there was a struggle with
attracting more active contributors. Now, we finally have several very
talented engineers fully allocated to work on Mistral. So it makes me
believe that we'll be able to solve a number of big tasks (including
cool user-facing goodies) that we had to put on the shelf just for the
lack of resources.

As usual, anybody is welcome to join our team! It's a fun project to
work on. The best way to get in touch with us is the IRC channel
#openstack-mistral or the openstack-discuss mailing list (with the
[mistral] tag in the email subject).

Renat

[1] https://review.opendev.org/#/c/679191
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zufardhiyaulhaq at gmail.com  Thu Aug 29 07:20:22 2019
From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq)
Date: Thu, 29 Aug 2019 14:20:22 +0700
Subject: [neutron][ovn] networking-ovn plugin support for multiple remote ovsdb-server
Message-ID:

Hi All,

I have been testing OpenStack (Queens version) with OVN and it works, but
when I want to cluster the ovsdb-server, OpenStack gets stuck creating
Neutron objects. Based on my discussion on the openvswitch mailing list
(https://mail.openvswitch.org/pipermail/ovs-discuss/2019-August/049175.html),
my ovsdb-server clustered databases are working.

Does networking-ovn support connecting to multiple ovsdb-servers?
Something like this in ml2_conf.ini:

[ovn]
ovn_nb_connection = tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641
ovn_sb_connection = tcp:10.101.101.100:6642,tcp:10.101.101.101:6642,tcp:10.101.101.102:6642

and in networking-ovn-metadata-agent.ini:

[ovn]
ovn_sb_connection = tcp:10.101.101.100:6642,tcp:10.101.101.101:6642,tcp:10.101.101.102:6642

Below are the full steps to create the ovsdb-server and neutron services:

- step 1: bootstrapping the ovsdb-server cluster: http://paste.openstack.org/show/766812/
- step 2: creating the neutron service on the controller: http://paste.openstack.org/show/766464/
- step 3: creating the neutron service on the compute node: http://paste.openstack.org/show/766465/

But when I try to create a Neutron resource, it always hangs (only
Neutron resources). Here are the full logs of all nodes, containing:

http://paste.openstack.org/show/766461/

- all openvswitch logs
- ports (via netstat)
- the steps for bootstrapping the ovsdb-server

Neutron logs on controller0: paste.openstack.org/show/766462/

Does networking-ovn support connecting to multiple ovsdb-servers?

Best Regards,
Zufar Dhiyaulhaq
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pierre at stackhpc.com  Thu Aug 29 07:51:34 2019
From: pierre at stackhpc.com (Pierre Riteau)
Date: Thu, 29 Aug 2019 09:51:34 +0200
Subject: [blazar] No IRC meeting today
Message-ID:

Hello,

I won't be able to chair the biweekly Blazar IRC meeting today. The next
scheduled meeting is on September 12.

Cheers,
Pierre

From katonalala at gmail.com  Thu Aug 29 08:09:30 2019
From: katonalala at gmail.com (Lajos Katona)
Date: Thu, 29 Aug 2019 10:09:30 +0200
Subject: [neutron][graphql] Is the graphql branch still being used?
In-Reply-To:
References:
Message-ID:

Hi,

For the last summit in Denver we planned a demo with Gilles and a
presentation of the alternatives graphql & openAPIv3, but as it was not
supported, the PoC slowly died. If I understand well, as nobody has the
time/resources to work on API refactoring, it is now out of scope for
OpenStack.

Regards
Lajos

Michael McCune wrote (on Wed, 28 Aug 2019 at 21:03):

>
>
> On Wed, Aug 28, 2019 at 1:56 PM Miguel Lavalle wrote:
>
>> Dear Stackers,
>>
>> Some time ago we created a feature branch in the Neutron repo for a PoC
>> with graphql:
>> https://opendev.org/openstack/neutron/src/branch/feature/graphql
>>
>> Is this PoC still going on? Is the branch still needed? The branch has
>> been inactive since January and we are finding that patches against this
>> branch don't pass zuul tests: https://review.opendev.org/#/c/678144/. If
>> the branch is still being used, the team working on it should fix it so
>> tests pass. If it is not used, let's remove it. If I don't hear back from
>> anyone with an interest in maintaining this branch by September 6th, I
>> will go ahead and remove it.
>>
>>
> i just wanted to chime in from the api-sig.
> we had originally been helping to organize this experiment with graphql,
> but i have not heard any updates about the project in quite a while
> (> 9 months). the main contact i had for this project was
> +Gilles Dubreuil, i have cc'd him on this email.
>
> peace o/
>
> Thanks and regards
>>
>> Miguel
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From grant at civo.com  Thu Aug 29 10:26:12 2019
From: grant at civo.com (Grant Morley)
Date: Thu, 29 Aug 2019 11:26:12 +0100
Subject: Metadata service caching old nameservers?
Message-ID: <5e8e7fda-24ad-b4f6-99a0-77d6f0664be3@civo.com>

Hi All,

We have a bit of a weird issue with resolv.conf for instances. We have
changed our subnets in neutron to use Google's nameservers, which is
fine. However, it seems that when instances are launched they are still
getting the old nameserver settings as well as the new ones. If I look at
the metadata networking service it returns the old nameservers as well as
the new ones below:

curl -i http://169.254.169.254/openstack/2017-02-22/network_data.json
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Content-Length: 753
Date: Thu, 29 Aug 2019 09:40:03 GMT

{"services": [{"type": "dns", "address": "178.18.121.70"}, {"type":
"dns", "address": "178.18.121.78"}, {"type": "dns", "address":
"8.8.8.8"}, {"type": "dns", "address": "8.8.4.4"}]}

In our neutron dhcp-agent.ini file we have the correct dnsmasq
nameservers set:

dnsmasq_dns_servers = 8.8.8.8, 8.8.4.4

Are there any database tables I can change or clean up to ensure the old
nameservers no longer get set? I can't seem to find any reference to the
old nameservers in our config any more, so I assume one of the services
is still setting them but I can't figure out which.

Any help will be much appreciated.

Many thanks,

--
Grant Morley
Cloud Lead, Civo Ltd
www.civo.com | Signup for an account!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ext-markku.tavasti at elisa.fi  Thu Aug 29 12:52:02 2019
From: ext-markku.tavasti at elisa.fi (Tavasti Markku EXT)
Date: Thu, 29 Aug 2019 12:52:02 +0000
Subject: [keystone]How to prevent adding admin-role?
In-Reply-To: <6da1710f-0e7d-130e-3243-775de73369f0@nemebean.com>
References: <6da1710f-0e7d-130e-3243-775de73369f0@nemebean.com>
Message-ID:

> From: Ben Nemec
> On 8/28/19 7:24 AM, Tavasti Markku EXT wrote:
> > Is there any possibility to limit domain admin rights to give only
> > _member_ roles?
>
> I suspect the answer may be no, unfortunately. This is one of the
> longstanding limitations with roles - admin means admin of everything.
> There's work underway to improve that, but I think the policy system in
> Queens just wasn't designed for this sort of use case.

Actually, I found out how to restrict the rights of the domain admin so
that she can't add any roles other than _member_. The key is to add this
to the policy rule for identity:create_grant:

whatever_your_conditions_are and '_member_':%(target.role.name)s

Seems to be working.
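[Editor's note: written out as a policy.json entry, the rule above might
look like the following sketch, where "rule:is_domain_admin" is a
placeholder for whatever conditions define the domain admin in your
policy file:

    "identity:create_grant": "rule:is_domain_admin and '_member_':%(target.role.name)s"

The literal string check against target.role.name means the grant call
only succeeds when the role being assigned is named _member_, so a domain
admin can no longer hand out the admin role.]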
This page is most likely useful for anyone trying to do the same:
https://pedro.alvarezpiedehierro.com/2019/02/06/openstack-domain-project-admin/

--Tavasti

For Internal Use Only

From doka.ua at gmx.com  Thu Aug 29 14:07:23 2019
From: doka.ua at gmx.com (Volodymyr Litovka)
Date: Thu, 29 Aug 2019 17:07:23 +0300
Subject: CEPH/RDMA + Openstack
In-Reply-To: References: <65f16ab0-41cd-727f-82fd-2e5ebbfb5da5@gmx.com> <2fe6e6a4-dfec-730e-06e0-3816a370ec2a@gmx.com>
Message-ID:

Hi Stig,

as far as I understand, it's impossible to have RDMA on the front side,
because QEMU uses librbd, which talks to OSDs over TCP. Only replication
is done over RDMA. Nevertheless, I will keep you informed on the progress.

On 28.08.2019 18:38, Stig Telfer wrote:
> Hi Volodymyr -
>
> When I was testing and benchmarking, it was outside of an OpenStack context.
>
> I'd be interested to know whether you saw a performance uplift. Are you using RDMA for both front-side and replication traffic?
>
> Best wishes,
> Stig
>
>
>> On 28 Aug 2019, at 10:27, Volodymyr Litovka wrote:
>>
>> Hi Stig,
>>
>> the main question is - whether you tested it with Openstack and all
>> components work? Because regarding Ceph itself - we're using Mellanox
>> ConnectX-4LX cards and found that Ceph works fine with RoCE in LAG
>> configuration.
>>
>>> There is potential here
>> Yes, agree with you. That's why we're trying to be ready for future
>> improvements :) I guess that since Ceph supports RDMA officially, Redhat
>> (as owner of Ceph) will sell it and, thus, will improve support.
>>
>> Thank you.
>>
>> On 28.08.2019 11:42, Stig Telfer wrote:
>>> Hi there -
>>>
>>>> On 28 Aug 2019, at 08:58, Volodymyr Litovka wrote:
>>>> does anyone have experience using RDMA-enabled CEPH with Openstack? How
>>>> stable is it? Do all Openstack components (Nova, Cinder) work with
>>>> such a specific configuration of Ceph? Any issues which can affect
>>>> overall system reliability?
>>> Last time I looked at this (with pre-release Nautilus, about 9 months ago), I had mixed results.
>>>
>>> There are four generally-available RDMA fabrics (Infiniband, RoCE, iWARP and OPA) and I had a go at all of them apart from iWARP.
>>>
>>> RoCE worked for me but IB and OPA were troublesome to get working. There's some work contributed for iWARP support that introduces the RDMA connection manager (RDMACM), which I found also helped for IB and OPA.
>>>
>>> There is potential here but in performance terms, I didn't manage a thorough benchmarking and didn't see conclusive proof of advantage. Perhaps things have come on since I looked, but it wasn't an obvious win at the time. I'd love to have another pop at it, but for lack of time…
>>>
>>> Cheers,
>>> Stig
>>>
>>>
>> --
>> Volodymyr Litovka
>> "Vision without Execution is Hallucination."
-- Thomas Edison

From fungi at yuggoth.org  Thu Aug 29 14:41:32 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 29 Aug 2019 14:41:32 +0000
Subject: [OSSA-2019-004] Ageing time of 0 disables linuxbridge MAC learning (CVE-2019-15753)
Message-ID: <20190829144132.kivzg435silxniui@yuggoth.org>

=================================================================
OSSA-2019-004: Ageing time of 0 disables linuxbridge MAC learning
=================================================================

:Date: August 29, 2019
:CVE: CVE-2019-15753

Affects
~~~~~~~
- Os-vif: >=1.15.0,<1.15.2, 1.16.0

Description
~~~~~~~~~~~
James Denton with Rackspace reported a vulnerability in os-vif, the
Nova/Neutron network integration library. A hard-coded MAC ageing time of
0 disables MAC learning in linuxbridge, forcing obligatory Ethernet
flooding for non-local destinations, which both impedes network
performance and allows users to possibly view the content of packets for
instances belonging to other tenants sharing the same network. Only
deployments using the linuxbridge backend are affected.

Patches
~~~~~~~
- https://review.opendev.org/678098 (Stein)
- https://review.opendev.org/672834 (Train)

Credits
~~~~~~~
- James Denton from Rackspace (CVE-2019-15753)

References
~~~~~~~~~~
- https://launchpad.net/bugs/1837252
- http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15753

--
Jeremy Stanley, on behalf of the OpenStack VMT
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From openstack at nemebean.com  Thu Aug 29 14:51:58 2019
From: openstack at nemebean.com (Ben Nemec)
Date: Thu, 29 Aug 2019 09:51:58 -0500
Subject: [keystone]How to prevent adding admin-role?
In-Reply-To: References: <6da1710f-0e7d-130e-3243-775de73369f0@nemebean.com>
Message-ID: <4ef2d73b-79ae-6b89-a823-d552aaa9ac3b@nemebean.com>

On 8/29/19 7:52 AM, Tavasti Markku EXT wrote:
>> From: Ben Nemec
>> On 8/28/19 7:24 AM, Tavasti Markku EXT wrote:
>>> Is there any possibility to limit domain admin rights to give only
>>> _member_ roles?
>>
>> I suspect the answer may be no, unfortunately. This is one of the
>> longstanding limitations with roles - admin means admin of everything.
>> There's work underway to improve that, but I think the policy system in
>> Queens just wasn't designed for this sort of use case.
>
> Actually, I found out how to restrict the rights of the domain admin so
> that she can't add any roles other than _member_. The key is to add this
> to the policy rule for identity:create_grant:
> whatever_your_conditions_are and '_member_':%(target.role.name)s
>
> Seems to be working.

Cool, thanks for sharing your solution.

> This page is most likely useful for anyone trying to do the same:
> https://pedro.alvarezpiedehierro.com/2019/02/06/openstack-domain-project-admin/
>
> --Tavasti
>
> For Internal Use Only
>

From miguel at mlavalle.com  Thu Aug 29 15:40:41 2019
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Thu, 29 Aug 2019 10:40:41 -0500
Subject: [neutron][ptl][elections] Non candidacy for U cycle
Message-ID:

Dear Neutron team,

It has been an honor to be your PTL for the past four cycles. I have
really enjoyed the ride and am thankful for all the support and hard work
that you all have provided over these past two years. As I said in my
last self-nomination
(http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html),
I am of the opinion that it is healthy to transfer periodically the baton
to a new leader with fresh ideas and energy.
As I said in my last self-nomination ( http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html), I am of the opinion that it is healthy to transfer periodically the baton to a new leader with fresh ideas and energy. One of the focus areas of my PTL tenure has been to keep the team strong by actively helping community members to become contributors and core reviewers. As a consequence, I know for a fact that we have a deep bench of potential new PTLs ready to step up to the plate. As for me, I am not going anywhere. I am very lucky that my management at Verizon Media supports my continued active involvement with the community at large and the Neutron family of projects in particular. So I look forward to help the team in the upcoming U cycle and beyond. Thank you soooo much! Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Thu Aug 29 15:42:46 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 29 Aug 2019 10:42:46 -0500 Subject: [PTLs] [i18n] Call for help for Project Onboarding in Shanghai Message-ID: <5D67F276.7070801@openstack.org> Hi everyone! We'd like to suggest that PTLs/presenters of Project Onboarding find a buddy to do Chinese translation in the room in Shanghai. We're hoping some local volunteers will step up to pair with each project. I've set up an etherpad here so community members can sign up to volunteer: https://etherpad.openstack.org/p/Shanghai_Project_Onboarding_Translation PTLs, if you know someone within your project that can help, please add them to the list. Cheers, Jimmy From balazs.gibizer at est.tech Thu Aug 29 15:45:57 2019 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Thu, 29 Aug 2019 15:45:57 +0000 Subject: [nova] Status of the bp support-move-ops-with-qos-ports Message-ID: <1567093554.14852.1@smtp.office365.com> Hi, As feature freeze is closing in I try to raise awareness of the implementation status of the support-move-ops-with-qos-ports bp [1]. The original goal of the bp was to support every move operations (cold migrate, resize, live-migrate, evacuate, unshelve). Today I have ready and tested (unit + functional) patches proposed for cold migrate and resize [2]. I don't think every move operations will be ready and merged in Train but I still hope that there is a chance for cold migrate and resize to land. Sure I will continue working on supporting the operations that miss the Train in the U release. One possible complication in the current patch series is that it proposes the necessary RPC changes for _every_ move operations [3]. This might not be what we want to merge in Train if only cold migrate and resize support fill fit. So I see two possible way forwards: a) Others also feel that cold migrate and resize will fit into Train, and then I will quickly change [3] to only modify those RPCs. b) We agree to postpone the whole series to U. Then I will not spend too much time on reworking [3] in the Train timeframe. A connected topic: I proposed patches [4] to add cold migrate and resize tempest coverage to the nova-grenade-multinode (renamed from nova-grenade-live-migrate) job as Matt pointed out in [3] that we don't have such coverage and that my original change in [3] would have broken a mixed-compute-version deployment. Any feedback is appreciated. 
Cheers,
gibi

[1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/support-move-ops-with-qos-ports.html
[2] https://review.opendev.org/#/q/topic:bp/support-move-ops-with-qos-ports
[3] https://review.opendev.org/#/c/655721
[4] https://review.opendev.org/#/c/679210
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skaplons at redhat.com  Thu Aug 29 15:55:10 2019
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 29 Aug 2019 17:55:10 +0200
Subject: [neutron][ptl][elections] Non candidacy for U cycle
In-Reply-To: References:
Message-ID:

Hi,

Thank you for all your work leading us during those 4 cycles!

> On 29 Aug 2019, at 17:40, Miguel Lavalle wrote:
>
> Dear Neutron team,
>
> It has been an honor to be your PTL for the past four cycles. I have really enjoyed the ride and am thankful for all the support and hard work that you all have provided over these past two years. As I said in my last self-nomination (http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html), I am of the opinion that it is healthy to transfer periodically the baton to a new leader with fresh ideas and energy. One of the focus areas of my PTL tenure has been to keep the team strong by actively helping community members to become contributors and core reviewers. As a consequence, I know for a fact that we have a deep bench of potential new PTLs ready to step up to the plate.
>
> As for me, I am not going anywhere. I am very lucky that my management at Verizon Media supports my continued active involvement with the community at large and the Neutron family of projects in particular. So I look forward to helping the team in the upcoming U cycle and beyond.
>
> Thank you soooo much!
>
> Miguel

—
Slawek Kaplonski
Senior software engineer
Red Hat

From skaplons at redhat.com  Thu Aug 29 16:01:34 2019
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 29 Aug 2019 18:01:34 +0200
Subject: [neutron][election] PTL candidacy
Message-ID:

Hi,

I want to propose my candidacy for Neutron PTL.

I have worked with OpenStack since the Havana release, first as a user
and operator and later on as an upstream developer mostly focused on
Neutron. I was appointed to the Neutron core team in the Queens cycle and
joined the Neutron Drivers last September. I participated very actively
in and led the QoS sub-team, and currently lead the project's CI group,
focusing on identifying bugs and keeping our gate jobs squeaky clean.

During my involvement with Neutron, I have been employed by an OpenStack
operator, OVH, and one of the big distributions, Red Hat. This experience
allows me to bring to the project a blend of operational and product
development perspectives. I think that my experience uniquely positions
me to serve as an effective Neutron PTL.

My priorities to focus on as a PTL are:

* I would like to continue the team effort on merging the ML2/OVS+DVR
solution with networking-ovn - it is quite a new initiative, described in
https://review.opendev.org/#/c/658414/, which I want to continue in the
next cycle. The OVN backend in OpenStack scales better than Neutron's
current default solution with the openvswitch agent. It already has some
features which we are missing in core Neutron, like DVR based on OpenFlow
and DVR for OVS DPDK. OVN is also becoming a more prominent backend in
e.g. Kubernetes and is more and more popular with OpenStack operators.
I think that we should use this momentum and finally include OVN as one
of the in-tree backends in Neutron, and consider it as the default at
some future point.

* Improve the overall stability of our CI - we have already made a lot of
improvements in this area, but I think that we must continue the effort
in order to provide developers with a great overall experience and the
highest possible speed when working on new features.

* Improve test coverage for existing features - this is very important to
avoid regressions and find bugs in existing features. Stability of
existing Neutron features is very important for users and operators, and
we, as developers, should focus on this maybe even more than on providing
new features.

* Finish the transition to the new engine facade - it is a long-standing
blueprint already, and we should focus on finally finishing this
transition and using the new engine facade everywhere in Neutron.

* Improve the stability of Neutron when it is running under uWSGI - since
a couple of cycles ago we have had the possibility to run Neutron in this
way, but it is not the default yet. We have some CI jobs to test Neutron
in this configuration, but those jobs are non-voting and still quite
unstable. So I would like to make those jobs more stable and voting.

I think that Miguel did a great job in the last four cycles as Neutron's
PTL, and I would like to continue his work and his way of managing the
team, e.g. mentoring potential new core reviewers. I think that we
currently have a pretty stable and good team, with quite a good number of
cores, so my goal is to keep it in good shape. Helping onboard new
contributors is critical to the growth and success of our community and
will be a priority during my tenure.

—
Slawek Kaplonski
Senior software engineer
Red Hat

From amuller at redhat.com  Thu Aug 29 16:03:26 2019
From: amuller at redhat.com (Assaf Muller)
Date: Thu, 29 Aug 2019 12:03:26 -0400
Subject: [neutron][ptl][elections] Non candidacy for U cycle
In-Reply-To: References:
Message-ID:

On Thu, Aug 29, 2019 at 11:59 AM Slawek Kaplonski wrote:
>
> Hi,
>
> Thank you for all your work leading us during those 4 cycles!
>
> > On 29 Aug 2019, at 17:40, Miguel Lavalle wrote:
> >
> > Dear Neutron team,
> >
> > It has been an honor to be your PTL for the past four cycles. I have really enjoyed the ride and am thankful for all the support and hard work that you all have provided over these past two years. As I said in my last self-nomination (http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html), I am of the opinion that it is healthy to transfer periodically the baton to a new leader with fresh ideas and energy. One of the focus areas of my PTL tenure has been to keep the team strong by actively helping community members to become contributors and core reviewers. As a consequence, I know for a fact that we have a deep bench of potential new PTLs ready to step up to the plate.
> >
> > As for me, I am not going anywhere. I am very lucky that my management at Verizon Media supports my continued active involvement with the community at large and the Neutron family of projects in particular. So I look forward to helping the team in the upcoming U cycle and beyond.
> > > > Miguel > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From mriedemos at gmail.com Thu Aug 29 16:09:46 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 29 Aug 2019 11:09:46 -0500 Subject: [nova] Status of the bp support-move-ops-with-qos-ports In-Reply-To: <1567093554.14852.1@smtp.office365.com> References: <1567093554.14852.1@smtp.office365.com> Message-ID: <3b96b3fe-1723-f748-5b86-d52bc99247f8@gmail.com> On 8/29/2019 10:45 AM, Balázs Gibizer wrote: > As feature freeze is closing in I try to raise awareness of the > implementation status of the support-move-ops-with-qos-ports >  bp > [1]. Thanks for this since I know you've been pushing patches but I haven't been able to get myself past the hump that is [3] yet. > > The original goal of the bp was to support every move operations (cold > migrate, resize, live-migrate, evacuate, unshelve). Today I have ready > and tested (unit + functional) patches proposed for cold migrate and > resize [2]. I don't think every move operations will be ready and merged > in Train but I still hope that there is a chance for cold migrate and > resize to land. Sure I will continue working on supporting the > operations that miss the Train in the U release. This is probably OK since we agreed at the PTG that this blueprint was essentially going to be closing gaps in functionality introduced in Stein but not adding a new microversion, correct? If so, then doing it piece-meal seems OK to me. > > One possible complication in the current patch series is that it > proposes the necessary RPC changes for _every_ move operations [3]. This > might not be what we want to merge in Train if only cold migrate and > resize support fill fit. So I see two possible way forwards: > a) Others also feel that cold migrate and resize will fit into Train, > and then I will quickly change [3] to only modify those RPCs. > b) We agree to postpone the whole series to U. Then I will not spend too > much time on reworking [3] in the Train timeframe. Or (c) we just land that change. I was meaning to review that today anyway so this email is coincidental for me. I think passing the request spec down to the compute methods is fine and something we'll want to do eventually anyway for this series (even if the live migration stuff is deferred to U). > > A connected topic: I proposed patches [4] to add cold migrate and resize > tempest coverage to the nova-grenade-multinode (renamed from > nova-grenade-live-migrate) job as Matt pointed out in [3] that we don't > have such coverage and that my original change in [3] would have broken > a mixed-compute-version deployment. I'm just waiting on CI results for those but since I've already been through them in some form and they are pretty small I think there is a good chance of landing these soon. -- Thanks, Matt From kennelson11 at gmail.com Thu Aug 29 16:40:52 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 29 Aug 2019 09:40:52 -0700 Subject: [all][election][TC] Candidacy for TC Message-ID: Hello Everyone :) I am declaring my candidacy for the TC election. Roles like PTL and TC member are incredibly important when it comes to enabling others’ success and removing roadblocks that stand between contributors (new or established) and the work they want to do. Much of the work I have done over the years has had this focus, and as an elected member of the TC I think I could have an even larger impact. A little background on me: I have been working on OpenStack since 2015 (Liberty release). 
I started with getting involved organizing and mentoring in the Women of OpenStack’s mentoring programs and developing Cinder before branching out to os-brick and on to other roles. Since then, I have been involved in many different projects and efforts focused on supporting the larger community. Highlights: - Onboarded over 300 global contributors as instructor at OpenStack Upstream Institute (OUI) building our contributor community in places like Brazil, Korea, and Vietnam[1] - Developed the curriculum of OUI, including the Contributor Guide[2]. - Evolved OpenStack’s various mentoring programs[3] and actively participated in them, including our community’s involvement in the Outreachy Program[4]. - Established the First Contact SIG[5]. - Coordinated the community’s ongoing migration to StoryBoard from Launchpad - Presided over 5 releases of (Pike-Train) technical elections (both PTL and TC) as an election official - Served on the Release Management, Cinder, & Infrastructure (StoryBoard) teams - Helped organize Forums, PTGs, and other in-person gatherings - Collaborated with the k8s community to improve both our community’s various onboarding programs My experience teaching Upstream Institute and running the First Contact SIG has helped me build skills that will be beneficial as a TC member, such as making connections between new contributors and existing projects or efforts. This ability to make connections and start conversations will support our community’s cross project efforts as they continue to be a challenge that I will help the TC overcome. My previous work has afforded me the opportunity to interact with almost every OpenStack project and understand how they operate differently from one another. Each project has its own culture; understanding that and how to engage with each of them is incredibly important for the Technical Committee to achieve selected community goals and other cross project efforts. As a member of the TC, I will seek to lower the barrier to entry to roles like contributor or even PTL. There is a lot of duplicate and stale information for setting up the accounts and tools needed to get started which should be cleaned up. Individual project contributor documentation[6][7] doesn’t exist for all the projects currently and would be valuable. Individual project PTL guides (similar to the Release Management process doc[8]) would also benefit our community as it would smooth the transition from one PTL to another and minimize the loss of tribal knowledge, especially when these transitions happen in the middle of a cycle. I’ve proposed this contributor documentation focus as a community goal for the U release[9]. Anything we can do to make a cohesive path for new community members to work their way to roles like PTL will ensure the health and success of our future. I love this community and all the awesome people in it, serving on the TC would be a great experience and an honor. 
Thank you, Kendall Review history: https://review.opendev.org/#/q/reviewer:16708,n,z Commit history: https://review.openstack.org/#/q/owner:16708,n,z Foundation Profile: https://www.openstack.org/community/members/profile/35859/ Freenode: diablo_rojo [1] https://docs.openstack.org/upstream-training/ [2] https://docs.openstack.org/contributors/ [3] https://wiki.openstack.org/wiki/Mentoring [4] https://www.outreachy.org/ [5] https://wiki.openstack.org/wiki/First_Contact_SIG [6] https://docs.openstack.org/swift/latest/development_guidelines.html [7] https://docs.openstack.org/nova/latest/contributor/index.html [8] https://releases.openstack.org/reference/process.html [9] https://etherpad.openstack.org/p/PVG-u-series-goals -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Aug 29 16:48:39 2019 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Aug 2019 11:48:39 -0500 Subject: [release] Release countdown for week R-6, September 2-6 Message-ID: <20190829164839.GA23278@sm-workstation> Development Focus ----------------- Work on libraries should be wrapping up, in preparation for the various library-related deadlines coming up. Now is a good time to make decisions on deferring feature work to the next development cycle in order to be able to focus on finishing already-started feature work. General Information ------------------- We are now getting close to the end of the cycle, and will be gradually freezing feature work on the various deliverables that make up the OpenStack release. This coming week is the deadline for general libraries (except client libraries): their last feature release needs to happen before "Non-client library freeze" on September 05. Only bugfixes releases will be allowed beyond this point. When requesting those library releases, you can also include the stable/train branching request with the review (as an example, see the "branches" section here: https://opendev.org/openstack/releases/src/branch/master/deliverables/pike/os-brick.yaml#n2) In the next weeks we will have deadlines for: * Client libraries (think python-*client libraries), which need to have their last feature release before "Client library freeze" (September 12) * Deliverables following a cycle-with-rc model (that would be most services), which observe a Feature freeze on that same date, September 12. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. As we are getting to the point of creating stable/train branches, this would be a good point for teams to review membership in their $project-stable-maint groups. Once the stable/train branches are cut for a repo, the ability to approve any necessary backports into those branches for train will be limited to the members of that stable team. If there are any questions about stable policy or stable team membership, please each out in the #openstack-stable channel. Finally, now is also a good time to start planning what highlights you want for your deliverables in the cycle highlights. The deadline to submit an initial version for those is set to Feature freeze on September 12. 
Background on cycle-highlights:
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html
Project Team Guide, Cycle-Highlights:
https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights
knelson [at] openstack.org / diablo_rojo on IRC is available if you need
help selecting or writing your highlights

Upcoming Deadlines & Dates
--------------------------

Non-client library freeze: September 05 (R-6 week)
Client library freeze: September 12 (R-5 week)
Train-3 milestone (feature freeze): September 12 (R-5 week)
RC1 deadline: September 26 (R-3 week)
Train final release: October 16
Forum+PTG at Shanghai summit: November 4

From rico.lin.guanyu at gmail.com  Thu Aug 29 17:05:30 2019
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Fri, 30 Aug 2019 01:05:30 +0800
Subject: [all][tc] Hear ye! Hear ye! Our Newest Release Name - OpenStack Ussuri
Message-ID:

Dear all OpenStackers,

It's my honor to announce the OpenStack U release name "Ussuri" (named
after the Ussuri River).

Ussuri has been selected as the top option in the community poll and has
also been lawyer-approved, so it's official.

Thanks again to all who joined the U release naming process.

--
May The Force of OpenStack Be With You,

Rico Lin
irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mnaser at vexxhost.com  Thu Aug 29 17:09:10 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Thu, 29 Aug 2019 13:09:10 -0400
Subject: [all][tc] Hear ye! Hear ye! Our Newest Release Name - OpenStack Ussuri
In-Reply-To: References:
Message-ID:

On Thu, Aug 29, 2019 at 1:08 PM Rico Lin wrote:

> Dear all OpenStackers,
>
> It's my honor to announce the OpenStack U release name "Ussuri" (named
> after the Ussuri River).
>
> One of the focus areas of my
> PTL tenure has been to keep the team strong by actively helping community
> members to become contributors and core reviewers. As a consequence, I
> know for a fact that we have a deep bench of potential new PTLs ready to
> step up to the plate.
>
> As for me, I am not going anywhere. I am very lucky that my management at
> Verizon Media supports my continued active involvement with the community
> at large and the Neutron family of projects in particular. So I look
> forward to help the team in the upcoming U cycle and beyond.
>
> Thank you soooo much!
>
> Miguel

Miguel,

Thank you for your leadership these past years. You have set a high
standard for what it means to be a leader in the Neutron community. You
have been a great technologist, keeping a keen eye on the strategic needs
of the project and the community while also possessed of great insight when
battling bugs. But you have also been a kind word, an outstretched hand, or
a welcoming presence for those of us who have had the pleasure to work with
you. I feel that with your leadership and your example Neutron has been a
better environment in which to labor. So thank you for everything you have
done and been for your fellow Neutrinos these four cycles.

Nate

From haleyb.dev at gmail.com  Thu Aug 29 19:27:34 2019
From: haleyb.dev at gmail.com (Brian Haley)
Date: Thu, 29 Aug 2019 15:27:34 -0400
Subject: [neutron][ptl][elections] Non candidacy for U cycle
In-Reply-To: 
References: 
Message-ID: <7a492dfd-389f-1cf5-9bca-63432b318552@gmail.com>

Miguel,

No, thank *you* for the past two years of leadership, Neutron would not be
where it is today without you! Glad you are able to continue on as a
Neutrino as well :)

-Brian

On 8/29/19 11:40 AM, Miguel Lavalle wrote:
> Dear Neutron team,
>
> It has been an honor to be your PTL for the past four cycles. I have
> really enjoyed the ride and am thankful for all the support and hard
> work that you all have provided over these past two years. As I said in
> my last self-nomination
> (http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html),
> I am of the opinion that it is healthy to transfer periodically the
> baton to a new leader with fresh ideas and energy. One of the focus
> areas of my PTL tenure has been to keep the team strong by actively
> helping community members to become contributors and core reviewers. As
> a consequence, I know for a fact that we have a deep bench of potential
> new PTLs ready to step up to the plate.
>
> As for me, I am not going anywhere. I am very lucky that my management
> at Verizon Media supports my continued active involvement with the
> community at large and the Neutron family of projects in particular. So
> I look forward to help the team in the upcoming U cycle and beyond.
>
> Thank you soooo much!
>
> Miguel

From haleyb.dev at gmail.com  Thu Aug 29 19:32:06 2019
From: haleyb.dev at gmail.com (Brian Haley)
Date: Thu, 29 Aug 2019 15:32:06 -0400
Subject: Metadata service caching old nameservers?
In-Reply-To: <5e8e7fda-24ad-b4f6-99a0-77d6f0664be3@civo.com>
References: <5e8e7fda-24ad-b4f6-99a0-77d6f0664be3@civo.com>
Message-ID: <04cddfc2-6ba0-be2e-ead1-eb78ad3ea891@gmail.com>

On 8/29/19 6:26 AM, Grant Morley wrote:
> Hi All,
>
> We have a bit of a weird issue with resolv.conf for instances. We have
> changed our subnets in neutron to use Google's nameservers, which is
> fine. However it seems that when instances are launched they are still
> getting the old nameserver settings as well as the new ones.
> If I look at the
> metadata networking service it returns the old nameservers as well as
> the new ones below:
>
> curl -i http://169.254.169.254/openstack/2017-02-22/network_data.json
> HTTP/1.1 200 OK
> Content-Type: text/plain; charset=UTF-8
> Content-Length: 753
> Date: Thu, 29 Aug 2019 09:40:03 GMT
>
> {"services": [{"type": "dns", "address": "178.18.121.70"}, {"type":
> "dns", "address": "178.18.121.78"}, {"type": "dns", "address":
> "8.8.8.8"}, {"type": "dns", "address": "8.8.4.4"}]
>
> In our neutron dhcp-agent.ini file we have the correct dnsmasq
> nameservers set:
>
> dnsmasq_dns_servers = 8.8.8.8, 8.8.4.4
>
> Are there any database tables I can change or clear up to ensure the old
> nameservers no longer get set? I can't seem to find any reference in our
> config any more of the old nameservers, so I assume one of the services
> is still setting them but I can't figure out what is.

So does the DHCP response on lease renewal just have the two Google
nameservers in it? If so, does a new VM booted on the subnet have the
correct metadata values reported? Just trying to narrow down if this is in
the neutron code or not.

-Brian

From ashlee at openstack.org  Thu Aug 29 19:41:34 2019
From: ashlee at openstack.org (Ashlee Ferguson)
Date: Thu, 29 Aug 2019 14:41:34 -0500
Subject: [User-committee] UC candidacy.
In-Reply-To: <40a7c19e-83c0-5f51-aa8c-eaed57e4896b@gmail.com>
References: <3b2eb984-ad82-18cd-c87a-12621aa396e2@gmail.com>
 <40a7c19e-83c0-5f51-aa8c-eaed57e4896b@gmail.com>
Message-ID: 

Hi Ian,

Thanks for flagging this! I’ve connected with Ilya and Jimmy. We’re getting
Ilya’s group added to Meetup.com/pro/osf shortly.

Ashlee

> On Aug 26, 2019, at 6:43 PM, Ian Y. Choi wrote:
>
> (Adding community at lists.openstack.org mailing list and some Foundation
> members related with this)
>
> Hello Ilya,
>
> As announced by [1], https://www.meetup.com/pro/osf/ is now an official
> group portal.
> As a UC election official, I discussed this at the last UC IRC meeting
> and I got confirmation that the AUC list is retrieved from Meetup Pro [2].
>
> Two more comments:
> - @Ilya: It seems that the Russian user group is not listed in Meetup
> Pro. If so, please talk with Ashlee to successfully register your user
> group with Meetup Pro.
> - @Ashlee @Jimmy: Is it possible to make a redirection of the URL
> groups.openstack.org to www.meetup.com/pro/osf/ ?
>
>
> With many thanks,
>
> /Ian
>
> [1] http://lists.openstack.org/pipermail/community/2019-April/001956.html
> [2] http://eavesdrop.openstack.org/meetings/uc/2019/uc.2019-08-26-15.03.log.html#l-68
>
> Ilya Alekseyev wrote on 8/27/2019 5:34 AM:
>> Hi Ian!
>>
>> Thank you for sharing AUC requirements.
>> Could you please clarify how Official OpenStack User Groups are
>> currently defined?
>> To the best of my knowledge, the Groups Portal was retired.
>>
>> Thank you in advance.
>>
>> Kind regards,
>> Ilya Alekseyev.
>> Russian OpenStack Community
>>
>>
>> On Mon, Aug 26, 2019 at 19:52, Ian Y. Choi wrote:
>>
>> Hello Natal,
>>
>> First of all, thanks a lot for your UC candidacy!
>>
>> The UC election is applicable for Active User Contributors (AUCs) as
>> defined
>> by the OpenStack User Committee Charter [1],
>> and I would like to share that both UC election officials could not
>> verify that you are an AUC.
>>
>> If you can share more details which help to verify that you are an
>> AUC who is eligible for the election, please do not hesitate to share
>> them with the election officials before the end of the nomination period.
>>
>> Although the election officials could not confirm that you are an
>> eligible candidate for the election, I am pretty sure that you can
>> become an AUC later, and can run for the next UC election(s).
>>
>>
>> With many thanks,
>>
>> /Ian
>>
>> [1]
>> https://governance.openstack.org/uc/reference/charter.html#active-user-contributors-auc
>>
>> Natal Ngétal wrote on 8/26/2019 9:29 PM:
>> > Hi,
>> >
>> > I saw a mail on openstack-discuss about the new user committee
>> election. I'm
>> > Natal; I principally contribute to the tripleo part. I have also
>> contributed a
>> > little to oslo projects and gertty, for example. I'm really
>> interested in
>> > being a candidate for this post. That can seem weird, because I only
>> started to
>> > contribute in October 2018, but I think a fresh pair of eyes can be
>> > interesting for the project. I wish to get more involved in the
>> community. For
>> > example, I want to organize local meetups regularly, go to the
>> OpenStack summit
>> > and PTG, and start to give talks and write articles on the project. I
>> also wish
>> > to meet more of the customers and work with them to improve the
>> project and
>> > understand the customers' problems. With this role that can be
>> easier. For me,
>> > it can also be really interesting to learn more about the project and
>> the
>> > community. That can also be a real source of motivation, and it can
>> help me in
>> > my work.
>> >
>> > My gerrit profile:
>> >
>> > https://review.opendev.org/#/q/owner:hobbestigrou%2540erakis.eu
>> >
>> >
>> > Thanks
>> >
>> > _______________________________________________
>> > User-committee mailing list
>> > User-committee at lists.openstack.org
>> >
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>>
>>
>>
>> _______________________________________________
>> User-committee mailing list
>> User-committee at lists.openstack.org
>>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>
>
> _______________________________________________
> User-committee mailing list
> User-committee at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris at openstack.org  Thu Aug 29 19:48:47 2019
From: chris at openstack.org (Chris Hoge)
Date: Thu, 29 Aug 2019 12:48:47 -0700
Subject: [loci] Non-candidacy for U cycle
Message-ID: <7590360F-AE93-4236-B5F2-F9A1197333A9@openstack.org>

Thank you to the Loci team for having me serve as PTL. With the project
seeing heavier usage in production, I feel it's important to hand over
leadership to the developers who have more day-to-day usage. There are
still lots of great enhancements that can be made to the project, and I'm
excited to see what comes next.

-Chris

From engrsalmankhan at gmail.com  Wed Aug 28 11:40:37 2019
From: engrsalmankhan at gmail.com (Salman Khan)
Date: Wed, 28 Aug 2019 12:40:37 +0100
Subject: FWAAS V2 doesn't work with DVR
In-Reply-To: 
References: 
Message-ID: 

Hi Brian,

Thanks for your reply. We are using the Queens release. FWAAS_v2 for sure
doesn't work with DVR, but without DVR it's all fine. I think the way DVR
does east-west routing (across two internal subnets) could never work with
iptables, because it's too complex to handle. That's probably why the
community is moving towards OVS rules. However, I made a few changes in the
code to make the north-south firewall workable; I will push a code change
sometime soon after cleanup.
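For anyone else debugging this, the mismatch is easy to spot from inside
the router namespace (illustrative commands - substitute your own router
UUID):

  # list the interfaces that actually exist in the namespace
  $ sudo ip netns exec qrouter-<router-uuid> ip -o link show

  # dump the iptables rules and see which interface they reference
  $ sudo ip netns exec qrouter-<router-uuid> iptables -S | grep rfp-

If the rfp- name in the rules doesn't match any interface listed by the
first command, the rules can never match any traffic.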
Salman

On Tue, Aug 27, 2019 at 10:00 PM Brian Haley wrote:

> Hi Salman,
>
> On 8/21/19 2:49 PM, Salman Khan wrote:
> > Hi Guys,
> >
> > I asked this question over the #openstack-neutron channel but didn't
> > get any answer, so asking here in the hope that someone might read this
> > email and reply.
> > The problem is: I have enabled FWAAS_V2 with DVR and that doesn't seem
> > to work. I debugged things down to router namespaces and it looks like
> > iptables rules are applied to an rfp- interface which doesn't
> > exist in that namespace. So the rules are completely wrong as they are
> > applied to an interface that doesn't exist; I mean there is an rfp-*
> > interface, but the one that fwaas is expecting is not what it
> > should be. I tried applying the rules to the qr-* interfaces in the
> > namespace but that didn't work as well, packets are dropping on the
> > "invalid" state rule. That's probably because of nat rules from dvr.
> > Can someone please help me to understand this behaviour? Is it really
> > supposed to work or not? Is there any bug or fix pending, or is there
> > any work ongoing to support this?
>
> Can you tell what version of neutron/neutron-fwaas you are using?
>
> Short of that I believe it should work, the only bug I found that seems
> related and was fixed recently (end of 2018) was
> https://bugs.launchpad.net/neutron/+bug/1762454 so maybe take a look at
> that and see if it is the same thing.
>
> Otherwise maybe someone on the Fwaas team has seen it?
>
> -Brian
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zufar at onf-ambassador.org  Thu Aug 29 07:05:12 2019
From: zufar at onf-ambassador.org (Zufar Dhiyaulhaq)
Date: Thu, 29 Aug 2019 14:05:12 +0700
Subject: [neutron][ovn] networking-ovn plugin support for multiple remote
 ovsdb-server
Message-ID: 

Hi All,

I have been testing OpenStack (Queens version) with OVN and it works, but
when I try to cluster the ovsdb-server, OpenStack gets stuck creating
neutron objects. Based on my discussion on the openvswitch mailing list
(https://mail.openvswitch.org/pipermail/ovs-discuss/2019-August/049175.html),
my clustered ovsdb-server databases are working.

Does networking-ovn support connecting to multiple ovsdb-servers?
Something like this in ml2_conf.ini:

[ovn]
ovn_nb_connection = tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641
ovn_sb_connection = tcp:10.101.101.100:6642,tcp:10.101.101.101:6642,tcp:10.101.101.102:6642

networking-ovn-metadata-agent.ini:

[ovn]
ovn_sb_connection = tcp:10.101.101.100:6642,tcp:10.101.101.101:6642,tcp:10.101.101.102:6642

Below are the full steps to create the ovsdb-server and neutron services:

- step 1: bootstrapping the ovsdb-server cluster:
  http://paste.openstack.org/show/766812/
- step 2: creating the neutron service on the controller:
  http://paste.openstack.org/show/766464/
- step 3: creating the neutron service on the compute node:
  http://paste.openstack.org/show/766465/

But when I try to create a neutron resource, it always hangs (only neutron
resources). This is the full log set from all nodes, containing:
http://paste.openstack.org/show/766461/

- all openvswitch logs
- ports (via netstat)
- the steps for bootstrapping the ovsdb-server

Neutron logs on controller0: paste.openstack.org/show/766462/

Does networking-ovn support connecting to multiple ovsdb-servers?

Best Regards,
Zufar Dhiyaulhaq
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From saivishwasp at gmail.com  Thu Aug 29 16:45:17 2019
From: saivishwasp at gmail.com (Sai Vishwas)
Date: Thu, 29 Aug 2019 22:15:17 +0530
Subject: [Openstack][Swift] : Swift file-storage allocation policy
Message-ID: 

Hi all,

I have the following questions regarding the file allocation policy in
OpenStack Swift:

1. Are the uploaded files/objects mapped to partitions solely based on the
name by hashing? Is there any way that I could force a particular file to
reside on a particular disk?
2. How are the partitions mapped to disks? Is there any way that I could
force a particular partition to reside on a particular disk?
3. Are the sizes of all partitions the same? (I am aware that a partition
is a virtual concept, but I am interested in whether a limit is placed on
how many files are assigned to a particular partition.)

I am asking the above questions as I have a few files which are accessed
very frequently and would like them to reside on one of the SSDs that we
have. Does OpenStack Swift internally provide this option? If not, how does
Swift accommodate such SLA-bound requirements without violating the concept
of partitions?

It would be of great help if you could explain the internal working of
Swift with respect to this scenario.

Regards,
Sai Vishwas Padigi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From me at not.mn  Thu Aug 29 17:02:19 2019
From: me at not.mn (John Dickinson)
Date: Thu, 29 Aug 2019 10:02:19 -0700
Subject: [Openstack] [Swift] : Swift file-storage allocation policy
In-Reply-To: 
References: 
Message-ID: <12EE711E-E7B1-49BB-92A4-544B16562F08@not.mn>

answers inline

On 29 Aug 2019, at 9:45, Sai Vishwas wrote:

> Hi all,
>
> I have the following questions regarding the file allocation policy in
> OpenStack Swift:
>
> 1. Are the uploaded files/objects mapped to partitions solely based on
> the name by hashing? Is there any way that I could force a particular
> file to reside on a particular disk?

Yes, objects are mapped to disks based on the hash of the object name.
There is no way to force an object to reside on a particular disk.

> 2. How are the partitions mapped to disks? Is there any way that I could
> force a particular partition to reside on a particular disk?

The mapping is done based on what we call the ring. For a (very deep)
discussion of how it works, see the docs at
https://docs.openstack.org/swift/latest/overview_ring.html

> 3. Are the sizes of all partitions the same? (I am aware that a partition
> is a virtual concept, but I am interested in whether a limit is placed on
> how many files are assigned to a particular partition.)

Partitions don't have a "size" per se. A partition is simply the
high-order bits of the result of a hash function. There is no limit to how
many objects are assigned to a disk. The total number of partitions is
based on the ring's "part power" that is initially set at ring creation
time. The value is chosen based on the number of drives that are in the
ring.

> I am asking the above questions as I have a few files which are accessed
> very frequently and would like them to reside on one of the SSDs that we
> have. Does OpenStack Swift internally provide this option? If not, how
> does Swift accommodate such SLA-bound requirements without violating the
> concept of partitions?

In general, operators make this work by using a separate http cache system
in front of the swift cluster to store very hot content.
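To make the "high-order bits" point concrete, here is a rough Python sketch
of the name-to-partition mapping (simplified - real Swift also mixes
per-cluster secret prefix/suffix values into the hash):

  import hashlib

  part_power = 10  # fixed when the ring is created
  path = b"/AUTH_test/photos/cat.jpg"
  digest = hashlib.md5(path).digest()
  # take the top 32 bits of the digest, keep only the highest part_power bits
  partition = int.from_bytes(digest[:4], "big") >> (32 - part_power)
  print(partition)  # an integer in the range [0, 2**part_power)

Since the partition is pure math over the name, there is no per-object
placement knob. If certain objects really must live on SSDs, the in-cluster
mechanism is a storage policy whose ring contains only SSD devices; a
container created with an X-Storage-Policy header pointing at that policy
keeps all of its objects on those drives.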
> It would be of great help if you could explain the internal working of
> Swift with respect to this scenario.

It's a complicated topic, but starting with the doc I linked above will
give you a very good start on understanding how it works.

> Regards,
> Sai Vishwas Padigi
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 850 bytes
Desc: OpenPGP digital signature
URL: 

From eli at ChinaBuckets.com  Fri Aug 30 00:47:25 2019
From: eli at ChinaBuckets.com (Eliza)
Date: Fri, 30 Aug 2019 08:47:25 +0800
Subject: [all][tc] Hear ye! Hear ye! Our Newest Release Name - OpenStack
 Ussuri
In-Reply-To: 
References: 
Message-ID: <132a10ef-764f-4564-ecbc-23534df359cd@ChinaBuckets.com>

Rico Lin wrote:
>
> It's my honor to announce the OpenStack U release name: "Ussuri" (named
> after the Ussuri River).

That's a cool name. Thanks for your hard work.

From miguel at mlavalle.com  Fri Aug 30 00:50:47 2019
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Thu, 29 Aug 2019 19:50:47 -0500
Subject: [openstack-dev] [neutron] Cancelling Neutron Drivers meeting on
 August 30th
Message-ID: 

Dear Neutrinos,

It turns out that this week we still have 3 out of 5 members of the drivers
team off on vacation, so we won't have quorum for the meeting again. As a
consequence, I am cancelling this week once more. We will resume on
September 9th at the usual time.

Best regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dangtrinhnt at gmail.com  Fri Aug 30 01:19:36 2019
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Fri, 30 Aug 2019 10:19:36 +0900
Subject: [all][tc] Hear ye! Hear ye! Our Newest Release Name - OpenStack
 Ussuri
In-Reply-To: 
References: 
Message-ID: 

Awesome!!! Thanks to everyone who was involved in this naming process.

On Fri, Aug 30, 2019 at 2:08 AM Rico Lin wrote:

> Dear all OpenStackers,
>
> It's my honor to announce the OpenStack U release name: "Ussuri" (named
> after the Ussuri River).
>
> Ussuri was selected as the top option in the community poll and has also
> been lawyer-approved, so it's official.
>
> Thanks again to everyone who joined the U release naming process.
>
> --
> May The Force of OpenStack Be With You,
> Rico Lin
> irc: ricolin
>

-- 
Trinh Nguyen
www.edlab.xyz
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dangtrinhnt at gmail.com  Fri Aug 30 01:40:17 2019
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Fri, 30 Aug 2019 10:40:17 +0900
Subject: [all][election][TC] Candidacy for TC
In-Reply-To: 
References: 
Message-ID: 

Awesome, Kendall. I personally appreciate all the things you have done for
the community. Moreover, new blood on the TC would be great to move the
community forward.

Bests,

On Fri, Aug 30, 2019 at 1:46 AM Kendall Nelson wrote:

> Hello Everyone :)
>
> I am declaring my candidacy for the TC election.
>
> Roles like PTL and TC member are incredibly important when it comes to
> enabling others’ success and removing roadblocks that stand between
> contributors (new or established) and the work they want to do. Much of
> the work I have done over the years has had this focus, and as an elected
> member of the TC I think I could have an even larger impact.
>
> A little background on me: I have been working on OpenStack since 2015
> (Liberty release).
I started with getting involved organizing and mentoring > in the Women of OpenStack’s mentoring programs and developing Cinder before > branching out to os-brick and on to other roles. Since then, I have been > involved in many different projects and efforts focused on supporting the > larger community. > > Highlights: > > > - > > Onboarded over 300 global contributors as instructor at OpenStack > Upstream Institute (OUI) building our contributor community in places like > Brazil, Korea, and Vietnam[1] > - > > Developed the curriculum of OUI, including the Contributor Guide[2]. > - > > Evolved OpenStack’s various mentoring programs[3] and actively > participated in them, including our community’s involvement in the > Outreachy Program[4]. > - > > Established the First Contact SIG[5]. > - > > Coordinated the community’s ongoing migration to StoryBoard from > Launchpad > - > > Presided over 5 releases of (Pike-Train) technical elections (both PTL > and TC) as an election official > - > > Served on the Release Management, Cinder, & Infrastructure > (StoryBoard) teams > - > > Helped organize Forums, PTGs, and other in-person gatherings > - > > Collaborated with the k8s community to improve both our community’s > various onboarding programs > > > My experience teaching Upstream Institute and running the First Contact > SIG has helped me build skills that will be beneficial as a TC member, such > as making connections between new contributors and existing projects or > efforts. This ability to make connections and start conversations will > support our community’s cross project efforts as they continue to be a > challenge that I will help the TC overcome. > > My previous work has afforded me the opportunity to interact with almost > every OpenStack project and understand how they operate differently from > one another. Each project has its own culture; understanding that and how > to engage with each of them is incredibly important for the Technical > Committee to achieve selected community goals and other cross project > efforts. > > > As a member of the TC, I will seek to lower the barrier to entry to roles > like contributor or even PTL. There is a lot of duplicate and stale > information for setting up the accounts and tools needed to get started > which should be cleaned up. Individual project contributor > documentation[6][7] doesn’t exist for all the projects currently and would > be valuable. Individual project PTL guides (similar to the Release > Management process doc[8]) would also benefit our community as it would > smooth the transition from one PTL to another and minimize the loss of > tribal knowledge, especially when these transitions happen in the middle of > a cycle. I’ve proposed this contributor documentation focus as a community > goal for the U release[9]. Anything we can do to make a cohesive path for > new community members to work their way to roles like PTL will ensure the > health and success of our future. > > I love this community and all the awesome people in it, serving on the TC > would be a great experience and an honor. 
>
> Thank you,
>
> Kendall
>
> Review history: https://review.opendev.org/#/q/reviewer:16708,n,z
>
> Commit history: https://review.openstack.org/#/q/owner:16708,n,z
>
> Foundation Profile:
> https://www.openstack.org/community/members/profile/35859/
>
> Freenode: diablo_rojo
>
> [1] https://docs.openstack.org/upstream-training/
> [2] https://docs.openstack.org/contributors/
> [3] https://wiki.openstack.org/wiki/Mentoring
> [4] https://www.outreachy.org/
> [5] https://wiki.openstack.org/wiki/First_Contact_SIG
> [6] https://docs.openstack.org/swift/latest/development_guidelines.html
> [7] https://docs.openstack.org/nova/latest/contributor/index.html
> [8] https://releases.openstack.org/reference/process.html
> [9] https://etherpad.openstack.org/p/PVG-u-series-goals
>

-- 
Trinh Nguyen
www.edlab.xyz
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hongbin.lu at huawei.com  Fri Aug 30 01:45:21 2019
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Fri, 30 Aug 2019 01:45:21 +0000
Subject: [neutron][ptl][elections] Non candidacy for U cycle
In-Reply-To: 
References: 
Message-ID: <0957CD8F4B55C0418161614FEC580D6B30BF1F07@yyzeml704-chm.china.huawei.com>

Miguel,

Thanks for your leadership. You are the most helpful and competent PTL I
have ever seen. It is a pleasure to work with you in Neutron, and I look
forward to continuing to work with you in the future.

Best regards,
Hongbin

From: Miguel Lavalle [mailto:miguel at mlavalle.com]
Sent: August 29, 2019 11:41 AM
To: openstack-discuss
Subject: [neutron][ptl][elections] Non candidacy for U cycle

Dear Neutron team,

It has been an honor to be your PTL for the past four cycles. I have really
enjoyed the ride and am thankful for all the support and hard work that you
all have provided over these past two years. As I said in my last
self-nomination
(http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html),
I am of the opinion that it is healthy to transfer periodically the baton
to a new leader with fresh ideas and energy. One of the focus areas of my
PTL tenure has been to keep the team strong by actively helping community
members to become contributors and core reviewers. As a consequence, I know
for a fact that we have a deep bench of potential new PTLs ready to step up
to the plate.

As for me, I am not going anywhere. I am very lucky that my management at
Verizon Media supports my continued active involvement with the community
at large and the Neutron family of projects in particular. So I look
forward to help the team in the upcoming U cycle and beyond.

Thank you soooo much!

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From i at liuyulong.me  Fri Aug 30 02:49:28 2019
From: i at liuyulong.me (LIU Yulong)
Date: Fri, 30 Aug 2019 10:49:28 +0800
Subject: [neutron][ptl][elections] Non candidacy for U cycle
In-Reply-To: 
References: 
Message-ID: 

Miguel,

Thank you for your outstanding contribution to the community and your
leadership of Neutron. It is soooo great to see that you will continue to
be involved in upstream work. Some of the valuable comments you gave me on
contributing and on the neutron project have benefited me a lot. Thank you.

LIU Yulong

------------------ Original ------------------
From: "Miguel Lavalle";
Date: Thu, Aug 29, 2019 11:40 PM
To: "openstack-discuss";
Subject: [neutron][ptl][elections] Non candidacy for U cycle

Dear Neutron team,

It has been an honor to be your PTL for the past four cycles.
I have really enjoyed the ride and am thankful for all the support and hard
work that you all have provided over these past two years. As I said in my
last self-nomination
(http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html),
I am of the opinion that it is healthy to transfer periodically the baton
to a new leader with fresh ideas and energy. One of the focus areas of my
PTL tenure has been to keep the team strong by actively helping community
members to become contributors and core reviewers. As a consequence, I know
for a fact that we have a deep bench of potential new PTLs ready to step up
to the plate.

As for me, I am not going anywhere. I am very lucky that my management at
Verizon Media supports my continued active involvement with the community
at large and the Neutron family of projects in particular. So I look
forward to help the team in the upcoming U cycle and beyond.

Thank you soooo much!

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From akekane at redhat.com  Fri Aug 30 06:11:38 2019
From: akekane at redhat.com (Abhishek Kekane)
Date: Fri, 30 Aug 2019 11:41:38 +0530
Subject: [Glance] PTL Candidacy
Message-ID: 

Hello everyone,

I would like to announce my candidacy for the role of PTL of Glance for the
Ussuri cycle.

I have been working closely with several previous PTLs, like Erno and
Brian, and I am hoping to give back to the community as a PTL while
following their excellent example.

I've been working primarily on Glance since Icehouse and work full-time on
Glance upstream. During that time I've served the community in the release
liaison role, as a Glance core reviewer, and as part of the Glance bugs
team. I have represented Glance in interactions with various teams, such as
cross-project interactions, the API workgroup, documentation, and OSIC.
Even though I work in the GMT+05:30 time zone, I am flexible enough to
catch up with people in US or European time zones.

I have seen lots of highs and lows of Glance and am proud that we have not
only survived but are also becoming stronger. During the Train cycle we
managed to release glance-store version 1.0, which supports multiple stores
configuration.

This is where I want to focus for the Ussuri cycle:

1. Multiple stores support

As glance-store v1.0 is out with multiple stores configuration support, I
want to make sure Glance is able to maintain its backward compatibility and
doesn't break any cross-project workflow. Most of the work was already done
during the Train cycle, but if any bug occurs it will be solved with high
priority. Multiple stores will also play a role in edge computing, so if
any enhancement is required it will be handled as a top priority.

2. Glance cluster awareness

With Interoperable Image Import and new edge use cases, Glance cluster
awareness [0] is a most important feature. Even though the basic
implementation started during the Train cycle, we want to use this in
caching operations, which will be carried out during this development
cycle.

3. Bridging the gap between Glance and Horizon

Glance has now added new features such as the new import workflow, import
plugins, hidden images, etc. Horizon is yet to implement these in the
dashboards. I am going to put some effort into communication with the
Horizon team and help them to understand/implement/bridge this gap.

4. Catching up with tempest

Similar to Horizon, tempest needs to be brought up to date with the new
image import workflow, hidden images, and multiple stores scenarios.
I would like to catch up with the tempest team and extend a hand to bridge
this gap.

5. Community priorities

Whatever community goals are put forward by the TC, as PTL of the Glance
project I will adopt them as priorities for the Ussuri cycle.

6. Other

As we are a very small team at the moment, I would like to put some effort
into attracting contributors, helping them to understand how Glance
functions, and encouraging them to contribute to Glance.

Looking at how Erno and Brian have steadied the ship in the past, I know
it's a big task, and with their help I will do my best not to fall short of
expectations.

Thank you for your consideration,

Abhishek Kekane

[0] https://review.opendev.org/#/c/664956/2/specs/train/approved/glance/cluster-awareness.rst
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From luka.peschke at objectif-libre.com  Fri Aug 30 07:32:54 2019
From: luka.peschke at objectif-libre.com (Luka Peschke)
Date: Fri, 30 Aug 2019 09:32:54 +0200
Subject: [cloudkitty] PTL candidacy
Message-ID: <28df8058.ANEAAEob8rMAAAAAAAAAAAQR_QkAAAAAZtYAAAAAAAzbjABdaNEn@mailjet.com>

Hello contributors, users, operators,

I would like to submit my candidacy as CloudKitty's PTL for the Ussuri
cycle. I've been contributing to the project for several releases and I've
been the PTL for the last two releases.

I'm very happy with the T cycle, as the goals we set ourselves have been
achieved: the planned features have been implemented, some features that
were planned for U might make it into T, and most importantly, our
community has grown.

If you want to have me as the project's PTL for the Ussuri cycle, I'd like
to keep focusing on the points we addressed during the last few cycles:

* Help CloudKitty's community to grow. This is a constant effort, and it is
  starting to pay off: people are gaining interest in CloudKitty, more and
  more questions and suggestions are being asked/made on IRC, and some
  external contributors are proposing patches. That's very rewarding for
  our small community, and we intend to keep it going!

* Improving the documentation: A lot has been done during the last cycles,
  but with the growth of the v2 API, we need to keep putting effort into
  this; more specifically into tutorials, guides and into the developer
  documentation.

I'd also like to finally work on the new rating module. This has been
planned and requested for too long, and we should finally work on it
together.

I will use this opportunity to thank everybody who's been involved in
CloudKitty: developers, reviewers, users... The project lives thanks to
you!

Thank you for your attention,

Luka Peschke

From sbauza at redhat.com  Fri Aug 30 08:06:18 2019
From: sbauza at redhat.com (Sylvain Bauza)
Date: Fri, 30 Aug 2019 10:06:18 +0200
Subject: [nova] Status of the bp support-move-ops-with-qos-ports
In-Reply-To: <3b96b3fe-1723-f748-5b86-d52bc99247f8@gmail.com>
References: <1567093554.14852.1@smtp.office365.com>
 <3b96b3fe-1723-f748-5b86-d52bc99247f8@gmail.com>
Message-ID: 

On Thu, Aug 29, 2019 at 6:15 PM Matt Riedemann wrote:

> On 8/29/2019 10:45 AM, Balázs Gibizer wrote:
> > As feature freeze is closing in I try to raise awareness of the
> > implementation status of the support-move-ops-with-qos-ports
> > <https://review.opendev.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/support-move-ops-with-qos-ports> bp
> > [1].
>
> Thanks for this since I know you've been pushing patches but I haven't
> been able to get myself past the hump that is [3] yet.
>
>
> > The original goal of the bp was to support every move operation (cold
> > migrate, resize, live-migrate, evacuate, unshelve). Today I have ready
> > and tested (unit + functional) patches proposed for cold migrate and
> > resize [2]. I don't think every move operation will be ready and merged
> > in Train, but I still hope that there is a chance for cold migrate and
> > resize to land. Sure, I will continue working on supporting the
> > operations that miss Train in the U release.
>
> This is probably OK since we agreed at the PTG that this blueprint was
> essentially going to be closing gaps in functionality introduced in
> Stein but not adding a new microversion, correct? If so, then doing it
> piece-meal seems OK to me.
>

Yeah, and it's already pretty clear about which specific operations can be
done. I'm OK with having only a few operations done by Train given the API
and the docs correctly explain which ones.

> > One possible complication in the current patch series is that it
> > proposes the necessary RPC changes for _every_ move operation [3]. This
> > might not be what we want to merge in Train if only cold migrate and
> > resize support will fit. So I see two possible ways forward:
> > a) Others also feel that cold migrate and resize will fit into Train,
> > and then I will quickly change [3] to only modify those RPCs.
> > b) We agree to postpone the whole series to U. Then I will not spend too
> > much time on reworking [3] in the Train timeframe.
>
> Or (c) we just land that change. I was meaning to review that today
> anyway so this email is coincidental for me. I think passing the request
> spec down to the compute methods is fine and something we'll want to do
> eventually anyway for this series (even if the live migration stuff is
> deferred to U).
>

+1
FWIW, I have wanted to pass the RequestSpec down to the compute for 3
years, but I haven't had time to do it yet.
Cool with me.

> > A connected topic: I proposed patches [4] to add cold migrate and resize
> > tempest coverage to the nova-grenade-multinode (renamed from
> > nova-grenade-live-migrate) job as Matt pointed out in [3] that we don't
> > have such coverage and that my original change in [3] would have broken
> > a mixed-compute-version deployment.
>
> I'm just waiting on CI results for those but since I've already been
> through them in some form and they are pretty small I think there is a
> good chance of landing these soon.
>
> --
>
> Thanks,
>
> Matt
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Fri Aug 30 08:07:14 2019
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 30 Aug 2019 10:07:14 +0200
Subject: [neutron][qa] Databases used on CI jobs
Message-ID: 

Hi,

Recently in Neutron we merged patch [1], which caused an error in
Kolla-ansible [2]. It seems that the issue happens on mariadb 10.1, which
is used in the broken Kolla-ansible job, so I started checking what
database is used in Neutron's jobs, as we didn't find any issue with this
patch. And it seems that in all scenario/api jobs we have mysql 5.7
installed - see [3] for example.

And the question is - should we maybe switch somehow to Mariadb instead of
Mysql? Should it be done globally, or is it maybe only an "issue" in
Neutron's CI jobs? Or maybe we should have jobs with both Mariadb and Mysql
databases?
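(For anyone who wants to check a suspect migration against both databases
locally before a gate job does, something like the following works - a
rough sketch, with illustrative credentials:

  $ docker run -d --name mariadb101 -e MYSQL_ROOT_PASSWORD=secret \
        -p 3306:3306 mariadb:10.1
  $ mysql -h 127.0.0.1 -uroot -psecret -e "CREATE DATABASE neutron"
  $ neutron-db-manage --database-connection \
        mysql+pymysql://root:secret@127.0.0.1/neutron upgrade heads

Repeating the same steps against a mysql:5.7 container shows whether a
failure is MariaDB-specific.)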
[1] https://review.opendev.org/#/c/677221 [2] https://bugs.launchpad.net/kolla-ansible/+bug/1841907 [3] https://2f275f27875ae066d2b6-00a9605a715176e6bd23401c23e881d7.ssl.cf2.rackcdn.com/650846/24/check/tempest-integrated-networking/3fffbd5/controller/logs/dpkg-l.txt.gz — Slawek Kaplonski Senior software engineer Red Hat From jpena at redhat.com Fri Aug 30 08:25:23 2019 From: jpena at redhat.com (Javier Pena) Date: Fri, 30 Aug 2019 04:25:23 -0400 (EDT) Subject: [Packaging-Rpm] PTL Candidacy In-Reply-To: <970717338.12727069.1567153513243.JavaMail.zimbra@redhat.com> Message-ID: <963219563.12727085.1567153523325.JavaMail.zimbra@redhat.com> Hello everyone, I would like to announce my candidacy to be the PTL for the Packaging Rpm project during the Ussuri development cycle. For the Ussuri cycle, my focus would be on: * Making sure all packages have been migrated to Python 3, so we can finally get rid of Python 2. * Transition the RDO CI and packages to CentOS 8, once it is released, and remove the now obsolete Fedora-based CI. That, of course, would be in addition to our common goals: expanding our contributor base, improving collaboration between RPM distributions, and keeping a high quality set of packages. Thanks, Javier From smooney at redhat.com Fri Aug 30 08:28:22 2019 From: smooney at redhat.com (Sean Mooney) Date: Fri, 30 Aug 2019 09:28:22 +0100 Subject: [nova] Status of the bp support-move-ops-with-qos-ports In-Reply-To: References: <1567093554.14852.1@smtp.office365.com> <3b96b3fe-1723-f748-5b86-d52bc99247f8@gmail.com> Message-ID: <0d26942ed15700e5308989d8a98e59d845f096dc.camel@redhat.com> On Fri, 2019-08-30 at 10:06 +0200, Sylvain Bauza wrote: > On Thu, Aug 29, 2019 at 6:15 PM Matt Riedemann wrote: > > > On 8/29/2019 10:45 AM, Balázs Gibizer wrote: > > > As feature freeze is closing in I try to raise awareness of the > > > implementation status of the support-move-ops-with-qos-ports > > > < > > > > https://review.opendev.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/support-move-ops-with-qos-ports> > > ; bp > > > > > [1]. > > > > Thanks for this since I know you've been pushing patches but I haven't > > been able to get myself past the hump that is [3] yet. > > > > > > > > The original goal of the bp was to support every move operations (cold > > > migrate, resize, live-migrate, evacuate, unshelve). Today I have ready > > > and tested (unit + functional) patches proposed for cold migrate and > > > resize [2]. I don't think every move operations will be ready and merged > > > in Train but I still hope that there is a chance for cold migrate and > > > resize to land. Sure I will continue working on supporting the > > > operations that miss the Train in the U release. > > > > This is probably OK since we agreed at the PTG that this blueprint was > > essentially going to be closing gaps in functionality introduced in > > Stein but not adding a new microversion, correct? If so, then doing it > > piece-meal seems OK to me. > > > > > > Yeah, and it's already pretty clear about which specific operations can be > done. > I'm OK with having only a few operations done by Train given the API and > the docs are correctly explaining which ones. honestly once we have a functional move operation that make it much simpler to use this feature. so if we only land resize/cold migration that would still be a good step forward. > > > > > > One possible complication in the current patch series is that it > > > proposes the necessary RPC changes for _every_ move operations [3]. 
This > > > might not be what we want to merge in Train if only cold migrate and > > > resize support fill fit. So I see two possible way forwards: > > > a) Others also feel that cold migrate and resize will fit into Train, > > > and then I will quickly change [3] to only modify those RPCs. > > > b) We agree to postpone the whole series to U. Then I will not spend too > > > much time on reworking [3] in the Train timeframe. > > > > Or (c) we just land that change. I was meaning to review that today > > anyway so this email is coincidental for me. I think passing the request > > spec down to the compute methods is fine and something we'll want to do > > eventually anyway for this series (even if the live migration stuff is > > deferred to U). > > > > > > +1 > FWIW, since 3 years I would like to pass the RequestSpec down to the > compute but I hadn't time to do it yet. > Cool with me. i have had cases in the past where it would have been nice to have but i think this is a case where it makes sense as any other approch would be less clean. > > > > > > > > > A connected topic: I proposed patches [4] to add cold migrate and resize > > > tempest coverage to the nova-grenade-multinode (renamed from > > > nova-grenade-live-migrate) job as Matt pointed out in [3] that we don't > > > have such coverage and that my original change in [3] would have broken > > > a mixed-compute-version deployment. > > > > I'm just waiting on CI results for those but since I've already been > > through them in some form and they are pretty small I think there is a > > good chance of landing these soon. > > > > -- > > > > Thanks, > > > > Matt > > > > From grant at civo.com Fri Aug 30 08:30:16 2019 From: grant at civo.com (Grant Morley) Date: Fri, 30 Aug 2019 09:30:16 +0100 Subject: Metadata service caching old nameservers? In-Reply-To: <04cddfc2-6ba0-be2e-ead1-eb78ad3ea891@gmail.com> References: <5e8e7fda-24ad-b4f6-99a0-77d6f0664be3@civo.com> <04cddfc2-6ba0-be2e-ead1-eb78ad3ea891@gmail.com> Message-ID: <37affb04-f316-17ac-273f-d1a377d928fe@civo.com> Hi Brian, The lease does have the two Google nameservers in it, however a new VM is still somehow getting the old values as well. That is what is confusing me.  If it helps we are using Queens still currently. Is there any database entry that I can change? Many thanks for your help. Regards, On 29/08/2019 20:32, Brian Haley wrote: > > > On 8/29/19 6:26 AM, Grant Morley wrote: >> Hi All, >> >> We have a bit of a weird issue with resolv.conf for instances. We >> have changed our subnets in neutron to use googles nameservers which >> is fine. However it seems that when instances are launched they are >> still getting the old nameserver settings as well as the new ones. If >> I look at the metadata networking service it returns the old >> nameservers as well as the new ones below: >> >> curl -i http://169.254.169.254/openstack/2017-02-22/network_data.json >> HTTP/1.1 200 OK >> Content-Type: text/plain; charset=UTF-8 >> Content-Length: 753 >> Date: Thu, 29 Aug 2019 09:40:03 GMT >> >> {"services": [{"type": "dns", "address": "178.18.121.70"}, {"type": >> "dns", "address": "178.18.121.78"}, {"type": "dns", "address": >> "8.8.8.8"}, {"type": "dns", "address": "8.8.4.4"}] >> >> In our neutron dhcp-agent.ini file we have the correct dnsmasq >> nameservers set: >> >> dnsmasq_dns_servers = 8.8.8.8, 8.8.4.4 >> >> Are there any database tables I can change or clear up to ensure the >> old nameservers no longer get set? 
I can't seem to find any reference >> in our config any more of the old nameservers, so I assume one of the >> services is still setting them but I can't figure out what is. > > So does the DHCP response on lease renewal just have the two Google > nameservers in it?  If so, does a new VM booted on the subnet have the > correct metadata values reported?  Just trying to narrow-down if this > is in the neutron code or not. > > -Brian -- Grant Morley Cloud Lead, Civo Ltd www.civo.com | Signup for an account! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Aug 30 08:31:58 2019 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 30 Aug 2019 09:31:58 +0100 Subject: [neutron][qa] Databases used on CI jobs In-Reply-To: References: Message-ID: On Fri, 30 Aug 2019 at 09:08, Slawek Kaplonski wrote: > > Hi, > > Recently in Neutron we merged patch [1] which caused error in Kolla-ansible [2]. > It seems that issue happens on mariadb 10.1 which is used in broken Kolla-ansible job so I started checking what database is used on Neutron’s jobs as we didn’t found any issue with this patch. And it seems that in all scenario/api jobs we have installed mysql 5.7 - see [3] for example. Thanks for bringing this up. Just to clarify - mariadb 10.1 is provided by Ubuntu bionic, so I'd expect others to hit this. > > And the question is - should we maybe switch somehow to Mariadb instead of Mysql? Should it be done globally or maybe it’s only “issue” in Neutron’s CI jobs? > Or maybe we should have jobs with both Mariadb and Mysql databases? > > [1] https://review.opendev.org/#/c/677221 > [2] https://bugs.launchpad.net/kolla-ansible/+bug/1841907 > [3] https://2f275f27875ae066d2b6-00a9605a715176e6bd23401c23e881d7.ssl.cf2.rackcdn.com/650846/24/check/tempest-integrated-networking/3fffbd5/controller/logs/dpkg-l.txt.gz > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From skaplons at redhat.com Fri Aug 30 08:36:35 2019 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 30 Aug 2019 10:36:35 +0200 Subject: [neutron][qa] Databases used on CI jobs In-Reply-To: References: Message-ID: <0DAF93AE-6188-41E6-B156-221663E94941@redhat.com> Hi, > On 30 Aug 2019, at 10:31, Mark Goddard wrote: > > On Fri, 30 Aug 2019 at 09:08, Slawek Kaplonski wrote: >> >> Hi, >> >> Recently in Neutron we merged patch [1] which caused error in Kolla-ansible [2]. >> It seems that issue happens on mariadb 10.1 which is used in broken Kolla-ansible job so I started checking what database is used on Neutron’s jobs as we didn’t found any issue with this patch. And it seems that in all scenario/api jobs we have installed mysql 5.7 - see [3] for example. > > Thanks for bringing this up. Just to clarify - mariadb 10.1 is > provided by Ubuntu bionic, so I'd expect others to hit this. In Neutron we are running almost all jobs on Ubuntu Bionic (one is on Fedora and 2 on Centos) and we have Mysql 5.7 installed there instead of Mariadb. > >> >> And the question is - should we maybe switch somehow to Mariadb instead of Mysql? Should it be done globally or maybe it’s only “issue” in Neutron’s CI jobs? >> Or maybe we should have jobs with both Mariadb and Mysql databases? 
>> >> [1] https://review.opendev.org/#/c/677221 >> [2] https://bugs.launchpad.net/kolla-ansible/+bug/1841907 >> [3] https://2f275f27875ae066d2b6-00a9605a715176e6bd23401c23e881d7.ssl.cf2.rackcdn.com/650846/24/check/tempest-integrated-networking/3fffbd5/controller/logs/dpkg-l.txt.gz >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> — Slawek Kaplonski Senior software engineer Red Hat From jakub.sliva at ultimum.io Fri Aug 30 09:02:25 2019 From: jakub.sliva at ultimum.io (=?UTF-8?B?SmFrdWIgU2zDrXZh?=) Date: Fri, 30 Aug 2019 11:02:25 +0200 Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster In-Reply-To: References: Message-ID: pá 23. 8. 2019 v 20:59 odesílatel napsal: > On 2019-08-24 02:58, Mohammed Naser wrote: > > On Thu, Aug 22, 2019 at 11:46 PM Feilong Wang > > wrote: > >> > >> Hi all, > >> > >> At this moment, Magnum is still using Fedora Atomic 27 as the default > >> image in devstack. But you can definitely use Fedora Atomic 29 and it > >> works fine. But you may run into a performance issue when booting > >> Fedora Atomic 29 if your compute host doesn't have enough entropy. > >> There are two steps you need for that case: > >> > >> 1. Adding property hw_rng_model='virtio' to Fedora Atomic 29 image > >> > >> 2. Adding property hw_rng:allowed='True' to flavor, and we also need > >> hw_rng:rate_bytes=4096 and hw_rng:rate_period=1 to get a reasonable > >> rate limit to avoid the VM drain the hypervisor. > >> > >> We are working on a patch for Magnum devstack to support FA29 out of > >> box. Meanwhile, we're starting to test Fedora CoreOS 30. Please popup > >> in #openstack-containers channel if you have any question. Cheers. > > > > Neat! I think it's important for us to get off Fedora Atomic given > > that RedHat seems to be ending it soon. Is the plan to move towards > > Fedora CoreOS 30 or has there been consideration of using something > > like an Ubuntu-base (and leveraging something like kubeadm+ansible to > > drive the deployment?)- > > > > Personally, I would like we can stay at Fedora Atomic/CoreOS since > Magnum has already been benefited from the container-based readonly > operating system. But we did have the discussion about using > kubeadm+ansible, however, as you can see, it's a quite big refactoring, > I'm not sure if we can get it done with current limited resources. > > According to some notes from Magnum Train PTG meeting there is plan to move all drivers out of tree. Despite there is no time schedule for that we started to work on Ubuntu K8s driver which is completely independent to openstack/magnum project and which is based on kubeadm, Heat SW Deployments and external Openstack cloud provider. So we do not think there is much refactoring needed. At the moment it is under heavy development, however, at some point we plan to open the source and we will certainly appreciate any help. > >> > >> > >> -- > >> Cheers & Best regards, > >> Feilong Wang (王飞龙) > >> > -------------------------------------------------------------------------- > >> Head of R&D > >> Catalyst Cloud - Cloud Native New Zealand > >> Tel: +64-48032246 > >> Email: flwang at catalyst.net.nz > >> Level 6, Catalyst House, 150 Willis Street, Wellington > >> > -------------------------------------------------------------------------- > > Jakub Slíva Ultimum Technologies s.r.o. 
Na Poříčí 1047/26, 11000 Praha 1 Czech Republic jakub.sliva at ultimum.io *https://ultimum.io * LinkedIn | Twitter | Facebook -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianyrchoi at gmail.com Fri Aug 30 09:16:11 2019 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Fri, 30 Aug 2019 18:16:11 +0900 Subject: [User-committee] UC candidacy In-Reply-To: References: Message-ID: Hello, As an UC election official, I confirm that the candidate is eligible for UC election within nomination period. With many thanks, /Ian On Friday, August 30, 2019, Jaesuk Ahn wrote: > > Hi, > > This is Jaesuk Ahn from OpenStack Korea User Group. > I would like announce my candidacy to serve as the User Committee. > > As a strong advocate of OpenStack community, I have been involved in > OpenStack > from 2010, founded OpenStack Korea Community in 2011, and been a > coordinator of > the Korea community since then. I have devoted myself to build an open and > healthy local community, in addition, I have always been trying to be a > good > arbitrator between local and global community to embrace a diversity > together. > > Here are articles about various events that I have been one of main > organizers > and program chair of the events [1], [2], [3]. Please see the presentation > [4] > I did with two other community leaders on how we have built community in > Korea. > > From a technical point of view, I have been leading OpenStack dev team > since > 2010. I have switched companies during those years but was able to focus > on one > thing: deployment automation & lifecycle management of OpenStack. > OpenStack-Helm, > and Airship are the most recent efforts on this line of work in my career. > Here is my linkedin profile stating my work during the last 10 years [5]. > > I am currently leading a development team to work on > openstack-on-kubernetes in > SK Telecom. I have made an architectural decision to leverage kubernetes > for > OpenStack lifecycle management in 2016, that this decision eventually led > us to > collaborate with AT&T team. As a result of community collaboration with > AT&T, > I am happy to see both OpenStack-Helm project and Airship project active > in > the community. > > if I am chosen by the community, I am fully committed to hear various > voices > from many community members, especially from community members who has > difficulties in language and time barrier. I will do my best to > collaborate with others to > make OpenStack community more diverse and active. > > [1] https://superuser.openstack.org/articles/building-new- > foundations-openinfra-days-korea/ > [2] https://superuser.openstack.org/articles/openstack-korea-days-2017/ > [3] https://superuser.openstack.org/articles/openstack-day-in- > korea-2015-to-infinity-and-beyond/ > [4] https://www.slideshare.net/openstack_kr/boston-summit- > what-makes-it-possible-to-run-openstack-community-for-three-generations > [5] https://www.linkedin.com/in/jsahn/ > > > Thanks > > > *Jaesuk Ahn*, Ph.D. > Cloud Labs, SK Telecom > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Aug 30 09:22:55 2019 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 30 Aug 2019 10:22:55 +0100 Subject: [kolla] PTL candidacy Message-ID: Hi, I'd like to nominate myself to serve as the Kolla PTL for the Ussuri cycle. I have been PTL for the Train cycle, and would like the opportunity to continue to lead the team. 
In my Train nomination [1] I proposed a few things, including: * help those on the periphery of the project get more involved * try a new meeting time * hold a virtual PTG * adopt some of ironic's planning and tracking processes * continue improving CI testing During the cycle we have helped a number of new contributors get involved, and have taken on a new core contributor who is already one of the most active members. We polled for a new meeting time but found we were already optimal. The virtual PTG helped us to prioritise our efforts, and I expect we'll hold another this cycle. We now use a whiteboard [2] similar to ironic's, and went through a process of prioritisation of features for this cycle which we continue to use to guide our efforts. Finally, CI test coverage continues to improve - notably the new multinode upgrade job caught a pretty nasty MariaDB upgrade bug that we might not otherwise have spotted before release. If elected this cycle I'd like to continue to reflect on our processes, and encourage increasing test coverage for further stability. We recently added Kayobe [3] as a project deliverable, and I intend to ensure that these two overlapping communities come together effectively to make the most effective use of our combined resources. Thanks for reading, Mark Goddard (mgoddard) [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003617.html [2] https://etherpad.openstack.org/p/KollaWhiteBoard [3] https://docs.openstack.org/kayobe From bcafarel at redhat.com Fri Aug 30 09:29:40 2019 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Fri, 30 Aug 2019 11:29:40 +0200 Subject: [neutron][ptl][elections] Non candidacy for U cycle In-Reply-To: References: Message-ID: On Thu, 29 Aug 2019 at 17:43, Miguel Lavalle wrote: > Dear Neutron team, > > It has been an honor to be your PTL for the past four cycles. I have > really enjoyed the ride and am thankful for all the support and hard work > that you all have provided over these past two years. As I said in my last > self-nomination ( > http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html), > I am of the opinion that it is healthy to transfer periodically the baton > to a new leader with fresh ideas and energy. One of the focus areas of my > PTL tenure has been to keep the team strong by actively helping community > members to become contributors and core reviewers. As a consequence, I > know for a fact that we have a deep bench of potential new PTLs ready to > step up to the plate. > > As for me, I am not going anywhere. I am very lucky that my management at > Verizon Media supports my continued active involvement with the community > at large and the Neutron family of projects in particular. So I look > forward to help the team in the upcoming U cycle and beyond. > > Thank you soooo much! > > Miguel > Joining the (well deserved) thank you wagon! Thanks Miguel for all your PTL work in these 4 cycles, driving and renewing the group. And looking forward to keep working with "former PTL" you :) -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Aug 30 09:50:53 2019 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 30 Aug 2019 10:50:53 +0100 (BST) Subject: [placement] update 19-34 Message-ID: HTML: https://anticdent.org/placement-update-19-34.html Welcome to placement update 19-34. Feature Freeze is the week of September 9th. 
We have features in progress in placement itself (consumer types) and osc-placement that would be great to land.

# Most Important

In addition to the features above, we really need to get started on tuning up the documentation so that `same_subtree` and friends can be used effectively.

It is also time to start thinking about what features, if any, need to be pursued in Ussuri. If there are few, that ought to leave time and energy for getting the osc-placement plugin more up to date.

And, there are plenty of stories (see below) that need attention. Ideally we'd end every cycle with zero stories, including removing ones that no longer make sense.

# What's Changed

* Tetsuro has picked up the baton for performance and refactoring work and found [some](https://review.opendev.org/677120) [improvements](https://review.opendev.org/677209) that have merged. There's additional work in progress (noted below).

# Stories/Bugs

(Numbers in () are the change since the last pupdate.)

There are 25 (2) stories in [the placement group](https://storyboard.openstack.org/#!/project_group/placement). 0 (0) are [untagged](https://storyboard.openstack.org/#!/worklist/580). 5 (1) are [bugs](https://storyboard.openstack.org/#!/worklist/574). 4 (0) are [cleanups](https://storyboard.openstack.org/#!/worklist/575). 12 (1) are [rfes](https://storyboard.openstack.org/#!/worklist/594). 4 (0) are [docs](https://storyboard.openstack.org/#!/worklist/637).

If you're interested in helping out with placement, those stories are good places to look.

* Placement related nova [bugs not yet in progress](https://goo.gl/TgiPXb) on launchpad: 17 (-1).

* Placement related nova [in progress bugs](https://goo.gl/vzGGDQ) on launchpad: 6 (2).

# osc-placement

osc-placement is currently behind by 12 microversions.

* Add support for multiple member_of. There's been some useful discussion about how to achieve this, and a consensus has emerged on how to get the best results.

* Adds a new `--amend` option which can update resource provider inventory without requiring the user to pass a full replacement for inventory, and an `--aggregate` option to set inventory on all the providers in an aggregate. This has been broken up into three patches to help with review. This one is very close but needs review from more people than Matt.

# Main Themes

## Consumer Types

Adding a type to consumers will allow them to be grouped for various purposes, including quota accounting.

* A WIP, as microversion 1.37, has started. I picked this up yesterday and hope to have it finished next week, barring distractions. I figure having it in place for nova for Ussuri is a nice-to-have.

## Cleanup

Cleanup is an overarching theme related to improving documentation, performance and the maintainability of the code. The changes we are making this cycle are fairly complex to use and are fairly complex to write, so it is good that we're going to have plenty of time to clean and clarify all these things.

Performance related explorations continue:

* Refactor initialization of research context. This puts the code that might cause an exit earlier in the process so we can avoid useless work.

One outcome of the performance work needs to be something like a _Deployment Considerations_ document to help people choose how to tweak their placement deployment to match their needs. The simple answer is to use more web servers and more database servers, but that's often very wasteful.
Discussions about [using a different JSON serializer](http://lists.openstack.org/pipermail/openstack-discuss/2019-August/thread.html#8849) ended with a decision not to use `orjson`, because it presents some packaging and distribution issues that might be problematic. There's still an option to use one of the other alternatives, but that exploration has not started.

# Other Placement

Miscellaneous changes can be found in [the usual place](https://review.opendev.org/#/q/project:openstack/placement+status:open).

* Merge request log and request id middlewares is worth attention. It makes sure that _all_ log messages from a single request use a global and local request id.

There are two [os-traits changes](https://review.opendev.org/#/q/project:openstack/os-traits+status:open) being discussed. And zero [os-resource-classes changes](https://review.opendev.org/#/q/project:openstack/os-resource-classes+status:open).

# Other Service Users

New discoveries are added to the end. Merged stuff is removed. Anything that has had no activity in 4 weeks has been removed.

* Cyborg: Placement report
* helm: add placement chart
* libvirt: report pmem namespaces resources by provider tree
* Nova: Remove PlacementAPIConnectFailure handling from AggregateAPI
* Nova: WIP: Add a placement audit command
* Nova: libvirt: Start reporting PCPU inventory to placement A part of
* Nova: support move ops with qos ports
* Blazar: Create placement client for each request
* nova: Support filtering of hosts by forbidden aggregates
* blazar: Send global_request_id for tracing calls
* tempest: Add placement API methods for testing routed provider nets
* openstack-helm: Build placement in OSH-images
* Correct global_request_id sent to Placement
* Nova: cross cell resize
* Nova: Scheduler translate properties to traits
* Nova: single pass instance info fetch in host manager
* Zun: [WIP] Claim container allocation in placement
* Nova: using provider config file for custom resource providers

# End

☃

-- 
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent

From jean-philippe at evrard.me Fri Aug 30 10:43:23 2019
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Fri, 30 Aug 2019 12:43:23 +0200
Subject: [loci] Non-candidacy for U cycle
In-Reply-To: <7590360F-AE93-4236-B5F2-F9A1197333A9@openstack.org>
References: <7590360F-AE93-4236-B5F2-F9A1197333A9@openstack.org>
Message-ID: <8b40ae8aad50fde543a5e0b00eb1f0392306bf8d.camel@evrard.me>

On Thu, 2019-08-29 at 12:48 -0700, Chris Hoge wrote:
> Thank you to the Loci team for having me serve as PTL. With the
> project seeing heavier usage in production, I feel it's important to
> hand over leadership to the developers who have more day-to-day
> usage. There are still lots of great enhancements that can be made to
> the project, and I'm excited to see what comes next.
>
> -Chris

Hello Chris,

Thanks for your service! You've always been there when we needed you, and I will remember that :)

Regards,
JP

From michele at acksyn.org Fri Aug 30 12:28:50 2019
From: michele at acksyn.org (Michele Baldessari)
Date: Fri, 30 Aug 2019 14:28:50 +0200
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
Message-ID: <20190830122850.GA5248@holtby>

Hi all,

Damien (dciabrin on IRC) has always been very active in all HA things in TripleO and I think it is overdue for him to have core rights on this topic. So I'd like to propose to give him core permissions on any HA-related code in TripleO.

Please vote here and in a week or two we can then act on this.
Thanks,
-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D

From johfulto at redhat.com Fri Aug 30 12:33:38 2019
From: johfulto at redhat.com (John Fulton)
Date: Fri, 30 Aug 2019 08:33:38 -0400
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: <20190830122850.GA5248@holtby>
References: <20190830122850.GA5248@holtby>
Message-ID:

+1

On Fri, Aug 30, 2019 at 8:31 AM Michele Baldessari wrote:
>
> Hi all,
>
> Damien (dciabrin on IRC) has always been very active in all HA things in
> TripleO and I think it is overdue for him to have core rights on this
> topic. So I'd like to propose to give him core permissions on any
> HA-related code in TripleO.
>
> Please vote here and in a week or two we can then act on this.
>
> Thanks,
> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
>

From emilien at redhat.com Fri Aug 30 12:37:21 2019
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 30 Aug 2019 08:37:21 -0400
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: <20190830122850.GA5248@holtby>
References: <20190830122850.GA5248@holtby>
Message-ID:

+2

On Fri, Aug 30, 2019 at 8:36 AM Michele Baldessari wrote:
> Hi all,
>
> Damien (dciabrin on IRC) has always been very active in all HA things in
> TripleO and I think it is overdue for him to have core rights on this
> topic. So I'd like to propose to give him core permissions on any
> HA-related code in TripleO.
>
> Please vote here and in a week or two we can then act on this.
>
> Thanks,
> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
>
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marios at redhat.com Fri Aug 30 12:41:16 2019
From: marios at redhat.com (Marios Andreou)
Date: Fri, 30 Aug 2019 15:41:16 +0300
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: <20190830122850.GA5248@holtby>
References: <20190830122850.GA5248@holtby>
Message-ID:

On Fri, Aug 30, 2019 at 3:30 PM Michele Baldessari wrote:
> Hi all,
>
> Damien (dciabrin on IRC) has always been very active in all HA things in
> TripleO and I think it is overdue for him to have core rights on this
> topic. So I'd like to propose to give him core permissions on any
> HA-related code in TripleO.
>
> Please vote here and in a week or two we can then act on this.
>

+1 I thought he already was! Agreed, long overdue!

> Thanks,
> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From no-reply at openstack.org Fri Aug 30 13:35:01 2019
From: no-reply at openstack.org (no-reply at openstack.org)
Date: Fri, 30 Aug 2019 13:35:01 -0000
Subject: kayobe 6.0.0.0rc2 (stein)
Message-ID:

Hello everyone,

A new release candidate for kayobe for the end of the Stein cycle is available! You can find the source code tarball at:

    https://tarballs.openstack.org/kayobe/

Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Stein release. You are therefore strongly encouraged to test and validate this tarball!
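For example, a quick smoke test of the candidate might look like the following (illustrative commands only, a sketch rather than an official procedure - the exact tarball filename below is an assumption, so check the tarballs page for the real name):

    # create an isolated environment and install the release candidate
    $ python3 -m venv kayobe-rc
    $ source kayobe-rc/bin/activate
    $ pip install https://tarballs.openstack.org/kayobe/kayobe-6.0.0.0rc2.tar.gz
    # sanity-check that the CLI installed and runs
    $ kayobe --help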
Alternatively, you can directly test the stable/stein release branch at: https://opendev.org/openstack/kayobe/log/?h=stable/stein Release notes for kayobe can be found at: https://docs.openstack.org/releasenotes/kayobe/ From massimo.sgaravatto at gmail.com Fri Aug 30 13:34:52 2019 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 30 Aug 2019 15:34:52 +0200 Subject: [ops][glance] How to remove images stuck in 'deleted' Message-ID: Hi I have a few old images that for some reasons are stuck in the deleted state, and I would like to remove them Since a simple "openstack image delete " can't be used [*], is it safe to set images.deleted to '1' in the glance db and then remove the relevant file in the backend ? Or are there better/safer options ? I am now running OpenStack Rocky Thanks, Massimo [*] Failed to delete image with name or ID '70a784ec-b109-4d4a-8a63-7c58d8db866c': 400 Bad Request: Image status transition from deleted to deleted is not allowed (HTTP 400) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmendiza at redhat.com Fri Aug 30 13:35:13 2019 From: dmendiza at redhat.com (=?UTF-8?Q?Douglas_Mendiz=c3=a1bal?=) Date: Fri, 30 Aug 2019 08:35:13 -0500 Subject: [election][barbican][ptl] U Cycle Candidacy Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 I would like to announce my candidacy to continue to serve as Barbican PTL for the U cycle. My goals for this next cycle is to continue to improve the usability of the client libraries, especially getting feature parity between python-barbicanclient and openstacksdk.key_manager to support the Image Encryption cross-project effort. Thank you, Douglas Mendizábal -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEwcapj5oGTj2zd3XogB6WFOq/OrcFAl1pJhAACgkQgB6WFOq/ OreypA/+JweQuiTuM4TgcmngHdFWBWZeL2VQ9a6OgyfxwSBXV7QP859wbmLsc6UP bOZkRCNNq2/GLkx6fvQ8/4Q7NzoB5ZL3OSAJVkbo3Gdm8/eXiq+FHiy6JqX+2EjD 3GT8FUHDxI22zxsFnn9ILUGkreyJ2vxYFlIDhhRVrpJu40NCm8cm2FmVuFDmWbgk 3T73+Sch2GAkSuiB/ntlHlvzCd2yFX+8dLjtjeyRHL+uhA9Kx8Y75FMKF4bzlFl0 7WLc9fcQl7TSu3HlW0wFrC+EIIe9SJTvJF2f0UyuKstJLP1DdYyGNW+Qdv1Kcdwv 71jt8ZhjyD7OZ+WQ3kaYFcF33i2sTSpKWHZNISf3uB3dw1UL+tsu9x9xlcy+hfu3 AExE6kCmXinVNYf5i99MxNBbiqrnXJd5jSfLehHPj6dopUED253hqEu2ugO65aBM Nv1gORQPuO6iPq6l6YtHFj453OxBwFAvtTmPAKbd3S2N1USlik3x393t6y7ykhta X+EyC+Tci0cwX3JV1qkDSJvC1oPb0F8ynykrT3HrkCaDSqQ5xrScmMbXrlyHiB3J E99xzTKePz+xp7wY7hTFrSWdjFRJBlhgEgPzkxNCqblVkDkx3CvSdMfs3K75h0Og t4tckF82J5j+LeXn2W8nk6ZHGkKaFlC2cSWjCk5YWjBhridhVGs= =pfrQ -----END PGP SIGNATURE----- From jyotishri403 at gmail.com Fri Aug 30 13:44:27 2019 From: jyotishri403 at gmail.com (Jyoti Dahiwele) Date: Fri, 30 Aug 2019 19:14:27 +0530 Subject: Issue in cloudkitty Message-ID: Dear Team, Plz help in following issue. While collecting gnocchi aggregates I ran into 400 response from gnocchi due to lack of metrics at that particular point of time. I am unable to fill the metrics with zero values. The cloudkitty rated_data_frames table in the database shows __NO_DATA__ . Here below are the logs *cloudkitty-processor.log* *2019-01-11 17:06:56.334 7207 WARNING cloudkitty.orchestrator [-] [5793054cd0fe4a018e959eb9081442a8] Error while collecting metric volume.size: {u'cause': u'Metrics not found', u'detail': [u'volume.size']} (HTTP 400). 
Retrying on next collection cycle.: BadRequest: {u'cause': u'Metrics not found', u'detail': [u'volume.size']} (HTTP 400)* *2019-01-11 17:07:00.356 7207 WARNING cloudkitty.orchestrator [-] [2eea218eea984dd68f1378ea21c64b83] Error while collecting metric volume.size: {u'cause': u'Metrics not found', u'detail': [u'volume.size']} (HTTP 400). Retrying on next collection cycle.: BadRequest: {u'cause': u'Metrics not found', u'detail': [u'volume.size']} (HTTP 400)* *gnocchi-api.log* *10.20.0.100 - - [11/Jan/2019:17:07:04 +0000] "POST /v1/aggregates?details=False&groupby=id&groupby=project_id&start=2019-01-01T00%3A00%3A00&stop=2019-01-01T01%3A00%3A00 HTTP/1.1" 400 280 "-" "cloudkitty-processor keystoneauth1/3.11.2 python-requests/2.21.0 CPython/2.7.15rc1"* *10.20.0.100 - - [11/Jan/2019:17:07:08 +0000] "POST /v1/aggregates?details=False&groupby=id&groupby=project_id&start=2019-01-01T00%3A00%3A00&stop=2019-01-01T01%3A00%3A00 HTTP/1.1" 400 280 "-" "cloudkitty-processor keystoneauth1/3.11.2 python-requests/2.21.0 CPython/2.7.15rc1"* *Also, the price while launching the instance is not getting updated on the fly.* *Below are the processor logs of cloudkitty:-* *2019-08-30 11:20:18.571 23130 ERROR oslo_messaging.rpc.server AttributeError: 'list' object has no attribute 'start'* 2019-08-30 11:20:18.571 23130 ERROR oslo_messaging.rpc.server 2019-08-30 11:20:18.887 23136 DEBUG cloudkitty.orchestrator [-] Received quote from RPC. quote /usr/lib/python2.7/site-packages/cloudkitty/orchestrator.py:11 1 2019-08-30 11:20:18.977 23136 DEBUG oslo_db.api [-] Loading backend 'sqlalchemy' from 'cloudkitty.rating.hash.db.sqlalchemy.api' _load_backend /usr/lib/pytho n2.7/site-packages/oslo_db/api.py:261 2019-08-30 11:20:19.131 23136 DEBUG oslo_db.api [-] Loading backend 'sqlalchemy' from 'cloudkitty.db.sqlalchemy.api' _load_backend /usr/lib/python2.7/site-pa ckages/oslo_db/api.py:261 2019-08-30 11:20:19.153 23136 DEBUG oslo_db.api [-] Loading backend 'sqlalchemy' from 'cloudkitty.rating.pyscripts.db.sqlalchemy.api' _load_backend /usr/lib/ python2.7/site-packages/oslo_db/api.py:261 2019-08-30 11:20:19.167 23136 ERROR oslo_messaging.rpc.server [-] Exception during message handling: AttributeError: 'list' object has no attribute 'start' 2019-08-30 11:20:19.167 23136 ERROR oslo_messaging.rpc.server Traceback (most recent call last): -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Aug 30 13:52:01 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 30 Aug 2019 13:52:01 +0000 Subject: [neutron][qa][tc] Databases used on CI jobs In-Reply-To: References: Message-ID: <20190830135201.3bpall2eoyjk3563@yuggoth.org> On 2019-08-30 10:07:14 +0200 (+0200), Slawek Kaplonski wrote: [...] > And the question is - should we maybe switch somehow to Mariadb > instead of Mysql? Should it be done globally or maybe it’s only > “issue” in Neutron’s CI jobs? Or maybe we should have jobs with > both Mariadb and Mysql databases? [...] The unfortunate reality is that there are multiple relational database servers available for folks to try to use, and we can't reasonably run automated tests against them all. 
The TC's 2017-06-13 resolution to "Document current level of community database support" is the closest thing we have to guidance around this in recent history:

https://governance.openstack.org/tc/resolutions/20170613-postgresql-status.html

At the time the challenge was that we couldn't reliably test everything against both MySQL and PostgreSQL, and the user survey showed an overwhelming majority of known deployments used MySQL. We used the terminology "MySQL family of databases" to include MariaDB and Galera (particularly because upstream support for MySQL had waned in the wake of the Oracle acquisition and MariaDB was a recent enough fork that it was basically a drop-in bug-compatible replacement which some distros had even taken to misleadingly relabeling as "MySQL").

That was more than two years ago. Fast-forward to the present and it seems MariaDB and MySQL have diverged noticeably, as this recent article suggests:

https://blog.panoply.io/a-comparative-vmariadb-vs-mysql

As such, it's probably time to revisit our database testing requirements. I expect some of the challenge here is that CentOS 7 (and presumably other RH-derived distros) has "replaced" MySQL with MariaDB. Since our Project Testing Interface requires that we test against Latest Ubuntu LTS, Latest CentOS Major, and Latest openSUSE Leap, that may be the best way to frame this discussion:

https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions

-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From gdubreui at redhat.com Fri Aug 30 12:38:58 2019
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Fri, 30 Aug 2019 14:38:58 +0200
Subject: [neutron][graphql] Is the graphql branch still being used?
In-Reply-To: References: Message-ID:

Hi Stackers,

I'm not active anymore on OpenStack projects, as I've been asked to focus on OpenShift migration projects earlier this year after moving from Australia to France.

Unless someone from Neutron core has the bandwidth to continue that experiment, the feature branch will disappear. It would be difficult to go deeper without knowledge specific to Neutron.

If you have any questions, please let me know. In the meantime, I wish you guys all the best.

Cheers,
Gilles

On 29/8/19 10:09 am, Lajos Katona wrote:
> Hi,
>
> For the last summit in Denver we planned a demo with Gilles and a
> presentation of the alternatives graphql & openAPIv3, but as it was
> not supported the PoC slowly died.
> If I understand well, as nobody has time/resources to work on API
> refactoring, it is now out of the scope of OpenStack.
>
> Regards
> Lajos
>
> On Wed, 28 Aug 2019 at 21:03, Michael McCune wrote:
>
> > On Wed, Aug 28, 2019 at 1:56 PM Miguel Lavalle
> > wrote:
> > Dear Stackers,
> > Some time ago we created a feature branch in the Neutron repo
> for a PoC with graphql:
> https://opendev.org/openstack/neutron/src/branch/feature/graphql
>
> Is this PoC still going on? Is the branch still needed? The
> branch has been inactive since January and we are finding that
> patches against this branch don't pass zuul tests:
> https://review.opendev.org/#/c/678144/. If the branch is still
> being used, the team working on it should fix it so tests
> pass. If it is not used, let's remove it. If I don't hear back
> from anyone having interest in maintaining this branch by
> September 6th, I will go ahead and remove it.
>
>
> I just wanted to chime in from the api-sig. We had originally been
> helping to organize this experiment with graphql, but I have not
> heard any updates about the project in quite a while (> 9 months).
> The main contact I had for this project was +Gilles Dubreuil;
> I have cc'd him on this email.
>
> peace o/
>
> Thanks and regards
>
> Miguel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From abishop at redhat.com Fri Aug 30 14:06:08 2019
From: abishop at redhat.com (Alan Bishop)
Date: Fri, 30 Aug 2019 07:06:08 -0700
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: <20190830122850.GA5248@holtby>
References: <20190830122850.GA5248@holtby>
Message-ID:

+1 !

On Fri, Aug 30, 2019 at 5:32 AM Michele Baldessari wrote:
> Hi all,
>
> Damien (dciabrin on IRC) has always been very active in all HA things in
> TripleO and I think it is overdue for him to have core rights on this
> topic. So I'd like to propose to give him core permissions on any
> HA-related code in TripleO.
>
> Please vote here and in a week or two we can then act on this.
>
> Thanks,
> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aschultz at redhat.com Fri Aug 30 14:06:29 2019
From: aschultz at redhat.com (Alex Schultz)
Date: Fri, 30 Aug 2019 08:06:29 -0600
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: <20190830122850.GA5248@holtby>
References: <20190830122850.GA5248@holtby>
Message-ID:

+1

On Fri, Aug 30, 2019 at 6:35 AM Michele Baldessari wrote:
> Hi all,
>
> Damien (dciabrin on IRC) has always been very active in all HA things in
> TripleO and I think it is overdue for him to have core rights on this
> topic. So I'd like to propose to give him core permissions on any
> HA-related code in TripleO.
>
> Please vote here and in a week or two we can then act on this.
>
> Thanks,
> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bdobreli at redhat.com Fri Aug 30 14:07:31 2019
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Fri, 30 Aug 2019 16:07:31 +0200
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: <20190830122850.GA5248@holtby>
References: <20190830122850.GA5248@holtby>
Message-ID:

+1

On 30.08.2019 14:28, Michele Baldessari wrote:
> Hi all,
>
> Damien (dciabrin on IRC) has always been very active in all HA things in
> TripleO and I think it is overdue for him to have core rights on this
> topic. So I'd like to propose to give him core permissions on any
> HA-related code in TripleO.
>
> Please vote here and in a week or two we can then act on this.
>
> Thanks,
>

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From ralonsoh at redhat.com Fri Aug 30 14:24:48 2019
From: ralonsoh at redhat.com (Rodolfo Alonso)
Date: Fri, 30 Aug 2019 15:24:48 +0100
Subject: [neutron][ptl][elections] Non candidacy for U cycle
In-Reply-To: References: Message-ID: <95a97e3d44751a84f347fcb8add26942fe51956d.camel@redhat.com>

Dear Miguel:

Thank you very much for your work during this time. It was a pleasure to work with you.

A hug, and I hope we see each other soon!

On Thu, 2019-08-29 at 12:03 -0400, Assaf Muller wrote:
> On Thu, Aug 29, 2019 at 11:59 AM Slawek Kaplonski wrote:
> >
> > Hi,
> >
> > Thank You for all Your work as a leader during those 4 cycles!
> >
> > > On 29 Aug 2019, at 17:40, Miguel Lavalle wrote:
> > >
> > > Dear Neutron team,
> > >
> > > It has been an honor to be your PTL for the past four cycles. I have
> > > really enjoyed the ride and am thankful for all the support and hard
> > > work that you all have provided over these past two years. As I said in
> > > my last self-nomination (
> > > http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003714.html),
> > > I am of the opinion that it is healthy to transfer the baton
> > > periodically to a new leader with fresh ideas and energy. One of the
> > > focus areas of my PTL tenure has been to keep the team strong by
> > > actively helping community members to become contributors and core
> > > reviewers. As a consequence, I know for a fact that we have a deep
> > > bench of potential new PTLs ready to step up to the plate.
> > >
> > > As for me, I am not going anywhere. I am very lucky that my management
> > > at Verizon Media supports my continued active involvement with the
> > > community at large and the Neutron family of projects in particular.
> > > So I look forward to helping the team in the upcoming U cycle and
> > > beyond.
>
> Miguel, you did a fantastic job. You should be proud of your
> contributions. Thank you.
>
> > > Thank you soooo much!
> > >
> > > Miguel
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
> >

From james.slagle at gmail.com Fri Aug 30 14:36:16 2019
From: james.slagle at gmail.com (James Slagle)
Date: Fri, 30 Aug 2019 10:36:16 -0400
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: <20190830122850.GA5248@holtby>
References: <20190830122850.GA5248@holtby>
Message-ID:

+1

On Fri, Aug 30, 2019 at 8:34 AM Michele Baldessari wrote:
>
> Hi all,
>
> Damien (dciabrin on IRC) has always been very active in all HA things in
> TripleO and I think it is overdue for him to have core rights on this
> topic. So I'd like to propose to give him core permissions on any
> HA-related code in TripleO.
>
> Please vote here and in a week or two we can then act on this.
>
> Thanks,
> --
> Michele Baldessari
> C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
>

-- 
-- James Slagle
--

From ksambor at redhat.com Fri Aug 30 15:48:31 2019
From: ksambor at redhat.com (Kamil Sambor)
Date: Fri, 30 Aug 2019 17:48:31 +0200
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: References: <20190830122850.GA5248@holtby>
Message-ID:

+1

Kamil Sambor

On Fri, 30 Aug 2019 at 16:37, James Slagle wrote:

> +1
>
> On Fri, Aug 30, 2019 at 8:34 AM Michele Baldessari
> wrote:
> >
> > Hi all,
> >
> > Damien (dciabrin on IRC) has always been very active in all HA things in
> > TripleO and I think it is overdue for him to have core rights on this
> > topic. So I'd like to propose to give him core permissions on any
> > HA-related code in TripleO.
> >
> > Please vote here and in a week or two we can then act on this.
> >
> > Thanks,
> > --
> > Michele Baldessari
> > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
> >
>
> --
> -- James Slagle
> --
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mnaser at vexxhost.com Fri Aug 30 15:51:04 2019
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Fri, 30 Aug 2019 11:51:04 -0400
Subject: [openstack-dev][magnum] Using Fedora Atomic 29 for k8s cluster
In-Reply-To: References: Message-ID:

On Fri, Aug 30, 2019 at 5:02 AM Jakub Slíva wrote:
>
> On Fri, 23 Aug 2019 at 20:59, the sender wrote:
>>
>> On 2019-08-24 02:58, Mohammed Naser wrote:
>> > On Thu, Aug 22, 2019 at 11:46 PM Feilong Wang
>> > wrote:
>> >>
>> >> Hi all,
>> >>
>> >> At this moment, Magnum is still using Fedora Atomic 27 as the default
>> >> image in devstack. But you can definitely use Fedora Atomic 29 and it
>> >> works fine. But you may run into a performance issue when booting
>> >> Fedora Atomic 29 if your compute host doesn't have enough entropy.
>> >> There are two steps you need for that case:
>> >>
>> >> 1. Adding the property hw_rng_model='virtio' to the Fedora Atomic 29 image
>> >>
>> >> 2. Adding the property hw_rng:allowed='True' to the flavor; we also need
>> >> hw_rng:rate_bytes=4096 and hw_rng:rate_period=1 to get a reasonable
>> >> rate limit to avoid the VM draining the hypervisor.
>> >>
>> >> We are working on a patch for Magnum devstack to support FA29 out of
>> >> the box. Meanwhile, we're starting to test Fedora CoreOS 30. Please pop up
>> >> in the #openstack-containers channel if you have any questions. Cheers.
>> >
>> > Neat! I think it's important for us to get off Fedora Atomic given
>> > that Red Hat seems to be ending it soon. Is the plan to move towards
>> > Fedora CoreOS 30, or has there been consideration of using something
>> > like an Ubuntu base (and leveraging something like kubeadm+ansible to
>> > drive the deployment)?
>> >
>>
>> Personally, I would like us to stay on Fedora Atomic/CoreOS, since
>> Magnum has already benefited from the container-based read-only
>> operating system. But we did have the discussion about using
>> kubeadm+ansible; however, as you can see, it's quite a big refactoring,
>> and I'm not sure we can get it done with current limited resources.
>>
>
> According to some notes from the Magnum Train PTG meeting there is a plan
> to move all drivers out of tree. Although there is no schedule for that
> yet, we have started to work on an Ubuntu K8s driver which is completely
> independent of the openstack/magnum project and which is based on kubeadm,
> Heat SW Deployments and the external OpenStack cloud provider. So we do
> not think there is much refactoring needed. At the moment it is under
> heavy development; however, at some point we plan to open-source it and we
> will certainly appreciate any help.

Let's share efforts!

https://review.opendev.org/#/c/678309/

This driver can get you a full deployment but the only thing missing (which shouldn't be much more) is using Magnum CA support. There is a document in that patch that you can use to try it with DevStack!

Thanks
Mohammed

>> >>
>> >> --
>> >> Cheers & Best regards,
>> >> Feilong Wang (王飞龙)
>> >> --------------------------------------------------------------------------
>> >> Head of R&D
>> >> Catalyst Cloud - Cloud Native New Zealand
>> >> Tel: +64-48032246
>> >> Email: flwang at catalyst.net.nz
>> >> Level 6, Catalyst House, 150 Willis Street, Wellington
>> >> --------------------------------------------------------------------------
>> >
>
> Jakub Slíva
>
> Ultimum Technologies s.r.o.
> Na Poříčí 1047/26, 11000 Praha 1
> Czech Republic
>
> jakub.sliva at ultimum.io
> https://ultimum.io
>
> LinkedIn | Twitter | Facebook
>
>
-- 
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W.
http://vexxhost.com

From beagles at redhat.com Fri Aug 30 16:06:01 2019
From: beagles at redhat.com (Brent Eagles)
Date: Fri, 30 Aug 2019 13:36:01 -0230
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: References: <20190830122850.GA5248@holtby>
Message-ID:

+1

On Fri, Aug 30, 2019 at 1:20 PM Kamil Sambor wrote:

> +1
>
> Kamil Sambor
>
> On Fri, 30 Aug 2019 at 16:37, James Slagle wrote:
>
>> +1
>>
>> On Fri, Aug 30, 2019 at 8:34 AM Michele Baldessari
>> wrote:
>> >
>> > Hi all,
>> >
>> > Damien (dciabrin on IRC) has always been very active in all HA things in
>> > TripleO and I think it is overdue for him to have core rights on this
>> > topic. So I'd like to propose to give him core permissions on any
>> > HA-related code in TripleO.
>> >
>> > Please vote here and in a week or two we can then act on this.
>> >
>> > Thanks,
>> > --
>> > Michele Baldessari
>> > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D
>> >
>>
>> --
>> -- James Slagle
>> --
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jistr at redhat.com Fri Aug 30 16:24:51 2019
From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=)
Date: Fri, 30 Aug 2019 18:24:51 +0200
Subject: [tripleo] Proposing Damien Ciabrini as core on TripleO/HA
In-Reply-To: <20190830122850.GA5248@holtby>
References: <20190830122850.GA5248@holtby>
Message-ID: <7457c9c7-808d-348e-ff56-846af2150dd9@redhat.com>

+1!

On 8/30/19 2:28 PM, Michele Baldessari wrote:
> Hi all,
>
> Damien (dciabrin on IRC) has always been very active in all HA things in
> TripleO and I think it is overdue for him to have core rights on this
> topic. So I'd like to propose to give him core permissions on any
> HA-related code in TripleO.
>
> Please vote here and in a week or two we can then act on this.
>
> Thanks,
>

From haleyb.dev at gmail.com Fri Aug 30 16:44:35 2019
From: haleyb.dev at gmail.com (Brian Haley)
Date: Fri, 30 Aug 2019 12:44:35 -0400
Subject: Metadata service caching old nameservers?
In-Reply-To: <37affb04-f316-17ac-273f-d1a377d928fe@civo.com>
References: <5e8e7fda-24ad-b4f6-99a0-77d6f0664be3@civo.com> <04cddfc2-6ba0-be2e-ead1-eb78ad3ea891@gmail.com> <37affb04-f316-17ac-273f-d1a377d928fe@civo.com>
Message-ID:

On 8/30/19 4:30 AM, Grant Morley wrote:
> Hi Brian,
>
> The lease does have the two Google nameservers in it; however, a new VM
> is still somehow getting the old values as well. That is what is
> confusing me. If it helps, we are still on Queens.
>
> Is there any database entry that I can change?

If the DHCP lease doesn't have it in the reply then it's coming via cloud-init and the Nova metadata service, but I didn't see a way to change that yet via the API.

-Brian

> On 29/08/2019 20:32, Brian Haley wrote:
>>
>>
>> On 8/29/19 6:26 AM, Grant Morley wrote:
>>> Hi All,
>>>
>>> We have a bit of a weird issue with resolv.conf for instances. We
>>> have changed our subnets in neutron to use Google's nameservers, which
>>> is fine. However, it seems that when instances are launched they are
>>> still getting the old nameserver settings as well as the new ones.
>>> If I look at the metadata networking service it returns the old
>>> nameservers as well as the new ones below:
>>>
>>> curl -i http://169.254.169.254/openstack/2017-02-22/network_data.json
>>> HTTP/1.1 200 OK
>>> Content-Type: text/plain; charset=UTF-8
>>> Content-Length: 753
>>> Date: Thu, 29 Aug 2019 09:40:03 GMT
>>>
>>> {"services": [{"type": "dns", "address": "178.18.121.70"}, {"type":
>>> "dns", "address": "178.18.121.78"}, {"type": "dns", "address":
>>> "8.8.8.8"}, {"type": "dns", "address": "8.8.4.4"}]
>>>
>>> In our neutron dhcp-agent.ini file we have the correct dnsmasq
>>> nameservers set:
>>>
>>> dnsmasq_dns_servers = 8.8.8.8, 8.8.4.4
>>>
>>> Are there any database tables I can change or clear up to ensure the
>>> old nameservers no longer get set? I can't seem to find any reference
>>> in our config any more of the old nameservers, so I assume one of the
>>> services is still setting them but I can't figure out what is.
>>
>> So does the DHCP response on lease renewal just have the two Google
>> nameservers in it? If so, does a new VM booted on the subnet have the
>> correct metadata values reported? Just trying to narrow down if this
>> is in the neutron code or not.
>>
>> -Brian
> --
>
> Grant Morley
> Cloud Lead, Civo Ltd
> www.civo.com | Signup for an account!
>

From tim at swiftstack.com Fri Aug 30 17:52:44 2019
From: tim at swiftstack.com (tim at swiftstack.com)
Date: Fri, 30 Aug 2019 10:52:44 -0700
Subject: [election][ptl][swift] Tim Burke's candidacy for Swift PTL
Message-ID:

I would be honored to continue serving you as Swift PTL.

This past cycle, we have laid the foundation for Swift's continued success: we support running Swift on Python 3. This has been long overdue, and a long time coming. While we first began the process four years ago, momentum has only really picked up for it in the last year and a half. As of last week, all unit tests and functional tests now run under py3. It has been such a joy to see this project finally coming to a close.

We certainly benefited greatly from having four and a half years with an exceptionally stable base to build upon, but now we have ensured Swift's future deployability, maintainability, and longevity. John Dickinson's long-time goal that Swift be used everywhere, every day, by everyone could only hope to be fleeting without this community-driven effort.

Speaking of, I'm so happy to see that the transition to storing Zuul logs in Swift [1] has gone so smoothly. A half-dozen or so clusters suddenly started ingesting millions more PUTs every day, and yet Clark Boylan says he "[hasn't] heard complaints from the swift clouds" -- I can think of few better compliments on the quality of our software. :-)

With many millions of objects getting created every day and a median object size in the tens of kilobytes, this emphasizes another years-long project: OVH's work to optimize small-file workloads. Our next focus should be to integrate their (already running in production!) improvements back into master and disseminate their knowledge of how to hack on, debug, deploy, monitor, and troubleshoot in a brave new world.
Tim Burke

[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008516.html

From corey.bryant at canonical.com Fri Aug 30 19:01:31 2019
From: corey.bryant at canonical.com (Corey Bryant)
Date: Fri, 30 Aug 2019 15:01:31 -0400
Subject: [goal][python3] Train unit tests weekly update (goal-2)
Message-ID:

This is the goal-2 weekly update for the "Update Python 3 test runtimes for Train" goal [1]. There are 2 weeks remaining for completion of Train community goals [2].

== How can you help? ==

If your project has failing tests, please take a look and help fix them. Python 3.7 unit tests will be self-testing in Zuul.

Failing patches: https://review.openstack.org/#/q/topic:python3-train+status:open+(+label:Verified-1+OR+label:Verified-2+)

If your project has patches with successful tests, please help get them merged.

Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-train+is:open

Patch automation scripts needing review: https://review.opendev.org/#/c/666934

== Ongoing Work ==

Thank you to all who have contributed their time and fixes to enable patches to land. We're down to 9 projects with failing tests and 13 projects with successful tests.

== Completed Work ==

All patches have been submitted to all applicable projects for this goal.

Merged patches: https://review.openstack.org/#/q/topic:python3-train+is:merged

== What's the Goal? ==

To ensure (in the Train cycle) that all official OpenStack repositories with Python 3 unit tests are exclusively using the 'openstack-python3-train-jobs' Zuul template or one of its variants (e.g. 'openstack-python3-train-jobs-neutron') to run unit tests, and that tests are passing. This will ensure that all official projects are running py36 and py37 unit tests in Train.
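As a rough illustration, opting in looks something like the sketch below (a minimal example only; the file contents are a sketch under the assumption of a typical repository layout, not copied from any particular project's real configuration). A repository's .zuul.yaml references the template:

    # .zuul.yaml (sketch): the template brings in the py36/py37 unit test
    # jobs for Train, replacing older per-version job lists.
    - project:
        templates:
          - openstack-python3-train-jobs

Projects with special requirements would reference one of the variants instead, such as the neutron variant named above.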
In-Reply-To: References: Message-ID: <7B3966F3-3B51-4B58-AE97-79CB876C3E31@redhat.com> Hi, > On 28 Aug 2019, at 19:49, Miguel Lavalle wrote: > > Dear Stackers, > > Some time ago we created a feature branch in the Neutron repo for a PoC with graphql: https://opendev.org/openstack/neutron/src/branch/feature/graphql > > Is this PoC still going on? Is the branch still needed? The branch has been inactive since January and we are finding that patches against this branch don't pass zuul tests: https://review.opendev.org/#/c/678144/. If the branch is still being used, the team working on it should fix it so tests pass. If it is not used, let's remove it. If I don't hear back from anyone having interest in maintaining this branch by September 6th, I will go ahead and remove it. There are also other branches which should be probably deleted, like feature/qos, feature/lbaasv2 and feature/pecan. Can You remove them too maybe? > > Thanks an regards > > Miguel > — Slawek Kaplonski Senior software engineer Red Hat From morgan.fainberg at gmail.com Fri Aug 30 21:02:13 2019 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Fri, 30 Aug 2019 14:02:13 -0700 Subject: [keystone] Weekly meeting for September 3rd 2019 Message-ID: As of this time, we are planning to skip the keystone weekly meeting for 2019-09-03. This is to allow for work to continue with less interruption as well as US-based folks who have Labor Day (2019-09-02 this year) off to continue to make progress in light of the abbreviated week. As always, please feel free to join us on irc (freenode) in #openstack-keystone if you have any questions. I am also available (irc nic: kmalloc ). Cheers, --Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Aug 30 22:06:09 2019 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 30 Aug 2019 17:06:09 -0500 Subject: [nova] Status of the bp support-move-ops-with-qos-ports In-Reply-To: <3b96b3fe-1723-f748-5b86-d52bc99247f8@gmail.com> References: <1567093554.14852.1@smtp.office365.com> <3b96b3fe-1723-f748-5b86-d52bc99247f8@gmail.com> Message-ID: <5e816ed3-27fb-7c87-e624-ab9a61ebcac0@gmail.com> On 8/29/2019 11:09 AM, Matt Riedemann wrote: >> One possible complication in the current patch series is that it >> proposes the necessary RPC changes for _every_ move operations [3]. >> This might not be what we want to merge in Train if only cold migrate >> and resize support fill fit. So I see two possible way forwards: >> a) Others also feel that cold migrate and resize will fit into Train, >> and then I will quickly change [3] to only modify those RPCs. >> b) We agree to postpone the whole series to U. Then I will not spend >> too much time on reworking [3] in the Train timeframe. > > Or (c) we just land that change. I was meaning to review that today > anyway so this email is coincidental for me. I think passing the request > spec down to the compute methods is fine and something we'll want to do > eventually anyway for this series (even if the live migration stuff is > deferred to U). I'm +2 on that change now. > >> >> A connected topic: I proposed patches [4] to add cold migrate and >> resize tempest coverage to the nova-grenade-multinode (renamed from >> nova-grenade-live-migrate) job as Matt pointed out in [3] that we >> don't have such coverage and that my original change in [3] would have >> broken a mixed-compute-version deployment. 
> > I'm just waiting on CI results for those but since I've already been > through them in some form and they are pretty small I think there is a > good chance of landing these soon. These CI changes are approved and I've rebased the series on top of them. -- Thanks, Matt From jungleboyj at gmail.com Fri Aug 30 22:25:36 2019 From: jungleboyj at gmail.com (Jay Bryant) Date: Fri, 30 Aug 2019 17:25:36 -0500 Subject: [ptl][cinder] U Cycle PTL Non-Candidacy ... Message-ID: <98f3c4f3-7de8-a81b-87c7-1c4fdb2d08d4@gmail.com> All, I just wanted to communicate that I am not going to be running for another term as Cinder's PTL. It has been an honor to lead the Cinder team for the last two years.  When I started working with OpenStack nearly 6 years ago, leading the Cinder team was one of my goals and I appreciate the team trusting me with this responsibility for the last 4 cycles. I have enjoyed watching the project evolve over the last couple of years, going from a focus on getting new features in place to a focus on ensuring that customers get reliable storage management with an ever improving user experience. Cinder's value in the storage community outside of OpenStack has been validated as other SDS solutions have leveraged it to provide storage management for the many vendors that Cinder supports. Cinder continues to grow by adding things like cinderlib, making it relevant not only in virtualized environments but also for containerized environments.  I am glad that I have been able to help this evolution happen. As PTLs have done in the past, it is time for me to pursue other opportunities in the OpenStack ecosystem and hand over the reigns to a new leader.  Cinder has a great team and will continue to do great things.  Fear not, I am not going to go anywhere, I plan to continue to stay active in Cinder for the foreseeable future. Again, thank you for the opportunity to be Cinder's PTL, it has been a great ride! Sincerely, Jay Bryant (irc: jungleboyj) From miguel at mlavalle.com Sat Aug 31 00:02:43 2019 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 30 Aug 2019 19:02:43 -0500 Subject: [neutron][graphql] Is the graphql branch still being used? In-Reply-To: <7B3966F3-3B51-4B58-AE97-79CB876C3E31@redhat.com> References: <7B3966F3-3B51-4B58-AE97-79CB876C3E31@redhat.com> Message-ID: Hi Slawek, Yes, it was already in my plan to remove the other branches that you mention. I'll stick to the September 6th deadline that I set in my original message (in case someone wants to continue with the graphql PoC) and at that point I'll remove the branches Cheers On Fri, Aug 30, 2019 at 2:46 PM Slawek Kaplonski wrote: > Hi, > > > On 28 Aug 2019, at 19:49, Miguel Lavalle wrote: > > > > Dear Stackers, > > > > Some time ago we created a feature branch in the Neutron repo for a PoC > with graphql: > https://opendev.org/openstack/neutron/src/branch/feature/graphql > > > > Is this PoC still going on? Is the branch still needed? The branch has > been inactive since January and we are finding that patches against this > branch don't pass zuul tests: https://review.opendev.org/#/c/678144/. If > the branch is still being used, the team working on it should fix it so > tests pass. If it is not used, let's remove it. If I don't hear back from > anyone having interest in maintaining this branch by September 6th, I will > go ahead and remove it. > > There are also other branches which should be probably deleted, like > feature/qos, feature/lbaasv2 and feature/pecan. Can You remove them too > maybe? 
>
> I'm just waiting on CI results for those but since I've already been
> through them in some form and they are pretty small I think there is a
> good chance of landing these soon.

These CI changes are approved and I've rebased the series on top of them.

-- 
Thanks,
Matt

From jungleboyj at gmail.com Fri Aug 30 22:25:36 2019
From: jungleboyj at gmail.com (Jay Bryant)
Date: Fri, 30 Aug 2019 17:25:36 -0500
Subject: [ptl][cinder] U Cycle PTL Non-Candidacy ...
Message-ID: <98f3c4f3-7de8-a81b-87c7-1c4fdb2d08d4@gmail.com>

All,

I just wanted to communicate that I am not going to be running for another term as Cinder's PTL.

It has been an honor to lead the Cinder team for the last two years. When I started working with OpenStack nearly 6 years ago, leading the Cinder team was one of my goals and I appreciate the team trusting me with this responsibility for the last 4 cycles.

I have enjoyed watching the project evolve over the last couple of years, going from a focus on getting new features in place to a focus on ensuring that customers get reliable storage management with an ever-improving user experience.

Cinder's value in the storage community outside of OpenStack has been validated as other SDS solutions have leveraged it to provide storage management for the many vendors that Cinder supports. Cinder continues to grow by adding things like cinderlib, making it relevant not only in virtualized environments but also for containerized environments. I am glad that I have been able to help this evolution happen.

As PTLs have done in the past, it is time for me to pursue other opportunities in the OpenStack ecosystem and hand over the reins to a new leader. Cinder has a great team and will continue to do great things. Fear not, I am not going to go anywhere; I plan to continue to stay active in Cinder for the foreseeable future.

Again, thank you for the opportunity to be Cinder's PTL, it has been a great ride!

Sincerely,

Jay Bryant

(irc: jungleboyj)

From miguel at mlavalle.com Sat Aug 31 00:02:43 2019
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Fri, 30 Aug 2019 19:02:43 -0500
Subject: [neutron][graphql] Is the graphql branch still being used?
In-Reply-To: <7B3966F3-3B51-4B58-AE97-79CB876C3E31@redhat.com>
References: <7B3966F3-3B51-4B58-AE97-79CB876C3E31@redhat.com>
Message-ID:

Hi Slawek,

Yes, it was already in my plan to remove the other branches that you mention. I'll stick to the September 6th deadline that I set in my original message (in case someone wants to continue with the graphql PoC) and at that point I'll remove the branches.

Cheers

On Fri, Aug 30, 2019 at 2:46 PM Slawek Kaplonski wrote:

> Hi,
>
> > On 28 Aug 2019, at 19:49, Miguel Lavalle wrote:
> >
> > Dear Stackers,
> >
> > Some time ago we created a feature branch in the Neutron repo for a PoC
> with graphql:
> https://opendev.org/openstack/neutron/src/branch/feature/graphql
> >
> > Is this PoC still going on? Is the branch still needed? The branch has
> been inactive since January and we are finding that patches against this
> branch don't pass zuul tests: https://review.opendev.org/#/c/678144/. If
> the branch is still being used, the team working on it should fix it so
> tests pass. If it is not used, let's remove it. If I don't hear back from
> anyone having interest in maintaining this branch by September 6th, I will
> go ahead and remove it.
>
> There are also other branches which should probably be deleted, like
> feature/qos, feature/lbaasv2 and feature/pecan. Can You remove them too,
> maybe?
Further, we had a big scare this cycle when it looked like about half of the backend drivers would have to be marked unsupported due to vendors not updating their third-party CI systems to Python 3. (As we approach milestone 3, happily, the situation is looking better.) Much as I'd like to say that as Cinder PTL, my scintillating personality will reverse this trend, the reality is that the days of companies renting out an entire museum in San Diego or Paris to entertain OpenStack contributors are well behind us. (Plus, I don't have a scintillating personality.) The positive way to look at this is that our community has contracted to a hard core of sufficient density as the effluvium is ejected, and we've reached a kind of steady state. Thus I think the main task I would need to undertake as PTL is to focus on nurturing the community we have. This isn't to say that we won't continue to do onboarding and outreach (we will), it's just that the days when an OpenStack project could design features and people would just show up and work on them are in the past. So we need to make sure that the current Cinder community--both developers working on the main project and developers working on vendor drivers--feel well-connected and valued. (This isn't to say they don't feel that way now; my point is that this is important to maintain.) Concrete steps toward this goal are continuing Jay's good work in keeping the Cinder community well-informed of the discussions at weekly meetings, Forum sessions, the PTG, and midcycle, and taking advantage of video meeting software so we can have more (virtual) face-to-face meetings. All this is sounds reasonable enough, but why me? I'm a fairly new member of the Cinder community. I've been a Cinder core contributor since April and have been helping with stable maintenance and releases. I've had some experience as an OpenStack PTL (Glance PTL for O, P, and Q), so I know what I'm getting into. I'm an active participant in the Cinder community, and if you'll have me as PTL, I promise to do my best not to break anything. Thank you for your consideration, Brian Rosmaita (rosmaita) From gmann at ghanshyammann.com Sat Aug 31 14:48:37 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 31 Aug 2019 21:48:37 +0700 Subject: not running for the TC this term In-Reply-To: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> References: <96E71EEE-9BE9-45BC-B302-47CC32D51A41@doughellmann.com> Message-ID: <16ce8264242.d047937c95379.8320799353464008495@ghanshyammann.com> ---- On Mon, 26 Aug 2019 19:57:54 +0700 Doug Hellmann wrote ---- > Since nominations open this week, I wanted to go ahead and let you all know that I will not be seeking re-election to the Technical Committee this term. > > My role within Red Hat has been changing over the last year, and while I am still working on projects related to OpenStack it is no longer my sole focus. I will still be around, but it is better for me to make room on the TC for someone with more time to devote to it. > > It’s hard to believe it has been 6 years since I first joined the Technical Committee. So much has happened in our community in that time, and I want to thank all of you for the trust you have placed in me through it all. It has been an honor to serve and help build the community. Thanks, Doug for all your contribution as TC. It has been very helpful for me as a new member to get onboard in TC with your guidance. I have learnt a lot from you. 
-gmann > > Thank you, > Doug > > > From gmann at ghanshyammann.com Sat Aug 31 14:51:13 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 31 Aug 2019 21:51:13 +0700 Subject: [neutron][qa] Databases used on CI jobs In-Reply-To: References: Message-ID: <16ce828a588.fedae8e395409.1575331473387166547@ghanshyammann.com> ---- On Fri, 30 Aug 2019 15:07:14 +0700 Slawek Kaplonski wrote ---- > Hi, > > Recently in Neutron we merged patch [1] which caused error in Kolla-ansible [2]. > It seems that issue happens on mariadb 10.1 which is used in broken Kolla-ansible job so I started checking what database is used on Neutron’s jobs as we didn’t found any issue with this patch. And it seems that in all scenario/api jobs we have installed mysql 5.7 - see [3] for example. > > And the question is - should we maybe switch somehow to Mariadb instead of Mysql? Should it be done globally or maybe it’s only “issue” in Neutron’s CI jobs? > Or maybe we should have jobs with both Mariadb and Mysql databases? I am not sure how many projects are effected in this. But I will suggest to go for later option. You can add a new neutron job with Mariadb and if you need you can add the same in devstack gate as experimental to test the things on demand. -gmann > > [1] https://review.opendev.org/#/c/677221 > [2] https://bugs.launchpad.net/kolla-ansible/+bug/1841907 > [3] https://2f275f27875ae066d2b6-00a9605a715176e6bd23401c23e881d7.ssl.cf2.rackcdn.com/650846/24/check/tempest-integrated-networking/3fffbd5/controller/logs/dpkg-l.txt.gz > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > From gmann at ghanshyammann.com Sat Aug 31 14:59:34 2019 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 31 Aug 2019 21:59:34 +0700 Subject: [tc][elections] Not running for reelection to TC this term In-Reply-To: <20190826131938.ouya5phflfzqoexn@yuggoth.org> References: <20190826131938.ouya5phflfzqoexn@yuggoth.org> Message-ID: <16ce83048b4.12abce47595493.1199712395487284560@ghanshyammann.com> ---- On Mon, 26 Aug 2019 20:19:38 +0700 Jeremy Stanley wrote ---- > I've been on the OpenStack Technical Committee continuously for > several years, and would like to take this opportunity to thank > everyone in the community for their support and for the honor of > being chosen to represent you. I plan to continue participating in > the community, including in TC-led activities, but am stepping back > from reelection this round for a couple of reasons. > > First, I want to provide others with an opportunity to serve our > community on the TC. I hope that by standing aside for now, others > will be encouraged to run. A regular influx of fresh opinions helps > us maintain the requisite level of diversity to engage in productive > debate. > > Second, the scheduling circumstances for this election, with the TC > and PTL activities combined, will be a bit more complicated for our > election officials. I'd prefer to stay engaged in officiating so > that we can ensure it goes as smoothly for everyone a possible. To > do this without risking a conflict of interest, I need to not be > running for office. > > It's quite possible I'll run again in 6 months, but for now I'm > planning to help behind the scenes instead. Best of luck to all who > decide to run for election to any of our leadership roles! Thanks Jeremy for your all contribution as TC. You have been one of the key person and happy to see you around. 
-gmann

> --
> Jeremy Stanley
>

From gmann at ghanshyammann.com Sat Aug 31 15:00:25 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sat, 31 Aug 2019 22:00:25 +0700
Subject: [tc][elections] Not seeking re-election to TC
In-Reply-To:
References:
Message-ID: <16ce8311252.10754642d95497.8658414969337192017@ghanshyammann.com>

---- On Mon, 26 Aug 2019 23:22:00 +0700 Julia Kreger wrote ----
> Greetings everyone,
>
> I wanted to officially let everyone know that I will not be running for re-election to the TC.
>
> I have enjoyed serving on the TC for the past two years. Due to some changes in my personal and professional lives, it will not be possible for me to serve during this next term.
>
> Thanks everyone!

Thanks, Julia, for serving on the TC.

-gmann

>
> -Julia
>

From gmann at ghanshyammann.com Sat Aug 31 15:03:39 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sat, 31 Aug 2019 22:03:39 +0700
Subject: [all][tc] PDF Community Goal Update
In-Reply-To:
References:
Message-ID: <16ce8340887.b4dfe46795525.478054958081717434@ghanshyammann.com>

---- On Tue, 27 Aug 2019 12:58:54 +0700 Akihiro Motoki wrote ----
> Hi all,
>
> I would like to share the status of the PDF community goal.
> We made great progress these two weeks.
>
> Highlights
> ----------
>
> It is time to work on the PDF community goal in your project!
>
> - The openstack-tox-docs job supports PDF builds now.
>   AJaeger and I worked on the job support for PDF doc build and publishing and have completed it.
> - We confirmed that PDF builds with the Sphinx latex builder work for several major projects like nova, cinder, and neutron.
>   The approach with the latex builder should work for most projects.
> - The first PDF doc is now published, from the i18n project.
>
> How to get started
> ------------------
>
> - The "How to get started" section in the PDF goal etherpad [1] explains the minimum steps.
>   You can find useful examples there too.
> - To build PDF docs locally, you need to install LaTeX-related packages. See "To test locally" in the etherpad [1].
> - If you hit problems during the PDF build, check the common problems etherpad [2]. We are collecting knowledge there.
> - If you have questions, feel free to ask in the #openstack-doc IRC channel.
>
> Also, please sign up your name under "Project volunteers" in [1].

Thanks, Motoki, for your help and for sharing the status. As you know, I have also set up the StoryBoard for this goal; I would like to see each project's volunteer assign the respective story as well, along with the etherpad tracking - https://storyboard.openstack.org/#!/story/list?tags=pdf-doc-enabled

-gmann

>
> Useful links
> ------------
>
> [1] https://etherpad.openstack.org/p/train-pdf-support-goal
> [2] https://etherpad.openstack.org/p/pdf-goal-train-common-problems
> [3] Ongoing reviews:
>     https://review.opendev.org/#/q/topic:build-pdf-docs+(status:open+OR+status:merged)
>
> Thanks,
> Akihiro Motoki (amotoki)
>

From gmann at ghanshyammann.com Sat Aug 31 15:05:02 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sat, 31 Aug 2019 22:05:02 +0700
Subject: [tc] not seeking reelection
In-Reply-To:
References:
Message-ID: <16ce8354a87.addb5e5195533.5421683210605918272@ghanshyammann.com>

---- On Tue, 27 Aug 2019 21:59:57 +0700 Lance Bragstad wrote ----
> Hi all,
> Now that the nomination period is open for TC candidates, I'd like to say that I won't be running for a second term on the TC.
> My time on the TC has enriched my understanding of open-source communities, and I appreciate all the time people put into helping me get up to speed. I wish the best of luck to folks putting their hat in the ring this week!
> Thanks all,
> Lance

Thanks, Lance, for your service and help on the TC.

-gmann

From gagehugo at gmail.com Sat Aug 31 17:14:59 2019
From: gagehugo at gmail.com (Gage Hugo)
Date: Sat, 31 Aug 2019 12:14:59 -0500
Subject: [security] Weekly Newsletter - 29 Aug 2019
Message-ID:

The last two weeks had no meeting activity; however, this week we had plenty, so here's the summary. Hope everyone has a great weekend!

#Date: 29 Aug 2019
- Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting
- Weekly on Thursday at 1500 UTC in #openstack-meeting
- Agenda: https://etherpad.openstack.org/p/security-agenda
- https://security.openstack.org/
- https://wiki.openstack.org/wiki/Security-SIG

#Meeting Notes
- Summary: http://eavesdrop.openstack.org/meetings/security/2019/security.2019-08-29-15.00.html
- OSSA-2019-004 was released this week; more details here: https://security.openstack.org/ossa/OSSA-2019-004.html
- The VMT is currently in the process of updating the requirements for a project to obtain the "vulnerability:managed" tag; there is a change in progress here: https://review.opendev.org/#/c/678426/
  - The main goal is to reduce the barrier to entry by not explicitly requiring that an audit be performed on the project (while still recommending it), as well as clarifying other guidelines.
- The security docs are continuing to see updates: https://review.opendev.org/#/q/project:openstack/security-doc
  - Shoutout to nickthetait for taking on this work, and to those reviewing it as well!
- We discussed the default policy file discrepancies in Cinder/Nova in the Queens release; it appears that several projects have different defaults for the policy file.
  - This is causing issues when a policy file works fine in one release, but after upgrading, the file is no longer automatically detected.
  - One path forward is to open a security docs bug to track these and look for a way to resolve this.

#VMT Reports
- A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/
- OSSA-2019-004 was released this week; more details here: https://security.openstack.org/ossa/OSSA-2019-004.html

From gmann at ghanshyammann.com Sat Aug 31 19:20:18 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sun, 01 Sep 2019 02:20:18 +0700
Subject: [all][tc] Hear ye! Hear ye! Our Newest Release Name - OpenStack Ussuri
In-Reply-To:
References:
Message-ID: <16ce91f00c0.112fe44bb96935.5110743579317529543@ghanshyammann.com>

---- On Fri, 30 Aug 2019 00:05:30 +0700 Rico Lin wrote ----
> Dear all OpenStackers,
> It's my honor to announce the OpenStack U release name: "Ussuri" (named after the Ussuri River).
> Ussuri was selected as the top option in the community poll and has also been lawyer-approved, so it's official.
> Thanks again to all who joined the U release naming process.

Thanks, Rico, for coordinating it. Good job.
-gmann

>
> --
> May The Force of OpenStack Be With You,
> Rico Lin
> irc: ricolin
>

From gmann at ghanshyammann.com Sat Aug 31 19:51:50 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sun, 01 Sep 2019 04:51:50 +0900
Subject: [qa][ptl][election] Quality Assurance PTL Candidacy for Ussuri
Message-ID: <16ce93bdc7b.110bcaa8d97107.9199003574402092721@ghanshyammann.com>

Hi Everyone,

I would like to announce my candidacy for the Quality Assurance PTL role for the Ussuri cycle.

It has been a great experience serving as QA PTL for the last three cycles. Even with very few active contributors, I am proud of the team for keeping the QA tooling up and maintained, making sure no gates break, and keeping OpenStack stability a priority.

In the Train cycle, we concentrated on a few key things, like improving gate stability by breaking the integrated-gate project template into project-specific templates [1], migrating all the OpenStack CI/CD from Xenial to Bionic, etc. We have a few more planned work items still pending for Train, like keystone's system scope testing, making the sanity job voting, and documenting the QA process, but I think we will be able to make it in Train as we still have around a month until the Train release.

Along with daily QA activities, my priorities for QA in the next cycle will be:

- Release a stable version of Patrole and find a way to make it faster so that it can also test API policy on project gates.
- New ideas for more tooling or test projects to focus on OpenStack or common communities.
- Finish the grenade job migration to Zuul v3, which is not yet merged.
- Stability of Tempest plugins. There are many Tempest plugins which are not so stable; we will plan audit activities for all plugins. I know that using the Tempest service registry and setting service_availabilty properly are the key things to do on the plugins side.
- Finish the JSON strict validation for volume services. Zhufl has a lot of patches to review on this; it will be a priority to finish in the next cycle.
- Guiding and motivating more contributors to QA projects.

Thanks for reading and for considering my candidacy for the Ussuri cycle.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007568.html

-gmann

From skaplons at redhat.com Sat Aug 31 19:56:52 2019
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Sat, 31 Aug 2019 21:56:52 +0200
Subject: [neutron][qa] Databases used on CI jobs
In-Reply-To: <16ce828a588.fedae8e395409.1575331473387166547@ghanshyammann.com>
References: <16ce828a588.fedae8e395409.1575331473387166547@ghanshyammann.com>
Message-ID: <71EC7C04-3FC1-4819-941D-67BB34A56A25@redhat.com>

Hi,

> On 31 Aug 2019, at 16:51, Ghanshyam Mann wrote:
>
> ---- On Fri, 30 Aug 2019 15:07:14 +0700 Slawek Kaplonski wrote ----
> >> Hi,
> >>
> >> Recently in Neutron we merged patch [1] which caused an error in Kolla-ansible [2].
> >> It seems that the issue happens on MariaDB 10.1, which is used in the broken Kolla-ansible job, so I started checking which database is used in Neutron's jobs, as we didn't find any issue with this patch. It seems that all scenario/API jobs have MySQL 5.7 installed - see [3] for example.
> >>
> >> And the question is - should we maybe switch somehow to MariaDB instead of MySQL? Should it be done globally, or is it maybe only an "issue" in Neutron's CI jobs?
> >> Or maybe we should have jobs with both MariaDB and MySQL databases?
>
> I am not sure how many projects are affected by this, but I would suggest going for the latter option.
> You can add a new neutron job with MariaDB and, if you need, you can add the same in the devstack gate as experimental to test things on demand.

Thanks. I will then add a periodic scenario job with MariaDB, in the same way we have it done now for the PostgreSQL job.
I don't want to add yet another scenario job to our check/gate queue, which already has a lot of various jobs.
Will that work for you?

>
> -gmann
>
> >>
> >> [1] https://review.opendev.org/#/c/677221
> >> [2] https://bugs.launchpad.net/kolla-ansible/+bug/1841907
> >> [3] https://2f275f27875ae066d2b6-00a9605a715176e6bd23401c23e881d7.ssl.cf2.rackcdn.com/650846/24/check/tempest-integrated-networking/3fffbd5/controller/logs/dpkg-l.txt.gz
> >>
> >> —
> >> Slawek Kaplonski
> >> Senior software engineer
> >> Red Hat
> >>
>

—
Slawek Kaplonski
Senior software engineer
Red Hat

From gmann at ghanshyammann.com Sat Aug 31 20:08:41 2019
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sun, 01 Sep 2019 05:08:41 +0900
Subject: [neutron][qa] Databases used on CI jobs
In-Reply-To: <71EC7C04-3FC1-4819-941D-67BB34A56A25@redhat.com>
References: <16ce828a588.fedae8e395409.1575331473387166547@ghanshyammann.com> <71EC7C04-3FC1-4819-941D-67BB34A56A25@redhat.com>
Message-ID: <16ce94b4974.122a2c6ca97181.6819722393595252155@ghanshyammann.com>

---- On Sun, 01 Sep 2019 04:56:52 +0900 Slawek Kaplonski wrote ----
> Hi,
>
> > On 31 Aug 2019, at 16:51, Ghanshyam Mann wrote:
> >
> > ---- On Fri, 30 Aug 2019 15:07:14 +0700 Slawek Kaplonski wrote ----
> >> Hi,
> >>
> >> Recently in Neutron we merged patch [1] which caused an error in Kolla-ansible [2].
> >> It seems that the issue happens on MariaDB 10.1, which is used in the broken Kolla-ansible job, so I started checking which database is used in Neutron's jobs, as we didn't find any issue with this patch. It seems that all scenario/API jobs have MySQL 5.7 installed - see [3] for example.
> >>
> >> And the question is - should we maybe switch somehow to MariaDB instead of MySQL? Should it be done globally, or is it maybe only an "issue" in Neutron's CI jobs?
> >> Or maybe we should have jobs with both MariaDB and MySQL databases?
> >
> > I am not sure how many projects are affected by this, but I would suggest going for the latter option. You can add a new neutron job with MariaDB and, if you need, you can add the same in the devstack gate as experimental to test things on demand.
>
> Thanks. I will then add a periodic scenario job with MariaDB, in the same way we have it done now for the PostgreSQL job.
> I don't want to add yet another scenario job to our check/gate queue, which already has a lot of various jobs.
> Will that work for you?

+1 for that. Periodic jobs work perfectly here.
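For illustration, a rough sketch of what such a periodic job definition could look like in a zuul.d configuration. This is a sketch only: the job name, parent job, and the MYSQL_SERVICE_NAME devstack variable are my assumptions for the example, not exact values to copy.

    # Hypothetical periodic job running the existing scenario/API tests
    # against MariaDB instead of the default MySQL 5.7. The parent job
    # and the devstack variable below are assumptions for illustration.
    - job:
        name: neutron-tempest-mariadb-full
        parent: devstack-tempest
        description: Neutron scenario/API tests with MariaDB as the database.
        vars:
          devstack_localrc:
            MYSQL_SERVICE_NAME: mariadb

    # Attach it to the periodic pipeline only, not check/gate.
    - project:
        periodic:
          jobs:
            - neutron-tempest-mariadb-full

The real parent job and variables should be taken from the existing neutron scenario jobs; the point is only that the job runs in the periodic pipeline instead of check/gate.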
-gmann

>
> >
> > -gmann
> >
> >>
> >> [1] https://review.opendev.org/#/c/677221
> >> [2] https://bugs.launchpad.net/kolla-ansible/+bug/1841907
> >> [3] https://2f275f27875ae066d2b6-00a9605a715176e6bd23401c23e881d7.ssl.cf2.rackcdn.com/650846/24/check/tempest-integrated-networking/3fffbd5/controller/logs/dpkg-l.txt.gz
> >>
> >> —
> >> Slawek Kaplonski
> >> Senior software engineer
> >> Red Hat
> >>
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>

From balazs.gibizer at est.tech Sat Aug 31 20:23:05 2019
From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=)
Date: Sat, 31 Aug 2019 20:23:05 +0000
Subject: [nova] Status of the bp support-move-ops-with-qos-ports
In-Reply-To: <5e816ed3-27fb-7c87-e624-ab9a61ebcac0@gmail.com>
References: <1567093554.14852.1@smtp.office365.com> <3b96b3fe-1723-f748-5b86-d52bc99247f8@gmail.com> <5e816ed3-27fb-7c87-e624-ab9a61ebcac0@gmail.com>
Message-ID: <1567282983.1043.0@smtp.office365.com>

On Sat, Aug 31, 2019 at 12:06 AM, Matt Riedemann wrote:
> On 8/29/2019 11:09 AM, Matt Riedemann wrote:
>> One possible complication in the current patch series is that it proposes the necessary RPC changes for _every_ move operation [3]. This might not be what we want to merge in Train if only cold migrate and resize support will fit. So I see two possible ways forward:
>> a) Others also feel that cold migrate and resize will fit into Train, and then I will quickly change [3] to only modify those RPCs.
>> b) We agree to postpone the whole series to U. Then I will not spend too much time on reworking [3] in the Train timeframe.
>
> Or (c) we just land that change. I was meaning to review that today anyway, so this email is coincidental for me. I think passing the request spec down to the compute methods is fine and something we'll want to do eventually anyway for this series (even if the live migration stuff is deferred to U). I'm +2 on that change now.
>
>> A connected topic: I proposed patches [4] to add cold migrate and resize tempest coverage to the nova-grenade-multinode (renamed from nova-grenade-live-migrate) job, as Matt pointed out in [3] that we don't have such coverage and that my original change in [3] would have broken a mixed-compute-version deployment. I'm just waiting on CI results for those, but since I've already been through them in some form and they are pretty small, I think there is a good chance of landing these soon.

These CI changes are approved and I've rebased the series on top of them. Thank you Matt!

cheers,
gibi

> --
> Thanks,
> Matt
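The mixed-compute-version concern in the thread above is the standard rolling-upgrade problem: a controller service sending a new RPC argument (here, the request spec) to a compute service that still speaks the older RPC interface. The usual oslo.messaging pattern is to check whether the newer version can be sent and to drop the new argument otherwise. Below is a minimal, illustrative sketch of that pattern; the class, method, topic, and version numbers are assumptions for the example, not nova's actual RPC API.

    import oslo_messaging as messaging

    # Illustrative sketch of RPC version negotiation during a rolling
    # upgrade. All names and version numbers here are hypothetical.
    class ComputeRPCClient(object):
        def __init__(self, transport):
            target = messaging.Target(topic='compute', version='1.0')
            # In a real service the cap comes from service version pinning
            # or configuration, not a hard-coded value.
            self.client = messaging.RPCClient(transport, target,
                                              version_cap='1.1')

        def resize_instance(self, ctxt, instance, flavor, request_spec=None):
            version = '1.1'  # the version that added request_spec
            kwargs = {'instance': instance, 'flavor': flavor,
                      'request_spec': request_spec}
            if not self.client.can_send_version(version):
                # The receiving compute is still on the old version:
                # fall back and drop the argument it would not accept.
                version = '1.0'
                kwargs.pop('request_spec')
            cctxt = self.client.prepare(version=version)
            cctxt.cast(ctxt, 'resize_instance', **kwargs)

This only shows the negotiate-and-drop mechanism; the authoritative changes are the ones in the series referenced as [3] above.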