On Friday, November 13, 2020, 11:04:48 AM PST, openstack-discuss-request@lists.openstack.org <openstack-discuss-request@lists.openstack.org> wrote:
Today's Topics:
1. Re: [release][infra] Discrepancy between release jobs and the
"normal" jobs CI in terms of distro (Jeremy Stanley)
2. [OpenStack][InteropWG] Weekly Interop meeting Agenda for
Victoria Interop Guidelines in next hr (prakash RAMCHANDRAN)
3. Re: [release][infra] Discrepancy between release jobs and the
"normal" jobs CI in terms of distro (Jeremy Stanley)
4. Re:
[nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova
enforces that no DB credentials are allowed for the nova-compute
service (Oliver Walsh)
----------------------------------------------------------------------
Message: 1
Date: Fri, 13 Nov 2020 17:20:44 +0000
Subject: Re: [release][infra] Discrepancy between release jobs and the
"normal" jobs CI in terms of distro
Content-Type: text/plain; charset="utf-8"
On 2020-11-13 13:58:18 +0100 (+0100), Radosław Piliszek wrote:
[...]
> I believe it would be a rare situation but surely testing something on
> Bionic and trying to release on Focal might have its quirks.
Honestly, I think the real problem here is that we have a bunch of
unnecessary cruft in the release-openstack-python job held over from
when we used to use tox to create release artifacts. If you look
through the log of a successful build you'll see that we're not
actually running tox or installing the projects being released, but
we're using the ensure-tox and bindep roles anyway. We may not even
need ensure-pip in there. The important bits of the job are that it
checks out the correct state of the repository and then runs
`python3 setup.py sdist bdist_wheel` and then pulls the resulting
files back to the executor to be published. That should be fairly
consistent no matter what project is being built and no matter what
distro it's being built on.
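A minimal sketch of those essential steps, using a throwaway package ("demo", the /tmp path, and the version are illustrative, not the real job's layout; the real job checks out the tagged repository state instead of writing a setup.py):

```shell
set -e
rm -rf /tmp/relsketch && mkdir -p /tmp/relsketch && cd /tmp/relsketch

# stand-in for "checks out the correct state of the repository"
cat > setup.py <<'EOF'
from setuptools import setup
setup(name="demo", version="0.0.1", py_modules=[])
EOF

# the only build dependencies the step really needs
python3 -m pip install --quiet setuptools wheel 2>/dev/null || true

# the actual release step named above
python3 setup.py sdist bdist_wheel >/dev/null

# the sdist tarball and wheel that get pulled back to the executor
ls dist/
```

Nothing in it depends on tox, bindep, or the distro, which is the point being made above.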
--
Jeremy Stanley
------------------------------
Message: 2
Date: Fri, 13 Nov 2020 17:21:06 +0000 (UTC)
Subject: [OpenStack][InteropWG] Weekly Interop meeting Agenda for
Victoria Interop Guidelines in next hr
Content-Type: text/plain; charset="utf-8"
Hi all,
Agenda:
1. Interop Guidelines for 2020.10.json
2. Interop add-on Guidelines for existing
   2.a DNS (Designate)
   2.b Orchestration (Heat)
3. Interop add-on guidelines for new proposals
   3.a FileSystem (Manila)
   3.b Metal as a Service (Ironic)
4. What's next for Interop 2021 with containers & Kubernetes/Magnum? - Need
   volunteers with Go skills for new conformance test proposals

Thanks,
Prakash
For Interop WG

OpenDev Etherpad
------------------------------
Message: 3
Date: Fri, 13 Nov 2020 18:18:05 +0000
Subject: Re: [release][infra] Discrepancy between release jobs and the
"normal" jobs CI in terms of distro
Content-Type: text/plain; charset="utf-8"
On 2020-11-13 17:20:44 +0000 (+0000), Jeremy Stanley wrote:
[...]
> Honestly, I think the real problem here is that we have a bunch of
> unnecessary cruft in the release-openstack-python job held over
> from when we used to use tox to create release artifacts. If you
> look through the log of a successful build you'll see that we're
> not actually running tox or installing the projects being
> released, but we're using the ensure-tox and bindep roles anyway.
[...]
--
Jeremy Stanley
------------------------------
Message: 4
Date: Fri, 13 Nov 2020 19:02:14 +0000
Subject: Re:
[nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova
enforces that no DB credentials are allowed for the nova-compute
service
Content-Type: text/plain; charset="utf-8"
>> >> On 11/11/20 5:35 PM, Balázs Gibizer wrote:
>> >> > Dear packagers and deployment engine developers,
>> >> >
>> >> > Since Icehouse the nova-compute service does not need any database
>> >> > configuration, as it uses the message bus to access data in the
>> >> > database via the conductor service. Also, the nova configuration
>> >> > guide states that the nova-compute service should not have the
>> >> > [api_database]connection config set. Having any DB credentials
>> >> > configured for nova-compute is a security risk as well, since that
>> >> > service runs close to the hypervisor. Since Rocky[1] the
>> >> > nova-compute service fails if you configure API DB credentials and
>> >> > set the upgrade_level config to 'auto'.
>> >> >
>> >> > Now we are proposing a patch[2] that makes nova-compute fail at
>> >> > startup if the [database]connection or the [api_database]connection
>> >> > is configured. We know that this breaks at least the rpm packaging,
>> >> > debian packaging, and puppet-nova. The problem there is that in an
>> >> > all-in-one deployment scenario the nova.conf file generated by
>> >> > these tools is shared between all the nova services, and therefore
>> >> > nova-compute sees DB credentials. As a counter-example, devstack
>> >> > generates a separate nova-cpu.conf and passes that to the
>> >> > nova-compute service even in an all-in-one setup.
>> >> >
>> >> > The nova team would like to merge [2] during Wallaby, but we are
>> >> > OK to delay the patch until Wallaby Milestone 2 so that the
>> >> > packagers and deployment tools can catch up. Please let us know if
>> >> > you are impacted and provide a way to track when you are ready
>> >> > with the modification that allows [2] to be merged.
>> >> >
>> >> > There was a long discussion on #openstack-nova today[3] around
>> >> > this topic, so you can find more detailed reasoning there[3].
>> >> >
>> >> > Cheers,
>> >> > gibi
>> >>
>> >> IMO, that's OK if, and only if, we all agree on how to implement it.
>> >> Best would be if we (all downstream distros + config management)
>> >> agree on how to implement this.
>> >>
>> >> How about we all implement a /etc/nova/nova-db.conf, and get all
>> >> services that need DB access to use it (i.e. starting them with
>> >> --config-file=/etc/nova/nova-db.conf)?
>> >>
>> >
>> > Hi,
>> >
>> > This is going to be an issue for those services we run as a WSGI app.
>> > Looking at [1], I see
>> > the app has a hardcoded list of config files to read (api-paste.ini
>> > and nova.conf), so we'd
>> > need to modify it at the installer level.
>> >
>> > Personally, I like the nova-db.conf way, since it looks like it
>> > reduces the amount of work
>> > required for all-in-one installers to adapt, but that requires some
>> > code change. Would the
>> > Nova team be happy with adding a nova-db.conf file to that list?
>>
>> Devstack solves the all-in-one case by using these config files:
>>
>> * nova.conf and api-paste.ini for the wsgi apps, e.g. nova-api and
>>   nova-metadata-api
>> * nova.conf for the nova-scheduler and the top-level nova-conductor
>>   (super conductor)
>> * nova-cell<cell-id>.conf for the cell-level nova-conductor and the
>>   proxy services, e.g. nova-novncproxy
>> * nova-cpu.conf for the nova-compute service
>>
>
> IIUC for nova-metadata-api "it depends":
> with local_metadata_per_cell=True it needs nova-cell<cell-id>.conf
> with local_metadata_per_cell=False it needs nova.conf
>
> Cheers,
> Ollie
>
Also Sean and Dan mentioned the other day that the cell level
nova-conductor requires api db access, which I really did not expect.
Cheers,
Ollie
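Tangentially, the --config-file layering proposed up-thread works because a later file's values override an earlier file's. A toy illustration of that behaviour (it uses the stdlib configparser purely to stay dependency-free; nova itself uses oslo.config, which layers multiple --config-file options the same way, and the file contents here are made up):

```python
import configparser

# stands in for a shared nova.conf with DB credentials
base_conf = """
[DEFAULT]
host = controller
[database]
connection = mysql+pymysql://nova:secret@dbhost/nova
"""

# stands in for a nova-cpu.conf-style override that blanks the credentials
compute_conf = """
[database]
connection =
"""

cfg = configparser.ConfigParser()
cfg.read_string(base_conf)      # first file: full controller config
cfg.read_string(compute_conf)   # later file wins for [database]

print(repr(cfg["database"]["connection"]))  # -> ''
print(cfg["DEFAULT"]["host"])               # unchanged -> controller
```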
>
>>
>> The nova team suggests using a similar strategy of separate files, so
>> at the moment we are not planning to change which config files the
>> wsgi apps will read.
>>
>> Cheers,
>> gibi
>>
>>
>> >
>> > Regards,
>> > Javier
>> >
>> >
>> > [1] -
>> >
>> >
>> >> If I understand well, these services would need access to db:
>> >> - conductor
>> >> - scheduler
>> >> - novncproxy
>> >> - serialproxy
>> >> - spicehtml5proxy
>> >> - api
>> >> - api-metadata
>> >>
>> >> Is this list correct? Or are there services that also don't
>> >> need it?
>> >>
>> >> Cheers,
>> >>
>> >> Thomas Goirand (zigo)
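For what it's worth, the /etc/nova/nova-db.conf split proposed up-thread might look like this; the connection strings, hostnames, and permissions comment are purely illustrative, not an agreed layout:

```ini
# /etc/nova/nova-db.conf -- deployed only where DB access is needed,
# with permissions that the nova-compute service cannot read
[database]
connection = mysql+pymysql://nova:secret@dbhost/nova

[api_database]
connection = mysql+pymysql://nova:secret@dbhost/nova_api
```

Each DB-needing service in the list above (conductor, scheduler, the proxies, api, api-metadata) would then be started with an extra --config-file=/etc/nova/nova-db.conf, while nova-compute keeps only a credential-free nova.conf.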
------------------------------
------------------------------
End of openstack-discuss Digest, Vol 25, Issue 80
*************************************************