Hi all,

We are in the midst of the transition from OSF to OIF and have therefore decided not to add more add-ons for Victoria.

Note: the value of, and effort involved in, the RefStack & Tempest process was questioned. The answer is that we need to identify pointers in the Marketplace that confirm the claims made by current Marketplace entries.
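As a concrete illustration of what "pointers to results" could look like, the sketch below filters a hypothetical list of Marketplace entries down to those that actually link to a test result. All field names here ("product", "tested_guideline", "results_url") are invented for illustration and are not the real Marketplace data model:

```python
# Hypothetical Marketplace records; the field names are invented for
# illustration only, not the real Marketplace schema.
entries = [
    {"product": "CloudA", "tested_guideline": "2020.06",
     "results_url": "https://refstack.openstack.org/#/results/abc"},
    {"product": "CloudB", "tested_guideline": None, "results_url": None},
]

def entries_with_results(records, guideline=None):
    """Keep only records that point at an actual test result,
    optionally restricted to a single guideline version."""
    kept = [r for r in records if r.get("results_url")]
    if guideline is not None:
        kept = [r for r in kept if r.get("tested_guideline") == guideline]
    return kept

verified = entries_with_results(entries)
print([r["product"] for r in verified])  # → ['CloudA']
```

A check like this only works once entries carry a results link at all, which is exactly the gap discussed above.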

Here is a summary of today's Interop WG call.

1. Interop Guidelines for 2020.10 - 2020.10.json (https://opendev.org/osf/interop) - Arkady to try to submit to osf/interop, escalating to OSF staff if needed (refer to https://review.opendev.org/#/c/762705/1/2020.11.json) - need pointers to results in the Marketplace

2. Interop add-on Guidelines for existing programs - https://opendev.org/osf/interop/src/branch/master/add-ons
2.a DNS (Designate) - dns.2020.10.json - need pointers to results
2.b Orchestration (Heat) - orchestration.2020.10.json - need pointers to results


3. Interop add-on guidelines for new proposals - https://www.openstack.org/marketplace/
3.a Shared File Systems (Manila) - no plans for Victoria
3.b Metal as a Service (Ironic) - no plans for Victoria


4. What's next for Interop 2021 with containers & Kubernetes/Magnum? - need volunteers with Go skills for new conformance test proposals - need Board-level guidance from the Foundation
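For guideline files like the 2020.10.json mentioned above, a quick structural sanity check before pushing a review can catch malformed JSON early. A minimal sketch, assuming only that a guideline is a JSON object; the expected top-level keys and the miniature sample below are assumptions for illustration, not the official schema:

```python
import json
import tempfile

def check_guideline(path, expected_keys=("id", "schema")):
    """Parse a guideline JSON file and return (data, missing_keys),
    where missing_keys lists expected top-level keys not present."""
    with open(path) as f:
        data = json.load(f)  # raises an error on malformed JSON
    missing = [k for k in expected_keys if k not in data]
    return data, missing

# Hypothetical miniature guideline for illustration only; the real
# 2020.10.json lives in the osf/interop repository and is far larger.
sample = {"id": "2020.10", "schema": "2.0", "capabilities": {}}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    sample_path = f.name

data, missing = check_guideline(sample_path)
print("id:", data["id"], "missing:", missing)  # → id: 2020.10 missing: []
```

Running something like this locally before escalating a stuck review keeps trivial syntax problems out of Gerrit.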


This is another call for volunteers to see us through the OSF-to-OIF transition. Please reply by email with your ideas on:

1. Un-linking orchestration from Heat and moving to Magnum or Kubernetes as the baseline for containers
2. Adding a new program for Bare Metal as a Service with Ironic, Bifrost, Metal3, etc.
3. Putting the integrated OpenStack logo program on autopilot, or terminating it at Victoria


Thanks
Prakash
For Interop WG OSF/OIF



On Friday, November 13, 2020, 11:04:48 AM PST, openstack-discuss-request@lists.openstack.org <openstack-discuss-request@lists.openstack.org> wrote:


Send openstack-discuss mailing list submissions to
    openstack-discuss@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
or, via email, send a message with subject or body 'help' to
    openstack-discuss-request@lists.openstack.org

You can reach the person managing the list at
    openstack-discuss-owner@lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of openstack-discuss digest..."


Today's Topics:

  1. Re: [release][infra] Discrepancy between release jobs and the
      "normal" jobs CI in terms of distro (Jeremy Stanley)
  2. [OpenStack][InteropWG] Weekly Interop meeting Agenda for
      Victoria Interop Guidelines  in next hr (prakash RAMCHANDRAN)
  3. Re: [release][infra] Discrepancy between release jobs and the
      "normal" jobs CI in terms of distro (Jeremy Stanley)
  4. Re:
      [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova
      enforces that no DB credentials are allowed for the nova-compute
      service (Oliver Walsh)


----------------------------------------------------------------------

Message: 1
Date: Fri, 13 Nov 2020 17:20:44 +0000
From: Jeremy Stanley <fungi@yuggoth.org>
To: openstack-discuss@lists.openstack.org
Subject: Re: [release][infra] Discrepancy between release jobs and the
    "normal" jobs CI in terms of distro
Message-ID: <20201113172044.c6cgt7rdy6m6mkeu@yuggoth.org>
Content-Type: text/plain; charset="utf-8"

On 2020-11-13 13:58:18 +0100 (+0100), Radosław Piliszek wrote:
[...]
> I believe it would be a rare situation but surely testing something on
> Bionic and trying to release on Focal might have its quirks.

Honestly, I think the real problem here is that we have a bunch of
unnecessary cruft in the release-openstack-python job held over from
when we used to use tox to create release artifacts. If you look
through the log of a successful build you'll see that we're not
actually running tox or installing the projects being released, but
we're using the ensure-tox and bindep roles anyway. We may not even
need ensure-pip in there. The important bits of the job are that it
checks out the correct state of the repository and then runs
`python3 setup.py sdist bdist_wheel` and then pulls the resulting
files back to the executor to be published. That should be fairly
consistent no matter what project is being built and no matter what
distro it's being built on.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-discuss/attachments/20201113/5fa6322b/attachment-0001.sig>

------------------------------

Message: 2
Date: Fri, 13 Nov 2020 17:21:06 +0000 (UTC)
From: prakash RAMCHANDRAN <pramchan@yahoo.com>
To: "openstack-discuss@lists.openstack.org"
    <openstack-discuss@lists.openstack.org>
Subject: [OpenStack][InteropWG] Weekly Interop meeting Agenda for
    Victoria Interop Guidelines  in next hr
Message-ID: <435519786.5170730.1605288066630@mail.yahoo.com>
Content-Type: text/plain; charset="utf-8"


Hi all,

Please join me in the next hour for the weekly Interop WG meeting.
Interop Working Group - Weekly Friday 10-11 AM PST
Link: https://meetpad.opendev.org/Interop-WG-weekly-meetin

Agenda:
1. Interop Guidelines for 2020.10.json
2. Interop add-on Guidelines for existing
2.a DNS (Designate)
2.b Orchestration (Heat)
3. Interop add-on guidelines for new proposals
3.a FileSystem (Manila)
3.b Metal as a Service (Ironic)
4. What's next for Interop 2021 with containers & Kubernetes/magnum? - Need Volunteers with go skills for new conformance test proposals

Thanks
Prakash
For Interop WG






-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-discuss/attachments/20201113/71daaaad/attachment-0001.html>

------------------------------

Message: 3
Date: Fri, 13 Nov 2020 18:18:05 +0000
From: Jeremy Stanley <fungi@yuggoth.org>
To: openstack-discuss@lists.openstack.org
Subject: Re: [release][infra] Discrepancy between release jobs and the
    "normal" jobs CI in terms of distro
Message-ID: <20201113181805.wwwlhkqxalbrrbxz@yuggoth.org>
Content-Type: text/plain; charset="utf-8"

On 2020-11-13 17:20:44 +0000 (+0000), Jeremy Stanley wrote:
[...]
> Honestly, I think the real problem here is that we have a bunch of
> unnecessary cruft in the release-openstack-python job held over
> from when we used to use tox to create release artifacts. If you
> look through the log of a successful build you'll see that we're
> not actually running tox or installing the projects being
> released, but we're using the ensure-tox and bindep roles anyway.
[...]

This solution has been proposed: https://review.opendev.org/762699
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: <http://lists.openstack.org/pipermail/openstack-discuss/attachments/20201113/9332bde8/attachment-0001.sig>

------------------------------

Message: 4
Date: Fri, 13 Nov 2020 19:02:14 +0000
From: Oliver Walsh <owalsh@redhat.com>
To: Balázs Gibizer <balazs.gibizer@est.tech>
Cc: Javier Pena <jpena@redhat.com>,  openstack maillist
    <openstack-discuss@lists.openstack.org>, Thomas Goirand
    <zigo@debian.org>
Subject: Re:
    [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova
    enforces that no DB credentials are allowed for the nova-compute
    service
Message-ID:
    <CALv8kgEi7g_NeON8xVqpHp1T=P4M=ueyvYQ2Q6TAu2wYNA=5jw@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Fri 13 Nov 2020, 16:18 Oliver Walsh, <owalsh@redhat.com> wrote:

>
>
> On Fri, 13 Nov 2020 at 10:06, Balázs Gibizer <balazs.gibizer@est.tech>
> wrote:
>
>>
>>
>> On Thu, Nov 12, 2020 at 06:09, Javier Pena <jpena@redhat.com> wrote:
>> >>  On 11/11/20 5:35 PM, Balázs Gibizer wrote:
>> >>  > Dear packagers and deployment engine developers,
>> >>  >
>> >>  > Since Icehouse nova-compute service does not need any database
>> >>  > configuration as it uses the message bus to access data in the
>> >> database
>> >>  > via the conductor service. Also, the nova configuration guide
>> >> states
>> >>  > that the nova-compute service should not have the
>> >>  > [api_database]connection config set. Having any DB credentials
>> >>  > configured for the nova-compute is a security risk as well since
>> >> that
>> >>  > service runs close to the hypervisor. Since Rocky[1] nova-compute
>> >>  > service fails if you configure API DB credentials and set
>> >> upgrade_level
>> >>  > config to 'auto'.
>> >>  >
>> >>  > Now we are proposing a patch[2] that makes nova-compute fail at
>> >> startup
>> >>  > if the [database]connection or the [api_database]connection is
>> >>  > configured. We know that this breaks at least the rpm packaging,
>> >> debian
>> >>  > packaging, and puppet-nova. The problem there is that in an
>> >> all-in-on
>> >>  > deployment scenario the nova.conf file generated by these tools is
>> >>  > shared between all the nova services and therefore nova-compute
>> >> sees DB
>> >>  > credentials. As a counter-example, devstack generates a separate
>> >>  > nova-cpu.conf and passes that to the nova-compute service even in
>> >> an
>> >>  > all-in-on setup.
>> >>  >
>> >>  > The nova team would like to merge [2] during Wallaby but we are
>> >> OK to
>> >>  > delay the patch until Wallaby Milestone 2 so that the packagers
>> >> and
>> >>  > deployment tools can catch up. Please let us know if you are
>> >> impacted
>> >>  > and provide a way to track when you are ready with the
>> >> modification that
>> >>  > allows [2] to be merged.
>> >>  >
>> >>  > There was a long discussion on #openstack-nova today[3] around
>> >> this
>> >>  > topic. So you can find more detailed reasoning there[3].
>> >>  >
>> >>  > Cheers,
>> >>  > gibi
>> >>
>> >>  IMO, that's ok if, and only if, we all agree on how to implement it.
>> >>  Best would be if we (all downstream distro + config management)
>> >> agree on
>> >>  how to implement this.
>> >>
>> >>  How about, we all implement a /etc/nova/nova-db.conf, and get all
>> >>  services that need db access to use it (ie: starting them with
>> >>  --config-file=/etc/nova/nova-db.conf)?
>> >>
>> >
>> > Hi,
>> >
>> > This is going to be an issue for those services we run as a WSGI app.
>> > Looking at [1], I see
>> > the app has a hardcoded list of config files to read (api-paste.ini
>> > and nova.conf), so we'd
>> > need to modify it at the installer level.
>> >
>> > Personally, I like the nova-db.conf way, since it looks like it
>> > reduces the amount of work
>> > required for all-in-one installers to adapt, but that requires some
>> > code change. Would the
>> > Nova team be happy with adding a nova-db.conf file to that list?
>>
>> Devstack solves the all-in-one case by using these config files:
>>
>> * nova.conf and api_paste.ini for the wsgi apps e.g. nova-api and
>> nova-metadata-api
>
> * nova.conf for the nova-scheduler and the top level nova-conductor
>> (super conductor)
>> * nova-cell<cell-id>.conf for the cell level nova-conductor and the
>> proxy services, e.g. nova-novncproxy
>
> * nova-cpu.conf for the nova-compute service
>>
>
> IIUC for nova-metadata-api "it depends":
> local_metadata_per_cell=True it needs nova-cell<cell-id>.conf
> local_metadata_per_cell=False it needs nova.conf
>
> Cheers,
> Ollie
>

Also Sean and Dan mentioned the other day that the cell level
nova-conductor requires api db access, which I really did not expect.

Cheers,
Ollie


>
>>
>> The nova team suggest to use a similar strategy to separate files. So
>
> at the moment we are not planning to change what config files the wsgi
>> apps will read.
>>
>> Cheers,
>> gibi
>>
>>
>> >
>> > Regards,
>> > Javier
>> >
>> >
>> > [1] -
>> >
>> https://opendev.org/openstack/nova/src/branch/master/nova/api/openstack/wsgi_app.py#L30
>> >
>> >>  If I understand well, these services would need access to db:
>> >>  - conductor
>> >>  - scheduler
>> >>  - novncproxy
>> >>  - serialproxy
>> >>  - spicehtml5proxy
>> >>  - api
>> >>  - api-metadata
>> >>
>> >>  Is this list correct? Or is there some services that also don't
>> >> need it?
>> >>
>> >>  Cheers,
>> >>
>> >>  Thomas Goirand (zigo)
>> >>
>> >>
>> >
>> >
>>
>>
>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-discuss/attachments/20201113/f1200424/attachment.html>

------------------------------

Subject: Digest Footer

_______________________________________________
openstack-discuss mailing list
openstack-discuss@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss


------------------------------

End of openstack-discuss Digest, Vol 25, Issue 80
*************************************************