[nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service

Balázs Gibizer balazs.gibizer at est.tech
Fri Nov 13 10:06:01 UTC 2020



On Thu, Nov 12, 2020 at 06:09, Javier Pena <jpena at redhat.com> wrote:
>>  On 11/11/20 5:35 PM, Balázs Gibizer wrote:
>>  > Dear packagers and deployment engine developers,
>>  >
>>  > Since Icehouse, the nova-compute service does not need any database
>>  > configuration, as it uses the message bus to access data in the
>>  > database via the conductor service. Also, the nova configuration
>>  > guide states that the nova-compute service should not have the
>>  > [api_database]connection config set. Having any DB credentials
>>  > configured for nova-compute is a security risk as well, since that
>>  > service runs close to the hypervisor. Since Rocky[1] the
>>  > nova-compute service fails if you configure API DB credentials and
>>  > set the upgrade_level config to 'auto'.
>>  >
>>  > Now we are proposing a patch[2] that makes nova-compute fail at
>>  > startup if the [database]connection or the [api_database]connection
>>  > is configured. We know that this breaks at least the rpm packaging,
>>  > debian packaging, and puppet-nova. The problem there is that in an
>>  > all-in-one deployment scenario the nova.conf file generated by these
>>  > tools is shared between all the nova services, and therefore
>>  > nova-compute sees DB credentials. As a counter-example, devstack
>>  > generates a separate nova-cpu.conf and passes that to the
>>  > nova-compute service even in an all-in-one setup.
>>  >
>>  > The nova team would like to merge [2] during Wallaby, but we are OK
>>  > with delaying the patch until Wallaby Milestone 2 so that the
>>  > packagers and deployment tools can catch up. Please let us know if
>>  > you are impacted and provide a way to track when you are ready with
>>  > the modification that allows [2] to be merged.
>>  >
>>  > There was a long discussion on #openstack-nova today[3] around this
>>  > topic, so you can find more detailed reasoning there[3].
>>  >
>>  > Cheers,
>>  > gibi
>> 
>>  IMO, that's ok if, and only if, we all agree on how to implement it.
>>  It would be best if we (all downstream distros + config management
>>  tools) agree on how to implement this.
>> 
>>  How about we all implement a /etc/nova/nova-db.conf, and get all
>>  services that need db access to use it (i.e. starting them with
>>  --config-file=/etc/nova/nova-db.conf)?
>> 
> 
> Hi,
> 
> This is going to be an issue for those services we run as a WSGI app.
> Looking at [1], I see the app has a hardcoded list of config files to
> read (api-paste.ini and nova.conf), so we'd need to modify it at the
> installer level.
> 
> Personally, I like the nova-db.conf way, since it looks like it reduces
> the amount of work required for all-in-one installers to adapt, but that
> requires some code change. Would the Nova team be happy with adding a
> nova-db.conf file to that list?

Devstack solves the all-in-one case by using these config files:

* nova.conf and api-paste.ini for the wsgi apps, e.g. nova-api and 
nova-api-metadata
* nova.conf for the nova-scheduler and the top level nova-conductor 
(super conductor)
* nova-cell<cell-id>.conf for the cell level nova-conductor and the 
proxy services, e.g. nova-novncproxy
* nova-cpu.conf for the nova-compute service

The nova team suggests using a similar strategy of separate config 
files per service. So at the moment we are not planning to change 
which config files the wsgi apps read.
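
For illustration, an all-in-one layout along these lines could look 
like the sketch below (host names, credentials and exact file paths 
are just placeholders, not a prescribed packaging layout):

    # /etc/nova/nova.conf - read by nova-api, nova-scheduler and the
    # super conductor, so it carries the DB credentials
    [DEFAULT]
    transport_url = rabbit://nova:RABBIT_PASS@controller:5672/
    [database]
    connection = mysql+pymysql://nova:DB_PASS@controller/nova
    [api_database]
    connection = mysql+pymysql://nova:DB_PASS@controller/nova_api

    # /etc/nova/nova-cpu.conf - read only by nova-compute; note that
    # there is no [database] or [api_database] section at all
    [DEFAULT]
    transport_url = rabbit://nova:RABBIT_PASS@controller:5672/

    # nova-compute is then started with only the DB-less file
    nova-compute --config-file /etc/nova/nova-cpu.conf

(Services that do need DB access could equally pick the credentials 
up from an extra file passed as a second --config-file, e.g. the 
nova-db.conf Thomas suggested, since oslo.config merges all the 
files given on the command line.)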

Cheers,
gibi


> 
> Regards,
> Javier
> 
> 
> [1] - 
> https://opendev.org/openstack/nova/src/branch/master/nova/api/openstack/wsgi_app.py#L30
> 
>>  If I understand well, these services would need access to db:
>>  - conductor
>>  - scheduler
>>  - novncproxy
>>  - serialproxy
>>  - spicehtml5proxy
>>  - api
>>  - api-metadata
>> 
>>  Is this list correct? Or are there some services that also don't
>>  need it?
>> 
>>  Cheers,
>> 
>>  Thomas Goirand (zigo)
>> 
>> 
> 
> 