[nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service

Alex Schultz aschultz at redhat.com
Mon Nov 23 14:02:52 UTC 2020


On Mon, Nov 23, 2020 at 6:42 AM Sean Mooney <smooney at redhat.com> wrote:
>
> On Mon, 2020-11-23 at 06:15 -0700, Alex Schultz wrote:
> > On Mon, Nov 23, 2020 at 3:47 AM Balázs Gibizer <balazs.gibizer at est.tech> wrote:
> > >
> > >
> > >
> > > On Mon, Nov 23, 2020 at 11:18, Thomas Goirand <zigo at debian.org> wrote:
> > > > On 11/23/20 9:30 AM, Tobias Urdin wrote:
> > > > >  Hello,
> > > > >
> > > > >
> > > > >  Just to clarify that this is already possible when using
> > > > >  puppet-nova, it's up to the deployment to
> > > > >
> > > > >  make sure the database parameters for the classes is set.
> > > > >
> > > > >
> > > > >  We've been running without database credentials in nova.conf on our
> > > > >  compute nodes for years.
> > > > >
> > > > >
> > > > >  Best regards
> > > > >
> > > > >  Tobias
> > > >
> > > > Hi Tobias,
> > > >
> > > > That's not what I'm suggesting.
> > > >
> > > > I'm suggesting that the upstream nova-compute code simply ignore
> > > > completely anything related to the db connection, so we're done with the
> > > > topic. That is, if the nova-compute process having access to the db is
> > > > the issue we're trying to fix.
> > > >
> > > > Or is it that the security problem is having the db credentials
> > > > written in a file on the compute node? If so, isn't a hacker with
> > > > root (or nova) access to a compute node already game-over?
> > > >
> > > > What are we trying to secure here? If it's what I'm thinking (i.e.
> > > > some VM code escaping from the guest, with the hacker potentially
> > > > gaining access to the db), then IMO that's not the way to enforce
> > > > things. It's not the role of upstream Nova to do this, beyond
> > > > providing good enough documentation.
> > >
> > > I always understood this as having the goal of limiting the attack
> > > surface: if a VM escapes out of the sandbox and accesses the
> > > hypervisor, then limit how many other services get compromised
> > > outside of the compromised compute host.
> > >
> >
> > I can agree with this in theory; however, I don't think it's nova's
> > responsibility to enforce this.
> >
> nova needs to enforce it: we use the absence or presence of the db creds to know
> whether common code is running in the compute agent or in controller services, and it actively breaks the nova-compute agent if they
> are present.
>

Seems like a poor choice to have used db creds to determine
functionality, but OK.
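The mechanism Sean describes — branching on whether db credentials appear in the config handed to the service — could be sketched roughly as follows. This is hypothetical illustration code, not actual Nova internals; the function name, the config sections checked, and the warning-vs-hard-fail switch are assumptions for the sake of the example:

```python
# Hypothetical sketch, not actual Nova code: refuse to start (or just warn)
# when DB credentials appear in the config given to a compute agent.
import configparser

def check_no_db_credentials(config_text, hard_fail=True):
    """Return True if the config carries no DB credentials.

    With hard_fail=True, raise instead of returning False when a
    [database]/[api_database] connection string is present.
    """
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    for section in ("database", "api_database"):
        if parser.has_option(section, "connection"):
            msg = ("DB credentials found in [%s]; the compute agent "
                   "must not be given database access" % section)
            if hard_fail:
                raise RuntimeError(msg)
            print("WARNING: " + msg)
            return False
    return True

# A compute config without a [database] section passes the check.
clean = "[DEFAULT]\ncompute_driver = libvirt.LibvirtDriver\n"
print(check_no_db_credentials(clean))  # → True

# A config carrying a connection string trips the hard failure.
dirty = clean + "[database]\nconnection = mysql+pymysql://nova:secret@db/nova\n"
try:
    check_no_db_credentials(dirty)
except RuntimeError:
    print("startup refused")  # → startup refused
```

The ERROR-log-instead-of-hard-fail option discussed below is just the `hard_fail` flag in this sketch.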

> it is a bug to have the db creds in the set of configs passed to nova-compute, and it has been for years.
> the fact that it worked in some cases does not change the fact that this was unsupported following a deprecation cycle and is actively
> depended on in code, after allowing operators, packagers and deployment tool maintainers years to ensure it's no longer present.
>
> we could make this just an ERROR log without the hard fail, but that would still not change the fact that there is a bug in packages or deployment
> tools that should be fixed.
>
> >   IMHO a warning about this condition
> > should be sufficient from a project standpoint.  It's up to the
> > operator to ensure this does not happen, not the project.
> >
> that would be true if it were not something that the code relied on to function correctly.
> local conductor mode was removed in grizzly; since then the db creds have been unused on the compute node.
> when cells v2 was introduced they were used to determine whether we would check the version in the local cell or in all cells
> as part of the automatic rpc upgrade level calculation. we now always do the auto discovery, which causes it to break.
>
> >   The
> > project can't foresee how the service is actually going to be
> > deployed.
> >
> we can, however, define which methods of deployment we will support.

No? I don't think that's ever been a thing for OpenStack services.

> >   In theory this is moot if the compute service is running on
> > the same host as the api and not in something like a container.
> not really, we have expected nova-compute not to use nova.conf in an all-in-one deployment since rocky, unless it's
> in a container where it has a rendered version that only contains the sections relevant to it.

File names were placeholders. And since you don't control the
deployment, you don't pick the names (see this thread)...
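As an illustration of the per-service config split being discussed, a hypothetical layout might look like the sketch below. The file names and credential values are placeholders, not names any tool mandates; oslo.config-based services accept `--config-file` more than once, so keeping db credentials away from nova-compute comes down to which files each service is started with:

```shell
# Placeholder layout: shared options in one file, db credentials in another.
cat > nova.conf <<'EOF'
[DEFAULT]
transport_url = rabbit://nova:secret@rabbit/
EOF

cat > nova-db.conf <<'EOF'
[database]
connection = mysql+pymysql://nova:secret@db/nova
EOF

# Controller services would get both files; the compute agent only the first:
#   nova-api     --config-file nova.conf --config-file nova-db.conf
#   nova-compute --config-file nova.conf

# Sanity check: the file handed to nova-compute carries no db credentials.
if ! grep -q '^connection' nova.conf; then
    echo "compute config is clean"
fi
```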

> > Escaping the service and having access to the host won't prevent the
> > hacker from reading /etc/nova/nova-db.conf instead of
> > /etc/nova/nova-compute.conf.
> it won't, but /etc/nova/nova-db.conf should not be on the compute node
> unless you are also deploying a service that will actually use it there.
> that is valid to do, but it should still not be passed to the nova-compute binary.

You're missing the point: if the operator is deploying nova-api and
nova-compute on the same host, the credentials will be there. It may not
be a best practice, but it is something that people do for their own
reasons. Nova should not artificially restrict this for no reason other
than that you think it shouldn't be done, or that you wrote code assuming
this. The whole point of OpenStack was to allow folks to build their own
clouds. If deployment decisions are being made at the project level now,
then that seems to be a break from how things have been done
historically. Having handled tooling around the deployment of OpenStack
for nearly 6 years now, I'm not certain projects should necessarily be
dictating this.

I raised my concerns about this when the concept was first explained
to me in #puppet-openstack, and I've basically washed my hands of it. I
disagree with this entire thing, but my take is that if you're going to
do it, then nova developers need to ensure it's supported in all the
places and properly explained to operators, which seems to be the plan
from this thread. I still don't think it's a good idea to hard fail,
but carry on.

> > > Cheers,
> > > gibi
> > >
> > > >
> > > > Cheers,
> > > >
> > > > Thomas Goirand (zigo)
> > > >




More information about the openstack-discuss mailing list