[openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas
Clint Byrum
clint at fewbar.com
Mon Jun 16 23:54:17 UTC 2014
Excerpts from Doug Wiegley's message of 2014-06-16 16:10:51 -0700:
> Hi Clint,
>
> Comments below.
>
> On 6/16/14, 3:06 PM, "Clint Byrum" <clint at fewbar.com> wrote:
>
> >Excerpts from Doug Wiegley's message of 2014-06-16 13:22:26 -0700:
> >> > nobody is calling Barbican "a database". It is a place to store
> >>
> >> … did you at least feel a heavy sense of irony as you typed those two
> >> statements? "It's not a database, it just stores things!" :-)
> >>
> >
> >Not at all, though I understand that, clipped as so, it may look a bit
> >ironic.
> >
> >I was using shorthand of "database" to mean a general purpose database. I
> >should have qualified it to avoid any confusion. It is a narrow purpose
> >storage service with strong access controls. We can call that a database
> >if you like, but I think it has one very tiny role, and that is to audit
> >and control access to secrets.
> >
> >> The real irony here is that in this rather firm stand of keeping the
> >>user
> >> in control of their secrets, you are actually making the user LESS in
> >> control of their secrets. Copies of secrets will have to be made,
> >>whether
> >> stored under another tenant, or shadow copied somewhere. And the user
> >> will have no way to delete them, or even know that they exist.
> >>
> >
> >Why would you need to make copies outside of the in-RAM copy that is
> >kept while the service runs? You're trying to do too much instead of
> >operating in a nice loosely coupled fashion.
>
> I’ll come back to this.
>
> >
> >> The force flag would eliminate the common mistake cases enough that I'd
> >> wager lbaas and most others would cease to worry, not duplicate, and
> >>just
> >> reference Barbican IDs and nothing else. (Not including backends that
> >> will already make a copy of the secret, but things like servicevm will
> >>not
> >> need to dup it.) The earlier assertion that we have to deal with the
> >> missing secrets case even with a force flag is, I think, false, because
> >> once the common errors have been eliminated, the potential window of
> >> accidental pain is reduced to those that really ask for it.
> >
> >The accidental pain thing makes no sense to me. I'm a user and I take
> >responsibility for my data. If I don't want to have that responsibility,
> >I will use less privileged users and delegate the higher amount of
> >privilege to a system that does manage those relationships for me.
> >
> >Do we have mandatory file locking in Unix? No we don't. Why? Because some
> >users want the power to remove files _no matter what_. We build in the
> >expectation that things may disappear no matter what you do to prevent
> >it. I think your LBaaS should be written with the same assumption. It
> >will be more resilient and useful to more people if they do not have to
> >play complicated games to remove a secret.
>
> There is literally no amount of resilience that can recover an HTTPS
> service that has no SSL private key. It’s just down. Worse, it’s not
> down when the user takes the action; it’s down at some indeterminate point
> in the future (e.g. when it reboots next, or when the LB has to move to a
> different servicevm.) We could sub in a self-signed cert in that case,
> though that’s as bad as down in many contexts.
>
This is way too much focus on problems that are not LBaaS's to solve. Help
users make keys more reliable for all other services, both inside the
cloud and on the user's own machines; don't just magically make them
reliable for LBaaS.
> I could be pedantic and point out that NFS files in use can’t be deleted,
> but that’s not relevant to your point. :-)
>
On the remote system, correct. On the home system, they can be rm'd
right out from under the nfsd and there's nothing it can do about it.
And that's just my point. NFS is a distributed system, and thus will
try to take care of distributed-system concerns, but the lower level
thing, the host, is unencumbered by the service.
I'm suggesting that LBaaS and Barbican are both low level services,
and thus should not be doing anything at a higher level.
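To illustrate the loose coupling I mean, here's a minimal sketch (all names
hypothetical; this is not Barbican's or LBaaS's real API): the listener
copies the key into RAM once at provision time, and treats a later
disappearance of the secret as an expected, visible condition rather than
something the storage service must prevent.

```python
class SecretGone(Exception):
    """Raised when a secret reference no longer resolves."""


class SecretStore:
    """Stand-in for Barbican: narrow-purpose secret storage, no consumer tracking."""

    def __init__(self):
        self._secrets = {}

    def store(self, ref, payload):
        self._secrets[ref] = payload

    def get(self, ref):
        if ref not in self._secrets:
            raise SecretGone(ref)
        return self._secrets[ref]

    def delete(self, ref):
        # Unconditional delete: the store stays unencumbered by its consumers.
        self._secrets.pop(ref, None)


class Listener:
    """LB listener that holds only an in-RAM copy of the key."""

    def __init__(self, store, secret_ref):
        self._store = store
        self._secret_ref = secret_ref
        self._key = store.get(secret_ref)  # one-time copy at provision
        self.degraded = False

    def reload(self):
        # On reload/failover, re-fetch; if the user deleted the secret,
        # degrade explicitly (e.g. self-signed cert plus an alert) instead
        # of pretending the store guaranteed the secret's existence.
        try:
            self._key = self._store.get(self._secret_ref)
            self.degraded = False
        except SecretGone:
            self.degraded = True


store = SecretStore()
store.store("ref-1", b"PRIVATE KEY")
lb = Listener(store, "ref-1")
store.delete("ref-1")  # user stays in full control of their data
lb.reload()
print(lb.degraded)     # the failure is visible, not hidden
```

The point of the sketch is that the failure mode is explicit and local to
the consumer; the store never needed to know the listener existed.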
> >
> >Anyway, nobody has answered this. What user would indiscriminately delete
> >their own data and expect that things depending on that data will continue
> >to work indefinitely?
>
> Maybe there are multiple admins and a miscommunication? Memory lapse?
> What kind of customer base do you have that doesn’t make those kinds of
> mistakes *all the time*? You’re assuming a perfect user with a perfect
> memory; accidents happen. Now pair that with a wildcard cert used by 1000
> sites, and you’ve just one-clicked your way into *an extremely bad day*.
> And you won’t even know it, until your pager starts to go off at some
> later date.
>
This is the same reason we don't login to systems as root and just do
whatever, right? We make admins funnel changes through a structured system
that can be reviewed, tracked, and collaborated on. But we still have
root.. and we still need it for zomg wtf times when the tooling fails.
How about you give users the rights to create keys and load balancers,
but not to delete? Make them gain higher privileges to delete, but use
that same privilege escalation automatically when you're using automation.
I understand this is similar to the force flag, but it is also an optional
thing that users won't be forced to use.
This maintains the simple distributed nature, gives users a "safe" mode
to opt into if they want, but doesn't force all users to work in the way
you expect them to and doesn't add a lot of extra plumbing to the system.
> Neutron doesn’t let you delete objects in use, for example; there are lots
> of other examples in the API. I’m not saying don’t let them cut their own foot
> off, if that’s their choice (the unix delete example.) But don’t make it
> easy, just because “they’d all just use force all the time anyway.”
>
Neutron owns all of those objects. They're all inside Neutron because
their physical counterparts are literally tightly coupled with copper
and fiber. That is not actually desired; it is a constraint of the
system.
> >Why would you need to make copies outside of the in-RAM copy that is
> >kept while the service runs? You're trying to do too much instead of
> >operating in a nice loosely coupled fashion.
>
> Back to this — what you describe is what we *want* to do, but you have a
> pile of operators and admins that aren’t comfortable with having a single
> click, somewhere not related to the service, quietly setting a time bomb
> for a later service failure/downtime. Giving the user a way to easily
> discover how much of their foot they’re about to blow off, before doing
> it, is what we’re asking for. And without it, I expect that workarounds
> of all varieties will proliferate up and down the stack.
That's a noble goal, but not one for low level services to advance. You
have described the main use case for orchestration: Tie together services
and machines in a way that is understandable and entirely under the
control of users.
>
> Force flag, soft deletes, resource tracking, even just displaying who’s
> using what for a visual cue; there are any number of ways to make this
> less dangerous without pushing it onto every consumer of your service.
>
They're also ways to make the system more complex and brittle.
Look, I'm talking a lot and not showing up with code, so I'm squelching
myself. The point is that this is a distributed system, and should be
built one service at a time. Services that do too much are going to be
overly complex. I am happy to assist anybody in becoming a Heat user
who agrees with me and thinks that something like Heat should be the
way users safely consume cross-project relationships.
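A minimal sketch of what the orchestration layer buys you (hypothetical and
vastly simpler than Heat itself): the stack, not the low-level services,
tracks cross-service references, so a secret can't be deleted while a
listener still depends on it, and whole-stack teardown happens in a safe
order.

```python
class Stack:
    """Toy Heat-like stack: resources plus declared dependencies."""

    def __init__(self):
        self._resources = {}  # name -> set of dependency names

    def add(self, name, depends_on=()):
        self._resources[name] = set(depends_on)

    def delete(self, name):
        # The orchestrator, not Barbican, knows who still uses the secret.
        users = [r for r, deps in self._resources.items() if name in deps]
        if users:
            raise ValueError(f"{name} still referenced by {users}")
        del self._resources[name]

    def delete_stack(self):
        # Delete consumers before their dependencies (reverse topological
        # order); assumes the dependency graph is acyclic.
        while self._resources:
            leaves = [r for r in self._resources
                      if not any(r in deps
                                 for deps in self._resources.values())]
            for r in leaves:
                self.delete(r)


stack = Stack()
stack.add("tls_secret")
stack.add("lbaas_listener", depends_on=["tls_secret"])

try:
    stack.delete("tls_secret")   # blocked at the orchestration layer
except ValueError as e:
    print(e)

stack.delete_stack()             # teardown deletes the listener first
```

The services underneath stay simple and loosely coupled; the safety lives
in the one layer whose job is exactly these cross-project relationships.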