[openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation
Richard Raseley
richard at raseley.com
Mon Nov 9 18:34:26 UTC 2015
From this operator’s perspective, this is exactly the element of community culture that, by encouraging the proliferation of projects and tools, is making the OpenStack landscape more complex and less {user,operator,architect,business decision maker} friendly.
In my opinion, it is essentially a manufactured and completely unnecessary distinction. I look forward to the day when, through some yet-to-be-known mechanism, we have a more focused product perspective within the community.
> On Nov 9, 2015, at 10:11 AM, Tim Hinrichs <tim at styra.com> wrote:
>
> They shouldn't be combined because they can each be used without the other. That is, they each stand on their own.
>
> Congress can be used for monitoring or delegating policy without attempting to correct violations (i.e. without needing workflows).
>
> Mistral can be used to make complex changes without writing a policy.
>
> Tim
>
>
>
>
>
> On Mon, Nov 9, 2015 at 8:57 AM Adam Young <ayoung at redhat.com> wrote:
> On 11/09/2015 10:57 AM, Tim Hinrichs wrote:
>> Congress happens to have the capability to run a script/API call under arbitrary conditions on the state of other OpenStack projects, which sounded like what you wanted. Or did I misread your original question?
>>
>> Congress and Mistral are definitely not competing. Congress lets people declare which states of the other OpenStack projects are permitted using a general purpose policy language, but it does not try to make complex changes (often requiring a workflow) to eliminate prohibited states. Mistral lets people create a workflow that makes complex changes to other OpenStack projects, but it doesn't have a general purpose policy language that describes which states are permitted. Congress and Mistral are complementary, and each can stand on its own.
>
> And why shouldn't these two things be in a single project?
>
>
>
>>
>> Tim
>>
>>
>> On Mon, Nov 9, 2015 at 6:46 AM Adam Young <ayoung at redhat.com> wrote:
>> On 11/06/2015 06:28 PM, Tim Hinrichs wrote:
>>> Congress allows users to write a policy that executes an action under certain conditions.
>>>
>>> The conditions can be based on any data Congress has access to, which includes nova servers, neutron networks, cinder storage, keystone users, etc. We also have some Ceilometer statistics; I'm not sure whether it's easy to get the Keystone notifications you're talking about today, but notifications are on our roadmap. If the user's login is reflected in the Keystone API, we may already be getting that event.
>>>
>>> The action could in theory be a mistral/heat API or an arbitrary script. Right now we're set up to invoke any method on any of the python-clients we've integrated with. We've got an integration with heat but not mistral. New integrations are typically easy.
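Purely to illustrate the shape of such a condition-action policy (not taken from any Congress release; the datasource tables, columns, and the provisioning action below are guesses, not anything Tim described), a reactive rule might read roughly like:

    # Illustrative only: a Congress-style Datalog rule kept as a plain string.
    # Intended reading: "whenever keystone reports a user with no role
    # assignment, execute a provisioning action". The keystonev3 tables,
    # their columns, and the provision:setup_user action are assumptions.
    provision_rule = (
        "execute[provision:setup_user(uid)] :- "
        "keystonev3:users(id=uid), "
        "not keystonev3:role_assignments(user_id=uid)"
    )
    # Loading the rule into a Congress policy (via python-congressclient or
    # the REST API) is omitted here.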
>>
>> Sounds like Mistral and Congress are competing here, then. Maybe we should merge those efforts.
>>
>>
>>>
>>> Happy to talk more.
>>>
>>> Tim
>>>
>>>
>>>
>>> On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann <doug at doughellmann.com> wrote:
>>> Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
>>> > On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann <doug at doughellmann.com> wrote:
>>> >
>>> > > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
>>> > > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
>>> > > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
>>> > > > > > Can people help me work through the right set of tools for this use case (has come up from several Operators) and map out a plan to implement it:
>>> > > > > >
>>> > > > > > Large cloud with many users coming from multiple Federation sources has a policy of providing a minimal setup for each user upon first visit to the cloud: Create a project for the user with a minimal quota, and provide them a role assignment.
>>> > > > > >
>>> > > > > > Here are the gaps, as I see it:
>>> > > > > >
>>> > > > > > 1. Keystone provides a notification that a user has logged in, but
>>> > > > > > there is nothing capable of executing on this notification at the
>>> > > > > > moment. Only Ceilometer listens to Keystone notifications.
>>> > > > > >
>>> > > > > > 2. Keystone does not have a workflow engine, and should not be auto-creating projects. This is something that should be performed via a Heat template, and Keystone does not know about Heat, nor should it.
>>> > > > > >
>>> > > > > > 3. The Mapping code is pretty static; it assumes a user entry or a
>>> > > > > > group entry in identity when creating a role assignment, and neither
>>> > > > > > will exist.
>>> > > > > >
>>> > > > > > We can assume a special domain for Federated users to have per-user
>>> > > > > > projects.
>>> > > > > >
>>> > > > > > So, let's assume a Heat Template that does the following:
>>> > > > > >
>>> > > > > > 1. Creates a user in the per-user-projects domain
>>> > > > > > 2. Assigns a role to the Federated user in that project
>>> > > > > > 3. Sets the minimal quota for the user
>>> > > > > > 4. Somehow notifies the user that the project has been set up.
>>> > > > > >
>>> > > > > > This last step probably assumes an email address from the Federated assertion. Otherwise, the user hits Horizon, gets a "not authenticated for any projects" error, and is stumped.
>>> > > > > >
>>> > > > > > How is quota assignment done in the other projects now? What happens when a project is created in Keystone? Does that information get transferred to the other services, and, if so, how? Do most people use a custom provisioning tool for this workflow?
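For illustration, steps 1 and 2 of the template sketched above map onto a handful of Keystone v3 calls. A minimal python-keystoneclient sketch follows; the endpoint, credentials, user and project names, and the pre-created "federated-users" domain are all placeholders rather than anything the thread prescribes:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    # Placeholder admin credentials and endpoint.
    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    # Assumption: a dedicated domain for per-user projects already exists.
    domain = keystone.domains.find(name='federated-users')

    # Step 1: create the user and a per-user project in that domain.
    user = keystone.users.create(name='alice', domain=domain)
    project = keystone.projects.create(name='alice', domain=domain,
                                       description='auto-provisioned project')

    # Step 2: assign the user a role on the new project.
    member = keystone.roles.find(name='Member')
    keystone.roles.grant(member, user=user, project=project)

    # Step 3 (quota) is not a keystone call at all: quotas are set per
    # project in nova, cinder, neutron, and so on, via each service's API.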
>>> > > > > >
>>> > > > >
>>> > > > > I know at Dreamhost we built some custom integration that was triggered when someone turned on the Dreamcompute service in their account in our existing user management system. That integration created the account in keystone, set up a default network in neutron, etc. I've long thought we needed a "new tenant creation" service of some sort, that sits outside of our existing services and pokes them to do something when a new tenant is established. Using heat as the implementation makes sense, for things that heat can control, but we don't want keystone to depend on heat and we don't want to bake such a specialized feature into heat itself.
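As a rough sketch of that "poke" (everything here is a placeholder: the credentials, the stand-in template, and the parameter name), delegating the per-tenant setup to heat from an outside service is essentially one stack-create call with python-heatclient:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from heatclient import client as heat_client

    # Same placeholder admin credentials as in the keystone sketch above.
    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    heat = heat_client.Client('1', session=session.Session(auth=auth))

    # Stand-in HOT template; a real "skel" template would create the default
    # network, router, security group, and so on for the new tenant.
    skel_template = {
        'heat_template_version': '2015-04-30',
        'parameters': {'project_id': {'type': 'string'}},
        'resources': {},
    }

    heat.stacks.create(stack_name='skel-alice',
                       template=skel_template,
                       parameters={'project_id': 'PROJECT-ID-FROM-KEYSTONE'})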
>>> > > > >
>>> > > >
>>> > > > I agree, an automation piece that is built-in and easy to add to
>>> > > > OpenStack would be great.
>>> > > >
>>> > > > I do not agree that it should be Heat. Heat is for managing stacks that
>>> > > > live on and change over time and thus need the complexity of the graph
>>> > > > model Heat presents.
>>> > > >
>>> > > > I'd actually say that Mistral or Ansible are better choices for this. A service which listens to the notification bus and triggers a workflow defined somewhere in either Ansible playbooks or Mistral's workflow language would simply run through the "skel" workflow for each user.
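A hedged sketch of that listening service, using oslo.messaging directly; the transport URL, topic, event name, and payload shape are assumptions that depend on how keystone's notifications are configured, and the workflow kick-off itself is left as a comment:

    from oslo_config import cfg
    import oslo_messaging

    class LoginEndpoint(object):
        """Reacts to keystone authentication notifications."""

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Assumption: keystone emits 'identity.authenticate' on login
            # and, in CADF form, names the user under payload['initiator'].
            if event_type == 'identity.authenticate':
                user_id = payload.get('initiator', {}).get('id', 'unknown')
                # Here the real service would kick off the site-specific
                # "skel" workflow: a Mistral execution, an Ansible playbook,
                # a Heat stack, etc. (omitted).
                print('would provision user %s' % user_id)

    # Placeholder transport URL; in practice this is the same message bus
    # keystone publishes its notifications to, on the default topic.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [LoginEndpoint()])
    listener.start()
    listener.wait()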
>>> > > >
>>> > > > The actual workflow would probably almost always be somewhat site specific, but it would make sense for Keystone to include a few basic ones as "contrib" elements. For instance, the "notify the user" piece would likely be simplest if you just let the workflow tool send an email. But if your cloud has Zaqar, you may want to use that as well or instead.
>>> > > >
>>> > > > Adding Mistral here to see if they have some thoughts on how this
>>> > > > might work.
>>> > > >
>>> > > > BTW, if this does form into a new project, I suggest naming it
>>> > > > Skeleton[1]
>>> > >
>>> > > Following the pattern of Kite's naming, I think a Dirigible is a
>>> > > better way to get users into the cloud. :-)
>>> > >
>>> >
>>> > lol +1
>>> >
>>> > Is this use case specifically for keystone-to-keystone, or for federation
>>> > in general?
>>>
>>> The use case I had in mind was actually signing up a new user for
>>> a cloud (at Dreamhost that meant enabling a paid service in their
>>> account in the existing management tool outside of OpenStack). I'm not
>>> sure how it relates to federation, but it seems like that might just be
>>> another trigger for something similar, though not exactly the same? A
>>> federated user would also presumably need things like a default network,
>>> for example, though it may not need anything added to the keystone
>>> database.
>>>
>>> > As an outcome of the Vancouver summit, we had a use case for mirroring a
>>> > federated user's project ID from the identity provider cloud to the service
>>> > provider cloud. The goal would be that a user can burst into a second cloud
>>> > and immediately receive a token scoped to the same project ID that they're
>>> > already familiar with (which implies a role assignment of some sort; for
>>> > example, member). That would have to be done in real time though, not by a
>>> > secondary service.
>>> >
>>> > And with shadow users, we're looking at creating an identity (basically,
>>> > nothing but a user_id) in the second cloud anyway. And as another
>>> > consequence of shadow users, they wouldn't be getting a "federated token"
>>> > of any sort, but rather a simpler, local token, referencing a local
>>> > identity (the user_id that was just created automatically).
>>> >
>>> > Adam, does any of this align with your use case?
>>> >
>>> > >
>>> > > Doug
>>> > >
>>> > > >
>>> > > > [1] https://goo.gl/photos/EML6EPKeqRXioWfd8 (that was my front yard..)
>>> > > >
>>> > >