[all] Train goal: removing and simplifying the endpoint triplets?
Hi,

During the summit in Tokyo (if I remember correctly), Sean Dague led a discussion about removing the need for having 3 endpoints per service. I was very excited about the proposal, and it's IMO a shame it hasn't been implemented. Everyone in the room agreed. Here's the content of the discussion as I remember it:

<discussion in Tokyo>
1/ The only service that needed the admin endpoint was Keystone. This requirement is now gone, so we could get rid of the admin endpoint altogether.

2/ The internal vs public endpoint distinction was only needed for accounting (for example, of bandwidth when uploading to Glance), but operators could work around that with intelligent routing. So we wouldn't need the internal endpoint either.

This leaves us needing only the public endpoint, and that's it.

Then there are the %(tenant_id)s bits in the endpoints, which are also very annoying and could be removed if the clients were smarter. These are apparently still needed for:
- cinder
- swift
- heat
</discussion in Tokyo>

Is anyone planning to implement (at least some parts of) the above?

Cheers,

Thomas Goirand (zigo)
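[To make the proposal concrete, here is a minimal sketch of the "triplet" being discussed: one service (cinder, in this example) registered three times in the keystone catalog, plus the %(tenant_id)s template that keystone expands when it builds a token's service catalog. The hostnames and project id are invented, and the substitution is a simplified model of what keystone actually does.]

# One cinder v2 API, registered under three interfaces that differ
# only in network location (hosts are hypothetical).
volume_endpoints = {
    "public":   "https://api.example.com:8776/v2/%(tenant_id)s",
    "internal": "http://cinder.mgmt.example.com:8776/v2/%(tenant_id)s",
    "admin":    "http://cinder.mgmt.example.com:8776/v2/%(tenant_id)s",
}

def render(url_template, tenant_id):
    # Simplified version of the placeholder expansion keystone performs
    # when rendering the catalog embedded in a token.
    return url_template % {"tenant_id": tenant_id}

print(render(volume_endpoints["public"], "8a1b2c3d4e5f"))
# -> https://api.example.com:8776/v2/8a1b2c3d4e5f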
On Thu, 2019-03-28 at 16:49 +0100, Thomas Goirand wrote:
[snip]
For me as an operator, the distinction between internal and public endpoints is helpful: it makes it easy to set up extended filtering or rate limiting for public services without affecting internal API calls, which in most deployments account for the majority of requests.

I'm not sure what "intelligent routing" is meant to be, but it sounds more complicated and unstable than the current solution.

Big +1 on dropping the admin endpoint though, now that keystone doesn't need it anymore.

Jens
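[One reason the split is cheap for clients to consume, whatever one thinks of it: the interface is chosen per request when the catalog is resolved. A minimal keystoneauth sketch follows; credentials and hostname are hypothetical. An internal service asks for the internal interface and never touches the rate-limited public side, while a user-facing client would pass "public".]

from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url="https://api.example.com:5000/v3",
                   username="nova", password="secret",
                   project_name="service",
                   user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)

# endpoint_filter tells keystoneauth which catalog entry to resolve;
# swapping "internal" for "public" is the only client-side difference.
resp = sess.get("/volumes",
                endpoint_filter={"service_type": "volumev3",
                                 "interface": "internal"})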
+1 to Jens' point. Internal endpoints seem pretty useful to me as well: you can put internal traffic on a completely separate physical interface (such as internal InfiniBand connections) while leaving the public endpoints rate-limited, and it's easy to configure and maintain. I'd guess that's the case for quite a few public clouds.

01.04.2019, 11:48, "Jens Harbott" <frickler@offenerstapel.de>:
[snip]
--
Kind Regards,
Dmitriy Rabotyagov
On 04/01/2019 04:43 AM, Jens Harbott wrote:
[snip]
I'm not sure what "intelligent routing" is meant to be, but it sounds more complicated and unstable than the current solution.
Maybe Thomas was referring to having Keystone just return a single set of endpoints depending on the source CIDR. Or maybe he is referring to performing rate limiting with a lower-level tool that was purpose-built for it -- something like iptables? I.e. ACCEPT all new connections from your private subnet/CIDR, and jump all new connections not in your private subnet to a RATE-LIMIT chain that applies rate-limiting thresholds. In other words, use a single HTTP endpoint and do the rate limiting in the Linux kernel instead of in higher-level applications.

Related: this is why having "quotas" for things like the number of metadata items in Nova was always a terrible "feature"; it abused the quota system as rate-limiting middleware when things like iptables or tc were the more appropriate solution.

Best,
-jay
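[To picture the first option Jay mentions, here is a toy sketch of returning a single endpoint per service keyed on source address. Nothing like this exists in keystone today; the networks and URLs are invented.]

import ipaddress

# Assumption: deployment-specific management networks (invented values).
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

CATALOG = {
    "volumev3": {"internal": "http://cinder.mgmt.example.com:8776/v3",
                 "public":   "https://api.example.com:8776/v3"},
}

def endpoint_for(service_type, client_addr):
    # Publish exactly one endpoint, chosen by where the request came
    # from, instead of exposing all interfaces to every client.
    addr = ipaddress.ip_address(client_addr)
    iface = "internal" if any(addr in net for net in INTERNAL_NETS) else "public"
    return CATALOG[service_type][iface]

print(endpoint_for("volumev3", "10.1.2.3"))     # internal URL
print(endpoint_for("volumev3", "203.0.113.7"))  # public URL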
On Mon, Apr 1, 2019 at 8:10 AM Jay Pipes <jaypipes@gmail.com> wrote:
[snip]
I'm personally a fan of a single endpoint; you can use your own methods to determine whether traffic is coming from a certain place. Anyway, across all of OpenStack's APIs there is zero assumption that the API you talk to will be different depending on the endpoint. I think Keystone had that assumption, but that was ripped out.

In our case, we deploy a single SSL-secured endpoint which all public users and internal services talk to. If our services are doing something that needs to be rate limited, we probably need to revisit that :)

+1 from me; however, I know some people use this type of thing for "network isolation". I think that concern can be delegated a layer lower (split DNS, hard-coding a different URL).
--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser@vexxhost.com
W. http://vexxhost.com
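[For what it's worth, the "hard-coding a different URL" escape hatch Mohammed mentions already exists on the client side: keystoneauth adapters accept an endpoint_override that bypasses the catalog lookup entirely. A hedged sketch, with hypothetical credentials and hostnames.]

from keystoneauth1 import adapter, session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url="https://api.example.com:5000/v3",
                   username="svc", password="secret",
                   project_name="service",
                   user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)

# With endpoint_override set, the catalog can publish a single public
# endpoint while an internal service still reaches cinder over a
# private name.
cinder = adapter.Adapter(session=sess, service_type="volumev3",
                         endpoint_override="http://cinder.mgmt.example.com:8776/v3")
resp = cinder.get("/volumes")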
On 4/1/19 12:31 PM, Mohammed Naser wrote:
[snip]
Anyway, across all of OpenStack's APIs there is zero assumption that the API you talk to will be different depending on the endpoint. I think Keystone had that assumption, but that was ripped out.
Correct, that assumption only existed with v2.0, which we ripped out in Queens. The unfortunate side effect of that design is that it made security solely an operator problem. It also didn't force developers to really think through those authoritative use cases when designing or writing APIs.

The v3 API was designed from the beginning to be a single application that implements a policy engine, so that we didn't need one endpoint for administrators and another for "everyone else".

FWIW, we have some documentation in keystone that is related to this thread (but more general than this specific case) [0].

[0] https://docs.openstack.org/keystone/latest/contributor/service-catalog.html
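[To illustrate the single-application-plus-policy design Lance describes, here is a minimal oslo.policy sketch; the rule name and credentials are purely illustrative, not keystone's actual defaults.]

from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(
    policy.RuleDefault("identity:list_users", "role:admin",
                       description="Illustrative only, not keystone's real rule"))

# One endpoint serves every caller; what differs is the authorization
# decision made per token, not the URL the caller used.
admin_creds = {"roles": ["admin"], "project_id": "abc123"}
member_creds = {"roles": ["member"], "project_id": "abc123"}
print(enforcer.enforce("identity:list_users", {}, admin_creds))   # True
print(enforcer.enforce("identity:list_users", {}, member_creds))  # False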
On Mon, Apr 1, 2019 at 10:38 AM Mohammed Naser <mnaser@vexxhost.com> wrote:
[snip]
Anyway, across all of OpenStack's APIs there is zero assumption that the API you talk to will be different depending on the endpoint. I think Keystone had that assumption, but that was ripped out.
But I know of at least one case where there is a knob to control whether some REST endpoints are available. While not explicitly tied to the endpoint type, we do provide baremetal operators some behavior knobs to limit the exposure of internal versus public traffic, since two different services have been launched with slightly different configurations.

Overall, I think the ability to populate only one endpoint and use it is great. The other endpoints should be optional, to allow more advanced configurations as needed. That will at least lower the initial confusion barrier for people trying to set up keystone for the first time.
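[On making the extra endpoints optional: if my understanding is right, keystoneauth can already express an ordered interface preference, so a catalog that registers only a public endpoint keeps working for clients that would prefer internal. A hedged sketch; credentials and hostname are hypothetical.]

from keystoneauth1 import adapter, session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url="https://api.example.com:5000/v3",
                   username="svc", password="secret",
                   project_name="service",
                   user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)

# interface accepts an ordered preference list; if no internal endpoint
# is registered in the catalog, the public one is used instead.
ironic = adapter.Adapter(session=sess, service_type="baremetal",
                         interface=["internal", "public"])
resp = ironic.get("/v1/nodes")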
On 01/04/2019 09:43, Jens Harbott wrote:
[snip]
For me as an operator, the distinction between internal and public endpoints is helpful: it makes it easy to set up extended filtering or rate limiting for public services without affecting internal API calls, which in most deployments account for the majority of requests.
If 2/ were implemented, a class of deployment where the cloud control plane is isolated from any public networks would become difficult to engineer. There is currently no requirement for connectivity between the control plane and public networks, so 2/ makes the required architecture more opinionated and reduces choices for the deployer.

In openstack-ansible (and no doubt other deployment projects) a wide range of topologies are supported, from simple networks where internal and public are the same subnet, through to full isolation. We have deployers who choose to sit at various points along that spectrum as their particular situation requires.

2/ seems to require deployers to NAT or route in some way between their control plane network and the public endpoint, which I expect most people do right now anyway. However, this is not universal, and there is a set of OSA users (myself included) with non-routable control planes who would like to keep them that way.
I'm not sure what "intelligent routing" is meant to be, but it sounds more complicated and unstable than the current solution.
I also don't understand what this would mean in practice.

Jon.
participants (8)
- Jay Pipes
- Jens Harbott
- Jonathan Rosser
- Julia Kreger
- Lance Bragstad
- Mohammed Naser
- Thomas Goirand
- Dmitriy Rabotyagov