[Openstack-operators] Service Catalog TNG urls

Dan Sneddon dsneddon at redhat.com
Thu Dec 3 20:39:44 UTC 2015


On 12/03/2015 12:01 PM, Clint Byrum wrote:
> Excerpts from Dan Sneddon's message of 2015-12-03 09:43:59 -0800:
>> On 12/03/2015 06:14 AM, Sean Dague wrote:
>>> For folks that don't know, we've got an effort under way to look at some
>>> of what's happened with the service catalog, how it's organically grown,
>>> and do some pruning and tuning to make sure it's going to support what
>>> we want to do with OpenStack for the next 5 years (wiki page to dive
>>> deeper here - https://wiki.openstack.org/wiki/ServiceCatalogTNG).
>>>
>>> One of the early Open Questions is about urls. Today there is a
>>> completely free form field to specify urls, and there are conventions
>>> about having publicURL, internalURL, adminURL. These are, however, only
>>> conventions.
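>>>
>>> For illustration, a single entry in a Keystone v2 catalog looks
>>> roughly like this (hostnames and addresses made up); note that the
>>> three URL keys are populated purely by convention:
>>>
>>>     # Sketch of one serviceCatalog entry from a v2 token reply.
>>>     # Nothing in the schema enforces which keys appear or what
>>>     # they mean; that is exactly the problem.
>>>     catalog_entry = {
>>>         "type": "image",
>>>         "name": "glance",
>>>         "endpoints": [{
>>>             "region": "RegionOne",
>>>             "publicURL": "https://cloud.example.com:9292",
>>>             "internalURL": "http://192.0.2.10:9292",
>>>             "adminURL": "http://192.0.2.10:9292",
>>>         }],
>>>     }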
>>>
>>> The only project that's ever really used adminURL has been Keystone, so
>>> that's something we feel we can phase out in new representations.
>>>
>>> The real question / concern is around public vs. internal. And something
>>> we'd love feedback from people on.
>>>
>>> When this was brought up in Tokyo the answer we got was that internal
>>> URL was important because:
>>>
>>> * users trusted it to mean "I won't get charged for bandwidth"
>>> * it is often http instead of https, which provides a 20% performance
>>> gain when transferring large amounts of data (e.g., Glance images)
>>>
>>> The question is: how hard would it be for sites to be configured so
>>> that internal routing is used whenever possible? Or is this a concept
>>> we need to formalize, so that user applications must always decide
>>> which interface they should access?
>>>
>>>     -Sean
>>>
>>
>> I think the real question is whether we need to bind APIs to multiple
>> IP addresses, or whether we need to use a proxy to provide external
>> access to a single API endpoint. It seems unacceptable to me to have
>> the API only hosted externally, then use routing tricks for the
>> services to access the APIs.
>>
> 
> I'm not sure I agree that using the lowest-cost route is a "trick".
> 
>> While I am not an operator myself, I design OpenStack networks for
>> large (and very large) operators on a regular basis. I can tell you
>> that there is a strong desire from the customers and partners I deal
>> with for separate public/internal endpoints for the following reasons:
>>
>> Performance:
>> There is a LOT of API traffic in a busy OpenStack deployment. Having
>> the internal OpenStack processes use the Internal API via HTTP is a
>> performance advantage. I strongly recommend a separate Internal API
>> VLAN that is non-routable, to ensure that nobody accidentally sends
>> traffic unencrypted outside that segment.
>>
> 
> I'd be interested in some real metrics on the performance advantage.
> It's pretty important to weigh that vs. the loss of security inside
> a network. Because this argument leads to the "hard shell, squishy
> center" security model, and that leads to rapid cascading failure
> (<yoda>leads.. to..  suffering..</yoda>). I wonder how much of that
> performance loss would be regained by using persistent sessions.
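>
> As a rough sketch of what I mean by persistent sessions (using the
> requests library; the endpoint URL is made up): one session reuses a
> single TCP/TLS connection across calls, so the handshake cost is paid
> once rather than per request:
>
>     import requests
>
>     # A Session keeps the underlying connection (and TLS state) alive
>     # between requests, skipping the per-request handshake overhead.
>     session = requests.Session()
>     for image_id in ("abc", "def", "ghi"):
>         resp = session.get(
>             "https://glance.example.com:9292/v2/images/%s" % image_id)
>         print(resp.status_code)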
> 
> Anyway, this benefit can be kept by specifying schemeless URLs: simply
> configure your internal services to default to http, while the default
> for schemeless URLs everywhere else remains https.
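>
> A minimal sketch of that defaulting logic (the function and URL are
> hypothetical):
>
>     try:
>         from urllib.parse import urlsplit, urlunsplit  # python 3
>     except ImportError:
>         from urlparse import urlsplit, urlunsplit  # python 2
>
>     def resolve(url, default_scheme="https"):
>         """Fill in the scheme on a schemeless catalog URL."""
>         parts = urlsplit(url)
>         if parts.scheme:
>             return url
>         return urlunsplit((default_scheme,) + tuple(parts)[1:])
>
>     # Public clients keep the https default; internal services would
>     # be configured with default_scheme="http" instead.
>     print(resolve("//glance.example.com:9292/v2"))
>     # -> https://glance.example.com:9292/v2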
> 
>> Security/Auditing/Accounting:
>> Having a separate internal API (for the OpenStack services) from the
>> Public API (for humans and remote automation), allows the operator to
>> apply a strict firewall in front of the public API to restrict access
>> from outside the cloud. Such a device may also help deflect/absorb a
>> DoS attack against the API. This firewall can be an encryption
>> endpoint, so the traffic can be unencrypted and examined or logged. I
>> wouldn't want the extra latency of such a firewall in front of all my
>> OpenStack internal service calls.
>>
> 
> This one is rough. One way to do it is to simply host the firewall in
> a DMZ segment, setting up your routes for that IP to go through the
> firewall. This means multi-homing the real load balancer/app servers to
> have an IP that the firewall can proxy to directly.
> 
> But I also have to point out that not making your internal servers pass
> through this is another example of a squishy center, trading security
> for performance.
> 
>> Routing:
>> If there is only one API, then it has to be externally accessible. This
>> means that a node without an external connection (like a Ceph node, for
>> instance) would have to either have its API traffic routed, or it would
>> have to be placed on an external segment. Either choice is not optimal.
>> Routers can be a chokepoint. Ceph nodes should be back-end only.
>>
>> Uniform connection path:
>> If there is only one API, and it is externally accessible, then it is
>> almost certainly on a different network segment than the database, AMQP
>> bus, redis (if applicable), etc. If there is an Internal API it can
>> share a segment with these other services while the Public API is on an
>> external segment.
>>
> 
> It seems a little contrary to me that it's preferable to have a
> software-specific solution to security (internal/external URL in the
> catalog) vs. a decent firewall that doesn't let any traffic through to
> your non-public nodes, *or* well-performing internal routers. Or even
> an internal DNS view override. The latter three all seem like simple
> solutions, that allow OpenStack and its clients to be simpler.
> 
> The only reason I can see to perpetuate this version of security in
> networking is IPv4 address space starvation. And that just isn't a
> reason, because you can give all of your nodes IPv6 addresses, and your
> API endpoint an AAAA, and be done with that. Remember that we're talking
> about "the next 5 years".
> 
>> Conclusion:
>> If there were only one API, then I would personally bind the API to a
>> local non-routed VLAN, then use HAProxy to reflect those URLs
>> externally. This makes the APIs themselves simpler, but still provides
>> the advantages of having multiple endpoints. This introduces a
>> dependency on a proxy, but I've never seen a production deployment that
>> didn't use a load-balancing proxy. In this case, the Keystone endpoint
>> list would show the internal API URLs, but they would not be reachable
>> from outside.
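>>
>> As a concrete sketch of the binding half of that (addresses made up;
>> the HAProxy frontend on the public VIP is not shown):
>>
>>     from wsgiref.simple_server import make_server
>>
>>     def app(environ, start_response):
>>         start_response("200 OK", [("Content-Type", "text/plain")])
>>         return [b"hello from the internal-only API\n"]
>>
>>     # Binding only to the non-routed internal VLAN address means the
>>     # service is unreachable from outside except via the proxy.
>>     make_server("172.16.0.10", 9292, app).serve_forever()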
>>
> 
> I think we agree on "this is how we'd do it if we had to", but I wrote
> all of the above because I don't really understand what you're saying.
> If the catalog showed only internal APIs, how would external clients
> reach you at all?
> 

In my example, the center is squishy, but it is also non-routed to
prevent remote access. There is more than one way to do this, so
consider mine an example of one approach rather than a suggestion that
one size fits all.

As for the confusion about what I was saying, I think I was not using
specific enough terminology.

What I meant to say was that if Internal URLs are listed alongside
Public URLs in an endpoint, it's not a problem if the Internal URLs are
on a non-reachable, non-routed network. That prevents someone from
accidentally using an HTTP connection instead of an HTTPS one.
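
As a small sketch of what I mean from the client side (catalog contents
made up): the Internal URL is plain http but only resolvable and
reachable on the internal segment, so picking it from outside simply
fails rather than silently going unencrypted:

    # Hypothetical endpoint record as it might appear in the catalog.
    endpoint = {
        "publicURL": "https://cloud.example.com:9292",  # routed, TLS
        "internalURL": "http://172.16.0.10:9292",       # non-routed VLAN
    }

    def pick_url(endpoint, interface="publicURL"):
        """Pick one interface, falling back to the public one."""
        return endpoint.get(interface, endpoint["publicURL"])

    # Services on the internal VLAN ask for internalURL; everything
    # else gets the public, always-https endpoint.
    print(pick_url(endpoint, "internalURL"))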

-- 
Dan Sneddon         |  Principal OpenStack Engineer
dsneddon at redhat.com |  redhat.com/openstack
650.254.4025        |  dsneddon:irc   @dxs:twitter


