[Openstack-operators] [openstack-dev] [nova] [placement] Which service is using port 8778?

Mohammed Naser mnaser at vexxhost.com
Tue Jan 10 22:20:12 UTC 2017


We use virtual hosts: HAProxy runs on our VIP on ports 80 and 443 (SSL), with keepalived ensuring it is always available, and we use `use_backend` rules to route each request to the appropriate backend. More information here:

http://blog.haproxy.com/2015/01/26/web-application-name-to-backend-mapping-in-haproxy/
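The mapping works roughly like this (a minimal sketch based on the approach in that post; the hostnames, backend names, and addresses below are illustrative, not our actual production config):

```
# /etc/haproxy/hosts.map -- maps Host header to a backend name
# compute-ca-ymq-1.vexxhost.net     be_nova_api
# identity-ca-ymq-1.vexxhost.net    be_keystone

frontend public_api
    bind 203.0.113.10:80
    bind 203.0.113.10:443 ssl crt /etc/haproxy/certs/vexxhost.pem
    # look up the backend from the Host header; fall back to be_default
    use_backend %[req.hdr(host),lower,map(/etc/haproxy/hosts.map,be_default)]

backend be_nova_api
    server nova1 10.0.0.11:8774 check

backend be_keystone
    server keystone1 10.0.0.12:5000 check

backend be_default
    server horizon1 10.0.0.13:80 check
```

With this in place, every service listens on its own internal port behind the load balancer, but clients only ever see 80/443.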

It keeps our catalog nice and neat. We use a `<service>-<region>.vexxhost.net` internal naming convention, so the catalog looks clean and API calls don't get blocked by customer-side firewalls (which might block the strange ports).

+----------------------------------+----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------+
| ID                               | Region   | Service Name | Service Type    | Enabled | Interface | URL                                                              |
+----------------------------------+----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------+
| 01fdd8e07ca74c9daf80a8b66dcc8bf6 | ca-ymq-1 | cinderv2     | volumev2        | True    | internal  | https://block-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s     |
| 09b4a971659643528875f70d93ef6846 | ca-ymq-1 | manila       | share           | True    | internal  | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s      |
| 203fd4e466b44569aa9ab8c78ef55bad | ca-ymq-1 | heat         | orchestration   | True    | admin     | https://orchestration-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s     |
| 20b24181722b49a3983d17d42147a22c | ca-ymq-1 | swift        | object-store    | True    | admin     | https://object-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s    |
| 2f582f99db974766af7548dda56c3b50 | ca-ymq-1 | nova         | compute         | True    | internal  | https://compute-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s           |
| 37860b492dd947daa738f461b9084d2a | ca-ymq-1 | neutron      | network         | True    | admin     | https://network-ca-ymq-1.vexxhost.net                            |
| 4d38fa91197e4712a2f2d3f89fcd7dad | ca-ymq-1 | nova         | compute         | True    | public    | https://compute-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s           |
| 58894a7156b848d3baa0382ed465f3c2 | ca-ymq-1 | manilav2     | sharev2         | True    | internal  | https://file-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s      |
| 5ebc8fa90c3c46d69d3fa8a03688e452 | ca-ymq-1 | manila       | share           | True    | public    | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s      |
| 769a4de22d864c3bb2beefe775e3cb9f | ca-ymq-1 | manila       | share           | True    | admin     | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s      |
| 79fa33ff42ec45118ae8b36789fcb8ae | ca-ymq-1 | swift        | object-store    | True    | public    | https://object-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s    |
| 7a095734e4984cc7b8ac581aa6131f23 | ca-ymq-1 | neutron      | network         | True    | public    | https://network-ca-ymq-1.vexxhost.net                            |
| 7f8b519dfb494cef811b164f5eed0360 | ca-ymq-1 | sahara       | data-processing | True    | internal  | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s |
| 8842c03d2c51449ebf9ff36778cf17c1 | ca-ymq-1 | glance       | image           | True    | public    | https://image-ca-ymq-1.vexxhost.net                              |
| 8df18f47fcdc4c348d521d4724a5b7ac | ca-ymq-1 | keystone     | identity        | True    | admin     | https://identity-ca-ymq-1.vexxhost.net/v2.0                      |
| 96357df3d6694477b0ad17fef6091210 | ca-ymq-1 | neutron      | network         | True    | internal  | https://network-ca-ymq-1.vexxhost.net                            |
| a25efaf48347441a8d36ce302f31d527 | ca-ymq-1 | cinderv2     | volumev2        | True    | public    | https://block-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s     |
| b073b767f10d44f895d9d14fbc3e3d6b | ca-ymq-1 | swift        | object-store    | True    | internal  | https://object-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s    |
| b132fe7bcf98440f8e72a142df76292d | ca-ymq-1 | sahara       | data-processing | True    | admin     | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s |
| b736338e3c94402a9b21b32b3d0bf1e5 | ca-ymq-1 | sahara       | data-processing | True    | public    | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s |
| c0dd9f5f8db248b093d6735b167e1af6 | ca-ymq-1 | keystone     | identity        | True    | public    | https://auth.vexxhost.net/v2.0                                   |
| c8505f07c349413aa7cd61d42337af99 | ca-ymq-1 | keystone     | identity        | True    | internal  | https://auth.vexxhost.net/v2.0                                   |
| da3d087e0c724338ba12c9a1168ef80c | ca-ymq-1 | heat         | orchestration   | True    | internal  | https://orchestration-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s     |
| dd203f9e09da4eba9effb9119edc9eb2 | ca-ymq-1 | manilav2     | sharev2         | True    | admin     | https://file-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s      |
| e8e1eb90f7394f5999aec5c8f8c75c88 | ca-ymq-1 | cinder       | volume          | True    | public    | https://block-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s     |
| f0a311cb5dbf4ae788670107a3433ac2 | ca-ymq-1 | heat         | orchestration   | True    | public    | https://orchestration-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s     |
| f33c8ab0445a422b90f93b1ec092a7d0 | ca-ymq-1 | manilav2     | sharev2         | True    | public    | https://file-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s      |
+----------------------------------+----------+--------------+-----------------+---------+-----------+------------------------------------------------------------------+

I'd be more than happy to give my comments, but I think this is the best way. Prefixes can work too and would make things easy during development, but in a production deployment I would rather not deal with something like that. Also, all of those hostnames are CNAME records pointing to `api-<region>.vexxhost.net`, which makes it easy to move things over if needed. I guess the only problem is the DNS setup overhead.
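As an illustration, that CNAME layout might look like this in a zone file (record names follow the convention above; the VIP address is a placeholder):

```
; vexxhost.net zone fragment (sketch)
api-ca-ymq-1             IN A      203.0.113.10   ; the HAProxy VIP
compute-ca-ymq-1         IN CNAME  api-ca-ymq-1
identity-ca-ymq-1        IN CNAME  api-ca-ymq-1
block-storage-ca-ymq-1   IN CNAME  api-ca-ymq-1
object-storage-ca-ymq-1  IN CNAME  api-ca-ymq-1
```

Repointing the single A record is then all it takes to move every service endpoint at once.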


> On Jan 10, 2017, at 4:56 PM, Emilien Macchi <emilien at redhat.com> wrote:
> 
> On Tue, Jan 10, 2017 at 6:00 AM, Andy McCrae <andy.mccrae at gmail.com> wrote:
>> Sorry to resurrect a few weeks old thread, but I had a few questions.
>> 
>>> 
>>> Yes, we should stop with the magic ports. Part of the reason of
>>> switching over to apache was to alleviate all of that.
>>> 
>>>        -Sean
>> 
>> 
>> Is this for devstack specifically?
>> I can see the motivation for Devstack, since it reduces the concern for
>> managing port allocations.
>> 
>> Is the idea that we move away from ports and everything is on 80 with a
>> VHost to differentiate between services/endpoints?
>> 
>> It seems to me that it would still be good to have a "designated" (and
>> unique - or as unique as possible at least within OpenStack) port for
>> services. We may not have all services on the same hosts, for example, using
>> a single VIP for load balancing. The issue then is that it becomes hard to
>> differentiate the LB pool based on the request.
>> I.e. How would i differentiate between Horizon requests and requests for any
>> other service on port 80, the VIP is the same, but the backends may be
>> completely different (so all requests aren't handled by the same Apache
>> server).
> 
> Right, it causes conflicts when running architectures with HAProxy &
> API co-located.
> In the case of HAProxy, you might need to add ACLs, which adds a layer
> of complexity to current deployments that might not exist in some
> cases yet.
> 
> In TripleO, we decided to pick a port (8778) and deploy Placement API
> on this port, so it's consistent with existing services already
> deployed.
> 
> Regarding Sean's comment about switching to Apache, I agree it
> simplifies a lot of things, but I don't remember us deciding to pick
> Apache because of the magic-port issue. As I recall, it was also for
> the SSL configuration, which would be standard across all services.
> 
> Any feedback on how our operators do this would be very welcome
> (adding the operators mailing list), so we can make sure we're taking
> the most realistic approach here.
> So the question would be:
> 
> When deploying OpenStack APIs under WSGI, do you pick magic port (ex:
> 8774 for Nova Compute API) or do you use 80/443 + vhost path?
> 
> Thanks,
> 
>> Assuming, in that case, having a designated port is the only way (and if it
>> isn't I'd love to discuss alternate, and simpler, methods of achieving this)
>> it then seems that assigning a dedicated port for services in Devstack would
>> make sense - it would ensure that there is no overlap, and in a way the
>> error received when the ports overlapped is a genuine issue that would need
>> to be addressed. Although if that is the case, perhaps there is a better way
>> to manage that.
>> 
>> Essentially it seems better to handle port conflicts (within the OpenStack
>> ecosystem, at least) at source rather than pass that on to the deployer to
>> randomly pick ports and avoid conflicts.
>> 
>> Andy
>> 
>> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> -- 
> Emilien Macchi
> 
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
