[openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

Mike Dorman mdorman at godaddy.com
Fri Apr 28 14:25:50 UTC 2017


Ok.  That would solve part of the problem for us, but we’d still lose the redundancy.  We could do some HAProxy tricks to route around downed services, but that wouldn’t cover the case where the single physical box running HAProxy is itself down.
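
For illustration, the kind of HAProxy trick I mean (hostnames and health-check endpoint made up): active checks route traffic around a dead glance-api, but none of it helps if the box running HAProxy itself dies.

    # haproxy.cfg fragment (hypothetical): nova-compute points at this
    # listener; a backend that fails three consecutive checks is taken
    # out of rotation until it passes two.
    listen glance-internal
        bind 10.0.0.10:9292
        balance roundrobin
        option httpchk GET /healthcheck
        server glance01 glance01.internal:9292 check inter 2000 rise 2 fall 3
        server glance02 glance02.internal:9292 check inter 2000 rise 2 fall 3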

Is there some downside to allowing endpoint_override to remain a list?  That piece seems orthogonal to the referenced spec and IRC discussion, which are more about the service catalog.  I don’t think anyone in this thread is arguing against the idea that there should be just one endpoint URL in the catalog.  But there seem to be good reasons to allow multiple URLs in the override setting (at least for Glance in nova-compute).
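
To make that concrete, here is roughly what we would like to keep working (hostnames made up):

    # Today: [glance]api_servers takes a comma-separated list, and
    # nova-compute fails over to the next entry when one is unreachable.
    [glance]
    api_servers = http://glance01.internal:9292,http://glance02.internal:9292

    # The ask: endpoint_override accepting the same kind of list
    # (hypothetical -- as proposed, it takes a single URL).
    [glance]
    endpoint_override = http://glance01.internal:9292,http://glance02.internal:9292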

Thanks,
Mike



On 4/28/17, 8:05 AM, "Eric Fried" <openstack at fried.cc> wrote:

    Blair, Mike-
    
    	There will be an endpoint_override that will bypass the service
    catalog.  It still only takes one URL, though.
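
    For concreteness, a minimal keystoneauth1 sketch of that behavior
    (URLs and credentials hypothetical): with endpoint_override set on
    an Adapter, the service catalog is never consulted.

        from keystoneauth1 import adapter, session
        from keystoneauth1.identity import v3

        auth = v3.Password(auth_url='https://keystone.example:5000/v3',
                           username='nova', password='secret',
                           project_name='service',
                           user_domain_name='Default',
                           project_domain_name='Default')
        sess = session.Session(auth=auth)

        # endpoint_override short-circuits catalog lookup; this single
        # URL is the only one the adapter will ever use.
        glance = adapter.Adapter(session=sess, service_type='image',
                                 endpoint_override='http://glance01.internal:9292')
        resp = glance.get('/v2/images')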
    
    			Thanks,
    			Eric (efried)
    
    On 04/27/2017 11:50 PM, Blair Bethwaite wrote:
    > We at Nectar are in the same boat as Mike. Our use-case is a little
    > bit more about geo-distributed operations though - our Cells are in
    > different States around the country, so the local glance-apis are
    > particularly important for caching popular images close to the
    > nova-computes. We consider these glance-apis as part of the underlying
    > cloud infra rather than user-facing, so I think we'd prefer not to see
    > them in the service-catalog returned to users either... is there going
    > to be a (standard) way to hide them?
    > 
    > On 28 April 2017 at 09:15, Mike Dorman <mdorman at godaddy.com> wrote:
    >> We make extensive use of the [glance]/api_servers list.  We configure that on hypervisors to direct them to Glance servers which are more “local” network-wise (in order to reduce network traffic across security zones, firewalls, etc.).  This way nova-compute can fail over if one of the Glance servers in the list is down, without putting them behind a load balancer.  We also don’t run HTTPS for these “internal” Glance calls, to save the overhead when transferring images.
    >>
    >> End-user calls to Glance DO go through a real load balancer and then are distributed out to the Glance servers on the backend.  From the end-user’s perspective, I totally agree there should be one, and only one URL.
    >>
    >> However, we would be disappointed to see the change you’re suggesting implemented.  We would lose the redundancy we get now by providing a list.  Or we would have to shunt all the calls through the user-facing endpoint, which would generate a lot of extra traffic (in places where we don’t want it) for image transfers.
    >>
    >> Thanks,
    >> Mike
    >>
    >>
    >>
    >> On 4/27/17, 4:02 PM, "Matt Riedemann" <mriedemos at gmail.com> wrote:
    >>
    >>     On 4/27/2017 4:52 PM, Eric Fried wrote:
    >>     > Y'all-
    >>     >
    >>     >   TL;DR: Does glance ever really need/use multiple endpoint URLs?
    >>     >
    >>     >   I'm working on bp use-service-catalog-for-endpoints[1], which intends
    >>     > to deprecate disparate conf options in various groups, and centralize
    >>     > acquisition of service endpoint URLs.  The idea is to introduce
    >>     > nova.utils.get_service_url(group) -- note singular 'url'.
    >>     >
    >>     >   One affected conf option is [glance]api_servers[2], which currently
    >>     > accepts a *list* of endpoint URLs.  The new API will only ever return *one*.
    >>     >
    >>     >   Thus, as planned, this blueprint will have the side effect of
    >>     > deprecating support for multiple glance endpoint URLs in Pike, and
    >>     > removing said support in Queens.
    >>     >
    >>     >   Some have asserted that there should only ever be one endpoint URL for
    >>     > a given service_type/interface combo[3].  I'm fine with that - it
    >>     > simplifies things quite a bit for the bp impl - but wanted to make sure
    >>     > there were no loudly-dissenting opinions before we get too far down this
    >>     > path.
    >>     >
    >>     > [1]
    >>     > https://blueprints.launchpad.net/nova/+spec/use-service-catalog-for-endpoints
    >>     > [2]
    >>     > https://github.com/openstack/nova/blob/7e7bdb198ed6412273e22dea72e37a6371fce8bd/nova/conf/glance.py#L27-L37
    >>     > [3]
    >>     > http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-27.log.html#t2017-04-27T20:38:29
    >>     >
    >>     > Thanks,
    >>     > Eric Fried (efried)
    >>     >
    >>
    >>     +openstack-operators
    >>
    >>     --
    >>
    >>     Thanks,
    >>
    >>     Matt