[openstack-dev] [Neutron][LBaaS][VPNaaS][FWaaS] Dealing with logical configurations

Salvatore Orlando sorlando at nicira.com
Wed Jul 31 08:59:31 UTC 2013


More stuff from me.

Salvatore

On 31 July 2013 10:36, Eugene Nikanorov <enikanorov at mirantis.com> wrote:

> > I don't think this is the right time to get into performance and scale
> > discussions; on the implementation side, it would be good for me to
> > understand how neutron will be able to undeploy resources - for which it
> > should use a driver which unfortunately has been removed. Are we caching
> > drivers somewhere - or planning to store them back in the database?
> Undeployment is a manual operation that the admin must perform *before*
> restarting the neutron server with the provider removed.
> I think it's going to be an additional admin-only operation.
> The Noop driver is not capable of anything backend-specific.
>

So disassociating instances from service providers (and therefore
de-implementing them) is an admin process, which probably makes sense.
I guess if an admin gets the workflow wrong and removes the provider
before de-implementing resources, then resources associated with a deleted
provider should go into an ERROR state as soon as they're used; admins can
always recover by re-adding the provider and de-implementing the resources.
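A rough sketch of that recovery path, using entirely hypothetical names (`ResourceManager`, `use`, `readd_provider` - none of this is actual Neutron code): a resource whose provider has been removed goes to ERROR on first use, and becomes usable again once the admin re-adds the provider.

```python
# Hypothetical sketch: resources referencing a removed provider go to
# ERROR on first use; re-adding the provider makes them usable again.

class ResourceManager:
    def __init__(self, providers):
        # providers: mapping of provider name -> driver object
        self.providers = dict(providers)
        self.status = {}  # resource id -> status string

    def use(self, resource_id, provider_name):
        if provider_name not in self.providers:
            # Provider was removed before the resource was de-implemented:
            # flag the resource instead of failing the whole service.
            self.status[resource_id] = 'ERROR'
            return self.status[resource_id]
        self.status[resource_id] = 'ACTIVE'
        return self.status[resource_id]

    def readd_provider(self, name, driver=None):
        # Admin recovery path: re-add the provider, then de-implement.
        self.providers[name] = driver


mgr = ResourceManager({'haproxy': object()})
print(mgr.use('pool-1', 'haproxy'))    # ACTIVE
print(mgr.use('pool-2', 'netscaler'))  # ERROR - provider was removed
mgr.readd_provider('netscaler')
print(mgr.use('pool-2', 'netscaler'))  # ACTIVE again after recovery
```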

Beyond being a placeholder for a service instance without a provider, what
would be the role of this noop driver?


>
> Thanks,
> Eugene.
>
>
> On Wed, Jul 31, 2013 at 11:47 AM, Salvatore Orlando <sorlando at nicira.com> wrote:
>
>> More comments on top of your comments!
>> And one more question: what are we going to do with 'orphaned' logical
>> instances? Can they be associated with another provider?
>>
>> Salvatore
>>
>>
>> On 31 July 2013 09:23, Eugene Nikanorov <enikanorov at mirantis.com> wrote:
>>
>>> My comments inline
>>>
>>>
>>> On Wed, Jul 31, 2013 at 1:53 AM, Salvatore Orlando <sorlando at nicira.com> wrote:
>>>
>>>>
>>>>
>>>> On 30 July 2013 23:24, Eugene Nikanorov <enikanorov at mirantis.com> wrote:
>>>>
>>>>> 2) Resources allocated by the provider must be cleaned up - that is
>>>>> done before the neutron server is restarted with the new configuration.
>>>>> I think it's a valid workflow.
>>>>>
>>>>
>>>> What kind of operations would the cleanup involve?
>>>>
>>>
>>> To me it basically means that resources need to be undeployed from the
>>> corresponding devices/hosts.
>>> To be more specific: if, for example, a cloud admin decides to remove the
>>> lbaas reference provider for whatever reason, then they would expect the
>>> haproxy processes to be shut down on all hosts.
>>>
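As a sketch of what that cleanup step could amount to (hypothetical names throughout; a real driver would stop haproxy processes on the hosts rather than flip a flag in a dict):

```python
# Hypothetical sketch of the admin-only cleanup step: before the provider
# is removed from the config, every resource it deployed is undeployed
# from its backend (e.g. an haproxy process shut down on each host).

def undeploy_provider(provider, resources):
    """Undeploy every resource bound to `provider`; return what was cleaned."""
    cleaned = []
    for res in resources:
        if res['provider'] == provider:
            # In a real driver this would stop the haproxy process /
            # remove the backend config on the host; here we just record it.
            res['deployed'] = False
            cleaned.append(res['id'])
    return cleaned


pools = [
    {'id': 'pool-1', 'provider': 'haproxy', 'deployed': True},
    {'id': 'pool-2', 'provider': 'netscaler', 'deployed': True},
]
print(undeploy_provider('haproxy', pools))  # ['pool-1']
```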
>>
>> If I understand correctly, this would mean that all the resources (in your
>> case service pools) will have to be scanned at startup to see if their
>> provider has been removed.
>> I don't think this is the right time to get into performance and scale
>> discussions; on the implementation side, it would be good for me to
>> understand how neutron will be able to undeploy resources - for which it
>> should use a driver which, unfortunately, has been removed. Are we caching
>> drivers somewhere - or planning to store them back in the database?
>>
>>>>> Also, I'd be against such a check, which would prevent neutron-server
>>>>> from starting if some resources reference unknown providers.
>>>>> Providers may be removed for various reasons, and it would be too
>>>>> disruptive to bring down the whole networking service because of that.
>>>>> Another option may be to take such a decision on a per-service basis. At
>>>>> least I don't think having orphaned loadbalancer pools should prevent
>>>>> neutron from starting.
>>>>>
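The non-fatal, per-service check suggested above could look roughly like this (a hypothetical sketch, not actual Neutron code): unknown providers are logged and reported, and the server keeps starting.

```python
# Hypothetical sketch of a non-fatal startup check: resources pointing at
# unknown providers are reported per service, but the server still starts.

import logging

logging.basicConfig()
LOG = logging.getLogger('provider-check')

def check_providers(resources, known_providers):
    """Return ids of orphaned resources instead of refusing to start."""
    orphaned = [r['id'] for r in resources
                if r['provider'] not in known_providers]
    for rid in orphaned:
        LOG.warning('resource %s references a removed provider', rid)
    return orphaned


pools = [{'id': 'pool-1', 'provider': 'haproxy'},
         {'id': 'pool-2', 'provider': 'gone'}]
print(check_providers(pools, {'haproxy'}))  # ['pool-2']
```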
>>>>
>>>> It might be ok not to prevent the service from starting.
>>>> Perhaps I am misunderstanding the workflow, which I barely know since
>>>> the original conversation happened outside of this thread.
>>>> From the perspective of the API user, I perceive the effect would be
>>>> that of a service instance which exists as a logical entity, but has
>>>> actually been de-implemented because its provider was removed.
>>>> This is probably not very different from the case in which the host
>>>> where a port is deployed goes down - the status for the port will be DOWN.
>>>> I hope that the status for the corresponding services will go DOWN as well
>>>> - otherwise this might be a bit confusing to API users.
>>>>
>>>
>>> I think having a resource in the DOWN state is different from having a
>>> resource with no provider. Why? Because from the API perspective, when you
>>> update a resource from the DOWN to the UP state, you expect some deployment
>>> to happen, and that would not be the case if no provider is associated with
>>> the resource.
>>>
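That distinction could be sketched as follows (hypothetical names only - `set_admin_state_up`, `FakeDriver` - not real Neutron code): the DOWN-to-UP transition triggers a backend (re)deployment, which is only possible while a provider is still associated.

```python
# Hypothetical sketch: flipping a DOWN resource back to UP triggers a
# (re)deployment, which is only meaningful when a provider is still
# associated with the resource.

def set_admin_state_up(resource, providers):
    provider = providers.get(resource['provider'])
    if provider is None:
        # No provider: nothing can be deployed, so the API should not
        # pretend the transition succeeded.
        resource['status'] = 'ERROR'
        return resource['status']
    provider.deploy(resource)  # backend (re)deployment happens here
    resource['status'] = 'ACTIVE'
    return resource['status']


class FakeDriver:
    def deploy(self, resource):
        resource['deployed'] = True


res = {'id': 'pool-1', 'provider': 'haproxy', 'status': 'DOWN'}
print(set_admin_state_up(res, {'haproxy': FakeDriver()}))  # ACTIVE
orphan = {'id': 'pool-2', 'provider': 'gone', 'status': 'DOWN'}
print(set_admin_state_up(orphan, {}))  # ERROR
```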
>>>
>> I am referring to the operational status. I used the term
>> 'administratively down' incorrectly.
>>
>>>
>>>>
>>>>>
>>>>> The need for the Noop driver is a direct consequence of the case above.
>>>>> If we remove requirement (1) and just delete resources which reference
>>>>> the removed provider, then we will not need the Noop driver or
>>>>> unassociated resources.
>>>>>
>>>>
>>>> While a 'noop' driver - assuming this would be the right term - can be
>>>> used to describe service instances without a provider, I wonder if the
>>>> best way of describing an instance without a provider is literally a
>>>> service instance without a provider. Also - correct me if I'm wrong here -
>>>> if one assumes that such 'orphaned' instances should be in status 'DOWN'
>>>> or 'ERROR', then it probably does not matter which provider (if any) the
>>>> service instance is associated with.
>>>>
>>>
>>> In fact, 'Noop' is a kind of 'internal' description, not visible to
>>> users. It is just a technical need to have a driver which will finish
>>> removal operations (I'm speaking about the lbaas plugin now, which changes
>>> the state of objects to PENDING_DELETE and lets drivers do the actual db
>>> removal) and will not do any additional work.
>>> Regarding DOWN and ERROR - see my comment above.
>>>
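A minimal sketch of the internal Noop driver described above (hypothetical classes - `FakeDb`, `NoopDriver.delete_pool` - not the actual lbaas plugin interface): it touches no backend, it only completes the PENDING_DELETE state machine by removing the db record.

```python
# Hypothetical sketch: the Noop driver does no backend work, it only
# finishes the PENDING_DELETE flow by removing the db record, mirroring
# how lbaas drivers are expected to complete deletions.

class FakeDb:
    def __init__(self, records):
        self.records = dict(records)

    def delete(self, resource_id):
        self.records.pop(resource_id, None)


class NoopDriver:
    """Placeholder driver for resources whose real provider is gone."""
    def __init__(self, db):
        self.db = db

    def delete_pool(self, pool):
        # No devices to touch: go straight from PENDING_DELETE to removal.
        assert pool['status'] == 'PENDING_DELETE'
        self.db.delete(pool['id'])


db = FakeDb({'pool-1': {'id': 'pool-1', 'status': 'PENDING_DELETE'}})
NoopDriver(db).delete_pool(db.records['pool-1'])
print('pool-1' in db.records)  # False
```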
>>
>> So the Noop driver will be able to undeploy the service instance?
>>
>>
>>>
>>> Thanks,
>>> Eugene.
>>>
>>> _______________________________________________
>>> OpenStack-dev mailing list
>>> OpenStack-dev at lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev