[openstack-dev] [nova] [quantum] approaches for having quantum update nova's net info cache
Doug Hellmann
doug.hellmann at dreamhost.com
Wed Jun 5 16:10:28 UTC 2013
Re-reading that, I'm not sure that it is actually related. Can you give
some more detail?
Doug
On Wed, Jun 5, 2013 at 6:41 AM, Yaguang Tang <yaguang.tang at canonical.com>wrote:
> A bug about this issue has been reported; any work to fix it can be
> tracked here: https://bugs.launchpad.net/nova/+bug/1187295
>
>
> 2013/6/5 Mike Wilson <geekinutah at gmail.com>
>
>> That does seem like a nice middle ground for the cache staleness. Since
>> this behavior is needed for quantum, does it make sense to just have the
>> API a bit more latent only for the quantum case? That is essentially what
>> you would be doing; you just get to avoid doing the work of plumbing a new
>> API call on the quantum side.
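>>
>> To make that concrete, the sort of change I have in mind is sketched
>> below. This is not actual nova code -- the function and its arguments
>> are illustrative -- it just shows where the extra latency would (and
>> wouldn't) be paid:
>>
>>     # Rough sketch, not real nova code: only pay the live-query cost
>>     # when quantum is the network backend; nova-network keeps the
>>     # fast cached copy.
>>     def build_nw_info(context, instance, network_api, info_cache,
>>                       using_quantum):
>>         if using_quantum:
>>             # always ask quantum, so 'nova show' is never stale
>>             return network_api.get_instance_nw_info(context, instance)
>>         # nova-network path: serve the cached network_info as today
>>         return info_cache.network_info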
>>
>> -Mike
>>
>>
>> On Tue, Jun 4, 2013 at 1:29 PM, Vishvananda Ishaya <vishvananda at gmail.com
>> > wrote:
>>
>>>
>>> On Jun 4, 2013, at 11:48 AM, Chris Behrens <cbehrens at codestud.com>
>>> wrote:
>>>
>>>
>>> On Jun 4, 2013, at 11:11 AM, Vishvananda Ishaya <vishvananda at gmail.com>
>>> wrote:
>>>
>>>
>>> On Jun 4, 2013, at 8:38 AM, Mike Wilson <geekinutah at gmail.com> wrote:
>>>
>>> Doug,
>>>
>>> I'm glad you've brought this up. We had a similar issue to your own
>>> initially. I'm not sure our solution is the best one, but it is at least a
>>> springboard for discussion. As far as we could tell, instance_nw_info is
>>> populated and cached because it is costly to generate the nw_info
>>> structure. We were seeing 5 separate calls to the quantum API to generate
>>> that structure whenever get_instance_nw_info was called. We are on Folsom,
>>> but I don't think the behavior has changed. What we ended up doing was
>>> implementing an instance_nw_info API in quantum that basically does all 5
>>> calls in a single MySQL query and returns the structure that nova wants to
>>> put together. We also patched nova to always call the quantum API instead
>>> of fetching a cache from the db. This solved two problems:
>>>
>>> 1. No stale caches
>>> 2. Reduced the number of calls going to the quantum API by 80%
>>>
>>> This may not be the prettiest solution, but without it there is no way
>>> we would be able to scale quantum across a large number of nodes. Also, the
>>> stale cache problem really affects usability. Maybe there is a better
>>> reason for having that sitting in the database. I'm sure people more
>>> knowledgeable than myself can chime in on this.
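>>>
>>> To give a rough idea of the nova side of our patch, it looks something
>>> like the sketch below. quantumv2.get_client is nova's existing helper;
>>> 'list_instance_nw_info' is the custom extension we added to quantum
>>> (not an upstream call), and the builder name is our own:
>>>
>>>     # Sketch of the patched get_instance_nw_info (Folsom-era layout).
>>>     def get_instance_nw_info(self, context, instance, networks=None):
>>>         client = quantumv2.get_client(context)
>>>         # one round trip instead of the 5 separate quantum calls
>>>         # mentioned above; this endpoint is our custom extension
>>>         data = client.list_instance_nw_info(device_id=instance['uuid'])
>>>         # build the usual NetworkInfo model from the combined payload;
>>>         # '_nw_info_from_bulk' is our own helper
>>>         nw_info = self._nw_info_from_bulk(context, instance, data)
>>>         # note: no read of the instance info_cache here, so callers
>>>         # always get live data
>>>         return nw_info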
>>>
>>>
>>> So that would be a call to quantum on every single 'nova show'? I find
>>> that unacceptable. Anything that increases API latency is probably not the
>>> right answer.
>>>
>>>
>>> This seems like a reasonable approach to me, but I worry about 'nova list'
>>> when there are a large number of instances. Perhaps a bulk get_nw_info
>>> request with a list of instance_uuids would work?
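>>>
>>> Something like the following, for example (a sketch only -- the helper
>>> is hypothetical, and it assumes quantum's port filters accept a list of
>>> device_ids in a single request):
>>>
>>>     # Hypothetical bulk helper for 'nova list': one ports query for
>>>     # all instances instead of one (or several) per instance.
>>>     def get_nw_info_bulk(context, client, instances):
>>>         uuids = [inst['uuid'] for inst in instances]
>>>         # assumes list filters are passed through as repeated query
>>>         # parameters with OR semantics
>>>         ports = client.list_ports(device_id=uuids)['ports']
>>>         by_instance = {}
>>>         for port in ports:
>>>             by_instance.setdefault(port['device_id'], []).append(port)
>>>         # the caller would turn each port list into the usual
>>>         # nw_info model
>>>         return by_instance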
>>>
>>>
>>> I tried this before we got the info_cache in place (the single call for
>>> multiple uuids). It was horrid… but the calls were going across RPC, not
>>> HTTP, at the time. I still don't expect it to be much better with HTTP.
>>>
>>> What is the real problem here? Isn't it that nova is returning data via
>>> nova-api that it probably shouldn't? nova doesn't need to know floating
>>> IPs in order to configure an instance, right? (obviously this must be the
>>> case, since this thread is talking about updating them by talking to
>>> quantum directly.) If this is the case, then it doesn't seem like nova-api
>>> should expose them at all. They are completely an implementation detail in
>>> quantum. (Or maybe that's only true once nova-network goes away? :) Nova did
>>> need to configure *some* IP addresses on the instances, however, so those
>>> would be fine to expose. Nova obviously knows about them because it
>>> potentially had to inject the config.
>>>
>>> So, my general opinion on all of this is:
>>>
>>> 1) nova-api should only return network information that it needs to
>>> know in order to configure the instance. nova *has* to have correct data
>>> in order to configure an instance properly.
>>> 2) nova should not have to query quantum to return real-time status of
>>> an instance. If it has to do this, we're exposing the data in the wrong
>>> place. I heard that 'images' is being removed from nova-api in v3; I'm not
>>> sure whether that's true. But perhaps there's certain network information
>>> that should be removed from v3 as well if it doesn't belong there.
>>>
>>>
>>> I think this is all fine for v3, but the fact that v2 is returning stale
>>> floating IP data is bad, so we need some solution for v2 that improves
>>> this. We could provide the potentially stale data in list/detail and force
>>> an update on get. That might be a good middle ground.
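>>>
>>> Roughly this, in the v2 servers controller (a sketch; the refresh_cache
>>> flag is hypothetical, and self.network_api just stands in for however
>>> the controller would reach the network API -- the rest follows the
>>> existing show() shape):
>>>
>>>     # 'show' refreshes from quantum; index/detail keep using the
>>>     # cached info as they do today.
>>>     def show(self, req, id):
>>>         context = req.environ['nova.context']
>>>         instance = self.compute_api.get(context, id)
>>>         # hypothetical flag: re-fetch from quantum and rewrite the
>>>         # info_cache before the addresses view is built
>>>         self.network_api.get_instance_nw_info(context, instance,
>>>                                               refresh_cache=True)
>>>         return self._view_builder.show(req, instance)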
>>>
>>> Vish
>>>
>>>
>>> - Chris
>>>
>
>
> --
> Tang Yaguang
>
> Canonical Ltd. | www.ubuntu.com | www.canonical.com
> Mobile: +86 152 1094 6968
> gpg key: 0x187F664F
>
>