[openstack-dev] [neutron] Neutron scaling datapoints?
Joshua Harlow
harlowja at outlook.com
Mon Apr 13 03:14:51 UTC 2015
Joshua Harlow wrote:
> Kevin Benton wrote:
>>> Timestamps are just one way (and likely the most primitive); using
>>> redis (or memcache) key/value and expiry is another (letting
>>> memcache or redis expire entries using their own internal algorithms),
>>> and using zookeeper ephemeral nodes[1] is yet another... The point
>>> being that it's backend-specific and tooz supports varying backends.
>>
>> Very cool. Is the backend completely transparent so a deployer could
>> choose a service they are comfortable maintaining, or will that change
>> the properties WRT resiliency of state on node restarts,
>> partitions, etc?
>
> Of course... we tried to make it 'completely' transparent, but in
> reality certain backends (zookeeper, which uses a paxos-like algorithm,
> and redis with sentinel support...) are better (more resilient, more
> consistent, handle partitions/restarts better...) than others (memcached
> is, after all, just a distributed cache). This is just the nature of the
> game...
>
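To make that concrete: with tooz the deployer picks the backend purely
through a driver URL, so swapping, say, memcached for zookeeper is a
config change rather than a code change (only the resiliency properties
above change, not the calling code). A minimal sketch (the endpoint and
member id below are made up):

    from tooz import coordination

    # Any supported driver URL works here: 'zookeeper://...',
    # 'redis://...', 'memcached://...', etc.
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'neutron-agent-1')
    coordinator.start()
    # ... groups, heartbeats, locks ...
    coordinator.stop()
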
And for some more reading fun:
https://aphyr.com/posts/315-call-me-maybe-rabbitmq
https://aphyr.com/posts/291-call-me-maybe-zookeeper
https://aphyr.com/posts/283-call-me-maybe-redis
https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul
... (aphyr.com has a lot of these neat posts)...
>>
>> The Nova implementation of Tooz seemed pretty straightforward, although
>> it looked like it had pluggable drivers for service management already.
>> Before I dig into it much further I'll file a spec on the Neutron side
>> to see if I can get some other cores on board to do the review work if I
>> push a change to tooz.
>
> Sounds good to me.
>
>>
>>
>> On Sun, Apr 12, 2015 at 9:38 AM, Joshua Harlow
>> <harlowja at outlook.com> wrote:
>>
>> Kevin Benton wrote:
>>
>> So IIUC tooz would be handling the liveness detection for the agents.
>> It would be nice to get rid of that logic in Neutron and just
>> register callbacks for rescheduling the dead ones.
>>
>> Where does it store that state? Does it persist timestamps to the DB
>> like Neutron does? If so, how would that scale better? If not, who does
>> a given node ask to know if an agent is online or offline when making a
>> scheduling decision?
>>
>>
>> Timestamps are just one way (and likely the most primitive); using
>> redis (or memcache) key/value and expiry is another (letting
>> memcache or redis expire entries using their own internal algorithms),
>> and using zookeeper ephemeral nodes[1] is yet another... The point
>> being that it's backend-specific and tooz supports varying backends.
>>
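>> A rough illustration of the key/expiry variant (this is the raw
>> pattern with redis-py rather than tooz; the server location and key
>> names are made up): each agent keeps refreshing a per-agent key with
>> a TTL, and liveness falls out of the store's own expiry machinery:
>>
>>     import redis
>>
>>     r = redis.StrictRedis(host='127.0.0.1')
>>     AGENT_TTL = 30  # seconds without a refresh before "dead"
>>
>>     def heartbeat(agent_id):
>>         # Refresh the liveness key; redis drops it if the agent stops.
>>         r.set('agent-alive:%s' % agent_id, '1', ex=AGENT_TTL)
>>
>>     def is_alive(agent_id):
>>         # No timestamp math; expiry has already happened (or not).
>>         return bool(r.exists('agent-alive:%s' % agent_id))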
>>
>> However, before (what I assume is) the large code change to implement
>> tooz, I would like to quantify whether the heartbeats are actually a
>> bottleneck. When I was doing some profiling of them on the master branch
>> a few months ago, processing a heartbeat took an order of magnitude less
>> time (<50ms) than the 'sync routers' task of the l3 agent (~300ms). A
>> few query optimizations might buy us a lot more headroom before we have
>> to fall back to large refactors.
>>
>>
>> Sure, always good to avoid prematurely optimizing things...
>>
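>> For what it's worth, numbers like those are easy to re-check: wrap the
>> handler in a timer and compare. A sketch, where process_heartbeat is a
>> stand-in name for whatever handler is being measured:
>>
>>     import time
>>
>>     def process_heartbeat(agent_state):
>>         pass  # stand-in for the real handler under test
>>
>>     start = time.time()
>>     process_heartbeat({'agent': 'l3-agent-1'})
>>     print('heartbeat took %.1f ms' % ((time.time() - start) * 1000.0))
>>
>> Running cProfile around the same call would show where the time goes.
>>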
>> Although I think this is relevant for you anyway:
>>
>> https://review.openstack.org/#/c/138607/ (same thing/nearly the same
>> in nova)...
>>
>> https://review.openstack.org/#/c/172502/ (a WIP implementation of
>> the latter).
>>
>> [1]
>> https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#Ephemeral+Nodes
>>
>>
>>
>> Kevin Benton wrote:
>>
>> One of the most common is the heartbeat from each agent. However, I
>> don't think we can eliminate them, because they are used to determine
>> if the agents are still alive for scheduling purposes. Did you have
>> something else in mind to determine if an agent is alive?
>>
>>
>> Put each agent in a tooz[1] group; have each agent periodically
>> heartbeat[2]; have whoever needs to schedule read the active members of
>> that group (or use [3] to get notified via a callback); profit... (a
>> sketch of that flow follows the links below).
>>
>> Pick your favorite (supported) driver from:
>>
>> http://docs.openstack.org/developer/tooz/compatibility.html
>>
>> [1] http://docs.openstack.org/developer/tooz/compatibility.html#grouping
>> [2] https://github.com/openstack/tooz/blob/0.13.1/tooz/coordination.py#L315
>> [3] http://docs.openstack.org/developer/tooz/tutorial/group_membership.html#watching-group-changes
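>>
>> Roughly, that flow with the tooz group APIs looks like the sketch
>> below (the driver URL, member id, and group name are made up; error
>> handling and the periodic-heartbeat loop are elided):
>>
>>     from tooz import coordination
>>
>>     coordinator = coordination.get_coordinator(
>>         'zookeeper://127.0.0.1:2181', b'l3-agent-1')
>>     coordinator.start()
>>
>>     group = b'neutron-l3-agents'
>>     try:
>>         coordinator.create_group(group).get()
>>     except coordination.GroupAlreadyExist:
>>         pass
>>     coordinator.join_group(group).get()
>>
>>     # Agent side: call this periodically so the membership stays fresh.
>>     coordinator.heartbeat()
>>
>>     # Scheduler side: read live members instead of comparing DB
>>     # timestamps.
>>     alive = coordinator.get_members(group).get()
>>
>>     # Or register a callback to be told when an agent drops out.
>>     def on_leave(event):
>>         print('agent gone: %s' % event.member_id)
>>
>>     coordinator.watch_leave_group(group, on_leave)
>>     coordinator.run_watchers()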
>>
>> --
>> Kevin Benton
>>