<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Hi,<br>
<br>
I'll try to address the question about Proxy process.<br>
<br>
AFAIK there is no way yet in zmq to bind more than one socket to the
same port (e.g. tcp://*:9501).<br>
<br>
Apparently we can:<br>
<br>
socket1.bind('tcp://node1:9501')<br>
socket2.bind('tcp://node2:9501')<br>
<br>
but we cannot:<br>
<br>
socket1.bind('tcp://*:9501')<br>
socket2.bind('tcp://*:9501')<br>
<br>
So if we want a definite, well-known port assigned to the driver, we
need to use a proxy which receives on a single socket and redirects
to a number of sockets.<br>
<br>
It is normal practice in zmq to do so. There are even some helpers
implemented in the library, the so-called 'devices'.<br>
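<br>
As an illustration, here is a minimal proxy sketch in pyzmq (the
socket types and endpoint names are only assumptions for the example,
not the actual driver code):<br>
<pre>
import zmq

ctx = zmq.Context()

# Front side: the single well-known port that clients connect to.
frontend = ctx.socket(zmq.ROUTER)
frontend.bind('tcp://*:9501')

# Back side: local services connect here over IPC.
backend = ctx.socket(zmq.DEALER)
backend.bind('ipc:///tmp/zmq-proxy-backend')

# Built-in helper: blocks and shuttles messages in both directions.
zmq.proxy(frontend, backend)
</pre>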
<br>
Here the performance question is relevant. According to the ZeroMQ
documentation [1], the basic heuristic is to allocate 1 I/O thread in
the context for every gigabit per second of data that will be sent
and received (aggregated).<br>
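<br>
In pyzmq that heuristic translates into something like the following
(the thread count below is just an example value):<br>
<pre>
import zmq

# Rule of thumb from the FAQ: ~1 I/O thread per Gbit/s of aggregated
# traffic; 2 is only an example value here.
ctx = zmq.Context(io_threads=2)
</pre>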
<br>
The other way is to 'bind_to_random_port', but then we need some
mechanism to notify the client about the port we are listening on,
so it is a more complicated solution.<br>
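<br>
A sketch of that alternative with pyzmq (how the chosen port is then
advertised to clients is left open and would need some discovery
mechanism):<br>
<pre>
import zmq

ctx = zmq.Context()
socket = ctx.socket(zmq.ROUTER)  # socket type is illustrative

# Let pyzmq pick a free port and return it.
port = socket.bind_to_random_port('tcp://*')

# 'port' now has to be published somehow (e.g. via a matchmaker or
# registry) so that clients know where to connect -- this is the
# extra complexity.
print('listening on port', port)
</pre>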
<br>
Why run it in a separate process? For the zmq API it makes no
difference whether we communicate between threads (INPROC), between
processes (IPC) or between nodes (TCP, PGM and others). Because we
need to run the proxy only once per node, it is easier to do it in a
separate process. How would we track whether the proxy is already
running if we put it in a thread of some service?<br>
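<br>
This transport independence is visible in the code: only the endpoint
string changes, the rest stays the same (the endpoints below are made
up for the example):<br>
<pre>
import zmq

ctx = zmq.Context()
socket = ctx.socket(zmq.PUSH)  # socket type is illustrative

# Only the endpoint string differs between transports:
socket.connect('tcp://node1:9501')        # between nodes
# socket.connect('ipc:///tmp/zmq-proxy')  # between processes on one node
# socket.connect('inproc://proxy')        # between threads of one process

socket.send(b'hello')
</pre>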
<br>
In spite of having a broker-like instance locally, we still stay
brokerless because we have no central broker node with a queue that
needs to be replicated and kept alive. Each node is actually a peer.
The broker is not a standalone node,
so we cannot say that it is a 'single point of failure'.
We can consider the local broker as part of the server. It is worth
noting that IPC communication is much more reliable than real network
communication. One more benefit is that the proxy is stateless, so we
don't have to bother about managing its state (syncing it or having
enough memory to keep it).<br>
<br>
I'll quote the zmq guide on broker vs. brokerless messaging (4.14
Brokerless Reliability, p. 221):<br>
<br>
"It might seem ironic to focus so much on broker-based reliability,
when we often explain ØMQ as "brokerless messaging". However, in
messaging, as in real life, the middleman is both a burden and a
benefit. In practice, <b><u>most messaging architectures benefit
from a mix of distributed and brokered messaging</u></b>.
"<br>
<br>
<br>
Thanks,<br>
Oleksii<br>
<br>
<br>
1 - <a class="moz-txt-link-freetext" href="http://zeromq.org/area:faq#toc7">http://zeromq.org/area:faq#toc7</a><br>
<br>
<br>
<div class="moz-cite-prefix">5/26/15 18:57, Davanum Srinivas wrote:<br>
</div>
<blockquote
cite="mid:CANw6fcG6bh+jiD2E=+XkuU9yMqPDgTNECt+K5QmmKZ=ORp6syQ@mail.gmail.com"
type="cite">
<pre wrap="">Alec,
Here are the slides:
<a class="moz-txt-link-freetext" href="http://www.slideshare.net/davanum/oslomessaging-new-0mq-driver-proposal">http://www.slideshare.net/davanum/oslomessaging-new-0mq-driver-proposal</a>
All the 0mq patches to date should be either already merged in trunk
or waiting for review on trunk.
Oleksii, Li Ma,
Can you please address the other questions?
thanks,
Dims
On Tue, May 26, 2015 at 11:43 AM, Alec Hothan (ahothan)
<a class="moz-txt-link-rfc2396E" href="mailto:ahothan@cisco.com"><ahothan@cisco.com></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Looking at what is the next step following the design summit meeting on
0MQ as the etherpad does not provide too much information.
Few questions:
- would it be possible to have the slides presented (showing the proposed
changes in the 0MQ driver design) to be available somewhere?
- is there a particular branch in the oslo messaging repo that contains
0MQ related patches - I'm more particularly interested by James Page's
patch to pool the 0MQ connections but there might be other
- question for Li Ma, are you deploying with the straight upstream 0MQ
driver or with some additional patches?
The per node proxy process (which is itself some form of broker) needs to
be removed completely if the new solution is to be made really
broker-less. This will also eliminate the only single point of failure in
the path and reduce the number of 0MQ sockets (and hops per message) by
half.
I think it was proposed that we go on with the first draft of the new
driver (which still keeps the proxy server but reduces the number of
sockets) before eventually tackling the removal of the proxy server?
Thanks
Alec
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: <a class="moz-txt-link-abbreviated" href="mailto:OpenStack-dev-request@lists.openstack.org?subject:unsubscribe">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a>
<a class="moz-txt-link-freetext" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a>
</pre>
</blockquote>
<pre wrap="">
</pre>
</blockquote>
<br>
</body>
</html>