<html dir="ltr">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<style type="text/css" id="owaParaStyle"></style>
</head>
<body fpstyle="1" ocsi="0">
<div style="direction: ltr;font-family: Tahoma;color: #000000;font-size: 10pt;">
<div><font size="3">I just pointed out the issues with RPC as it is used between the API cell and child cells when child cells are deployed in edge clouds. Since this thread is about massively distributed clouds, the RPC issues inside the current Nova/Cinder/Neutron services are not
the main focus (they could be another important and interesting topic), for example, how to guarantee reliable delivery of RPC messages:</font></div>
<div><font size="3"><br>
</font></div>
<blockquote style="margin: 0 0 0 40px; border: none; padding: 0px;">
<div>
<div><font size="3">> Cells is a good enhancement for Nova scalability, but there are some issues</font></div>
</div>
<div>
<div><font size="3">> in deploying Cells for massively distributed edge clouds:</font></div>
</div>
<div>
<div><font size="3">> </font></div>
</div>
<div>
<div><font size="3">> 1) Using RPC for inter-data-center communication brings difficulty</font></div>
</div>
<div>
<div><font size="3">> in inter-DC troubleshooting and maintenance, and critical issues in</font></div>
</div>
<div>
<div><font size="3">> operation. There is no CLI, RESTful API, or other tool to manage a child cell</font></div>
</div>
<div>
<div><font size="3">> directly. If the link between the API cell and child cells is broken, then</font></div>
</div>
<div>
<div><font size="3">> the child cell in the remote edge cloud is unmanageable, whether locally</font></div>
</div>
<div>
<div><font size="3">> or remotely. </font></div>
</div>
<div>
<div><font size="3">></font></div>
</div>
<div>
<div><font size="3">> 2) Security management for inter-site RPC communication is challenging.</font></div>
</div>
<div>
<div><font size="3">> Please refer to the slides[1] for challenge 3, "Securing OpenStack over</font></div>
</div>
<div>
<div><font size="3">> the Internet": over 500 pinholes had to be opened in the firewall to allow</font></div>
</div>
<div>
<div><font size="3">> this to work, including ports for VNC and SSH for CLIs. Using RPC in cells</font></div>
</div>
<div>
<div><font size="3">> for edge clouds will face the same security challenges.</font></div>
</div>
<div>
<div><font size="3">></font></div>
</div>
<div>
<div><font size="3">> 3) Only Nova supports cells, but Nova is not the only service that needs to support edge clouds;</font></div>
</div>
<div>
<div><font size="3">> Neutron and Cinder should be taken into account too. How would Neutron</font></div>
</div>
<div>
<div><font size="3">> support service function chaining in edge clouds? Using RPC? How would it address</font></div>
</div>
<div>
<div><font size="3">> the challenges mentioned above? And Cinder?</font></div>
</div>
<div>
<div><font size="3">></font></div>
</div>
<div>
<div><font size="3">> 4) Using RPC for production integration of hundreds of edge clouds is</font></div>
</div>
<div>
<div><font size="3">> quite a challenging idea; it is a basic requirement that these edge clouds may</font></div>
</div>
<div>
<div><font size="3">> be bought from multiple vendors, in hardware, software, or both.</font></div>
</div>
<div>
<div><font size="3">> That means using cells in production for massively distributed edge clouds</font></div>
</div>
<div>
<div><font size="3">> is quite a bad idea. If Cells provided a RESTful interface between the API cell and</font></div>
</div>
<div>
<div><font size="3">> child cells, it would be much more acceptable, but still not enough; similarly</font></div>
</div>
<div>
<div><font size="3">> for Cinder and Neutron. Alternatively, deploy a lightweight OpenStack instance in each</font></div>
</div>
<div>
<div><font size="3">> edge cloud, for example, one rack. The question is then how to manage the large</font></div>
</div>
<div>
<div><font size="3">> number of OpenStack instances and provision services.</font></div>
</div>
<div>
<div><font size="3">></font></div>
</div>
<div>
<div><font size="3">> [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf</font></div>
</div>
</blockquote>
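<div><font size="3">To make the reliability concern concrete: an RPC cast is fire-and-forget, so a message lost on a flaky inter-DC link disappears silently, and blindly re-sending it may duplicate side effects; an idempotent REST call, by contrast, can simply be retried. A minimal sketch of such a retry wrapper (all names here are hypothetical, not an existing OpenStack API):</font></div>

```python
import time


class SiteUnreachable(Exception):
    """Raised when an edge site cannot be reached after all retries."""


def call_edge_api(send, request, retries=3, backoff=0.0):
    """Call an idempotent REST endpoint on an edge site, retrying on failure.

    `send` is the injected transport function (in practice, an HTTPS
    request to the edge site's API). Because the request is idempotent,
    retrying after a WAN glitch is safe -- unlike re-sending a
    fire-and-forget RPC cast, which can duplicate side effects.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return send(request)
        except ConnectionError as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff between attempts
    raise SiteUnreachable(f"giving up after {retries} attempts: {last_error}")
```

<div><font size="3">The point is not the wrapper itself but that HTTP's request/response semantics make this kind of recovery straightforward to reason about, which is much harder with one-way RPC over a broker.</font></div>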
<div><font size="3"><br>
</font></div>
<div><font size="3">My suggestion is also to collect all candidate proposals, then discuss them and compare their pros and cons at the Barcelona summit.</font></div>
<div>
<div><font size="3"><br>
</font></div>
<div><font size="3">I propose using the Nova/Cinder/Neutron RESTful APIs for inter-site communication with edge clouds, and providing the Nova/Cinder/Neutron APIs as the umbrella over all edge clouds. This is the pattern of Tricircle: https://github.com/openstack/tricircle/</font></div>
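<div><font size="3">As a rough illustration of the umbrella pattern (hypothetical names only, not the actual Tricircle code): each edge cloud exposes its own REST endpoint, and the umbrella aggregates across them, so an unreachable site degrades gracefully and remains locally manageable through its own API:</font></div>

```python
def aggregate_servers(sites, fetch):
    """Aggregate server lists from many edge clouds via their REST APIs.

    `sites` maps a site name to its (hypothetical) per-site API endpoint;
    `fetch` is injected so the transport (python-novaclient, plain HTTPS,
    or a test stub) can be swapped in. A site whose WAN link is down is
    reported as unreachable rather than failing the whole operation --
    and it can still be managed locally through its own API, which an
    RPC link between API cell and child cell does not allow.
    """
    servers, unreachable = {}, []
    for name, endpoint in sites.items():
        try:
            servers[name] = fetch(endpoint)  # e.g. GET <endpoint>/servers
        except ConnectionError:
            unreachable.append(name)
    return servers, unreachable
```

<div><font size="3">This is only a sketch of the aggregation idea under those assumptions; Tricircle itself handles far more (tenant mapping, networking across sites, and so on).</font></div>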
</div>
<div><font size="3"><br>
</font></div>
<div><font size="3">If there are other proposals, please don't hesitate to share them so we can compare.</font></div>
<div><font size="3"><br>
</font></div>
<div><font size="3">Best Regards</font></div>
<div><font size="3">Chaoyi Huang (joehuang)</font></div>
<br>
<div style="font-family: Times New Roman; color: #000000; font-size: 16px">
<hr tabindex="-1">
<div id="divRpF59297" style="direction: ltr;"><font face="Tahoma" size="2" color="#000000"><b>From:</b> Duncan Thomas [duncan.thomas@gmail.com]<br>
<b>Sent:</b> 01 September 2016 2:03<br>
<b>To:</b> OpenStack Development Mailing List (not for usage questions)<br>
<b>Subject:</b> Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs<br>
</font><br>
</div>
<div></div>
<div>
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">On 31 August 2016 at 18:54, Joshua Harlow <span dir="ltr">
<<a href="mailto:harlowja@fastmail.com" target="_blank">harlowja@fastmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex; border-left:1px #ccc solid; padding-left:1ex">
Duncan Thomas wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex; border-left:1px #ccc solid; padding-left:1ex">
<span class="">On 31 August 2016 at 11:57, Bogdan Dobrelya <<a href="mailto:bdobrelia@mirantis.com" target="_blank">bdobrelia@mirantis.com</a><br>
</span><span class=""><mailto:<a href="mailto:bdobrelia@mirantis.com" target="_blank">bdobrelia@mirantis.com</a><wbr>>> wrote:<br>
<br>
I agree that RPC design pattern, as it is implemented now, is a major<br>
blocker for OpenStack in general. It requires a major redesign,<br>
including handling of corner cases, on both sides, *especially* RPC call<br>
clients. Or maybe it just has to be abandoned and replaced by a more<br>
cloud-friendly pattern.<br>
<br>
<br>
<br>
Is there a writeup anywhere on what these issues are? I've heard this<br>
sentiment expressed multiple times now, but without a writeup of the<br>
issues and the design goals of the replacement, we're unlikely to make<br>
progress on a replacement - even if somebody takes the heroic approach<br>
and writes a full replacement themselves, the odds of getting community<br>
buy-in are very low.<br>
</span></blockquote>
<br>
+2 to that; there are a bunch of technologies that could replace rabbit+RPC, e.g. gRPC, and then there is HTTP/2 and Thrift and ... so a writeup IMHO would help at least clear the waters a little bit, and explain the blockers of the current RPC design pattern
(which is multidimensional, because most people are probably thinking RPC == rabbit when it's actually more than that now, i.e. zeromq and amqp1.0 and ...) and try to centralize on a better replacement.<br>
<br>
</blockquote>
<div><br>
</div>
<div>Is anybody who dislikes the current pattern(s) and implementation(s) volunteering to start this documentation? I really am not aware of the issues, and I'd like to begin to understand them. </div>
</div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>