<div dir="ltr"><div dir="ltr">On Thu, Feb 21, 2019 at 10:02 AM Sean Mooney <<a href="mailto:smooney@redhat.com">smooney@redhat.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Thu, 2019-02-21 at 08:12 -0500, Jim Rollenhagen wrote:<br>
> On Thu, Feb 21, 2019 at 7:37 AM Sean Mooney <<a href="mailto:smooney@redhat.com" target="_blank">smooney@redhat.com</a>> wrote:<br>
> > On Wed, 2019-02-20 at 10:24 -0500, Mohammed Naser wrote:<br>
> > > Hi Chris,<br>
> > > <br>
> > > Thanks for kicking this off. I've added my replies in-line.<br>
> > > <br>
> > > Thank you for your past term as well.<br>
> > > <br>
> > > Regards,<br>
> > > Mohammed<br>
> > > <br>
> > > On Wed, Feb 20, 2019 at 9:49 AM Chris Dent <<a href="mailto:cdent%2Bos@anticdent.org" target="_blank">cdent+os@anticdent.org</a>> wrote:<br>
> > > > <snip><br>
> > > > * If you had a magic wand and could inspire and make a single<br>
> > > > sweeping architectural or software change across the services,<br>
> > > > what would it be? For now, ignore legacy or upgrade concerns.<br>
> > > > What role should the TC have in inspiring and driving such<br>
> > > > changes?<br>
> > > <br>
> > > Oh.<br>
> > > <br>
> > > - Stop using RabbitMQ for RPC; it's the worst, most painful component<br>
> > > to run in an entire OpenStack deployment. It's always broken. Switch<br>
> > > to something that uses HTTP + service registration to find endpoints.<br>
as an onlooker I have mixed feelings about this statement.<br>
RabbitMQ can have issues at scale, but it mostly works when it's not on fire.<br>
Would you be advocating building an OpenStack-specific RPC layer, perhaps<br>
using keystone as the service registry and a custom HTTP mechanism, or adopting<br>
an existing technology like gRPC?<br>
> > <br>
investigating an alternative RPC backend has come up in the past (ZeroMQ and Qpid),<br>
and I think it has merit, but I'm not sure creating a new RPC framework as a community<br>
is a project-wide effort that OpenStack needs to take on. That said, Zaqar is a thing<br>
<a href="https://docs.openstack.org/zaqar/latest/" rel="noreferrer" target="_blank">https://docs.openstack.org/zaqar/latest/</a> -- if it is good enough for our end users to consume,<br>
perhaps it would be up to the task of being OpenStack's RPC transport layer.<br>
> > <br>
anyway, my main question was: would you advocate adopting an existing technology<br>
or creating our own solution if we were to work on this goal as a community?<br>
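As a rough illustration of the "HTTP + service registration to find endpoints" idea raised above, here is a toy sketch using only the Python standard library. The registry dict, the service name, and the JSON envelope are all invented for illustration and are not any real OpenStack or oslo.messaging API; in a real deployment the registry role could be played by something like the keystone catalog.

```python
# Toy sketch: RPC via plain HTTP plus a service registry. All names
# here (REGISTRY, RPCHandler, rpc_call) are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Service registry mapping service names to HTTP endpoints.
REGISTRY = {}


class RPCHandler(BaseHTTPRequestHandler):
    """Dispatches POSTed JSON {"method": ..., "args": ...} to handlers."""

    methods = {"add": lambda args: args["x"] + args["y"]}

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = self.methods[body["method"]](body["args"])
        payload = json.dumps({"result": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass


def start_service(name):
    """Start a server on any free port and register its endpoint."""
    server = HTTPServer(("127.0.0.1", 0), RPCHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    REGISTRY[name] = "http://127.0.0.1:%d" % server.server_port
    return server


def rpc_call(service, method, args):
    """Look the service up in the registry, then make a plain HTTP call."""
    req = urllib.request.Request(
        REGISTRY[service],
        data=json.dumps({"method": method, "args": args}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]


server = start_service("compute")
result = rpc_call("compute", "add", {"x": 2, "y": 3})
print(result)  # 5
server.shutdown()
```

The point of the sketch is that endpoint discovery (the registry) and the call mechanism (plain HTTP) are separable concerns, which is what makes "keystone as the registry plus some HTTP mechanism" a coherent proposal.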
> > <br>
> <br>
> I'll also chime in here since I agree with Mohammed.<br>
> <br>
> We're certainly going to have to write software to make this happen. Maybe<br>
> that's a new oslo.messaging driver, maybe it's a new equivalent of that layer.<br>
> <br>
> But we've re-invented enough software already. This isn't a place where we<br>
> should do it. We should build on or glue together existing tools to build<br>
> something scalable and relatively simple to operate.<br>
> <br>
> Are your mixed feelings about this statement a concern about re-inventing<br>
> wheels, or about changing an underlying architectural thing at all?<br>
with re-inventing wheels.<br>
A new oslo.messaging driver, I think, would be perfectly fine. That said, we<br>
don't want 50 of them, as it would be impossible<br>
to test and maintain them all.<br>
My concern would be that even if we developed the perfect RPC layer for OpenStack<br>
ourselves, it would be a lot of effort that could have been spent elsewhere.<br>
Developing glue logic to use an alternative is much less invasive and achievable.<br>
<br>
The other thing I would say, if I'm feeling pedantic, is that the fact that we have an RPC<br>
bus is an architectural question, as may be, in some cases, the performance or feature set<br>
the final system provides. But personally I don't see RabbitMQ vs. gRPC as an architectural<br>
question at all. The message bus is just an IO device. We should be able to swap<br>
that IO device for another without it materially affecting our architecture. If it does,<br>
we are too tightly coupled to the implementation.<br></blockquote><div><br></div><div>+1. There are some questions about our RPC casts that need to be addressed,</div><div>but agree in general. </div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
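The "message bus is just an IO device" point above can be sketched in a few lines: application code depends only on an abstract transport interface, so swapping the bus should never touch application logic. Both transports below are toy in-memory stand-ins invented for illustration, not real oslo.messaging drivers.

```python
# Sketch: application code written against a Transport abstraction,
# so the bus implementation is swappable. All names are hypothetical.
import abc
import json
import queue


class Transport(abc.ABC):
    """The only surface the application is allowed to depend on."""

    @abc.abstractmethod
    def call(self, topic, method, args): ...


class DirectTransport(Transport):
    """Dispatches straight to a handler table (stand-in for bus A)."""

    def __init__(self, handlers):
        self.handlers = handlers

    def call(self, topic, method, args):
        return self.handlers[topic][method](**args)


class QueueTransport(Transport):
    """Serializes through a queue, like a broker would (stand-in for bus B)."""

    def __init__(self, handlers):
        self.handlers = handlers
        self.wire = queue.Queue()

    def call(self, topic, method, args):
        self.wire.put(json.dumps({"topic": topic, "method": method, "args": args}))
        msg = json.loads(self.wire.get())  # the "server" side picks it up
        return self.handlers[msg["topic"]][msg["method"]](**msg["args"])


def resize_instance(transport, flavor):
    """Application code: written once against Transport, never changed."""
    return transport.call("compute", "resize", {"flavor": flavor})


handlers = {"compute": {"resize": lambda flavor: "resized to %s" % flavor}}
for bus in (DirectTransport(handlers), QueueTransport(handlers)):
    print(resize_instance(bus, "m1.small"))  # same result on either bus
```

If swapping `DirectTransport` for `QueueTransport` required editing `resize_instance`, that would be exactly the "too tightly coupled to the implementation" failure described above.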
<br>
You can sub out RabbitMQ today with Qpid or ActiveMQ, and in the past you could use<br>
ZeroMQ too, but Rabbit more or less won out over that set of message queues.<br>
There is also limited support for Kafka. Kafka is pretty heavyweight, as is Istio from<br>
what I have heard, so I'm not sure they would be a good replacement for small clouds, but I<br>
believe they are meant to scale well.<br>
<br>
gRPC and NATS are probably the two RabbitMQ alternatives I would personally consider, but I know<br>
some projects hack etcd to act as a pseudo RPC bus.<br>
This type of change is something I could see as a community goal eventually, but I would like to<br>
see it done with one project first before it got to that point.<br></blockquote><div><br></div><div>Agree with that. I don't want to get into implementation details here. :)</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
There is value in using one or two RPC buses instead of supporting many, and this is the type of change<br>
I hope would be guided by measurement and community feedback.<br></blockquote><div><br></div><div>100%. The first step to doing anything like this is to measure performance</div><div>today, identify other options, and measure performance on those. This is a</div><div>high-effort change, and it'd be crazy to do it without data.</div><div><br></div><div>// jim</div></div></div>
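The "measure first" step discussed above could start with something as simple as a round-trip latency harness run identically against each candidate bus. The sketch below is hypothetical; the transport is a trivial in-process stub, and real numbers would come from driving actual RabbitMQ/gRPC/NATS drivers through the same `benchmark` function.

```python
# Hypothetical sketch: compare RPC transports by timing many round trips.
import statistics
import time


def echo_transport(payload):
    """Stand-in for a real RPC round trip over some bus."""
    return payload


def benchmark(call, payload, iterations=1000):
    """Return (median, p99) round-trip latency in microseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        call(payload)
        samples.append((time.perf_counter() - start) * 1e6)
    samples.sort()
    return statistics.median(samples), samples[int(iterations * 0.99)]


median_us, p99_us = benchmark(echo_transport, b"x" * 1024)
print("median %.2fus p99 %.2fus" % (median_us, p99_us))
```

Reporting tail latency (p99) alongside the median matters here, since brokers that look fine on average can fall over under queue buildup, which is the "broken at scale" complaint that started this thread.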