[tc] Questions for TC Candidates

Sean Mooney smooney at redhat.com
Thu Feb 21 15:02:55 UTC 2019


On Thu, 2019-02-21 at 08:12 -0500, Jim Rollenhagen wrote:
> On Thu, Feb 21, 2019 at 7:37 AM Sean Mooney <smooney at redhat.com> wrote:
> > On Wed, 2019-02-20 at 10:24 -0500, Mohammed Naser wrote:
> > > Hi Chris,
> > > 
> > > Thanks for kicking this off.  I've added my replies in-line.
> > > 
> > > Thank you for your past term as well.
> > > 
> > > Regards,
> > > Mohammed
> > > 
> > > On Wed, Feb 20, 2019 at 9:49 AM Chris Dent <cdent+os at anticdent.org> wrote:
> > > >  <snip>
> > > > * If you had a magic wand and could inspire and make a single
> > > >    sweeping architectural or software change across the services,
> > > >    what would it be? For now, ignore legacy or upgrade concerns.
> > > >    What role should the TC have in inspiring and driving such
> > > >    changes?
> > > 
> > > Oh.
> > > 
> > > - Stop using RabbitMQ as an RPC layer; it's the worst, most painful component
> > >   to run in an entire OpenStack deployment.  It's always broken.  Switch
> > >   to something that uses HTTP + service registration to find endpoints.
> >     As an onlooker, I have mixed feelings about this statement.
> >     RabbitMQ can have issues at scale, but it mostly works when it's not on fire.
> >     Would you be advocating building an OpenStack-specific RPC layer, perhaps
> >     using Keystone as the service registry and a custom HTTP mechanism, or adopting
> >     an existing technology like gRPC?
> > 
> >     Investigating an alternative RPC backend has come up in the past (ZeroMQ and Qpid),
> >     and I think it has merit, but I'm not sure that creating a new RPC framework as a
> >     community is a project-wide effort OpenStack needs to take on. That said, Zaqar is a thing:
> >     https://docs.openstack.org/zaqar/latest/. If it is good enough for our end users to consume,
> >     perhaps it would be up to the task of being OpenStack's RPC transport layer.
> > 
> >     Anyway, my main question was: would you advocate adopting an existing technology
> >     or creating our own solution if we were to work on this goal as a community?
> > 
> 
> I'll also chime in here since I agree with Mohammed.
> 
> We're certainly going to have to write software to make this happen. Maybe
> that's a new oslo.messaging driver, maybe it's a new equivalent of that layer.
> 
> But we've re-invented enough software already. This isn't a place where we
> should do it. We should build on or glue together existing tools to build
> something scalable and relatively simple to operate.
> 
> Are your mixed feelings about this statement a concern about re-inventing
> wheels, or about changing an underlying architectural thing at all?
With re-inventing wheels.
A new oslo.messaging driver would, I think, be perfectly fine. That said, we
don't want 50 of them, as it would be impossible to test and maintain them all.
My concern is that even if we developed the perfect RPC layer for OpenStack
ourselves, it would be a lot of effort that could have been spent elsewhere.
Developing glue logic to adopt an existing alternative is much less invasive
and achievable.
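
To make "a new driver" concrete: the in-tree drivers subclass the (private)
base.BaseDriver interface, so a new backend plugs in underneath the public
RPC API. This is just an untested sketch to show the shape of the work; the
HTTPDriver name and the method bodies are placeholders, not a real driver:

    from oslo_messaging._drivers import base

    class HTTPDriver(base.BaseDriver):
        """Placeholder for a hypothetical new backend."""

        def send(self, target, ctxt, message,
                 wait_for_reply=None, timeout=None, **kwargs):
            # deliver 'message' to 'target' over the new backend and,
            # for RPC calls, block until the reply comes back
            raise NotImplementedError()

        def send_notification(self, target, ctxt, message, version):
            raise NotImplementedError()

        def listen(self, target, batch_size, batch_timeout):
            # return a Listener that yields incoming RPC messages
            raise NotImplementedError()

        def listen_for_notifications(self, targets_and_priorities, pool,
                                     batch_size, batch_timeout):
            raise NotImplementedError()

        def cleanup(self):
            pass

The surface area is small, but every driver added to that list is one more
thing the gate has to exercise, which is why 50 of them would be unmaintainable.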

The other thing I would say, if I'm feeling pedantic, is that the fact that we
have an RPC bus at all is an architectural question, as may in some cases be
the performance or feature set the final system provides. But personally I
don't see RabbitMQ vs gRPC as an architectural question at all. The message
bus is just an IO device. We should be able to swap that IO device for another
without it materially affecting our architecture; if it does, we are too
tightly coupled to the implementation.
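
That is already how it works at the API level. In this untested sketch (the
topic and server names are made up), nothing below the transport URL knows
or cares which bus is underneath:

    from oslo_config import cfg
    import oslo_messaging

    # the only bus-specific piece is this URL
    transport = oslo_messaging.get_rpc_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')

    class DemoEndpoint(object):
        def ping(self, ctxt, arg):
            return 'pong: %s' % arg

    target = oslo_messaging.Target(topic='demo_topic', server='demo_server')
    server = oslo_messaging.get_rpc_server(
        transport, target, [DemoEndpoint()], executor='threading')
    server.start()

    client = oslo_messaging.RPCClient(
        transport, oslo_messaging.Target(topic='demo_topic'))
    print(client.call({}, 'ping', arg='hello'))

If swapping that URL for another driver breaks a service, that service has
leaked implementation details through the abstraction.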

You can sub out RabbitMQ today with Qpid or ActiveMQ, and in the past you
could use ZeroMQ too, but Rabbit more or less won out over that set of message
queues. There is also limited support for Kafka. Kafka is pretty heavyweight,
as is Istio from what I have heard, so I'm not sure they would be a good
replacement for a small cloud, but I believe they are meant to scale well.
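
Operationally, that swap is meant to be a config change in each service.
The hostnames and credentials below are placeholders; the AMQP 1.0 line
assumes something like Qpid Dispatch Router is deployed, and as far as I
know the Kafka driver is only supported for notifications:

    [DEFAULT]
    transport_url = rabbit://openstack:secret@rabbit-host:5672/
    # AMQP 1.0 driver (e.g. Qpid Dispatch Router) instead:
    # transport_url = amqp://openstack:secret@qdr-host:5672/
    # Kafka driver (notifications only):
    # transport_url = kafka://kafka-host:9092/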

gRPC and NATS are probably the two RabbitMQ alternatives I would personally
consider, but I know some projects hack etcd to act as a pseudo RPC bus.
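
For anyone curious what that hack looks like, here is a deliberately crude,
untested sketch using the python-etcd3 library; the /rpc/... key layout is
made up, and this ignores errors, timeouts, and cleanup entirely:

    import json
    import uuid

    import etcd3

    etcd = etcd3.client(host='localhost', port=2379)

    def call(method, **kwargs):
        """Write a request under /rpc/req/ and block for the response."""
        req_id = uuid.uuid4().hex
        events, cancel = etcd.watch('/rpc/resp/%s' % req_id)  # watch first
        etcd.put('/rpc/req/%s' % req_id,
                 json.dumps({'method': method, 'args': kwargs}))
        for event in events:  # first write to the response key wins
            cancel()
            return json.loads(event.value)

    def serve():
        """Worker loop: answer anything written under /rpc/req/."""
        events, _cancel = etcd.watch_prefix('/rpc/req/')
        for event in events:
            req = json.loads(event.value)
            req_id = event.key.decode().rsplit('/', 1)[-1]
            etcd.put('/rpc/resp/%s' % req_id,
                     json.dumps({'result': 'pong', 'echo': req['args']}))

It "works", which shows how thin the bus abstraction really is, but it is
exactly the kind of wheel re-invention I would rather we avoid.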
This type of change is something I could see as a community goal eventually,
but I would like to see it done with one project first before it got to that
point.

There is value in supporting one or two RPC buses instead of many, and this
is the type of change I hope would be guided by measurement and community
feedback.

Thanks for following up.
> 
> // jim



