Re: [tc] Questions for TC Candidates
On Thu, 2019-02-21 at 08:12 -0500, Jim Rollenhagen wrote:
On Thu, Feb 21, 2019 at 7:37 AM Sean Mooney <smooney@redhat.com> wrote:
On Wed, 2019-02-20 at 10:24 -0500, Mohammed Naser wrote:
Hi Chris,
Thanks for kicking this off. I've added my replies in-line.
Thank you for your past term as well.
Regards, Mohammed
On Wed, Feb 20, 2019 at 9:49 AM Chris Dent <cdent+os@anticdent.org> wrote:
<snip>
* If you had a magic wand and could inspire and make a single sweeping architectural or software change across the services, what would it be? For now, ignore legacy or upgrade concerns. What role should the TC have in inspiring and driving such changes?
Oh.
- Stop using RabbitMQ as an RPC layer; it's the worst, most painful component to run in an entire OpenStack deployment. It's always broken. Switch to something that uses HTTP + service registration to find endpoints.
As an onlooker I have mixed feelings about this statement. RabbitMQ can have issues at scale, but it mostly works when it's not on fire. Would you be advocating building an OpenStack-specific RPC layer, perhaps using keystone as the service registry and a custom HTTP mechanism, or adopting an existing technology like gRPC?
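(For illustration only: a minimal sketch of what the "keystone as service registry + HTTP" idea could look like. The keystoneauth1 usage is real; the 'oslo-rpc' service type and the /v1/cast path are hypothetical.)

# keystoneauth resolves endpoints from the Keystone service catalog,
# so callers never hard-code the target service's address. Only the
# keystoneauth1 API here is real; the RPC service itself is made up.
from keystoneauth1 import identity, session

auth = identity.Password(
    auth_url='http://keystone:5000/v3',
    username='nova', password='secret', project_name='service',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

# The endpoint is looked up from the catalog via endpoint_filter.
resp = sess.post(
    '/v1/cast',
    json={'method': 'build_and_run_instance',
          'args': {'uuid': '<instance-uuid>'}},
    endpoint_filter={'service_type': 'oslo-rpc', 'interface': 'internal'})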
Investigating an alternative RPC backend has come up in the past (zeromq and qpid), and I think it has merit, but I'm not sure that creating a new RPC framework as a community is a problem OpenStack needs to solve. That said, Zaqar is a thing: https://docs.openstack.org/zaqar/latest/ . If it is good enough for our end users to consume, perhaps it would be up to the task of being OpenStack's RPC transport layer.
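(Again purely illustrative: a rough sketch of a cast-style message going through Zaqar's v2 REST API, based on the documented interface; the queue name and payload are invented, and error handling is omitted.)

# Zaqar requires a Client-ID header on every request.
import uuid
import requests

ZAQAR = 'http://zaqar:8888/v2'
HEADERS = {'Client-ID': str(uuid.uuid4()),
           'X-Auth-Token': '<keystone token>'}

# Producer: post a message onto a per-service queue (acts like a cast).
requests.post(
    ZAQAR + '/queues/compute.host1/messages',
    headers=HEADERS,
    json={'messages': [{'ttl': 300,
                        'body': {'method': 'ping', 'args': {}}}]})

# Consumer: claim messages instead of subscribing to a broker.
claim = requests.post(
    ZAQAR + '/queues/compute.host1/claims',
    headers=HEADERS,
    json={'ttl': 300, 'grace': 60})
if claim.status_code == 201:  # 204 means the queue was empty
    for msg in claim.json():
        print(msg['body'])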
Anyway, my main question was: would you advocate adopting an existing technology, or creating our own solution, if we were to work on this goal as a community?
I'll also chime in here since I agree with Mohammed.
We're certainly going to have to write software to make this happen. Maybe that's a new oslo.messaging driver, maybe it's a new equivalent of that layer.
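(A minimal sketch of the layer in question, assuming current oslo.messaging APIs; the topic and endpoint names are illustrative. The point is that services already program against this API rather than against RabbitMQ, so a new driver slots in underneath it.)

# The driver is selected by the transport_url scheme in configuration,
# e.g. rabbit://... or amqp://... (qpid); a new backend would add a
# new scheme without changing service code like this.
from oslo_config import cfg
import oslo_messaging as messaging

class ComputeEndpoint(object):
    def ping(self, ctxt, who):
        return 'pong %s' % who

transport = messaging.get_rpc_transport(cfg.CONF)
target = messaging.Target(topic='compute', server='host1')
server = messaging.get_rpc_server(
    transport, target, [ComputeEndpoint()], executor='threading')
server.start()
server.wait()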
But we've re-invented enough software already. This isn't a place where we should do it. We should build on or glue together existing tools to make something scalable and relatively simple to operate.
Are your mixed feelings about this statement a concern about re-inventing wheels, or about changing an underlying architectural thing at all?
With re-inventing wheels. A new oslo.messaging driver would, I think, be perfectly fine; that said, we don't want 50 of them, as it would be impossible to test and maintain them all. My concern would be that even if we developed the perfect RPC layer for OpenStack ourselves, it would be a lot of effort that could have been spent elsewhere. Developing glue logic to use an alternative is much less invasive and more achievable.
The other thing I would say, if I'm feeling pedantic, is that the fact that we have an RPC bus is an architectural question, as in some cases may be the performance or feature set the final system provides. But personally I don't see RabbitMQ vs gRPC as an architectural question at all. The message bus is just an IO device; we should be able to swap that IO device for another without it materially affecting our architecture. If it does, we are too tightly coupled to the implementation.
You can sub out RabbitMQ today with qpid or ActiveMQ, and in the past you could use zeromq too, but rabbit more or less won out over that set of message queues. There is also limited support for Kafka. Kafka is pretty heavyweight, as is Istio from what I have heard, so I'm not sure they would be a good replacement for small clouds, but I believe they are meant to scale well. gRPC and NATS are probably the two RabbitMQ alternatives I would personally consider, but I know some projects hack etcd to act as a pseudo RPC bus. This type of change is something I could see as a community goal eventually, but I would like to see it done with one project first before it got to that point. There is value in using 1 or 2 RPC buses instead of supporting many, and this is the type of change I hope would be guided by measurement and community feedback. Thanks for following up.
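(For illustration, a minimal sketch of the "etcd as a pseudo RPC bus" hack mentioned above, using the python-etcd3 client; the key layout is invented, and this only approximates cast-style delivery, not a real RPC layer.)

# Sketch only: fire-and-forget "casts" over etcd watches.
import json
import etcd3

client = etcd3.client(host='etcd', port=2379)

def cast(topic, method, **kwargs):
    # Each write under the topic key acts like a fire-and-forget cast.
    client.put('/rpc/%s' % topic,
               json.dumps({'method': method, 'args': kwargs}))

def serve(topic):
    # A worker watches the topic prefix instead of consuming a queue.
    events, cancel = client.watch_prefix('/rpc/%s' % topic)
    for event in events:
        msg = json.loads(event.value)
        print('handling %s(%s)' % (msg['method'], msg['args']))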
On Thu, Feb 21, 2019 at 10:02 AM Sean Mooney <smooney@redhat.com> wrote:
<snip>
But personally I don't see RabbitMQ vs gRPC as an architectural question at all. The message bus is just an IO device; we should be able to swap that IO device for another without it materially affecting our architecture. If it does, we are too tightly coupled to the implementation.
+1. There are some questions about our RPC casts that need to be addressed, but I agree in general.
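(For readers following along, a small sketch of the two interaction patterns at issue, using the existing oslo.messaging client API; cast() is the fire-and-forget case that is hardest to map onto an HTTP-style transport.)

# call() blocks until the server returns a result; cast() returns
# immediately and gives the caller no delivery guarantee.
from oslo_config import cfg
import oslo_messaging as messaging

transport = messaging.get_rpc_transport(cfg.CONF)
target = messaging.Target(topic='compute', server='host1')
client = messaging.RPCClient(transport, target)

ctxt = {}  # request context; normally an oslo.context dict
result = client.call(ctxt, 'ping', who='tc-thread')  # waits for a reply
client.cast(ctxt, 'ping', who='tc-thread')           # fire and forget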
<snip>
gRPC and NATS are probably the two RabbitMQ alternatives I would personally consider, but I know some projects hack etcd to act as a pseudo RPC bus. This type of change is something I could see as a community goal eventually, but I would like to see it done with one project first before it got to that point.
Agree with that. I don't want to get into implementation details here. :)
There is value in using 1 or 2 RPC buses instead of supporting many, and this is the type of change I hope would be guided by measurement and community feedback.
100%. The first step in doing anything like this is to measure performance today, identify other options, and measure performance on those. This is a high-effort change, and it'd be crazy to do it without data.
// jim
On Thu, Feb 21, 2019 at 4:29 PM, Jim Rollenhagen <jim@jimrollenhagen.com> wrote:
<snip>
100%. The first step in doing anything like this is to measure performance today, identify other options, and measure performance on those. This is a high-effort change, and it'd be crazy to do it without data.
Yup, all of this. If we want things to happen, we first need to identify the pain points and have people acting on them. All of that can happen within or outside the TC's scope, to answer the original concern. I'm just glad Chris raised this question, because now we can start brainstorming about it at the PTG.
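(As a sketch of what a baseline measurement could look like, assuming the existing oslo.messaging client API; the topic, method name, and sample count are arbitrary, and a real study would also cover casts, fan-out, and failure recovery.)

# Hypothetical round-trip latency baseline against whatever driver
# transport_url currently selects; assumes a server exposing 'ping'.
import time

from oslo_config import cfg
import oslo_messaging as messaging

transport = messaging.get_rpc_transport(cfg.CONF)
target = messaging.Target(topic='bench', server='host1')
client = messaging.RPCClient(transport, target)

samples = []
for _ in range(1000):
    start = time.monotonic()
    client.call({}, 'ping', who='bench')
    samples.append(time.monotonic() - start)

samples.sort()
print('p50=%.1fms p99=%.1fms' % (samples[len(samples) // 2] * 1000,
                                 samples[int(len(samples) * 0.99)] * 1000))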
participants (3)
- Jim Rollenhagen
- Sean Mooney
- Sylvain Bauza