Subject: Re: [Trove] State of the Trove service tenant deployment model
Fox, Kevin M
Kevin.Fox at pnnl.gov
Tue Jan 22 20:49:55 UTC 2019
It's probably captured in summit notes from 3-5 years ago. Nothing specific I can point at without going through a lot of archaeology.
Yeah, deploying the users' databases on top of Kubernetes in VMs would, I think, be easier to upgrade than pure VMs with a pile of debs/rpms inside.
It's tangentially related to the message bus stuff. If you solve the DDoS attack issue with the message bus, you still have the upgrade problem. But depending on how you choose to solve the communications channel issues, you can make other issues such as upgrades easier, harder, or impossible to solve.
Thanks,
Kevin
________________________________
From: Darek Król [dkrol3 at gmail.com]
Sent: Tuesday, January 22, 2019 12:38 PM
To: Fox, Kevin M
Cc: Michael Richardson; openstack-discuss at lists.openstack.org
Subject: Re: [Trove] State of the Trove service tenant deployment model
Is there any documentation written down from this discussion? I would really like to read more about the problem and any ideas for possible solutions.
Your recommendation about k8s sounds interesting, but I'm not sure I understand it fully. Would you like to have a k8s cluster for all tenants, on top of VMs, to handle Trove instances? And is the upgrade a different problem than a DDoS attack on the message bus?
Best,
Darek
On Tue, 22 Jan 2019 at 21:25, Fox, Kevin M <Kevin.Fox at pnnl.gov<mailto:Kevin.Fox at pnnl.gov>> wrote:
We tried to solve it as a cross-project issue for a while, and then everyone gave up. Lots of projects have the same problem: Trove, Sahara, Magnum, etc.
Other than just control messages, there is also the issue of version skew between guest agents and controllers, and how to do rolling upgrades. It's messy today.
I'd recommend at this point to maybe just run Kubernetes across the VMs and push the guest agents/workload to them. You can still drive it via an OpenStack API, but doing rolling upgrades of guest agents or MySQL containers or whatever is way simpler for operators to handle. We should embrace k8s as part of the solution rather than trying to reimplement it, IMO.
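As a rough sketch of what driving such a rolling upgrade could look like (hypothetical throughout: it assumes the official `kubernetes` Python client and a guest-agent Deployment, neither of which Trove ships today):

    # Hypothetical sketch: roll guest-agent pods to a new image via the
    # Kubernetes API. Deployment/namespace/image names are illustrative.
    from kubernetes import client, config

    def roll_guest_agents(image):
        config.load_kube_config()  # or load_incluster_config() in-cluster
        apps = client.AppsV1Api()
        # Patching the pod template triggers the built-in RollingUpdate
        # strategy: old agent pods drain as new ones become Ready.
        patch = {"spec": {"template": {"spec": {"containers": [
            {"name": "guest-agent", "image": image}]}}}}
        apps.patch_namespaced_deployment(
            name="trove-guest-agent", namespace="trove", body=patch)

    roll_guest_agents("registry.example.org/trove-guest-agent:2.1.0")

The operator never touches individual VMs; Kubernetes handles draining and health-checking each agent during the roll.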
Thanks,
Kevin
________________________________________
From: Darek Król [dkrol3 at gmail.com<mailto:dkrol3 at gmail.com>]
Sent: Tuesday, January 22, 2019 12:09 PM
To: Michael Richardson
Cc: openstack-discuss at lists.openstack.org<mailto:openstack-discuss at lists.openstack.org>
Subject: Re: [Trove] State of the Trove service tenant deployment model
> On Tue, Jan 22, 2019 at 07:29:25PM +1300, Zane Bitter wrote:
>> Last time I heard (which was probably mid-2017), the Trove team had
>> implemented encryption for messages on the RabbitMQ bus. IIUC each DB being
>> managed had its own encryption keys, so that would theoretically prevent
>> both snooping and spoofing of messages. That's the good news.
>>
>> The bad news is that AFAIK it's still using a shared RabbitMQ bus, so
>> attacks like denial of service are still possible if you can extract the
>> shared credentials from the VM. Not sure about replay attacks; I haven't
>> actually investigated the implementation.
>>
>> cheers,
>> Zane.
>
> Excellent - many thanks for the confirmation.
>
> Cheers,
> Michael
Hello Michael and Zane,
sorry for the late reply.
I believe Zane is referring to a video from 2017 [0].
Yes, messages from Trove instances are encrypted and the keys are kept
in the Trove DB. It is still a shared message bus, but it can be a message
bus dedicated to Trove only and separated from the message bus shared by
the other OpenStack services.
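For illustration, the per-instance key idea could look roughly like this (a minimal sketch using Fernet from the `cryptography` package; Trove's actual crypto code differs, so treat all names as illustrative):

    # Sketch of per-instance message encryption, assuming Fernet
    # (AES-128-CBC + HMAC) from the `cryptography` package. Trove's
    # real implementation uses its own crypto utilities.
    from cryptography.fernet import Fernet

    # One key per database instance, stored in the Trove DB at
    # instance-creation time; a leaked key exposes only that instance.
    instance_key = Fernet.generate_key()
    f = Fernet(instance_key)

    token = f.encrypt(b'{"method": "create_backup"}')  # control plane
    plain = f.decrypt(token)                           # guest agent

Because each instance has its own key, a guest that extracts its key can neither read nor forge messages for any other instance.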
DDoS attacks are also mentioned in the video as a potential threat, but
there are very few details and possible solutions. Recently we had
some internal discussion about this threat within the Trove team. Maybe we
could use the RabbitMQ mechanisms for flow control mentioned in [1,2,3]?
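To make that concrete, here is a sketch of broker-side limits using the `pika` client (queue name and numbers are made up, but x-max-length/x-overflow and per-consumer prefetch are standard RabbitMQ features):

    # Sketch: bound what a flooding guest can do to the broker.
    # Queue name and limits are illustrative, not Trove's real topology.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    # Cap queue length so a compromised guest cannot exhaust the broker;
    # "reject-publish" drops new messages instead of growing the queue.
    ch.queue_declare(queue="trove-guestagent", arguments={
        "x-max-length": 10000,
        "x-overflow": "reject-publish"})
    # Bound unacked deliveries per consumer so one noisy producer
    # cannot starve the taskmanager.
    ch.basic_qos(prefetch_count=10)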
Another point: I'm wondering if this is a problem only in Trove, or is
it something other services would be interested in as well?
Best,
Darek
[0] https://youtu.be/dzvcKlt3Lx8
[1] https://www.rabbitmq.com/flow-control.html
[2] http://www.rabbitmq.com/blog/2012/04/17/rabbitmq-performance-measurements-part-1/
[3] https://tech.labs.oliverwyman.com/blog/2013/08/31/controlling-fast-producers-in-a-rabbit-as-a-service/