Subject: Re: [Trove] State of the Trove service tenant deployment model

Darek Król dkrol3 at gmail.com
Sat Feb 9 18:04:24 UTC 2019


Hello Lingxian,

I’ve heard about a few attempts at running Trove in production. Unfortunately,
I didn’t have the opportunity to get details about the networking. At Samsung,
we are introducing Trove into our products for on-premise cloud platforms.
However, I cannot share too many details, other than that it is oriented
towards performance and that security is not a concern. Hence, the networking
is kept as basic as possible, without additional layers of abstraction.

Could you share more details about your topology and the goals you want to
achieve with Trove? Maybe the Trove team could help you with this?
Unfortunately, I’m not a network expert, so I would need more details to
understand your use case better.
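
Just to check my understanding of the setup you describe below (one port on
`CONF.default_neutron_networks` plus user-facing ports, each with its own
security group), here is a rough sketch using openstacksdk. It is only an
illustration of the plumbing, not Trove code, and every name in it (cloud,
networks, security groups, image, flavor) is a placeholder:

    import openstack

    # Connect with the service (admin) credentials so the VM and its
    # management port are owned by the admin tenant, not the end user.
    conn = openstack.connect(cloud='trove-service')

    # Management port: on the control plane network, with a restrictive
    # security group (e.g. guest-agent traffic only).
    mgmt_net = conn.network.find_network('trove-mgmt')
    mgmt_sg = conn.network.find_security_group('trove-mgmt-sg')
    mgmt_port = conn.network.create_port(
        network_id=mgmt_net.id, security_group_ids=[mgmt_sg.id])

    # User-facing port: on the network given by the user, with a different
    # security group (e.g. only the database port open).
    user_net = conn.network.find_network('user-db-net')
    user_sg = conn.network.find_security_group('db-access-sg')
    user_port = conn.network.create_port(
        network_id=user_net.id, security_group_ids=[user_sg.id])

    # Boot the database VM in the admin tenant with both pre-created ports.
    image = conn.compute.find_image('datastore-image')
    flavor = conn.compute.find_flavor('db.medium')
    server = conn.compute.create_server(
        name='db-instance', image_id=image.id, flavor_id=flavor.id,
        networks=[{'port': mgmt_port.id}, {'port': user_port.id}])
    conn.compute.wait_for_server(server)

Is that roughly the model you have in mind?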

I would also like to take this opportunity to ask you for details about the
Octavia way of communication. I’m also wondering whether the Octavia approach
protects against DDoS attacks.
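
For context, my (possibly wrong) mental model of the Octavia approach is
plain mutual TLS: the controller and the agent inside the VM each present a
certificate signed by a CA owned by the control plane, so an arbitrary client
on the network cannot talk to the agent API at all. A minimal sketch with
Python's ssl module, just to illustrate the idea (all file names and the
request are placeholders, this is not Octavia's actual code):

    import socket
    import ssl

    CA = 'ca.crt'  # CA controlled by the service; placeholder path

    # Agent (server) side: only accept clients that present a certificate
    # signed by our CA -- this is what makes the TLS "two-way".
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain(certfile='agent.crt', keyfile='agent.key')
    server_ctx.load_verify_locations(cafile=CA)
    server_ctx.verify_mode = ssl.CERT_REQUIRED

    # Controller (client) side: present our own certificate and verify the
    # agent's certificate against the same CA.
    client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    client_ctx.load_cert_chain(certfile='controller.crt',
                               keyfile='controller.key')
    client_ctx.load_verify_locations(cafile=CA)

    def call_agent(host, port=8443):
        # Any peer without a CA-signed certificate fails the handshake.
        with socket.create_connection((host, port)) as sock:
            with client_ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall(b'GET /status HTTP/1.0\r\n\r\n')
                return tls.recv(4096)

What I am unsure about is whether rejecting unauthenticated peers at the
handshake is enough in practice, or whether a flood of handshake attempts can
still exhaust the agent.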

Best,
Darek

On Fri, 8 Feb 2019 at 01:20, Lingxian Kong <anlin.kong at gmail.com> wrote:

> Sorry for bringing this thread back to the top again.
>
> But I am wondering if there are people who have already deployed Trove in
> production. If yes, are you using the service tenant model (creating the
> database VM and related resources in the admin project), or the flat mode
> in which the end user has access to the database VM and the control plane
> network as well?
>
> I am asking because we are going to deploy Trove in a private cloud, and
> we want to take more granular control of the resources created. For example,
> for every database VM we will create the VM in the admin tenant, plug one
> port into the control plane network (`CONF.default_neutron_networks`) and
> the other ports into the networks given by the users; we also need to apply
> different security groups to the different types of Neutron ports for
> security reasons, etc.
>
> There are some things missing in Trove in order to achieve the above. I'm
> working on that, but I'd like to hear more suggestions.
>
> My IRC nick is lxkong in #openstack-trove; please ping me if you have
> something to share.
>
> Cheers,
> Lingxian Kong
>
>
> On Wed, Jan 23, 2019 at 7:35 PM Darek Król <dkrol3 at gmail.com> wrote:
>
>> On Wed, Jan 23, 2019 at 9:27 AM Fox, Kevin M
>> <Kevin.Fox at pnnl.gov> wrote:
>>
>> > > I'd recommend at this point to maybe just run kubernetes across the
>> > > vms and push the guest agents/workload to them.
>> >
>> > This sounds like overkill to me. Currently, different projects in
>> > OpenStack are solving this issue in different ways: e.g. Octavia uses a
>> > two-way SSL authentication API between the controller service and the
>> > amphora (which is the VM running an HTTP server inside), Magnum uses the
>> > heat-container-agent, which communicates with Heat via its API, etc.
>> > However, Trove chose another option, which has brought a lot of
>> > discussion over a long time.
>>
>> > In the current situation, I don't think it's feasible for every project
>> > to converge on one common solution, but Trove can learn from other
>> > projects to solve its own problem.
>> > Cheers,
>> > Lingxian Kong
>>
>> The Octavia way of communication was discussed by Trove several times
>> in the context of security. However, that security threat has been
>> eliminated by encryption.
>> I'm wondering if the Octavia way prevents DDoS attacks as well?
>>
>> Implementation of a two-way SSL authenticated API could be included in
>> the Trove priority list IMHO, if it solves all issues with
>> security/DDoS attacks. This could also create some shared code between
>> both projects and help other services as well.
>>
>> Best,
>> Darek
>>
>

