On Sun, Mar 31, 2019 at 11:27 PM Lingxian Kong <anlin.kong@gmail.com> wrote:
Hi,
I'm Lingxian Kong, and I'm going to serve as the Trove PTL for the Train dev cycle. Since the master branch is open for contribution and review, for those who care about Trove, here are several things I'd like to bring to your attention and, most importantly, get your feedback on.
- Deprecate nova-network.
As I mentioned in my candidacy, the nova-network related code is spread across the repo, which makes new feature implementation and bug fixing very difficult. Considering that nova-network was deprecated in the OpenStack Newton release, I propose we also deprecate nova-network support in Trove and remove it after several cycles, according to the community's deprecation policy. I'm not sure whether anyone is still using nova-network with Trove, especially in production. If so, please reply to this email.
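For illustration only (the option name and default below are examples, not necessarily what Trove uses), the usual way to signal this in OpenStack is to mark the relevant options as deprecated for removal via oslo.config and add a release note, e.g.:

    # Sketch only: flagging a nova-network related option as deprecated.
    from oslo_config import cfg

    CONF = cfg.CONF

    network_opts = [
        cfg.StrOpt(
            'network_driver',
            default='trove.network.neutron.NeutronDriver',
            deprecated_for_removal=True,
            deprecated_since='Train',
            deprecated_reason='nova-network was deprecated in Newton; '
                              'Neutron is the only supported backend.',
            help='Driver used to manage networking for guest instances.'),
    ]

    CONF.register_opts(network_opts)

The option then keeps working for the announced deprecation period and emits a warning, before the code path is removed.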
- Create service VM in admin project by default
Currently, Trove has configuration support for creating the db instance in the admin project, which I think should be the default deployment model to reduce the security risk, given that all the db instances communicate with RabbitMQ in the control plane.
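To make this concrete, here is a rough sketch (illustrative only; the credentials, endpoints and placeholder IDs are examples, and this is not Trove's actual code) of what the deployment model means: the control plane boots the guest with its own service credentials, so the VM lands in the admin/service project instead of the end user's project:

    # Sketch only: boot a guest VM in a dedicated service project using
    # service credentials, rather than in the end user's project.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client as nova_client

    auth = v3.Password(
        auth_url='http://keystone:5000/v3',   # example endpoint
        username='trove',                     # example service user
        password='SERVICE_PASSWORD',
        project_name='service',               # dedicated service project
        user_domain_name='Default',
        project_domain_name='Default')
    sess = session.Session(auth=auth)
    nova = nova_client.Client('2.1', session=sess)

    # The db instance lands in the 'service' project; the end user only
    # sees it through the Trove API, never directly in Nova.
    server = nova.servers.create(
        name='trove-mysql-guest-001',
        image='<guest-image-id>',
        flavor='<flavor-id>',
        nics=[{'net-id': '<management-net-id>'}])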
- Remove SecurityGroup API extension
TBH, I don't know when or why that extension was added to Trove, but since it's not included in the Trove API document (https://developer.openstack.org/api-ref/database/), I assume no one relies on it in production, so it should be safe to remove.
- Remove SecurityGroup related database model
I don't have the historical development background, but IMHO it's not reasonable for Trove to maintain such information in its db.
- Security group management enhancement
Removing the API extension and database model doesn't mean Trove shouldn't support security groups for the db instance; on the contrary, security should always be the first thing we consider for new features. The two tasks above are actually prerequisites for this one. In order to keep things easy to maintain and as secure as possible, Trove is not going to allow the end user to manipulate the security group associated with a db instance. Trove will try to provide as much information as possible to make debugging and performance tuning easy.
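As an illustration of the direction (a sketch using openstacksdk, not the actual implementation; the name, port and CIDR are examples), Trove itself would create a security group per instance and open only the datastore port, with no user-facing API to change it:

    # Sketch only: a Trove-managed security group that opens just the
    # datastore port (MySQL's 3306 used as an example) to a client subnet.
    import openstack

    conn = openstack.connect(cloud='trove-service')   # service credentials

    sg = conn.network.create_security_group(
        name='trove-sg-<instance-id>',
        description='Managed by Trove, do not edit manually')

    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction='ingress',
        ethertype='IPv4',
        protocol='tcp',
        port_range_min=3306,
        port_range_max=3306,
        remote_ip_prefix='10.0.0.0/24')               # allowed client subnet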
- Monitoring capability
Currently, there is no monitoring capability in Trove, and I think that's the main missing piece for running Trove in production. I don't have a full picture in mind yet, but I will try to figure out how to achieve that.
- Priorities of the previous dev cycles
Of course, I shouldn't lose track of the previous dev cycles' priorities; e.g. the Stein dev cycle priorities are well documented here: https://etherpad.openstack.org/p/trove-stein-priorities-and-specs-tracking
The Trove project has experienced some ups and downs in the past, but it's still very useful in some deployment use cases and has some advantages over the container deployment model. As you could guess, the reason I raised my hand to lead Trove is that we (Catalyst Cloud) have been deploying Trove in production, so all of these things aim at making Trove production ready, not only for private clouds but also for public ones.
If you have any concerns related to what's mentioned above, please don't hesitate to reply. Alternatively, I'm always in the #openstack-trove IRC channel and can answer questions during working hours in UTC+12.
I think another thing to look at is the deployment model of relying on RabbitMQ to talk to the control plane. That has been the biggest sticking point for deployers. I think adopting something similar to the Octavia service VM model, with public/private keys and an HTTP API, might be far more successful.
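To sketch the idea (purely illustrative; the port, path and file names are made up, and this is not Octavia's actual agent code): the guest would only expose an HTTPS endpoint that requires a client certificate issued by the control plane CA, and the control plane would poll it, instead of the guest publishing to RabbitMQ:

    # Sketch only: a respond-only guest agent behind two-way TLS.
    import ssl
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AgentHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. return the database status when the control plane polls.
            if self.path == '/v1/status':
                self.send_response(200)
                self.send_header('Content-Type', 'application/json')
                self.end_headers()
                self.wfile.write(b'{"status": "HEALTHY"}')
            else:
                self.send_response(404)
                self.end_headers()

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # The agent presents its own cert and requires a client cert signed
    # by the control plane CA, so only the control plane can call it.
    context.load_cert_chain(certfile='agent.crt', keyfile='agent.key')
    context.load_verify_locations(cafile='controlplane-ca.crt')
    context.verify_mode = ssl.CERT_REQUIRED

    server = HTTPServer(('0.0.0.0', 9443), AgentHandler)
    server.socket = context.wrap_socket(server.socket, server_side=True)
    server.serve_forever()

That inverts today's model: the control plane reaches out to the guest, and a compromised guest has no credentials for a core piece of infrastructure.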
I really appreciate any feedback from the community.
---
Cheers,
Lingxian Kong
Catalyst Cloud

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser@vexxhost.com
W. http://vexxhost.com
Thanks for the advice, Mohammed :-)

A few differences between the current Trove and Octavia communication models:

- The attack surface of a database is different from that of haproxy; it's much easier to attack a database than haproxy. So the Octavia model is good, but that doesn't mean it's suitable for Trove as well.
- The Octavia guest agent normally doesn't initiate communication with the control plane, except for sending the monitoring status via UDP, but the Trove guest agent has to report updates back to the control plane when the database status changes. This could be changed, though.

---
Cheers,
Lingxian Kong
Catalyst Cloud
On 4/1/19 3:24 PM, Lingxian Kong wrote:
Thanks for the advice, Mohammed :-)
A few differences between the current Trove and Octavia communication models:
- The attack surface of a database is different from that of haproxy; it's much easier to attack a database than haproxy. So the Octavia model is good, but that doesn't mean it's suitable for Trove as well.
Doesn't that make it _more_ important for the database nodes to not have access to the control plane rabbitmq? If a VM gets compromised I'd much rather that it not have access to a core piece of my infrastructure.
- The Octavia guest agent normally doesn't initiate communication with the control plane, except for sending the monitoring status via UDP, but the Trove guest agent has to report updates back to the control plane when the database status changes. This could be changed, though.
---
Cheers,
Lingxian Kong
Catalyst Cloud
On Tue, Apr 2, 2019 at 10:17 AM Ben Nemec <openstack@nemebean.com> wrote:
Doesn't that make it _more_ important for the database nodes to not have access to the control plane rabbitmq? If a VM gets compromised I'd much rather that it not have access to a core piece of my infrastructure.
Yes, it does. That's why I said in my previous reply that the communication model of Trove could be changed by making the guest agent only respond to requests rather than initiate them.

---
Cheers,
Lingxian Kong
Catalyst Cloud
Hi,

The documentation about Neutron and QoS talks about setting a default QoS policy for a project:

"Each project can have at most one default QoS policy, although it is not mandatory. If a default QoS policy is defined, all new networks created within this project will have this policy assigned, as long as no other QoS policy is explicitly attached during the creation process. If the default QoS policy is unset, no change to existing networks will be made. In order to set a QoS policy as default, the parameter --default must be used. To unset this QoS policy as default, the parameter --no-default must be used."

So if a cloud provider wants to limit all router egress traffic to e.g. 1 Gbit/s, this policy has to be set as the default in all projects. But what if the customer is allowed to create new projects? Such a project will not have any default policy applied, right?

And it looks like there is no API/logic in place to allow a self-service customer to choose between different "network flavors"? Is something like that on the roadmap?

All the best,
Florian
Hi,
On 03.04.2019 at 14:38, Florian Engelmann <florian.engelmann@everyware.ch> wrote:
Hi,
The documentation about Neutron and QoS talks about setting a default QoS policy for a project:
"Each project can have at most one default QoS policy, although it is not mandatory. If a default QoS policy is defined, all new networks created within this project will have this policy assigned, as long as no other QoS policy is explicitly attached during the creation process. If the default QoS policy is unset, no change to existing networks will be made.
In order to set a QoS policy as default, the parameter --default must be used. To unset this QoS policy as default, the parameter --no-default must be used."
So if a cloud provider wants to limit all router egress traffic to e.g. 1 Gbit/s, this policy has to be set as the default in all projects. But what if the customer is allowed to create new projects? Such a project will not have any default policy applied, right?
Correct. You will have to apply a QoS policy to such a new project yourself.
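For example, something along these lines (a rough openstacksdk sketch; the cloud name, project ID, policy name and limits are just examples) would have to be run for every new project:

    # Sketch only: create a default QoS policy with an egress bandwidth
    # limit in a newly created project (admin credentials required).
    import openstack

    conn = openstack.connect(cloud='admin')

    policy = conn.network.create_qos_policy(
        name='default-egress-limit',
        project_id='<new-project-id>',
        is_default=True)          # applied to new networks in the project

    conn.network.create_qos_bandwidth_limit_rule(
        policy,
        max_kbps=1000000,         # roughly 1 Gbit/s
        max_burst_kbps=100000,
        direction='egress')

Until there is a self-service mechanism for this in Neutron, it has to be automated in whatever tooling creates the projects.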
And it looks like there is no API/logic in place to allow a self-service customer to choose between different "network flavors"? Is something like that on the roadmap?
Currently not.
All the best, Florian
—
Slawek Kaplonski
Senior software engineer
Red Hat
participants (5)
- Ben Nemec
- Florian Engelmann
- Lingxian Kong
- Mohammed Naser
- Slawomir Kaplonski