[qa][openstack-ansible] redefining devstack
Hi everyone,

This is something that I've discussed with a few people over time, and I think it's time to bring it up. I'd like to propose, and ask whether it makes sense, to replace devstack entirely with openstack-ansible. I think I have quite a few compelling reasons for this that I'd like to outline, as well as why I *feel* (and I could be biased here, so call me out!) that OSA is the best option as a 'replacement'.

# Why not another deployment project?

I actually thought about this part too, and considered it mainly in terms of ease of use for a *developer*. At this point, Puppet-OpenStack pretty much only deploys packages, which means it has no build infrastructure; a developer can't just get $commit checked out and deployed. TripleO uses Kolla containers AFAIK, and those have to be pre-built beforehand. I also feel they are much harder to use as a developer: if you want to make quick edits and restart services, you have to enter a container, make the edit there, and somehow restart the service without the container going back to its original state. Kolla-Ansible and the other combinations suffer from the same "issue".

OpenStack Ansible is unique in that it pretty much just builds a virtualenv and installs packages inside of it. The services are deployed as systemd units. This is very similar to the current state of devstack (minus the virtualenv part, AFAIK), and it makes it pretty straightforward to go and edit code if you need to. We also have support for Debian, CentOS, Ubuntu and SUSE. This would let "devstack 2.0" have far more coverage and make it much easier to deploy on a wider variety of operating systems. It also has the ability to use commits checked out from Zuul, so all the fancy Depends-On stuff we use works.

# Why do we care about this? I like my bash scripts!
As someone who's been around for a *really* long time in OpenStack, I've seen a whole lot of really weird issues surface from the usage of DevStack for CI gating. For example, one recent thing is that it relies on installing the package-shipped noVNC, whereas 'master' noVNC actually changed behavior a few months back and is completely incompatible at this point (it's just a ticking time bomb until we realize we're entirely broken).

To this day, I still see people who want to PoC something with OpenStack, or *ACTUALLY* try to run OpenStack, using DevStack. No matter how many warnings we put up, they'll always try to do it. This way, at least they'll have something that has the shape of an actual real deployment. In addition, it would be *good* in the overall scheme of things to have a deployment system to test against, because this would make sure things don't break in both directions.

Also: we run Zuul for our CI, which supports Ansible natively. This can remove one layer of indirection (Zuul running Bash) and have Zuul run the playbooks directly from the executor.

# So how could we do this?

The OpenStack Ansible project is made of many roles that are all composable; therefore, you can think of it as a combination of both Puppet-OpenStack and TripleO (back then). Puppet-OpenStack contained the base modules (i.e. puppet-nova, etc.) and TripleO was the integration of all of it into a distribution. OSA is currently both, since it includes both the Ansible roles and the playbooks.

To maintain as much backwards compatibility as possible, we can simply run a small script which maps devstack variables to OSA variables, to make sure each service is deployed with all the necessary features as per local.conf.
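As a rough illustration, such a mapping script could be a minimal sketch like the one below. Note the assumptions: only the `[[local|localrc]]` section of local.conf is handled, and the variable names in `DEVSTACK_TO_OSA` are hypothetical examples chosen for illustration, not a statement of the real OSA variable names.

```python
# Hypothetical sketch of a devstack => OSA variable mapping.
# Only the [[local|localrc]] section of local.conf is parsed, and
# the target variable names are illustrative, not OSA's actual ones.
import json

DEVSTACK_TO_OSA = {
    "ADMIN_PASSWORD": "keystone_auth_admin_password",
    "DATABASE_PASSWORD": "galera_root_password",
}

def localconf_to_vars(text):
    """Map known localrc KEY=VALUE settings to Ansible variables."""
    ansible_vars = {}
    in_localrc = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("[["):  # section marker, e.g. [[local|localrc]]
            in_localrc = (line == "[[local|localrc]]")
            continue
        if not in_localrc or not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        osa_name = DEVSTACK_TO_OSA.get(key.strip())
        if osa_name:
            ansible_vars[osa_name] = value.strip()
    return ansible_vars

sample = """\
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=dbsecret
"""
# Emit the generated variables where a playbook run could pick them up.
print(json.dumps(localconf_to_vars(sample), sort_keys=True))
```

Anything the mapping does not recognize could fall through to a role default, which keeps the wrapper thin.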
So the new process could be:

1) parse local.conf and generate Ansible variables files
2) install Ansible (if not running in the gate)
3) run the playbooks using the variables generated in step 1

The neat thing is that after all of this, devstack just becomes a thin wrapper around Ansible roles. I also think it brings a lot of hands together, involving both the QA team and the OSA team, and I believe that pooling our resources will greatly help us get more done and avoid duplicating our efforts.

# Conclusion

This is the start of a very open-ended discussion. I'm sure there are a lot of implementation details that will surface, but I think it could be a good step overall in simplifying our CI and adding more coverage for real potential deployers. It will help two teams unite and have more resources for something that is essentially somewhat of a duplicated effort at the moment.

I will try to find some time to PoC a simple service being deployed by an OSA role instead of Bash (Placement, which seems like a very simple one) and share that eventually.

Thoughts? :)

-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
Hi, I don't know OSA at all, so sorry if my question is dumb, but in devstack we can easily write plugins, keep them in a separate repo, and such a plugin can easily be used in devstack (e.g. this is used a lot in CI jobs). Is something similar possible with OSA, or would every change always have to be contributed to the OSA repository? Speaking about CI: e.g. in neutron we currently have jobs like neutron-functional or neutron-fullstack which use only some parts of devstack. Those kinds of jobs will probably have to be rewritten after such a change. I don't know if the neutron jobs are the only ones affected in that way, but IMHO it's something worth keeping in mind.
On 1 Jun 2019, at 14:35, Mohammed Naser <mnaser@vexxhost.com> wrote:
— Slawek Kaplonski Senior software engineer Red Hat
On Sat, Jun 1, 2019 at 1:46 PM Slawomir Kaplonski <skaplons@redhat.com> wrote:
Hi,
I don't know OSA at all, so sorry if my question is dumb, but in devstack we can easily write plugins, keep them in a separate repo, and such a plugin can easily be used in devstack (e.g. this is used a lot in CI jobs). Is something similar possible with OSA, or would every change always have to be contributed to the OSA repository?
Not a dumb question at all. We do have this concept of 'roles', which you _could_ technically think of as similar to plugins. However, one of the things that would come out of this is that projects could no longer maintain their own plugins (right now you can host neutron/devstack/plugins and maintain that repo yourself); under this structure, you would indeed have to make those changes in the OpenStack Ansible Neutron role, i.e.: https://opendev.org/openstack/openstack-ansible-os_neutron However, from an OSA perspective, we would be more than happy to add project maintainers for specific projects to their appropriate roles. It would make sense, for example, for someone from the Neutron team to be a core on os_neutron.
Speaking about CI: e.g. in neutron we currently have jobs like neutron-functional or neutron-fullstack which use only some parts of devstack. Those kinds of jobs will probably have to be rewritten after such a change. I don't know if the neutron jobs are the only ones affected in that way, but IMHO it's something worth keeping in mind.
Indeed, with our current CI infrastructure in OSA, we have the ability to create these dynamic scenarios (which can actually be defined by a simple Zuul variable): https://github.com/openstack/openstack-ansible/blob/master/zuul.d/playbooks/... We do some really neat introspection of the name of the project being tested in order to run specific scenarios, so that should be quite easy to accomplish simply by overriding a scenario name within Zuul. It is also worth mentioning that we have supported full-metal deploys for a while now, so not having to worry about containers is something to keep in mind as well (again, simplifying the developer experience).
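For illustration, the kind of project-name introspection described above, with an explicit Zuul variable taking precedence, could be as simple as the following sketch (the scenario names, the default, and the mapping are all invented here, not OSA's actual selection logic):

```python
# Made-up sketch of deriving a test scenario from the project under
# test, with a per-job Zuul variable able to override the guess.
def pick_scenario(project, zuul_override=None):
    """Return the scenario to deploy for the project being gated."""
    if zuul_override:  # e.g. a scenario name set on the Zuul job
        return zuul_override
    # Strip a leading namespace like "openstack/" if present.
    name = project.rsplit("/", 1)[-1]
    # Service-role repos (hypothetical naming) test their own service.
    if name.startswith("openstack-ansible-os_"):
        return name.replace("openstack-ansible-os_", "")
    return "aio"  # invented default: an all-in-one deploy

print(pick_scenario("openstack/openstack-ansible-os_neutron"))  # neutron
print(pick_scenario("openstack/nova", zuul_override="metal"))   # metal
```

A project's jobs would then only need to set that one variable to get a tailored deployment.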
-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
Hi,
On 1 Jun 2019, at 20:49, Mohammed Naser <mnaser@vexxhost.com> wrote:
On Sat, Jun 1, 2019 at 1:46 PM Slawomir Kaplonski <skaplons@redhat.com> wrote:
Hi,
I don't know OSA at all, so sorry if my question is dumb, but in devstack we can easily write plugins, keep them in a separate repo, and such a plugin can easily be used in devstack (e.g. this is used a lot in CI jobs). Is something similar possible with OSA, or would every change always have to be contributed to the OSA repository?
Not a dumb question at all. We do have this concept of 'roles', which you _could_ technically think of as similar to plugins. However, one of the things that would come out of this is that projects could no longer maintain their own plugins (right now you can host neutron/devstack/plugins and maintain that repo yourself); under this structure, you would indeed have to make those changes in the OpenStack Ansible Neutron role
i.e.: https://opendev.org/openstack/openstack-ansible-os_neutron
However, from an OSA perspective, we would be more than happy to add project maintainers for specific projects to their appropriate roles. It would make sense, for example, for someone from the Neutron team to be a core on os_neutron.
Yes, that may work for official projects like Neutron. But what about everything else, like the projects now hosted in opendev.org/x/ repositories? Devstack gives everyone an easy way to integrate their own plugin/driver/project and install it together with everything else by simply adding (usually) one line to the local.conf file. I think it may be a bit hard for the OSA team to accept and review patches with new roles for every project or driver which isn't an official OpenStack project.
— Slawek Kaplonski Senior software engineer Red Hat
On Mon, Jun 3, 2019 at 8:27 AM Slawomir Kaplonski <skaplons@redhat.com> wrote:
Hi,
On 1 Jun 2019, at 20:49, Mohammed Naser <mnaser@vexxhost.com> wrote:
On Sat, Jun 1, 2019 at 1:46 PM Slawomir Kaplonski <skaplons@redhat.com> wrote:
Hi,
I don't know OSA at all, so sorry if my question is dumb, but in devstack we can easily write plugins, keep them in a separate repo, and such a plugin can easily be used in devstack (e.g. this is used a lot in CI jobs). Is something similar possible with OSA, or would every change always have to be contributed to the OSA repository?
Not a dumb question at all. We do have this concept of 'roles', which you _could_ technically think of as similar to plugins. However, one of the things that would come out of this is that projects could no longer maintain their own plugins (right now you can host neutron/devstack/plugins and maintain that repo yourself); under this structure, you would indeed have to make those changes in the OpenStack Ansible Neutron role
i.e.: https://opendev.org/openstack/openstack-ansible-os_neutron
However, from an OSA perspective, we would be more than happy to add project maintainers for specific projects to their appropriate roles. It would make sense, for example, for someone from the Neutron team to be a core on os_neutron.
Yes, that may work for official projects like Neutron. But what about everything else, like the projects now hosted in opendev.org/x/ repositories? Devstack gives everyone an easy way to integrate their own plugin/driver/project and install it together with everything else by simply adding (usually) one line to the local.conf file. I think it may be a bit hard for the OSA team to accept and review patches with new roles for every project or driver which isn't an official OpenStack project.
You raise a really good concern. Indeed, we might have to change the workflow from "write a plugin" to "write an Ansible role" to be able to test your project with DevStack at that point (or maintain both a "legacy" solution and a new one).
Speaking about CI, e.g. in neutron we currently have jobs like neutron-functional or neutron-fullstack which uses only some parts of devstack. That kind of jobs will probably have to be rewritten after such change. I don’t know if neutron jobs are only which can be affected in that way but IMHO it’s something worth to keep in mind.
Indeed, with our current CI infrastructure with OSA, we have the ability to create these dynamic scenarios (which can actually be defined by a simple Zuul variable).
https://github.com/openstack/openstack-ansible/blob/master/zuul.d/playbooks/...
We do some really neat introspection of the project name being tested in order to run specific scenarios. Therefore, that is something that should be quite easy to accomplish simply by overriding a scenario name within Zuul. It also is worth mentioning we now support full metal deploys for a while now, so not having to worry about containers is something to keep in mind as well (with simplifying the developer experience again).
On 1 Jun 2019, at 14:35, Mohammed Naser <mnaser@vexxhost.com> wrote:
Hi everyone,
This is something that I've discussed with a few people over time and I think I'd probably want to bring it up by now. I'd like to propose and ask if it makes sense to perhaps replace devstack entirely with openstack-ansible. I think I have quite a few compelling reasons to do this that I'd like to outline, as well as why I *feel* (and I could be biased here, so call me out!) that OSA is the best option in terms of a 'replacement'
# Why not another deployment project? I actually thought about this part too and considered this mainly for ease of use for a *developer*.
At this point, Puppet-OpenStack pretty much only deploys packages (which means that it has no build infrastructure, a developer can't just get $commit checked out and deployed).
So the new process could be:
1) Parse local.conf and generate Ansible variable files.
2) Install Ansible (if not running in the gate).
3) Run the playbooks using the variables generated in step 1.
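A rough sketch of what that mapping step could look like. This is only an illustration: the OSA-side variable names in the mapping below are invented for the example and are not the real openstack-ansible variable names.

```python
# Hypothetical sketch of step 1: translate devstack local.conf settings
# into Ansible variables. The OSA variable names in this mapping are
# illustrative assumptions, not real openstack-ansible names.
DEVSTACK_TO_OSA = {
    "ADMIN_PASSWORD": "keystone_auth_admin_password",  # assumed name
    "NEUTRON_REPO": "neutron_git_repo",                # assumed name
    "NEUTRON_BRANCH": "neutron_git_install_branch",    # assumed name
}

def localrc_to_vars(local_conf: str) -> dict:
    """Extract KEY=VALUE lines from the [[local|localrc]] block and
    rename the keys we know how to translate."""
    in_localrc = False
    result = {}
    for raw in local_conf.splitlines():
        line = raw.strip()
        if line.startswith("[["):
            # each [[...]] header starts a different local.conf section
            in_localrc = (line == "[[local|localrc]]")
            continue
        if in_localrc and "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            if key in DEVSTACK_TO_OSA:
                result[DEVSTACK_TO_OSA[key]] = value
    return result
```

The resulting dict could then be dumped to a YAML variables file and handed to ansible-playbook (e.g. via `-e @vars.yml`).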
The neat thing is that, after all of this, devstack just becomes a thin wrapper around Ansible roles. I also think it brings the QA and OSA teams together, and I believe that pooling our resources will greatly help us get more done and avoid duplicating our efforts.
# Conclusion
This is the start of a very open-ended discussion. I'm sure there are a lot of implementation details that will surface, but I think it could be a good step overall in simplifying our CI and adding more coverage for real potential deployers. It would help two teams unite and have more resources for something that is essentially somewhat of a duplicated effort at the moment.
I will try to find some time to PoC a simple service being deployed by an OSA role instead of Bash (placement seems like a very simple one) and share that eventually.
Thoughts? :)
-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
On Mon, Jun 3, 2019 at 8:27 AM Slawomir Kaplonski <skaplons@redhat.com> wrote:
Hi,
On 1 Jun 2019, at 20:49, Mohammed Naser <mnaser@vexxhost.com> wrote:
On Sat, Jun 1, 2019 at 1:46 PM Slawomir Kaplonski <skaplons@redhat.com> wrote:
Hi,
I don’t know OSA at all, so sorry if my question is dumb, but in devstack we can easily write plugins, keep them in separate repos, and such plugins can easily be used in devstack (e.g. they are used a lot in CI jobs). Is something similar possible with OSA, or will every change always need to be contributed to the OSA repository?
Not a dumb question at all. We do have this concept of 'roles', which you could loosely think of as something similar to plugins. However, one of the things that would come out of this is that projects could no longer maintain their own plugins (today you can host neutron/devstack/plugins and maintain that repo yourself); under this structure, you would indeed have to make those changes in the OpenStack-Ansible Neutron role,
i.e.: https://opendev.org/openstack/openstack-ansible-os_neutron
However, I think from an OSA perspective we would be more than happy to add project maintainers for specific projects to their appropriate roles. It would make sense for someone from the Neutron team to be a core on os_neutron, for example.
Yes, that may work for official projects like Neutron. But what about everything else, like projects hosted now in opendev.org/x/ repositories? Devstack gives everyone an easy way to integrate their own plugin/driver/project with it and install it together with everything else by simply adding one line (usually) in the local.conf file. I think it may be a bit hard for the OSA team to accept and review patches with new roles for every project or driver which isn’t an official OpenStack project.
You raise a really good concern. Indeed, we might have to change the workflow from "write a plugin" to "write an Ansible role" to be able to test your project at that point (or maintain both a "legacy" solution and a new one).
On Mon, 2019-06-03 at 08:39 -0400, Mohammed Naser wrote:
The real problem with that is who is going to port all of the existing plugins. Kolla-Ansible has also tried to be a devstack replacement in the past via the introduction of dev-mode, which clones the git repos of the dev-mode projects locally and bind-mounts them into the container. The problem is it still breaks people's plugins and workflows.
Some devstack features that OSA would need to support in order to be a replacement for me are:
1. The ability to install all OpenStack projects from git if needed, including Gerrit reviews, and the ability to easily specify Gerrit reviews or commits for each project:
# here I am declaring that os-vif should be installed from git, not PyPI
LIBS_FROM_GIT=os-vif
# and here I am specifying that Gerrit should be used as the source, and
# I am providing a gerrit/git ref for a specific unmerged patch
OS_VIF_REPO=https://git.openstack.org/openstack/os-vif
OS_VIF_BRANCH=refs/changes/25/629025/9
# *_REPO can obviously take anything that is valid in a git clone command, so
# I can use a local repo too
NEUTRON_REPO=file:///opt/repos/neutron
# and *_BRANCH, as the name implies, works with branches, tags, commits and Gerrit ref branches
NEUTRON_BRANCH=bug/1788009
2. The next thing that would be needed is a way to simply override any config value, like this:
[[post-config|/etc/nova/nova.conf]]
#[compute]
#live_migration_wait_for_vif_plug=True
[libvirt]
live_migration_uri = qemu+ssh://root@%s/system
#cpu_mode = host-passthrough
virt_type = kvm
cpu_mode = custom
cpu_model = kvm64
I'm sure that OSA can do that, but in devstack I can just provide any path to any file if needed, so there is no need to update a role or plugin to set values in files created by plugins. Which brings up the next thing:
3. We enable plugins with a single line like this:
enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk master
meaning there is no need to preinstall or clone the repo. In theory the plugin should install all its dependencies, and devstack will clone and execute the plugins based on the single line above. Plugins can also read any variable defined in local.conf, as it will be set in the environment, which means I can easily share an exact configuration with someone by sharing a local.conf.
I'm not against improving or replacing devstack, but between the devstack Ansible roles and the fact that we use devstack for all our testing in the gate, it has actually become one of the best OpenStack installers out there. We do not recommend people run it in production, but with the Ansible automation of grenade and the move to systemd for services, there are less-maintained installers out there; devstack is probably a better foundation for a cloud to build on. People should still not use it in production, but I can see why some might.
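For context, the [[post-config|&lt;file&gt;]] behaviour described above boils down to merging arbitrary option overrides into an ini-style file. A minimal Python sketch of the idea follows; this is not devstack's actual implementation (which is bash), just the shape of the feature any replacement would need.

```python
# Minimal sketch of a devstack-style post-config merge: apply arbitrary
# {section: {option: value}} overrides to an existing ini-style config
# file. Not devstack's real code; purely illustrative.
import configparser

def apply_post_config(path, overrides):
    """Merge overrides into the ini file at path, creating sections as needed."""
    # interpolation=None because oslo-style values may contain '%', e.g.
    # live_migration_uri = qemu+ssh://root@%s/system
    cfg = configparser.ConfigParser(interpolation=None)
    cfg.read(path)
    for section, options in overrides.items():
        if not cfg.has_section(section):
            cfg.add_section(section)
        for option, value in options.items():
            cfg.set(section, option, value)
    with open(path, "w") as fh:
        cfg.write(fh)
```

Called as `apply_post_config("/etc/nova/nova.conf", {"libvirt": {"virt_type": "kvm", "cpu_mode": "custom", "cpu_model": "kvm64"}})`, it would reproduce the effect of the [[post-config]] block quoted above.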
Speaking about CI: e.g. in neutron we currently have jobs like neutron-functional or neutron-fullstack which use only some parts of devstack. That kind of job will probably have to be rewritten after such a change. I don’t know if the neutron jobs are the only ones affected in that way, but IMHO it’s something worth keeping in mind.
Indeed, with our current CI infrastructure with OSA, we have the ability to create these dynamic scenarios (which can actually be defined by a simple Zuul variable).
https://github.com/openstack/openstack-ansible/blob/master/zuul.d/playbooks/...
We do some really neat introspection of the project name being tested in order to run specific scenarios, so that should be quite easy to accomplish simply by overriding a scenario name within Zuul. It is also worth mentioning that we have supported full metal deploys for a while now, so not having to worry about containers is something to keep in mind as well (again simplifying the developer experience).
Sean Mooney <smooney@redhat.com> writes:
On Mon, 2019-06-03 at 08:39 -0400, Mohammed Naser wrote:
The real problem with that is who is going to port all of the existing plugins.
Do all projects and all jobs have to be converted at once? Or ever? How much complexity do those plugins actually contain? Would they be fairly straightforward to convert? Could we build a "devstack plugin wrapper" for OSA? Could we run OSA and then run devstack with just the plugin(s) needed for a given job? Is there enough appeal in the idea of replacing devstack with something closer to what is used for production deployments to drive us to find an iterative approach that doesn't require changing everything at one time? Or are we stuck with devstack forever?
Kolla-Ansible has also tried to be a devstack replacement in the past via the introduction of dev-mode, which clones the git repos of the dev-mode projects locally and bind-mounts them into the container. The problem is it still breaks people's plugins and workflows.
Some devstack features that OSA would need to support in order to be a replacement for me are:
You've made a good start on a requirements list for a devstack replacement. Perhaps a first step would be for some of the folks who support this idea to compile a more complete list of those requirements, and then we could analyze OSA to see how it might need to be changed, or whether it makes sense to use OSA as the basis for a new toolset that takes on some of the "dev" features we might not want in a "production" deployment tool.
Here's another potential gap for whoever is going to make that list: devstack pre-populates the environment with some data for things like flavors and images. I don't imagine OSA does that or, if it does, that they are an exact match. How do we change those settings?
That leads to a good second step: do the rest of the analysis to understand what it would take to set up a base job like we have for devstack, that produces a similar setup. Not necessarily identical, but similar enough to be able to run tempest. It seems likely that already exists in some form for testing OSA itself. Could a developer run that on a local system (clearly being able to build the test environment locally is a requirement for replacing devstack)?
After that, I would want to see answers to some of the questions about dealing with plugins that I posed above. And only then, I think, could I provide an answer to the question of whether we should make the change or not.
1. The ability to install all OpenStack projects from git if needed, including Gerrit reviews.
The ability to easily specify Gerrit reviews or commits for each project:
# here I am declaring that os-vif should be installed from git, not PyPI
LIBS_FROM_GIT=os-vif
# and here I am specifying that Gerrit should be used as the source, and
# I am providing a gerrit/git ref for a specific unmerged patch
OS_VIF_REPO=https://git.openstack.org/openstack/os-vif
OS_VIF_BRANCH=refs/changes/25/629025/9
# *_REPO can obviously take anything that is valid in a git clone command, so
# I can use a local repo too
NEUTRON_REPO=file:///opt/repos/neutron
# and *_BRANCH, as the name implies, works with branches, tags, commits and Gerrit ref branches
NEUTRON_BRANCH=bug/1788009
The next thing that would be needed is a way to simply override any config value, like this:
[[post-config|/etc/nova/nova.conf]]
#[compute]
#live_migration_wait_for_vif_plug=True
[libvirt]
live_migration_uri = qemu+ssh://root@%s/system
#cpu_mode = host-passthrough
virt_type = kvm
cpu_mode = custom
cpu_model = kvm64
I'm sure that OSA can do that, but in devstack I can just provide any path to any file if needed, so there is no need to update a role or plugin to set values in files created by plugins. Which is the next thing:
Does OSA need to support *every* configuration value? Or could it deploy a stack, and then rely on a separate tool to modify config values and restart a service? Clearly some values need to be there when the cloud first starts, but do they all?
We enable plugins with a single line like this:
enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk master
meaning there is no need to preinstall or clone the repo. In theory the plugin should install all its dependencies, and devstack will clone and execute the plugins based on the single line above. Plugins however can also
This makes me think it might be most appropriate to be considering a tool that replaces devstack by wrapping OSA, rather than *being* OSA. Maybe that's just an extra playbook that runs before OSA, or maybe it's a simpler bash script that does some setup before invoking OSA.
read any variable defined in local.conf, as it will be set in the environment, which means I can easily share an exact configuration with someone by sharing a local.conf.
I'm not against improving or replacing devstack, but between the devstack Ansible roles and the fact that we use devstack for all our testing in the gate, it has actually become one of the best OpenStack installers out there. We do not recommend people run it in production, but with the Ansible automation of grenade and the move to systemd for services, there are less-maintained installers out there; devstack is probably a better foundation for a cloud to build on. People should still not use it in production, but I can see why some might.
-- Doug
Sean Mooney <smooney@redhat.com> writes:
The real problem with that is who is going to port all of the existing plugins.
Do all projects and all jobs have to be converted at once? Or ever?
How much complexity do those plugins actually contain? Would they be fairly straightforward to convert?
On Tue, 2019-06-04 at 08:39 -0400, Doug Hellmann wrote:
That depends. Some plugins just add support for individual projects. Others install infrastructure services like Ceph or Kubernetes which will be used by OpenStack services. Others download and compile C projects from source, like networking-ovs-dpdk. The neutron devstack plugin also used to compile OVS from source to work around some distro bugs, and I believe networking-ovn also can/does the same. A devstack plugin allows all of the above to be done trivially.
Could we build a "devstack plugin wrapper" for OSA? Could we run OSA and then run devstack with just the plugin(s) needed for a given job?
That would likely be possible. I'm sure we could generate a local.conf from OSA's inventories and run the plugins after OSA runs. devstack always runs its in-tree code in each phase and then runs the plugins, in the order they are enabled, in each phase: https://docs.openstack.org/devstack/latest/plugins.html
networking-ovs-dpdk, for example, replaces the _neutron_ovs_base_install_agent_packages function (https://github.com/openstack/networking-ovs-dpdk/blob/master/devstack/libs/o...) with a no-op, and then in the install phase we install OVS-DPDK from source. _neutron_ovs_base_install_agent_packages just installs kernel OVS, but we replace it because our patches to make it conditional in devstack were rejected. It's not necessarily a pattern I encourage, but if you have to, you can replace any functionality that devstack provides via a plugin, although most use cases really don't require that.
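To make that phase-plus-override mechanism concrete, here is a toy model in Python (devstack itself is bash; the function and plugin names mirror the example above, but nothing here is real devstack code):

```python
# Toy model of devstack's plugin phases: in each phase the in-tree step
# runs, then each enabled plugin runs in enable order. A plugin can also
# replace a named in-tree function, the way networking-ovs-dpdk replaces
# _neutron_ovs_base_install_agent_packages. Purely illustrative.
in_tree = {
    "install_agent_packages": lambda log: log.append("install kernel OVS"),
}

def ovs_dpdk_install(log):
    log.append("build OVS-DPDK from source")

enabled_plugins = [
    {
        # replace the in-tree function with a no-op...
        "overrides": {"install_agent_packages": lambda log: None},
        # ...and do the plugin's own install work instead
        "install": ovs_dpdk_install,
    },
]

def run_install_phase():
    log = []
    for plugin in enabled_plugins:          # apply function overrides
        in_tree.update(plugin["overrides"])
    in_tree["install_agent_packages"](log)  # in-tree step (now a no-op)
    for plugin in enabled_plugins:          # plugins run in enable order
        plugin["install"](log)
    return log
```

Running the install phase with this plugin enabled skips the kernel-OVS step entirely and performs only the plugin's source build, which is the behaviour described for networking-ovs-dpdk.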
Here's another potential gap for whoever is going to make that list: devstack pre-populates the environment with some data for things like flavors and images. I don't imagine OSA does that or, if it does, that they are an exact match. How do we change those settings?
+1, yes, this is something I forgot about.
That leads to a good second step: Do the rest of the analysis to understand what it would take to set up a base job like we have for devstack, that produces a similar setup. Not necessarily identical, but similar enough to be able to run tempest. It seems likely that already exists in some form for testing OSA itself. Could a developer run that on a local system (clearly being able to build the test environment locally is a requirement for replacing devstack)?
After that, I would want to see answers to some of the questions about dealing with plugins that I posed above.
And only then, I think, could I provide an answer to the question of whether we should make the change or not.
1. The ability to install all OpenStack projects from git if needed, including gerrit reviews.
The ability to easily specify gerrit reviews or commits for each project:
# here I am declaring that os-vif should be installed from git, not PyPI
LIBS_FROM_GIT=os-vif

# and here I am specifying that gerrit should be used as the source, and
# providing a gerrit git ref for a specific unmerged patch
OS_VIF_REPO=https://git.openstack.org/openstack/os-vif
OS_VIF_BRANCH=refs/changes/25/629025/9

# *_REPO can obviously take anything that is valid in a git clone command,
# so I can use a local repo too
NEUTRON_REPO=file:///opt/repos/neutron

# and *_BRANCH, as the name implies, works with branches, tags, commits and
# gerrit ref branches
NEUTRON_BRANCH=bug/1788009
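For illustration, the *_REPO / *_BRANCH variables above roughly translate into the following git operations. This dry-run helper is hypothetical (it only echoes the commands it would run), but the fetch-then-checkout pattern for refs/changes/* is the standard way to check out an unmerged gerrit review.

```shell
#!/usr/bin/env bash
# Illustrative dry-run of what *_REPO / *_BRANCH boil down to in git terms.
# Hypothetical helper, not devstack's real code; it only prints commands.
checkout_source() {
    local repo=$1 ref=$2 dest=$3
    echo "git clone $repo $dest"
    case "$ref" in
        refs/changes/*)
            # gerrit review refs are not branches, so they must be fetched
            echo "git -C $dest fetch $repo $ref"
            echo "git -C $dest checkout FETCH_HEAD"
            ;;
        *)
            # plain branches, tags and commits can be checked out directly
            echo "git -C $dest checkout $ref"
            ;;
    esac
}

checkout_source https://git.openstack.org/openstack/os-vif \
    refs/changes/25/629025/9 /opt/stack/os-vif
```

The same helper shape covers the local-repo case too, since `file:///opt/repos/neutron` is just another valid clone URL.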
The next thing that would be needed is a way to simply override any config value, like this:
[[post-config|/etc/nova/nova.conf]]
#[compute]
#live_migration_wait_for_vif_plug=True
[libvirt]
live_migration_uri = qemu+ssh://root@%s/system
#cpu_mode = host-passthrough
virt_type = kvm
cpu_mode = custom
cpu_model = kvm64
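devstack applies post-config sections like the one above with its ini-editing helpers. As a rough illustration of the idea (not devstack's real iniset, which also replaces existing keys), a minimal version assuming GNU sed might look like:

```shell
#!/usr/bin/env bash
# Much-simplified stand-in for devstack's iniset helper: appends the section
# header if missing, then inserts the key right after it. Unlike the real
# iniset it does not update a key that already exists. Assumes GNU sed.
iniset() {
    local file=$1 section=$2 key=$3 value=$4
    grep -q "^\[$section\]" "$file" || printf '\n[%s]\n' "$section" >> "$file"
    sed -i "/^\[$section\]/a $key = $value" "$file"
}

# Demo against a scratch file rather than a real nova.conf.
demo=$(mktemp)
iniset "$demo" libvirt virt_type kvm
iniset "$demo" libvirt cpu_model kvm64
cat "$demo"   # shows one [libvirt] section containing both keys
```

The point of the workflow is exactly this: any file, any section, any key, with no role or plugin update needed first.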
I'm sure that OSA can do that, but I really like that I can just provide any path to any file if needed, so there's no need to update a role or plugin to set values in files created by plugins, which is the next thing.
Does OSA need to support *every* configuration value? Or could it deploy a stack, and then rely on a separate tool to modify config values and restart a service? Clearly some values need to be there when the cloud first starts, but do they all?
I think to preserve the workflow, yes, we need to be able to override any config that is generated by OSA. kolla-ansible supports a really nice config override mechanism where you can supply overrides that are applied after it generates a template. Even though I have used that generic functionality to change things like libvirt configs in the past, I have generally only used it for the OpenStack services. For development, I think it's very important to be able to easily configure different scenarios without needing to learn the opinionated syntactic sugar provided by the installer, and to just set the config values directly, especially when developing a new feature that adds a new value.
We enable plugins with a single line like this:
enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk master

meaning there is no need to preinstall or clone the repo. In theory the plugin should install all its dependencies, and devstack will clone and execute the plugin based on the single line above. Plugins, however, can also
This makes me think it might be most appropriate to be considering a tool that replaces devstack by wrapping OSA, rather than *being* OSA. Maybe that's just an extra playbook that runs before OSA, or maybe it's a simpler bash script that does some setup before invoking OSA.
On that point, I had considered porting networking-ovs-dpdk to an Ansible role and invoking it from the devstack plugin in the past, but I have not had time to do that. Part of what is nice about the devstack plugin model is that you can write your plugin in any language you like, provided you have a plugin.sh file as an entrypoint. I doubt we have devstack plugins today that just run Ansible or Puppet, but it is totally valid to do so.
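As a sketch of what a single enable_plugin line amounts to, the hypothetical dry-run helper below echoes the clone plus the per-phase sourcing of the plugin's plugin.sh. The /opt/stack destination and the phase names follow devstack convention, but the helper itself is illustrative, not devstack's actual implementation.

```shell
#!/usr/bin/env bash
# Dry-run sketch of an enable_plugin line: clone the repo, then source its
# devstack/plugin.sh once per phase. Prints commands instead of running them.
enable_plugin() {
    local name=$1 url=$2 branch=${3:-master}
    echo "git clone -b $branch $url /opt/stack/$name"
    # Phase names as in the devstack plugin docs; list may be incomplete.
    for phase in "stack pre-install" "stack install" \
                 "stack post-config" "stack extra"; do
        echo "source /opt/stack/$name/devstack/plugin.sh $phase"
    done
}

enable_plugin networking-ovs-dpdk \
    https://github.com/openstack/networking-ovs-dpdk master
```

This is the property a replacement would have to preserve: one line in local.conf pulls in and executes arbitrary out-of-tree setup code at well-defined points.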
read any variable defined in the local.conf, as it will be set in the environment, which means I can easily share an exact configuration with someone by sharing a local.conf.
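Because the localrc section of a local.conf is plain shell variable assignments, sourcing the file reproduces the exact configuration anywhere. A tiny self-contained illustration (the file here is generated just for the demo):

```shell
#!/usr/bin/env bash
# The localrc section of a local.conf is just shell assignments, so the
# same file that drives devstack also hands a plugin (or a colleague) the
# exact environment. Demo with a generated scratch file:
localrc=$(mktemp)
cat > "$localrc" <<'EOF'
LIBS_FROM_GIT=os-vif
OS_VIF_BRANCH=refs/changes/25/629025/9
EOF

# shellcheck disable=SC1090
source "$localrc"
echo "building $LIBS_FROM_GIT at $OS_VIF_BRANCH"
# prints: building os-vif at refs/changes/25/629025/9
```

Any replacement would need an equivalent "one shareable file equals one exact environment" property.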
I'm not against improving or replacing devstack, but between the devstack Ansible roles and the fact that we use devstack for all our testing in the gate, it has actually become one of the best OpenStack installers out there. We do not recommend people run it in production, but with the Ansible automation of grenade and the move to systemd for services, devstack is probably a better foundation for a cloud to build on than some of the less-maintained installers out there. People should still not use it in production, but I can see why some might.
Speaking about CI: e.g. in neutron we currently have jobs like neutron-functional or neutron-fullstack which use only some parts of devstack. Those kinds of jobs will probably have to be rewritten after such a change. I don't know whether the neutron jobs are the only ones that would be affected in that way, but IMHO it's something worth keeping in mind.
Indeed, with our current CI infrastructure with OSA, we have the ability to create these dynamic scenarios (which can actually be defined by a simple Zuul variable).
https://github.com/openstack/openstack-ansible/blob/master/zuul.d/playbooks/...
We do some really neat introspection of the project name being tested in order to run specific scenarios, so that should be quite easy to accomplish simply by overriding a scenario name within Zuul. It is also worth mentioning that we have supported full metal deploys for a while now, so not having to worry about containers is something to keep in mind as well (again, simplifying the developer experience).
— Slawek Kaplonski Senior software engineer Red Hat
-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
Sean Mooney <smooney@redhat.com> writes:
The real problem with that is: who is going to port all of the existing plugins?
Do all projects and all jobs have to be converted at once? Or ever?
How much complexity do those plugins actually contain? Would they be fairly straightforward to convert?
On Tue, 2019-06-04 at 08:39 -0400, Doug Hellmann wrote: That depends. Some just add support for individual projects. Others install infrastructure services like ceph or kubernetes which will be used by OpenStack services. Others download and compile C projects from source, like networking-ovs-dpdk. The neutron devstack plugin also used to compile OVS from source to work around some distro bugs, and I believe networking-ovn can also do the same. A devstack plugin allows all of the above to be done trivially.
It's possible to do all of that sort of thing through Ansible, too. I compile a couple of different tools as part of my developer setup playbooks. If the logic is complicated, the playbook can always call a script.
Could we build a "devstack plugin wrapper" for OSA? Could we run OSA and then run devstack with just the plugin(s) needed for a given job? That would likely be possible. I'm sure we could generate a local.conf from OSA's inventories and run the plugins after OSA runs. Devstack always runs its in-tree code in each phase and then runs the plugins in the order they are enabled in each phase
https://docs.openstack.org/devstack/latest/plugins.html
networking-ovs-dpdk, for example, replaces the _neutron_ovs_base_install_agent_packages function https://github.com/openstack/networking-ovs-dpdk/blob/master/devstack/libs/o... with a no-op, and then in the install phase we install ovs-dpdk from source. _neutron_ovs_base_install_agent_packages just installs kernel OVS, but we replace it because our patches to make it conditional in devstack were rejected.
What we end up with after this transition might work differently. Is there any reason it would have to maintain the "phase" approach? The ovs-dpdk example you give feels like it would be swapping one role for another in the playbook for the job that needs ovs-dpdk.
It's not necessarily a pattern I encourage, but if you have to, you can replace any functionality that devstack provides via a plugin, although most use cases really don't require that.
Maybe we don't need to design around that if the requirement isn't common, then? That's another question for the analysis someone needs to do.
Does OSA need to support *every* configuration value? Or could it deploy a stack, and then rely on a separate tool to modify config values and restart a service? Clearly some values need to be there when the cloud first starts, but do they all? I think to preserve the workflow, yes, we need to be able to override any config that is generated
OK, so it sounds like that's an area to look at for gaps for OSA. I would imagine it would be possible to create a role to change arbitrary config settings based on inputs from the playbook or a vars file. -- Doug
Hi everyone,

I find myself wondering whether doing this in reverse would potentially be more useful and less disruptive.

If devstack plugins in service repositories are converted from bash to ansible role(s), then there is potential for OSA to make use of that. This could potentially be a drop-in replacement for devstack by using a #!/bin/ansible (or whatever known path) shebang in a playbook file, or by changing the devstack entry point into a wrapper that runs ansible from a known path.

Using this implementation process would allow a completely independent development process for the devstack conversion, and would allow OSA to retire its independent role repositories as and when the service’s ansible role is ready.

Using this method would also allow devstack, OSA, triple-o and kolla-ansible to consume those ansible roles in whatever way they see fit, using playbooks which are tailored to their own deployment philosophy.

At the most recent PTG there was a discussion between OSA and kolla-ansible about something like this, and the conclusion on how that could be done was to ensure that the roles have a clear set of inputs and outputs, with variables enabling the code paths to key outputs.

My opinion is that the convergence of all Ansible-based deployment tools to use a common set of roles would be advantageous in many ways:

1. There will be more hands & eyeballs on the deployment code.
2. There will be more eyeballs on the reviews for service and deployment code.
3. There will be a convergence of developer and operator communities on the reviews.
4. The deployment code will co-exist with the service code, so changes can be done together.
5. Ansible is more pythonic than bash, and using it can likely result in the removal of a bunch of devstack bash libs.

As Doug suggested, this starts with putting together some requirements - for the wrapping frameworks, as well as the component roles.
It may be useful to get some sort of representative sample service to put together a PoC on to help figure out these requirements. I think that this may be useful for the tripleo-ansible team to have a view on, I’ve added the tag to the subject of this email. Best regards, Jesse IRC: odyssey4me
Jesse Pretorius <jesse@odyssey4.me> writes:
Hi everyone,
I find myself wondering whether doing this in reverse would potentially be more useful and less disruptive.
If devstack plugins in service repositories are converted from bash to ansible role(s), then there is potential for OSA to make use of that. This could potentially be a drop-in replacement for devstack by using a #!/bin/ansible (or whatever known path) shebang in a playbook file, or by changing the devstack entry point into a wrapper that runs ansible from a known path.
Using this implementation process would allow a completely independent development process for the devstack conversion, and would allow OSA to retire its independent role repositories as and when the service’s ansible role is ready.
It depends on whether you want to delay the deprecation of devstack itself until enough services have done that, or if you want to make NewDevstack (someone should come up with a name for the OSA-based devstack replacement) consume those existing plugins in parallel with OSA.
Using this method would also allow devstack, OSA, triple-o and kolla-ansible to consume those ansible roles in whatever way they see fit using playbooks which are tailored to their own deployment philosophy.
That would be useful.
At the most recent PTG there was a discussion between OSA and kolla-ansible about something like this and the conversation for how that could be done would be to ensure that the roles have a clear set of inputs and outputs, with variables enabling the code paths to key outputs.
My opinion is that the convergence of all Ansible-based deployment tools to use a common set of roles would be advantageous in many ways:
1. There will be more hands & eyeballs on the deployment code. 2. There will be more eyeballs on the reviews for service and deployment code. 3. There will be a convergence of developer and operator communities on the reviews.
That might make all of this worth it, even if there is no other benefit.
4. The deployment code will co-exist with the service code, so changes can be done together. 5. Ansible is more pythonic than bash, and using it can likely result in the removal of a bunch of devstack bash libs.
As Doug suggested, this starts with putting together some requirements - for the wrapping frameworks, as well as the component roles. It may be useful to get some sort of representative sample service to put together a PoC on to help figure out these requirements.
I think that this may be useful for the tripleo-ansible team to have a view on, I’ve added the tag to the subject of this email.
Best regards,
Jesse IRC: odyssey4me
-- Doug
On 6/4/19 7:39 AM, Doug Hellmann wrote:
Sean Mooney <smooney@redhat.com> writes:
On Mon, Jun 3, 2019 at 8:27 AM Slawomir Kaplonski <skaplons@redhat.com> wrote:
Hi,
On 1 Jun 2019, at 20:49, Mohammed Naser <mnaser@vexxhost.com> wrote:
On Sat, Jun 1, 2019 at 1:46 PM Slawomir Kaplonski <skaplons@redhat.com> wrote:
Hi,
I don't know OSA at all, so sorry if my question is dumb, but in devstack we can easily write plugins, keep them in a separate repo, and such a plugin can easily be used in devstack (e.g. it's used a lot in CI jobs). Is something similar possible with OSA, or will every change always need to be contributed to the OSA repository?
Not a dumb question at all. So, we do have this concept of 'roles' which you _could_ kinda technically consider similar to plugins. However, I think one of the things that would come out of this is that projects lose the ability to maintain their own plugins (right now you can host neutron/devstack/plugins and maintain that repo yourself); under this structure, you would indeed have to make those changes to the OpenStack Ansible Neutron role
i.e.: https://opendev.org/openstack/openstack-ansible-os_neutron
However, I think from an OSA perspective, we would be more than happy to add project maintainers for specific projects to their appropriate roles. It would make sense that there is someone from the Neutron team that could be a core on os_neutron, for example.
Yes, that may work for official projects like Neutron. But what about everything else, like projects hosted now in opendev.org/x/ repositories? Devstack gives everyone an easy way to integrate their own plugin/driver/project with it and install it together with everything else by simply adding one line (usually) in the local.conf file. I think that it may be a bit hard for the OSA team to accept and review patches with new roles for every project or driver which isn't an official OpenStack project.
You raise a really good concern. Indeed, we might have to change the workflow from "write a plugin" to "write an Ansible role" to be able to test your project at that stage (or maintain both a "legacy" solution and a new one).
On Mon, 2019-06-03 at 08:39 -0400, Mohammed Naser wrote: the real problem with that is who is going to port all of the existing plugins.
Do all projects and all jobs have to be converted at once? Or ever?
Perhaps not all at once, but I would say they all need to be converted eventually, or we end up in the situation Dean mentioned where we have to maintain two different deployment systems. I would argue that's much worse than just continuing with devstack as-is.

On the other hand, practically speaking, I don't think we can do them all at once, unless there are a lot fewer devstack plugins in the wild than I think there are (which is possible). Also, I suspect there may be downstream plugins running in third-party CI that need to be considered.

That said, while I expect this would be _extremely_ painful in the short to medium term, I'm also a big proponent of making the thing developers care about the same as the thing users care about. However, if we go down this path I think we need sufficient buy-in from a diverse enough group of contributors that losing one group (see OSIC) doesn't leave us with a half-finished migration. That would be a disaster IMHO.
How much complexity do those plugins actually contain? Would they be fairly straightforward to convert?
Could we build a "devstack plugin wrapper" for OSA? Could we run OSA and then run devstack with just the plugin(s) needed for a given job?
Is there enough appeal in the idea of replacing devstack with something closer to what is used for production deployments to drive us to find an iterative approach that doesn't require changing everything at one time? Or are we stuck with devstack forever?
kolla-ansible has also tried to be a devstack replacement in the past via the introduction of dev-mode which clones the git repo of the dev mode project locally and bind mounts them into the container. the problem is it still breaks peoles plugins and workflow.
some devstack feature that osa would need to support in order to be a replacement for me are.
You've made a good start on a requirements list for a devstack replacement. Perhaps a first step would be for some of the folks who support this idea to compile a more complete list of those requirements, and then we could analyze OSA to see how it might need to be changed or whether it makes sense to use OSA as the basis for a new toolset that takes on some of the "dev" features we might not want in a "production" deployment tool.
Here's another potential gap for whoever is going to make that list: devstack pre-populates the environment with some data for things like flavors and images. I don't imagine OSA does that or, if it does, that they are an exact match. How do we change those settings?
That leads to a good second step: Do the rest of the analysis to understand what it would take to set up a base job like we have for devstack, that produces a similar setup. Not necessarily identical, but similar enough to be able to run tempest. It seems likely that already exists in some form for testing OSA itself. Could a developer run that on a local system (clearly being able to build the test environment locally is a requirement for replacing devstack)?
After that, I would want to see answers to some of the questions about dealing with plugins that I posed above.
And only then, I think, could I provide an answer to the question of whether we should make the change or not.
1 the ablity to install all openstack project form git if needed including gerrit reviews.
abiltiy to eailly specify gerrit reiews or commits for each project
# here i am declaring the os-vif should be installed from git not pypi LIBS_FROM_GIT=os-vif
# and here i am specifying that gerrit should be used as the source and # i am provide a gerrit/git refs branch for a specific un merged patch OS_VIF_REPO=https://git.openstack.org/openstack/os-vif OS_VIF_BRANCH=refs/changes/25/629025/9
# *_REPO can obvioulsy take anythign that is valid in a git clone command so # i can use a local repo too NEUTRON_REPO=file:///opt/repos/neutron # and *_BRANCH as the name implices works with branches, tag commits* and gerrit ref brances. NEUTRON_BRANCH=bug/1788009
the next thing that would be needed is a way to simply override any config value like this
[[post-config|/etc/nova/nova.conf]] #[compute] #live_migration_wait_for_vif_plug=True [libvirt] live_migration_uri = qemu+ssh://root@%s/system #cpu_mode = host-passthrough virt_type = kvm cpu_mode = custom cpu_model = kvm64
im sure that osa can do that but i really can just provide any path to any file if needed. so no need to update a role or plugin to set values in files created by plugins which is the next thing.
Does OSA need to support *every* configuration value? Or could it deploy a stack, and then rely on a separate tool to modify config values and restart a service? Clearly some values need to be there when the cloud first starts, but do they all?
we enable plugins with a single line like this
enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk master
meaning there is no need to preinstall or clone the repo. in theory the plugin should install all its dependeices and devstack will clone and execute the plugins based on the single line above. plugins however can also
This makes me think it might be most appropriate to be considering a tool that replaces devstack by wrapping OSA, rather than *being* OSA. Maybe that's just an extra playbook that runs before OSA, or maybe it's a simpler bash script that does some setup before invoking OSA.
read any varable defiend in the local.conf as it will be set in the environment which means i can easily share an exact configuration with someone by shareing a local.conf.
im not against improving or replacing devstack but with the devstack ansible roles and the fact we use devstack for all our testing in the gate it is actually has become one of the best openstack installer out there. we do not recommend people run it in production but with the ansible automation of grenade and the move to systemd for services there are less mainatined installers out there that devstack is proably a better foundation for a cloud to build on. people should still not use it in production but i can see why some might.
Speaking about CI, e.g. in neutron we currently have jobs like neutron-functional or neutron-fullstack which uses only some parts of devstack. That kind of jobs will probably have to be rewritten after such change. I don’t know if neutron jobs are only which can be affected in that way but IMHO it’s something worth to keep in mind.
Indeed, with our current CI infrastructure with OSA, we have the ability to create these dynamic scenarios (which can actually be defined by a simple Zuul variable).
https://github.com/openstack/openstack-ansible/blob/master/zuul.d/playbooks/...
We do some really neat introspection of the project name being tested in order to run specific scenarios. Therefore, that is something that should be quite easy to accomplish simply by overriding a scenario name within Zuul. It also is worth mentioning we now support full metal deploys for a while now, so not having to worry about containers is something to keep in mind as well (with simplifying the developer experience again).
> On 1 Jun 2019, at 14:35, Mohammed Naser <mnaser@vexxhost.com> wrote: > > Hi everyone, > > This is something that I've discussed with a few people over time and > I think I'd probably want to bring it up by now. I'd like to propose > and ask if it makes sense to perhaps replace devstack entirely with > openstack-ansible. I think I have quite a few compelling reasons to > do this that I'd like to outline, as well as why I *feel* (and I could > be biased here, so call me out!) that OSA is the best option in terms > of a 'replacement' > > # Why not another deployment project? > I actually thought about this part too and considered this mainly for > ease of use for a *developer*. > > At this point, Puppet-OpenStack pretty much only deploys packages > (which means that it has no build infrastructure, a developer can't > just get $commit checked out and deployed). > > TripleO uses Kolla containers AFAIK and those have to be pre-built > beforehand, also, I feel they are much harder to use as a developer > because if you want to make quick edits and restart services, you have > to enter a container and make the edit there and somehow restart the > service without the container going back to it's original state. > Kolla-Ansible and the other combinations also suffer from the same > "issue". > > OpenStack Ansible is unique in the way that it pretty much just builds > a virtualenv and installs packages inside of it. The services are > deployed as systemd units. This is very much similar to the current > state of devstack at the moment (minus the virtualenv part, afaik). > It makes it pretty straight forward to go and edit code if you > need/have to. We also have support for Debian, CentOS, Ubuntu and > SUSE. This allows "devstack 2.0" to have far more coverage and make > it much more easy to deploy on a wider variety of operating systems. > It also has the ability to use commits checked out from Zuul so all > the fancy Depends-On stuff we use works. 
> > # Why do we care about this, I like my bash scripts! > As someone who's been around for a *really* long time in OpenStack, > I've seen a whole lot of really weird issues surface from the usage of > DevStack to do CI gating. For example, one of the recent things is > the fact it relies on installing package-shipped noVNC, where as the > 'master' noVNC has actually changed behavior a few months back and it > is completely incompatible at this point (it's just a ticking thing > until we realize we're entirely broken). > > To this day, I still see people who want to POC something up with > OpenStack or *ACTUALLY* try to run OpenStack with DevStack. No matter > how many warnings we'll put up, they'll always try to do it. With > this way, at least they'll have something that has the shape of an > actual real deployment. In addition, it would be *good* in the > overall scheme of things for a deployment system to test against, > because this would make sure things don't break in both ways. > > Also: we run Zuul for our CI which supports Ansible natively, this can > remove one layer of indirection (Zuul to run Bash) and have Zuul run > the playbooks directly from the executor. > > # So how could we do this? > The OpenStack Ansible project is made of many roles that are all > composable, therefore, you can think of it as a combination of both > Puppet-OpenStack and TripleO (back then). Puppet-OpenStack contained > the base modules (i.e. puppet-nova, etc) and TripleO was the > integration of all of it in a distribution. OSA is currently both, > but it also includes both Ansible roles and playbooks. > > In order to make sure we maintain as much of backwards compatibility > as possible, we can simply run a small script which does a mapping of > devstack => OSA variables to make sure that the service is shipped > with all the necessary features as per local.conf. 
> So the new process could be:
>
> 1) parse local.conf and generate Ansible variables files
> 2) install Ansible (if not running in gate)
> 3) run playbooks using variables generated in #1
>
> The neat thing is that after all of this, devstack just becomes a thin wrapper around Ansible roles. I also think it brings a lot of hands together, involving both the QA and OSA teams, and I believe that pooling our resources will greatly help us get more done and avoid duplicating our efforts.
>
> # Conclusion
> This is the start of a very open-ended discussion. I'm sure there are a lot of implementation details that will surface, but I think it could be a good step overall in simplifying our CI and adding more coverage for real potential deployers. It will help two teams unite and have more resources for something that is essentially somewhat duplicated effort at the moment.
>
> I will try to pick up some time to POC a simple service being deployed by an OSA role instead of Bash (placement seems like a very simple one) and share that eventually.
>
> Thoughts? :)
>
> --
> Mohammed Naser — vexxhost
> D. 514-316-8872
> D. 800-910-1726 ext. 200
> E. mnaser@vexxhost.com
> W. http://vexxhost.com
— Slawek Kaplonski, Senior software engineer, Red Hat
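For what it's worth, step 1 of the quoted process (parse local.conf and generate Ansible variables) could look roughly like the sketch below. The variable names and the devstack-to-OSA mapping table are purely illustrative assumptions, not an existing interface in either project:

```python
# Illustrative sketch: turn a devstack local.conf fragment into a dict of
# Ansible variables. The mapping table and variable names are hypothetical.
import configparser

# Hypothetical mapping of devstack settings to OSA-style Ansible variables.
DEVSTACK_TO_OSA = {
    "ADMIN_PASSWORD": "keystone_auth_admin_password",
    "DATABASE_PASSWORD": "galera_root_password",
    "ENABLED_SERVICES": "osa_enabled_services",
}

def local_conf_to_vars(text: str) -> dict:
    """Parse the [[local|localrc]] section of a local.conf into Ansible vars."""
    parser = configparser.ConfigParser()
    # devstack uses [[local|localrc]] meta-sections; normalise to an INI header.
    parser.read_string(text.replace("[[local|localrc]]", "[localrc]"))
    ansible_vars = {}
    for key, value in parser.items("localrc"):
        mapped = DEVSTACK_TO_OSA.get(key.upper())
        if mapped:
            ansible_vars[mapped] = value
    return ansible_vars

sample = """
[[local|localrc]]
ADMIN_PASSWORD=secret
ENABLED_SERVICES=key,n-api,placement
"""
print(local_conf_to_vars(sample))
```

A real implementation would of course have to handle the full local.conf grammar (multiple meta-sections, post-config blocks) and a far larger mapping table; this only shows the shape of the translation.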
Ben Nemec <openstack@nemebean.com> writes:
On 6/4/19 7:39 AM, Doug Hellmann wrote:
Do all projects and all jobs have to be converted at once? Or ever?
Perhaps not all at once, but I would say they all need to be converted eventually, or we end up in the situation Dean mentioned where we have to maintain two different deployment systems. I would argue that's much worse than just continuing with devstack as-is. On the other hand, practically speaking, I don't think we can do them all at once, unless there are a lot fewer devstack plugins in the wild than I think there are (which is possible). Also, I suspect there may be downstream plugins running in third-party CI that need to be considered.
I think we can't do them all at once. We can never do anything all at once; we're too big. I don't think we should have a problem saying that devstack is frozen for new features but will continue to run as-is, and new things should use the replacement (when it is available). As soon as the new thing can provide a bridge with *some* level of support for plugins, we could start transitioning as teams have time and need. Jesse's proposal to rewrite devstack plugins as ansible roles may give us that bridge.
That said, while I expect this would be _extremely_ painful in the short to medium term, I'm also a big proponent of making the thing developers care about the same as the thing users care about. However, if we go down this path I think we need sufficient buy in from a diverse enough group of contributors that losing one group (see OSIC) doesn't leave us with a half-finished migration. That would be a disaster IMHO.
Oh, yes. We would need this not to be a project undertaken by a group of people from one funding source. It needs to be a shift in direction of the community as a whole to improve our developer and testing tools. -- Doug
I don't think I have enough coffee in me to fully digest this, but wanted to point out a couple of things. FWIW, this is something I've thought we should do for a while now. On Sat, Jun 1, 2019 at 8:43 AM Mohammed Naser <mnaser@vexxhost.com> wrote:
> [snip]
> TripleO uses Kolla containers AFAIK, and those have to be pre-built beforehand. Also, I feel they are much harder to use as a developer, because if you want to make quick edits and restart services, you have to enter a container, make the edit there, and somehow restart the service without the container going back to its original state. Kolla-Ansible and the other combinations suffer from the same "issue".
FWIW, kolla-ansible (and maybe tripleo?) has a "development" mode which mounts the code as a volume, so you can make edits and just run "docker restart $service". Though systemd does make that a bit nicer due to globs (e.g. systemctl restart nova-*). That said, I do agree moving to something where systemd is running the services would make for a smoother transition for developers.
> [snip]
> To this day, I still see people who want to POC something up with OpenStack, or *ACTUALLY* try to run OpenStack, with DevStack. No matter how many warnings we put up, they'll always try to do it. This way, at least they'll have something that has the shape of an actual real deployment. In addition, it would be *good* in the overall scheme of things to have a deployment system to test against, because this would make sure things don't break in both directions.
++
> [snip]
> In order to maintain as much backwards compatibility as possible, we can simply run a small script which does a mapping of devstack => OSA variables, to make sure that the service is shipped with all the necessary features as per local.conf.
++
> [snip]
The reason this hasn't been pushed on in the past is to avoid the perception that the TC or QA team is choosing a "winner" in the deployment space. I don't think that's a good reason not to do something like this (especially with the drop in contributors since I've had that discussion). However, we do need to message this carefully at a minimum.
On Mon, Jun 3, 2019 at 8:02 AM Jim Rollenhagen <jim@jimrollenhagen.com> wrote:
> [snip]
> FWIW, kolla-ansible (and maybe tripleo?) has a "development" mode which mounts the code as a volume, so you can make edits and just run "docker restart $service". Though systemd does make that a bit nicer due to globs (e.g. systemctl restart nova-*).
>
> That said, I do agree moving to something where systemd is running the services would make for a smoother transition for developers.
I didn't know about this (it wasn't around back when I was experimenting with Kolla). This does seem like a possible solution, if we're okay with adding the Docker dependency to DevStack and the workflow changing from restarting services to restarting containers.
> [snip]
> The reason this hasn't been pushed on in the past is to avoid the perception that the TC or QA team is choosing a "winner" in the deployment space. I don't think that's a good reason not to do something like this (especially with the drop in contributors since I've had that discussion). However, we do need to message this carefully at a minimum.
Right. I think that's because in the OpenStack-Ansible world, we have two things:

- OSA roles: nothing but basic roles to deploy OpenStack services, with external consumers
- Integrated: contains all the playbooks

In a way, our roles are "Puppet OpenStack" and our integrated repo is "TripleO", back when TripleO deployed via Puppet anyway. I have to be honest, I wish our roles lived under a different name so we could all collaborate on them (because an Ansible role to deploy something generically is needed regardless). We've actually done a lot of work with the TripleO team, and they are consuming one of our roles (os_tempest) to do all their tempest testing; we gate TripleO and they gate us for the role.
-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
On Mon, 3 Jun 2019, 12:57 Jim Rollenhagen, <jim@jimrollenhagen.com> wrote:
> [snip]
> In order to make sure we maintain as much of backwards compatibility as possible, we can simply run a small script which does a mapping of devstack => OSA variables to make sure that the service is shipped with all the necessary features as per local.conf.
> ++
This strikes me as a considerable undertaking that would never reach full compatibility, due to the lack of a defined API. It might get close with a bit of effort, but I expect there are scripts and plugins that don't have an analogue in OSA (ironic, I'm looking at you).
> [snip]
> The reason this hasn't been pushed on in the past is to avoid the perception that the TC or QA team is choosing a "winner" in the deployment space. I don't think that's a good reason not to do something like this (especially with the drop in contributors since I've had that discussion). However, we do need to message this carefully at a minimum.
With my Kolla hat on, this does concern me. If you're trying out OpenStack and spend enough quality time with OSA to become familiar with it, you're going to be less inclined to do your homework on deployment tools. It would be nice if the deployment space weren't so fragmented, but we all have our reasons.
On Sat, Jun 1, 2019, at 5:36 AM, Mohammed Naser wrote:
> [snip]
> # Why do we care about this, I like my bash scripts! As someone who's been around for a *really* long time in OpenStack, I've seen a whole lot of really weird issues surface from the usage of DevStack to do CI gating. For example, one of the recent things is the fact it relies on installing package-shipped noVNC, whereas the 'master' noVNC actually changed behavior a few months back and is completely incompatible at this point (it's just a ticking thing until we realize we're entirely broken).
I'm not sure this is a great example case. We consume prebuilt software for many of our dependencies. Everything from the kernel to the database to rabbitmq to ovs (and so on) is consumed as prebuilt packages from our distros. In many cases this is desirable, to ensure that our software works with the other software out in the wild that people will be deploying with.
> [snip]
> Also: we run Zuul for our CI, which supports Ansible natively; this can remove one layer of indirection (Zuul running Bash) and have Zuul run the playbooks directly from the executor.
I think that if you have developers running a small wrapper locally to deploy this new development stack, you should run that same wrapper in CI. This ensures the wrapper doesn't break.
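One way to make "the same wrapper everywhere" cheap to keep working is to separate building the ansible-playbook invocation (testable anywhere) from actually executing it. A minimal sketch follows; the playbook name, variables file, and `--gate` flag are hypothetical, not anything that exists today:

```python
# Minimal sketch of a single wrapper entry point used both locally and in
# CI: it only *builds* the ansible-playbook command line, so the same code
# path is exercised everywhere. All names here are hypothetical.
import sys

def build_command(vars_file: str, in_gate: bool) -> list:
    """Return the ansible-playbook command the wrapper would run."""
    cmd = ["ansible-playbook", "-e", "@" + vars_file, "site.yml"]
    if not in_gate:
        # Locally we might want verbose output; in the gate, Zuul
        # already captures ample logging on its own.
        cmd.insert(1, "-v")
    return cmd

if __name__ == "__main__":
    in_gate = "--gate" in sys.argv
    print(" ".join(build_command("devstack_vars.yml", in_gate)))
```

A real wrapper would then hand the list to subprocess (locally) while the gate job invokes the identical script, which is exactly the property Clark is asking for.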
> [snip]
For me there are two major items to consider that haven't been brought up yet. The first is devstack's (lack of) speed: any replacement should be at least as quick as the current tooling, because the current tooling is slow enough already. The other is logging. I spend a lot of time helping people debug CI job runs, and devstack has grown a fairly effective set of logging that I miss just about any time I have to help debug another deployment tool's CI jobs (because they tend to log only a tiny fraction of what devstack logs).

Clark
On Mon, Jun 3, 2019 at 11:05 AM Clark Boylan <cboylan@sapwetik.org> wrote:
> [snip]
> I'm not sure this is a great example case. We consume prebuilt software for many of our dependencies. Everything from the kernel to the database to rabbitmq to ovs (and so on) is consumed as prebuilt packages from our distros. In many cases this is desirable to ensure that our software works with the other software out there in the wild that people will be deploying with.
Yeah, I guess that's fair, but there are still other things, like the lack of coverage for many other operating systems.
> [snip]
I think if you have developers running a small wrapper locally to deploy this new development stack, you should run that same wrapper in CI. This ensures the wrapper doesn't break.
That's fair enough; that's always been the tricky part of driving things directly from Zuul versus via a small executor.
# So how could we do this? The OpenStack Ansible project is made of many composable roles, so you can think of it as a combination of both Puppet-OpenStack and TripleO (back then): Puppet-OpenStack contained the base modules (i.e. puppet-nova, etc.) and TripleO was the integration of all of them into a distribution. OSA is currently both, as it includes both the Ansible roles and the playbooks.
To maintain as much backwards compatibility as possible, we can simply run a small script that maps devstack settings to OSA variables, making sure each service ships with all the necessary features as configured in local.conf.
So the new process could be:
1) Parse local.conf and generate Ansible variable files
2) Install Ansible (if not running in the gate)
3) Run the playbooks using the variables generated in step 1
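As a rough illustration of step 1, a minimal translator could read the [[local|localrc]] section of a local.conf and emit Ansible variables. This is only a sketch; the devstack-to-OSA name mapping below is a hypothetical placeholder, not a table of real openstack-ansible variable names:

```python
# Sketch of step 1: map devstack local.conf settings to Ansible variables.
# The OSA variable names on the right are hypothetical placeholders.
DEVSTACK_TO_OSA = {
    "ADMIN_PASSWORD": "keystone_auth_admin_password",
    "DATABASE_PASSWORD": "galera_root_password",
}

def localconf_to_vars(text):
    """Parse the [[local|localrc]] section and return Ansible variables."""
    ansible_vars = {}
    in_localrc = False
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("[["):  # meta-section marker, e.g. [[post-config|...]]
            in_localrc = (line == "[[local|localrc]]")
            continue
        if in_localrc and "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            osa_name = DEVSTACK_TO_OSA.get(key.strip())
            if osa_name:
                ansible_vars[osa_name] = value.strip()
    return ansible_vars

sample = """\
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=dbpass
[[post-config|$NOVA_CONF]]
[DEFAULT]
debug = True
"""
print(localconf_to_vars(sample))
# {'keystone_auth_admin_password': 'secret', 'galera_root_password': 'dbpass'}
```

The resulting dict would then be written out as a vars file for step 3; unknown settings and post-config sections would need real handling that this sketch skips.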
The neat thing is that, after all of this, devstack just becomes a thin wrapper around Ansible roles. I also think it brings a lot of hands together, involving both the QA and OSA teams; pooling our resources will greatly help us get more done and avoid duplicating our efforts.
# Conclusion This is the start of a very open-ended discussion. I'm sure there are a lot of implementation details that will surface, but I think it could be a good step overall in simplifying our CI and adding more coverage for real potential deployers. It would help two teams unite and put more resources behind something that is essentially duplicated effort at the moment.
I will try to find some time to PoC a simple service being deployed by an OSA role instead of Bash (placement seems like a very simple one) and share that eventually.
Thoughts? :)
For me there are two major items to consider that haven't been brought up yet. The first is devstack's (lack of) speed: any replacement should be at least as quick as the current tooling, because the current tooling is slow enough already. The other is logging. I spend a lot of time helping people debug CI job runs, and devstack has grown a fairly effective set of logging that I miss just about any time I have to help debug another deployment tool's CI jobs (because they tend to log only a tiny fraction of what devstack logs).
The idea is *not* to use OpenStack Ansible to deploy DevStack; it's to use the roles to deploy the specific services. Therefore, the log collection should all stay the same, as long as it pulls down the correct systemd units (which should match). The idea is that it should be 100% transparent to the user at the end of the day: there should be no functional changes in how DevStack runs or what it logs in the gate.
Clark
-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
On Mon, Jun 3, 2019, at 8:15 AM, Mohammed Naser wrote:
On Mon, Jun 3, 2019 at 11:05 AM Clark Boylan <cboylan@sapwetik.org> wrote:
On Sat, Jun 1, 2019, at 5:36 AM, Mohammed Naser wrote:
snip
The idea is *not* to use OpenStack Ansible to deploy DevStack, it's to use the roles to deploy the specific services. Therefore, the log collection stuff should all still be the same, as long as it pulls down the correct systemd unit (which should be matching).
I know. I'm saying that the logging these other systems produce is typically lacking compared to devstack's, so any change needs to address that.
The idea is that it should be 100% transparent to the user at the end of the day: there should be no functional changes in how DevStack runs or what it logs in the gate.
If this is the plan, then the logging concerns should be addressed as part of the "don't make it a noticeable change" work.

Clark
On Mon, 3 Jun 2019, 15:59 Clark Boylan, <cboylan@sapwetik.org> wrote:
For me there are two major items to consider that haven't been brought up yet. The first is devstack's (lack of) speed. Any replacement should be at least as quick as the current tooling because the current tooling is slow enough already.
This is important. We would need to see benchmark comparisons between a devstack install and an OSA install. Shell may be slow, but Ansible is generally slower. That's fine in production, where reliability is king, but we need fast iteration for development. I haven't looked under the covers of devstack for some time, but it previously installed all Python deps in one place, whereas OSA has a virtualenv per service, which could take a while to build. Perhaps this is configurable.
On Sat, Jun 1, 2019, at 05:36, Mohammed Naser wrote:
You laid out three reasons to switch, and to be frank, I don't find any of them compelling. This is tooling that hundreds of people and machines rely on and are familiar with; undertaking a massive change like this deserves some *really* compelling, even *dire*, rationalization, plus metrics showing the new thing is better than the old one. This thread reads as proposing change for the sake of change.

Colleen
On Mon, Jun 3, 2019 at 3:51 PM Colleen Murphy <colleen@gazlene.net> wrote:
You laid out three reasons below to switch, and to be frank, I don't find any of them compelling. This is tooling that hundreds of people and machines rely on and are familiar with, and to undertake a massive change like this deserves some *really* compelling, even *dire*, rationalization for it, and metrics showing it is better than the old thing. This thread reads as proposing change for the sake of change.
That's fair. My argument was that we have a QA team that is strapped for resources and doing the same work the OSA team is doing, so deduplicating our efforts could most probably help us get more done, because the work can be split across more people. I totally get that people might not want to do it. That's fine; it is, after all, a proposal, and if the majority of the community feels that devstack is okay and that it has enough maintainers, then I wouldn't want to change that either.
[I have been trying to decide where to jump in here; this seems as good a place as any.] On Mon, Jun 3, 2019 at 2:48 PM Colleen Murphy <colleen@gazlene.net> wrote:
You laid out three reasons below to switch, and to be frank, I don't find any of them compelling. This is tooling that hundreds of people and machines rely on and are familiar with, and to undertake a massive change like this deserves some *really* compelling, even *dire*, rationalization for it, and metrics showing it is better than the old thing. This thread reads as proposing change for the sake of change.
Colleen makes a great point here about the required scope of this proposal to actually be a replacement for DevStack.

A few of us have wanted to replace DevStack with something better probably since a year after we introduced it in Boston (the first time). The primary problems with replacing it are both technical and business/political. There have been two serious attempts: the first was what became harlowja's Anvil project, which actually had different goals than DevStack, and the second was discussed at the first PTG in Atlanta as an OSA-based orchestrator that could replace parts incrementally and was going to (at least partially) leverage Zuul v3. That died with the rest of OSIC (RIP). The second proposal was very similar to mnaser's current one.

To actually _replace_ DevStack you have to meet a major fraction of its use cases, which are more than anyone imagined back in the day. Both prior attempts failed to address all of the use cases, and (I believe) that limited the number of people willing or able to get involved. Anything short of complete replacement fails to meet the 'deduplication of work' goal (see https://xkcd.com/927/).

IMHO the biggest problem here is finding anyone willing to fund this work. It is a huge project that will only count toward a sponsor company's stats in an area they usually do not pay much attention to.

I am not trying to throw cold water on this; I will gladly support, from a moderate distance, any effort to rid us of DevStack. I believe that knowing what has been attempted in the past will either inform how to approach it differently now or identify what in our community has changed to make trying again worthwhile. Go for it!

dt -- Dean Troyer dtroyer@gmail.com
I am in favour of ditching, or at least refactoring, devstack, because during the last year I often found myself blocked from fixing some zuul/jobs issues: the buggy code was still required by legacy devstack jobs that nobody had time to maintain or fix, so those jobs were isolated and the default job configurations were forced to carry dirty hacks to keep them working.

One such example is a task that does a "chmod -R 0777" on the entire source tree, a total security threat. To make other jobs run correctly* I had to undo the damage done by that chmod, because I was not able to disable the historical hack.

* ansible throws warnings for unsafe file permissions
* ssh refuses to load unsafe keys

That is why I am in favor of dropping features that are slowing down the progress of others. I know the reality is more complicated, but I also think that sometimes less* is more.

* deployment projects ;)
On 4 Jun 2019, at 04:36, Dean Troyer <dtroyer@gmail.com> wrote:
On Mon, 3 Jun 2019, 15:59 Clark Boylan, <cboylan@sapwetik.org <mailto:cboylan@sapwetik.org>> wrote: On Sat, Jun 1, 2019, at 5:36 AM, Mohammed Naser wrote:
Hi everyone,
This is something that I've discussed with a few people over time and I think I'd probably want to bring it up by now. I'd like to propose and ask if it makes sense to perhaps replace devstack entirely with openstack-ansible. I think I have quite a few compelling reasons to do this that I'd like to outline, as well as why I *feel* (and I could be biased here, so call me out!) that OSA is the best option in terms of a 'replacement'
# Why not another deployment project? I actually thought about this part too and considered this mainly for ease of use for a *developer*.
At this point, Puppet-OpenStack pretty much only deploys packages (which means that it has no build infrastructure, a developer can't just get $commit checked out and deployed).
TripleO uses Kolla containers AFAIK and those have to be pre-built beforehand, also, I feel they are much harder to use as a developer because if you want to make quick edits and restart services, you have to enter a container and make the edit there and somehow restart the service without the container going back to it's original state. Kolla-Ansible and the other combinations also suffer from the same "issue".
OpenStack Ansible is unique in the way that it pretty much just builds a virtualenv and installs packages inside of it. The services are deployed as systemd units. This is very much similar to the current state of devstack at the moment (minus the virtualenv part, afaik). It makes it pretty straight forward to go and edit code if you need/have to. We also have support for Debian, CentOS, Ubuntu and SUSE. This allows "devstack 2.0" to have far more coverage and make it much more easy to deploy on a wider variety of operating systems. It also has the ability to use commits checked out from Zuul so all the fancy Depends-On stuff we use works.
# Why do we care about this, I like my bash scripts! As someone who's been around for a *really* long time in OpenStack, I've seen a whole lot of really weird issues surface from the usage of DevStack to do CI gating. For example, one of the recent things is the fact it relies on installing package-shipped noVNC, where as the 'master' noVNC has actually changed behavior a few months back and it is completely incompatible at this point (it's just a ticking thing until we realize we're entirely broken).
I'm not sure this is a great example case. We consume prebuilt software for many of our dependencies. Everything from the kernel to the database to rabbitmq to ovs (and so on) are consumed as prebuilt packages from our distros. In many cases this is desirable to ensure that our software work with the other software out there in the wild that people will be deploying with.
To this day, I still see people who want to POC something up with OpenStack or *ACTUALLY* try to run OpenStack with DevStack. No matter how many warnings we'll put up, they'll always try to do it. With this way, at least they'll have something that has the shape of an actual real deployment. In addition, it would be *good* in the overall scheme of things for a deployment system to test against, because this would make sure things don't break in both ways.
Also: we run Zuul for our CI which supports Ansible natively, this can remove one layer of indirection (Zuul to run Bash) and have Zuul run the playbooks directly from the executor.
I think if you have developers running a small wrapper locally to deploy this new development stack you should run that same wrapper in CI. This ensure the wrapper doesn't break.
# So how could we do this? The OpenStack Ansible project is made of many roles that are all composable, therefore, you can think of it as a combination of both Puppet-OpenStack and TripleO (back then). Puppet-OpenStack contained the base modules (i.e. puppet-nova, etc) and TripleO was the integration of all of it in a distribution. OSA is currently both, but it also includes both Ansible roles and playbooks.
In order to make sure we maintain as much of backwards compatibility as possible, we can simply run a small script which does a mapping of devstack => OSA variables to make sure that the service is shipped with all the necessary features as per local.conf.
So the new process could be:
1) parse local.conf and generate Ansible variables files 2) install Ansible (if not running in gate) 3) run playbooks using variable generated in #1
The neat thing is after all of this, devstack just becomes a thin wrapper around Ansible roles. I also think it brings a lot of hands together, involving both the QA team and OSA team together, which I believe that pooling our resources will greatly help in being able to get more done and avoiding duplicating our efforts.
# Conclusion This is a start of a very open ended discussion, I'm sure there is a lot of details involved here in the implementation that will surface, but I think it could be a good step overall in simplifying our CI and adding more coverage for real potential deployers. It will help two teams unite together and have more resources for something (that essentially is somewhat of duplicated effort at the moment).
I will try to pick up sometime to POC a simple service being deployed by an OSA role instead of Bash, placement which seems like a very simple one and share that eventually.
Thoughts? :)
For me there are two major items to consider that haven't been brought up yet. The first is devstack's (lack of) speed. Any replacement should be at least as quick as the current tooling because the current tooling is slow enough already.
This is important. We would need to see benchmark comparisons between a devstack install and an OSA install. Shell may be slow but Ansible is generally slower. That's fine in production when reliability is king, but we need fast iteration for development.
I haven't looked under the covers of devstack for some time, but it previously installed all python deps in one place, whereas OSA has virtualenvs for each service which could take a while to build. Perhaps this is configurable.
The other is logging. I spend a lot of time helping people debug CI job runs, and devstack has grown a fairly effective set of logging that I miss just about any time I have to help debug another deployment tool's CI jobs (because they tend to log only a tiny fraction of what devstack logs).
Clark
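The per-service virtualenv layout Clark mentions can be sketched with the standard library; the service names and timing logic here are purely illustrative, not OSA's actual build code:

```python
import pathlib
import tempfile
import time
import venv

def build_service_venvs(base_dir, services):
    """Create one virtualenv per service (OSA-style isolation) and time
    each build. with_pip=False keeps this sketch fast; the real cost in a
    deployment comes from with_pip=True plus installing each service's
    requirements once per venv instead of once overall."""
    builder = venv.EnvBuilder(with_pip=False)
    timings = {}
    for name in services:
        target = pathlib.Path(base_dir) / f"{name}-venv"
        start = time.monotonic()
        builder.create(target)
        timings[name] = time.monotonic() - start
    return timings
```

Multiplying that per-venv cost by the number of services is exactly the overhead a benchmark comparison against devstack's single shared install would need to quantify.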
On Tue, 2019-06-04 at 08:56 +0100, Sorin Sbarnea wrote:
I am in favour of ditching or at least refactoring devstack because during the last year I often found myself blocked from fixing some zuul/jobs issues because the buggy code was still required by legacy devstack jobs that nobody had time to maintain or fix, so they were isolated and the default job configurations were forced to use dirty hacks to keep them working.
This sounds like the issue is more related to the fact that it is still using a legacy job. Why not move it over to the Ansible-native devstack jobs?
One such example is that there is a task that does a "chmod -R 0777" on the entire source tree, a total security threat.
In a CI env it is not, and in a development env, if it was in devstack-gate or in the Ansible jobs, it is not. I would not want this in a production system, but it feels a little contrived.
In order to make other jobs run correctly* I had to resort to undoing the damage done by such a chmod, because I was not able to disable the historical hack.
* Ansible throws warnings on unsafe file permissions
* SSH refuses to load unsafe keys
That is why I am in favor of dropping features that are slowing down the progress of others.
That is a self-contradicting statement. If I depend on a feature, then dropping it slows down my progress. If you state that as a goal you will find you will almost always fail, as to speed someone up you slow someone else down. What you want to aim for is a better solution that supports both use cases in a clean and defined way.
I know that the reality is more complicated but I also think that sometimes less* is more.
* deployment projects ;)
On 4 Jun 2019, at 04:36, Dean Troyer <dtroyer@gmail.com> wrote:
On Mon, 3 Jun 2019, 15:59 Clark Boylan <cboylan@sapwetik.org> wrote:
On Sat, Jun 1, 2019, at 5:36 AM, Mohammed Naser wrote:
Hi everyone,
This is something that I've discussed with a few people over time and I think I'd probably want to bring it up by now. I'd like to propose and ask if it makes sense to perhaps replace devstack entirely with openstack-ansible. I think I have quite a few compelling reasons to do this that I'd like to outline, as well as why I *feel* (and I could be biased here, so call me out!) that OSA is the best option in terms of a 'replacement'
# Why not another deployment project? I actually thought about this part too and considered this mainly for ease of use for a *developer*.
At this point, Puppet-OpenStack pretty much only deploys packages (which means that it has no build infrastructure, a developer can't just get $commit checked out and deployed).
TripleO uses Kolla containers AFAIK, and those have to be pre-built beforehand. I also feel they are much harder to use as a developer, because if you want to make quick edits and restart services, you have to enter a container, make the edit there and somehow restart the service without the container going back to its original state. Kolla-Ansible and the other combinations also suffer from the same "issue".
OpenStack Ansible is unique in that it pretty much just builds a virtualenv and installs packages inside of it. The services are deployed as systemd units. This is very similar to the current state of devstack at the moment (minus the virtualenv part, afaik). It makes it pretty straightforward to go and edit code if you need/have to. We also have support for Debian, CentOS, Ubuntu and SUSE. This allows "devstack 2.0" to have far more coverage and makes it much easier to deploy on a wider variety of operating systems. It also has the ability to use commits checked out from Zuul, so all the fancy Depends-On stuff we use works.
# Why do we care about this, I like my bash scripts! As someone who's been around for a *really* long time in OpenStack, I've seen a whole lot of really weird issues surface from the usage of DevStack to do CI gating. For example, one of the recent things is the fact that it relies on installing package-shipped noVNC, whereas the 'master' noVNC has actually changed behavior a few months back and is completely incompatible at this point (it's just a ticking clock until we realize we're entirely broken).
I'm not sure this is a great example case. We consume prebuilt software for many of our dependencies. Everything from the kernel to the database to rabbitmq to ovs (and so on) is consumed as prebuilt packages from our distros. In many cases this is desirable to ensure that our software works with the other software out there in the wild that people will be deploying with.
To this day, I still see people who want to POC something up with OpenStack or *ACTUALLY* try to run OpenStack with DevStack. No matter how many warnings we'll put up, they'll always try to do it. With this way, at least they'll have something that has the shape of an actual real deployment. In addition, it would be *good* in the overall scheme of things for a deployment system to test against, because this would make sure things don't break in both ways.
Also: we run Zuul for our CI which supports Ansible natively, this can remove one layer of indirection (Zuul to run Bash) and have Zuul run the playbooks directly from the executor.
I think if you have developers running a small wrapper locally to deploy this new development stack you should run that same wrapper in CI. This ensures the wrapper doesn't break.
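For context on the "Zuul runs Ansible natively" point, a native Zuul v3 job could invoke the deployment playbook directly from the executor, with no bash layer in between. A minimal sketch; the job name, playbook path, and variables below are hypothetical, not existing definitions:

```yaml
# Hypothetical Zuul v3 job: the executor runs the deployment playbook
# directly. Names are illustrative only.
- job:
    name: devstack-osa-base
    parent: base
    description: Deploy an all-in-one cloud using OSA roles.
    required-projects:
      - openstack/openstack-ansible
      - openstack/nova
    run: playbooks/deploy-aio.yaml
    vars:
      osa_service_list:
        - nova
        - placement
```

Clark's caveat still applies: whatever wrapper developers run locally should also be exercised by a job, so the two paths do not drift apart.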
On Tue, Jun 4, 2019, at 1:01 AM, Sorin Sbarnea wrote:
I am in favour of ditching or at least refactoring devstack because during the last year I often found myself blocked from fixing some zuul/jobs issues because the buggy code was still required by legacy devstack jobs that nobody had time to maintain or fix, so they were isolated and the default job configurations were forced to use dirty hacks to keep them working.
One such example is that there is a task that does a "chmod -R 0777" on the entire source tree, a total security threat.
This is needed by devstack-gate and *not* devstack. We have been trying now for almost two years to get people to stop using devstack-gate in favor of the zuul v3 jobs. Please don't conflate this with devstack itself, it is not related and not relevant to this discussion.
In order to make other jobs run correctly* I had to resort to undoing the damage done by such a chmod, because I was not able to disable the historical hack.
In order to make other jobs run correctly we are asking you to stop using devstack-gate and use zuulv3 native jobs instead.
* Ansible throws warnings on unsafe file permissions
* SSH refuses to load unsafe keys
That is why I am in favor of dropping features that are slowing down the progress of others.
Again this has nothing to do with devstack.
I know that the reality is more complicated but I also think that sometimes less* is more.
* deployment projects ;)
On 2019-06-04 07:30:11 -0700 (-0700), Clark Boylan wrote:
On Tue, Jun 4, 2019, at 1:01 AM, Sorin Sbarnea wrote:
I am in favour of ditching or at least refactoring devstack because during the last year I often found myself blocked from fixing some zuul/jobs issues because the buggy code was still required by legacy devstack jobs that nobody had time to maintain or fix, so they were isolated and the default job configurations were forced to use dirty hacks to keep them working.
One such example is that there is a task that does a "chmod -R 0777" on the entire source tree, a total security threat.
This is needed by devstack-gate and *not* devstack. We have been trying now for almost two years to get people to stop using devstack-gate in favor of the zuul v3 jobs. Please don't conflate this with devstack itself, it is not related and not relevant to this discussion. [...]
Unfortunately this is not entirely the case. It's likely that the chmod workaround in question is only needed by legacy jobs using the deprecated devstack-gate wrappers, but it's actually being done by the fetch-zuul-cloner role[0] from zuul-jobs which is incorporated in our base job[1]. I agree that the solution is to stop using devstack-gate (and the old zuul-cloner v2 compatibility shim for that matter), but for it to have the effect of removing the problem permissions we also need to move the fetch-zuul-cloner role out of our base job.

I fully expect this will be a widely-disruptive change due to newer or converted jobs, which are no longer inheriting from legacy-base or legacy-dsvm-base in openstack-zuul-jobs[2], retaining a dependency on this behavior. But the longer we wait, the worse that is going to get.

[0] https://opendev.org/zuul/zuul-jobs/src/commit/2f2d6ce3f7a0687fc8f655abc168d7...
[1] https://opendev.org/opendev/base-jobs/src/commit/dbb56dda99e8e2346b22479b4da...
[2] https://opendev.org/openstack/openstack-zuul-jobs/src/commit/a7aa530a6059b46...
--
Jeremy Stanley
On 04/06/2019 16:47, Jeremy Stanley wrote:
On 2019-06-04 07:30:11 -0700 (-0700), Clark Boylan wrote:
On Tue, Jun 4, 2019, at 1:01 AM, Sorin Sbarnea wrote:
I am in favour of ditching or at least refactoring devstack because during the last year I often found myself blocked from fixing some zuul/jobs issues because the buggy code was still required by legacy devstack jobs that nobody had time to maintain or fix, so they were isolated and the default job configurations were forced to use dirty hacks to keep them working.
One such example is that there is a task that does a "chmod -R 0777" on the entire source tree, a total security threat.
This is needed by devstack-gate and *not* devstack. We have been trying now for almost two years to get people to stop using devstack-gate in favor of the zuul v3 jobs. Please don't conflate this with devstack itself, it is not related and not relevant to this discussion. [...]
Unfortunately this is not entirely the case. It's likely that the chmod workaround in question is only needed by legacy jobs using the deprecated devstack-gate wrappers, but it's actually being done by the fetch-zuul-cloner role[0] from zuul-jobs which is incorporated in our base job[1]. I agree that the solution is to stop using devstack-gate (and the old zuul-cloner v2 compatibility shim for that matter), but for it to have the effect of removing the problem permissions we also need to move the fetch-zuul-cloner role out of our base job. I fully expect this will be a widely-disruptive change due to newer or converted jobs, which are no longer inheriting from legacy-base or legacy-dsvm-base in openstack-zuul-jobs[2], retaining a dependency on this behavior. But the longer we wait, the worse that is going to get.
I have been trying to limit this behaviour for nearly 4 years [3] (it can actually add 10-15 mins sometimes depending on what source trees I have mounted via NFS into a devstack VM when doing dev)
[0] https://opendev.org/zuul/zuul-jobs/src/commit/2f2d6ce3f7a0687fc8f655abc168d7...
[1] https://opendev.org/opendev/base-jobs/src/commit/dbb56dda99e8e2346b22479b4da...
[2] https://opendev.org/openstack/openstack-zuul-jobs/src/commit/a7aa530a6059b46...
On Tue, 2019-06-04 at 17:23 +0100, Graham Hayes wrote:
On 04/06/2019 16:47, Jeremy Stanley wrote:
On 2019-06-04 07:30:11 -0700 (-0700), Clark Boylan wrote:
On Tue, Jun 4, 2019, at 1:01 AM, Sorin Sbarnea wrote:
I am in favour of ditching or at least refactoring devstack because during the last year I often found myself blocked from fixing some zuul/jobs issues because the buggy code was still required by legacy devstack jobs that nobody had time to maintain or fix, so they were isolated and the default job configurations were forced to use dirty hacks to keep them working.
One such example is that there is a task that does a "chmod -R 0777" on the entire source tree, a total security threat.
This is needed by devstack-gate and *not* devstack. We have been trying now for almost two years to get people to stop using devstack-gate in favor of the zuul v3 jobs. Please don't conflate this with devstack itself, it is not related and not relevant to this discussion.
[...]
Unfortunately this is not entirely the case. It's likely that the chmod workaround in question is only needed by legacy jobs using the deprecated devstack-gate wrappers, but it's actually being done by the fetch-zuul-cloner role[0] from zuul-jobs which is incorporated in our base job[1]. I agree that the solution is to stop using devstack-gate (and the old zuul-cloner v2 compatibility shim for that matter), but for it to have the effect of removing the problem permissions we also need to move the fetch-zuul-cloner role out of our base job. I fully expect this will be a widely-disruptive change due to newer or converted jobs, which are no longer inheriting from legacy-base or legacy-dsvm-base in openstack-zuul-jobs[2], retaining a dependency on this behavior. But the longer we wait, the worse that is going to get.
I have been trying to limit this behaviour for nearly 4 years [3] (it can actually add 10-15 mins sometimes depending on what source trees I have mounted via NFS into a devstack VM when doing dev)

Without looking into it, I assume this is done so that the stack user can read/execute scripts in the different git repos, but "chown -R stack:stack" would be saner.
In any case, this is still a CI issue, not a devstack one, as devstack does not do this itself. By default it clones the repos (if they don't exist) as the current user, so you don't need to change permissions.
[0] https://opendev.org/zuul/zuul-jobs/src/commit/2f2d6ce3f7a0687fc8f655abc168d7...
[1] https://opendev.org/opendev/base-jobs/src/commit/dbb56dda99e8e2346b22479b4da...
[2] https://opendev.org/openstack/openstack-zuul-jobs/src/commit/a7aa530a6059b46...
On 2019-06-04 17:23:46 +0100 (+0100), Graham Hayes wrote: [...]
I have been trying to limit this behaviour for nearly 4 years [3] (it can actually add 10-15 mins sometimes depending on what source trees I have mounted via NFS into a devstack VM when doing dev)
Similar I suppose, though the problem mentioned in this subthread is actually not about the mass permission change itself, rather about the resulting permissions. In particular, the fetch-zuul-cloner role makes the entire set of provided repositories world-writeable because the zuul-cloner v2 compatibility shim performs clones from those file paths and Git wants to hardlink them if they're being cloned within the same filesystem. This is necessary to support occasions where the original copies aren't owned by the same user running the zuul-cloner shim, since you can't hardlink files for which your account lacks write access.

I've done a bit of digging into the history of this now, so the following is probably boring to the majority of you. If you want to help figure out why it's still there at the moment and what's left to do, read on...

Change https://review.openstack.org/512285 which added the chmod task includes a rather prescient comment from Paul about not adding it to the mirror-workspace-git-repos role because "we might not want to chmod 777 on no-legacy jobs." Unfortunately I think we failed to realize that it already would, because we had added fetch-zuul-cloner to our base job a month earlier in https://review.openstack.org/501843 for reasons which are not recorded in the change (presumably a pragmatic compromise related to the scramble to convert our v2 jobs at the time; I did not resort to digging in IRC history just yet). Soon after, we added fetch-zuul-cloner to the main "legacy" pre playbook with https://review.opendev.org/513067 and prepared to test its removal from the base job with https://review.opendev.org/513079, but that was never completed and I can't seem to find the results of the testing (or even any indication it was ever actually performed).
At this point, I feel like we probably just need to re-propose an equivalent of 513079 in our base-jobs repository, exercise it with some DNM changes running a mix of legacy imported v2 and modern v3 native jobs, announce a flag day for the cut over, and try to help address whatever fallout we're unable to predict ahead of time. This is somewhat complicated by the need to also do something similar in https://review.opendev.org/656195 with the bindep "fallback" packages list, so we're going to need to decide how those two efforts will be sequenced, or whether we want to combine them into a single (and likely doubly-painful) event. -- Jeremy Stanley
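The hardlink behaviour Jeremy describes is easy to see directly: a hard link shares its inode, so a same-filesystem git clone copies no data. A small sketch (note that the "you can't hardlink files you can't write" restriction is the fs.protected_hardlinks behaviour on many Linux systems, which is what the blanket chmod works around; this demo only shows the inode sharing itself):

```python
import os
import tempfile

def hardlink_stats():
    """Create a file plus a hard link to it, the way git links objects
    for same-filesystem clones, and return (shares_inode, link_count)."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "object")
        dst = os.path.join(tmp, "clone")
        with open(src, "w") as f:
            f.write("blob")
        os.link(src, dst)  # no data copied; both names point at one inode
        shares_inode = os.stat(src).st_ino == os.stat(dst).st_ino
        return shares_inode, os.stat(src).st_nlink
```

Because the clone and the original share inodes, the permissions on the source repositories directly constrain who can perform such a clone, hence the world-writeable workaround in the compatibility shim.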
On 2019-06-04 17:32:41 +0000 (+0000), Jeremy Stanley wrote: [...]
Change https://review.openstack.org/512285 which added the chmod task includes a rather prescient comment from Paul about not adding it to the mirror-workspace-git-repos role because "we might not want to chmod 777 on no-legacy jobs." Unfortunately I think we failed to realize that it already would because we had added fetch-zuul-cloner to our base job a month earlier in https://review.openstack.org/501843 for reasons which are not recorded in the change (presumably a pragmatic compromise related to the scramble to convert our v2 jobs at the time, I did not resort to digging in IRC history just yet).
David Shrewsbury reminded me that the reason was we didn't have a separate legacy-base job yet at the time fetch-zuul-cloner was added, so it initially went into the normal base job.
Soon after, we added fetch-zuul-cloner to the main "legacy" pre playbook with https://review.opendev.org/513067 and prepared to test its removal from the base job with https://review.opendev.org/513079 but that was never completed and I can't seem to find the results of the testing (or even any indication it was ever actually performed).
At this point, I feel like we probably just need to re-propose an equivalent of 513079 in our base-jobs repository,
Proposed as https://review.opendev.org/663135 and once that merges we should be able to...
exercise it with some DNM changes running a mix of legacy imported v2 and modern v3 native jobs, announce a flag day for the cut over, and try to help address whatever fallout we're unable to predict ahead of time. This is somewhat complicated by the need to also do something similar in https://review.opendev.org/656195 with the bindep "fallback" packages list, so we're going to need to decide how those two efforts will be sequenced, or whether we want to combine them into a single (and likely doubly-painful) event.
During the weekly Infrastructure team meeting which just wrapped up, we decided to go ahead and combine the two cleanups for maximum pain and suffering. ;)

http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-06-04-19.01.lo...

Tentatively, we're scheduling the removal of the fetch-zuul-cloner role and the bindep fallback package list from non-legacy jobs for Monday, June 24. The details of this plan will of course be more widely disseminated in the coming days, assuming we don't identify any early blockers.
--
Jeremy Stanley
On Tue, Jun 04, 2019 at 05:32:41PM +0000, Jeremy Stanley wrote:
On 2019-06-04 17:23:46 +0100 (+0100), Graham Hayes wrote: [...]
I have been trying to limit this behaviour for nearly 4 years [3] (it can actually add 10-15 mins sometimes depending on what source trees I have mounted via NFS into a devstack VM when doing dev)
Similar I suppose, though the problem mentioned in this subthread is actually not about the mass permission change itself, rather about the resulting permissions. In particular the fetch-zuul-cloner role makes the entire set of provided repositories world-writeable because the zuul-cloner v2 compatibility shim performs clones from those file paths and Git wants to hardlink them if they're being cloned within the same filesystem. This is necessary to support occasions where the original copies aren't owned by the same user running the zuul-cloner shim, since you can't hardlink files for which your account lacks write access.
I've done a bit of digging into the history of this now, so the following is probably boring to the majority of you. If you want to help figure out why it's still there at the moment and what's left to do, read on...
Change https://review.openstack.org/512285 which added the chmod task includes a rather prescient comment from Paul about not adding it to the mirror-workspace-git-repos role because "we might not want to chmod 777 on no-legacy jobs." Unfortunately I think we failed to realize that it already would because we had added fetch-zuul-cloner to our base job a month earlier in https://review.openstack.org/501843 for reasons which are not recorded in the change (presumably a pragmatic compromise related to the scramble to convert our v2 jobs at the time, I did not resort to digging in IRC history just yet). Soon after, we added fetch-zuul-cloner to the main "legacy" pre playbook with https://review.opendev.org/513067 and prepared to test its removal from the base job with https://review.opendev.org/513079 but that was never completed and I can't seem to find the results of the testing (or even any indication it was ever actually performed).
Testing was done, you can see that in https://review.opendev.org/513506/. However the issue was, at the time, that projects using tools/tox_install.sh would break (I have no idea if that is still the case). For humans interested, https://etherpad.openstack.org/p/zuulv3-remove-zuul-cloner was the etherpad to capture this work. Eventually I ended up abandoning the patch, because I wasn't able to keep pushing on it.
At this point, I feel like we probably just need to re-propose an equivalent of 513079 in our base-jobs repository, exercise it with some DNM changes running a mix of legacy imported v2 and modern v3 native jobs, announce a flag day for the cut over, and try to help address whatever fallout we're unable to predict ahead of time. This is somewhat complicated by the need to also do something similar in https://review.opendev.org/656195 with the bindep "fallback" packages list, so we're going to need to decide how those two efforts will be sequenced, or whether we want to combine them into a single (and likely doubly-painful) event. -- Jeremy Stanley
On 2019-06-04 18:07:27 -0400 (-0400), Paul Belanger wrote: [...]
Testing was done, you can see that in https://review.opendev.org/513506/. However the issue was, at the time, that projects using tools/tox_install.sh would break (I have no idea if that is still the case).
For humans interested, https://etherpad.openstack.org/p/zuulv3-remove-zuul-cloner was the etherpad to capture this work.
Aha! I missed the breadcrumbs which led to those, though I'll admit to only having performed a cursory grep through the relevant repo histories.
Eventually I ended up abandoning the patch, because I wasn't able to keep pushing on it. [...]
Happy to start pushing that boulder uphill again, and thanks for paving the way the first time! -- Jeremy Stanley
On 05/06/2019 00.07, Paul Belanger wrote:
Testing was done, you can see that in https://review.opendev.org/513506/. However the issue was, at the time, that projects using tools/tox_install.sh would break (I have no idea if that is still the case).
I have a couple of changes open to remove the final tools/tox_install.sh files, see: https://review.opendev.org/#/q/status:open+++topic:tox-siblings

There are a few more repos that didn't take my changes from last year, which I abandoned in the meantime - and a few dead repos that I did not submit to when double-checking today ;(

Also, compute-hyperv and nova-blazar need https://review.opendev.org/663234 (requirements change) first.

So, we should be pretty good if these changes get reviewed and merged,
Andreas
--
Andreas Jaeger aj@suse.com Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
On 05/06/2019 08.47, Andreas Jaeger wrote:
On 05/06/2019 00.07, Paul Belanger wrote:
Testing was done, you can see that in https://review.opendev.org/513506/. However the issue was, at the time, that projects using tools/tox_install.sh would break (I have no idea if that is still the case).
I have a couple of changes open to remove the final tools/tox_install.sh files, see:
https://review.opendev.org/#/q/status:open+++topic:tox-siblings
There are a few more repos that didn't take my changes from last year which I abandoned in the mean time - and a few dead repos that I did not submit to when double checking today ;(
Also, compute-hyperv and nova-blazar need https://review.opendev.org/663234 (requirements change) first.
That one has a -2 now. ;( I won't be able to work on alternative solutions, nor can I assess whether this blocks the changes. Could anybody take this over, please?
So, we should be pretty good if these changes get reviewed and merged,
Andreas -- Andreas Jaeger aj@suse.com Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
On 2019-06-05 17:20:37 +0200 (+0200), Andreas Jaeger wrote:
On 05/06/2019 08.47, Andreas Jaeger wrote: [...]
There are a few more repos that didn't take my changes from last year which I abandoned in the mean time - and a few dead repos that I did not submit to when double checking today ;(
Also, compute-hyperv and nova-blazar need https://review.opendev.org/663234 (requirements change) first.
That one has a -2 now. ;(
I won't be able to work on alternative solutions, nor can I assess whether this blocks the changes. Could anybody take this over, please? [...]
It should be the responsibility of the compute-hyperv and nova-blazar maintainers to solve this problem, though your attempts to help them with a possible solution have been admirable. Thanks for this, and for all the others which did get merged already! -- Jeremy Stanley
participants (16)
- Andreas Jaeger
- Ben Nemec
- Clark Boylan
- Colleen Murphy
- Dean Troyer
- Doug Hellmann
- Graham Hayes
- Jeremy Stanley
- Jesse Pretorius
- Jim Rollenhagen
- Mark Goddard
- Mohammed Naser
- Paul Belanger
- Sean Mooney
- Slawomir Kaplonski
- Sorin Sbarnea