[tc][election] Simple question for the TC candidates
Hello, I read your nominations, and as usual I will ask: what will you _technically_ do during your mandate, and what do you _actively_ want to change in OpenStack? This can be a change in governance, in the projects, in the current structure... it can be really anything. I am just hoping to see practical OpenStack-wide changes here. It doesn't need to be a fully refined idea, but something that can be worked upon. Thanks for your time. Regards, Jean-Philippe Evrard
Hi JP, Thanks for your great question. I admit that I struggled to answer it. Something that just came to my mind -- just right now, while trying to figure out an answer -- is that it would be really cool if we had some manner of documenting (repository tag?) which OpenStack services are able to run standalone or be reused outside of OpenStack. This is important information to highlight as standalone/reusable seems to be an important part of OpenStack's future. I could see it leading to a boost in new contributors too. Best, Jeremy On Thu, Apr 2, 2020 at 6:30 AM Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
[...]
On Fri, 2020-04-03 at 12:11 -0400, Jeremy Freudberg wrote:
it would be really cool if we had some manner of documenting (repository tag?) which OpenStack services are able to run standalone or be reused outside of OpenStack. This is important information to highlight as standalone/reusable seems to be an important part of OpenStack's future. I could see it leading to a boost in new contributors too.
I like this. Should this be self-asserted by the teams, or should we provide some kind of validation? For teams that are very close but still have dependencies on other OpenStack services, should the TC work on helping remove those dependencies? Regards, JP
On Mon, Apr 6, 2020 at 3:19 AM Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
[...] I like this. Should this be self-asserted by the teams, or should we provide some kind of validation? For teams that are very close but still have dependencies on other OpenStack services, should the TC work on helping remove those dependencies? [...]
Yes the TC should support some kind of initiative to encourage standalone/reusability. I think self-assertion is fine at the start (I think this is really important info to publicize) ... but eventually there should be some kind of reference doc with clear criteria for what "standalone/reusable" actually means. Of course I'm really just thinking of services... libraries are another matter...
On Mon, 2020-04-06 at 12:42 -0400, Jeremy Freudberg wrote:
On Mon, Apr 6, 2020 at 3:19 AM Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
[...] I like this. Should this be self-asserted by the teams, or should we provide some kind of validation? For teams that are very close but still have dependencies on other OpenStack services, should the TC work on helping remove those dependencies? [...]
Yes the TC should support some kind of initiative to encourage standalone/reusability. I think self-assertion is fine at the start (I think this is really important info to publicize) ... but eventually there should be some kind of reference doc with clear criteria for what "standalone/reusable" actually means. Of course I'm really just thinking of services... libraries are another matter...
Isn't that partly what constellations were meant to do?
Most of them unfortunately build on the compute kit (https://www.openstack.org/software/sample-configs/#compute-starter-kit), but it would be great to have versions that did not need Nova or Neutron: for example, a storage kit that was just Swift, Keystone, Cinder, Glance and Manila or something similar, or a baremetal constellation that could be Bifrost + Ironic + optionally Keystone and Metal3. I can't remember the name of the project that wanted to have a group of OpenStack components that were useful in Kubernetes, but a constellation that showcased which components were useful in that context would also be great.
Jeremy Freudberg wrote:
On Mon, Apr 6, 2020 at 3:19 AM Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
[...] I like this. Should this be self-asserted by the teams, or should we provide some kind of validation? For teams that are very close but still have dependencies on other OpenStack services, should the TC work on helping remove those dependencies? [...]
Yes the TC should support some kind of initiative to encourage standalone/reusability. I think self-assertion is fine at the start (I think this is really important info to publicize) ... but eventually there should be some kind of reference doc with clear criteria for what "standalone/reusable" actually means. Of course I'm really just thinking of services... libraries are another matter...
That could easily be added to: https://opendev.org/osf/openstack-map/src/branch/master/openstack_components... and displayed on component pages under: https://www.openstack.org/software/project-navigator (same way we already display dependencies) -- Thierry Carrez (ttx)
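If such a flag were added there, consuming it would be trivial. A minimal sketch, assuming a hypothetical `standalone` boolean on each component entry (the field name and exact file layout are assumptions, not the current openstack-map schema):

```python
# Sketch: list components flagged as standalone-capable.
# Assumes openstack_components.yaml parses to a list of mappings,
# each with a "name" and a hypothetical "standalone" boolean.
import yaml  # PyYAML

with open("openstack_components.yaml") as f:
    components = yaml.safe_load(f)

standalone = sorted(c["name"] for c in components if c.get("standalone"))
print("Standalone-capable services:", ", ".join(standalone))
```

The project navigator could then render the flag the same way it already renders dependencies.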
JP, If elected to the TC, I would like to see some focus on efficiency in our overall CI system at the OpenStack level. I have spent the last year working with the Infra team; the majority of my focus has been on improving the CI experience for our developers, at my tiny scale, for the things I can change. The workload profiles for our CI jobs differ greatly: some eat every last drop of memory they get, some don't eat any; some jobs require significant amounts of I/O and others barely touch the disk. I am sure this topic has been discussed before, but I think it's worth looking at again. We are armed with enough data to make some decisions on how to best use our resources. We need to first analyze the data we have on which jobs run best on which providers. If possible we could prefer that provider for a given workload. While this will not add up to a significant speed-up in the beginning, it could give us the power to make the system as a whole run faster. If we can shave just 30 seconds off every job we run by preferring the fastest provider for that job, we could save significant amounts of CPU hours. This results in a better overall experience for our team of developers. This is not a light task, and it would take a lot of work. It's also not something that would happen overnight or probably even in a single cycle. But it's worth the effort. Everything we do as a community is tested; even minor improvements to the CI system that does the testing result in big gains at our scale. I would like to note that this isn't just an infra-side issue. We need everyone to ensure their jobs are being conscious of the resources we do have. These resources aren't easy to come by and our infra team has done nothing short of amazing work to date. I don't have all of the answers to the technical questions, and I am not even sure how the community would feel about this. But I think a focus on the gate benefits everyone. Thanks Donny Davis On Thu, Apr 2, 2020 at 6:30 AM Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
[...]
-- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First"
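A rough sketch of the per-job provider analysis Donny describes above; the build records below are invented for illustration, not real Zuul data:

```python
# Sketch: given (job, provider, duration) build records, prefer the
# provider with the lowest mean duration for each job.
from collections import defaultdict
from statistics import mean

builds = [  # illustrative records, not real data
    ("tempest-full", "cloud-a", 4100), ("tempest-full", "cloud-a", 4000),
    ("tempest-full", "cloud-b", 4700),
    ("unit-py38", "cloud-a", 600),
    ("unit-py38", "cloud-b", 540), ("unit-py38", "cloud-b", 560),
]

durations = defaultdict(lambda: defaultdict(list))
for job, provider, secs in builds:
    durations[job][provider].append(secs)

for job, by_provider in durations.items():
    means = {p: round(mean(d)) for p, d in by_provider.items()}
    best = min(means, key=means.get)
    print(f"{job}: prefer {best} (mean seconds by provider: {means})")
```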
On Sat, 2020-04-04 at 09:42 -0400, Donny Davis wrote:
I don't have all of the answers to the technical questions, and I am not even sure how the community would feel about this. But I think a focus on the gate benefits everyone.
Yes, gates (and coordinated gate testing) matter a lot, IMO! I like this. It seems like a couple of TC members should/could help on this. I think people outside the TC can also help, for example clarkb :) Awesome! Regards, JP
Hello Jean-Philippe,
what will you _technically_ do during your mandate, and what do you _actively_ want to change in OpenStack?
This can be a change in governance, in the projects, in the current structure... it can be really anything. I am just hoping to see practical OpenStack-wide changes here. It doesn't need to be a fully refined idea, but something that can be worked upon.
I feel I answered this, at least partially, in my nomination notice. I will try to expand on it: My opinion is we need to invest more in the delivery of OpenStack deliverables (pun was not avoidable). I want to design and coordinate efforts around the usage of deployment-ready prebuilt images so that they can be used both for CI, to avoid lengthy, repetitive processes, and for friendlier end-user consumption. I am thinking something Kolla-esque, as LOCI does not seem that alive nowadays (and does not seem to get the level of testing Kolla has from Kolla Ansible and TripleO, as both deployment projects consume Kolla images). The final idea has not fully crystallized yet. Another point for designing around Kolla is that it provides both prerequisites for OpenStack projects (think MariaDB, RabbitMQ) and services that work with OpenStack projects to deliver more value (e.g. collectd, Prometheus). On that matter, since Ironic's plans of independence (more on that in a separate message of mine) have stirred such a lively discussion, I think my idea aligns with the standalone goals of such projects. It would be easy to design deployment scenarios of subsets of OpenStack projects that work together to achieve some cloud-imagined goal other than plain IaaS. Want some baremetal provisioning? Here you go. Need a safe place for credentials? Be my guest! Since "kolla" is Greek for glue, I plan to use this wording for marketing of "gluing" the community around OpenStack projects. -yoctozepto
On Sun, 2020-04-05 at 22:06 +0200, Radosław Piliszek wrote:
My opinion is we need to invest more in the delivery of OpenStack deliverables (pun was not avoidable). I want to design and coordinate efforts around the usage of deployment-ready prebuilt images so that they can be used both for CI, to avoid lengthy, repetitive processes, and for friendlier end-user consumption.
That sounds like what I hoped to work on when I started the containers SIG, but I didn't get the chance to work on it recently. So you know where I stand :)
I am thinking something Kolla-esque, as LOCI does not seem that alive nowadays (and does not seem to get the level of testing Kolla has from Kolla Ansible and TripleO, as both deployment projects consume Kolla images).
Question: Why would we want another competing project? How do you intend to work with Kolla? Do you want to have this image building in the projects, and use other tooling to deploy those images? Did you start collaborating/discussing with non-TripleO projects on this?
On that matter, since Ironic's plans of independence (more on that in a separate message of mine) have stirred such a lively discussion, I think my idea aligns with the standalone goals of such projects.
Good news, that's what the TC is for :)
It would be easy to design deployment scenarios of subsets of OpenStack projects that work together to achieve some cloud-imagined goal other than plain IaaS. Want some baremetal provisioning? Here you go. Need a safe place for credentials? Be my guest!
I am not sure what you mean there. Do you want to map OpenStack "sample configs" to gate jobs? Regards, JP
On Mon, 2020-04-06 at 09:41 +0200, Jean-Philippe Evrard wrote:
Question: Why would we want another competing project? How do you intend to work with Kolla? Do you want to have this image building in the projects, and use other tooling to deploy those images? Did you start collaborating/discussing with non-TripleO projects on this?
Maybe I should rephrase. How do you want to make this work with Kolla, TripleO, and other deployment projects outside those two? Do we distribute and collaborate (each project has a way to build its images), or do we centralize (the LOCI/Kolla way)?
I am thinking something Kolla-esque, as LOCI does not seem that alive nowadays (and does not seem to get the level of testing Kolla has from Kolla Ansible and TripleO, as both deployment projects consume Kolla images).
Question: Why would we want another competing project? How do you intend to work with Kolla? Do you want to have this image building in the projects, and use other tooling to deploy those images? Did you start collaborating/discussing with non-TripleO projects on this?
(snip, continued)
Maybe I should rephrase. How do you want to make this work with Kolla, TripleO, and other deployment projects outside those two? Do we distribute and collaborate (each project has a way to build its images), or do we centralize (the LOCI/Kolla way)?
Let me answer both versions. I feel like I put it badly; I did not mean to start a completely separate project. What I wanted to say is that I see it best to build a possible common containerization solution off Kolla (both as in the deliverable and the project). The fact is Kolla has some shortcomings that would likely cripple its usage as a possible DevStack replacement/booster in the gate in its current state. My idea is to keep this centralized but with more visibility, and ask projects to officially support this method of deliverable distribution. The whole undertaking stems from the fact that I perceive modern software distribution as based on containers - you have the recipe, you have the image, you can use it - you have insight into how it got to be and are also able to reduce the repetition of deployment steps regarding building/installation, all by "official" means. Since the TC is about _technical_ _governance_, I'd say this project fits, as it deals with the technical part of organizing proper tooling for the job and promoting it in the OpenStack community far and wide.
It would be easy to design deployment scenarios of subsets of OpenStack projects that work together to achieve some cloud-imagined goal other than plain IaaS. Want some baremetal provisioning? Here you go. Need a safe place for credentials? Be my guest!
I am not sure what you mean there. Do you want to map OpenStack "sample configs" to gate jobs?
More or less - yes. They seem to need some refresh in the first place (I guess this is where Ironic could shine as well). Currently "sample configs" are vague descriptions of possibilities, with listings of projects and links. They should really be captured by deployment scenarios we can offer and have tested in the gate. This is an interesting matter in its own right. We at Kolla Ansible started some discussion based also on user feedback (which we want to enhance further with the Kolla Klub) [1]. The goal is to give good coverage of the scenarios users are currently interested in while keeping CI resource usage low. You might notice these hardly align with the "sample configs". One reason for that is catering for real service coverage, rather than a very specific use case. Another is likely a different audience than the marketing site. [1] https://etherpad.openstack.org/p/KollaAnsibleScenarios -yoctozepto
On Mon, 2020-04-06 at 19:44 +0200, Radosław Piliszek wrote:
[...] The fact is Kolla has some shortcomings that would likely cripple its usage as a possible DevStack replacement/booster in the gate in its current state. [...]
For what it's worth, I remember talking with Clark about whether using the Kolla pre-built images for other projects would help speed up the gate in any way. The conclusion we came to at the time was that, due to the way we pre-stage git repos in the VM, we do not expect using Kolla images in the gate to give any real speed-up, and it actually has a disadvantage: we lose any validation that services are co-installable on the same host outside of containers. E.g. we don't have validation that the dependencies of different projects don't conflict, since they are all in different containers, so without addressing that I think it would actually be a regression. Not that I don't like Kolla images, I do, but in my view it would not be good to use them in the gate and use Kolla Ansible instead of DevStack if we consider co-installability to be important. The same goes for any containerized solution, for that matter.
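To make the co-installability concern concrete, here is a toy sketch that flags dependencies two projects pin incompatibly. The requirement lists are invented, and real OpenStack co-installability is actually managed through the openstack/requirements upper-constraints process, not ad-hoc checks like this:

```python
# Sketch: flag dependencies whose combined version specifiers admit no
# common version (probed crudely against a grid of candidate versions).
from packaging.requirements import Requirement

project_a = ["oslo.config>=6.8.0", "stevedore>=1.20.0"]  # invented pins
project_b = ["oslo.config>=7.0.0", "stevedore<1.0.0"]    # invented pins

def conflicts(reqs_a, reqs_b):
    a = {r.name: r for r in map(Requirement, reqs_a)}
    b = {r.name: r for r in map(Requirement, reqs_b)}
    candidates = [f"{major}.{minor}.0"
                  for major in range(11) for minor in range(31)]
    for name in sorted(a.keys() & b.keys()):
        merged = a[name].specifier & b[name].specifier
        if not any(v in merged for v in candidates):
            yield name, a[name].specifier, b[name].specifier

for name, spec_a, spec_b in conflicts(project_a, project_b):
    print(f"{name}: '{spec_a}' and '{spec_b}' cannot be satisfied together")
```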
Just to clarify - I'm actually in favor of dropping the plain coinstallability rule. I wrote this before: we have different ways to ensure separation: virtual envs and OCI images. People can and do use them. I don't think this rule brings any benefits nowadays; very diverse projects can be used together even if they decide they need different versions of some internal library - for whatever reason. What counts are well-defined interfaces - this is the only way to ensure cofunctionality; other measures are simply workarounds. ;-) Dropping py2 deserves dropping old mindset. :-) -yoctozepto On Mon, 6 Apr 2020 at 20:10 Sean Mooney <smooney@redhat.com> wrote:
[...]
On Mon, 2020-04-06 at 20:22 +0200, Radosław Piliszek wrote:
Just to clarify - I'm actually in favor of dropping the plain coinstallability rule.
I wrote this before: we have different ways to ensure separation: virtual envs and OCI images. People can and do use them.
I don't think this rule brings any benefits nowadays; very diverse projects can be used together even if they decide they need different versions of some internal library - for whatever reason. What counts are well-defined interfaces - this is the only way to ensure cofunctionality; other measures are simply workarounds. ;-)
I'm not sure I agree. From a downstream perspective, I think that will make packaging more difficult, and from an upstream perspective my two favorite installers are DevStack and Kolla, but while I have tried to use Kolla for dev in the past I still prefer DevStack, and I'm not sure I would vote for a move that may make that impossible unless Kolla's dev mode has advanced significantly from when I last used it.
Dropping py2 deserves dropping old mindset. :-)
It deserves re-evaluating them, sure, but dropping for the sake of it without considering the costs and benefits, no.
Sean Mooney <smooney@redhat.com> wrote:
I'm not sure I agree. From a downstream perspective, I think that will make packaging more difficult (snip, snip)
Dropping py2 deserves dropping old mindset. :-) It deserves re-evaluating them, sure, but dropping for the sake of it without considering the costs and benefits, no.
I got these messages reordered, so I gave a full response to that in my reply to fungi.
from an upstream perspective my two favorite installers are DevStack and Kolla
Glad to hear that!
but while I have tried to use Kolla for dev in the past I still prefer DevStack, and I'm not sure I would vote for a move that may make that impossible unless Kolla's dev mode has advanced significantly from when I last used it.
I am really interested in knowing where it fell short. We can and probably should make this a separate thread, though. The fact is that most of our users we are aware of are deployers, aka operators, not developers. -yoctozepto
On 2020-04-06 20:22:05 +0200 (+0200), Radosław Piliszek wrote:
Just to clarify - I'm actually in favor of dropping the plain coinstallability rule.
I wrote this before: we have different ways to ensure separation: virtual envs and OCI images. People can and do use them.
I don't think this rule brings any benefits nowadays; very diverse projects can be used together even if they decide they need different versions of some internal library - for whatever reason. What counts are well-defined interfaces - this is the only way to ensure cofunctionality; other measures are simply workarounds. ;-) [...]
Just to be clear, we didn't add rules about coinstallability to make *our* upstream lives easier. We did it so that downstream distros don't have to provide and support multiple versions of various dependencies. What you're in effect recommending is that we stop supporting installation within traditional GNU/Linux distributions, since only package management solutions like Docker or Nix are going to allow us to punt on coinstallability of our software. You're *also* saying that our libraries now have to start supporting users on a broader variety of their own old versions, and also support/test working with a variety of different versions of their own dependencies, or that we also give up on having any abstracted Python libraries and go back to re-embedding slightly different copies of the same code in all OpenStack services. -- Jeremy Stanley
Hey fungi, Jeremy Stanley <fungi@yuggoth.org> wrote:
Just to be clear, we didn't add rules about coinstallability to make *our* upstream lives easier. We did it so that downstream distros don't have to provide and support multiple versions of various dependencies.
Indeed, these are not for us but for potential downstreams.
What you're in effect recommending is that we stop supporting installation within traditional GNU/Linux distributions, since only package management solutions like Docker or Nix are going to allow us to punt on coinstallability of our software.
I think you are taking this a bit too far. System-wide site-packages are an inherently flawed mechanism. Inability to have two different versions of the same library/program is due to this, not packaging. Distros already ship different versions of lower-level libraries, with differing ABI. Sometimes they separate the things in loadable modules with proper dynamic linking chains. In Python, something similar can be achieved with venvs (and I know you know that). They exist for a purpose and this is the one. Also, those same distros ship other Python projects; we can never ensure this level of coinstallability with them even if they could enhance the overall service experience. What is done then? Isolation. :-) Same story if said distro needs to ship some old version of a Python package for its internal usage.
You're *also* saying that our libraries now have to start supporting users on a broader variety of their own old versions, and also support/test working with a variety of different versions of their own dependencies, or that we also give up on having any abstracted Python libraries and go back to re-embedding slightly different copies of the same code in all OpenStack services.
No, why? Voting for containerization and abandonment of strict coinstallability enforcement does not invalidate the purpose of these. Again, the strength is in well-defined interfaces, not reliance on exact versions of every library. Program to interfaces, test the interfaces, live a happy life. All that said, I am not for dropping this all right away. This needs to take time and *will* take considerable time for sure. There could be a staged approach where we allow projects to drop out of the coinstallability constraint (looking at Ironic now). A separate thing is that not all OpenStack software is packaged in the main distros. So in their case this is just wishful thinking: package me, I'm ready for the traditional packaging world where we drop everything in one big namespace*. :-) For reference you can inspect the non-buildability patterns of Kolla for the binary flavor: common (rhel+debuntu): https://opendev.org/openstack/kolla/src/commit/f1f1d854594da23321ad94ae6358e... debuntu specific extra: https://opendev.org/openstack/kolla/src/commit/f1f1d854594da23321ad94ae6358e... * As long as you only care about OpenStack because I'm gonna break your X. -yoctozepto
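A small sketch of the isolation Radosław is arguing for: two services keep mutually incompatible pins of the same library by living in separate venvs. The service names and version pins are illustrative, and POSIX venv paths are assumed:

```python
# Sketch: one venv per service, each with its own conflicting pin.
import subprocess
import venv

services = {"svc-old": "oslo.config==6.8.0",   # illustrative pins
            "svc-new": "oslo.config==8.0.2"}

for name, pin in services.items():
    venv.create(name, with_pip=True)
    subprocess.run([f"./{name}/bin/pip", "install", pin], check=True)
    # Each interpreter sees only its own copy of the library.
    subprocess.run([f"./{name}/bin/python", "-c",
                    "import oslo_config; print(oslo_config.__file__)"],
                   check=True)
```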
On 2020-04-07 19:48:41 +0200 (+0200), Radosław Piliszek wrote:
Jeremy Stanley <fungi@yuggoth.org> wrote: [...]
What you're in effect recommending is that we stop supporting installation within traditional GNU/Linux distributions, since only package management solutions like Docker or Nix are going to allow us to punt on coinstallability of our software.
I think you are taking this a bit too far. System-wide site-packages are an inherently flawed mechanism. Inability to have two different versions of the same library/program is due to this, not packaging. Distros already ship different versions of lower-level libraries, with differing ABI. Sometimes they separate the things in loadable modules with proper dynamic linking chains.
Yes, they do when it's absolutely necessary, but they also struggle to keep that to a minimum and, e.g., only carry one version of libssl at a time outside of transition periods. Every different version of some piece of software a distro carries means that many times they have to work out how to backport things like critical security fixes.
In Python, something similar can be achieved with venvs (and I know you know that). They exist for a purpose and this is the one.
I don't know of any traditional GNU/Linux distribution which packages virtualenvs of software or considers that acceptable within their main archives. Relying on virtualenv is effectively the same as the Docker/Nix solutions I mentioned above. "Use a virtualenv" is still essentially an argument for giving up on the viability of distribution packages.
Also, those same distros ship other Python projects; we can never ensure this level of coinstallability with them even if they could enhance the overall service experience. [...]
We can't, but we do our best to enable them through not pinning the versions we test with in master any more than we have to. Quite often distro package maintainers ask us on the ML whether we can drop some version cap on a dependency because it's in conflict with something else in their distribution and they don't want to have to carry multiple versions of it.
A separate thing is that not all OpenStack software is packaged in the main distros. So in their case this is just wishful thinking: package me, I'm ready for the traditional packaging world where we drop everything in one big namespace*. :-) [...]
Most OpenStack services, libraries and clients are packaged in Debian, Ubuntu, and RHEL (though via RDO in that case?). There are probably plenty more examples I'm just not aware of. -- Jeremy Stanley
Hi JP, There are two areas I can think of off the top of my head.
The first is making sure that OpenStack is approachable to new contributors. This includes more documentation-related community goals, which I consider especially important when a lot of people that have been here forever start to move on to new ventures, taking with them a lot of tribal and undocumented knowledge. We have to start being able to do more with less.
The second is investigating avenues for better integration with other open source communities and projects. It's very likely that OpenStack is only one of the many tools in an operator's toolbox (e.g., we're also running OpenShift, and our own bare metal provisioner). Once we have acknowledged those integration points, we can start documenting, testing and developing with those in mind as well. The end result will be that deployers can pick OpenStack, or a specific set of OpenStack components, and know that what they are trying to do is possible, is tested and is reproducible.
Thank you, Kristi Nikolla
On Apr 2, 2020, at 6:26 AM, Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
[...]
On Sun, 2020-04-05 at 20:27 +0000, Nikolla, Kristi wrote:
The first is making sure that OpenStack is approachable to new contributors. This includes more documentation-related community goals, which I consider especially important when a lot of people that have been here forever start to move on to new ventures, taking with them a lot of tribal and undocumented knowledge. We have to start being able to do more with less.
I think this maps very well with Kendall's efforts :) What do you particularly have in mind?
The second is investigating avenues for better integration with other open source communities and projects. It's very likely that OpenStack is only one of the many tools in an operator's toolbox (e.g., we're also running OpenShift, and our own bare metal provisioner). Once we have acknowledged those integration points, we can start documenting, testing and developing with those in mind as well. The end result will be that deployers can pick OpenStack, or a specific set of OpenStack components, and know that what they are trying to do is possible, is tested and is reproducible.
Does that mean you want to introduce a sample configuration to deploy OpenShift on top of OpenStack and do conformance testing in our jobs? Or did I get that wrong? Please note the CI topic maps with other candidates' efforts, so I already see future collaboration happening. I am glad to have asked my questions :) Regards, JP
On Apr 6, 2020, at 3:45 AM, Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
On Sun, 2020-04-05 at 20:27 +0000, Nikolla, Kristi wrote:
The first is making sure that OpenStack is approachable to new contributors. This includes more documentation-related community goals, which I consider especially important when a lot of people that have been here forever start to move on to new ventures, taking with them a lot of tribal and undocumented knowledge. We have to start being able to do more with less.
I think this maps very well with Kendall's efforts :) What do you particularly have in mind?
I do not have an answer to this just yet and I need to think about it longer. The last time we did something radical on docs it broke all links.
The second is investigating avenues for better integration with other open source communities and projects. It's very likely that OpenStack is only one of the many tools in an operator's toolbox (e.g., we're also running OpenShift, and our own bare metal provisioner). Once we have acknowledged those integration points, we can start documenting, testing and developing with those in mind as well. The end result will be that deployers can pick OpenStack, or a specific set of OpenStack components, and know that what they are trying to do is possible, is tested and is reproducible.
Does that mean you want to introduce a sample configuration to deploy OpenShift on top of OpenStack and do conformance testing in our jobs? Or did I get that wrong? Please note the CI topic maps with other candidates' efforts, so I already see future collaboration happening.
It doesn't have to be limited to OpenShift/k8s, but yes. That would ensure that things work correctly and that we have an actual stake in fixing them if they break. Furthermore, once the testing template is available, projects can start including it in their project gates and start developing features with the broader datacenter in mind as well. See Mohammed's response about actually making Kubernetes a base service.
I am glad to have asked my questions :)
Regards, JP
On Thu, Apr 2, 2020 at 6:30 AM Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
Hello,
I read your nominations, and as usual I will ask: what will you _technically_ do during your mandate, and what do you _actively_ want to change in OpenStack?
I'm aiming to simplify the deployment of OpenStack overall, and this would happen by starting to rework our core architecture to use relevant technologies. We've always claimed to be a cloud operating system; the space has changed, and we need to change with it so we can continue to be one.
This can be a change in governance, in the projects, in the current structure... it can be really anything. I am just hoping to see practical OpenStack-wide changes here. It doesn't need to be a fully refined idea, but something that can be worked upon.
Regardless of my election or not, I intend to push on adding Kubernetes as a base service to OpenStack. This will be the first thing to help enable projects to leverage things like custom resources and the scheduling features of Kubernetes to deliver IaaS components easily.
Thanks for your time.
Regards, Jean-Philippe Evrard
-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. https://vexxhost.com
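For a flavor of what Mohammed's "Kubernetes as a base service" point could enable, here is a hedged sketch registering a custom resource that an OpenStack service might reconcile, via the official kubernetes Python client. The API group, kind, and schema are hypothetical, not anything Nova defines today:

```python
# Sketch: register a hypothetical NovaCell custom resource definition.
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster and kubeconfig

crd = client.V1CustomResourceDefinition(
    metadata=client.V1ObjectMeta(name="cells.nova.openstack.example"),
    spec=client.V1CustomResourceDefinitionSpec(
        group="nova.openstack.example",  # hypothetical API group
        scope="Namespaced",
        names=client.V1CustomResourceDefinitionNames(
            plural="cells", singular="cell", kind="NovaCell"),
        versions=[client.V1CustomResourceDefinitionVersion(
            name="v1", served=True, storage=True,
            schema=client.V1CustomResourceValidation(
                open_api_v3_schema=client.V1JSONSchemaProps(
                    type="object",
                    x_kubernetes_preserve_unknown_fields=True)))],
    ),
)
client.ApiextensionsV1Api().create_custom_resource_definition(crd)
```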
On 02/04/2020 11:26, Jean-Philippe Evrard wrote:
Hello,
I read your nominations, and as usual I will ask: what will you _technically_ do during your mandate, and what do you _actively_ want to change in OpenStack?
This can be a change in governance, in the projects, in the current structure... it can be really anything. I am just hoping to see practical OpenStack-wide changes here. It doesn't need to be a fully refined idea, but something that can be worked upon.
Thanks for your time.
Regards, Jean-Philippe Evrard
I think modernizing some of our tooling and practices is important. Helping projects by looking at how we send data around OpenStack, and what transport layers we use, is important, alongside updating things like metrics and tracing to more up-to-date standards. [1] I think the ideas repo is a good place for people to put up basic outlines for this. [2] In short, some of the ideas bouncing around my head for this:
- consolidate agents on nodes
- see how we can leverage HTTP / gRPC / random point-to-point standard for control plane -> node (both hypervisor and VM) communications
- OpenTracing
- Metrics
I know I will be lucky to get one of these complete in my term, but getting the ground work in place for even one or two of these would be great. On top of that, I think that something like Project Teapot [3] has a real future, and is something we should pursue. (disclosure: I was one of the people who was there for the initial formation of the idea, so I do have a bias.)
1 - Yes, I know people have issues with prometheus, and how it works, but for better or worse, in the new era, it is a standard of sorts.
2 - Everyone doesn't have to do a Zane on it and upload 1400 lines :)
3 - https://governance.openstack.org/ideas/ideas/teapot/index.html
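As one concrete slice of the metrics item above, a consolidated node agent could expose Prometheus-format metrics over HTTP using the prometheus_client library. A minimal sketch with a made-up metric name, not a proposal for any specific agent:

```python
# Sketch: a node agent exposing a counter for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, start_http_server

RPC_CALLS = Counter("agent_rpc_calls_total",
                    "Control-plane calls handled by this node agent")

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at :9100/metrics
    while True:              # stand-in for the agent's real work loop
        RPC_CALLS.inc()
        time.sleep(random.uniform(0.1, 1.0))
```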
Hi Jean-Philippe, all, happy to see that your questions have already created a passionate discussion, but I would also like to give my contribution. My background and contributions have always been around deploying and operating a large cloud infrastructure. As I mentioned in my nomination, I have been involved in the OpenStack community for a long time. I lived the crazy days of "inflated expectations" and I still stick around in the "plateau of productivity". To answer your questions I would like to focus on 2 points that in my opinion are fundamental for the future success of OpenStack: 1) Operators; 2) Projects Consolidation.
1) Operators are the OpenStack users. Ultimately, they define the success of any OpenStack project because they select what to deploy in their clouds. And what's deployed is based of course on the requirements, but also on simplicity and project health. In my opinion the TC and the community in general should focus on its users' (Operators') feedback, making sure that OpenStack is integrated and easy to deploy and maintain over time. Also, make sure that their requirements/pain points are the priorities during the development cycles. Of course, much was done over the years (better docs, deployment tools, all the upgrades discussions, ...) but I still think this should be the focus of all the development/integration direction. And looking into the number of OpenStack projects, this brings us to my second point, Project Consolidation.
2) As an Operator, I need to evaluate, deploy and maintain the OpenStack projects to meet my organization's requirements. I think we all agree that the number of OpenStack projects is overwhelming! For any new organization that selects OpenStack to deploy their Cloud, navigating through all the projects is extremely challenging. What's worse is that some have very little activity and actually were never seriously used in a production environment. This can create a lot of confusion and wrong expectations. Of course I know about the project navigator and, in the past, the project tags/maturity; in fact I'm not advocating more of that. Over the years we insisted on splitting projects. For example, as an Operator I still don't understand the value of having "Placement" as a separate project. Of course we can argue the architecture pros/cons, that other projects may use it, but at the end it only adds friction for the users (Operators) to deploy and maintain their OpenStack Cloud. This is only one example. Also, we see more and more projects without a PTL volunteer. This doesn't create the required trust in those projects for anyone that is looking into OpenStack to deploy their Cloud. In my humble opinion, the TC and the community in general need to re-evaluate the value of each OpenStack project and consolidate or "retire" what is needed. If there's a strong dependency or the scope also matches a different project, maybe consolidate. If the user survey shows that no one is using a project and its health is questionable, we need to find another solution. The goal should be to have a clear set of projects whose scope our users (Operators) can understand and which they have the trust/confidence to deploy. cheers, Belmiro On Thu, Apr 2, 2020 at 12:33 PM Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
[...]
On 2020-04-07 01:28:01 +0200 (+0200), Belmiro Moreira wrote: [...]
Over the years we insisted on splitting projects. For example, as an Operator I still don't understand the value of having "Placement" as a separate project. Of course we can argue the architecture pros/cons, that other projects may use it, but at the end it only adds friction for the users (Operators) to deploy and maintain their OpenStack Cloud. This is only one example. [...]
I'm curious, do you then consider microservice architectures inherently flawed? Are you advocating for monolithic applications which can't be decomposed and scaled by moving different functions to different servers? Or do you see "consolidation" of the software as something else? -- Jeremy Stanley
On Thu, Apr 2, 2020 at 6:31 PM Jean-Philippe Evrard <jean-philippe@evrard.me> wrote:
I read your nominations, and as usual I will ask: what will you _technically_ do during your mandate, and what do you _actively_ want to change in OpenStack?
I aim to improve the bridge across Developers, Users, and Operators. From my last year in the TC role, I have tried to involve SIGs, and I think the Special Interest Group is the right format. IMO we didn't provide as many resources as we should have to promote this kind of group, especially considering that the success of a SIG requires involvement from all three parts (devs, users, and ops). Also, if possible, I would add language- and timezone-friendly factors on top. On the other hand, I will also work on CI for scenarios across projects or communities. OpenStack surely can do a lot of stuff, but it really needs better test coverage to ensure it is mostly stable at all times. We fall short on test jobs for multi-arch and automation scenarios (and that is why I stay in the SIGs for these two directions). Last, the community-wide goal schedule, as it's something we should keep pushing and is not yet on track (I mean the scheduling part). -- My friends, stay home please, Rico Lin irc: ricolin
participants (12)
- Belmiro Moreira
- Donny Davis
- Graham Hayes
- Jean-Philippe Evrard
- Jeremy Freudberg
- Jeremy Stanley
- Mohammed Naser
- Nikolla, Kristi
- Radosław Piliszek
- Rico Lin
- Sean Mooney
- Thierry Carrez