From ichihara.hirofumi at gmail.com Sun Dec 2 14:08:25 2018
From: ichihara.hirofumi at gmail.com (Hirofumi Ichihara)
Date: Sun, 2 Dec 2018 23:08:25 +0900
Subject: [openstack-dev] Stepping down from Neutron core team
Message-ID:

Hi all,

I'm stepping down from the core team because my role has changed and I can no longer carry the responsibilities of a Neutron core.

I started with Neutron 5 years ago and had many good experiences with the Neutron team.

Today Neutron is a great project, and it keeps gaining new reviewers, contributors, and users.

Keep on being a great community.

Thanks,
Hirofumi

From miguel at mlavalle.com Sun Dec 2 20:57:30 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Sun, 2 Dec 2018 14:57:30 -0600
Subject: [openstack-dev] Stepping down from Neutron core team
In-Reply-To:
References:
Message-ID:

Hi Hirofumi,

Thanks for your contributions to the project over these years. You will be missed. We also wish you the best in your future endeavors.

Best regards

Miguel

On Sun, Dec 2, 2018 at 8:11 AM Hirofumi Ichihara <ichihara.hirofumi at gmail.com> wrote:

> Hi all,
>
> I'm stepping down from the core team because my role has changed and I can
> no longer carry the responsibilities of a Neutron core.
>
> I started with Neutron 5 years ago and had many good experiences with the
> Neutron team.
>
> Today Neutron is a great project, and it keeps gaining new reviewers,
> contributors, and users.
>
> Keep on being a great community.
>
> Thanks,
> Hirofumi

From ranjankrchaubey at gmail.com Mon Dec 3 01:57:25 2018
From: ranjankrchaubey at gmail.com (Ranjan Krchaubey)
Date: Mon, 3 Dec 2018 07:27:25 +0530
Subject: [openstack-dev] Stepping down from Neutron core team
In-Reply-To:
References:
Message-ID: <1778B743-99D8-4735-B057-6C19A457BF05@gmail.com>

Hi Team,

I am getting an HTTP 500 error ("the server could not fulfill the request") for a request by id. Please help me with how to fix this.

Thanks & Regards
Ranjan Kumar
Mob: 9284158762

> On 03-Dec-2018, at 2:27 AM, Miguel Lavalle <miguel at mlavalle.com> wrote:
>
> Hi Hirofumi,
>
> Thanks for your contributions to the project over these years. You will be
> missed. We also wish you the best in your future endeavors.
>
> Best regards
>
> Miguel
>
>> On Sun, Dec 2, 2018 at 8:11 AM Hirofumi Ichihara <ichihara.hirofumi at gmail.com> wrote:
>> Hi all,
>>
>> I'm stepping down from the core team because my role has changed and I can
>> no longer carry the responsibilities of a Neutron core.
>>
>> I started with Neutron 5 years ago and had many good experiences with the
>> Neutron team.
>>
>> Today Neutron is a great project, and it keeps gaining new reviewers,
>> contributors, and users.
>>
>> Keep on being a great community.
>>
>> Thanks,
>> Hirofumi

From lijie at unitedstack.com Mon Dec 3 02:02:05 2018
From: lijie at unitedstack.com (Rambo)
Date: Mon, 3 Dec 2018 10:02:05 +0800
Subject: [openstack-dev] [nova] about notification in nova
Message-ID:

Hi, all:

I have a question about notifications in nova: the actual operator is different from the operator recorded in Panko. Take the delete action: we create the VM as user1 and delete it as user2, but in the Panko event the operator who deleted the VM is recorded as user1, not the actual operator user2.

Can you tell me more about this? Thank you very much.

Best Regards
Rambo

From zhengzhenyulixi at gmail.com Mon Dec 3 02:31:02 2018
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Mon, 3 Dec 2018 10:31:02 +0800
Subject: [openstack-dev] [nova] about notification in nova
In-Reply-To:
References:
Message-ID:

Hi,

Are you using versioned notifications? If you are, you should get an ``action_initiator_user`` and an ``action_initiator_project`` in the payload indicating who initiated the action; we have had them since I649d8a27baa8840bc1bb567fef027c749c663432. If you are not using versioned notifications, switching to them is recommended.
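
If it helps, here is a minimal sketch of switching to versioned notifications (an illustration only, assuming a Rocky-era nova.conf, that crudini is available, and RDO-style service names; adjust paths and names for your deployment):

# emit versioned notifications (the [notifications]/notification_format option)
sudo crudini --set /etc/nova/nova.conf notifications notification_format versioned
# restart the services that emit the notifications you care about
sudo systemctl restart openstack-nova-api openstack-nova-compute

The versioned instance action payloads then include ``action_initiator_user`` and ``action_initiator_project`` alongside the instance owner's user and project.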
Thanks

On Mon, Dec 3, 2018 at 10:06 AM Rambo <lijie at unitedstack.com> wrote:

> Hi, all:
>
> I have a question about notifications in nova: the actual operator is
> different from the operator recorded in Panko. Take the delete action: we
> create the VM as user1 and delete it as user2, but in the Panko event the
> operator who deleted the VM is recorded as user1, not the actual operator
> user2.
>
> Can you tell me more about this? Thank you very much.
>
> Best Regards
> Rambo

From ranjankrchaubey at gmail.com Mon Dec 3 02:40:06 2018
From: ranjankrchaubey at gmail.com (Ranjan Krchaubey)
Date: Mon, 3 Dec 2018 08:10:06 +0530
Subject: [openstack-dev] [nova] about notification in nova
In-Reply-To:
References:
Message-ID: <07726B1E-643C-40CF-A7E9-D906B909699F@gmail.com>

Is this regarding Keystone?

Thanks & Regards
Ranjan Kumar
Mob: 9284158762

> On 03-Dec-2018, at 8:01 AM, Zhenyu Zheng <zhengzhenyulixi at gmail.com> wrote:
>
> Hi,
>
> Are you using versioned notifications? If you are, you should get an
> ``action_initiator_user`` and an ``action_initiator_project`` in the
> payload indicating who initiated the action; we have had them since
> I649d8a27baa8840bc1bb567fef027c749c663432. If you are not using versioned
> notifications, switching to them is recommended.
>
> Thanks

From skaplons at redhat.com Mon Dec 3 08:09:48 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Mon, 3 Dec 2018 09:09:48 +0100
Subject: [openstack-dev] Stepping down from Neutron core team
In-Reply-To:
References:
Message-ID: <1588BF61-D40E-4CD4-BB2E-BBDEEC8B5C75@redhat.com>

Hi,

Thanks for all your work in Neutron, and good luck in your new role.

—
Slawek Kaplonski
Senior software engineer
Red Hat

> On 02.12.2018, at 15:08, Hirofumi Ichihara <ichihara.hirofumi at gmail.com> wrote:
>
> Hi all,
>
> I'm stepping down from the core team because my role has changed and I can
> no longer carry the responsibilities of a Neutron core.
>
> I started with Neutron 5 years ago and had many good experiences with the
> Neutron team.
>
> Today Neutron is a great project, and it keeps gaining new reviewers,
> contributors, and users.
>
> Keep on being a great community.
>
> Thanks,
> Hirofumi

From bdobreli at redhat.com Mon Dec 3 09:34:50 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Mon, 3 Dec 2018 10:34:50 +0100
Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C24B507@EX10MBOX03.pnnl.gov>
References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> <7d2fce52ca8bb5156b753a95cbb9e2df7ad741c8.camel@redhat.com> <1A3C52DFCD06494D8528644858247BF01C24B507@EX10MBOX03.pnnl.gov>
Message-ID:

Hi Kevin.

Puppet not only creates config files but also executes service-dependent steps, like db sync, so neither '[base] -> [puppet]' nor '[base] -> [service]' would be enough on its own. That requires some service-specific code to be included in the *config* images as well.
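
To illustrate with a purely hypothetical sketch (the image name and invocation below are made up for this example and are not TripleO's actual config-image tooling; keystone::db::sync is the puppet-keystone class for that step):

sudo docker run --rm --net=host \
    -v /var/lib/config-data:/var/lib/config-data \
    hypothetical-keystone-config-image \
    puppet apply -e 'include ::keystone::db::sync'

A runtime-only service image cannot run such a step on its own, because it needs puppet, the manifests, and the service's own db sync tooling in a single context.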
PS. There is a related spec [0] created by Dan; please take a look and give your feedback.

[0] https://review.openstack.org/620062

On 11/30/18 6:48 PM, Fox, Kevin M wrote:
> Still confused by:
> [base] -> [service] -> [+ puppet]
> not:
> [base] -> [puppet]
> and
> [base] -> [service]
> ?
>
> Thanks,
> Kevin
> ________________________________________
> From: Bogdan Dobrelya [bdobreli at redhat.com]
> Sent: Friday, November 30, 2018 5:31 AM
> To: Dan Prince; openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org
> Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes
>
> On 11/30/18 1:52 PM, Dan Prince wrote:
>> On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote:
>>> On 11/29/18 6:42 PM, Jiří Stránský wrote:
>>>> On 28. 11. 18 18:29, Bogdan Dobrelya wrote:
>>>>> On 11/28/18 6:02 PM, Jiří Stránský wrote:
>>>>>>
>>>>>>> Reiterating again on previous points:
>>>>>>>
>>>>>>> - I'd be fine removing systemd. But let's do it properly and not
>>>>>>> via 'rpm -ev --nodeps'.
>>>>>>> - Puppet and Ruby *are* required for configuration. We can
>>>>>>> certainly put them in a separate container outside of the runtime
>>>>>>> service containers, but doing so would actually cost you much more
>>>>>>> space/bandwidth for each service container. As both of these have
>>>>>>> to get downloaded to each node anyway in order to generate config
>>>>>>> files with our current mechanisms, I'm not sure this buys you
>>>>>>> anything.
>>>>>>
>>>>>> +1. I was actually under the impression that we concluded yesterday
>>>>>> on IRC that this is the only thing that makes sense to seriously
>>>>>> consider. But even then it's not a win-win -- we'd gain some
>>>>>> security by leaner production images, but pay for it with
>>>>>> space+bandwidth by duplicating image content (IOW we can help
>>>>>> achieve one of the goals we had in mind by worsening the situation
>>>>>> w/r/t the other goal we had in mind.)
>>>>>>
>>>>>> Personally I'm not sold yet, but it's something that I'd consider if
>>>>>> we got measurements of how much more space/bandwidth usage this
>>>>>> would consume, and if we got some further details/examples about how
>>>>>> serious the security concerns are if we leave config mgmt tools in
>>>>>> runtime images.
>>>>>>
>>>>>> IIRC the other options (that were brought forward so far) were
>>>>>> already dismissed in yesterday's IRC discussion and on the reviews.
>>>>>> Bin/lib bind mounting being too hacky and fragile, and nsenter not
>>>>>> really solving the problem (because it allows us to switch to having
>>>>>> different bins/libs available, but it does not allow merging the
>>>>>> availability of bins/libs from two containers into a single
>>>>>> context).
>>>>>>
>>>>>>> We are going in circles here I think....
>>>>>>
>>>>>> +1. I think too much of the discussion focuses on "why it's bad to
>>>>>> have config tools in runtime images", but IMO we all sorta agree
>>>>>> that it would be better not to have them there, if it came at no
>>>>>> cost.
>>>>>>
>>>>>> I think to move forward, it would be interesting to know: if we do
>>>>>> this (I'll borrow Dan's drawing):
>>>>>>
>>>>>> |base container| --> |service container| --> |service container w/
>>>>>> Puppet installed|
>>>>>>
>>>>>> How much more space and bandwidth would this consume per node (e.g.
>>>>>> separately per controller, per compute)? This could help with
>>>>>> decision making.
>>>>>
>>>>> As I've already evaluated in the related bug, that is:
>>>>>
>>>>> puppet-* modules and manifests ~ 16MB
>>>>> puppet with dependencies ~ 61MB
>>>>> dependencies of the seemingly largest dependency, systemd ~ 190MB
>>>>>
>>>>> that would be an extra layer size for each of the container images to
>>>>> be downloaded/fetched into registries.
>>>>
>>>> Thanks, I tried to do the math of the reduction vs. inflation in sizes
>>>> as follows. I think the crucial point here is the layering. If we do
>>>> this image layering:
>>>>
>>>> |base| --> |+ service| --> |+ Puppet|
>>>>
>>>> we'd drop ~267 MB from the base image, but we'd be installing that to
>>>> the topmost level, per-component, right?
>>>
>>> Given we detached systemd from puppet, cronie et al, that would be
>>> 267-190MB, so the math below would be looking much better
>>
>> Would it be worth writing a spec that summarizes what action items are
>> being taken to optimize our base image with regards to systemd?
>
> Perhaps it would be. But honestly, I see nothing big enough to require a
> full-blown spec -- just changing RPM deps and layers for container
> images. I'm tracking the systemd changes here [0],[1],[2], btw (if
> accepted, it should be working as of Fedora 28 (or 29), I hope)
>
> [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672
>
>> It seems like the general consensus is that cleaning up some of the RPM
>> dependencies so that we don't install systemd is the biggest win.
>>
>> What confuses me is why there are still patches posted to move Puppet
>> out of the base layer when we agree moving it out of the base layer
>> would actually cause our resulting container image set to be larger in
>> size.
>>
>> Dan
>>
>>>> In my basic deployment, undercloud seems to have 17 "components" (49
>>>> containers), overcloud controller 15 components (48 containers), and
>>>> overcloud compute 4 components (7 containers). Accounting for
>>>> overlaps, the total number of "components" used seems to be 19. (By
>>>> "components" here I mean whatever uses a different ConfigImage than
>>>> other services. I just eyeballed it but I think I'm not too far off
>>>> the correct number.)
>>>>
>>>> So we'd subtract 267 MB from the base image and add that to 19 leaf
>>>> images used in this deployment. That means a difference of +4.8 GB to
>>>> the current image sizes. My /var/lib/registry dir on the undercloud
>>>> with all the images currently has 5.1 GB. We'd almost double that to
>>>> 9.9 GB.
>>>>
>>>> Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the
>>>> CDNs (both external and e.g. internal within OpenStack Infra CI
>>>> clouds).
>>>>
>>>> And for internal traffic between the local registry and overcloud
>>>> nodes, it gives +3.7 GB per controller and +800 MB per compute. That
>>>> may not be so critical but still feels like a considerable downside.
>>>>
>>>> Another gut feeling is that this way of image layering would take
>>>> longer to build and to run the modify-image Ansible role which we use
>>>> in CI, so that could endanger how our CI jobs fit into the time limit.
>>>> We could also probably measure this but I'm not sure if it's worth
>>>> spending the time.
>>>>
>>>> All in all, I'd argue we should still be looking at different options.
>>>>
>>>>> Given that we should decouple systemd from all/some of the
>>>>> dependencies (an example topic for RDO [0]), that could save 190MB.
>>>>> But it seems we cannot break the love of puppet and systemd, as
>>>>> puppet heavily relies on the latter, and changing packaging like that
>>>>> would highly likely affect baremetal deployments with puppet and
>>>>> systemd co-operating.
>>>>
>>>> Ack :/
>>>>
>>>>> Long story short, we cannot shoot both rabbits with a single shot,
>>>>> not with puppet :) Maybe we could with ansible replacing puppet
>>>>> fully... So splitting config and runtime images is the only choice
>>>>> yet to address the raised security concerns. And let's forget about
>>>>> edge cases for now. Tossing around a pair of extra bytes over 40,000
>>>>> WAN-distributed computes ain't gonna be our biggest problem for sure.
>>>>>
>>>>> [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction
>>>>>
>>>>>>> Dan
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Jirka
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From bdobreli at redhat.com Mon Dec 3 09:37:57 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Mon, 3 Dec 2018 10:37:57 +0100
Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes
In-Reply-To:
References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> <7d2fce52ca8bb5156b753a95cbb9e2df7ad741c8.camel@redhat.com> <1A3C52DFCD06494D8528644858247BF01C24B507@EX10MBOX03.pnnl.gov>
Message-ID:

On 12/3/18 10:34 AM, Bogdan Dobrelya wrote:
> Hi Kevin.
>
> Puppet not only creates config files but also executes service-dependent
> steps, like db sync, so neither '[base] -> [puppet]' nor '[base] ->
> [service]' would be enough on its own. That requires some
> service-specific code to be included in the *config* images as well.
>
> PS. There is a related spec [0] created by Dan; please take a look and
> give your feedback.
>
> [0] https://review.openstack.org/620062

I'm terribly sorry, but here is the corrected link [0] to that spec.
[0] https://review.openstack.org/620909

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From ranjankrchaubey at gmail.com Mon Dec 3 13:06:06 2018
From: ranjankrchaubey at gmail.com (Ranjan Krchaubey)
Date: Mon, 3 Dec 2018 18:36:06 +0530
Subject: [openstack-dev] Stepping down from Neutron core team
In-Reply-To: <1588BF61-D40E-4CD4-BB2E-BBDEEC8B5C75@redhat.com>
References: <1588BF61-D40E-4CD4-BB2E-BBDEEC8B5C75@redhat.com>
Message-ID:

Hi all,

Can anyone help me resolve error 111 on Keystone?

Thanks & Regards
Ranjan Kumar
Mob: 9284158762

> On 03-Dec-2018, at 1:39 PM, Slawomir Kaplonski <skaplons at redhat.com> wrote:
>
> Hi,
>
> Thanks for all your work in Neutron, and good luck in your new role.
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>> On 02.12.2018, at 15:08, Hirofumi Ichihara <ichihara.hirofumi at gmail.com> wrote:
>>
>> Hi all,
>>
>> I'm stepping down from the core team because my role has changed and I can
>> no longer carry the responsibilities of a Neutron core.
>>
>> I started with Neutron 5 years ago and had many good experiences with the
>> Neutron team.
>>
>> Today Neutron is a great project, and it keeps gaining new reviewers,
>> contributors, and users.
>>
>> Keep on being a great community.
>>
>> Thanks,
>> Hirofumi

From nate.johnston at redhat.com Mon Dec 3 14:15:06 2018
From: nate.johnston at redhat.com (Nate Johnston)
Date: Mon, 3 Dec 2018 09:15:06 -0500
Subject: [openstack-dev] Stepping down from Neutron core team
In-Reply-To:
References:
Message-ID: <20181203141506.usaxv36gz56f4vic@bishop>

On Sun, Dec 02, 2018 at 11:08:25PM +0900, Hirofumi Ichihara wrote:
> I'm stepping down from the core team because my role has changed and I can
> no longer carry the responsibilities of a Neutron core.

Thank you very much for all of the insightful reviews over the years. Good luck on your next adventure!

Nate Johnston (njohnston)

From strigazi at gmail.com Mon Dec 3 22:24:48 2018
From: strigazi at gmail.com (Spyros Trigazis)
Date: Mon, 3 Dec 2018 23:24:48 +0100
Subject: [openstack-dev] [magnum] kubernetes images for magnum rocky
Message-ID:

Hello all,

Following the vulnerability [0], with magnum rocky and the kubernetes driver on fedora atomic, you can use the tag "v1.11.5-1" [1] for new clusters. To upgrade the apiserver in existing clusters, on the master node(s) you can run:

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver

You can upgrade the other k8s components with similar commands.
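
For example, for the controller manager and the scheduler on the master node(s), an untested sketch (check that the matching tags exist in the openstackmagnum repositories on Docker Hub and that the container names match your cluster before running it):

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1 kube-controller-manager

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-scheduler:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-scheduler:v1.11.5-1 kube-scheduler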
I'll share instructions for magnum queens tomorrow morning CET time.

Cheers,
Spyros

[0] https://github.com/kubernetes/kubernetes/issues/71411
[1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/

From strigazi at gmail.com Mon Dec 3 23:13:52 2018
From: strigazi at gmail.com (Spyros Trigazis)
Date: Tue, 4 Dec 2018 00:13:52 +0100
Subject: [openstack-dev] [magnum] kubernetes images for magnum rocky
In-Reply-To:
References:
Message-ID:

Magnum queens uses kubernetes 1.9.3 by default. You can upgrade to v1.10.11-1. From a quick test, v1.11.5-1 is also compatible with 1.9.x.

We are working to make this painless; sorry that you have to ssh to the nodes for now.

Cheers,
Spyros

On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis <strigazi at gmail.com> wrote:

> Hello all,
>
> Following the vulnerability [0], with magnum rocky and the kubernetes
> driver on fedora atomic, you can use the tag "v1.11.5-1" [1] for new
> clusters. To upgrade the apiserver in existing clusters, on the master
> node(s) you can run:
>
> sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
> sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver
>
> You can upgrade the other k8s components with similar commands.
>
> I'll share instructions for magnum queens tomorrow morning CET time.
>
> Cheers,
> Spyros
>
> [0] https://github.com/kubernetes/kubernetes/issues/71411
> [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/

From fungi at yuggoth.org Mon Dec 3 23:56:22 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 3 Dec 2018 23:56:22 +0000
Subject: [openstack-dev] IMPORTANT: This list is retired
Message-ID: <20181203235621.obbrxf5rkfdlzfwi@yuggoth.org>

This mailing list was replaced by a new openstack-discuss at lists.openstack.org mailing list [0] as of Monday, November 19, 2018, and as of now it will no longer receive any new messages. The archive of prior messages will remain published in the expected location indefinitely for future reference.

For convenience, posts to the old list address will be rerouted to the new list for an indeterminate period of time, but please correct the address in your replies if you notice this happening. See my original notice [1] (and the many reminders sent in the months since) for an explanation of this change.

[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html

--
Jeremy Stanley